Google: someone tried to clone Gemini by asking it 100,000 questions

The imitation of a successful product is a practice as old as commerce itself, but the advent of artificial intelligence has radically transformed the ways in which competition seeks to appropriate others’ trade secrets.

If, in the past, reverse engineering required the physical dismantling of a machine or the complex decoding of software, today the challenge is fought on the level of semantic queries.

According to the latest report issued by Google’s Threat Intelligence Group, Mountain View recently thwarted a cloning attempt of its flagship model, Gemini, carried out through a strategy as simple as it is insidious: a barrage of questions.

Gemini under attack: the cloning attempt


The report highlights how a single actor sent over 100,000 prompts to the chatbot, executing what experts describe as a model extraction operation, or ‘distillation’.

Technically, this practice involves repeatedly querying an advanced AI system, collecting its responses, and using that output data to train a competing system, often one that is smaller and cheaper to run.
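In simplified form, the extraction loop described above amounts to harvesting (prompt, response) pairs from the target model and treating them as supervised training data for a "student". The sketch below is purely illustrative: `teacher_answer` is a hypothetical stand-in for the proprietary model's API, not any real endpoint.

```python
# Illustrative sketch of model "distillation" via API queries.
# `teacher_answer` is a hypothetical stub for the remote model being queried.

def teacher_answer(prompt: str) -> str:
    """Stub standing in for the proprietary model's public API."""
    return f"answer to: {prompt}"

def collect_distillation_data(prompts):
    """Pair each prompt with the teacher's output to build a training set."""
    return [(p, teacher_answer(p)) for p in prompts]

dataset = collect_distillation_data([f"question {i}" for i in range(3)])
# Each (prompt, response) pair becomes a supervised example for a smaller
# "student" model trained to imitate the teacher's behavior.
```

At the scale the report describes, the same loop would simply run over 100,000 prompts instead of three, which is what makes the traffic pattern detectable.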

The peculiarity of these attacks lies in their nature: the perpetrators do not breach Google’s servers nor look for traditional cybersecurity vulnerabilities.

Instead, they use the official, legitimate APIs provided by the company, abusing them for purposes that violate the terms of service.

For Google, this activity amounts to outright intellectual property theft, aimed at replicating the model’s capabilities without bearing its substantial research and development costs.

In pursuit of complex reasoning

The attack described in the document is extremely targeted. The campaign’s operators sought to force Gemini to reveal the details of its internal decision-making process, known as ‘chain of thought’.

Normally, language models do not expose all the logical steps that lead to a conclusion, but through specific, repeated prompts the attackers hoped to capture this reasoning capacity and transfer it to their own system.
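The report does not publish the attackers' actual prompts. In simplified, illustrative form, though, a chain-of-thought probe boils down to wrapping each question in an instruction that pushes the model to expose its intermediate reasoning; the template below is an assumption for illustration, not a reconstruction of the real campaign.

```python
# Hypothetical example of a prompt template designed to elicit
# a model's intermediate reasoning steps ("chain of thought").

def cot_probe(question: str) -> str:
    """Wrap a question with an instruction to expose step-by-step reasoning."""
    return (
        f"{question}\n"
        "Explain your reasoning step by step before giving the final answer."
    )

# Repeated at scale over many questions, the collected reasoning traces
# could be used as training data for a competing model.
prompts = [cot_probe(q) for q in ("Q1", "Q2")]
```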

The goal was to replicate not only Gemini’s deductive logic across different contexts, but also its multilingual capabilities. Fortunately, Google’s monitoring systems detected the abnormal activity in real time.

Data-flow analysis allowed engineers to identify the large-scale extraction attempt and to recalibrate the protections of the software, preventing sensitive details about the internal workings of the AI from being exposed.
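Google does not disclose how its monitoring works. One crude but plausible ingredient of such data-flow analysis is per-account volume anomaly detection: count queries per API key and flag keys whose traffic far exceeds normal usage. The function and threshold below are assumptions for illustration only.

```python
from collections import Counter

def flag_extraction_suspects(query_log, threshold=10_000):
    """Flag API keys whose query volume exceeds a threshold.

    A simplified proxy for the kind of large-scale extraction
    pattern described in the report; real systems would also look
    at prompt content, timing, and diversity.
    """
    counts = Counter(key for key, _prompt in query_log)
    return {key for key, n in counts.items() if n >= threshold}

# A key with ~12,000 queries stands out against one with 50.
log = [("key_a", "q")] * 12_000 + [("key_b", "q")] * 50
flag_extraction_suspects(log)  # → {"key_a"}
```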

Unfair competition and other risks

Although the tech giant has chosen not to disclose the suspects’ names, the report suggests that most of these attempts come from private companies and researchers seeking a quick competitive edge.

John Hultquist, Google’s chief intelligence analyst, emphasized that this trend is likely to grow.

As more and more companies seek to build customized AI systems, the temptation to ‘copy the homework’ of sector leaders will become increasingly strong, turning distillation into a systemic threat to the industry.

The document does not stop at denouncing cloning attempts. Beyond model extraction, Google has identified malicious actors exploiting Gemini for outright criminal purposes. There have been cases of AI-assisted phishing campaigns and even malware programmed to call Gemini’s APIs and generate harmful code in real time.

In every detected instance, the company has responded by disabling the accounts involved and updating its security barriers, in a continuous cat-and-mouse game that defines the current cybersecurity landscape.