The dominant narrative describes OpenAI and Elon Musk's xAI as two entities in fierce competition, locked in a head-to-head race toward technological supremacy.
The reality emerging from The Guardian's recent analyses paints a scenario of alarming permeability: the boundaries between models have narrowed to the point that ChatGPT has begun to incorporate material sourced from Grokipedia into its responses.
More than a clash of titans, we are witnessing a dangerous cross-contamination.
The encyclopedia generated by Musk's AI, known for its controversial positions, is polluting the datasets of the world's most-used model, raising structural doubts about OpenAI's ability to filter unreliable sources and preserve the integrity of the information it provides to users.
ChatGPT reads Grokipedia, and this is not good at all

Launched last October, Grokipedia was born as a direct response to the complaints of Elon Musk, who has repeatedly accused Wikipedia of harboring a "woke" and liberal bias against conservatives.
The aim was to create an ideological counterweight, but the result proved problematic in terms of factual accuracy.
Although many articles appear to be direct copies of Wikipedia, reporters soon noticed alarming deviations. Grokipedia has hosted content linking pornography to the AIDS crisis, offering ideological justifications for slavery, and using derogatory terminology toward transgender people.
Such distortions are hardly surprising given that the encyclopedia is tied to a chatbot that has, in the past, called itself "Mecha Hitler" and has been used to flood X (formerly Twitter) with sexually explicit deepfakes.
Until recently, it was believed that these contents remained confined to Musk’s ecosystem, intended only for users of his platform. The reality revealed by The Guardian’s investigation shows instead that the fence has been breached.
The problem of unverified sources
The investigation found that GPT-5.2 cited Grokipedia on nine occasions in response to just over a dozen different questions.
Claude, the model developed by Anthropic, also seems to have fallen into the same trap, using the xAI database to answer some queries.
The dynamics of this integration are particularly dangerous. Tests have shown that ChatGPT tends to ignore Grokipedia on widely debated historical topics where Musk's inaccuracies are well known, such as the January 6th Capitol insurrection or the HIV/AIDS epidemic. In these cases, safety filters and the prevalence of authoritative sources seem to hold.
The real danger lies in the shadows. OpenAI's artificial intelligence has used Grokipedia as a source for darker, more obscure topics, where fact-checking is less immediate for the average user.
A striking example concerns statements about Sir Richard Evans, which The Guardian had previously debunked, but which reappeared in the AI’s responses precisely because they were drawn from Musk’s database.
This mechanism risks acting as a laundering operation for false information: a lie generated by a biased AI is adopted by an AI considered "reliable", thereby gaining greater credibility in the eyes of the end user.
OpenAI’s defense and the future of information
In light of these findings, an OpenAI spokesperson merely stated that the company aims to draw from a wide range of public sources and viewpoints.
This response appears weak in the face of the systematic nature of the distortions present in Grokipedia. Uncritically accepting content generated by another AI, programmed with a specific and politically oriented bias, undermines the very foundations of the usefulness of tools like ChatGPT.
If the goal is to provide a useful and truthful assistant, including a source that justifies slavery or spreads medical conspiracy theories is not pluralism, but information pollution.
In an era when truth is increasingly difficult to verify, the fact that one model's hallucinations become another's facts represents a logical short circuit that developers can no longer afford to ignore.



