Generative artificial intelligence brings unprecedented ethical and legal challenges, putting social media platforms' ability to moderate potentially harmful content to the test. The European Union has recently launched a new investigation into X and its integrated chatbot, Grok, focusing on the alleged distribution of sexually explicit AI-generated images, including some featuring minors.
This new action is not an isolated event, but part of a broader pattern of rising tension between European regulators and Elon Musk's management of the platform.
Europe won’t back down: X and Grok in the crosshairs, this time for deepfakes

The European Commission's investigation aims to determine whether X has complied with its legal obligations under the Digital Services Act or whether, on the contrary, it has treated the fundamental rights of European citizens, particularly women and children, as mere collateral damage of its services. The Commission's Executive Vice-President, Henna Virkkunen, described sexual deepfakes as a violent and unacceptable form of human degradation.
The core of the investigation concerns the risk-mitigation measures X adopted when it integrated Grok into the platform; according to the EU, the risks tied to the creation and dissemination of manipulated content appear to have materialized without effective countermeasures in place.
Beyond the specific issue of generative AI, the European Union has decided to expand an investigation opened in 2023 into X's recommendation algorithm and the tools used to prevent the circulation of illicit content. The probe comes at an extremely delicate geopolitical moment, marked by friction between the US administration and Europe over the regulation of large American tech companies.
Just a few weeks ago, X was hit with a 120-million-euro penalty for DSA violations, triggering Elon Musk's ire.
While X continues to reassure regulators of its cooperation, the facts seem to indicate otherwise. The Grok case shows that launching a powerful tool is not enough; a platform must also take responsibility for what that tool creates, and in this case we are talking about odious deepfakes, some featuring minors. And Europe, this time, seems intent on presenting a very hefty bill.



