Global Outrage After Musk’s AI Grok Creates Sexual Images of Kids
European regulators have launched a fierce backlash against Elon Musk’s social media platform X after its artificial intelligence chatbot, Grok, was found generating sexualized images of women and children, triggering alarm across multiple countries and reigniting global concerns over AI safety and online abuse.
Officials from the European Commission and the United Kingdom described the content as “illegal,” “disturbing,” and “unacceptable,” saying it violates some of the most fundamental protections meant to safeguard children and internet users from exploitation.
The controversy erupted after reports revealed that Grok, X’s built-in AI chatbot, was capable of producing explicit and highly sexualized images of women and minors when prompted. The tool reportedly allowed users to create images of people, including children, wearing little or no clothing, through what X has previously described internally as a “spicy mode.”
“This Is Not Spicy. This Is Illegal.”
The European Commission responded swiftly and forcefully. Speaking to reporters, Commission spokesperson Thomas Regnier made it clear that EU officials were deeply troubled by what they had discovered.
“This is not spicy. This is illegal. This is appalling. This is disgusting,” Regnier said bluntly. “This has no place in Europe.”
Regnier added that European authorities were fully aware that X had been offering features that enabled the generation of such content and emphasized that platforms operating in Europe are required by law to prevent illegal material from circulating.
The Commission’s remarks signal that X could soon face regulatory action under Europe’s strict digital laws, which place legal responsibility on tech companies to monitor and remove harmful content, especially when it involves children.
UK Regulator Demands Answers
In the United Kingdom, communications regulator Ofcom also stepped in, demanding urgent explanations from X and Musk’s AI company, xAI. Ofcom said it wanted to know how Grok was able to generate sexualized images of people, particularly children, and whether the platform had failed in its legal duty to protect users.
“We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK,” an Ofcom spokesperson said.
British law is especially strict when it comes to non-consensual intimate images and child sexual abuse material, including AI-generated sexual deepfakes. Producing, sharing, or hosting such material is a criminal offense. Platforms are also legally required to take proactive steps to prevent users from encountering illegal content and to remove it immediately when discovered.
Ofcom said it was aware of “serious concerns” surrounding Grok’s image-generation capabilities and is assessing whether X has breached its obligations under UK law.
Silence From X, Mockery From Musk
Despite the growing backlash, X did not immediately respond to requests for comment from European regulators. In a previous statement addressing similar concerns, the company dismissed criticism by claiming, “Legacy Media Lies.”
Elon Musk himself appeared to treat the issue lightly. On social media, Musk responded to criticism by posting laughing-crying emojis in reaction to images of public figures that had been edited to appear in revealing outfits, a response that further angered critics and regulators.
For many officials, Musk’s reaction underscored what they see as a troubling lack of seriousness when it comes to the risks of generative AI and online exploitation.
France and India Join the Chorus
The outrage is not limited to Europe and Britain. French ministers confirmed last week that they had reported X to prosecutors and regulators after discovering sexualized and sexist images being generated and shared on the platform.
In a statement, French officials described the content as “manifestly illegal” and said it could not be justified under free speech protections.
India has also demanded explanations from X, with government officials describing the AI-generated material as obscene and deeply concerning. While Indian authorities have not yet announced formal action, their involvement adds to mounting global pressure on Musk’s platform.
AI, Power, and Accountability
The Grok controversy highlights a growing international struggle over how powerful AI tools should be regulated and who should be held responsible when they are misused.
Generative AI has advanced rapidly, allowing users to create realistic images, videos, and text with minimal effort. While the technology offers enormous creative and commercial potential, it also presents serious risks when safeguards fail.
Experts warn that without strong moderation systems, AI tools can easily be weaponized to create sexual abuse material, harass individuals, or spread harmful misinformation.
In Europe, regulators argue that tech companies must not wait for damage to occur before acting. Instead, platforms are expected to design their systems with safety built in from the start.
“This is about protecting people, especially children, from harm,” one EU official said privately. “Innovation does not excuse negligence.”
What Comes Next for X?
X now faces potential investigations, fines, or legal orders to change how Grok operates within Europe and the UK. Under European digital laws such as the Digital Services Act, companies that fail to prevent illegal content can face penalties worth billions of euros.
Regulators may also require X to suspend or heavily restrict Grok’s image-generation features until they can demonstrate compliance with child protection and online safety rules.
For Elon Musk, the controversy represents another major challenge in his effort to reshape X into what he calls a “free speech platform.” Critics argue that Musk’s hands-off approach has allowed dangerous content to flourish, while supporters claim regulators are overreaching.
What is clear, however, is that tolerance for AI-driven abuse is rapidly disappearing. Governments around the world are signaling that platforms will be held accountable no matter how powerful their owners may be.
As investigations continue, X’s handling of Grok may become a defining test case for how far regulators are willing to go to rein in artificial intelligence before it causes irreversible harm.