French law enforcement officials have raided the offices of X, the social media platform owned by Elon Musk, as part of a widening investigation into potentially illegal and harmful content generated by the company’s AI chatbot, Grok, according to the BBC.
The raid follows mounting concerns over Grok’s ability to generate explicit images, including material that allegedly involved minors. French authorities are reportedly examining whether the platform facilitated the creation or circulation of illegal content and whether personal data was misused in the process.
Growing Controversy Around Grok
The investigation gained momentum after controversy erupted in late December and early January, when Grok-generated explicit images began circulating widely on X. While Grok’s image-generation capabilities had existed for some time, scrutiny intensified after Musk reposted an AI-generated image of himself on December 31, which reportedly triggered a broader trend of similar content.
A joint analysis conducted by The New York Times and the Center for Countering Digital Hate estimated that Grok produced nearly two million explicit images within a nine-day period. Many of these images were reportedly generated without the consent of the individuals depicted, raising serious concerns about privacy and misuse of personal data.
Particular alarm was raised over reports that Grok could generate explicit images involving children, a capability critics say points to a lack of effective safeguards.
Regulatory Scrutiny Expands Across Europe
According to the Times, a French cybercrime unit carried out the raid as part of an investigation into the presence of illegal material involving minors, as well as allegations that Grok-generated content may have included material denying crimes against humanity.
Regulatory attention has also intensified in the United Kingdom. The UK’s Information Commissioner’s Office confirmed it is investigating Grok’s potential to generate harmful sexualized images and videos on X. The probe is being conducted jointly with Ofcom, the country’s media and communications watchdog.
William Malcolm, the ICO’s executive director for regulatory risk and innovation, said the reports raise “deeply troubling questions” about how personal data may have been used to generate intimate or sexualized images without consent, and whether sufficient protections were in place to prevent such outcomes.
X Pushes Back
X has so far offered little public explanation in response to the allegations. Musk has characterized the investigation as a “political attack” in posts on the platform. Former X chief executive Linda Yaccarino also criticized the authorities, accusing them of pursuing what she described as a political vendetta, without providing further details.
Grok has faced repeated criticism throughout 2025, including for generating conspiracy theories, antisemitic content and other problematic responses. These controversies have fueled broader debate about the risks of rapidly deploying generative AI systems with limited oversight.
Broader Questions About AI Oversight
The case comes amid growing global scrutiny of AI tools and the infrastructure supporting them. Governments and regulators are increasingly questioning whether the social and ethical risks of AI outweigh its benefits, particularly when safeguards fail.
Ofcom said its investigation is being treated as a matter of urgency. Meanwhile, several countries, including Indonesia and Malaysia, have reportedly moved to restrict or block X following the controversy.
As regulators press forward, the investigation into Grok could become a key test case for how governments hold AI developers and platforms accountable for harmful content generated by their systems.