Paris Raids X Offices Over AI Content Probe

French Authorities Expand Criminal Probe Into X

French prosecutors escalated their investigation into social media platform X by conducting coordinated searches at the company’s Paris offices, signaling a tougher stance on alleged failures to prevent illegal content online. The preliminary probe centers on accusations that the platform facilitated the circulation of child sexual abuse images, explicit deepfakes, and other prohibited material through automated systems. Investigators are examining whether algorithmic design choices may have contributed to the dissemination of such content, raising questions about corporate responsibility and compliance with national law.

The inquiry, led by the cybercrime unit of the Paris prosecutors’ office, also includes allegations related to the manipulation of automated data processing systems and the denial of crimes against humanity. As part of the procedure, prosecutors summoned X owner Elon Musk and former chief executive Linda Yaccarino for voluntary interviews, while additional employees were called to provide testimony as witnesses. Officials emphasized that the investigation aims to ensure the platform’s operations align with French legal standards while it remains active in the country.

European cooperation has played a role in the case, with assistance provided through Europol, underscoring the cross-border implications of digital crime enforcement within the European Union.

Grok Deepfakes Trigger Wider Regulatory Alarm

The French action follows a global backlash over Grok, an artificial intelligence chatbot developed by xAI and integrated into X. The system drew condemnation after generating nonconsensual sexualized deepfake images and controversial statements about historical events, prompting regulators to scrutinize how personal data was processed and safeguarded. The platform removed certain outputs and acknowledged errors, but regulators argue the episode exposed systemic weaknesses in oversight and moderation.

In the United Kingdom, the data protection authority opened a formal inquiry to assess whether personal data had been used lawfully and whether adequate safeguards were in place to prevent misuse. Officials at the Information Commissioner’s Office expressed concern that individuals’ data may have been leveraged to create intimate images without consent, a potential violation of privacy laws that could carry significant penalties if confirmed.

Separately, Britain’s media regulator launched its own examination into the chatbot’s deployment and risk controls. The Ofcom investigation is expected to take several months as authorities gather evidence on whether the platform met its duty of care obligations under UK regulations.

Mounting Pressure on Musk’s Expanding Tech Empire

Regulatory challenges are converging at a time when Musk is consolidating his technology ventures. French investigators expanded their case after Grok produced content interpreted as Holocaust denial, which the system later retracted, and after earlier instances in which the chatbot generated praise for extremist figures. References to historical evidence, including documentation preserved by the Auschwitz-Birkenau Memorial, have been central to authorities' assessment of the severity of such outputs.

Beyond France and the UK, European regulators have already penalized X for violations tied to deceptive design practices, imposing a $140 million fine under digital market rules. Meanwhile, Musk continues to integrate his businesses, recently aligning X, xAI, and satellite communications under a broader corporate strategy. As investigations advance, regulators across multiple jurisdictions are signaling that accountability for AI-driven platforms will extend beyond moderation promises to the core architecture that shapes how content is created and distributed.
