Data protection complaints filed against X for training algorithms with user data

The logos of Grok and xAI. Last week X informed the Irish data protection authority that it would suspend its use of EU citizens’ data for the time being. (Source: IMAGO / ZUMA Press Wire)

The social media platform X (formerly Twitter) trained an artificial intelligence (AI) model with user data. Users were not given advance notice that their data would be used for this purpose, prompting the Austrian data privacy organization Noyb to file legal complaints in nine European countries.

X offers paying users the option to use its chatbot, Grok, which was developed by the company xAI. To teach the software language skills, developers “train” it with large amounts of data – including posts by human users.

Noyb reports that X began “irreversibly” feeding the data of European users into Grok back in May. But the company did not ask for users’ consent for this type of data processing, as required by the General Data Protection Regulation (GDPR) – in fact, X didn’t inform users at all.

According to Noyb, most users only learned of the practice through a post by another X user, who pointed out a new default setting that allows the platform to use their data for “training and fine-tuning.”

A simple yes or no question

According to Noyb, companies that want to process personal data must have a legal basis for doing so, one rooted in the GDPR. But instead of relying on the consent of its users, which would provide such a basis, X is claiming a “legitimate interest.” As Noyb points out, however, “This approach has already been rejected by the Court of Justice [of the European Union] in a case concerning Meta’s use of personal data for targeted advertising.”

Max Schrems, chairman of Noyb, said in a statement: “Companies that interact directly with users simply need to show them a yes/no prompt before using their data. They do this regularly for lots of other things, so it would definitely be possible for AI training as well.”

X has also violated several other provisions of the GDPR, Noyb argues. In response, the organization has filed complaints with the national data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain.

X in court

Last week it was reported that Ireland’s Data Protection Commission (DPC) had taken legal action against X. Because the company’s European headquarters are in Ireland, it falls under the DPC’s jurisdiction.

In response, X agreed “to suspend its processing of the personal data contained in the public posts of X’s EU/EEA users.”

According to Noyb, however, a court hearing last week “revealed that the DPC seems to have been mainly concerned with so-called ‘mitigation’ measures.” The Irish data watchdog “does not seem to go for the core violations” – namely X’s failure to obtain users’ consent.

Schrems added: “The court documents are not public, but from the oral hearing we understand that the DPC was not questioning the legality of this processing itself. It seems the DPC was concerned with so-called ‘mitigation measures’ and a lack of cooperation from Twitter [X]. The DPC seems to take action around the edges, but shies away from the core problem.”

In Noyb’s view, many questions went unanswered during the court hearing. The organization has filed complaints “to ensure that the core legal problems around Twitter’s AI training are fully addressed.” The more EU data protection authorities get involved, the greater the pressure on both the DPC and X.

Because the data processing has already begun, Noyb has requested an “urgency procedure,” which would empower the authorities to take preliminary action.

Opt-out possible

X users who would like to object to their data being used to train the company’s AI model can go to their settings and uncheck the box next to the option “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”

Facebook’s parent company Meta had planned to use user data to train its own AI model. In that case the company did inform users of the new measure – but instead of asking for their consent, Meta merely notified them of their right to object.

In response, Noyb filed eleven complaints with national data protection authorities. Consumer advocates also issued a warning to Meta.

In June Meta announced that for the time being it would not employ users’ posts for machine learning purposes. (dpa / js)