OpenAI faces fresh EU troubles over ChatGPT’s ‘hallucinations’

29 Apr 2024

Image: © DDimaXX/Stock.adobe.com

Noyb claims ChatGPT shared a false birthdate for a public figure and that OpenAI denied his request to access and erase data relating to him.

Digital rights group Noyb has filed a GDPR complaint against OpenAI, claiming that ChatGPT shares incorrect information about people.

The EU’s General Data Protection Regulation (GDPR) requires that information about individuals be accurate and that those individuals have full access to the information stored about them. Noyb claims that OpenAI’s ChatGPT shares false information about individuals and that the company “cannot say where the data comes from or what data ChatGPT stores” about them.

The organisation has filed a GDPR complaint with the Austrian data protection authority on behalf of an unnamed public figure and claims ChatGPT provided inaccurate results about his birthday – data that Noyb claims isn’t available online.

These types of mistakes by large language models such as ChatGPT are known as ‘hallucinations’ and are an ongoing issue with AI systems. IBM describes hallucinations as a model perceiving patterns or objects that are nonexistent, creating nonsensical or inaccurate outputs.

The complainant in the case filed an “access and erasure request” with OpenAI but the company refused his request and argued that it was not possible to correct the data, according to Noyb. The digital rights group wants Austria’s data protection authority to investigate OpenAI’s data processing and the measures taken to ensure the accuracy of personal data.

Noyb also wants OpenAI to comply with the individual’s request to access data and to bring its processing in line with GDPR. OpenAI did not respond to a request for comment from SiliconRepublic.com.

Maartje de Graaf, a data protection lawyer at Noyb, said that making up false information is “problematic in itself”, but that there can be “serious consequences” when that information is about individuals. Last year, a US radio host sued OpenAI after ChatGPT claimed he had been accused of defrauding and embezzling funds.

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals,” de Graaf said. “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

Earlier this year, Italy’s data protection authority claimed OpenAI had breached EU data privacy regulation. ChatGPT was temporarily banned in Italy last year over alleged privacy violations. The watchdog said it would take into account the work being conducted by a European task force assessing ChatGPT.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com