Data Activists Target OpenAI In Challenge To ChatGPT’s ‘Hallucination’ Problem


Topline

Privacy activists filed a complaint against ChatGPT maker OpenAI on Monday over the company’s failure to correct misinformation its chatbot regularly “hallucinates” about people, a move that will increase pressure on tech firms to address a well-known but difficult-to-fix problem as they race to roll out AI tools to more customers.

Key Facts

Vienna-based nonprofit noyb, short for “none of your business,” filed a data protection complaint with Austria’s data watchdog, accusing OpenAI of violating Europe’s General Data Protection Regulation (GDPR), the strictest privacy and security law in the world.

“Simply making up data about individuals is not an option,” the group said in a statement about the complaint, which was filed on behalf of an unnamed “public figure” and accuses OpenAI of refusing to correct or erase false information and statements it had made up about the individual.

For example, the group said ChatGPT gave “various inaccurate information” when asked about the figure’s birth date and that OpenAI said “there is no way to prevent its systems” from displaying the false information.

Instead, the complaint said OpenAI only offered to block or filter results based on prompts like the figure’s name — which would filter all information about them — something noyb said would still leave the incorrect data in OpenAI’s systems, “just not shown to users.”

It also accused OpenAI of failing to disclose relevant information about the person when requested, including what data had been processed, its sources and who it had been shared with. That disclosure is a legal obligation noyb lawyer Maartje de Graaf said “applies to all companies,” adding that it is “clearly possible” to keep track of information sources when training an AI system.

OpenAI, which has previously acknowledged the problem AI hallucinations pose to tools like ChatGPT, did not immediately respond to Forbes’ request for comment.

Crucial Quote

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” de Graaf said in a statement. “It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

News Peg

The complaint ups the ante against OpenAI over how it uses and trains the models powering ChatGPT. The company, in common with many top generative AI makers, is already facing a litany of copyright lawsuits over the data it has used to train its models and a host of other legal issues, such as the privacy of the data it scraped for training. So-called hallucinations, where the AI system produces a misleading or false result in response to a prompt but presents it as if it were true, are also a growing legal headache, as seen in a defamation suit filed by a radio host in Georgia. The complaint is not the first time OpenAI has run up against Europe’s powerful data protection regime: the company was already forced to make changes by Italy’s data protection authority in 2023.

What To Watch For

GDPR is a powerful set of rules that can force a company to make major changes to its operations in order to keep doing business in the world’s largest trading bloc. It also empowers regulators to levy fines of up to 4% of global turnover. Data complaints can evolve to cover the entire EU if the issue stretches beyond one country’s borders, with investigations carried out by cooperating national watchdogs. Noyb said it expects the matter will be dealt with in such a manner, potentially upping the stakes for OpenAI, though investigations can take years to resolve. AI firms will be watching the outcome of the case keenly.

Key Background

Noyb has been a potent force within the European data protection space since its founding in 2017, bringing a total of 839 cases resulting in €1.74 billion ($1.86 billion) in fines. Its cofounder, Max Schrems, is the activist and lawyer behind some of the most devastating challenges to data sharing deals between the U.S. and EU for major companies like Meta. Schrems’ challenges ultimately overturned two of those major deals, the EU-U.S. Safe Harbor and its successor, the Privacy Shield, forcing companies to rethink how they handle data moving across the Atlantic.

Forbes Valuation

OpenAI CEO Sam Altman is worth $1 billion, Forbes estimates. Though Altman is best known for cofounding and leading OpenAI, he has no equity in it and does not owe his wealth to it. Instead, Altman’s fortune and billionaire status come from his investments, which include stakes in newly floated Reddit, fintech darling Stripe and nuclear fusion venture Helion. Before OpenAI, Altman founded social mapping company Loopt and served as partner and president at startup accelerator Y Combinator.

Further Reading

Meta AI Declares War On OpenAI, Google With Standalone Chatbot - What To Know About 'Llama 3' Model (Forbes)

Sam Altman Cut From OpenAI's Startup Fund - Here's What The ChatGPT Maker Invests In (Forbes)

Musk Reignites Feud With Sam Altman Over OpenAI's Ditched Non-Profit Promise (Forbes)

OpenAI Made Sam Altman Famous. His Investments Made Him A Billionaire. (Forbes)

OpenAI is pursuing a new way to fight A.I. ‘hallucinations’ (CNBC)

OpenAI’s GPT Store Is Triggering Copyright Complaints (Wired)

