ChatGPT’s ‘hallucination’ problem hit with another privacy complaint in EU


OpenAI is facing another privacy complaint in the European Union. This one, which has been filed by privacy rights nonprofit noyb on behalf of an individual complainant, targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.

The tendency of GenAI tools to produce information that's plainly wrong has been well documented. But it also sets the technology on a collision course with the bloc's General Data Protection Regulation (GDPR) — which governs how the personal data of regional users can be processed.

Penalties for GDPR compliance failures can reach up to 4% of global annual turnover. Rather more importantly for a resource-rich giant like OpenAI: data protection regulators can order changes to how information is processed, so GDPR enforcement could reshape how generative AI tools are able to operate in the EU.

OpenAI was already forced to make some changes after an early intervention by Italy's data protection authority, which briefly forced a local shutdown of ChatGPT back in 2023.

Now noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant who found the AI chatbot produced an incorrect birth date for them.

Under the GDPR, people in the EU have a suite of rights attached to information about them, including a right to have erroneous data corrected. noyb contends OpenAI is failing to comply with this obligation in respect of its chatbot's output. It said the company refused the complainant's request to rectify the incorrect birth date, responding that it was technically impossible for it to correct.

Instead it offered to filter or block the data on certain prompts, such as the name of the complainant.
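To see why filtering differs from rectification, here is a minimal, purely hypothetical sketch (not OpenAI's actual implementation) of what prompt-level blocking amounts to: output mentioning a listed name is suppressed, while the underlying data the model would generate remains unchanged and uncorrected.

```python
import re

# Hypothetical block list of names subject to a suppression request.
BLOCKED_NAMES = {"Jane Doe"}

def filter_response(text: str) -> str:
    """Suppress any generated response that mentions a blocked name.

    Note: this masks the output only; any inaccurate underlying
    data about the person is never corrected.
    """
    for name in BLOCKED_NAMES:
        if re.search(re.escape(name), text, flags=re.IGNORECASE):
            return "I can't share information about that person."
    return text

print(filter_response("Jane Doe was born in 1970."))
print(filter_response("The weather today is sunny."))
```

The sketch makes the complainant's point concrete: blocking satisfies neither a rectification request (the wrong birth date still exists) nor a clean erasure (the data is merely hidden at the output layer).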

OpenAI’s privacy policy states users who notice the AI chatbot has generated “factually inaccurate information about you” can submit a “correction request” through or by emailing. However, it caveats the statement by warning: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”

In that case, OpenAI suggests users request that it removes their personal information from ChatGPT’s output entirely — by filling out a web form.

The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have a right to request rectification. They also have a right to request deletion of their data. But, as noyb points out, it’s not for OpenAI to choose which of these rights are available.

Other elements of the complaint focus on GDPR transparency concerns, with noyb contending OpenAI is unable to say where the data it generates on individuals comes from, nor what data the chatbot stores about people.

This is important because, again, the regulation gives individuals a right to request such info by making a so-called subject access request (SAR). Per noyb, OpenAI did not adequately respond to the complainant’s SAR, failing to disclose any information about the data processed, its sources, or recipients.

Commenting on the complaint in a statement, Maartje de Graaf, data protection lawyer at noyb, said: “Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

noyb said it’s asking the Austrian DPA to investigate the complaint about OpenAI’s data processing, as well as urging it to impose a fine to ensure future compliance. But it added that it’s “likely” the case will be dealt with via EU cooperation.

OpenAI is facing a very similar complaint in Poland. Last September, the local data protection authority opened an investigation of ChatGPT following the complaint by a privacy and security researcher who also found he was unable to have incorrect information about him corrected by OpenAI. That complaint also accuses the AI giant of failing to comply with the regulation’s transparency requirements.

The Italian data protection authority, meanwhile, still has an open investigation into ChatGPT. In January it produced a draft decision, saying then that it believes OpenAI has violated the GDPR in a number of ways, including in relation to the chatbot’s tendency to produce misinformation about people. The findings also pertain to other key issues, such as the lawfulness of processing.

The Italian authority gave OpenAI a month to respond to its findings. A final decision remains pending.

Now, with another GDPR complaint fired at its chatbot, the risk of OpenAI facing a string of GDPR enforcements across different Member States has dialed up.

Last fall the company opened a regional office in Dublin — in a move that looks intended to shrink its regulatory risk by having privacy complaints funneled through Ireland’s Data Protection Commission, thanks to a mechanism in the GDPR that’s intended to streamline oversight of cross-border complaints by funneling them to a single member state authority where the company is “main established.”