OpenAI's ChatGPT fails to meet EU data accuracy standards
ChatGPT does not fully comply with European Union data rules, according to a task force of the EU's privacy watchdog. The task force was set up last year after Italy's data protection authority and other national regulators raised concerns about the widely used artificial intelligence (AI) chatbot. Although OpenAI has taken steps to reduce factually false output from ChatGPT, the task force deemed those measures insufficient to comply with the data accuracy principle.
Task force report highlights data accuracy issues
In the report, the task force stated, "Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle." The report also noted that, because of ChatGPT's probabilistic nature, its current training approach can produce biased or fabricated outputs, and that end users are likely to perceive those outputs as factually accurate.
Investigations into ChatGPT's compliance still underway
According to the report, investigations by national privacy watchdogs in some EU member states are still ongoing, so a full description of the results cannot be provided at this stage. The task force emphasized that its findings should be understood as a "common denominator" among the national authorities. OpenAI has not yet responded to the findings.