Apple suspends AI news summary feature after generating fake headlines
What's the story
Apple has temporarily pulled the plug on its AI-powered news notification summarizer after it produced a string of inaccurate headlines.
The move comes after BBC and press freedom groups criticized the company.
The tech giant had heavily advertised the feature, part of its Apple Intelligence suite, which has since been seen producing misleading or completely false summaries of news headlines that look just like regular push notifications.
Beta release
Apple rolls out beta update to disable AI feature
On Thursday, Apple issued a beta software update for developers that deactivates the AI feature for news and entertainment headlines.
The company plans to roll out this update to all users while it works on improving the feature, with the aim of reintroducing it in a future release.
As part of this new beta release, Apple will clearly label these summaries as AI-generated and note that they may contain errors.
Media backlash
BBC and NYT raise concerns over AI summaries
The BBC had earlier raised concerns with Apple over the technology, after it produced false headlines.
In one case, the feature falsely claimed that Luigi Mangione, the suspect in the killing of UnitedHealthcare's CEO, had killed himself.
Likewise, three articles from The New York Times were wrongly condensed into a single push notification that falsely claimed Israeli Prime Minister Benjamin Netanyahu had been arrested.
Warnings issued
Press freedom groups warn against AI-generated news summaries
Press freedom groups have warned about the potential risks of these AI-generated summaries for users seeking reliable information.
Reporters Without Borders described it as "a danger to the public's right to reliable information on current affairs," while the National Union of Journalists stressed that "the public must not be placed in a position of second-guessing the accuracy of news they receive."
Both organizations have called for the removal of these AI-powered summaries.
AI challenges
AI's tendency to fabricate information: A known issue
Apple's latest trouble with its AI feature isn't an isolated incident. Other popular models, such as ChatGPT, have also been known to generate confident "hallucinations."
Large language models, the technology behind these tools, are programmed to respond with "a plausible sounding answer" to prompts, according to Suresh Venkatasubramanian, a Brown University professor who co-authored the White House's Blueprint for an AI Bill of Rights.