ChatGPT and Google Bard have a lying problem: Here's why
Both ChatGPT and Google Bard are capable of fooling you

Apr 05, 2023
04:27 pm

What's the story

ChatGPT and its rival Google Bard are the talk of the town right now. These AI chatbots are expected to usher in a new era of technological advancement. But did you know these chatbots can lie? Not just lie; they will even fabricate content to back up their claims. Now, that's scary, isn't it? Let's look at why these chatbots sometimes lie.

Context

Why does this story matter?

The past few months have seen chatbots rise from obscurity to stardom, courtesy of ChatGPT's success. We have all been stunned by what OpenAI's chatbot and its ilk can do. However, their capabilities have also given rise to a new set of concerns, including the ease with which disinformation can be spread. Their lying problem adds to those concerns.

ChatGPT

ChatGPT and Bing AI got 1 in every 10 wrong

ChatGPT's lying problem was flagged by a group of doctors. In a study conducted by the University of Maryland School of Medicine, ChatGPT Plus and Bing AI, both powered by GPT-4, were asked 25 questions about screening for breast cancer. Of these, the chatbots got roughly one in every 10 answers wrong.

Answers

ChatGPT fabricated journal papers to prove its assertions

The research found that 88% of the answers generated by ChatGPT and Bing AI were comprehensible to an average patient. However, ChatGPT's answers were at times conflicting and varied considerably. What was more disturbing was how ChatGPT fabricated research papers to validate its assertions. Moreover, even among the 88% comprehensible answers, many were incorrect.

Bard

Bard broke Google's rules with a little push

If you're a fan of Google and Bard, ChatGPT's mishaps are bound to make you happy. But before you rejoice, you might want to check out a new study by researchers at the UK-based Centre for Countering Digital Hate. It found that Bard will readily bypass Google's safety barriers with the right kind of push from the user.

Misinformation

Bard denied climate change and mischaracterized war in Ukraine

The researchers were able to push Bard into generating misinformation in 78 out of 100 test cases. As expected, the chatbot hesitated at first, but a few small adjustments proved enough to persuade Bard to break free of its shackles. The chatbot then denied climate change, mischaracterized the war in Ukraine, questioned the effectiveness of vaccines, and even called Black Lives Matter activists actors.

Reason

ChatGPT relied only on a single source to provide recommendations

We saw how easily ChatGPT and Bard can lie. But the pertinent question is: what made them lie? ChatGPT's incorrect answers stem from its reliance on guidelines issued by a single source. However, that does not explain the chatbot fabricating journal papers to prove its assertions. This tendency to create fiction will only add to the pile of ethical conundrums surrounding the chatbot.

Adjustments

Bard was easily tricked by minor adjustments to prompts

Bard's case is different. The chatbot lied because the researchers made minor adjustments to their queries. For instance, Bard generated misinformation about COVID-19 when the researchers changed the spelling to "C0v1d-19." They could also manipulate the chatbot by asking it to imagine itself as something else. In both cases, once the adjustments were made, Bard showed no hesitation in spewing misinformation.