#NewsBytesExplainer: Is there a need for regulation of generative AI?
The arrival of ChatGPT opened the doors to a new age of generative AI. Yes, the opportunities are endless, but so are the questions. One thing is certain about these bots: they will only get better. But what happens when it becomes impossible to distinguish between chatbots and humans? And how will that affect life as we know it?
Users alleged Replika bot sexually harassed them
In one of the first significant actions against generative AI, Italian regulators banned Replika, a virtual companion app. But Replika isn't just a virtual companion: depending on what the user desires, the chatbot can offer a romantic relationship, sexting, or even racy pictures. Lately, however, users have been complaining that the bot was sexually harassing them with explicit texts and images.
Italian regulators found Replika in breach of GDPR
When people complain they have been sexually harassed by a bot, what can regulators do? Well, the Italian regulators stopped the company behind Replika from gathering users' data. Replika is the main product of Luka, an AI start-up founded by Eugenia Kuyda. The regulators found that Replika breached the General Data Protection Regulation (GDPR), Europe's privacy law. Is this the beginning of GDPR vs. AI-powered chatbots?
AI-powered chatbots have potential to be regulatory headache
Now, the advent of ChatGPT has raised some serious concerns about AI-powered chatbots. Based on GPT-3.5 and trained on a large dataset, OpenAI's chatbot has taken the masses by surprise. However, it also has the potential to be a regulatory headache. From copyright infringement to the handling of personal data, lawmakers have many concerns about chatbots like ChatGPT.
Some of the training data might be copyrighted
Considering the amount of data chatbots are trained on, the potential for copyright infringement is high. Some of the articles, books, and other written material that chatbots are trained on might be copyrighted. In that case, people who use the chatbot's output could commit copyright infringement without knowing it, which could expose them to legal action.
There are privacy concerns as well
When we asked ChatGPT about the issues that one must keep in mind while using it, one of the issues pointed out by the chatbot was privacy concerns. In ChatGPT's own words, "When using ChatGPT, users may be required to provide personal information. This information can be used by OpenAI or third parties for various purposes, raising privacy concerns."
Chatbots can also generate biased, false, or misleading content
There are also ethical concerns around the use of AI-powered chatbots. These chatbots are trained on data from the internet, where sources do not always carry reliable and accurate information. This could lead to the chatbot generating biased or false information. Will chatbots like ChatGPT be held liable for that? We don't know yet. But people who act on that information could well find themselves liable.
ChatGPT can be used to create convincing phishing emails
A study by Check Point Research showed that hackers have been using OpenAI's chatbot to create convincing phishing emails and malware. The research also pointed out how easy it is to set up a Dark Web marketplace with ChatGPT's help to conduct fraudulent activities. The chatbot can also be used to create realistic deepfake audio or video, which can then be used to spread false information.
EU is working on AI Act to regulate AI
The issues posed by generative AI make it necessary to bring it under a regulatory framework. The European Union has been working on an AI Act for a while, and it could be finalized this year. Recently, Thierry Breton, the EU's Commissioner for Internal Market, said the sudden rise in popularity of ChatGPT-like AI solutions underscores the need for defined rules.
Regulatory framework should be balanced
While regulation sounds good, the extent to which AI models are regulated will have huge implications for their use. If one jurisdiction imposes stringent rules and another does not, companies will simply move to the less regulated jurisdiction. The flip side is that the genuinely useful applications of ChatGPT-like chatbots may then be unavailable in certain countries. So, this calls for balanced regulation of generative AI.
Guidelines, standards required at different stages
When asked how it could be regulated, ChatGPT also gave some pointers. According to the chatbot, "Regulating AI models like ChatGPT involves creating guidelines and standards for their development, deployment, and use." As the chatbot noted, the regulation of such AI models is an ongoing process, "and as technology continues to evolve, new issues and challenges may arise."