Elon Musk, experts call for pausing AI development: Here's why
Over 1,100 signatories, including Elon Musk and Steve Wozniak, have signed an open letter asking all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for six months. The letter calls attention to the need to develop robust safety protocols before we enter the next phase of AI development. That brings us to the question: why do we need such protocols?
Why does this story matter?
The last few months have seen AI development take a giant leap, courtesy of the competition triggered by ChatGPT's rise to stardom. The capabilities of AI systems such as GPT-4 have stunned everyone, but they have also given rise to a new set of concerns. With AI companies racing toward ever more advanced systems, the sky is the limit now.
Safety protocols must be audited by independent experts: Signatories
In the letter, the signatories ask stakeholders to "develop and implement" shared safety protocols for advanced AI designs. According to them, these protocols must be "rigorously audited and overseen by independent outside experts." The letter also calls on governments to step in and institute a moratorium if the key players involved do not pause AI development themselves.
AI has two sides: The good and the bad
We have all seen what an AI chatbot as advanced as ChatGPT can do, from writing essays to coding to ordering food on your behalf (via plugins). However, we have also seen the other side: one rife with disinformation, deepfakes, and cyberattacks. It is this second side that prompted the letter.
AI models can become potential fake news mines
One of the biggest issues with AI systems is their potential to become fake news mines. Their ability to mimic human writing can unleash a flurry of fake news, as the AI has no commitment to truth. And since AI systems are trained on large datasets, they are highly likely to inherit human biases. Such disinformation and prejudice could cause real harm to people.
AI becoming sentient is a major concern
We all remember when Google fired Blake Lemoine, an engineer working on LaMDA, for claiming that the AI had become sentient. Google and AI experts were quick to shut down Lemoine's claims. Many believe sentient AI is not yet possible because the necessary infrastructure doesn't exist. However, given the pace at which AI development is taking place, it might happen sooner than we imagine.
Fraudsters can use AI systems to trick people
The development of AI has been a boon for many, and one group that has benefited greatly is fraudsters. They are already using ChatGPT to craft elaborate, sophisticated phishing emails that lure people into traps. ChatGPT and other advanced AI systems can also generate hyper-realistic phone scripts, which fraudsters can use to impersonate customer service representatives and gain access to sensitive information.
AI has blurred the line between real and fake
Generative AI's issues are not limited to text. Systems that generate images and videos have brought a new dimension to fakery, and constant improvements in the technology have produced more realistic fakes than ever. The potential for mischief is endless: from fake news to fake products, this could spell trouble for humanity.
Do we need to pause development of advanced AI systems?
So far, we have seen the problems advanced AI systems can cause. Does that mean we need to pause the development of AI systems more powerful than GPT-4? It is imperative that all stakeholders discuss and reach a consensus on what needs to be done to make AI systems safe. However, a prolonged pause might derail innovation and development.