How is OpenAI's new GPT-4 model better than GPT-3.5?
OpenAI's new large language model (LLM) is here. The powerful GPT-4 is a multimodal LLM, which means it can respond to both text and images. The new model has the big shoes of the GPT-3.5-powered ChatGPT to fill. According to the company, GPT-4 is its "most advanced system." But is it advanced enough to top the runaway success that ChatGPT has been? Let's see.
GPT-4 has better reasoning capabilities than GPT-3.5
OpenAI has talked about the different ways GPT-4 is better than GPT-3.5. The LLM surpasses ChatGPT in advanced reasoning and beat the chatbot's scores in both the Uniform Bar Exam and the Biology Olympiad. GPT-3.5 is prone to errors; per the company, GPT-4 is 40% more likely to produce factual responses than its predecessor. It is currently available to OpenAI's paying users.
GPT-4 is much better at complex tasks
Both GPT-4 and GPT-3.5 were trained on a supercomputer OpenAI developed with Azure. According to the company, the differences between the two LLMs can be "subtle" in casual conversation. In complex tasks, however, GPT-4 is more reliable, more creative, and able to handle more nuanced instructions. Apart from performing better on various exams, GPT-4 also outclasses GPT-3.5 on machine learning benchmarks such as MMLU and HellaSwag.
The new Bing is running on GPT-4
Some of us may have already come across GPT-4. As suspected, the LLM is behind Microsoft's new Bing, the chatbot-cum-search engine co-developed with OpenAI. Microsoft confirmed the news after OpenAI officially released GPT-4. The multimodal LLM's other early adopters include Morgan Stanley, Stripe, Khan Academy, Duolingo, and the government of Iceland.
The LLM can caption and understand images
The biggest advantage GPT-4 has over GPT-3.5 is its ability to understand images. Users can give the LLM prompts containing both text and images, and GPT-4 exhibits "similar capabilities" on them as it does on text-only inputs. The LLM can caption and understand relatively complex images. The feature is not available to the public yet; OpenAI is testing it with Be My Eyes, a Danish mobile app.
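To make the idea of a mixed prompt concrete, here is a purely hypothetical sketch in Python. It does not use OpenAI's actual image API, which was not publicly available at the time; the class, field names, and file name are illustrative assumptions that only show how an instruction and an image might be paired in one request.

```python
# Hypothetical illustration of a mixed text-and-image prompt.
# Not OpenAI's real API: the structure and names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class PromptPart:
    kind: str      # "text" or "image"
    content: str   # the text itself, or a path/URL to an image

prompt = [
    PromptPart(kind="image", content="fridge_contents.jpg"),  # placeholder image
    PromptPart(kind="text", content="What meals can I make with these ingredients?"),
]

for part in prompt:
    print(part.kind, "->", part.content)
```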
OpenAI has put a usage cap on GPT-4
GPT-4 is available to OpenAI's paying customers through ChatGPT Plus. Of course, those of you who have access to the new Bing can also get a taste of OpenAI's new LLM. OpenAI has put a usage cap on GPT-4. API pricing is $0.03 per 1,000 prompt tokens and $0.12 per 1,000 completion tokens, and developers can register on a waitlist to access the API.
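For a rough sense of what those rates mean in practice, here is a minimal back-of-the-envelope sketch in Python using the per-token prices quoted above; the function name and token counts are illustrative assumptions, not part of OpenAI's tooling.

```python
# Estimate the dollar cost of one GPT-4 API call from the rates quoted above.
PROMPT_RATE = 0.03 / 1000      # $ per prompt token ($0.03 per 1,000)
COMPLETION_RATE = 0.12 / 1000  # $ per completion token ($0.12 per 1,000)

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example with made-up token counts: a 1,500-token prompt and a 500-token reply
# cost 1,500 * $0.00003 + 500 * $0.00012 = $0.045 + $0.06 = $0.105.
print(f"${estimate_cost(1500, 500):.3f}")
```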