TECHNOLOGY

OpenAI introduces multimodal LLM GPT-4

By Athik Saleh

1

New AI model launched

OpenAI has launched its new multimodal LLM, GPT-4, which can respond to both text and images and is touted as the company's most advanced system.

2

Improved reasoning capabilities

OpenAI highlighted the advantages of GPT-4 over GPT-3.5, including improved advanced reasoning capabilities and a lower error rate.

3

GPT-4 outperformed GPT-3.5

OpenAI's GPT-4 outperformed GPT-3.5 on complex tasks and ML benchmarks, although the difference between the two is subtle in casual conversation.

4

Bing uses GPT-4

Microsoft's new Bing search engine is powered by GPT-4. Other early adopters include Morgan Stanley, Stripe, and Khan Academy.

5

Multimodal model

OpenAI's GPT-4 can accept prompts containing both text and images. The LLM can caption and understand relatively complex images.

6

GPT-4 is accessible now

OpenAI's GPT-4 is available to paying customers through ChatGPT Plus, while developers can register on a waitlist to access the API.
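For developers who gain API access, a GPT-4 request follows OpenAI's Chat Completions format. Below is a minimal sketch of how such a request body might be built; the endpoint and field names follow OpenAI's public API, but the prompt content is purely illustrative.

```python
# Illustrative sketch: building a GPT-4 Chat Completions request body.
# Endpoint and JSON fields follow OpenAI's public API docs; the prompt
# text and helper function name are hypothetical examples.
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_gpt4_request(prompt: str) -> dict:
    """Return the JSON body for a single-turn GPT-4 chat request."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_gpt4_request("Summarize today's tech news in one sentence.")
print(json.dumps(payload, indent=2))
```

Sending this body as an authenticated POST to the endpoint above returns the model's reply; text-only prompts work today, while image inputs are being rolled out separately.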
