How OpenAI's fine-tuning feature transforms GPT-4o AI model
OpenAI has introduced a new feature for its latest artificial intelligence (AI) model, GPT-4o. Launched in May, the model can now be trained by developers and organizations on their own datasets. This update aims to improve the accuracy of generated responses by incorporating data that is more relevant to specific use cases. Previously, GPT-4o functioned as a "black box," delivering impressive results but lacking the ability to adapt to specific needs.
OpenAI offers free training tokens to boost GPT-4o models
In addition to the new feature, OpenAI has announced that it will provide free training tokens for a month. This initiative is designed to help organizations enhance their GPT-4o models. The company stated in a blog post that this fine-tuning feature was "one of the most requested features from developers." It further explained that fine-tuning would allow customization of response structure and tone, and enable GPT-4o to follow complex domain-specific instructions.
OpenAI's free training tokens: A closer look
OpenAI has committed to providing organizations with free training tokens until September 23. Enterprises using GPT-4o will receive one million training tokens daily, while those using the mini version of the model, GPT-4o mini, will get two million training tokens per day. After this period, fine-tuning these models will cost $25 (approximately ₹2,000) per million training tokens.
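Once the free period ends, estimating a training run's cost is straightforward arithmetic at the $25-per-million-token rate quoted above. A minimal sketch (the function name and example token counts are illustrative, not part of OpenAI's API):

```python
# Estimate post-free-period fine-tuning cost from the article's quoted rate
# of $25 per million training tokens. Function name is hypothetical.
def fine_tune_cost_usd(training_tokens: int, rate_per_million: float = 25.0) -> float:
    """Return the estimated fine-tuning cost in USD for a given token count."""
    return training_tokens / 1_000_000 * rate_per_million

# For example, a 4-million-token training run:
print(fine_tune_cost_usd(4_000_000))  # → 100.0
```

During the free period, the same arithmetic shows the daily grants are worth about $25 (GPT-4o) and $50 (GPT-4o mini) per organization per day at that rate.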
How to fine-tune GPT-4o models
To fine-tune GPT-4o models, users need to access the fine-tuning dashboard and select "gpt-4o-2024-08-06" from the base model drop-down menu. For the mini model, they should choose the "gpt-4o-mini-2024-07-18" base model. However, these AI models are only available to users subscribed to OpenAI's paid tiers.
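Beyond the dashboard, the same workflow can be done programmatically. A minimal sketch using OpenAI's Python SDK, assuming a paid account and the `openai` package: the model names come from the article, while the training data, file name, and helper function are illustrative. Training examples are uploaded as a JSONL file of chat transcripts, one example per line:

```python
# Sketch: preparing a JSONL training file for GPT-4o fine-tuning.
# The helper below and the sample data are illustrative assumptions.
import json

def build_training_line(user_msg: str, ideal_reply: str) -> str:
    """Serialize one chat example in the JSONL format used for fine-tuning."""
    record = {
        "messages": [
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": ideal_reply},
        ]
    }
    return json.dumps(record)

# Assemble a (toy) training file; real datasets need many more examples.
lines = [
    build_training_line("What is our refund window?",
                        "Refunds are accepted within 30 days of purchase."),
]
with open("train.jsonl", "w") as f:
    f.write("\n".join(lines))

# Uploading the file and starting a job requires API credentials, so the
# calls are shown commented out:
# from openai import OpenAI
# client = OpenAI()
# file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(
#     training_file=file.id,
#     model="gpt-4o-2024-08-06",  # or "gpt-4o-mini-2024-07-18" for the mini model
# )
```

Each JSONL line pairs a user prompt with the ideal assistant reply, which is how fine-tuning teaches the model the desired tone and response structure mentioned earlier.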