IBM reveals new AI models similar to OpenAI's GPT-4
IBM has introduced a set of new generative AI models, called the Granite series, for its Watsonx data science platform. These large language models (LLMs) are designed to summarize, analyze, and generate text, much like OpenAI's ChatGPT and GPT-4. IBM is also launching Tuning Studio, a tool that lets users tune generative AI models to their own data in Watsonx.ai. Watsonx.ai is part of the broader Watsonx platform, which lets users train and monitor AI models after deployment.
The models will be made available in Q3 2023
IBM's new Granite series models will be made available in the third quarter of 2023. Notably, before the models are released, the company plans to disclose the data used to train them and the process used to filter and prepare that data. This move underscores IBM's stated commitment to transparency in AI development and gives users a clearer picture of the models' capabilities and limitations.
Everything about Watsonx.ai's Tuning Studio tool
Tuning Studio, the new tool within Watsonx.ai, will let users tailor generative AI models to new tasks with as few as 100 to 1,000 examples. Once users specify a task and supply the required examples in the expected data format, they can deploy the tuned model through an API on IBM Cloud. This streamlined process could make it easier for businesses to adapt AI models to their specific needs.
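As a rough illustration of that workflow, the sketch below prepares a small set of labeled examples and then calls a deployed model over a REST API. The JSONL example format, endpoint URL, payload fields, and authentication shown here are assumptions for demonstration only, not IBM's documented Tuning Studio interface.

```python
# A minimal sketch of the tune-then-deploy workflow described above.
# The JSONL format, endpoint URL, payload fields, and auth header are
# illustrative assumptions, not IBM's documented Tuning Studio interface.
import json
import os

import requests

# 1) Prepare a small labeled dataset (100-1,000 examples) in a generic
#    input/output format for tuning.
examples = [
    {"input": "Customer reports login failures after password reset.",
     "output": "Category: authentication"},
    {"input": "Invoice total does not match the purchase order.",
     "output": "Category: billing"},
]
with open("tuning_examples.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# 2) After tuning and deployment, call the model through a REST endpoint.
#    (Auth is simplified; IBM Cloud normally exchanges an API key for a token.)
DEPLOYMENT_URL = "https://example.cloud.ibm.com/deployments/my-tuned-model/generate"  # placeholder
API_KEY = os.environ.get("IBM_CLOUD_API_KEY", "dummy-key")

def classify(ticket_text: str) -> str:
    """Send one input to the tuned model and return its generated output."""
    resp = requests.post(
        DEPLOYMENT_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": ticket_text, "parameters": {"max_new_tokens": 20}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["generated_text"]

if __name__ == "__main__":
    print(classify("App crashes when uploading a profile photo."))
```

The point of the small-sample approach is that a team only labels a few hundred task-specific examples rather than retraining a model from scratch, then treats the tuned model as an ordinary hosted API.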
IBM's synthetic data generator could 'reduce risk' in AI-model training
IBM is also planning to bring a synthetic data generator for tabular data to Watsonx.ai. The tool will "assist users in creating artificial tabular data sets from custom data schemas or internal data sets," per IBM's official press release. In this way, companies can "extract insights for AI model training and fine-tuning or scenario simulations with reduced risk." However, it remains to be seen how effective this approach will be at mitigating the drawbacks of training AI on synthetic data.
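To make the idea concrete, here is a hypothetical sketch of schema-driven synthetic tabular data: each column is defined by a sampling rule, and artificial rows are drawn from those rules rather than from real records. The schema format and sampling logic are illustrative assumptions, not IBM's tool.

```python
# Conceptual illustration of schema-driven synthetic tabular data,
# not IBM's Watsonx.ai generator. The schema format and sampling
# strategy here are assumptions for demonstration only.
import csv
import random

# A toy schema: column name -> how to sample a value for that column.
schema = {
    "customer_id": lambda: random.randint(10_000, 99_999),
    "plan": lambda: random.choice(["free", "pro", "enterprise"]),
    "monthly_spend_usd": lambda: round(random.uniform(0, 500), 2),
    "churned": lambda: random.random() < 0.15,
}

def generate_rows(n: int) -> list[dict]:
    """Sample n artificial rows that follow the schema, containing no real customer data."""
    return [{col: sample() for col, sample in schema.items()} for _ in range(n)]

if __name__ == "__main__":
    rows = generate_rows(1_000)
    with open("synthetic_customers.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(schema.keys()))
        writer.writeheader()
        writer.writerows(rows)
```

The appeal is that model tuning or scenario simulations can run against data shaped like production records without exposing the records themselves, which is where the "reduced risk" claim comes from.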