Google has announced the launch of its most capable artificial intelligence model, Gemini, which comes in three versions: Gemini Ultra, the largest and most capable; Gemini Pro, a versatile model for a wide range of tasks; and Gemini Nano, designed for on-device and mobile use. Google plans to license Gemini to customers through Google Cloud for use in their own applications, positioning it as a challenger to OpenAI's ChatGPT.
Gemini Ultra excels at the massive multitask language understanding (MMLU) benchmark, outperforming human experts across subjects such as mathematics, physics, history, law, medicine, and ethics. It is expected to power Google products like the Bard chatbot and the Search Generative Experience. Google aims to monetize AI and plans to offer Gemini Pro through its cloud services.
“Gemini is the result of a massive collaborative effort of teams across Google, including our colleagues at Google Research,” CEO Sundar Pichai wrote in a blog post on Wednesday. “It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information, including text, code, audio, image, and video.”
Starting December 13, developers and enterprises can access Gemini Pro through the Gemini API in Google AI Studio or Google Cloud Vertex AI, while Android developers can build with Gemini Nano. Gemini will also enhance Google's Bard chatbot, which now uses Gemini Pro for advanced reasoning, planning, and understanding. An upcoming Bard Advanced, powered by Gemini Ultra, is set to launch next year and is widely expected to be positioned against GPT-4.
While questions remain about how Bard will be monetized, Google emphasizes creating a good user experience and has not provided specific details about pricing or availability of Bard Advanced. According to Google, the Gemini models, especially Gemini Ultra, have undergone extensive testing and safety evaluation. Although it is the largest model, Google claims it is more cost-effective and efficient than its predecessors.
Google also introduced its next-generation tensor processing unit, the TPU v5p, for training AI models. The chip promises better price-performance than the TPU v4. The announcement follows recent custom-silicon efforts by cloud rivals Amazon and Microsoft.
The launch of Gemini, after reported delays, underlines Google's commitment to advancing AI capabilities. The company has been exploring how to turn AI into a profitable business, and Gemini fits its strategy of offering AI services through Google Cloud. Technical details will be further outlined in an upcoming white paper describing the model's capabilities and innovations.