The agentic era began in December with the launch of an experimental version of Gemini 2.0 Flash. It was designed as a highly efficient model for developers, offering low latency and improved performance.
Earlier this year, 2.0 Flash Thinking Experimental received enhancements in Google AI Studio, combining the rapid processing of Flash with advanced reasoning for more intricate challenges.
Last week, an updated version of 2.0 Flash was made available to all users of the Gemini app on both desktop and mobile platforms, enabling users to explore new ways to create, collaborate, and interact with Gemini.
Now, the updated Gemini 2.0 Flash is available to everyone through the Gemini API in Google AI Studio and Vertex AI, allowing developers to build production applications with 2.0 Flash.
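For developers getting started, here is a minimal sketch of calling 2.0 Flash through the Gemini API. It assumes the google-generativeai Python SDK is installed, an API key is available in the GEMINI_API_KEY environment variable, and the prompt text is purely illustrative.

```python
# Minimal sketch: calling Gemini 2.0 Flash via the Gemini API.
# Assumes the google-generativeai SDK is installed (pip install google-generativeai)
# and that GEMINI_API_KEY is set in the environment; the prompt is illustrative only.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Select the 2.0 Flash model referenced above.
model = genai.GenerativeModel("gemini-2.0-flash")

# Send a simple text prompt and print the text of the response.
response = model.generate_content("Explain low-latency model serving in two sentences.")
print(response.text)
```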
In addition, an experimental version of Gemini 2.0 Pro is now available in Google AI Studio, Vertex AI, and the Gemini app for Gemini Advanced users. Gemini 2.0 Pro is positioned as the top model for coding performance and handling complex prompts.
New models, Gemini 2.0 Flash-Lite and Gemini 2.0 Pro Experimental, the most effective options to date, are being launched in public preview within Google AI Studio and Vertex AI. Moreover, 2.0 Flash Thinking Experimental will be accessible to Gemini app users in the model dropdown on both desktop and mobile devices.
All these models will support multimodal input with text output at release, with additional modalities expected to become available in the coming months. The Flash series of models was first unveiled at I/O 2024 and has since gained popularity among developers for its strong performance, which makes it ideal for high-volume, high-frequency tasks at scale.