Google AI announces Gemini 1.5 Pro and Gemma 2 for developers


Google AI has begun giving developers access to expanded context windows and cost-reduction features, starting with the Gemini 1.5 Pro large language model (LLM). Previously available on a waitlist, the full context window of 2 million tokens is now open to all developers. Google is also introducing context caching to reduce the costs of tasks that reuse similar information across multiple prompts.
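The billing idea behind context caching — store a large shared context once and reuse it across prompts instead of resending (and paying for) the same tokens — can be illustrated with a small conceptual sketch. This is purely illustrative Python, not Google's actual caching API; the class and method names are invented for the example.

```python
import hashlib

class ContextCache:
    """Conceptual stand-in for context caching: a big shared context is
    stored once under a hash key, and later prompts reference the cached
    entry instead of resending the tokens each time."""

    def __init__(self):
        self._store = {}
        self.tokens_saved = 0

    def put(self, context: str) -> str:
        """Cache a large shared context; return a handle for later prompts."""
        key = hashlib.sha256(context.encode()).hexdigest()[:16]
        self._store[key] = context
        return key

    def build_prompt(self, key: str, question: str) -> str:
        """Reuse the cached context rather than retransmitting it."""
        context = self._store[key]
        self.tokens_saved += len(context.split())  # rough token estimate
        return f"{context}\n\nQuestion: {question}"

cache = ContextCache()
handle = cache.put("A very long document that many prompts will share. " * 100)
prompt = cache.build_prompt(handle, "Summarize the document.")
```

The real feature works server-side inside the Gemini API, but the cost logic is the same: repeated prompts pay for the small question, not the shared multi-million-token context.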

With the new update, the Gemini 1.5 Pro and Flash models can dynamically generate and execute Python code, applying data analysis and mathematics to solve reasoning problems more effectively. According to Google, the code execution environment is secure and has no Internet access; the company also specifies that developers are only charged for the final result the model generates. The lightweight Gemma 2 model is now also available in Google AI Studio, under "Advanced Settings."
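To make the idea concrete, here is a minimal local sketch of running model-generated Python in a restricted namespace. Google's actual execution environment is server-side and, per the announcement, has no Internet access; this stand-in (with an invented `run_generated_code` helper) only mimics the concept using a builtins allowlist, which is not a production-grade sandbox.

```python
def run_generated_code(code: str) -> dict:
    """Execute model-generated code with only a small allowlist of builtins,
    so it cannot import modules, open files, or reach the network."""
    allowed_builtins = {"abs": abs, "min": min, "max": max,
                        "sum": sum, "range": range}
    namespace = {"__builtins__": allowed_builtins}
    exec(code, namespace)
    # Return whatever variables the generated code defined.
    return {k: v for k, v in namespace.items() if k != "__builtins__"}

# e.g. the model emits code that computes the sum of squares 1..10:
result = run_generated_code("answer = sum(n * n for n in range(1, 11))")
# → result["answer"] == 385
```

In the real API, the model decides when to write and run code mid-response (for instance, to do arithmetic a language model would otherwise get wrong) and the developer sees only the final answer.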

Several companies are already using Google Gemini 1.5 Flash. The list of applications using the technology includes Envision, which helps visually impaired users, Plural’s policy analysis, Zapier’s video processing, and Dot’s custom AI memory system. The other element that’s currently rolling out to developers is what Google’s post calls “text tuning,” which is expected to be fully available in mid-July.
