Google Introduces Interactions API for Gemini Models

Google launches the Interactions API, a unified interface for Gemini models and agents that enables complex AI workflows and enterprise integration.


Google Launches Interactions API: Unified Gateway for Gemini Models and Advanced AI Agents

Google unveiled the Interactions API in public beta on December 11, 2025, giving developers a single, streamlined interface to Gemini models such as Gemini 3 Pro and to the new Gemini Deep Research agent, marking a pivotal shift toward stateful, agentic AI applications. The launch, accessible via Google AI Studio with existing Gemini API keys, addresses limitations of the prior stateless generateContent API by enabling complex workflows involving interleaved messages, tool calls, and long-running tasks. Developers can now embed autonomous research capabilities directly into apps, with plans for custom agent support and enterprise integration through Vertex AI.


[Image: Official graphic from the Google Blog illustrating the Interactions API as a unified foundation for models and agents, showing interconnected workflows between Gemini models and the Deep Research agent.]

Background and Evolution of Google's AI Developer Tools

The Interactions API emerges amid rapid advancements in AI agent technology, where simple text generation has given way to multi-turn, state-managed interactions requiring "thinking" steps and external tool integration. Previously, Google's generateContent endpoint handled basic request-response cycles, but it struggled with agentic loops involving conversation history and background processing. The new API introduces a RESTful endpoint that unifies access to models such as Gemini 2.5 Pro, Gemini 2.5 Flash, and the preview Gemini 3 Pro, alongside agents like deep-research-pro-preview-12-2025.

This beta release coincides with upgrades to Gemini Deep Research, Google's state-of-the-art research agent now capable of executing long-horizon investigations and synthesizing comprehensive reports. Unlike consumer-facing tools, the API targets developers building production-ready applications, with Google emphasizing its role in the Agent Development Kit (ADK) and Agent2Agent (A2A) ecosystem. The timing—mid-December 2025—positions Google to compete aggressively in the agentic AI race, following industry trends toward autonomous systems from rivals like OpenAI and Anthropic.


[Image: Screenshot from the Google AI for Developers docs showing a sample Deep Research output: a detailed report with citations and analysis generated via the Interactions API.]

Key Features and Technical Capabilities

The Interactions API stands out for its developer-friendly design, prioritizing server-side state management to track conversation histories without client-side boilerplate. Core features include:

  • Server-Side State: Use previous_interaction_id to reference prior sessions, simplifying multi-turn agents in ADK workflows; a minimal sketch follows this list.
  • Background Execution: Set background=True for long tasks like deep research; clients poll for results via interaction ID, avoiding timeouts.
  • Model Context Protocol (MCP) Support: Enables seamless connections to external tools and data sources, with native integration for third-party systems.
  • Nested Data Model: Represents messages, thoughts, and tool calls as nested, inspectable structures, with support for streaming responses and moderation.
  • Supported Entities: Models like gemini-3-pro-preview and agent deep-research-pro-preview-12-2025; future expansions include custom agents.
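
To make the state-management item concrete, here is a minimal sketch of chaining turns through previous_interaction_id. The /interactions path, the payload fields, and the "id" response field mirror the feature names above but are assumptions about the beta's wire format, not confirmed details:

import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]
BASE = "https://generativelanguage.googleapis.com/v1beta"  # assumed base URL

def interact(text, previous_interaction_id=None):
    # Server-side state: passing previous_interaction_id lets the service
    # reattach the conversation history, so the client resends none of it.
    payload = {"model": "gemini-3-pro-preview", "input": text}
    if previous_interaction_id:
        payload["previous_interaction_id"] = previous_interaction_id
    resp = requests.post(
        f"{BASE}/interactions",  # hypothetical path for the Interactions API
        headers={"x-goog-api-key": API_KEY},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()

first = interact("Summarize the Interactions API launch.")
# The follow-up references the first turn; no chat history travels with it.
followup = interact("Now list its core features.",
                    previous_interaction_id=first["id"])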

Code integration is straightforward, as this ADK example shows:

from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini
from google.adk.tools.google_search_tool import GoogleSearchTool

# Route the agent's model calls through the Interactions API rather than
# the stateless generateContent endpoint.
root_agent = Agent(
    name="search_agent",  # ADK agents require a name
    model=Gemini(model="gemini-2.5-flash", use_interactions_api=True),
    tools=[GoogleSearchTool()],
)

This setup lets agents leverage Google Search and custom tools with minimal wiring. Google also open-sourced DeepSearchQA, a benchmark for evaluating web research agents, underscoring its commitment to rigorous testing.
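
Continuing from that snippet, the sketch below runs the agent with ADK's in-memory runner. The wiring follows ADK's documented in-memory pattern, though session-service details can vary across ADK versions:

import asyncio

from google.adk.runners import InMemoryRunner
from google.genai import types

runner = InMemoryRunner(agent=root_agent, app_name="demo")

async def main():
    # Sessions hold the conversational state the agent accumulates per turn.
    session = await runner.session_service.create_session(
        app_name="demo", user_id="user1")
    message = types.Content(
        role="user",
        parts=[types.Part(text="What changed in Gemini 3 Pro?")])
    # run_async streams events: thoughts, tool calls, and the final reply.
    async for event in runner.run_async(
            user_id="user1", session_id=session.id, new_message=message):
        if event.content and event.content.parts:
            for part in event.content.parts:
                if part.text:
                    print(part.text)

asyncio.run(main())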


[Image: Diagram from the Google Developers Blog depicting an ADK agent flow with the Interactions API, highlighting state management and background execution.]

Gemini Deep Research: The Flagship Agent

At the API's core is Gemini Deep Research (Preview), an upgraded agent that performs autonomous investigations, browsing the web, analyzing data, and generating reports with citations. Developers can invoke it via agent="deep-research-pro-preview-12-2025", embedding capabilities like chart generation and MCP-extended data access into apps. Upcoming consumer rollout to the Gemini app will broaden reach, but the API prioritizes programmatic control for complex use cases.
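
As a rough illustration of that programmatic control, the sketch below starts a background research task and polls for the result, the pattern the API's background execution is designed for. It assumes a REST shape consistent with the Gemini API's v1beta conventions; the /interactions path, field names, and status values are assumptions rather than confirmed wire format:

import os
import time
import requests

API_KEY = os.environ["GEMINI_API_KEY"]
BASE = "https://generativelanguage.googleapis.com/v1beta"  # assumed base URL

# Kick off a long-horizon research task without blocking the client.
resp = requests.post(
    f"{BASE}/interactions",  # hypothetical path for the Interactions API
    headers={"x-goog-api-key": API_KEY},
    json={
        "agent": "deep-research-pro-preview-12-2025",
        "input": "Compare recent open benchmarks for web research agents.",
        "background": True,  # return immediately with an interaction ID
    },
)
interaction = resp.json()

# Poll by interaction ID until the report is ready (status names assumed).
while interaction.get("status") not in ("completed", "failed"):
    time.sleep(15)
    interaction = requests.get(
        f"{BASE}/interactions/{interaction['id']}",
        headers={"x-goog-api-key": API_KEY},
    ).json()

print(interaction.get("output"))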

Industry Impact and Future Roadmap

This launch reinforces Google's enterprise ambitions, with Vertex AI integration on the horizon to scale agentic apps for businesses. By centralizing models and agents, the API reduces fragmentation in a crowded landscape of APIs, potentially accelerating adoption among the 2 million+ Gemini API users. Challenges remain: as a beta, it may see breaking changes, and costs for high-volume inference could deter startups.

Analysts view it as a "decisive move toward agent-centric AI," aligning with the Model Context Protocol's growing role as connective infrastructure for tools and data. Future plans include more built-in agents, custom agent development tools, and richer outputs like visual analytics. Google has also teased A2A protocol enhancements for multi-agent systems.


[Image: Screenshot from Google AI Studio showing Interactions API access with a Gemini API key, including a model/agent selection dropdown.]

The Interactions API positions Google as a leader in practical AI tooling, empowering developers to create intelligent, scalable agents that transform industries from research to automation. Early adopters report streamlined workflows, signaling strong potential as the platform matures.

Tags

Google, Interactions API, Gemini Models, AI Agents, Vertex AI, Agent Development Kit, Gemini Deep Research

Published on December 11, 2025 at 05:00 PM UTC
