Google released two new autonomous research agents on Monday (April 21) — Deep Research and Deep Research Max — built on Gemini 3.1 Pro and designed for enterprise workflows that require exhaustive, multi-source analysis.

Both agents are available now in paid preview through the Gemini API. They mark a significant upgrade over the original Deep Research agent, which Google launched in December 2025 via the Interactions API.

“Deep Research has transformed from a sophisticated summarization engine into a foundation for enterprise workflows across finance, life sciences, market research, and more,” Google said in its announcement.

Two Agents, Two Use Cases

The split comes down to speed versus depth:

Deep Research is optimized for low latency and efficiency. It’s meant for real-time applications — think search interfaces or APIs where users need fast results. Google says it delivers “significantly reduced latency and cost at higher quality levels” compared to the December release.

Deep Research Max is the heavy hitter. It uses extended test-time compute to run multiple rounds of searching, reasoning, and refining before producing a final report. Google positions it for asynchronous workflows: the example Google gives is a nightly cron job that generates exhaustive due-diligence reports for analysts by morning.
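The asynchronous pattern Google describes can be sketched as a simple batch job. Everything below is illustrative, not documented API: the `fetch_report` callable stands in for whatever long-running Deep Research Max call the Gemini API exposes, and the output layout is an assumption.

```python
from datetime import date
from pathlib import Path
from typing import Callable

def run_nightly_research(topics: list[str],
                         fetch_report: Callable[[str], str],
                         out_dir: str = "reports") -> list[Path]:
    """Generate one due-diligence report per topic and write it to disk.

    `fetch_report` is a stand-in for a long-running Deep Research Max
    request; in production it would block until the agent's final
    report is ready, potentially for many minutes per topic.
    """
    out = Path(out_dir) / date.today().isoformat()
    out.mkdir(parents=True, exist_ok=True)
    written: list[Path] = []
    for topic in topics:
        report = fetch_report(topic)  # slow, exhaustive research call
        path = out / f"{topic.replace(' ', '_')}.md"
        path.write_text(report)
        written.append(path)
    return written
```

Scheduled via cron (e.g. `0 2 * * *`), a script like this would leave the day's reports waiting for analysts by morning.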

On industry benchmarks tracking retrieval and reasoning capabilities, Deep Research Max outperforms the December release by a wide margin. Google says Max consults significantly more sources and catches critical nuances that the earlier version missed.

[Chart: Deep Research Max performance across industry-standard benchmarks]

What’s New

The biggest addition is Model Context Protocol (MCP) support. Developers can now connect Deep Research to internal databases, proprietary data streams, or specialized providers like financial data services — not just the open web. This turns Deep Research from a web searcher into an agent that can navigate any data repository.

Google is already working with FactSet, S&P Global, and PitchBook on MCP server designs to let customers integrate their financial data offerings into Deep Research-powered workflows.

Other new features:

Native visualizations: Deep Research can now generate charts and infographics inline, using HTML or Google’s Nano Banana system. No more text-only reports.

Multimodal input: Feed the agent PDFs, CSVs, images, audio, or video as context for research.

Collaborative planning: Review and refine the research plan before the agent executes it.

Extended tooling: Combine Google Search, remote MCP servers, URL Context, Code Execution, and File Search in a single workflow — or turn off web access entirely to search only proprietary data.

Real-time streaming: Track intermediate reasoning steps as the agent works.
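The streaming feature suggests clients consume an iterator of intermediate events. The event shape below (dicts with `type` and `text` fields) is an assumption for illustration only; the real API's streaming format may differ.

```python
from typing import Iterable, Iterator

def reasoning_steps(stream: Iterable[dict]) -> Iterator[str]:
    """Yield only intermediate reasoning text from a mixed event stream.

    The event dicts are illustrative stand-ins for whatever the
    Gemini API actually streams back during a research run.
    """
    for event in stream:
        if event.get("type") == "reasoning":
            yield event["text"]

def final_report(stream: Iterable[dict]) -> str:
    """Concatenate the final report chunks from the same kind of stream."""
    return "".join(e["text"] for e in stream if e.get("type") == "report")
```

A UI could surface `reasoning_steps` live as a progress log while buffering `final_report` chunks for the finished document.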

Where It’s Already Running

The Deep Research infrastructure powers research features across several Google products, such as the Gemini app, NotebookLM, Google Search (AI Mode), and Google Finance. The API release lets developers tap into the same system.

Use Cases

Google highlights finance and life sciences as primary targets. Investment firms can automate due diligence by pulling market signals, competitive intelligence, and risk analysis into a single report. Biomedical researchers can organize literature and evidence to accelerate drug development.

The broader pitch: AI is shifting from a conversational assistant to an autonomous agent. Deep Research doesn’t just answer questions — it executes multi-step research processes, integrates sources, generates structured reports with citations, and produces visualizations. That’s closer to what a human analyst actually does.

Deep Research and Deep Research Max are available now via the Gemini API. Google says they’ll also be available to startups and enterprises through Google Cloud soon.