Gemini’s latest updates turn it into a full AI “workspace”: you get a much faster default model (Gemini 3 Flash), deep NotebookLM integration, visual and interactive answers, custom mini-app Gems, upgraded Deep Research and Deep Think modes, plus early agent features that can plan and execute real-world tasks for you. Below is an in-depth breakdown of what’s new and how it changes day-to-day use.
Gemini 3 Flash: New Everyday Engine
Gemini 3 Flash is now the main model behind the “Fast” setting in the Gemini app, bringing frontier-level intelligence at near-real-time speed. It is designed to handle text, images, audio, code, and other modalities in a single conversation, but with latency low enough for interactive use such as chat, coding, or iterative brainstorming.
Compared with the earlier 2.5 Flash, Gemini 3 Flash offers stronger reasoning and code generation, often matching or beating older Pro-tier models on complex tasks while remaining cheaper and widely available across devices. For free users, the "Fast" setting on Gemini 3 Flash is now generally available, while the heavier "Thinking" and Pro-tier modes (Ultra, Deep Think, Deep Research) sit behind stricter daily limits or paid subscriptions.
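For developers, the model should also be reachable through the Gemini API. The sketch below shows the general shape of a `generateContent` REST request; note that the exact model identifier (`gemini-3-flash` here is inferred from the naming above) and endpoint version should be confirmed against the official API documentation before use.

```python
# Sketch: constructing a generateContent request for the Gemini REST API.
# ASSUMPTION: the model identifier "gemini-3-flash" is inferred from the
# article's naming; verify the real ID in Google's API docs.

API_ROOT = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str) -> tuple[str, dict]:
    """Return the endpoint URL and JSON payload for a simple text prompt."""
    url = f"{API_ROOT}/models/{model}:generateContent"
    payload = {
        "contents": [
            {"role": "user", "parts": [{"text": prompt}]}
        ]
    }
    return url, payload

url, body = build_generate_request(
    "gemini-3-flash",
    "Summarize this article in three bullet points.",
)
# A real call would POST `body` to `url` with an API key header, e.g.
# requests.post(url, json=body, headers={"x-goog-api-key": KEY})
```

The same payload structure extends to multimodal input by adding image or audio parts alongside the text part, which is how a single conversation can mix modalities as described above.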
NotebookLM + Gemini: One Knowledge Brain
NotebookLM, Google’s note- and source-centric “AI research notebook,” can now be attached directly inside a Gemini chat as if it were just another file or source. From the attachment menu, you can pick one or more notebooks, and Gemini will treat them as a live knowledge base it can cite, summarize, and remix in real time.
This matters because NotebookLM is optimized for organizing long-form materials: you can load Google Docs, PDFs, Drive folders, website links, and YouTube videos into structured notebooks, then let Gemini generate study guides, scripts, outlines, or briefs grounded explicitly in those sources. A creator could, for example, store all research for a video in one notebook, then in Gemini attach that notebook plus a “YouTube Script” Gem to auto-generate scripts that remain faithful to the raw research.
Visual Layout, Dynamic View, and Rich Outputs
The Gemini app now offers experimental “visual layout” and “dynamic view” modes that transform long answers into interactive, magazine-style experiences rather than plain text blocks. When enabled in Labs, a query like “plan a 3-day trip to Rome next summer” produces a scrollable layout with day-by-day cards, images, and clickable modules that let you expand or tweak specific parts of the itinerary.
Visual layout uses Gemini’s multimodal understanding to automatically choose images, section headings, and interactive components that match the topic, aiming to make dense information easier to scan and personalize. Dynamic view adds interactive controls on top, so you can quickly filter or adjust aspects of the answer, such as budget or time, without rewriting the entire prompt; this makes it well suited to planning, learning, and content exploration.
New Gems: Mini AI Apps with Opal
Gems (custom Gemini personas) have evolved into full mini apps powered by an internal Google Labs tool called Opal. In the Gems tab on the web, an experimental “Build AI apps” banner opens a visual builder where you describe the app you want (e.g., “scan my fridge and suggest recipes”) and Gemini turns that prompt into a reusable app with its own UI.
Google ships several starter Gems, like “Recipe Genie” and “Claymation Expert,” which you can run as-is or remix — adding new inputs, steps, and outputs. Power users can switch from the simplified Gemini view to the Advanced Editor at opal.google for more granular control over logic, data flow, and interface, then share the finished mini app as a link so anyone with Gemini can use it.
Deep Research: Full Research Agent
Gemini Deep Research is now a dedicated mode that behaves like an autopilot research assistant, capable of spending several minutes traversing hundreds of sources before writing a structured, cited report. Rather than producing one-shot answers, it plans, executes multiple web-browsing passes, and compiles findings into sections, each with citations, making it suitable for literature-style reviews or competitive analysis.
The newest upgrade adds rich visuals: Deep Research reports can include interactive charts, graphs, and infographics so you can quickly see trends such as emissions comparisons or cost breakdowns at a glance. Ultra or Google AI Pro subscribers get higher limits and more sophisticated reports, while free users typically receive a small number of Deep Research runs per day in “Thinking” mode.
Deep Think: Heavyweight Reasoning Mode
Gemini 3 Deep Think is a special mode for Google AI Ultra subscribers that pushes reasoning beyond the typical chat experience, especially in math, science, and formal logic. It uses iterative and parallel reasoning—exploring multiple hypotheses at once—then consolidates them into a final answer, drastically improving accuracy on multi-step puzzles and proofs.
Crucially, Deep Think can expose parts of its internal chain-of-thought, so you can see a step-by-step decomposition, check assumptions, and adjust or challenge specific steps. This makes it particularly useful for debugging complex code, validating scientific derivations, or solving contest-level math problems where process transparency matters as much as the final result.
Gemini Agent: From Chatbot to Doer
Gemini Agent is an early-stage feature that turns Gemini into a multi-step task executor capable of acting across Google services and the web, not just replying in text. Users can describe end-to-end tasks—such as “plan a two-day CES trip from Denver to Las Vegas, find flights and hotels near the convention center, and create a schedule around major keynotes”—and the agent will automatically research dates, search flights, map hotels, and put together an itinerary.
Behind the scenes, the agent chains together searches, calls to services like Maps and Gmail, and internal planning steps, then surfaces a human-readable plan you can edit or approve. Early access appears limited to certain Google AI subscribers and regions, but it signals a shift from passive Q&A towards persistent, autonomous workflows that save time on planning, booking, and information gathering.
Other Notable Improvements
- Nano Banana & image editing: A new, more precise image editor lets you circle or draw directly on an image to indicate editing regions, useful for product shots or thumbnails.
- Audio & Live updates: New Gemini 2.5 Flash native audio models and upgraded text-to-speech (Flash and Pro) improve real-time conversations, dubbing, and expressive speech, especially in the Live API.
- Local visual answers: Local results inside Gemini now resemble a mini-Maps experience, with photos, ratings, and key details for nearby shops and venues without leaving the chat.
Disclaimer
This article is based on officially announced features, product documentation, and early access reports. Availability, limits, and functionality may vary by region, device, and subscription tier. Some features described are experimental and may change, expand, or be discontinued as Google continues development.