At the centre of the rollout is Gemini’s integration with Google Photos. The AI can now scan a user’s photo library, including labelled faces, relationships and past moments, to create customised images. Instead of manually uploading reference images or writing detailed prompts, users can rely on the system to “fill in the blanks” using existing data.

Google says this makes AI interactions more intuitive. A user could request an image depicting, say, a family vacation or another personal scenario, and Gemini would generate it using stored visual references. A “Sources” option lets users see which images guided the output, and prompts can be refined if results are inaccurate.

The feature is powered by the Nano Banana 2 image generator and is designed to reduce the need for complex prompts. By connecting multiple apps, Gemini can also offer contextual assistance beyond images – referencing past emails to suggest appointments, using search history to recommend content, or drawing from photos to infer preferences.

“Personal Intelligence gives Gemini an inherent understanding of your preferences from the start,” Google said, adding that it allows the system to work with real-life context rather than abstract instructions.

The rollout is currently limited to paid Gemini subscribers in the United States, including Google AI Plus, Pro and Ultra users, with plans to expand to more regions and integrate further into Chrome and Search.