Google has started rolling out powerful new AI features for Gemini Live, making it easier than ever for Android users to interact with their devices in real time. These updates bring AI-driven screen and camera integration, allowing users to get instant, context-aware insights just by sharing what's on their screen or pointing their camera at something. With these advancements, Google is taking another big step toward embedding artificial intelligence into everyday mobile experiences.
With Gemini Live, users can now share their screen or activate their camera, letting the AI "see" and respond to what's in front of them. By selecting the "Share screen with Live" option in the Gemini overlay, users can ask questions about what's on their screen and receive immediate answers. This also works with live camera feeds, where Gemini can analyze what the camera captures and provide helpful insights. While not fully rolled out yet, Google has also introduced a new phone call-style notification and a more compact fullscreen interface for smoother interactions.
These features have practical, real-world uses. Imagine watching a travel vlog on YouTube: you can now ask Gemini to list the restaurants mentioned and save them directly to Google Maps. Or, if you're scanning a room with your camera, Gemini can offer interior design tips or help identify objects in real time.
These updates are part of Google's broader initiative, "Project Astra," aimed at making AI more intuitive and interactive. Currently, Gemini Live is available to Gemini Advanced subscribers through the Google One AI Premium plan, and early users have spotted these features on various devices, including Xiaomi smartphones.
As these features continue to roll out, they are expected to significantly change how users interact with their Android devices, offering a more seamless and informative experience.