For years, mobile AI has mostly meant smarter chatbots. Now, Google is changing that script. As agentic features arrive first on the Pixel 10 and Galaxy S26, Gemini begins to step out of the chat window and into action.
Now in beta, the new features let Gemini handle multi-step tasks directly on Android devices. It can follow the flow of a conversation, understand what’s on screen, and take action inside third-party apps, whether that’s ordering dinner or booking a ride. It’s a direction Apple previewed for Siri nearly two years ago, though those capabilities have yet to move beyond staged demos.
At Google’s launch event, Sameer Samat, president of Android, showed a recorded demonstration of the system in action. In the demo, Gemini monitored a lively group chat in which family members were deciding what to eat. When prompted, the assistant summarized the conversation, identified everyone’s preferences, and started an order through Grubhub. The user then reviewed and confirmed the order manually, keeping one human checkpoint in the loop.
The demo looked simple, but the implications are broader. Gemini understands context, works across apps, and acts independently. It reflects Google’s move to make AI a background system rather than a visible add-on.
The company recently introduced AI-assisted browsing in Chrome, and bringing similar autonomy to Android is the logical next step: embedding an intelligent agent at the system layer.
Google’s timing also puts more pressure on Apple. At WWDC 2024, Apple showed off its Apple Intelligence plans, saying Siri would be able to read what’s on your screen, jump between apps, and grab things like flight details from your email. But those features still haven’t shipped.
With delays announced earlier in 2025 and Bloomberg reporting that parts of the plan might not arrive until iOS 27, Apple’s big AI push still feels more like an idea than a product. Google, on the other hand, is getting ready to roll out something people can actually try.

The real test for Gemini will be how smoothly it works in everyday use. The beta label suggests Google expects some friction. Some third-party developers may also be reluctant to let Gemini operate inside their apps, a dynamic often called the “DoorDash problem,” where automation conflicts with engagement-driven business models. At launch, the assistant supports only a handful of food delivery and rideshare services.
It’s still too early to tell whether those restrictions will hold the feature back or simply serve as the starting point for broader app integration. If the experience on real phones matches the demo, it could change how AI assistants navigate today’s app-centric ecosystem.