In a recent statement, Pavan Davuluri, Microsoft’s President of Windows, reaffirmed the company’s long-term plan to make Windows an “AI-native” platform. The vision is to turn the operating system into a more proactive, agent-like environment, one where built-in AI tools can understand context, make decisions, and handle complex tasks on their own.
Windows is evolving into an agentic OS, connecting devices, cloud, and AI to unlock intelligent productivity and secure work anywhere. Join us at #MSIgnite to see how frontier firms are transforming with Windows and what’s next for the platform. We can’t wait to show you!…
— Pavan Davuluri (@pavandavuluri) November 10, 2025
Davuluri shared the post ahead of Microsoft’s Ignite conference, happening November 18–21. But the reaction wasn’t what the company hoped for. Instead of excitement, it triggered a flood of complaints from Windows users, many of whom said they’ve had enough of AI being pushed deeper into the OS.
The post quickly drew hundreds of replies, most of them negative. Many users argued that Microsoft’s vision for Windows no longer aligns with what people actually want. A recurring question emerged: why is the company doubling down on its AI strategy despite so much pushback? One commenter even accused Microsoft of being “stubbornly attached” to a direction few users seem to support.

At the center of Microsoft’s plan for a smarter, more “agentic” Windows is Copilot. The company imagines it growing into something far more capable, not just launching apps or tidying up files, but summarizing long documents, drafting emails, and managing workflows on its own. All a user would need to do is describe the task, and Copilot would handle the rest.
Looking ahead, Microsoft plans to give Copilot three major upgrades: Voice, Vision, and Actions. Copilot Voice will handle spoken, natural-language commands. Copilot Vision will scan and interpret content on open webpages to pull useful information. And Copilot Actions will let the assistant interact directly with apps and local files, even using Connectors to pull extra data from the cloud. Together, these features hint at a future where users could run entire workflows by voice, making the keyboard and mouse far less essential.