Google is bringing Gemini, its generative AI, to cars that support Android Auto in the next few months, the company announced ahead of its 2025 I/O developer conference. The integration aims to make driving “more productive — and fun,” according to a blog post.
Patrick Brady, VP of Android for Cars, described the move as “one of the largest transformations in the in-vehicle experience that we’ve seen in a very, very long time.” Gemini will enhance Android Auto in two primary ways. First, it will act as a more capable voice assistant, letting users send texts, play music, and perform other tasks without needing to phrase commands precisely.
Gemini’s natural language capabilities will allow it to understand and respond to more complex requests. For instance, it can “remember” a contact’s language preference for text messages and handle translations accordingly. Google claims Gemini will also be able to find restaurants along a planned route, including responding to specific queries like “taco places with vegan options” by leveraging Google listings and reviews.
The second main feature is “Gemini Live,” which lets the AI hold full conversations on a range of topics, from travel ideas to Roman history. Brady argued that this conversational flexibility will reduce cognitive load by making it easier to ask Android Auto to perform tasks.
Gemini will initially rely on Google’s cloud processing for both Android Auto and cars with Google Built-In. However, Google is working with automakers to build more edge computing capability into vehicles, which should improve performance and reliability. Brady noted that modern cars generate significant data from onboard sensors and cameras, and while Google has “nothing to announce” regarding Gemini’s use of that multimodal data, he acknowledged the potential for “really, really interesting use cases in the future.”
Gemini on Android Auto and Google Built-In will be available in countries where Google’s generative AI model is already accessible and will support over 40 languages.