Large language models (LLMs) differ in their capabilities. To give you control and flexibility in AI-assisted software development, Android Studio lets you choose the LLM that powers the IDE's AI functionality. The LLM must be local, running on your personal machine.
Local LLM support is available in the Android Studio Narwhal 4 Feature Drop release, which you can download from the canary channel.
Choose an LLM
A local LLM offers an alternative to the LLM support built into Android Studio; however, Gemini in Android Studio typically provides the best AI experience for Android developers because it is backed by the powerful Gemini models. You can select from a variety of Gemini models for your Android development tasks, including the no-cost default model or models accessed with a paid Gemini API key.
Local LLM capability is a great option if you need to work offline, must adhere to strict company policies on AI tool usage, or are interested in experimenting with open-source research models.
Set up local LLM support
1. Download and install Android Studio Narwhal 4 Feature Drop Canary 2 or higher.
2. Install an LLM provider such as LM Studio or Ollama on your local computer.
3. Add the model provider to Android Studio:
   - Go to Settings > Tools > Gemini > Model Providers.
   - Configure the model provider:
     - Select the add icon.
     - Enter a description of the model provider (typically the model provider's name).
     - Set the port on which the provider is listening.
     - Enable a model.
   Figure 1. Model provider settings.

4. Download and install a model of your choice. See the LM Studio and Ollama model catalogs. For the best experience with Agent Mode in Android Studio, select a model that has been trained for tool use.
5. Start your inference environment. The inference environment serves your LLM to local applications. Configure a sufficiently large context window (in tokens) for optimal performance. For detailed instructions on starting and configuring your environment, see the Ollama or LM Studio documentation. To confirm the provider is reachable on the port you configured, see the sketch after these steps.
6. Select a model. Open Android Studio and navigate to the Gemini chat window. Use the model picker to switch from the default Gemini model to your configured local model.

   Figure 2. Model picker.
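Before switching models in the IDE, you can confirm that the provider is up and reachable by querying it directly. The following is a minimal sketch, assuming a provider that exposes an OpenAI-compatible /v1/models endpoint, which LM Studio and Ollama typically do; the port value is an assumption and should match the port you entered in the Model Providers settings.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Assumed port: 11434 is Ollama's usual default; LM Studio typically listens on 1234.
    // Use the same port you entered in Settings > Tools > Gemini > Model Providers.
    val port = 11434

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:$port/v1/models"))
        .GET()
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // A 200 response listing your downloaded models means the provider is running
    // and reachable on the port you configured for Android Studio.
    println("HTTP ${response.statusCode()}")
    println(response.body())
}
```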
After you've connected Android Studio to your local model, you can use the chat features within the IDE. All interactions are powered entirely by the model running on your local machine, giving you a self-contained AI development environment.
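If you want to see for yourself that responses come from the model on your machine, you can send a chat request straight to the local server. This is a minimal sketch rather than a description of how Android Studio itself calls the provider; it assumes an OpenAI-compatible chat endpoint, which LM Studio and Ollama typically expose by default, and the port and model name are placeholders for your own setup.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Placeholder values: match them to your provider's port and a model you've downloaded.
    val port = 11434
    val model = "qwen2.5-coder"

    // Minimal OpenAI-style chat request; LM Studio and Ollama both accept this
    // shape on /v1/chat/completions in their default server configurations.
    val body = """
        {"model": "$model",
         "messages": [{"role": "user", "content": "In one sentence, what does a ViewModel do?"}]}
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:$port/v1/chat/completions"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // Raw JSON reply; the generated text is in choices[0].message.content.
    println(response.body())
}
```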
Consider performance limitations
A local, offline model typically won't be as performant or intelligent as the cloud-based Gemini models: chat responses from local models are usually less accurate and arrive with higher latency. Local models are also usually not fine-tuned for Android development and can return responses that show no awareness of the Android Studio user interface. As a result, some Android Studio AI features and Android development use cases don't work with a local model; however, the AI chat feature in Android Studio is generally supported.
For fast, accurate responses on all aspects of Android development and support for all Android Studio features, Gemini in Android Studio, powered by the Gemini models, is your best solution.
Try it and provide feedback
You can explore local LLM support by downloading the latest preview version of Android Studio from the canary release channel.
Check known issues to see if any problems you encounter are already documented. If you find new issues, report bugs.