Ollama is a command-line application for running generative AI models locally on your own computer. A new update is rolling out with some impressive improvements, alongside Ollama’s own desktop application for easier use.

If you’re not familiar with it, Ollama allows you to run generative AI models like DeepSeek-R1, Google’s Gemma 3, Meta’s Llama 3, Microsoft’s Phi 4 and Phi 4 Mini, and LLaVA on your own hardware. You can use them in a chat context, just like ChatGPT and other cloud services, or connect them to automations and scripts for batch operations.
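For example, a batch script can feed prompts to a locally running model through Ollama's REST API. This is only a rough sketch, assuming the Ollama server is running on its default port and a model like Llama 3 has already been pulled; the prompts are placeholders:

```python
# Minimal sketch of scripting Ollama for a batch job via its local REST API.
# Assumes Ollama is already running on the default port (11434) and a model
# such as "llama3" has been pulled; the prompts below are just examples.
import requests

prompts = [
    "Summarize the plot of Hamlet in one sentence.",
    "Explain what a hash table is in plain language.",
]

for prompt in prompts:
    reply = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    ).json()
    print(reply["response"])
```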

Ollama desktop app screenshot.

Ollama version 0.10 has two completely new features. First, the ollama ps command now shows the context length of each loaded model, so you can quickly tell how much information a model is currently working with. Second, the OpenAI-compatible API now accepts WebP images when processing image input, alongside other formats like JPEG and PNG.
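If you want to try the WebP support yourself, here's a rough sketch of what an image request might look like through the OpenAI-compatible endpoint, assuming Ollama is running locally on its default port and you've pulled a vision-capable model like LLaVA (the file name and prompt are placeholders):

```python
# Minimal sketch of sending a WebP image through Ollama's OpenAI-compatible API.
# Assumes Ollama is running locally on the default port and that a vision-capable
# model such as "llava" has already been pulled; the file path is illustrative.
import base64
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

# Encode the WebP file as a base64 data URL, the format the API expects for images.
with open("photo.webp", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="llava",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/webp;base64,{image_data}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```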

The most exciting changes here might be the performance improvements. Gemma 3n models, which are similar to Google's Gemini Nano models designed for phones and tablets, now run 2–3 times faster. And if you're splitting a model across multiple GPUs, you can expect performance gains of 10–30%.

There are also a few helpful bug fixes and other smaller improvements, as seen in the full changelog below.

There's also a new Ollama desktop application, available for macOS, Linux, and Windows, so you can use it through a standard chat interface without opening a terminal. Many front-end apps have been built around Ollama to serve that purpose, such as Open WebUI and macLlama, so it's interesting to see the project itself finally make one. The interface is simple and easy to understand, and it supports multimodal input and Markdown.

One of Ollama's developers said in a Hacker News comment, "We are all developers ourselves, and we use it. In fact, there are many self-made prototypes before this from different individuals. We were hooked, so we built it for ourselves."

You can download the Ollama app from the project's website. The command-line version is available from software repositories like Homebrew and Docker Hub.