I'm excited to share my experience with a local, open-source AI alternative to Claude Code, and it's completely free! But can it really compete with the big names? Let's dive in and find out.
The Quest for a Free AI Coding Companion
I recently stumbled upon Goose and Qwen3-coder, two intriguing AI tools that, according to online chatter, might just be the perfect duo to challenge Claude Code's pricey plans. So, I decided to put them to the test and see if they could live up to the hype.
Setting Up the AI Stack
The first step was to download and install Goose and Ollama. Goose, developed by Jack Dorsey's company Block, is an open-source agent framework, while Ollama serves as an LLM server. Later, we'll download the Qwen3-coder model from within Ollama.
I hit a small hiccup early on: I tried to get Goose talking to Ollama before Ollama was actually set up. A rookie mistake, but an important lesson learned!
Next, I installed Ollama so I could pull Qwen3-coder. I recommend starting with Ollama, and I opted for the app over the command-line version for ease of use. Once Ollama is installed, you'll see a chat-like interface with a default model, gpt-oss:20b. From there, I switched to Qwen3-coder:30b, a coding-optimized model with 30 billion parameters.
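If you prefer the terminal to the app's chat window, pulling and sanity-checking the model takes two commands. The model tag below matches what the Ollama library currently lists, but tags do change, so verify yours with `ollama list`:

```shell
# Download the coding-optimized model (a multi-gigabyte download; be patient)
ollama pull qwen3-coder:30b

# Quick sanity check: run a one-off prompt against the local model
ollama run qwen3-coder:30b "Explain what a WordPress shortcode is in one sentence."
```

Once the pull completes, Ollama serves the model locally (on port 11434 by default), which is the endpoint Goose will talk to in the next step.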
One of the standout features of this setup is that your AI runs locally on your machine, eliminating the need to send data to the cloud. This not only ensures privacy but also showcases the power of local AI.
Installing Goose and Taking It for a Spin
With Ollama and Qwen3-coder in place, it was time to install Goose. The installation itself was straightforward, but configuring Goose to use Ollama as its provider was a bit trickier, since it meant wiring the two applications together.
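For reference, here's roughly what the Goose-to-Ollama wiring looks like. I went through the interactive `goose configure` command, but the result lands in `~/.config/goose/config.yaml`; the key names below are a sketch of that file and may differ across Goose versions, so treat them as a starting point rather than gospel:

```yaml
# ~/.config/goose/config.yaml (sketch; key names may vary by Goose version)
GOOSE_PROVIDER: ollama
GOOSE_MODEL: qwen3-coder:30b
# Ollama's default local endpoint; only needed if yours differs
OLLAMA_HOST: localhost:11434
```

The main gotcha for me was simply making sure Ollama was up and running before launching Goose.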
Once configured, I gave Goose a test run by asking it to build a simple WordPress plugin. Unfortunately, it failed on the first attempt, and even after explaining the issue, it struggled on the second and third tries. However, by the fifth round, Goose finally got it right, and it was quite proud of its achievement!
First Impressions and Performance
I must admit, I was a bit disappointed that Goose needed five attempts to pass such a simple test. In contrast, other free chatbots, like Grok and Gemini (before Gemini 3), aced it on the first try. But here's where it gets interesting: agentic coding tools like Claude Code and Goose work directly on the source code, so each correction builds on the previous attempt and improves the codebase.
Performance-wise, I was impressed. Running this setup on my M4 Max Mac Studio with 128GB of RAM, I found the overall performance to be quite good. I didn't notice a significant difference in prompt turnaround compared to cloud-based products like Claude Code and OpenAI Codex, which rely on massive AI infrastructures.
Final Thoughts and Future Plans
These are just my initial impressions, and I plan to put this free solution through its paces with a larger project to truly evaluate its capabilities. Stay tuned for that analysis!
Have you tried running a coding-focused LLM locally with tools like Goose, Ollama, or Qwen? What was your experience like, and what hardware did you use? If you've used cloud options like Claude or OpenAI Codex, how does local performance compare? I'd love to hear your thoughts in the comments below. Let's discuss and compare notes!