By Samir Gurung

Trying Claude Code for free using Ollama

Lessons learned from running local AI on limited hardware πŸ’»πŸ€–

I recently discovered that it’s possible to run Claude Code locally and offline using Ollama β€” instantly exciting for private, local AI development.

Then reality kicked in.

My machine: MacBook Air M2 (256GB SSD, 8GB RAM)

Free storage at the time: ~3GB β€” not nearly enough to experiment with large models.

While searching for a workaround, I found a cleanup tool called Mole. Clearing caches, unused files, and system clutter freed up ~22GB β€” a big win. With enough disk space, I installed Ollama and downloaded gpt-oss:20b (~13GB).
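Before pulling a multi-gigabyte model, it’s worth sanity-checking free disk from the standard library. A minimal sketch β€” the ~13GB model size comes from this post, and the 5GB safety margin is my own assumption:

```python
import shutil

MODEL_GB = 13    # approximate download size of gpt-oss:20b
MARGIN_GB = 5    # headroom for caches and temp files (assumption)

free_gb = shutil.disk_usage("/").free / 1e9
enough = free_gb >= MODEL_GB + MARGIN_GB
print(f"free: {free_gb:.1f} GB -> {'ok to pull' if enough else 'clean up first'}")
```

Running this before the cleanup would have flagged the problem immediately: ~3GB free against an ~18GB requirement.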

Installation went smoothly.

Running it with Claude Code? πŸ’₯ My laptop crashed.

The real bottleneck was clear:

πŸ‘‰ An 8GB RAM MacBook Air cannot handle a 20B parameter model, especially for code-heavy workloads.
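The back-of-envelope arithmetic makes the crash unsurprising. The numbers below are rough assumptions, not measurements β€” β‰ˆ0.65 bytes per parameter is inferred from the ~13GB download, and real inference needs KV cache and OS memory on top:

```python
PARAMS = 20e9               # 20B parameters
BYTES_PER_PARAM = 0.65      # rough quantized footprint (assumption)
RAM_GB = 8                  # MacBook Air M2 in this post

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB vs {RAM_GB} GB RAM")
# the weights by themselves already exceed total RAM
```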

Key takeaways:

Storage enables installation β€” RAM enables execution

Bigger models β‰  better experience on constrained hardware

Local AI is powerful, but model size must match your system

Cleanup tools can unlock possibilities you didn’t know you had

This hands-on failure taught me more than any spec sheet ever could.

Lesson for fellow devs experimenting with local AI:

check your hardware limits before going big πŸš€
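One crude way to check those limits up front is to estimate the largest quantized model your RAM can comfortably hold. This is only a rule-of-thumb sketch β€” the 60% RAM budget and 0.65 bytes per parameter are assumptions, not benchmarks:

```python
def max_params_billions(ram_gb, bytes_per_param=0.65, ram_budget=0.6):
    """Largest ~4-bit-quantized model (billions of params) that fits comfortably."""
    return ram_gb * ram_budget / bytes_per_param

print(f"8 GB RAM -> ~{max_params_billions(8):.0f}B params")  # roughly a 7B model
```

By this estimate, an 8GB machine tops out around 7B parameters β€” far below the 20B model I tried to run.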

Resources:

Ollama + Claude Code
Mole
Published on February 13, 2026