Claude Code
Trying Claude Code for free using Ollama

Lesson learned from running local AI on limited hardware
I recently discovered that it's possible to run Claude Code locally and offline using Ollama, which was instantly exciting for private, local AI development.
Then reality kicked in.
My machine: MacBook Air M2 (256GB SSD, 8GB RAM)
Free storage at the time: ~3GB, not nearly enough to experiment with large models.
While searching for a workaround, I found a cleanup tool called Mole. Clearing caches, unused files, and system clutter freed up ~22GB, a big win. With enough disk space, I installed Ollama and downloaded gpt-oss:20b (~13GB).
Installation went smoothly.
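For anyone following along, the download step looks roughly like this (a sketch assuming Ollama is already installed via the macOS app or Homebrew; the model tag is the one from this post):

```shell
# Check free disk space first - the gpt-oss:20b download is ~13GB
df -h /

# Pull the model from the Ollama registry
ollama pull gpt-oss:20b

# Sanity-check that it runs at all before wiring it into anything else
ollama run gpt-oss:20b "Say hello in one sentence."
```

These commands need a running Ollama daemon, so treat them as a transcript of the steps rather than a script to paste blindly.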
Running it with Claude Code? My laptop crashed.
The real bottleneck was clear:
An 8GB RAM MacBook Air cannot handle a 20B-parameter model, especially for code-heavy workloads.
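A back-of-envelope calculation makes the crash unsurprising: even aggressively quantized, a 20B-parameter model's weights alone rival the machine's total RAM. This sketch uses assumed quantization levels and counts weights only, ignoring KV cache and runtime overhead:

```python
# Back-of-envelope memory estimate for a 20B-parameter model.
# Assumptions: weights only; real usage adds KV cache, activations,
# and OS overhead on top of these numbers.
PARAMS = 20e9  # 20 billion parameters
GIB = 1024**3

for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gib = PARAMS * bytes_per_param / GIB
    print(f"{label}: ~{weights_gib:.1f} GiB of weights")

# Even the 4-bit figure (~9.3 GiB) exceeds an 8GB machine's total RAM
# before the OS or the inference runtime take their share.
```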
Key takeaways:
Storage enables installation; RAM enables execution
Bigger models ≠ a better experience on constrained hardware
Local AI is powerful, but model size must match your system
Cleanup tools can unlock possibilities you didn't know you had
This hands-on failure taught me more than any spec sheet ever could.
Lesson for fellow devs experimenting with local AI:
check your hardware limits before going big
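One quick way to act on that advice: compare your machine's physical RAM against a model's estimated footprint before pulling it. A portable stdlib sketch; the 4-bit weights-only estimate is my assumption, not a vendor figure:

```python
import os

# Total physical RAM via POSIX sysconf (works on macOS and Linux)
total_ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
total_ram_gib = total_ram_bytes / 1024**3

# Assumed footprint: 20B parameters at 4-bit quantization, weights only
model_gib = 20e9 * 0.5 / 1024**3

print(f"RAM: {total_ram_gib:.1f} GiB, model needs: ~{model_gib:.1f} GiB")
if model_gib > total_ram_gib * 0.75:  # leave headroom for the OS and cache
    print("This model is likely too big for this machine.")
```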
Resources:
Ollama + Claude Code
Mole