The author argues that Ollama, despite its popularity for running local LLMs, has systematically obscured its reliance on llama.cpp, misled users about model capabilities, and prioritized venture-capital-driven growth over its open-source mission. Citing misleading model naming, closed-source components, and unnecessary complexity that locks users into Ollama's ecosystem while delivering worse performance than the underlying technology it depends on, the author recommends switching to alternatives such as llama.cpp directly.