Open source is catching up faster than anyone expected
The gap between closed and open models is shrinking every month.
Six months ago, open-source models were a novelty. Fun to play with, not serious enough for production. That’s changed.
Llama 3 is genuinely good. Mistral is punching well above its weight. DeepSeek came out of nowhere and embarrassed labs with ten times its funding. The gap between the best closed models and the best open ones is now measured in months, not years.
Why this matters
If you’re building AI products, the calculus has shifted. Anything serious used to require an API key for a frontier model. Now you can self-host something competitive on decent hardware, and that changes the economics completely.
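To make "changes the economics" concrete, here is a toy back-of-envelope comparison of metered API pricing versus renting a GPU to serve an open model yourself. Every number in it (workload size, per-token price, GPU rate, throughput) is an illustrative assumption, not a quoted price; the point is the shape of the calculation, not the figures.

```python
# Toy cost sketch: metered API vs. self-hosted inference.
# ALL figures below are ILLUSTRATIVE ASSUMPTIONS, not real prices.

def api_cost(tokens: int, price_per_million: float) -> float:
    """Cost of serving `tokens` through a metered API."""
    return tokens / 1_000_000 * price_per_million

def self_host_cost(tokens: int, gpu_hourly: float, tokens_per_second: float) -> float:
    """Cost of serving the same tokens on a rented GPU at a given throughput."""
    hours = tokens / tokens_per_second / 3600
    return hours * gpu_hourly

monthly_tokens = 500_000_000                              # assumed workload
api = api_cost(monthly_tokens, price_per_million=10.0)    # assume $10 / 1M tokens
hosted = self_host_cost(monthly_tokens,
                        gpu_hourly=2.0,                   # assume $2 / GPU-hour
                        tokens_per_second=1000.0)         # assume 1k tok/s sustained

print(f"API:       ${api:,.0f}/month")
print(f"Self-host: ${hosted:,.0f}/month")
```

Under these made-up inputs the self-hosted path comes out roughly an order of magnitude cheaper, which is the dynamic the post is pointing at. Real workloads add utilization, ops staffing, and quality trade-offs that this sketch deliberately ignores.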
The real question
How long before an open model matches GPT-4 across the board? My guess is we’re already closer than OpenAI would like. The next Llama release might close the gap entirely for most practical use cases.
The companies charging premium API prices should be nervous. The moat is getting shallower every quarter.