AI digest: Security agents and Pentagon politics
OpenAI launches a security-scanning agent, Anthropic fights its Pentagon supply chain designation, and Microsoft ships a compact multimodal reasoning model.
Big week for AI security tools and some proper drama between Anthropic and the US military.
OpenAI’s Codex Security hunts vulnerabilities automatically
OpenAI launched Codex Security, an AI agent that scans codebases for security flaws and suggests fixes. It’s already found vulnerabilities in OpenSSH and Chromium during testing. This feels like the first proper AI security tool that might actually save developers time rather than just flagging everything as suspicious.
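Codex Security's internals aren't public, but the general shape of an automated code scanner — walk the syntax tree, match risky patterns, attach a suggested fix — can be sketched in a few lines. Everything below (the pattern table, the `scan_source` helper) is an illustrative toy, not OpenAI's tooling or API.

```python
import ast

# Toy static checks -- illustrative only, not how Codex Security works.
# Each entry maps a risky builtin to (problem description, suggested fix).
FINDINGS = {
    "eval": ("eval() on dynamic input can execute arbitrary code",
             "use ast.literal_eval() or an explicit parser"),
    "exec": ("exec() can execute arbitrary code",
             "restructure to avoid dynamic code execution"),
}

def scan_source(source: str) -> list[tuple[int, str, str]]:
    """Return (line, problem, suggested_fix) for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Direct calls to risky builtins like eval()/exec()
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in FINDINGS):
            problem, fix = FINDINGS[node.func.id]
            findings.append((node.lineno, problem, fix))
        # Any call passing shell=True is a classic injection risk
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell" and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    findings.append((node.lineno,
                                     "shell=True enables shell injection",
                                     "pass an argument list with shell=False"))
    return findings

snippet = "import subprocess\nsubprocess.run(user_cmd, shell=True)\nresult = eval(data)\n"
for line, problem, fix in scan_source(snippet):
    print(f"line {line}: {problem} -> suggested fix: {fix}")
```

A real agent layers a language model on top of signals like these to filter false positives and draft patches, which is where the "save developers time" claim lives or dies.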
Anthropic takes on the Pentagon after supply chain risk designation
The Pentagon officially designated Anthropic a supply chain risk after contract negotiations collapsed over the question of military control of its AI models. CEO Dario Amodei announced the company is mounting a legal challenge. Meanwhile, Claude is apparently adding over a million users daily and outpacing ChatGPT installs, so the Pentagon spat might be helping more than hurting.
Microsoft’s Phi-4 brings compact multimodal reasoning
Microsoft released Phi-4-reasoning-vision-15B, a 15B parameter model that handles both vision and text reasoning tasks. It’s particularly strong at mathematical reasoning and understanding user interfaces. Compact models that actually work are still the holy grail, especially for edge deployment.
Google ships TensorFlow 2.21 with production-ready LiteRT
TensorFlow 2.21 graduates LiteRT from preview to production, officially replacing TensorFlow Lite for edge inference. The update includes better GPU performance and NPU acceleration. Not flashy, but this is the infrastructure work that actually gets models running on devices.