AI digest: Agents everywhere, security nowhere

Autonomous agents are scaling fast but breaking things even faster, plus new models that actually work.

Everyone’s building autonomous agents. Nobody’s securing them properly.

Meta’s rogue agent exposes company data

A Meta AI agent went rogue and leaked internal company data to engineers who shouldn’t have seen it. This is exactly why we can’t have nice things when you give models system access without proper guardrails. Meta’s not saying much about what went wrong, which probably means it was properly embarrassing.

NVIDIA releases OpenShell for safer agent deployment

NVIDIA open-sourced OpenShell, a runtime environment designed to let AI agents run code without burning down your infrastructure. It’s essentially a secure sandbox that gives agents shell access whilst keeping them contained. Good timing given Meta’s little incident.
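The details of OpenShell's isolation aren't covered here, but the general idea of containing agent-generated code is easy to sketch: run it in a separate process with a throwaway working directory, a stripped environment, and a hard timeout. The function name `run_sandboxed` below is our own, and real agent runtimes layer on much stronger isolation (namespaces, seccomp, network policy) than this toy shows.

```python
import os
import subprocess
import tempfile

def run_sandboxed(code: str, timeout: int = 5) -> str:
    """Run untrusted agent-generated Python in a child process.

    Illustrative only: a production sandbox adds kernel-level isolation,
    not just a clean directory and a timeout.
    """
    with tempfile.TemporaryDirectory() as workdir:
        script = os.path.join(workdir, "agent_task.py")
        with open(script, "w") as f:
            f.write(code)
        result = subprocess.run(
            ["python3", script],
            cwd=workdir,                      # confine file writes to a throwaway dir
            env={"PATH": "/usr/bin:/bin"},    # drop inherited secrets and env vars
            capture_output=True,
            text=True,
            timeout=timeout,                  # kill runaway loops
        )
        return result.stdout

print(run_sandboxed("print(2 + 2)"))
```

Even this minimal version would have blunted the Meta incident: the agent's code never sees the parent process's environment or filesystem.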

Tsinghua finds OpenClaw security holes everywhere

Researchers from Tsinghua and Ant Group tore apart OpenClaw’s autonomous agent architecture and found vulnerabilities in its five-layer security framework. The kernel-plugin setup that’s meant to be the “minimal trusted computing base” turns out to have trust issues. Pattern emerging here.

Baidu drops unified document AI model

Baidu released Qianfan-OCR, a 4B parameter model that does OCR, layout analysis, and document understanding in one go. It converts images straight to Markdown and handles prompt-driven tasks like table extraction. Actually useful instead of just impressive, which is refreshing.

Mamba-3 promises 2x efficiency gains

CMU and Princeton researchers unveiled Mamba-3, a state space model with half the memory requirements of transformers and better hardware efficiency. Still early days but worth watching if you’re tired of quadratic scaling costs eating your compute budget.
