Back to Blog
Security·

What NemoClaw Doesn't Protect You From

NVIDIA announced NemoClaw at GTC on Monday. It's a genuine step forward for enterprise AI agent deployment. Here's what it doesn't cover, and why that gap matters.

The infrastructure layer

NemoClaw wraps OpenClaw with OpenShell, a runtime that sits between your agent and its environment. It enforces network policies, controls what the agent can connect to externally, and applies privacy rules your organisation defines. Agents get the access they need without being able to do things you haven't approved.

That's the infrastructure layer. It's important. It's also not the whole problem.

What the rest of the market is solving

NeMo Guardrails (also NVIDIA, a separate product) lets you define conversational rails: topics the agent won't discuss, tone it has to maintain, dialogue flows it has to follow. Good for chatbots. Not really designed for agents that go off and fetch things.

LLM Guard is open source and self-hosted. Pattern matching on known attack signatures. The catch is that sophisticated injection attacks are written to avoid signature detection.

Lakera Guard is probably the most capable tool in this space right now. Real-time screening, strong jailbreak detection, acquired by Check Point for roughly £300M last year. It works. The constraint is that every call sends data to a cloud API, which rules it out for a lot of regulated industries.

Robust Intelligence, now part of Cisco, is an enterprise AI governance platform. Useful for risk assessment and red-teaming at the programme level. Not a runtime tool.

The attack none of them stop

Your agent is summarising documents from a shared drive. It opens a PDF. Inside that PDF, written for the model rather than the person who uploaded it, is an instruction to exfiltrate everything it retrieves before completing the task.

The network policy doesn't fire. The dialogue rails don't fire. The pattern matcher either misses it or produces so many false positives it becomes useless.

This is indirect prompt injection. It arrives inside content your agent is supposed to read. Every capability you add to an agent, document access, web browsing, API calls, is another way for this to reach it.

What Sentinel does

Sentinel sits between your agent and whatever it's reading. Documents, web pages, tool outputs all pass through before reaching the model context. It detects injected instructions, jailbreak attempts, and secrets or PII leaking in either direction. It runs locally. Nothing leaves your stack.

NemoClaw secures the infrastructure. Sentinel checks the mail. Both matter, and they don't overlap.

Try Sentinel

From £5/month. Under ten minutes to integrate. If you're evaluating security tooling for a production deployment, there's a contact form on the site.