On March 24, 2026, Sonatype researcher Callum McMahon discovered that LiteLLM versions 1.82.7 and 1.82.8 on PyPI had been compromised. The tampered packages contained a credential stealer and malware dropper designed to exfiltrate API keys, SSH keys, cloud tokens, and CI/CD secrets to attacker-controlled infrastructure.
LiteLLM is a unified interface for switching between LLM providers — OpenAI, Anthropic, Google, and others. Its position in the AI stack makes it a high-value target: it sits directly between your application and every API key you've configured.
This is the kind of attack nono was built to prevent.
The Attack Surface
The trojanised package exploited two capabilities that most applications have by default:
- Unrestricted filesystem access — read SSH keys from `~/.ssh`, cloud credentials from `~/.aws` and `~/.config/gcloud`, and environment variables containing API tokens
- Unrestricted network access — exfiltrate stolen data to attacker-controlled servers, download second-stage payloads
A Python application using litellm legitimately needs to reach LLM API endpoints. It does not need to read your SSH keys or connect to arbitrary domains. But without a sandbox, nothing enforces that distinction.
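To make that default concrete, here is an illustrative Python sketch — not the actual malware — of how much an unsandboxed dependency can enumerate with ordinary process privileges. The specific paths and environment-variable prefixes are examples, not a complete list:

```python
import os
from pathlib import Path

# Illustrative only: the kind of default access a trojanised dependency
# inherits from the host process. This is not the actual malware.
SECRET_PATHS = [
    Path.home() / ".ssh" / "id_rsa",
    Path.home() / ".aws" / "credentials",
    Path.home() / ".config" / "gcloud" / "application_default_credentials.json",
]
SECRET_ENV_PREFIXES = ("OPENAI_", "ANTHROPIC_", "AWS_", "GITHUB_")

def reachable_secrets() -> dict:
    """Report which secret files and env vars this process could read."""
    files = [str(p) for p in SECRET_PATHS if p.is_file() and os.access(p, os.R_OK)]
    env = [k for k in os.environ if k.startswith(SECRET_ENV_PREFIXES)]
    return {"files": files, "env_vars": env}

if __name__ == "__main__":
    print(reachable_secrets())
```

Run outside a sandbox, this trivially succeeds; the point of the sections below is that under nono, the same calls fail at the kernel boundary.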
Sandboxing with nono
nono applies OS-enforced isolation using Landlock (Linux) and Seatbelt (macOS). Once the sandbox is applied, unauthorised operations are structurally impossible — no amount of malicious code inside the process can bypass kernel-level restrictions.
Network Allowlisting
nono's network proxy acts as a gateway between the sandboxed process and the outside world. The minimal network profile restricts outbound connections to LLM API endpoints only:
```bash
nono run --network-profile minimal -- python my_app.py
```
The sandboxed process can reach api.openai.com, api.anthropic.com, api.mistral.ai, and other legitimate LLM providers. Connections to anything else — including attacker C2 infrastructure — are blocked. The malware's exfiltration channel is gone.
If your application needs additional hosts, add them explicitly:
```bash
nono run --network-profile minimal \
  --allow-domain huggingface.co \
  -- python my_app.py
```
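The precise matching semantics are nono's to define, but the core idea of an outbound allowlist can be sketched in a few lines. This is a hypothetical `is_allowed` helper assuming exact-host or subdomain matching — not nono's actual implementation:

```python
# Hypothetical allowlist check: a host passes only if it equals an
# allowed domain or is a subdomain of one. Everything else is refused.
ALLOWED_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "api.mistral.ai",
    "huggingface.co",  # added via --allow-domain
}

def is_allowed(host: str, allowed: set[str] = ALLOWED_DOMAINS) -> bool:
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in allowed)
```

Note that suffix matching alone is not enough — `api.openai.com.attacker.example` must not pass, which is why the check requires the boundary dot before the allowed domain.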
Filesystem Deny Rules
nono's filesystem policy blocks access to sensitive directories by default:
- `~/.ssh` — SSH keys
- `~/.aws`, `~/.config/gcloud` — cloud credentials
- `~/.gnupg` — GPG keys
- `/etc/shadow`, `/etc/passwd` — system credentials
The credential stealer finds nothing to steal.
The write side is equally restricted. nono confines writes to the working directory and /tmp — a malware dropper attempting to install a persistent payload in ~/.local/bin or /usr/local/bin gets EPERM.
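The enforcement itself happens in the kernel via Landlock and Seatbelt, but the policy logic is easy to reason about. Here is an illustrative Python sketch of deny/allow path matching — the `read_blocked` and `write_blocked` helpers and the exact path lists are assumptions for illustration, not nono's policy engine:

```python
from pathlib import Path

def under_any(path: str, roots: list[str]) -> bool:
    """True if path equals or lies beneath any of the given roots."""
    p = Path(path).expanduser()
    expanded = [Path(r).expanduser() for r in roots]
    return any(p == root or root in p.parents for root in expanded)

# Illustrative policy: deny reads of credential stores, confine writes
# to /tmp (nono also allows the working directory; omitted here).
DENY_READ = ["~/.ssh", "~/.aws", "~/.config/gcloud", "~/.gnupg", "/etc/shadow"]
ALLOW_WRITE = ["/tmp"]

def read_blocked(path: str) -> bool:
    return under_any(path, DENY_READ)

def write_blocked(path: str) -> bool:
    return not under_any(path, ALLOW_WRITE)
```

The difference under nono is that this decision is not made by cooperating Python code — it is made by the kernel, so malicious code in the same process cannot route around it.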
Credential Isolation
nono supports a phantom token pattern: real API keys are loaded from the system keystore into the proxy process (which runs outside the sandbox). The sandboxed application connects through localhost and never sees the actual credentials.
First, store your API keys in the system keystore:
macOS:
```bash
security add-generic-password -s "nono" -a "openai_api_key" -w "sk-xxx"
security add-generic-password -s "nono" -a "anthropic_api_key" -w "sk-xxx"
```
Linux:
```bash
echo -n "sk-xxx" | secret-tool store --label="nono: openai_api_key" \
  service nono username openai_api_key target default
echo -n "sk-xxx" | secret-tool store --label="nono: anthropic_api_key" \
  service nono username anthropic_api_key target default
```
Then reference them via --proxy-credential:
```bash
nono run --network-profile minimal \
  --proxy-credential openai \
  --proxy-credential anthropic \
  -- python my_app.py
```
Even if malicious code inspects every environment variable and config file inside the sandbox, the real tokens aren't there. The proxy injects them into outbound requests on behalf of the application.
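The substitution step can be sketched as a pure function on request headers. The placeholder name and the `inject_credential` helper below are hypothetical — they illustrate the phantom token pattern, not nono's internal API:

```python
# Illustrative phantom-token swap performed proxy-side, outside the
# sandbox. The sandboxed app only ever holds PLACEHOLDER; the proxy
# replaces it with the real key fetched from the system keystore.
PLACEHOLDER = "nono-phantom-token"

def inject_credential(headers: dict, real_keys: dict, provider: str) -> dict:
    """Return a copy of headers with the phantom token replaced."""
    out = dict(headers)
    auth = out.get("Authorization", "")
    if PLACEHOLDER in auth:
        out["Authorization"] = auth.replace(PLACEHOLDER, real_keys[provider])
    return out
```

Because the swap happens in the proxy process, the real key never exists in the sandboxed process's memory, environment, or config files — there is nothing for a stealer to find.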
Putting It Together
A single command combines all three layers:
```bash
nono run --profile developer \
  --network-profile minimal \
  --proxy-credential openai \
  --proxy-credential anthropic \
  -- python my_app.py
```
This gives you:
| Layer | What it blocks |
|---|---|
| Filesystem sandbox | Reading SSH keys, cloud tokens, system credentials |
| Network allowlist | Exfiltration to attacker domains, payload downloads |
| Credential isolation | API key theft from environment or config files |
Defense in Depth, Not Detection
Traditional supply chain defenses focus on detecting tampered packages — scanning for known malware signatures, monitoring package registry activity, pinning dependency versions. These are valuable but reactive. They fail against zero-day incidents like this one, where the malicious versions were live on PyPI before anyone knew.
nono takes a different approach: assume the code inside the sandbox might be hostile, and enforce that it can only do what it legitimately needs to do. A litellm-based application needs to call LLM APIs. It does not need to read ~/.ssh/id_rsa or connect to attacker-controlled infrastructure. nono makes that distinction enforceable at the kernel level.
The malicious litellm code runs. Its credential stealer finds no credentials. Its exfiltration attempt hits a wall. Its malware dropper can't write outside the project directory. The application continues to work normally, calling the LLM APIs it was designed to use.
Supply chain attacks succeed because we give every dependency the same privileges as our own code. nono changes that equation. For a hands-on walkthrough, see how we sandboxed a GitHub bot with all three layers. The project is on GitHub — star it if you want to follow development.
Next steps
- OS Sandbox — How Landlock and Seatbelt enforcement works under the hood
- Credential Injection — The phantom token pattern in detail
- Wrapping a GitHub Bot — End-to-end walkthrough of sandboxing an LLM agent
- Docs — Full CLI reference
- GitHub — Source code