# Python and Rust Have the Same Supply Chain Problem as NPM
Last post I walked through the threat model for supply chain attacks and dug into the NPM ecosystem specifically: postinstall scripts, npm ci, pnpm’s release-age cooldown. The same structural problems exist in Python and Rust, but the failure modes are different and the tooling has evolved in some surprising directions. Worth understanding both, because if you write any backend code in 2026 you’re probably touching at least one of these ecosystems.
## Python: setup.py Is a Remote Code Execution Primitive
The thing most Python developers don’t appreciate is that pip install runs arbitrary code by default. Not after install. During install. If a package ships as a source distribution with a setup.py, that file is executed in a Python interpreter the moment pip needs to build it. Whatever the author wrote, including reading ~/.aws/credentials, scraping environment variables, or opening a reverse shell, runs as your user with full filesystem access. Pre-built wheels skip the build step entirely, which is one reason to prefer them.
This is the part that confuses people coming from other ecosystems: venv and virtualenv don’t help. They isolate Python package versions to avoid conflicts. They are not a security boundary. A package installed inside a virtualenv has the exact same privileges as the user who ran pip install. None of this is a bug, exactly. It’s just an artifact of setup.py being a regular Python script that pip has always been willing to execute.
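To make that concrete, here is a hypothetical sketch of the kind of payload a malicious setup.py can carry. Every name here is invented for illustration; the point is that none of it needs a vulnerability, because pip runs the file as ordinary Python.

```python
# Hypothetical sketch of an install-time payload. pip executes setup.py as
# ordinary Python, so code like this runs as the installing user the moment
# the sdist is built. All names are invented for illustration.
import os
from pathlib import Path

def harvest() -> dict:
    """Collect anything that looks like a secret from the environment."""
    loot = {k: v for k, v in os.environ.items() if "KEY" in k or "TOKEN" in k}
    creds = Path.home() / ".aws" / "credentials"
    if creds.exists():
        loot["aws_credentials"] = creds.read_text()
    return loot  # a real payload would POST this to an attacker's server

# In a malicious package this would run at module level, before setup():
# harvest()
# from setuptools import setup
# setup(name="totally-legit-utils", version="0.1.0")
```

Running this inside a virtualenv changes nothing: the interpreter is isolated, the user’s home directory is not.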
The defense-in-depth stack for Python looks like this:
**Stop using pip.** I mean it. pip is the worst package manager in mainstream use today and it is the single biggest reason Python’s supply chain story is a disaster. It has no native lockfile. requirements.txt is a shopping list, not a lockfile; it tells pip what to fetch, not what you actually got last time. Run pip install -r requirements.txt twice on two different days and you can get two different dependency trees, because pip resolves transitive deps fresh every time against whatever happens to be on PyPI in that moment. Builds aren’t reproducible. Hashes aren’t verified by default. There’s no separation between “what I asked for” and “what was actually resolved.”
Every other ecosystem solved this a decade ago. npm has package-lock.json. Cargo has Cargo.lock. Bundler has Gemfile.lock. pip has vibes.
The --require-hashes flag exists, technically, but it’s duct tape on a broken design. You have to generate the hashes with a separate tool (pip-tools), maintain them by hand, and remember to pass the flag on every install. Nobody does this in practice. The Python Packaging Authority spent fifteen years insisting pip was fine while every other community built proper lockfile-based managers.
**Use uv or Poetry.** Both produce real lockfiles with SHA-256 hashes for every direct and transitive dependency, both make installs reproducible by default, and both are dramatically faster than pip. uv in particular is the obvious default for new projects in 2026: it’s a drop-in replacement that’s roughly 10-100x faster and treats the lockfile as a first-class artifact instead of an afterthought. Hash verification isn’t a flag you have to remember. It’s how the tool works.
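The check itself is simple. Here is a minimal sketch of the per-artifact verification a lockfile-based installer performs, assuming (for illustration) a lockfile entry of the form `sha256:<hex>`; the real tools verify every downloaded file before unpacking anything.

```python
# Simplified sketch of lockfile hash verification. The entry format
# "sha256:<hex digest>" is assumed for illustration.
import hashlib

def verify_artifact(data: bytes, pinned: str) -> bool:
    """Return True only if data matches the digest recorded in the lockfile."""
    algo, _, expected = pinned.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported hash algorithm: {algo}")
    return hashlib.sha256(data).hexdigest() == expected

wheel = b"pretend these are wheel bytes"
pin = "sha256:" + hashlib.sha256(wheel).hexdigest()  # recorded at lock time
assert verify_artifact(wheel, pin)            # registry served the pinned bytes
assert not verify_artifact(b"tampered", pin)  # silent tampering is caught
```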
This doesn’t protect you from a malicious package you pinned on day one. But it does slam the door on silent registry tampering, makes “what’s actually deployed?” a question with an answer, and gets you out of the pip swamp.
**pip-audit for known vulnerabilities.** Scans your environment or requirements file against the OSV database, PyPA advisories, and GitHub advisories. Run it in CI. Combined with a real lockfile you get a tight loop: pin exact versions, scan those versions for CVEs, fail the build if anything critical shows up.
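As a sketch, the CI step can be this small (GitHub Actions syntax assumed; pip-audit exits non-zero when it finds a known vulnerability, which is what fails the build):

```yaml
# Hypothetical CI step: scan pinned requirements against OSV/PyPA advisories.
- name: Audit dependencies
  run: |
    pip install pip-audit
    pip-audit -r requirements.txt
```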
**Trusted Publishing (OIDC).** If you maintain a package on PyPI, get rid of your long-lived API token and switch to OIDC-based publishing. Your CI runner generates ephemeral, short-lived tokens scoped to a specific repository, branch, and workflow. Leaked PyPI tokens have been the source of multiple high-profile compromises. Trusted Publishing makes the credential effectively un-leakable because it doesn’t exist as a persistent secret.
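A minimal publishing workflow under Trusted Publishing looks roughly like this (GitHub Actions; assumes you have already registered the repository as a trusted publisher in your PyPI project settings):

```yaml
# Sketch of an OIDC-based publish job: no PYPI_TOKEN secret anywhere.
name: publish
on:
  release:
    types: [published]
jobs:
  pypi:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for the OIDC token exchange
    steps:
      - uses: actions/checkout@v4
      - run: |
          python -m pip install build
          python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The publish action exchanges the workflow’s OIDC identity for a short-lived PyPI token at publish time; there is simply nothing long-lived to leak.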
The thing I’d actually call out, though, is that none of the Python tooling addresses the setup.py execution problem at install time. Hash pinning verifies you got the right bytes. It doesn’t tell you those bytes aren’t malicious. For that you’re back to either sandboxing the install (Docker, devcontainers) or trusting the registry’s malware detection, which lags by hours to days.
## Rust: The Safety Guarantees Stop at the Compiler
Rust’s reputation for safety is real, but it’s a property of the compiled language, not the supply chain. The borrow checker doesn’t help you when the crate you’re depending on exfiltrates your SSH key during cargo build.
The mechanism is build.rs. Crates can include a build script that runs before the compiler, with full user privileges. Procedural macros do the same thing at compile time. In both cases, the code can read files, open network sockets, do whatever it wants. A malicious build.rs is effectively an unsandboxed unsafe block that bypasses code review because nobody reads build scripts. The Rust core team has been discussing sandboxing for years, but nothing has shipped.
This isn’t theoretical. Two examples from the last six months:
- **September 2025:** `faster_log` and `async_println` were caught scraping Ethereum and Solana private keys at runtime and exfiltrating them to Cloudflare Workers.
- **March 2026:** `chrono_anchor`, `dnp3times`, and `time-sync`, all masquerading as time utilities, were transmitting `.env` file contents to threat actors.
Both clusters used compromised GitHub OAuth credentials to push under legitimate-looking namespaces. crates.io authenticates via GitHub, so a phished GitHub account is a phished crates.io account.
The defensive tooling is actually better than what most ecosystems have:
| Tool | What it does |
|---|---|
| `cargo-audit` | Scans Cargo.lock against the RustSec Advisory Database. Run in CI. |
| `cargo-deny` | Lints the dependency graph. Block specific crates, enforce license policies, restrict registries. |
| `cargo-crev` | Decentralized “web of trust” where developers cryptographically sign crate reviews. Elegant, but heavy lift in practice. |
| `cargo-vet` | Mozilla’s pragmatic answer to crev. Centralized audit records per org, with the ability to import audits from peer orgs (Google, Mozilla, Embark) instead of re-auditing every transitive dep yourself. |
If you’re picking one to start with, cargo-audit is the easy baseline. It’s npm audit for Rust and you should be running it in CI yesterday. cargo-deny is the next step up. It lets you actually enforce policy, which is what you want once you’ve used cargo-audit long enough to be tired of triaging the same warnings.
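For a sense of what that policy enforcement looks like, here is a sketch of a starter `deny.toml`. The specific entries are placeholders, and the schema is abbreviated; check the cargo-deny book for the full set of keys.

```toml
# Hypothetical deny.toml: enforce policy instead of just reporting.
[advisories]
yanked = "deny"              # refuse crates the author has pulled

[licenses]
allow = ["MIT", "Apache-2.0"]

[bans]
multiple-versions = "warn"
deny = [{ name = "git2" }]   # placeholder: a crate you've decided to block

[sources]
unknown-registry = "deny"    # only crates.io and registries you allow
unknown-git = "deny"
```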
cargo-vet is the interesting one for any team beyond about five engineers. The insight is that you don’t actually need to audit every crate. You just need to know that someone you trust did. By importing audit records from Mozilla and Google, a small team can effectively delegate the audit work for hundreds of common dependencies without running anything themselves. It’s the closest thing the Rust ecosystem has to a working trust network, and it works because the cryptographic overhead lives at the org level instead of being pushed onto individual developers.
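In practice that delegation is a few lines of cargo-vet configuration. A sketch, using the Mozilla audits URL commonly shown in the cargo-vet docs (verify the current URL there before relying on it):

```toml
# supply-chain/config.toml (sketch): trust Mozilla's published audits.
[imports.mozilla]
url = "https://hg.mozilla.org/mozilla-central/raw-file/tip/supply-chain/audits.toml"
```

Running `cargo vet` then counts the imported audits toward your own policy, so only the crates nobody you trust has reviewed show up as needing attention.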
## The Pattern Across All Three Ecosystems
NPM, PyPI, and crates.io all share the same fundamental design flaw: package installation executes attacker-controlled code by default. NPM has postinstall. Python has setup.py. Rust has build.rs and proc macros. Different files, same problem.
The mitigations also rhyme. Lock your versions to specific hashes. Run an audit tool in CI. Where possible, prevent install-time execution entirely (--ignore-scripts, pre-built wheels, sandboxed build scripts when they finally land in Cargo). Where you can’t, isolate the install with devcontainers, ephemeral CI runners, anything that contains the blast radius when a dependency turns out to be hostile.
Next post I’ll get into the isolation side specifically: devcontainers, OrbStack, Landlock, and the practical question of how a solo developer with no security budget actually keeps their laptop from getting owned by an AI agent that just pip installed a hallucinated package name.
## Sources
- Securing Package Managers: Why NPM, PyPI, and Cargo Are High-Value Targets
- Defense in Depth: A Practical Guide to Python Supply Chain
- PyPI Security: How to Safely Install Python Packages
- Rust Supply Chain Security — Managing crates.io Risk
- crates.io: Malicious crates faster_log and async_println
- Five Malicious Rust Crates and AI Bot Exploit CI/CD Pipelines
- About RustSec Advisory Database
- cargo-vet FAQ
- Auditing Rust Crates Effectively (arXiv)
- Explore sandboxed build scripts — Rust Project Goals
I’d appreciate a follow. You can subscribe with your email below. The emails go out once a week, or you can find me on Mastodon at @[email protected].