A Supply Chain Attack Just Hit the Tool That Powers Most AI Applications

In five days, a single attacker compromised a vulnerability scanner, a code analysis platform, and the most widely used AI proxy in Python — used by NASA, Netflix, and Stripe. Here's what happened and what it means for businesses using AI tools.


Between March 19 and March 24, 2026, a single threat actor called TeamPCP pulled off one of the most surgical supply chain attacks the software industry has seen.

First, they compromised Trivy, a widely used open-source vulnerability scanner. Next, they used the stolen credentials to attack Checkmarx, a code analysis platform. Finally, using credentials harvested from that attack, they published poisoned versions of LiteLLM to PyPI, the main package repository for Python.

LiteLLM is the most widely used proxy for managing AI API connections in the Python ecosystem. Organizations including NASA, Netflix, Stripe, and NVIDIA use it. If your AI tools are built in Python and talk to multiple AI providers, there's a reasonable chance LiteLLM is somewhere in the stack.

Two poisoned versions — 1.82.7 and 1.82.8 — contained a credential stealer that harvested API keys and other secrets from any system that installed them.
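A quick way to tell whether a given environment picked up one of the poisoned releases is to compare the installed version against those two numbers. A minimal sketch using only the standard library (the version set below is taken from this article; treat the official advisory as the source of truth):

```python
from importlib import metadata

# Release numbers named in this article's incident; an official
# advisory should be treated as the authoritative list.
COMPROMISED_LITELLM = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """True if this LiteLLM version string matches a known-poisoned release."""
    return version in COMPROMISED_LITELLM

def installed_litellm_compromised() -> bool:
    """Check the litellm package installed in the current environment, if any."""
    try:
        return is_compromised(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # litellm isn't installed here, so nothing to flag
```

Running a check like this across your environments is cheap; remember that any system that ever installed a poisoned version needs its keys rotated, not just upgraded.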

How a Vulnerability Scanner Became the Entry Point

The attack is worth understanding because of how it chained together.

Trivy is a tool that companies use to scan their software for security vulnerabilities. It's supposed to make you safer. But Trivy's own CI/CD pipeline — the automated system that builds and publishes the tool — had credentials that could be stolen.

Once TeamPCP had those credentials, they could push malicious code through Trivy's build system. From there, they accessed credentials for other tools in the same pipeline. Each compromised tool gave them keys to the next one.

The chain ended at LiteLLM, where the payoff was enormous: access to AI API keys worth potentially millions of dollars in usage. Anyone running the poisoned LiteLLM versions was silently sending their API credentials to the attacker.

Why AI Tools Are Especially Vulnerable

AI applications have a dependency problem that traditional software doesn't.

A typical AI application might use LiteLLM to manage API connections, an embedding library for search, a vector database for storage, an orchestration framework for workflows, and a dozen smaller packages for data processing. Each of those packages has its own dependencies. Each dependency is a potential attack surface.
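To get a feel for the size of that attack surface, you can enumerate every installed distribution and the dependencies each one declares. A rough sketch using only the Python standard library:

```python
from importlib import metadata

# Map each installed distribution to the dependencies it declares.
# Every entry is a package someone has to trust, patch, and monitor.
surface = {
    dist.metadata["Name"]: (dist.requires or [])
    for dist in metadata.distributions()
}

print(f"{len(surface)} installed packages")
print(f"{sum(len(reqs) for reqs in surface.values())} declared dependency edges")
```

Even a modest AI project typically reports dozens of packages and hundreds of dependency edges, each one a place where an attack like this can enter.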

The speed of the AI ecosystem makes this worse. As we explored in Open-Source AI Agents Are Exciting — and Dangerous, the rush to adopt new packages often outpaces security review. New packages appear daily. Developers install them quickly because they need the functionality. Version pinning — locking to a specific, verified version — is often skipped because the tools are changing so fast.
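Version pinning is less work than it sounds. A sketch of what a pinned entry looks like in a pip requirements file (the version number and hash below are placeholders, not a recommendation):

```
# requirements.txt — pin the exact version you verified, plus its artifact hash.
# (Placeholder values; generate real hashes with a tool like pip-compile.)
litellm==1.82.6 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installing with `pip install --require-hashes -r requirements.txt` then refuses any artifact whose hash doesn't match: pinning means a newly published malicious release isn't pulled in automatically, and hash checking rejects any artifact that differs from the one you verified.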

The result is a supply chain that moves faster than most security practices can keep up with.

What This Means for Your Business

If your business uses AI tools — even indirectly through a vendor — here's what's practical:

Ask your vendors about their supply chain. If a company builds your AI-powered customer service bot, ask what tools they use under the hood. Ask whether they pin dependency versions. Ask how quickly they respond to security advisories. These aren't unreasonable questions — they're the same due diligence you'd do for any critical vendor.

Watch for credential exposure. We wrote about validation patterns that catch these issues automatically in Self-Validating AI Agents: The Feature That Changes Everything. If your business has API keys for AI services (OpenAI, Anthropic, Google), those keys are valuable targets. Treat them like passwords. Rotate them regularly. Don't store them in code repositories or configuration files that might be shared.
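One low-effort version of that advice is to keep keys in the environment rather than the codebase. A minimal sketch (the variable name is just an example):

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read an AI provider key from the environment instead of source code.

    Keys that never enter the repository can't leak through a git push,
    and rotating one becomes a deployment change rather than a code change.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; export it in the deployment environment "
            "instead of committing it to the repository"
        )
    return key
```

Failing loudly when the key is missing also catches a common deployment mistake: an application that silently falls back to a key baked into a config file.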

Understand the difference between "open source" and "secure." Open-source software is often excellent, but it's also maintained by small teams with limited security budgets. The Trivy team, which builds a security scanner used by thousands of companies, didn't rotate its CI credentials for five days after they were exposed. That's not negligence; that's a resource problem.

The Broader Pattern

This attack wasn't the first and won't be the last. The open-source AI supply chain is now critical enterprise infrastructure, but it's defended with the security posture of volunteer side projects.

That gap between how important these tools are and how they're protected is where attackers live. TeamPCP found it. Others will too.

For businesses, the takeaway is straightforward: the AI tools you use are only as secure as the weakest link in their supply chain. And right now, a lot of those links are thinner than anyone would like.

Related: How to Secure Your AI Agents Before They Become a Liability and Self-Validating AI Agents: The Feature That Changes Everything.
