
AppSec as attacker: Inside Trivy–LiteLLM

The perimeter isn't your firewall — it's your CI/CD pipeline. Here’s what to know about TeamPCP's supply chain attack.

Cascading supply chain attack

Security teams have spent decades hardening the network edge. Firewalls, WAFs, zero-trust network access — all pointing outward, watching for threats arriving through the front door. But this attack walked in through the back, wearing a badge it earned by being a security scanner.

TeamPCP didn't target a vulnerable application. They targeted the trust we place in our tooling. When Trivy — a premier open-source vulnerability scanner used by thousands — tells you a container is clean, you believe it. When a GitHub Action is published under an official account with 76 tagged releases, you trust it. The attackers understood this — and exploited it with precision.

The threat group didn't find a gap in your defenses. They became part of your defenses.

Here’s what you need to know about the TeamPCP supply chain attack — and why it matters.

The attack sequence

First came the breach. TeamPCP compromised Trivy, Aqua Security's open-source vulnerability scanner, force-pushing malware to all 76 of its official GitHub Action release tags (the labels that mark each published version), turning a trusted security scanner into the initial infection vector.

Then the harvest. Organizations running Trivy in their CI/CD pipelines had their secrets exfiltrated in plaintext via memory scraping: AWS keys, GCP tokens, PyPI credentials — gone.

Then the expansion. Using those stolen tokens, the attackers backdoored LiteLLM v1.82.8 on PyPI, embedding malware directly into the backbone of modern AI infrastructure.

Why LiteLLM? The force multiplier effect

The pivot from Trivy to LiteLLM wasn't random. It was strategic. LiteLLM is the universal proxy layer for LLM APIs — the connective tissue between AI applications and the models they call. Poison this layer, and you don't compromise one company. You compromise every team that uses it. LiteLLM also sits inside Open WebUI, the de facto standard web front end for AI chat.

With 95 million monthly downloads, a single compromised version reaches millions of environments within hours. And teams running LiteLLM are, by definition, managing high-value AI infrastructure: OpenAI API keys, Anthropic credentials, Vertex AI service accounts. The prize at the end of this chain wasn't some legacy corporate database — it was the keys to organizations' entire AI stacks.

A supply chain attack against an AI proxy is an attack against every model call your application makes — and every secret required to make it.

The stealth factor: A 20-year-old Python trick

What makes this attack technically sophisticated is the delivery mechanism. The attackers didn't use a malicious import that would trigger linters or dependency scanners. They used a .pth file.

Path configuration files are a Python feature dating to the early 2000s. Any line in a .pth file that begins with import is executed the moment the Python interpreter initializes — before a single line of your own code runs. Traditional scanners looking for suspicious entries in requirements.txt wouldn't flag it, because the execution vector isn't a package. It's a configuration file sitting quietly in Python's path.
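This startup hook is easy to reproduce. The sketch below is a harmless illustration (not the attackers' payload): it drops a .pth file into a scratch directory and asks Python's site module to process it — the same routine the interpreter runs against site-packages at startup:

```python
import os
import site
import tempfile

# A scratch directory standing in for site-packages.
demo_dir = tempfile.mkdtemp()

# A .pth file whose line begins with "import": the site module
# executes such lines instead of treating them as path entries.
with open(os.path.join(demo_dir, "innocent.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

# addsitedir() is what the interpreter does for real site-packages
# directories at startup: it processes every .pth file it finds.
site.addsitedir(demo_dir)

print(os.environ.get("PTH_DEMO"))  # the payload has already run
```

The environment variable here is a benign stand-in; in the real attack, that one line was an import of the credential-harvesting payload.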

The payload harvested cloud credentials, cryptocurrency wallet data (MetaMask, Exodus), and in environments with sufficient permissions, deployed privileged pods into Kubernetes clusters. The malware executed before a single line of your application ran. You never had a chance to catch it in flight.

What this means for AI infrastructure teams

The security community has understood supply chain risk in theory for years. SolarWinds made it visceral. The npm ecosystem's endless typosquatting made it routine. But the Trivy–LiteLLM cascade represents a new maturity level in attacker sophistication — a multi-stage campaign that used one trusted tool to compromise another, targeting specifically the AI infrastructure layer that most organizations have only recently built.

If your team is running LiteLLM, or anything like it, the questions you need to be asking are not just "are we patched?" They're structural: How do we verify the integrity of tools running in our CI/CD pipelines? How do we detect execution that happens before our application code? How are we monitoring for credential exfiltration that doesn't generate HTTP logs?

ReversingLabs Spectra Assure's machine learning (ML) models detected the malicious intent in the backdoored LiteLLM release, while traditional scanners missed the behavioral shift introduced by the .pth injection. Behavioral ML — not signature matching — is what closed this gap.

Immediate action items

  • Pin GitHub Actions by commit SHA, not tag. Tags are mutable — an attacker who controls the repo can move them. Replace uses: aquasecurity/trivy-action@v0.20.0 with the full SHA of the commit you've verified.

  • Audit .pth files in every Python environment. Run find / -name "*.pth" and review anything that doesn't originate from a package you explicitly installed. Any file with an import line warrants immediate investigation.

  • Rotate CI/CD secrets now if Trivy ran without SHA pinning. Assume any AWS keys, GCP tokens, or PyPI credentials that appeared in environment variables during a Trivy action run within the compromised window were exfiltrated.

  • Add behavioral ML scanning to your supply chain checks. Signature-based scanners missed this. The detection gap isn't a configuration problem — it's an architecture problem. Tools that analyze behavioral patterns rather than known-bad signatures are what's required.

  • Implement egress monitoring in your CI/CD runners. Runners that only need to talk to your artifact registry and cloud provider should not be making arbitrary outbound connections. Credential exfiltration means traffic somewhere — make sure you'd see it.
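The SHA-pinning item above can be spot-checked mechanically. The sketch below is a minimal illustration (the regex and function name are ours, not an official tool): it flags any uses: reference in a workflow file that is not pinned to a full 40-character commit SHA:

```python
import re

# A "uses:" ref counts as pinned only if it is a full 40-hex-char commit SHA.
PINNED = re.compile(r"^[0-9a-f]{40}$")
USES = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return action refs that use a mutable tag or branch instead of a SHA."""
    findings = []
    for action, ref in USES.findall(workflow_text):
        if not PINNED.match(ref):
            findings.append(f"{action}@{ref}")
    return findings

workflow = """
jobs:
  scan:
    steps:
      - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
      - uses: aquasecurity/trivy-action@v0.20.0
"""
print(unpinned_actions(workflow))  # → ['aquasecurity/trivy-action@v0.20.0']
```

Running a check like this in CI makes the pinning policy enforceable rather than aspirational.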
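The .pth audit can likewise be scripted. A minimal sketch, assuming a modern CPython where site.getsitepackages is available: it applies the same "line starts with import" test that the site module uses, so everything it flags would execute at interpreter startup:

```python
import site
from pathlib import Path

def suspicious_pth_lines(directories=None):
    """Return (file, line) pairs for .pth lines that execute code at startup."""
    dirs = directories or site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in dirs:
        path = Path(d)
        if not path.is_dir():
            continue
        for pth in path.glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                # The same test site.addpackage applies: only lines beginning
                # with "import " (or "import\t") are executed at startup.
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), line.strip()))
    return findings

for path, line in suspicious_pth_lines():
    print(f"{path}: {line}")
```

Note that some legitimate packages (setuptools and virtualenv, for example) ship import-line .pth files, so the goal is to review every hit against packages you knowingly installed, not to treat every match as malicious.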

Join RL's free Spectra Assure Community to leverage advanced binary analysis to discover the newest open-source threats and malicious packages like the ones from the TeamPCP campaign.
