Spectra Assure Free Trial
Get your 14-day free trial of Spectra Assure
Security teams have spent decades hardening the network edge. Firewalls, WAFs, zero-trust network access — all pointing outward, watching for threats arriving through the front door. But this attack walked in through the back, wearing a badge it earned by being a security scanner.
TeamPCP didn't target a vulnerable application. They targeted the trust we place in our tooling. When Trivy — a premier open-source vulnerability scanner used by thousands — tells you a container is clean, you believe it. When a GitHub Action is published under an official account with 76 tagged releases, you trust it. The attackers understood this — and exploited it with precision.
The threat group didn't find a gap in your defenses. They became part of your defenses.
Here’s what you need to know about the TeamPCP supply chain attack — and why it matters.
First came the breach. TeamPCP compromised Trivy, Aqua Security's open-source vulnerability scanner, force-pushing malware onto all 76 official tags of its GitHub Action (the git tags that mark released versions), turning a trusted security scanner into the initial infection vector.
Then the harvest. Organizations running Trivy in their CI/CD pipelines had their secrets exfiltrated in plaintext via memory scraping: AWS keys, GCP tokens, PyPI credentials — gone.
Then the expansion. Using those stolen tokens, the attackers backdoored LiteLLM v1.82.8 on PyPI, embedding malware directly into the backbone of modern AI infrastructure.
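Force-pushed tags are the crux of stage one: a tag like `@0.28.0` is mutable, so pinning an Action by tag offered no protection once the attackers rewrote it. Pinning to a full 40-character commit SHA does, because commit hashes are immutable. The sketch below is a rough workflow lint under that assumption; the `unpinned_actions` helper, its regexes, and the sample version tag are illustrative, not a complete or official checker:

```python
import re

# Matches "uses:" references in a GitHub Actions workflow (rough heuristic).
USES = re.compile(r"^\s*-?\s*uses:\s*([^\s#]+)")
# An immutable pin: a full 40-hex-character commit SHA after the "@".
SHA = re.compile(r"@[0-9a-f]{40}$")

def unpinned_actions(workflow_text):
    """Return 'uses:' references pinned to a mutable tag instead of a SHA."""
    refs = [m.group(1) for line in workflow_text.splitlines()
            if (m := USES.match(line))]
    return [r for r in refs if "@" in r and not SHA.search(r)]

demo = """
jobs:
  scan:
    steps:
      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
      - uses: aquasecurity/trivy-action@0.28.0
"""
print(unpinned_actions(demo))  # ['aquasecurity/trivy-action@0.28.0']
```

The SHA-pinned checkout passes; the tag-pinned scanner is flagged, because a tag is exactly what TeamPCP overwrote.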
The pivot from Trivy to LiteLLM wasn't random. It was strategic. LiteLLM is the universal proxy layer for LLM APIs, the connective tissue between AI applications and the models they call. Poison this layer and you don't compromise one company; you compromise every team that uses it, including users of Open WebUI, the de facto standard web front end for AI chat, which builds on LiteLLM.
With LiteLLM pulling 95 million monthly downloads, a single compromised version reaches millions of environments within hours. And teams running LiteLLM are, by definition, managing high-value AI infrastructure: OpenAI API keys, Anthropic credentials, Vertex AI service accounts. The prize at the end of this chain wasn't some legacy corporate database; it was the keys to organizations' entire AI stacks.
A supply chain attack against an AI proxy is an attack against every model call your application makes — and every secret required to make it.
What makes this attack technically sophisticated is the delivery mechanism. The attackers didn't use a malicious import that would trigger linters or dependency scanners. They used a .pth file.
Path configuration (.pth) files are a Python feature dating to the early 2000s. Any line in a .pth file that begins with import is executed the moment the Python interpreter initializes, before a single line of your own code runs. Traditional scanners looking for suspicious entries in requirements.txt wouldn't flag it, because the execution vector isn't a package; it's a configuration file sitting quietly in site-packages.
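You can see the mechanism for yourself with a harmless stand-in for the payload. In a real installation, Python's site module scans site-packages for .pth files automatically at startup; the sketch below points that same processing (`site.addsitedir`) at a temporary directory so it never touches your real environment, and the "payload" merely drops a marker file:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    marker = os.path.join(d, "marker.txt")
    with open(os.path.join(d, "demo.pth"), "w") as f:
        # Lines beginning with "import" in a .pth file are executed,
        # not treated as paths -- this is the execution vector.
        f.write("import os; open(%r, 'w').close()\n" % marker)

    # A child interpreter that runs no application code of its own;
    # addsitedir applies the same .pth processing site-packages gets.
    child = "import site; site.addsitedir(%r)" % d
    subprocess.run([sys.executable, "-c", child], check=True)

    # The payload ran even though nothing ever imported it explicitly.
    payload_ran = os.path.exists(marker)

print(payload_ran)  # True
```

Swap the marker file for a credential harvester and you have the TeamPCP technique: code that runs on every interpreter start, invisible to any review of your own source tree.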
The payload harvested cloud credentials, cryptocurrency wallet data (MetaMask, Exodus), and in environments with sufficient permissions, deployed privileged pods into Kubernetes clusters. The malware executed before a single line of your application ran. You never had a chance to catch it in flight.
The security community has understood supply chain risk in theory for years. SolarWinds made it visceral. The npm ecosystem's endless typosquatting made it routine. But the Trivy–LiteLLM cascade represents a new maturity level in attacker sophistication — a multi-stage campaign that used one trusted tool to compromise another, targeting specifically the AI infrastructure layer that most organizations have only recently built.
If your team is running LiteLLM, or anything like it, the questions you need to be asking are not just "are we patched?" They're structural: How do we verify the integrity of tools running in our CI/CD pipelines? How do we detect execution that happens before our application code? How are we monitoring for credential exfiltration that doesn't generate HTTP logs?
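One concrete starting point for the second question is to audit what can already execute before your code. This is a minimal sketch, not a complete detector; the `executable_pth_lines` helper is a name invented for this example, and it simply lists every .pth line visible to the current interpreter that triggers execution at startup:

```python
import glob
import os
import site

def executable_pth_lines(dirs=None):
    """List .pth lines that execute code at interpreter startup."""
    if dirs is None:
        # Site directories this interpreter actually processes.
        dirs = set(site.getsitepackages() + [site.getusersitepackages()])
    findings = []
    for d in dirs:
        for pth in sorted(glob.glob(os.path.join(d, "*.pth"))):
            with open(pth) as f:
                for lineno, line in enumerate(f, 1):
                    # Only lines starting with "import" are exec'd;
                    # everything else is treated as a path entry.
                    if line.strip().startswith(("import ", "import\t")):
                        findings.append((pth, lineno, line.strip()))
    return findings

for path, lineno, text in executable_pth_lines():
    print(f"{path}:{lineno}: {text}")
```

Note that legitimate tools (setuptools, coverage, virtualenv) also ship executable .pth lines, so the output needs triage rather than blanket alarm; the value is a baseline you can diff after every dependency change.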
ReversingLabs Spectra Assure's machine learning (ML) models detected the malicious behavior in the backdoored LiteLLM package, flagging the .pth injection that traditional signature-based scanners missed. Behavioral ML, not signature matching, is what closed this gap.
Join RL's free Spectra Assure Community to leverage advanced binary analysis to discover the newest open-source threats and malicious packages like the ones from the TeamPCP campaign.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.