
TanStack npm Hack Explained: What JavaScript Developers Must Do Now

A practical guide to the TanStack npm supply chain compromise of May 2026: how the attack worked, why trusted publishing did not prevent it, what the malware did, and the exact response steps JavaScript teams should take now.

May 12, 2026 · 18 min read · SevyDevy Team
TanStack · npm Security · Supply Chain Security · JavaScript Security · GitHub Actions · OIDC · CI/CD Security · React Router · Incident Response · AppSec · Malware · Developer Security

Table of contents

  1. Introduction
  2. The Short Version
  3. At a Glance: How the Attack Happened
  4. Why This Was Bigger Than a Normal Dependency Bug
  5. What Actually Happened
  6. The Three-Part Attack Chain
  7. 1) pull_request_target Became the Entry Point
  8. 2) Cache Poisoning Turned CI Performance Into an Attack Surface
  9. 3) OIDC Token Extraction Let the Attacker Publish as the Trusted Pipeline
  10. What the Malware Did
  11. Why Install Time Was the Real Blast Radius
  12. Who Should Assume Exposure?
  13. What Was and Was Not Affected
  14. What JavaScript Developers Must Do Now
  15. Immediate Response Flow
  16. How to Check Your Project Quickly
  17. Why Lockfiles Still Matter
  18. Why This Incident Changed How We Should Think About Trusted Publishing
  19. Hardening Lessons for CI/CD Teams
  20. What Small Teams and Solo Developers Should Do
  21. Final Takeaway

Introduction

On May 11, 2026, the TanStack ecosystem was hit by a serious npm supply chain compromise. This was not a typo-squatting scam, not a fake package, and not a simple maintainer-token leak. The malicious releases were published through TanStack’s legitimate trusted publishing identity after the attacker hijacked the GitHub Actions runner during the release process.

That detail makes this incident especially important for modern JavaScript teams. Many developers assume trusted publishing means a package is safe. This incident proved a harder truth: if the runner itself is compromised before publish, the final artifact can still look legitimate at the identity layer.

If your team uses TanStack packages, especially routing-related packages, this matters because the attack path was the package installation process itself. A normal install could be enough to expose secrets from developer machines or CI runners.

The Short Version

  • 84 malicious versions were published across 42 @tanstack/* packages.
  • The attack window was short, but the blast radius was wide.
  • The attacker chained pull_request_target misuse, GitHub Actions cache poisoning, and OIDC token extraction from runner memory.
  • No npm tokens were stolen, and the official npm publish step itself was not modified.
  • Installing an affected version could expose secrets from local machines or CI runners.
  • Any install host that resolved an affected version should be treated as potentially compromised.

At a Glance: How the Attack Happened

The attacker first used a dangerous GitHub Actions workflow pattern, then poisoned a cache entry, then waited for a real release run to restore that poisoned cache. Once the trusted runner was infected, malicious code extracted an OIDC token from memory and published infected package versions as if they came from the legitimate pipeline.

Why This Was Bigger Than a Normal Dependency Bug

A normal dependency bug is usually a flaw in code. You patch it and move on. A supply chain compromise is different. It turns your package manager, your install process, and your CI/CD pipeline into the delivery mechanism for malicious code.

This TanStack incident is even more important because it weakens a common engineering assumption: that trusted publishing is enough protection. In this case, the publish identity was legitimate. The build environment behind it was not.

What Actually Happened

According to TanStack’s postmortem, the attacker chained three weaknesses together. First, they abused pull_request_target. Second, they poisoned a GitHub Actions cache across the fork-to-base trust boundary. Third, they extracted an OIDC token from the trusted runner’s memory and used it to publish malicious packages directly to npm.

This is why the incident is so uncomfortable for engineering teams. The packages were not published through a fake identity. They were published through the real release identity after the runner was compromised mid-flight.

The Three-Part Attack Chain

1) pull_request_target Became the Entry Point

The attack started with a workflow that used pull_request_target. That event is powerful and dangerous because it runs in the context of the base repository, with access to its secrets. If such a workflow checks out or executes fork-controlled code in the wrong way, an attacker can gain code execution inside a privileged CI context.

That is what made the first step possible. The attacker did not need direct write access to the main repository. They only needed a path to get fork-controlled content involved in a privileged workflow.
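
A quick way to check your own repositories for this pattern is to search workflow files for pull_request_target combined with a checkout of PR-controlled refs. This is a rough heuristic, not a full audit:

# list workflows that use the privileged pull_request_target event
grep -Rl "pull_request_target" .github/workflows/

# flag workflows that also reference PR-controlled refs; any overlap deserves manual review
grep -Rl "github.event.pull_request.head" .github/workflows/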

2) Cache Poisoning Turned CI Performance Into an Attack Surface

The attacker then poisoned the GitHub Actions cache so that a later trusted workflow on main would restore malicious state. This is one of the most important lessons in the whole incident. Engineering teams often think of caches only as performance helpers. In reality, a shared cache across trust boundaries can become a stealthy persistence layer.

Once the poisoned cache was saved under a key the legitimate release workflow would use, the attacker no longer needed to stay visible in the pull request. The payload was already waiting for the next real release run.
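
If you suspect a poisoned cache in one of your own repositories, recent versions of the GitHub CLI can list and delete Actions caches. A minimal sketch, with your-org/your-repo as a placeholder:

# list Actions caches and the keys they were saved under
gh cache list --repo your-org/your-repo

# delete a suspicious entry by key; the next trusted run rebuilds it from scratch
gh cache delete "<cache-key>" --repo your-org/your-repo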

3) OIDC Token Extraction Let the Attacker Publish as the Trusted Pipeline

The final step was extracting an OIDC token from the GitHub Actions runner process memory. That token is what allowed the malicious publish to happen through TanStack’s legitimate trusted publisher binding.

This is the key shift in mindset developers need after this incident. Trusted publishing proves where a publish came from. It does not prove that the runner producing the artifact was clean at the time.
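
It helps to see what the token actually asserts. An Actions OIDC token is a standard JWT, so you can decode its payload to inspect the claims a registry verifies. A sketch, assuming the token value is in $OIDC_TOKEN; treat any real token as a secret:

# decode the JWT payload (the second dot-separated segment)
# note: JWT payloads are base64url-encoded, so some base64 builds need padding fixes
echo "$OIDC_TOKEN" | cut -d. -f2 | base64 -d 2>/dev/null | jq .

# claims like repository, ref, and workflow_ref are what trusted publishing checks;
# none of them say anything about what else was running on the runner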

What the Malware Did

TanStack’s postmortem says the payload harvested credentials from common developer and cloud environments. That included AWS metadata and Secrets Manager access, GCP metadata, Kubernetes service-account tokens, Vault tokens, npm credentials, GitHub tokens, and SSH private keys.

The payload also exfiltrated data over the Session or Oxen messenger file-upload network instead of using a traditional attacker-controlled command-and-control domain. On top of that, it tried to self-propagate by discovering other packages maintained by the victim and republishing them with the same malicious injection.

  • Harvest credentials from cloud and developer environments.
  • Read npm, GitHub, and SSH-related secrets from the install host.
  • Exfiltrate data over Session / Oxen infrastructure.
  • Attempt self-propagation into other packages maintained by the victim.
  • Run during installation, which means compromise could happen before app code even starts.
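
If an install host may have run this payload, a practical starting point for rotation is to inventory the credential files it could read. The paths below are common defaults, not an exhaustive list:

# typical default locations of secrets readable by an install-time payload
ls -la ~/.npmrc ~/.aws/credentials ~/.kube/config ~/.ssh/id_* ~/.config/gh/hosts.yml 2>/dev/null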

Why Install Time Was the Real Blast Radius

The most important practical point is this: the malicious code ran during package installation. That means the dangerous moment was not when your app shipped to production. The dangerous moment was when a developer machine or CI runner resolved and installed an affected version.

This is why teams must think beyond source code review. Even if your application never deployed the bad package, the install host may still have leaked secrets.
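
One broadly useful mitigation is disabling lifecycle scripts at install time. Note the tradeoff: some packages legitimately need postinstall steps, so test before enforcing this everywhere:

# disable lifecycle scripts for a one-off install
npm install --ignore-scripts

# or enforce it per-project via .npmrc
echo "ignore-scripts=true" >> .npmrc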

Who Should Assume Exposure?

  • Any developer machine that installed an affected @tanstack package on May 11, 2026.
  • Any CI runner that performed a fresh install of affected versions during the compromise window.
  • Any preview environment or ephemeral build system that resolved affected versions from the registry.
  • Any team using loose version ranges without strict lockfile discipline.

If one of these conditions applies, the safe assumption is not that you probably got lucky. The safe assumption is that the install host may be compromised until proven otherwise.

What Was and Was Not Affected

TanStack said 42 packages and 84 malicious versions were involved. At the same time, they also confirmed that several major families were clean, including @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, and the @tanstack/start meta-package.

That means teams should investigate actual installed packages and versions instead of panicking blindly across the entire TanStack brand.

What JavaScript Developers Must Do Now

This is not just a dependency update task. It is an incident response task. If an affected version was installed, your goal is not only to remove the package. Your goal is to assess whether the install environment can still be trusted.

  • Identify whether your machines or CI runners installed affected versions.
  • Treat impacted install hosts as potentially compromised.
  • Contain and rebuild from a clean state instead of trusting in-place cleanup.
  • Rotate every secret that was reachable from the install host.
  • Delete node_modules and reinstall only from reviewed safe versions.
  • Audit GitHub Actions workflows using pull_request_target and cross-boundary caches.
  • Review cloud, GitHub, npm, and SSH audit trails after the install time.

Immediate Response Flow
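
A condensed version of that flow, expressed as commands. The exact steps depend on your stack, so treat this as a sketch of the order of operations rather than a complete runbook:

# 1) identify: did this host resolve an affected version?
grep -R "@tanstack/" package-lock.json yarn.lock pnpm-lock.yaml 2>/dev/null

# 2) contain: stop trusting the host and remove installed artifacts
rm -rf node_modules

# 3) rotate: revoke credentials reachable from the host (examples; adapt to your stack)
npm token list          # then: npm token revoke <token-id>
gh auth status          # review and re-issue GitHub credentials as needed

# 4) rebuild: reinstall from a reviewed lockfile on a clean machine or fresh runner
npm ci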

How to Check Your Project Quickly

Start with your lockfile and dependency graph. The first question is simple: did your project resolve an affected @tanstack package version?

# search lockfiles for TanStack packages
grep -R "@tanstack/" package-lock.json yarn.lock pnpm-lock.yaml bun.lock 2>/dev/null

# inspect installed packages
npm ls @tanstack/react-router @tanstack/router-core @tanstack/history @tanstack/router-plugin @tanstack/router-vite-plugin @tanstack/react-start

# if needed, inspect tarballs safely without executing scripts
npm pack @tanstack/react-router@<version>
tar -xzf tanstack-react-router-*.tgz
cat package/package.json
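
# check for install-time hooks that would run automatically on npm install
grep -E '"(pre|post)?install"' package/package.json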

If you confirm an affected version was installed, escalate internally as a real security incident.

Why Lockfiles Still Matter

Strict lockfile discipline remains one of the most practical defenses in incidents like this. If your team had already committed a safe lockfile and used deterministic installs, you were much less likely to silently pull newly compromised versions from the registry.

A lockfile is not just a reproducibility tool. In supply chain incidents, it becomes a security boundary.
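
In practice, that means installing from the committed lockfile instead of re-resolving version ranges. A minimal sketch across the common package managers:

# install exactly what the lockfile specifies, failing if it would need to change
npm ci
pnpm install --frozen-lockfile
yarn install --frozen-lockfile   # classic yarn; yarn berry uses --immutable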

Why This Incident Changed How We Should Think About Trusted Publishing

Trusted publishing is still valuable. But this incident showed its limit. Provenance and identity can prove where a build came from. They cannot prove the runner was uncompromised before the build was created.

That means mature engineering teams need a deeper trust model: workflow isolation, cache hygiene, OIDC privilege minimization, and strong boundaries between untrusted PR activity and trusted release workflows.
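
Provenance verification is still worth doing; it just has to be understood as one layer. For example, npm can verify registry signatures and provenance attestations for installed packages:

# verify registry signatures and provenance attestations for installed packages
# (this proves where a package was published from, not that the build runner was clean)
npm audit signatures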

Hardening Lessons for CI/CD Teams

  • Avoid running fork-controlled code in pull_request_target contexts.
  • Do not share caches across untrusted PR workflows and trusted release workflows.
  • Keep OIDC and workflow token permissions as narrow as possible (see the check after this list).
  • Reduce secrets available to install-time processes on runners.
  • Separate test and release trust boundaries more aggressively.
  • Treat caches as security-relevant state, not just speed improvements.
  • Add package-age and dependency-review controls before production adoption.
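
One concrete check for the permissions point above, using the GitHub REST API through the gh CLI; your-org/your-repo is a placeholder:

# inspect the default token permissions granted to workflows in a repository
# ("read" should be the norm; blanket "write" is a red flag)
gh api repos/your-org/your-repo/actions/permissions/workflow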

What Small Teams and Solo Developers Should Do

Small teams are often more exposed than large enterprises because the same machine may hold source access, cloud access, deployment access, and package publishing credentials at once.

  • Use exact versions for critical dependencies (see the sketch after this list).
  • Keep deploy credentials separate from normal development environments when possible.
  • Do not allow broad long-lived secrets to sit on laptops or general-purpose CI runners.
  • Review workflow files like you would review backend auth code.
  • Treat fresh installs of important dependencies as security-sensitive events.
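
For the exact-versions point above, a minimal way to enforce it with npm; pnpm and yarn have equivalent settings:

# save exact versions instead of ^ ranges by default
npm config set save-exact true

# or enforce it per-project via .npmrc
echo "save-exact=true" >> .npmrc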

Final Takeaway

The TanStack npm compromise was not just another package incident. It was a reminder that the real target is often the delivery pipeline itself. Once attackers get inside CI/CD trust boundaries, they can produce artifacts that look legitimate enough to pass normal confidence checks.

For JavaScript developers, the practical lesson is immediate: verify installed versions, treat affected hosts as potentially compromised, rotate secrets, rebuild clean, and audit the workflow design that made the compromise possible.

Modern engineering speed only matters if your trust model is stronger than the attacker’s creativity. This incident showed how quickly that trust can collapse when workflow security is treated as a detail instead of a core part of software delivery.
