A threat actor group known as TeamPCP compromised two GitHub Actions maintained by Checkmarx, one of the world's leading application security testing companies. The attack targeted automation tools that software development teams across the industry rely on daily. By injecting malicious code into those tools, TeamPCP positioned itself to steal credentials from any repository running the compromised Actions and, from there, to push unauthorized code, exfiltrate secrets, or gain access to cloud infrastructure, all without ever touching a victim's own systems directly.

The short version: a security company's own security tooling became the attack vector. The interesting part is why that is so much harder to defend against than a direct breach.

What Checkmarx Does, and Why the Irony Matters

Checkmarx is not a peripheral player in enterprise software security. The company, founded in 2006 and now operating at global scale, provides application security testing (AST) platforms that developers use to scan their own code for vulnerabilities before shipping it to production. Enterprises pay Checkmarx specifically because they want someone with deep security expertise guarding their development pipeline. The company's clients include major financial institutions, healthcare organizations, and government agencies, exactly the kinds of organizations that have the most to lose if their code contains exploitable flaws.

That context gives the TeamPCP attack an edge that a breach of, say, a retail chain would not carry. Checkmarx is in the business of being the last line of defense. Its tools sit inside the software development process at thousands of organizations. When the attacker is not targeting a company's product but rather the tools developers use to build and verify that product, the trust relationship being exploited is precisely the one that was supposed to make the software ecosystem safer.

This is not unprecedented: the SolarWinds attack in 2020 followed the same structural logic, targeting a network management tool trusted by federal agencies and Fortune 500 companies. But it carries a particular sting when the compromised tool is made by a company whose core value proposition is security expertise.

What GitHub Actions Are, and Why They Make a Tempting Target

GitHub Actions is the system GitHub built to run automated workflows: the build, test, and scan steps that fire whenever code changes. Developers write these workflows as configuration files, and within those workflows, they can call pre-built modules (also called Actions) created by third parties. Checkmarx, naturally, publishes its own Actions so that development teams can drop a Checkmarx security scan directly into their automated pipeline without having to write the integration themselves.

That convenience is the attack surface. When a development team uses a third-party Action, they are trusting that the code inside that Action does what it claims to do and only what it claims to do. If the Action's publisher is compromised, every team using that Action is now running attacker-controlled code inside their own automated pipeline, with full access to everything that pipeline touches.

What CI/CD Means, and Why Credentials There Are So Valuable

CI/CD, short for continuous integration and continuous delivery, is the practice of automatically building, testing, and deploying code every time developers commit changes. To do all of that automatically, CI/CD pipelines need credentials. They need permission to access cloud infrastructure, to push code to production servers, to read from and write to databases, and to interact with deployment platforms. These credentials are typically stored as encrypted secrets within the repository or pipeline environment: inaccessible even to the developers who configured them once set, but automatically available to the pipeline that needs them.

For an attacker who compromises a GitHub Action, those secrets become accessible. The malicious code embedded in the Action runs in exactly the same environment where those secrets live. Extracting a cloud provider access key, a database password, or a deployment token from within an automated pipeline is not meaningfully different, from a technical standpoint, from reading a variable in a program. Once the attacker controls code running inside the pipeline, the pipeline's access becomes the attacker's access.
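
The point that extracting a secret is "not meaningfully different from reading a variable" can be made concrete. The sketch below is a minimal illustration of the mechanism, not the actual TeamPCP payload; the marker strings and variable names are illustrative assumptions.

```python
import os

# In GitHub Actions, secrets reach a workflow step as ordinary
# environment variables (mapped in the workflow's `env:` block).
# Any code the step runs -- including a compromised third-party
# Action -- reads them the same way it reads any other variable.

def collect_env_secrets(markers=("KEY", "TOKEN", "SECRET", "PASSWORD")):
    """Return environment variables whose names look credential-like."""
    return {
        name: value
        for name, value in os.environ.items()
        if any(marker in name.upper() for marker in markers)
    }

if __name__ == "__main__":
    # A real payload would exfiltrate the values; this only lists names.
    for name in sorted(collect_env_secrets()):
        print(name)
```

The asymmetry is the point: defenders must treat secrets as sensitive, but to code already running inside the pipeline they are just strings in memory.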

What stolen CI/CD credentials enable is, in practice, nearly unlimited lateral movement within an organization's infrastructure. An AWS access key obtained from a compromised pipeline can be used to spin up new compute instances, access stored data, or establish persistent access that survives the original compromised Action being removed. A deployment token can be used to push malicious code to production, not just to one organization's repository, but to the compiled software artifacts that organization ships to its own customers.

The Supply Chain Attack Model: How One Compromise Becomes Thousands

What makes this category of attack particularly efficient from an adversarial standpoint is the multiplication factor. A direct attack on a single organization yields access to that organization's systems. A supply chain attack (one that compromises a tool used by many organizations) multiplies that yield across every downstream user of that tool, without requiring the attacker to break into each target individually.

Think of it the way you would a contaminated ingredient in a food supply chain. If a single restaurant kitchen uses spoiled eggs, the damage is contained to that kitchen's customers. But if those eggs are distributed by a supplier to a hundred restaurants across a city, the contamination reaches every table in every one of those hundred restaurants before anyone realizes the source of the problem. The supplier did not need to break into each restaurant; the spoiled eggs were already inside the supply chain.

Software supply chains work the same way. Checkmarx's GitHub Actions were the eggs. TeamPCP contaminated them at the source. Every development team that pulled those Actions into their pipeline in the window between the breach and its detection was running attacker-controlled code. The attacker did not need to identify, research, and individually compromise each of those teams. The compromise traveled with the tool.

This attack pattern has a documented history, and such attacks are growing more frequent and more consequential. Almost exactly a year before the TeamPCP incident, the tj-actions/changed-files GitHub Action was compromised in a strikingly similar operation. That Action was used by thousands of repositories. The attacker modified it to print CI/CD secrets into pipeline logs, making them accessible to anyone monitoring those logs. Security researchers at StepSecurity were the first to document the scope of that incident, which ultimately traced back to a compromised personal access token.

The pattern from tj-actions to TeamPCP's attack on Checkmarx is not coincidence. It reflects a strategic shift in how sophisticated threat actors approach software infrastructure. Direct attacks on hardened enterprise targets are costly and increasingly difficult. Supply chain attacks, by contrast, exploit the trust that makes automated development pipelines functional. Every security measure an organization implements to protect its own perimeter is irrelevant if the attacker enters through a tool the organization chose to trust. This same exploitation of trusted relationships is visible in the Anthropic internal asset exposure, where misconfiguration rather than malice created an analogous trust failure.

TeamPCP: What Is Known About the Threat Actor

TeamPCP is a threat actor group with a documented focus on development infrastructure. The group has been observed targeting continuous integration systems, source code management platforms, and the tooling developers use to automate their workflows. Their operational pattern favors access that is persistent and quiet rather than immediately destructive: compromising a pipeline tool and waiting for credentials to flow through it is more valuable, over time, than a noisy ransomware deployment that triggers immediate incident response.

The Checkmarx compromise follows that pattern. The breach was not discovered immediately, which suggests the attack may have had a window of operation before detection. The specific mechanism by which TeamPCP gained access to Checkmarx's GitHub Actions publishing pipeline had not been fully disclosed in public-facing reporting at the time of publication. The two broad possibilities in incidents of this type are a compromised developer account with sufficient permissions to publish updated Actions, or a compromise of the automation systems Checkmarx itself uses to build and publish those Actions; the latter would represent a particularly deep level of access.

What is clear from the structure of the attack is that TeamPCP understood both the technical architecture of GitHub Actions and the trust model that makes supply chain attacks effective. This was not opportunistic scanning for misconfigured servers. It was a targeted operation against a specific, high-value point in the software development toolchain.

Detecting and Responding to This Category of Attack

The detection challenge in supply chain attacks on GitHub Actions is structural. When developers review the code in their own repositories, they are checking the code they wrote. The Actions they call are typically referenced by name and version tag rather than inspected line by line on every pipeline run. A malicious change to a referenced Action does not show up as a change in the developer's own codebase: it shows up as a change in a dependency they are trusting implicitly.

Security researchers at organizations including Chainguard and Wiz have noted that pinning GitHub Actions to specific commit hashes, rather than version tags, significantly reduces the attack surface for this category of compromise. A version tag like v3.2.1 can be silently reassigned to point to different code by whoever controls the repository. A commit hash is immutable: it identifies one specific, unchangeable snapshot of the code. Any modification to the Action code would produce a different hash, and a pipeline pinned to the original hash would not run the modified version. GitHub's supply chain security documentation covers commit-pinning and related hardening practices in detail.
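
The difference between a mutable tag and an immutable hash can be checked mechanically. Below is a rough sketch of such an audit, under the assumption that `uses:` lines follow the standard `owner/repo@ref` form; `some-vendor/scan-action` is an illustrative name, not a real Action.

```python
import re

# A full 40-character hex commit SHA pins an Action immutably; a tag
# like v3.2.1 or a branch name can be silently re-pointed to new code.
PINNED = re.compile(r"^[0-9a-f]{40}$")
USES = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")

def unpinned_uses(workflow_text):
    """Return 'uses:' references that are not pinned to a commit hash."""
    findings = []
    for line in workflow_text.splitlines():
        match = USES.search(line)
        if match and not PINNED.match(match.group(2)):
            findings.append(f"{match.group(1)}@{match.group(2)}")
    return findings

example = """
steps:
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567
  - uses: some-vendor/scan-action@v3.2.1
"""
print(unpinned_uses(example))  # → ['some-vendor/scan-action@v3.2.1']
```

Note what pinning does and does not buy: it guarantees the pipeline runs the exact code that was reviewed, but someone still has to review the code at that hash, and update the pin when the Action legitimately changes.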

Security experts across the industry also recommend implementing runtime monitoring for CI/CD pipelines that flags unexpected outbound network connections, unusual access to stored secrets, or Actions performing operations outside their expected scope. Endor Labs security researcher Henrik Plate, who has published extensively on software supply chain risk, has written that "the gap between what developers think their pipelines are doing and what they are actually doing is where supply chain attackers operate." That gap is partly a tooling problem and partly an organizational one: it requires both technical controls and the processes to review them.
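
The "unexpected outbound connections" signal reduces to comparing observed behavior against an allowlist per pipeline step. A minimal sketch of that idea follows; the step names and hostnames are illustrative placeholders, and a production monitor would collect observed hosts from network telemetry rather than take them as arguments.

```python
# Maps each pipeline step to the hosts it legitimately contacts.
# All names here are illustrative assumptions, not real infrastructure.
EXPECTED_EGRESS = {
    "build": {"github.com", "registry.npmjs.org"},
    "scan": {"github.com", "api.scanner.example"},
}

def unexpected_connections(step, observed_hosts):
    """Return hosts a step contacted outside its expected set."""
    allowed = EXPECTED_EGRESS.get(step, set())
    return sorted(set(observed_hosts) - allowed)

# A scan step suddenly talking to an unknown host is exactly the kind
# of signal this style of monitoring is meant to surface.
print(unexpected_connections("scan", ["github.com", "c2.attacker.example"]))
# → ['c2.attacker.example']
```

The hard part in practice is not the set difference but maintaining the allowlist, which is the organizational half of the gap Plate describes.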

For organizations that were running the compromised Checkmarx Actions during the exposure window, the immediate recommended response involves rotating all CI/CD credentials that were available to those pipelines, auditing deployment logs for unauthorized pushes, and reviewing cloud infrastructure access logs for activity from the pipeline's credentials in the relevant period.

The Broader Pattern: CI/CD as the New Attack Frontier

The security industry has spent the past decade hardening application perimeters. Firewalls, intrusion detection systems, vulnerability scanning, penetration testing: the investments organizations have made in securing their running production systems are substantial. But that investment has collectively made the direct attack path more expensive, pushing sophisticated threat actors toward vectors that bypass perimeter defenses entirely.

CI/CD pipelines represent one of the most consistently underprotected areas of enterprise infrastructure. A 2025 survey by Anchore found that fewer than a third of organizations had implemented runtime monitoring for their CI/CD pipelines. The same survey found that while most organizations scanned their production code for vulnerabilities, significantly fewer had implemented supply chain security controls for the tools running their pipelines, the exact gap that TeamPCP exploited.

The economics favor attackers in this space. GitHub alone hosts millions of repositories, a substantial proportion of which use GitHub Actions. The ecosystem of publicly available Actions runs to tens of thousands of modules, the majority maintained by individuals or small teams with limited security operations capacity. Identifying high-value Actions (ones used by many organizations, maintained by teams with access to sensitive downstream infrastructure) is, for a sufficiently resourced threat actor, a straightforward research exercise. Compromising one such Action can yield credentials across hundreds or thousands of organizations simultaneously. These same dynamics drive the nation-state cyber campaigns documented in the surge in cyber retaliation following US-Israel strikes against Iran, where development tooling has become an increasingly attractive target for hostile actors.

Supply chain attacks on development tooling are, in the words of Anne Connell, Principal Security Researcher at Mandiant, "the natural progression of where attackers go when the front door gets harder to kick in."

The implication is that supply chain attacks on development tooling are not a temporary trend to be solved by patching the specific vulnerabilities exploited in any single incident. They are a durable feature of the threat landscape that the software development industry has not yet fully adapted to.

What This Means for Software Supply Chain Security Going Forward

The Checkmarx incident adds a significant data point to an argument that parts of the security industry have been making for several years: the trust model underlying modern software development is structurally misaligned with the threat landscape. Development teams build software by assembling components from dozens of sources (open-source libraries, cloud services, third-party Actions, package registries). That assembly model is enormously productive. It is also a threat surface that no single organization can fully control.

The response to this reality is taking shape along several tracks. At the standards level, the Open Source Security Foundation (OpenSSF) has been developing frameworks for software supply chain integrity, including SLSA (Supply-chain Levels for Software Artifacts), which defines increasingly rigorous levels of verification for where code came from and whether it has been tampered with. The SLSA framework documentation outlines the four levels of supply chain assurance organizations can work toward. GitHub itself has invested in features that make it easier to pin Actions to specific commit hashes and to require code signing for published packages.

At the regulatory level, the White House Executive Order on cybersecurity and subsequent guidance from the Cybersecurity and Infrastructure Security Agency (CISA) have both addressed software supply chain security as a priority area, pushing federal contractors toward software bills of materials, or SBOMs (essentially a verified ingredient list for software, documenting every component and its provenance). CISA's supply chain security guidance sets out the current federal posture on this category of risk. Those requirements are beginning to propagate into commercial procurement standards as well.

But standards and requirements operate at the policy level. The TeamPCP attack on Checkmarx happened inside an organization with the security expertise and resources to know better. The question the industry is now being forced to answer is not whether supply chain attacks are a serious threat (the evidence on that point is unambiguous) but whether the organizational processes and tooling to defend against them can mature faster than threat actors' ability to exploit them.

Checkmarx, the company hired to find vulnerabilities in other organizations' code, now has a breach to investigate in its own pipeline tooling. That is, at minimum, a forcing function for the company to review its own security practices with the same rigor it brings to its customers'. Whether the broader industry takes the same lesson from the incident (that the tools doing the securing need to be secured with equal discipline) will determine how many similar incidents follow.
