SBOM-Driven Supply Chain Compromise Detection: Finding Axios 1.14.1 in Production

The Problem

On March 31 2026, when the Axios compromise was disclosed, the first question every security team asked was: “are we running it?” In organisations without SBOMs, answering this required pulling every container image from the registry, running a scanner against each one, and correlating the results against the list of running pods — a process that took hours or days depending on the number of images. Organisations that had adopted SBOM attestations (generating an SBOM at build time and attaching it to the image as an OCI attestation) could answer the question in seconds: query the attestation store for all images whose SBOM contains axios@1.14.1. The Axios incident is the clearest production argument for continuous SBOM-based monitoring: not just generating SBOMs, but using them as a live inventory that is continuously compared against an IOC feed.

The Axios attack, attributed to North Korean threat actor Sapphire Sleet, used a stolen npm maintainer token to publish axios@1.14.1 with a phantom transitive dependency: plain-crypto-js@4.2.1. Any container image built after the malicious publish and before the removal window — a period of approximately 11 hours — would contain both packages. A team running 200 Deployments in production, each built on a different schedule, had no way to know which images were affected without inspecting each one. The SBOM-equipped team queried their attestation store and produced a definitive list in under 10 seconds.

This article covers generating SBOMs at build time with Syft, attaching them as cosign OCI attestations, running a scheduled comparison job that checks every deployed pod’s SBOM against a maintained IOC package list, and alerting when a match is found. The monitoring loop runs every 15 minutes, meaning that within one cycle of a new IOC entry being added — the same day a compromise is disclosed — every affected workload is identified.

Target systems: Kubernetes 1.29+, GitHub Actions, Syft 1.x, cosign 2.x, Python 3.11+.

Threat Model

  • Compromised package version deployed in production and not detected until a breach occurs. Without continuous SBOM monitoring, axios@1.14.1 can run in a production pod for days or weeks after the compromise is publicly disclosed. The container image was built during the exposure window, shipped to production, and never rescanned. The running workload carries the malicious package indefinitely.

  • IOC packages (axios@1.14.1, plain-crypto-js@4.2.1) running in containers days or weeks after public disclosure. The Axios compromise was disclosed on March 31 2026. Organisations that relied on ad-hoc scanning — “we’ll scan images when we get a chance” — did not complete the inventory until April 3 at the earliest. During that three-day gap, the RAT was active in production environments, exfiltrating credentials and establishing persistence.

  • Supply chain compromise affecting a transitive dependency not surfaced by first-level dependency review. plain-crypto-js is not a direct dependency of any application. It is a transitive dependency pulled in by axios@1.14.1. A developer reviewing package.json and package-lock.json for axios would not see plain-crypto-js listed at the first level. A deep SBOM — generated by Syft scanning the full installed node_modules/ tree — does capture it. A shallow SBOM generated from package.json alone does not.

  • No SBOM means no way to quickly scope blast radius after a supply chain event. Without an SBOM per image, the incident response workflow is: enumerate all running pods, list their images, pull each image, scan each image, collate results. At 500 images and 4 minutes per scan, that is over 33 hours of serial work. Blast radius assessment is not complete until the work is done. With SBOM attestations, the assessment is a metadata query: no image pulling, no scanning, no waiting.

Hardening Configuration

1. Generate SBOMs at Build Time with Syft

Add a Syft step to every container image build pipeline. The SBOM must be generated after the image is built — not from the package.json or Dockerfile — so that it reflects the actual installed packages, including transitive dependencies. Use the anchore/sbom-action GitHub Actions integration:

name: build-and-attest

on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push container image
        id: build
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
          outputs: type=image,name=ghcr.io/${{ github.repository }},push-by-digest=true,name-canonical=true,push=true

      - name: Generate SBOM with Syft
        uses: anchore/sbom-action@v0
        with:
          image: ghcr.io/${{ github.repository }}@${{ steps.build.outputs.digest }}
          format: spdx-json
          output-file: sbom.spdx.json
          upload-artifact: true
          upload-release-assets: false

The upload-artifact: true option stores the SBOM as a GitHub Actions build artifact alongside the workflow run. This provides an audit trail — you can retrieve the SBOM for any image digest by finding the corresponding workflow run. The artifact is the backup; the attestation (step 2) is the live queryable store.

Syft must scan the image at the layer level, not from source manifests. Ensure the build step pushes the image before the SBOM step runs. Syft resolves the full dependency graph including transitive packages by inspecting the installed files inside the image filesystem — node_modules/, site-packages/, installed RPMs and debs. This is why plain-crypto-js@4.2.1 appears in the SBOM even though it is absent from the application’s own package.json.
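As a sanity check, the SPDX document Syft emits can be inspected directly. A minimal sketch of the lookup, using a synthetic SBOM fragment in the same SPDX JSON shape queried elsewhere in this article:

```python
import json

def sbom_contains(sbom: dict, name: str, version: str) -> bool:
    """Return True if an SPDX JSON document lists the given package@version."""
    return any(
        pkg.get("name") == name and pkg.get("versionInfo") == version
        for pkg in sbom.get("packages", [])
    )

# A fragment of a deep, image-level SBOM. The phantom transitive dependency
# appears because Syft walked the installed node_modules/ tree.
sbom = json.loads("""
{
  "packages": [
    {"name": "axios", "versionInfo": "1.14.1"},
    {"name": "plain-crypto-js", "versionInfo": "4.2.1"}
  ]
}
""")

print(sbom_contains(sbom, "plain-crypto-js", "4.2.1"))  # True
```

The same check against a manifest-derived SBOM would return False, because plain-crypto-js never appears at the declared-dependency level.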

2. Attach SBOMs as cosign OCI Attestations

Attaching the SBOM as a cosign attestation binds it to the image digest in the registry. The attestation is retrievable by digest without pulling the full image — a critical property for the monitoring job in step 3. Add the attestation step immediately after the SBOM generation step:

cosign attest \
  --predicate sbom.spdx.json \
  --type spdxjson \
  --yes \
  ghcr.io/your-org/your-app@sha256:<image-digest>

In GitHub Actions, with keyless signing via Sigstore’s Fulcio CA and the workflow’s OIDC token:

      - name: Install cosign
        uses: sigstore/cosign-installer@v3

      - name: Attest SBOM to image
        run: |
          cosign attest \
            --predicate sbom.spdx.json \
            --type spdxjson \
            --yes \
            ghcr.io/${{ github.repository }}@${{ steps.build.outputs.digest }}

Verify the attestation was attached correctly:

cosign verify-attestation \
  --type spdxjson \
  --certificate-identity-regexp "https://github.com/your-org/your-repo" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  ghcr.io/your-org/your-app@sha256:<digest> \
  | jq '.payload | @base64d | fromjson | .predicate.packages[] | select(.name == "axios") | {name, versionInfo}'

The output of the jq filter should include axios with its version. On a clean build, the version should be the version declared in package.json. On a compromised build, it would be 1.14.1 — identifiable immediately without pulling the image.

3. Continuous SBOM-to-IOC Comparison

The monitoring job runs as a Kubernetes CronJob every 15 minutes. It lists all running pod image digests from the Kubernetes API, fetches each image’s SBOM attestation via cosign verify-attestation, and checks each SBOM against the IOC package list. A match triggers a PagerDuty alert.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: sbom-ioc-monitor
  namespace: security
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sbom-monitor
          restartPolicy: OnFailure
          containers:
            - name: monitor
              image: ghcr.io/your-org/sbom-monitor:latest
              command: ["/usr/local/bin/python3", "/app/monitor.py"]
              env:
                - name: IOC_FILE
                  value: /config/ioc-packages.yaml
                - name: PAGERDUTY_ROUTING_KEY
                  valueFrom:
                    secretKeyRef:
                      name: pagerduty-credentials
                      key: routing-key
              volumeMounts:
                - name: ioc-config
                  mountPath: /config
          volumes:
            - name: ioc-config
              configMap:
                name: ioc-packages
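The sbom-monitor service account referenced above needs read access to pods across all namespaces; without it, the monitoring script's list_pod_for_all_namespaces call fails with a 403. A minimal RBAC sketch (the names match the CronJob manifest; adjust to your own conventions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sbom-monitor
  namespace: security
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sbom-monitor-read-pods
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sbom-monitor-read-pods
subjects:
  - kind: ServiceAccount
    name: sbom-monitor
    namespace: security
roleRef:
  kind: ClusterRole
  name: sbom-monitor-read-pods
  apiGroup: rbac.authorization.k8s.io
```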

The Python monitoring script that powers this CronJob:

import base64
import json
import os
import subprocess
import sys

import requests
import yaml
from kubernetes import client, config

IOC_FILE = os.environ.get("IOC_FILE", "/config/ioc-packages.yaml")
PAGERDUTY_ROUTING_KEY = os.environ["PAGERDUTY_ROUTING_KEY"]
PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

REGISTRY_IDENTITY_REGEXP = os.environ.get(
    "REGISTRY_IDENTITY_REGEXP",
    "https://github.com/your-org/"
)
OIDC_ISSUER = os.environ.get(
    "OIDC_ISSUER",
    "https://token.actions.githubusercontent.com"
)


def load_ioc_packages(path: str) -> list[dict]:
    with open(path) as f:
        data = yaml.safe_load(f)
    return data.get("packages", [])


def get_running_pod_images() -> list[dict]:
    """List every running container's image reference and digest via the Kubernetes API."""
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    images = []
    for pod in pods.items:
        if pod.status.phase not in ("Running", "Pending"):
            continue
        for container_status in (pod.status.container_statuses or []):
            image_id = container_status.image_id
            # Only digest-pinned imageIDs can be matched to an attestation.
            if not image_id or "@sha256:" not in image_id:
                continue
            image_ref, _, digest = image_id.partition("@")
            images.append({
                "pod_name": pod.metadata.name,
                "namespace": pod.metadata.namespace,
                "container": container_status.name,
                "image_ref": image_ref,
                "digest": digest,
                "image_id": image_id,
            })
    return images


def fetch_sbom_packages(image_ref: str, digest: str) -> list[dict] | None:
    """Fetch and verify the SPDX attestation for an image digest.

    Returns the package list, or None if no verifiable attestation exists.
    """
    image_with_digest = f"{image_ref}@{digest}"
    try:
        result = subprocess.run(
            [
                "cosign", "verify-attestation",
                "--type", "spdxjson",
                "--certificate-identity-regexp", REGISTRY_IDENTITY_REGEXP,
                "--certificate-oidc-issuer", OIDC_ISSUER,
                image_with_digest,
            ],
            capture_output=True,
            text=True,
            timeout=30,
        )
        if result.returncode != 0:
            return None
        packages = []
        # cosign emits one DSSE envelope per line; the SPDX document is the
        # base64-encoded payload of each envelope.
        for line in result.stdout.strip().splitlines():
            try:
                attestation = json.loads(line)
                payload = json.loads(base64.b64decode(attestation["payload"]).decode())
                for pkg in payload.get("predicate", {}).get("packages", []):
                    packages.append({
                        "name": pkg.get("name", ""),
                        "version": pkg.get("versionInfo", ""),
                    })
            except (json.JSONDecodeError, KeyError):
                continue
        return packages
    except subprocess.TimeoutExpired:
        return None


def check_sbom_against_iocs(packages: list[dict], iocs: list[dict]) -> list[dict]:
    ioc_set = {(ioc["name"], ioc["version"]) for ioc in iocs}
    matches = []
    for pkg in packages:
        if (pkg["name"], pkg["version"]) in ioc_set:
            ioc_entry = next(
                i for i in iocs
                if i["name"] == pkg["name"] and i["version"] == pkg["version"]
            )
            matches.append({**pkg, "ioc_reference": ioc_entry.get("reference", "")})
    return matches


def send_pagerduty_alert(pod: dict, matches: list[dict]) -> None:
    for match in matches:
        payload = {
            "routing_key": PAGERDUTY_ROUTING_KEY,
            "event_action": "trigger",
            "dedup_key": f"sbom-ioc-{pod['namespace']}-{pod['pod_name']}-{match['name']}-{match['version']}",
            "payload": {
                "summary": (
                    f"IOC package {match['name']}@{match['version']} detected "
                    f"in pod {pod['namespace']}/{pod['pod_name']}"
                ),
                "severity": "critical",
                "source": "sbom-ioc-monitor",
                "custom_details": {
                    "pod_name": pod["pod_name"],
                    "namespace": pod["namespace"],
                    "container": pod["container"],
                    "image_digest": pod["digest"],
                    "image_ref": pod["image_ref"],
                    "matched_package": match["name"],
                    "matched_version": match["version"],
                    "ioc_reference": match.get("ioc_reference", ""),
                },
            },
        }
        resp = requests.post(PAGERDUTY_EVENTS_URL, json=payload, timeout=10)
        if resp.status_code >= 400:
            print(f"PagerDuty rejected event ({resp.status_code}): {resp.text}", file=sys.stderr)


def main() -> None:
    iocs = load_ioc_packages(IOC_FILE)
    if not iocs:
        print("No IOC packages loaded; exiting.")
        sys.exit(0)

    pod_images = get_running_pod_images()
    print(f"Checking {len(pod_images)} running container images against {len(iocs)} IOCs.")

    alert_count = 0
    for pod in pod_images:
        packages = fetch_sbom_packages(pod["image_ref"], pod["digest"])
        if packages is None:
            continue
        matches = check_sbom_against_iocs(packages, iocs)
        if matches:
            send_pagerduty_alert(pod, matches)
            alert_count += len(matches)
            print(
                f"ALERT: {pod['namespace']}/{pod['pod_name']} contains "
                f"{[m['name'] + '@' + m['version'] for m in matches]}"
            )

    print(f"Scan complete. {alert_count} IOC matches found across {len(pod_images)} images.")


if __name__ == "__main__":
    main()

The script skips images that have no SBOM attestation (fetch_sbom_packages returns None) rather than treating them as clean. In step 5, the lack of an attestation is itself an alert condition — images without SBOMs cannot be assessed and must be treated as unknown risk.
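The skip path can be turned into a coverage metric so that missing attestations are visible rather than silent. A sketch of the calculation (the digest-to-result map mirrors what a run of the monitor could record; the 100% threshold is an assumption):

```python
def attestation_coverage(results: dict[str, bool]) -> float:
    """Fraction of running image digests whose SBOM attestation was retrievable.

    `results` maps image digest -> True if fetch_sbom_packages returned a
    package list, False if it returned None.
    """
    if not results:
        return 1.0
    return sum(results.values()) / len(results)

# Example: 3 of 4 unique digests had a verifiable attestation.
results = {
    "sha256:aaa": True,
    "sha256:bbb": True,
    "sha256:ccc": False,  # no attestation: unknown risk, not clean
    "sha256:ddd": True,
}
coverage = attestation_coverage(results)
print(f"SBOM attestation coverage: {coverage:.0%}")
if coverage < 1.0:
    print("WARNING: some running images cannot be assessed against the IOC list")
```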

4. IOC Feed Management

Maintain an ioc-packages.yaml file in a security repository. This file is the single source of truth for the monitoring job. Entries include the package name, the compromised version, a reference to the advisory, and the disclosure date. The Axios incident entries:

packages:
  - name: axios
    version: "1.14.1"
    ecosystem: npm
    disclosed: "2026-03-31"
    reference: "https://osv.dev/vulnerability/GHSA-xxxx-axios-1141"
    notes: "Malicious publish by Sapphire Sleet using stolen npm maintainer token. Contains phantom dependency plain-crypto-js@4.2.1."

  - name: axios
    version: "0.30.0"
    ecosystem: npm
    disclosed: "2026-03-31"
    reference: "https://osv.dev/vulnerability/GHSA-xxxx-axios-0300"
    notes: "Second malicious version published in same campaign."

  - name: plain-crypto-js
    version: "4.2.1"
    ecosystem: npm
    disclosed: "2026-03-31"
    reference: "https://osv.dev/vulnerability/GHSA-xxxx-plaincryptojs-421"
    notes: "Transitive dependency of axios@1.14.1. Contains RAT postinstall hook. May appear in images built during the 11-hour exposure window even if axios itself was not a direct dependency."
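Because the monitor trusts this file completely, a malformed entry silently weakens detection. A small validation sketch that could run as a pre-commit or CI check on the parsed file (field names match the entries above; the float-version gotcha arises because an unquoted YAML version parses as a number and never string-matches an SBOM):

```python
REQUIRED_FIELDS = {"name", "version", "ecosystem", "disclosed", "reference"}

def validate_ioc_entries(data: dict) -> list[str]:
    """Return human-readable problems with a parsed ioc-packages.yaml; empty means valid."""
    problems = []
    for i, entry in enumerate(data.get("packages", [])):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append(f"entry {i}: missing fields {sorted(missing)}")
        version = entry.get("version")
        if version is not None and not isinstance(version, str):
            # An unquoted `version: 4.2` parses from YAML as a float.
            problems.append(f"entry {i}: version must be a quoted string")
    return problems

# Parsed equivalent of a file with one good and one bad entry.
data = {
    "packages": [
        {"name": "axios", "version": "1.14.1", "ecosystem": "npm",
         "disclosed": "2026-03-31",
         "reference": "https://osv.dev/vulnerability/GHSA-xxxx-axios-1141"},
        {"name": "plain-crypto-js", "version": 4.2, "ecosystem": "npm"},
    ]
}
for problem in validate_ioc_entries(data):
    print(problem)
```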

To auto-populate new entries from the OSV database, scan your dependency lockfiles with osv-scanner on a daily schedule and merge any new critical advisories into the IOC file:

osv-scanner \
  --format json \
  --lockfile package-lock.json \
  > osv-results.json

python3 scripts/osv_to_ioc.py \
  --osv-results osv-results.json \
  --ioc-file ioc-packages.yaml \
  --severity CRITICAL \
  --ecosystem npm \
  --output ioc-packages.yaml

For CISA KEV integration, poll the CISA Known Exploited Vulnerabilities catalog and cross-reference against OSV to find npm package names for any actively exploited vulnerability:

curl -sSL https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
  | jq '.vulnerabilities[] | select(.product | test("npm|node|javascript"; "i")) | {cveID, vendorProject, product, dateAdded}' \
  > cisa-kev-npm.json
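The jq filter translates directly to Python once the feed has been downloaded. A sketch against a synthetic feed fragment (the field names follow the published KEV schema; the product-name regex is the same heuristic as above):

```python
import re

def kev_npm_entries(kev: dict) -> list[dict]:
    """Filter a parsed CISA KEV feed for npm/Node/JavaScript-related products."""
    pattern = re.compile(r"npm|node|javascript", re.IGNORECASE)
    return [
        {k: v[k] for k in ("cveID", "vendorProject", "product", "dateAdded")}
        for v in kev.get("vulnerabilities", [])
        if pattern.search(v.get("product", ""))
    ]

# Synthetic feed fragment in the KEV schema.
kev = {
    "vulnerabilities": [
        {"cveID": "CVE-2026-0001", "vendorProject": "ExampleVendor",
         "product": "Node.js agent", "dateAdded": "2026-03-31"},
        {"cveID": "CVE-2026-0002", "vendorProject": "OtherVendor",
         "product": "C compiler", "dateAdded": "2026-03-30"},
    ]
}
print(kev_npm_entries(kev))
```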

5. Alert and Response Workflow

When the monitoring job identifies a compromised package in a running pod, it sends a PagerDuty alert with enough context for the on-call engineer to act immediately. The alert payload structure from the monitoring script produces the following in the PagerDuty incident detail:

summary: "IOC package axios@1.14.1 detected in pod payments/payments-api-7d9f8b-xkjq2"
severity: critical
custom_details:
  pod_name: payments-api-7d9f8b-xkjq2
  namespace: payments
  container: api
  image_digest: sha256:a3f9c1e2d4b87f6a019c3e5d7b2a8f4c1e9d6b3a7f2c5e8d1b4a7f0c3e6d9b2
  image_ref: ghcr.io/your-org/payments-api
  matched_package: axios
  matched_version: "1.14.1"
  ioc_reference: "https://osv.dev/vulnerability/GHSA-xxxx-axios-1141"

For confirmed-contained incidents — where the pod has no active network connections to known C2 IP ranges — trigger an automated rollout restart. The auto-remediation script runs as a second step in the alert workflow, gated on two conditions: (a) the Deployment has a replacement image already available (a newer build that postdates the exposure window), and (b) the pod has no established connections to the known Sapphire Sleet C2 ranges:

import subprocess
import ipaddress

# Placeholder ranges from RFC 5737 documentation space -- replace with the
# C2 indicators published in the advisory.
C2_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def pod_has_c2_connections(namespace: str, pod_name: str) -> bool:
    """Check established TCP connections inside the pod against known C2 ranges.

    Requires the `ss` binary to be present in the target container.
    """
    result = subprocess.run(
        ["kubectl", "exec", "-n", namespace, pod_name,
         "--", "ss", "-tnp", "state", "established"],
        capture_output=True, text=True, timeout=10,
    )
    for line in result.stdout.splitlines():
        # Conservatively test every address-like token (local and peer);
        # header lines and process info fail ip_address() and are skipped.
        for part in line.split():
            if ":" not in part:
                continue
            addr = part.rsplit(":", 1)[0].strip("[]")
            try:
                ip = ipaddress.ip_address(addr)
            except ValueError:
                continue
            if any(ip in net for net in C2_RANGES):
                return True
    return False


def auto_remediate(namespace: str, deployment_name: str, pod_name: str) -> None:
    if pod_has_c2_connections(namespace, pod_name):
        print(f"C2 connections detected in {namespace}/{pod_name}; skipping auto-restart. Escalate to IR.")
        return

    subprocess.run(
        ["kubectl", "rollout", "restart",
         f"deployment/{deployment_name}", "-n", namespace],
        check=True,
        timeout=30,
    )
    print(f"Auto-remediated: restarted deployment/{deployment_name} in {namespace}.")

The auto-remediation only restarts the Deployment if no C2 connections are active. If C2 connections are present, the pod is treated as actively compromised and escalated to the incident response team for forensic collection before any restart.

Expected Behaviour After Hardening

After SBOM attestation: Running cosign verify-attestation --type spdxjson ghcr.io/your-org/your-app@sha256:<digest> returns the SBOM as a signed JSON payload. Piping through jq to filter for axios packages shows the version installed in that image. For a build made during the Axios exposure window, the output includes "versionInfo": "1.14.1". For a clean build, it shows the correct version. This query takes under 5 seconds per image and requires no image pull.

After continuous monitoring: Within 15 minutes of plain-crypto-js@4.2.1 being added to ioc-packages.yaml on March 31 2026, the CronJob completes its next cycle. Every running pod whose image SBOM contains the Axios IOC entries is identified, and a PagerDuty critical alert is generated per match. The on-call engineer receives an alert with the pod name, namespace, image digest, and a link to the OSV advisory — sufficient to begin incident response without logging into any system.

After auto-remediation: For pods in non-sensitive namespaces where the Deployment has a clean replacement image and no C2 connections are detected, the monitoring job triggers kubectl rollout restart. The Deployment schedules a new pod from the latest image (which was built after the exposure window and contains axios@1.8.4). The compromised pod is terminated. The replacement pod’s SBOM attestation does not contain the IOC packages, and no alert is generated on the next monitoring cycle.

Trade-offs and Operational Considerations

SBOM attestation requires a consistent build pipeline. Images built outside the standard pipeline — local developer builds pushed directly to the registry, base images pulled from third-party registries, legacy images built before SBOM generation was adopted — will have no attestation. The monitoring job skips them. An unknown image is not the same as a clean image. Track the ratio of images with attestations to total running images. If the coverage is below 100%, investigate and remediate the gap before relying on the monitor for blast radius assessment.

cosign attestation retrieval adds latency to the monitoring job. Each cosign verify-attestation call takes 1–3 seconds, depending on registry response time and attestation size. For a cluster with 500 Deployments and an average of 2 containers per pod, the monitoring job is calling cosign verify-attestation approximately 1,000 times per cycle. At 2 seconds each, that is 33 minutes of serial work — more than twice the 15-minute schedule interval. For large clusters, cache SBOM data locally in a Redis or SQLite store, keyed by image digest. Only re-fetch the attestation when the digest changes (i.e., when the image is updated). Digest changes are detectable via the Kubernetes API without fetching the SBOM:

def digest_changed(image_ref: str, digest: str, cache: dict) -> bool:
    cached_digest = cache.get(image_ref)
    if cached_digest == digest:
        return False
    cache[image_ref] = digest
    return True

With digest-change-gated fetching, the monitoring job only calls cosign verify-attestation when a pod image changes — which, in a stable cluster, happens far less frequently than every 15 minutes.
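Note that each CronJob run is a fresh process, so the cache must live in persistent storage to survive between cycles; an in-process dict resets every 15 minutes and defeats the purpose. A minimal SQLite-backed sketch (the database path and table name are assumptions; mount the path on a persistent volume):

```python
import sqlite3

class DigestCache:
    """Persistent image-ref -> digest map, so repeat CronJob runs skip
    cosign calls for images that have not changed."""

    def __init__(self, path: str = "/var/cache/sbom-monitor/digests.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS digests (image_ref TEXT PRIMARY KEY, digest TEXT)"
        )

    def digest_changed(self, image_ref: str, digest: str) -> bool:
        row = self.conn.execute(
            "SELECT digest FROM digests WHERE image_ref = ?", (image_ref,)
        ).fetchone()
        if row is not None and row[0] == digest:
            return False
        # New image ref, or the ref now points at a different digest: record it.
        self.conn.execute(
            "INSERT INTO digests (image_ref, digest) VALUES (?, ?) "
            "ON CONFLICT(image_ref) DO UPDATE SET digest = excluded.digest",
            (image_ref, digest),
        )
        self.conn.commit()
        return True
```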

Auto-remediation must be conservative. The automatic kubectl rollout restart is only safe when the replacement image is already available and has been verified clean. Do not trigger auto-restart if the latest image tag points to an image that was also built during the exposure window. Verify that the replacement image’s SBOM attestation does not contain the IOC packages before restarting. An automated restart with a still-compromised image achieves nothing except disrupting service.

Failure Modes

SBOMs generated but not attached as attestations. If the cosign attest step is missing or failing silently, the SBOM file exists as a build artifact in the CI run but is not queryable from the registry. The monitoring job finds no attestation and skips the image. The image may contain IOC packages and the monitor will never know. Validate the attestation step in CI by running cosign verify-attestation as a post-step assertion in the pipeline and failing the build if the attestation is absent.

IOC feed not updated when new compromised packages are disclosed. The monitoring job is only as useful as the IOC list it checks against. If the security team learns of a new supply chain compromise three days after public disclosure and does not add it to ioc-packages.yaml until day four, the monitor has been running blind for four days. Automate the IOC feed update process using OSV and CISA KEV integrations. Alert on the age of the IOC feed: if the newest entry is more than 24 hours old and a new npm advisory has been published in that period, generate a warning.
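Feed staleness can be checked mechanically from the disclosed dates already present in each entry. A sketch of the age calculation (the warning threshold mirrors the 24-hour figure above; correlating against newly published npm advisories is a separate feed lookup):

```python
from datetime import date

def ioc_feed_age_days(iocs: list[dict], today: date) -> int:
    """Days since the newest `disclosed` date in the IOC feed."""
    newest = max(date.fromisoformat(i["disclosed"]) for i in iocs)
    return (today - newest).days

iocs = [
    {"name": "axios", "version": "1.14.1", "disclosed": "2026-03-31"},
    {"name": "plain-crypto-js", "version": "4.2.1", "disclosed": "2026-03-31"},
]
age = ioc_feed_age_days(iocs, today=date(2026, 4, 4))
print(f"IOC feed age: {age} days")
if age > 1:
    print("WARNING: IOC feed may be stale; check for recent npm advisories")
```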

Monitor running but alert routed to a low-priority queue. If PagerDuty routing rules send the SBOM monitor alerts to a queue reviewed weekly rather than immediately, a compromised package detected on day one may not receive a human response until day seven. The detection is working; the response workflow is not. Map IOC match alerts to the same routing key as critical infrastructure alerts. Test the routing monthly by running the monitor against a test pod containing a synthetic IOC package entry.

SBOM generated at build time but reflects only direct dependencies — not transitive. A Syft scan of the container filesystem captures all installed packages including transitive ones. But a Syft scan of the package.json or package-lock.json file (rather than the image) captures only the declared dependency tree. plain-crypto-js@4.2.1 is a transitive dependency of axios@1.14.1. If the SBOM was generated from the manifest rather than the installed filesystem, plain-crypto-js will not appear in the SBOM, and the IOC match will not fire. Always generate SBOMs by scanning the built image (syft <image-ref>), not the source manifests.