Supply Chain Risk Management: A Programme for Third-Party Software and Dependency Risk
Problem
Modern production software is mostly third-party code; common industry analyses put the figure at 70–90% of a typical codebase. A typical Node.js application has 500–1,000 transitive npm dependencies. A Python microservice may import 200 packages, each pulling its own subtree. A container image layers base OS packages, language runtimes, system libraries, and application dependencies into a single artifact where no single team owns more than a fraction of the surface area.
This dependency concentration has made software supply chains the preferred attack vector for sophisticated adversaries. The SolarWinds compromise injected malicious code into a vendor build system, and the trojanised update shipped to roughly 18,000 organisations. The XZ Utils backdoor was planted in a compression library present in almost every Linux distribution by a threat actor who spent two years cultivating maintainer trust. The event-stream npm attack targeted a specific downstream consumer by hijacking a transitive dependency. The common thread: attackers are not breaking down the front door. They are walking in through a dependency your team approved three years ago and has not reconsidered since.
A supply chain risk management programme is the organisational response to this threat. It is not a single tool, scanner, or policy. It is a set of linked practices: maintaining a current inventory of what you actually depend on, scoring the risk each dependency represents, applying tiered controls commensurate with that risk, monitoring for compromise and new vulnerability signals, and having a tested incident response procedure for the day a dependency you use is confirmed malicious or compromised.
Target systems: Any software development organisation shipping code with third-party dependencies. Tools referenced include Syft, Grype, SPDX tools, CycloneDX CLI, OSSF Scorecard, Deps.dev, Socket.dev, Artifactory/Nexus (private registry), GitHub Advanced Security, and standard CI/CD pipelines.
Threat Model
1. Compromised package version. An attacker gains access to a maintainer’s npm, PyPI, or RubyGems account — via credential stuffing, phishing, or by finding an account with no MFA — and publishes a new version of a legitimate package containing malicious code. Your Dependabot or Renovate automation pulls the update. CI passes because the malicious code does not break tests. It reaches production and exfiltrates secrets or establishes a reverse shell.
2. Dependency confusion / namespace attack. An attacker publishes a public package with the same name as an internal private package. Package managers configured to check public registries before private ones fetch and execute the malicious public version. First demonstrated by Alex Birsan in 2021 against Microsoft, Apple, and Shopify, this attack remains viable wherever private package namespacing is not enforced at the registry level.
3. Transitive dependency injection. Your direct dependencies look fine. A rarely-examined transitive dependency four levels deep has been abandoned by its maintainer and taken over by a new “contributor” who added a cryptominer in a “minor bugfix” release. You have no policy on transitive dependencies. Your scanner checks your direct dependencies only. You have no SBOM telling you this package exists in your build.
4. Compromised build tool or CI runner. The attack is not in a package — it is in the build system that assembles the packages. A compromised GitHub Actions runner, a malicious Makefile include, a tampered Docker base image, or an injected build step rewrites your compiled artifact after the source is assembled. The code in your repository is clean. The artifact that ships is not.
5. Malicious open-source maintainer. A threat actor invests months or years cultivating the appearance of a legitimate contributor to a dependency you use. Once they gain commit access or maintainer rights, they introduce a backdoor. The XZ Utils attack is the documented example; the technique requires patience but yields access to millions of downstream consumers simultaneously.
Blast radius: Supply chain attacks are lateral movement at scale. A single compromised dependency installed by 100 of your services compromises all 100 simultaneously with a single attacker action. Controls that would stop a direct attack — network segmentation, least privilege, EDR — are often bypassed because the malicious code runs as the legitimate application process with its legitimate credentials.
Implementation
Phase 1: Attack Surface Taxonomy
Before building controls, map the full attack surface. Supply chain risk is not just npm packages. The complete attack surface includes:
Direct and transitive dependencies. Libraries your code explicitly imports, and every package those libraries import in turn. Transitive dependencies are typically 5–10x the count of direct dependencies and receive far less scrutiny.
Build tools. Compilers, build systems (Make, Gradle, Cargo), bundlers (webpack, esbuild), and code generators. A compromised compiler or build plugin can inject malicious code into every artifact your CI produces.
CI/CD platforms and their integrations. GitHub Actions workflows, Jenkins plugins, ArgoCD operators, Tekton tasks. A malicious GitHub Action used in your workflow has full access to your repository secrets and build environment.
Base container images. The ubuntu:22.04, python:3.12-slim, or node:20-alpine images your Dockerfiles inherit from. A compromised base image tag propagates across every service that uses it.
Cloud services and infrastructure APIs. Terraform providers, Helm chart repositories, Kustomize bases pulled from GitHub. An attacker controlling a Terraform provider can modify what infrastructure your CI deploys.
Open-source maintainer accounts. The human accounts that sign releases, push to package registries, and merge PRs. Account compromise is faster and cheaper than code compromise. MFA bypass and credential theft on package registry accounts is an active threat.
Document this taxonomy for your organisation. Every category requires a distinct set of controls.
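For the CI/CD category in particular, one concrete control is pinning third-party actions to full commit SHAs rather than mutable tags. A minimal audit sketch, assuming GitHub Actions workflows (the sample workflow content is illustrative):

```shell
# List actions referenced in workflow files and flag any that are pinned
# to a mutable tag instead of a 40-character commit SHA.
mkdir -p demo/.github/workflows
cat > demo/.github/workflows/ci.yml <<'EOF'
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
EOF
grep -rhoE 'uses: *[^ ]+' demo/.github/workflows \
  | awk '{print $2}' \
  | grep -vE '@[0-9a-f]{40}$' \
  | sed 's/^/tag-pinned: /'
```

A tag-pinned action can be silently replaced by anyone who controls (or hijacks) the tag; a SHA-pinned one cannot.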
Phase 2: Dependency Inventory with SBOM
You cannot manage risk in what you cannot see. A Software Bill of Materials (SBOM) is a structured machine-readable inventory of every component in an artifact.
Generating SBOMs. Use Syft to generate SBOMs at build time. Run it against container images (post-build) and source trees (for language-level dependency graphs):
# Generate SBOM for a container image (CycloneDX JSON format)
syft nginx:1.27 -o cyclonedx-json > nginx-1.27.sbom.json
# Generate SBOM for a source tree
syft dir:./myapp -o spdx-json > myapp.sbom.json
# Count the components syft discovers in a source tree
syft dir:./myapp -o cyclonedx-json \
  | jq '.components | length'
Integrate SBOM generation into your CI pipeline as a mandatory build step, not an optional scan:
# GitHub Actions — SBOM generation and upload as attestation
- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    image: ${{ env.IMAGE_REF }}
    format: cyclonedx-json
    output-file: sbom.cyclonedx.json
- name: Attest SBOM
  uses: actions/attest-sbom@v1
  with:
    subject-name: ${{ env.IMAGE_REF }}
    sbom-path: sbom.cyclonedx.json
    push-to-registry: true
SBOM format comparison. Two formats dominate:
| Property | SPDX | CycloneDX |
|---|---|---|
| Governing body | Linux Foundation / SPDX workgroup | OWASP |
| Primary use case | Licence compliance, legal | Security, vulnerability correlation |
| CVE correlation | Possible but secondary | First-class via vulnerabilities component |
| VEX support | Via external profile | Native vulnerabilities field |
| Tooling breadth | Broader (Fossology, FOSSA) | Stronger in security tooling (Grype, Dependency-Track) |
| Government compliance | NTIA baseline compliant | NTIA baseline compliant |
For security-first supply chain programmes, CycloneDX is the better default: its vulnerabilities and services components make it directly consumable by vulnerability correlation tools. Use SPDX when licence management is the primary use case or when a downstream partner requires it.
SBOM storage and querying. SBOMs are only useful if they are queryable. Store them in a purpose-built tool:
- OWASP Dependency-Track is the open-source standard: it ingests CycloneDX SBOMs, correlates components against the NVD, OSV, and GitHub Advisory databases, and provides a per-project vulnerability dashboard.
- Grype can query an SBOM file directly from the command line for one-off vulnerability checks.
# Scan an SBOM file for known vulnerabilities
grype sbom:myapp.sbom.json --fail-on high
# Query Dependency-Track API for project vulnerabilities
curl -s -H "X-Api-Key: $DT_API_KEY" \
"https://dependencytrack.example.com/api/v1/vulnerability/project/$PROJECT_UUID" \
| jq '.[] | select(.severity == "CRITICAL") | {name, cvssV3BaseScore, component: .components[].name}'
Upload SBOMs to Dependency-Track automatically as part of the release pipeline. Every deployed artifact should have a corresponding SBOM visible in the tool, so that when a new CVE is published you can immediately answer: “do we have this component in production?”
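That "do we have this component?" question can also be answered locally against a stored SBOM file. A minimal sketch against a CycloneDX SBOM (the sample file and component are illustrative; at scale you would query Dependency-Track's API instead):

```shell
# Answer "do we have this component?" from a stored CycloneDX SBOM.
cat > myservice.sbom.json <<'EOF'
{"bomFormat":"CycloneDX","components":[
  {"name":"lodash","version":"4.17.21","purl":"pkg:npm/lodash@4.17.21"},
  {"name":"express","version":"4.19.2","purl":"pkg:npm/express@4.19.2"}
]}
EOF
jq -r --arg name "lodash" \
  '.components[] | select(.name == $name) | "\(.name)@\(.version)"' \
  myservice.sbom.json
```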
Phase 3: Risk Scoring Dependencies
An inventory alone does not prioritise remediation effort. Apply a consistent risk score to each dependency to determine which warrant the most stringent controls.
OSSF Scorecard. Run OpenSSF Scorecard against every direct dependency to assess security process maturity. Key checks relevant to supply chain risk:
export GITHUB_AUTH_TOKEN="ghp_your_token"
# Score a dependency
scorecard --repo=github.com/example/mylib --format json \
| jq '{
aggregate: .score,
signed_releases: (.checks[] | select(.name == "Signed-Releases") | .score),
branch_protection: (.checks[] | select(.name == "Branch-Protection") | .score),
code_review: (.checks[] | select(.name == "Code-Review") | .score),
maintained: (.checks[] | select(.name == "Maintained") | .score)
}'
A dependency scoring below 4 out of 10 on Scorecard, particularly with low Signed-Releases and Code-Review scores, is structurally predisposed to undetected compromise. Treat it as Tier 1 risk regardless of its current vulnerability status.
Criticality score. Google’s criticality_score tool quantifies how widely a project is depended on across the open-source ecosystem — a signal of blast radius if compromised:
criticality_score --repo github.com/example/mylib
# Returns: criticality_score, commit_frequency, contributor_count,
# dependents_count, github_mention_count
High criticality score + low Scorecard score = highest supply chain risk: many consumers, poor security practices.
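One way to operationalise that combination is a single composite number per dependency. A minimal sketch, assuming a Scorecard aggregate on a 0–10 scale and a criticality score on a 0–1 scale; the formula and weighting are an illustrative policy choice, not a standard:

```shell
# risk = (process weakness) x (blast radius): invert the Scorecard score
# so higher means worse, then scale by criticality. Inputs are example values.
scorecard=3.2     # OSSF Scorecard aggregate (0-10, higher = better process)
criticality=0.85  # criticality_score output (0-1, higher = more depended on)
awk -v s="$scorecard" -v c="$criticality" \
  'BEGIN { printf "supply_chain_risk=%.2f\n", (10 - s) / 10 * c }'
```

Sorting dependencies by this number gives a defensible first cut at tier assignment for Phase 4.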
Composite risk dimensions. Score each direct dependency across four dimensions:
| Dimension | High Risk Signal | Data Source |
|---|---|---|
| Maintenance activity | < 10 commits / 90 days; no response to issues | Scorecard “Maintained” check |
| Security policy | No SECURITY.md; no private disclosure channel | Scorecard “Security-Policy” check |
| Release integrity | Unsigned releases; no SLSA provenance | Scorecard “Signed-Releases”; SLSA verifier |
| OpenSSF Best Practices | No badge or < 50% passing | https://bestpractices.coreinfrastructure.org |
Phase 4: Tiered Controls by Risk
Use your risk scores to assign each dependency to a control tier. Apply controls appropriate to the risk, not the same controls to everything.
Tier 1 — Critical dependencies. These are packages that, if compromised, would have the highest impact: packages with deep transitive reach (imported by many of your other dependencies), packages with access to credentials or network sockets, packages that are part of your authentication or cryptography stack, and packages with low Scorecard scores (below 5) or no signed releases.
Controls for Tier 1:
- Pin by digest, not version tag. A version tag is mutable. A digest (`sha256:abc123`) is a content address and cannot be forged.
- Require SLSA provenance attestation where the publisher provides it.
- SBOM generated and stored for every release of every service that uses this package.
- Monitoring active: subscribe to GitHub repository notifications, NVD feeds, and OSV advisories for the package.
- Manual approval required for version upgrades; do not auto-merge Dependabot PRs for Tier 1 packages.
# Pin npm package by integrity hash (package-lock.json does this automatically;
# verify the lockfile is committed and verified in CI)
npm ci # uses lockfile — never `npm install` in CI
# Pin Python dependency by hash in requirements.txt
pip-compile --generate-hashes requirements.in
# Output format: package==1.2.3 --hash=sha256:abc123...
# Pin Go module by sum database entry (go.sum enforces this)
go mod verify # verify all module checksums against go.sum
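Hash pinning only helps if the lockfile actually carries a hash for every entry. A minimal CI check sketch for npm lockfiles (the sample lockfile content is illustrative, and real lockfiles have legitimate exceptions such as linked workspace packages):

```shell
# Fail the build if any package-lock entry lacks an integrity hash; such
# an entry is not content-addressed and could resolve to different bytes.
cat > package-lock.json <<'EOF'
{"packages":{
  "": {"name":"myapp"},
  "node_modules/lodash": {"version":"4.17.21","integrity":"sha512-AAAA"},
  "node_modules/left-pad": {"version":"1.3.0"}
}}
EOF
missing=$(jq -r '.packages | to_entries[]
  | select(.key != "" and (.value.integrity == null)) | .key' package-lock.json)
if [ -n "$missing" ]; then
  echo "entries without integrity hash:"
  echo "$missing"
fi
```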
Tier 2 — Important dependencies. Packages with moderate risk scores, no known vulnerabilities, and reasonable maintenance signals. Direct dependencies that are not in your security-critical path.
Controls for Tier 2:
- Pin by exact version (e.g., `mylib==2.3.1`, not `mylib>=2.3`).
- SCA scanning on every build via Grype, Trivy, or Snyk.
- Automated Dependabot / Renovate PRs for patch and minor updates, with CI required to pass before merge.
- CVE monitoring via Dependency-Track or GitHub Dependabot alerts.
Tier 3 — Standard dependencies. Low-risk packages: well-maintained, high Scorecard scores, no security-sensitive functionality, widely used with strong ecosystem oversight.
Controls for Tier 3:
- SCA scanning on every build; block on critical CVEs.
- Dependabot / Renovate automated PRs permitted to auto-merge if CI passes.
- SBOM inclusion in standard build SBOM.
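The tier split can be encoded directly in update automation. A hedged sketch of a renovate.json fragment, assuming Renovate is the updater; the package names are illustrative placeholders, and because Renovate applies later rules over earlier ones, the Tier 1 rule comes last so it wins:

```json
{
  "packageRules": [
    {
      "description": "Tier 3: auto-merge patch and minor updates when CI passes",
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    },
    {
      "description": "Tier 1: never auto-merge; require human review (names illustrative)",
      "matchPackageNames": ["jsonwebtoken", "node-forge"],
      "automerge": false
    }
  ]
}
```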
Phase 5: Monitoring for Supply Chain Events
Tier controls protect you at build time. Monitoring protects you from events that occur between builds.
Socket.dev for npm packages. Socket analyses npm packages for behavioural signals beyond CVEs: new network access, new shell invocations, new environment variable reads, install hooks added in a new version, obfuscated code. Integrate Socket into CI to block packages that exhibit supply-chain attack patterns even before a CVE is published:
# Socket CI integration — blocks installs with supply chain risk signals
npx @socketsecurity/cli ci --allow-install --dry-run
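Even without Socket, the install-hook signal is checkable with a local diff of package manifests before merging an update PR. A minimal sketch (old.json and new.json are illustrative sample manifests for two versions of the same package):

```shell
# Flag install-time hooks (preinstall/install/postinstall) that appear in
# a new version but not the old one — a classic compromise signal.
cat > old.json <<'EOF'
{"name":"mylib","version":"2.3.0","scripts":{"test":"jest"}}
EOF
cat > new.json <<'EOF'
{"name":"mylib","version":"2.3.1","scripts":{"test":"jest","postinstall":"node collect.js"}}
EOF
jq -r '.scripts | keys[]' new.json \
  | grep -E '^(pre|post)?install$' \
  | while read -r hook; do
      jq -e --arg h "$hook" '.scripts[$h]' old.json >/dev/null \
        || echo "new install hook: $hook"
    done
```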
PyPI malware feeds. The Python Software Foundation publishes a real-time event feed for PyPI package events. Monitor for new releases of packages you use, and subscribe to the PyPI security advisories list. Tools like pip-audit check installed packages against the OSV database:
pip-audit --requirement requirements.txt --format json \
| jq '.dependencies[] | select(.vulns | length > 0)'
GitHub Security Advisories. Subscribe to the GitHub Advisory Database for your dependencies. GitHub Advisory feeds are the fastest path to CVE data for many package ecosystems. Dependency-Track subscribes to these feeds automatically. For manual monitoring:
# Query GitHub Advisory Database for a specific package
gh api graphql -f query='
{
securityVulnerabilities(ecosystem: NPM, package: "mypackage", first: 5) {
nodes {
advisory { summary severity publishedAt }
vulnerableVersionRange
firstPatchedVersion { identifier }
}
}
}'
OSV.dev for cross-ecosystem CVE correlation. The Open Source Vulnerability database (osv.dev) aggregates advisories from PyPI, npm, Maven, Go, Rust, and many other ecosystems in a unified schema. Query it via API or use osv-scanner:
osv-scanner --lockfile package-lock.json --lockfile requirements.txt \
--format table
Deps.dev. Google’s deps.dev provides dependency graph data, known vulnerability counts, Scorecard scores, and OpenSSF Best Practices badge status via API. Use it to monitor the health of dependencies over time:
curl -s "https://api.deps.dev/v3alpha/systems/npm/packages/lodash/versions/4.17.21" \
| jq '{advisories: .advisoryKeys, scorecard: .scorecardV2.score}'
Phase 6: Private Registries as a Control Layer
A private registry proxy is one of the highest-leverage controls in a supply chain programme. Rather than allowing your CI pipelines and developer machines to fetch packages directly from public registries, all package fetches flow through a controlled internal proxy.
What a private registry proxy provides:
- Allowlist control. Only packages explicitly approved pass through. Unapproved packages — including typosquatted and dependency-confusion packages — are blocked at the registry layer.
- Scan-on-download. The proxy scans every version before serving it. A package that fails the vulnerability policy is blocked before it reaches any build.
- Audit log. Every package download is logged with the requesting service identity, enabling retrospective analysis after a supply chain incident.
- Caching and availability. The registry caches approved versions, insulating your builds from upstream outages and reducing the surface area of live dependency resolution at build time.
Configuring Artifactory as an npm proxy:
# .npmrc pointing to private registry proxy
registry=https://artifactory.example.com/artifactory/api/npm/npm-virtual/
//artifactory.example.com/artifactory/api/npm/npm-virtual/:_authToken=${ARTIFACTORY_TOKEN}
# Scope internal packages to internal registry
@myorg:registry=https://artifactory.example.com/artifactory/api/npm/npm-local/
Configuring pip to use Artifactory:
# pip.conf
[global]
index-url = https://artifactory.example.com/artifactory/api/pypi/pypi-virtual/simple/
trusted-host = artifactory.example.com
Block direct public registry access in CI. Network policy in CI runners should deny outbound HTTPS to registry.npmjs.org, pypi.org, proxy.golang.org, and other public registries. All traffic must pass through the internal proxy. This enforces the allowlist at the network layer rather than relying on configuration.
Mitigating dependency confusion. Configure your private registry to prioritise internal packages over public packages for the same name. In Artifactory, set internal local repositories before remote proxy repositories in the virtual repository aggregation order. Establish a naming convention for internal packages (e.g., @myorg/ scope for npm) and enforce that convention in the proxy allowlist.
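A quick way to audit the naming convention is to diff your internal package list against the scope rule. A minimal sketch (the internal package list is an illustrative fixture; in practice, export it from your private registry):

```shell
# Flag internal packages outside the @myorg scope; an unscoped internal
# name can be claimed on the public registry by anyone.
cat > internal-packages.txt <<'EOF'
@myorg/auth-client
build-utils
@myorg/logging
deploy-scripts
EOF
grep -v '^@myorg/' internal-packages.txt | sed 's/^/unscoped: /'
```

Every name this prints is a standing dependency-confusion target until it is scoped or explicitly reserved on the public registry.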
Phase 7: Incident Response for Supply Chain Compromise
When a dependency you use is confirmed compromised — malicious code confirmed in a published version, maintainer account hijacked, or a CVE published for a version you are running — you need a tested response procedure.
Detection signals. You will learn about a compromise from one of:
- GitHub Security Advisory or NVD CVE publication for a package you use (surfaced by Dependency-Track or Dependabot alerts)
- Socket.dev or PyPI security feed alert for a new version of a package you use
- Direct public disclosure (blog post, security researcher tweet, news article)
- Internal detection: anomalous network calls, unexpected process executions, EDR alert from a process matching a known package
Emergency response procedure:
# Step 1: Determine which services are affected
# Query Dependency-Track for all projects containing the affected component
curl -s -H "X-Api-Key: $DT_API_KEY" \
"https://dependencytrack.example.com/api/v1/component/identity?purl=pkg:npm/malicious-pkg" \
| jq '.[].project.name'
# Step 2: Identify the affected version range from the advisory
# and confirm which version you are running
cat package-lock.json | jq '.packages["node_modules/malicious-pkg"].version'
# Step 3: Check if a safe version is available
npm view malicious-pkg versions --json | jq 'last(.[]) '
Emergency update procedure:
# Force update to safe version
npm install malicious-pkg@SAFE_VERSION
# Verify the lockfile reflects the safe version
cat package-lock.json | jq '.packages["node_modules/malicious-pkg"].version'
# Verify the installed package hash matches the expected hash
# (published in the advisory or in the safe version's npm metadata)
npm view malicious-pkg@SAFE_VERSION dist.integrity
# Rebuild and redeploy all affected services
# (do not wait for the normal release cycle)
Rollback. If no safe version is available, roll back affected services to the last artifact built before the compromised version was introduced. Your SBOM records tell you which build first included the affected version. Use your CI/CD platform’s artifact history to identify and redeploy the last clean artifact:
# Identify the last clean image using SBOM evidence
# (assumes SBOMs stored in Dependency-Track per build)
curl -s -H "X-Api-Key: $DT_API_KEY" \
"https://dependencytrack.example.com/api/v1/project?name=myservice&sortBy=lastBomImport&sortOrder=ASC" \
| jq '.[] | select(.lastBomImport < "2026-01-15T00:00:00Z") | {version, lastBomImport}'
Post-incident review. After recovery, answer:
- How long was the compromised package in production?
- Which detection source found it? How quickly?
- Was the SBOM sufficient to identify all affected services immediately?
- Did the private registry proxy block the compromised version from reaching any service, or did it pass through?
- What monitoring rule would have caught this faster?
Verification
After implementing the programme, verify each layer:
# 1. Confirm SBOM is generated for every container image
syft myapp:latest -o cyclonedx-json | jq '.metadata.component.name'
# 2. Confirm SCA scan runs and fails on critical CVEs
grype sbom:myapp.sbom.json --fail-on critical; echo "Exit: $?"
# 3. Confirm all packages are pinned by hash in lockfiles
# npm: verify lockfile is present and used in CI
test -f package-lock.json && npm ci --dry-run
# Python: verify hash pinning in requirements
grep -c "\-\-hash=sha256:" requirements.txt
# 4. Confirm dependency traffic routes through private registry
# (run from within a CI runner network)
curl -v https://registry.npmjs.org 2>&1 | grep -E "(connect|refused|403)"
# 5. Confirm Dependency-Track receives SBOMs for new builds
curl -s -H "X-Api-Key: $DT_API_KEY" \
"https://dependencytrack.example.com/api/v1/project?name=myapp" \
| jq '.[0].lastBomImport'
# 6. Confirm Scorecard scores are tracked for direct dependencies
scorecard --repo=github.com/example/critical-dep --format json \
| jq '.score'
Hardening Checklist
- [ ] Supply chain attack surface taxonomy documented (dependencies, build tools, CI, base images, registry accounts)
- [ ] SBOM generated for every container image and source artifact at build time
- [ ] SBOMs ingested into Dependency-Track or equivalent for vulnerability correlation
- [ ] SBOM format chosen (CycloneDX for security focus; SPDX for licence focus)
- [ ] Direct dependencies scored using OSSF Scorecard; scores tracked over time
- [ ] Dependencies assigned to risk tiers (Tier 1 / 2 / 3) with documented criteria
- [ ] Tier 1 dependencies pinned by digest or cryptographic hash
- [ ] Tier 2 dependencies pinned by exact version
- [ ] SCA scanning (Grype/Trivy/Snyk) runs on every build; critical CVEs block deployment
- [ ] Private registry proxy configured as the only permitted package source in CI
- [ ] Dependency confusion mitigated: internal package names scoped or allowlisted at registry
- [ ] Direct public registry access blocked by network policy in CI runners
- [ ] Socket.dev (npm) or pip-audit (Python) monitoring active for behavioural supply chain signals
- [ ] OSV scanner or Dependency-Track subscribed to GitHub Advisory and NVD feeds
- [ ] Dependabot / Renovate enabled; Tier 1 PRs require manual approval
- [ ] Supply chain incident response procedure documented and tested
- [ ] SBOM-based affected service identification tested against a simulated advisory
- [ ] Post-incident review template defined
Related Articles
- OpenSSF Scorecard for Supply Chain Security — detailed Scorecard configuration, CI enforcement, and interpreting check results
- Vulnerability Management Program — SLA-based remediation workflow for vulnerabilities discovered through SCA scanning
- npm Package Integrity Verification — digest pinning, provenance attestation, and Sigstore verification for the npm ecosystem
- DevSecOps Maturity Model — integrating supply chain controls into a broader DevSecOps programme