DevSecOps Done Right: Embedding Security into Every Stage of Your Pipeline

A practical guide to shifting security left: SAST, DAST, container scanning, secrets detection, IaC analysis, and GitHub Actions hardening.

Tech Talk News Editorial · 9 min read · Updated Nov 20, 2024
#devsecops #security #cicd #devops #sast

Adding Snyk to a pipeline and calling it DevSecOps is like wearing a seatbelt while texting. You've checked the box, but you haven't actually changed the risk. I've seen both good and bad security programs up close, and the difference isn't the tooling. It's whether security is integrated into the workflow developers already live in, or whether it exists as a separate function that periodically produces reports nobody reads.

Most "security gates" get bypassed the first time they slow down a release. That's not a developer problem. It's a calibration problem. If your gates cry wolf on every PR, they'll get disabled or overridden. The goal is fewer, higher-confidence signals that developers trust enough to actually act on.

What Shifting Left Actually Means

In the traditional model, security review happens after development: penetration tests run against staging environments weeks before release, findings get deprioritized against feature work, and the feedback loop is so long that fixing issues requires context-switching back to code written months ago. Nobody wins in that model.

Shifting left means moving security checks as early as possible. IDE plugins catch issues as you type. Pre-commit hooks catch issues before code is pushed. CI catches issues before code is merged. CD gates prevent insecure artifacts from reaching production. The cost of fixing a vulnerability rises sharply the later it's found. A finding in a pre-commit hook takes minutes to fix; the same finding in a production pentest can take weeks and still leave the question of whether it was already exploited.

Security that only security engineers can run is security theater. The goal is findings that reach the developer who wrote the code within minutes of writing it.

SAST: Static Application Security Testing

SAST analyzes source code for security vulnerabilities without executing it. Three tools worth knowing:

  • Semgrep: The most developer-friendly SAST tool available. Rules are readable YAML patterns, you can write custom rules for your codebase's specific patterns in minutes, and it runs fast enough to be a CI gate. The free rule registry covers OWASP Top 10 for every major language. Run it on every PR.
  • CodeQL: GitHub's semantic analysis engine. Slower than Semgrep but it understands data flow and can trace a user-controlled value through multiple function calls to find injection vulnerabilities that pattern matching misses. Run it on merge to main and weekly on scheduled workflows.
  • SonarQube / SonarCloud: Combines SAST with code quality metrics. Better suited as a developer-facing quality dashboard than a security gate due to its false positive rate, but useful for tracking security debt over time.
# .github/workflows/sast.yml
name: SAST

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  semgrep:
    runs-on: ubuntu-latest
    container:
      image: semgrep/semgrep
    steps:
    - uses: actions/checkout@v4
    - name: Run Semgrep
      run: |
        semgrep ci \
          --config=p/owasp-top-ten \
          --config=p/nodejs \
          --config=p/typescript \
          --sarif \
          --output=semgrep.sarif
      env:
        SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}

    - name: Upload SARIF results
      uses: github/codeql-action/upload-sarif@v3
      with:
        sarif_file: semgrep.sarif
      if: always()
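CodeQL complements the Semgrep gate with the data-flow analysis described above, on merge to main and on a weekly schedule. A minimal sketch; the language matrix and cron schedule are assumptions to adjust for your stack:

```yaml
# .github/workflows/codeql.yml
name: CodeQL

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1'   # Weekly deep scan, Monday 06:00 UTC

permissions:
  contents: read
  security-events: write   # Required to upload findings to GitHub Security

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: github/codeql-action/init@v3
      with:
        languages: javascript-typescript  # Assumption: adjust for your codebase
    - uses: github/codeql-action/analyze@v3
```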

Dependency Scanning

Your application is mostly third-party code. A vulnerability in a transitive dependency is your vulnerability. Three layers of defense:

  • Dependabot / Renovate: Automated dependency update PRs that surface new versions and security advisories. Renovate is more configurable (grouping, scheduling, automerge rules) and worth the setup time for larger repositories.
  • Snyk: Scans dependencies against its vulnerability database, understands reachability (does your code actually call the vulnerable function?), and can suggest fix PRs. The reachability analysis dramatically reduces false positives compared to simple version-range matching. This is the differentiator that makes Snyk worth using.
  • OWASP Dependency-Check: Open-source, runs against the NVD database, integrates with Maven, Gradle, npm, and most major build systems. Less polished than Snyk but free and useful as a second opinion.
# npm audit in CI: fail on high/critical
- name: Audit dependencies
  run: npm audit --audit-level=high

# Snyk scan
- name: Snyk security scan
  uses: snyk/actions/node@master
  with:
    args: --severity-threshold=high --fail-on=all
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
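The Renovate grouping and automerge rules mentioned above live in a `renovate.json` at the repository root. A sketch under assumed conventions (the package patterns and labels are illustrative):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "matchCurrentVersion": "!/^0/",
      "automerge": true
    },
    {
      "groupName": "eslint packages",
      "matchPackagePatterns": ["eslint"]
    }
  ],
  "vulnerabilityAlerts": {
    "labels": ["security"]
  }
}
```

Automerging patch releases of stable (non-0.x) packages keeps the update queue short so developers actually review the PRs that matter.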

Container Security

Container images accumulate vulnerabilities through their base image, installed OS packages, and application dependencies. Scan images at build time and again on a schedule. New CVEs are disclosed against base images long after your image was built, and you won't know unless you rescan regularly.

Trivy is the current standard for container scanning: fast, accurate, covers OS packages and application dependencies in a single scan, and outputs SARIF for GitHub Security. Grype is an alternative worth running as a second opinion for critical images.

# Scan container image in CI after build
- name: Build image
  run: docker build -t myapp:${{ github.sha }} .

- name: Run Trivy vulnerability scan
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    format: sarif
    output: trivy-results.sarif
    severity: CRITICAL,HIGH
    exit-code: '1'  # Fail the build on critical findings

- name: Upload Trivy scan results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: trivy-results.sarif
  if: always()
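The scheduled rescan mentioned above can be a separate workflow that pulls whatever is currently deployed. A sketch; the registry path and tag are assumptions:

```yaml
# .github/workflows/image-rescan.yml
name: Nightly image rescan

on:
  schedule:
    - cron: '0 3 * * *'   # Every night at 03:00 UTC

jobs:
  rescan:
    runs-on: ubuntu-latest
    steps:
    - name: Scan deployed image for newly disclosed CVEs
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: ghcr.io/example-org/myapp:latest  # Assumption: your registry/tag
        severity: CRITICAL,HIGH
        exit-code: '1'   # A red nightly run is the alert
```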

Dockerfile hardening that eliminates vulnerability surface before scanning:

# Multi-stage build with distroless final image
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies before copying node_modules into the final image
RUN npm prune --omit=dev

# Distroless: no shell, no package manager, minimal attack surface
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Run as non-root
USER nonroot:nonroot
EXPOSE 3000
CMD ["dist/server.js"]

Secrets Scanning: This Should Block Merges

My strong opinion on this: secrets scanning should block merges, not just warn. A warning that a developer has to actively dismiss is a warning that gets dismissed. If a secret makes it into your Git history, it's compromised. Automated scanners continuously index public repositories and sell found credentials. For private repositories, the risk is insider threat and repository exposure. Neither outcome requires a warning level of urgency.

  • GitLeaks: Scans Git history for secrets using regex patterns and entropy analysis. Run as a pre-commit hook and in CI. Can scan the full history of a repository to find secrets committed years ago.
  • TruffleHog: More accurate than GitLeaks for reducing false positives; uses verified detectors that check whether found credentials are actually valid against the target service's API.
  • GitHub Secret Scanning: Free for public repositories, included in GitHub Advanced Security for private repos. Push protection mode blocks pushes containing detected secrets before they land in the repository. Enable this.
# .pre-commit-config.yaml: run gitleaks before every commit
repos:
- repo: https://github.com/gitleaks/gitleaks
  rev: v8.18.4
  hooks:
  - id: gitleaks

# In CI: scan the PR diff
- name: TruffleHog scan
  uses: trufflesecurity/trufflehog@main
  with:
    path: ./
    base: ${{ github.event.repository.default_branch }}
    head: HEAD
    extra_args: --only-verified

IaC Security Scanning

Infrastructure as Code is often the most consequential attack surface. A misconfigured S3 bucket policy, an overly permissive IAM role, or an unencrypted RDS instance can expose more data than an application vulnerability. Scan your Terraform, CloudFormation, Helm charts, and Kubernetes manifests with the same rigor as application code. Most teams don't, and it shows.

  • Checkov: Covers Terraform, CloudFormation, Kubernetes, Dockerfile, ARM templates, and more. 1,000+ built-in checks against CIS benchmarks and cloud provider best practices.
  • tfsec / Trivy: tfsec has been merged into Trivy, which now handles Terraform scanning alongside container and OS package scanning. One tool for multiple surfaces.
# Checkov scan on Terraform changes
- name: Checkov IaC scan
  uses: bridgecrewio/checkov-action@master
  with:
    directory: ./infrastructure/terraform
    framework: terraform
    output_format: sarif
    output_file_path: checkov.sarif
    soft_fail: false  # true for advisory, false for gate
    check: CKV_AWS_18,CKV_AWS_66,CKV_AWS_111  # Or use skip_check for exclusions

- name: Upload Checkov results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: checkov.sarif
  if: always()
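Since tfsec's merge into Trivy, the same action used for image scanning can gate Terraform changes as a second opinion alongside Checkov. A sketch, assuming the same directory layout as above:

```yaml
- name: Trivy IaC scan
  uses: aquasecurity/trivy-action@master
  with:
    scan-type: config
    scan-ref: ./infrastructure/terraform
    severity: CRITICAL,HIGH
    exit-code: '1'   # Gate: fail the job on critical/high misconfigurations
```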

SBOM Generation and Policy-as-Code

A Software Bill of Materials is a machine-readable inventory of every component in your software, including transitive dependencies. Executive Order 14028 made SBOMs mandatory for US federal software vendors, and they're rapidly becoming a standard procurement requirement in regulated industries. If you're selling to enterprise customers, you'll be asked for this.

Generate SBOMs in CycloneDX or SPDX format using Syft (for container images) or your package manager's built-in tooling. Store them alongside your release artifacts.

# Generate SBOM with Syft
- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    image: myapp:${{ github.sha }}
    format: spdx-json
    output-file: sbom.spdx.json

- name: Attest SBOM
  uses: actions/attest-sbom@v1
  with:
    subject-name: myapp
    subject-digest: sha256:${{ steps.build.outputs.digest }}
    sbom-path: sbom.spdx.json

Policy-as-code with OPA (Open Policy Agent) and Rego lets you enforce security policies as code that's versioned, tested, and auditable. Use Conftest to run OPA policies against Kubernetes manifests, Terraform plans, and Dockerfile output in CI.
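As an example of the kind of policy Conftest can enforce, here is a minimal Rego rule (package name and file paths are illustrative) that rejects Kubernetes Deployments whose containers can run as root:

```rego
# policy/deployment.rego
package main

# Deny any container that doesn't set runAsNonRoot
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  not container.securityContext.runAsNonRoot
  msg := sprintf("container %q must set securityContext.runAsNonRoot: true", [container.name])
}
```

Run `conftest test k8s/*.yaml` in CI; a non-empty deny set exits non-zero and fails the job, so the policy itself is the gate.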

GitHub Actions Security Hardening

Your CI/CD pipeline is a privileged execution environment with access to production secrets and deployment credentials. Supply chain attacks via compromised Actions and script injection via untrusted input are increasingly common. This is an area where the defaults are insecure and you have to explicitly opt into better behavior.

  • Pin Actions to a commit SHA, not a mutable tag: uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 instead of @v4
  • Set minimal permissions on GITHUB_TOKEN using the permissions key; deny everything except what the job needs
  • Never interpolate untrusted input directly into run steps; use environment variables instead
  • Use OpenID Connect (OIDC) for cloud credentials instead of long-lived secrets stored in GitHub
  • Enable branch protection rules and require status checks to pass before merging
# Hardened workflow example
name: Secure Deploy

on:
  push:
    branches: [main]

permissions:
  contents: read         # Minimal: only what's needed
  id-token: write        # Required for OIDC
  security-events: write # Required for SARIF upload

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

    # OIDC: no long-lived AWS secrets stored in GitHub
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2
      with:
        role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsDeployRole
        aws-region: us-east-1

    # Avoid script injection: use an env var, not direct interpolation
    - name: Deploy
      env:
        PR_TITLE: ${{ github.event.pull_request.title }}
      run: |
        echo "Deploying for: $PR_TITLE"  # Safe : not interpolated into shell

Security Gates vs. Advisory Findings

Not every security finding should block a merge. Gates that are too strict get disabled. Findings that are all advisory get ignored. Both failure modes are real. A pragmatic framework that actually works in practice:

  • Hard gates (block merge): Critical and high CVEs with known exploits in direct dependencies; secrets detected in code; SAST findings for injection vulnerabilities (SQL, command, SSRF); IaC configs that expose resources to the public internet.
  • Advisory (creates issue, doesn't block): Medium CVEs in transitive dependencies; SAST findings with high false positive rates; new Dockerfile best practice violations; dependency versions that are outdated but not vulnerable.
  • Informational (dashboard only): Low CVEs; code quality findings; CIS benchmark deviations with documented risk acceptance.
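The tiering above maps naturally onto exit codes: gate steps fail the job, advisory steps report without failing. A sketch using Trivy's severity split:

```yaml
# Hard gate: critical/high findings fail the build
- name: Gate on critical/high CVEs
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    severity: CRITICAL,HIGH
    exit-code: '1'

# Advisory: medium findings are recorded in the log but never block
- name: Report medium CVEs
  if: always()
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    severity: MEDIUM
    exit-code: '0'
```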

Start with SAST and secrets scanning. They're high value and low noise relative to other categories. Get those calibrated and trusted before adding more tools. Every tool you add that generates noise without action makes developers numb to security findings and erodes the entire program. The best DevSecOps programs I've seen aren't the ones with the most tools. They're the ones where developers actually fix what the tools flag.
