Control 2 in the Practical Web Security for SitecoreAI series. [The intro covered why headless architecture changes where security lives. Control 1 covered security headers and CSP.]
The first post in this series made the case that security issues in headless architectures rarely come from deep exploits. They come from small, missed controls at system boundaries. Credential management is where that pattern shows up most consistently.
The numbers are clear: 88% of basic web application breaches involved stolen credentials (Verizon DBIR 2025). Meanwhile 28.6 million new secrets were detected in public GitHub commits in 2025, up 34% year on year (GitGuardian State of Secrets Sprawl 2026). More APIs mean more tokens, and more tokens mean more opportunities for exposure.
In a monolithic Sitecore deployment, the credential surface was limited. In a headless SitecoreAI build, it has expanded significantly, and most teams haven't fully mapped where that expansion happened.
Traditional Sitecore had credentials too, but they lived in a small number of well-understood places: database connection strings, service accounts, a handful of API integrations. The surface was manageable.
A modern SitecoreAI build introduces a different set of credentials by default:

- Edge Context IDs for the Delivery API, one live and one preview per environment
- an editing secret connecting the rendering host to the CMS
- deployment credentials stored in the build pipeline
- API keys and tokens for third-party services such as search, analytics, and personalization
- environment variables consumed by the frontend at build time
Every one of these is a credential, and every one is a target. Unlike traditional deployments, many of them now live close to, or inside, the build process, the repository, or the client-side bundle.
That proximity is the problem. In practice, secrets escape along three paths.
1. Into the repository
The most common path. A .env file gets committed, or an API key gets hardcoded in source code instead of being injected through environment variables at build or runtime. What makes this worse is that git retains the full history. Deleting the file in a later commit doesn't remove it. The secret is still in the history, visible to anyone who can clone the repo. Properly removing it requires rewriting git history, which most teams never do.
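As a minimal sketch of what history scanning looks for, the function below checks text (for example, the output of `git log -p`) against a few illustrative patterns. Dedicated tools such as gitleaks or GitHub secret scanning ship hundreds of vendor-specific rules; these three are only examples:

```javascript
// Sketch: scan text (e.g. the output of `git log -p`) for likely secrets.
// Patterns are illustrative; real scanners are far more thorough.
const LIKELY_SECRET = [
  /AKIA[0-9A-Z]{16}/,                                   // AWS access key ID shape
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,             // PEM private key header
  /(api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{8,}['"]/i, // hardcoded assignment
];

function findLikelySecrets(text) {
  return text.split('\n').flatMap((line, i) =>
    LIKELY_SECRET.some((p) => p.test(line))
      ? [{ line: i + 1, text: line.trim() }]
      : []
  );
}

module.exports = { findLikelySecrets };
```

Running a check like this over the full history, not just the working tree, is the point: the leaked value is in every clone until the history is rewritten.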
2. Into the build pipeline
Build pipelines (GitHub Actions, Azure DevOps, or whatever your team uses) need credentials to deploy. Those credentials should be stored as pipeline secrets, which most platforms will automatically mask in log output. The problem is when credentials are stored as regular environment variables instead of secrets, or when debug logging and echo statements inadvertently print them in plain text. Pipeline logs are often more accessible than teams realize. Treat them as a potential exposure surface, not just a debugging tool.
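As a sketch, in GitHub Actions the distinction looks like this (job and secret names are illustrative); the same idea applies to Azure DevOps variable groups marked as secret:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        env:
          # Stored as a repository secret; GitHub masks it in log output
          SITECORE_API_KEY: ${{ secrets.SITECORE_API_KEY }}
        run: npm run deploy
        # Avoid debug steps like `run: echo $SITECORE_API_KEY` --
        # masking helps, but transformed or encoded values can still slip through
```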
3. Into the client-side bundle
This is the hardest to spot and the easiest to introduce by accident. Most frontend frameworks have a convention for exposing environment variables to client-side code. In Next.js for example, any environment variable prefixed with NEXT_PUBLIC_ is inlined into the client-side JavaScript bundle at build time. It becomes visible to anyone who opens your site and views the source. That prefix exists for a reason: some configuration genuinely needs to be browser-accessible. But it gets misused regularly, often by developers who add the prefix out of habit or because they can't work out why a variable isn't available in a component.
The result is server-side secrets shipped directly to the browser in plain text. Here's a pattern from a real production SitecoreAI site:
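The actual code can't be reproduced here, so what follows is an illustrative reconstruction of that pattern; every identifier and value is hypothetical, not the site's real configuration:

```javascript
// Illustrative reconstruction -- names and values are hypothetical.
// Because this config is imported by client-side code, everything in it
// is inlined into the JavaScript bundle at build time.
const config = {
  // Falls back to a hardcoded key if the env var is missing:
  sitecoreApiKey: process.env.SITECORE_API_KEY || '1A2B3C4D5E6F7G8H',
  // Placeholder string committed instead of a real secret:
  jssEditingSecret: process.env.JSS_EDITING_SECRET || 'replace-with-editing-secret',
  sitecoreApiHost: process.env.SITECORE_API_HOST || 'https://cm.example.com',
};

module.exports = config;
```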
The hardcoded API key fallback, the editing secret with a placeholder string, and the full config structure are all visible to anyone browsing the site. This was an otherwise well-built implementation.
Use the right storage for each environment
Local development: .env.local, gitignored, with non-production keys only. Never use production credentials in a local environment file.
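A .gitignore sketch covering the usual local env file names (adjust to your framework's conventions):

```
# Keep local environment files out of the repository
.env
.env.local
.env.*.local
```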
Deployed environments: use your hosting platform's secrets management. Vercel provides sensitive environment variables that are stored in an unreadable format. Arc (Dataweavers' in-tenant Azure hosting platform for Sitecore headless) manages environment-specific secrets for any environment, injected at build and runtime without appearing in the codebase; secrets never leave your Azure tenant.
For organizations managing credentials across multiple services, a dedicated vault such as AWS Secrets Manager or Azure Key Vault gives you centralized control with access auditing, rotation support, and a single source of truth. Your hosting platform, like Arc, then consumes secrets from the vault rather than being the vault itself. This can be done through IaC tooling like Terraform at provisioning time, or through your CI/CD pipeline pulling values from the vault before triggering a build.
The principle is straightforward: secrets should never live in the repository. Not in committed files, not in the history, not in deployment scripts.
Separate browser-safe config from server-only secrets
Before exposing any environment variable or configuration value to the client, ask one question: does this value need to be accessible in the browser, or only in server-side code?
Most frontend frameworks have a convention for exposing variables to client-side code. In Next.js it's the NEXT_PUBLIC_ prefix. Other frameworks have equivalents: NUXT_PUBLIC_ in Nuxt, VITE_ in Vite-based frameworks, VUE_APP_ in Vue CLI. The naming differs but the effect is the same: any value marked as public is bundled into the client-side application and visible to anyone viewing your site.
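One way to enforce that question mechanically is a build-time guard that fails when a sensitive-looking name carries the public prefix. This is a sketch, not a framework feature; the name patterns are assumptions you would tune to your own conventions:

```javascript
// Sketch: flag env var names that are marked public but look sensitive.
// SENSITIVE_PATTERNS is illustrative -- extend it for your own naming.
const SENSITIVE_PATTERNS = [/SECRET/, /TOKEN/, /API_KEY/, /PASSWORD/];

function findRiskyPublicVars(env, publicPrefix = 'NEXT_PUBLIC_') {
  return Object.keys(env).filter(
    (name) =>
      name.startsWith(publicPrefix) &&
      SENSITIVE_PATTERNS.some((p) => p.test(name.slice(publicPrefix.length)))
  );
}

module.exports = { findRiskyPublicVars };
```

Run against `process.env` as a CI step, it turns a silent bundle leak into a failed build.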
Angular works differently. There is no public prefix. Instead, Angular uses environment files (environment.ts, environment.production.ts) that are compiled into the browser application at build time. Anything placed in these files should be treated as public, even though the mechanism looks different.
If the value is a secret, token, private API key, connection string, or anything that grants privileged access to a service, it should not be in the browser. Keep it server-side. The frontend should call a backend API, and the backend should use the secret when communicating with the protected service. Reserve the public mechanism for genuinely browser-safe configuration: feature flags, public analytics IDs, public-facing URLs, and CAPTCHA site keys.
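The proxy pattern can be sketched as below. The endpoint URL, env var name, and function are all hypothetical; the structure is what matters: the browser calls your backend, and only the backend ever sees the token. `fetchImpl` is injectable so the sketch is testable without a network:

```javascript
// Sketch: server-side proxy keeps the token out of the browser.
// SEARCH_API_TOKEN has no public prefix, so it is never bundled client-side.
async function proxySearch(query, fetchImpl = fetch) {
  const token = process.env.SEARCH_API_TOKEN; // server-only
  if (!token) {
    throw new Error('SEARCH_API_TOKEN is not set'); // fail loudly, not silently
  }
  return fetchImpl(
    `https://api.example.com/search?q=${encodeURIComponent(query)}`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
}

module.exports = { proxySearch };
```

In a Next.js build, this logic would live in an API route or serverless function that the frontend calls instead of the protected service.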
Watch for hardcoded fallbacks
The most dangerous secrets pattern in headless builds isn't an obvious mistake. It's the fallback string:
```javascript
process.env.API_KEY || 'sk-dev-hardcoded-key-1234'
```
This pattern is almost always introduced as a development convenience and almost always forgotten before it reaches production. If the environment variable is ever unset, misconfigured, or renamed, the hardcoded value ships silently. The fix is to make the application fail loudly when a required secret is absent. A startup error you can diagnose is better than a credential leak you may never detect.
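A minimal sketch of that fail-loudly approach (the helper name is ours, not a library API):

```javascript
// Sketch: resolve a required secret or refuse to start.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At startup:
//   const apiKey = requireEnv('API_KEY'); // throws immediately if unset

module.exports = { requireEnv };
```

Compared with the fallback string, a misconfigured environment now fails at deploy time, where it is visible, instead of at runtime with a leaked development key.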
Getting secrets out of the wrong places is the first step. The second is ensuring that when a credential does leak, and over a long enough timeline one will, the damage is contained.
Scope tokens to minimum access
Where your platform allows it, every token should have the narrowest permissions its use case requires. API tokens for third-party integrations should be limited to the specific endpoints and operations they need. No token should be shared across sites or reused for purposes beyond what it was issued for.
SitecoreAI provides two Context IDs per environment: a live Context ID for published content via the Delivery API, and a preview Context ID that includes drafts and unapproved content. Keep these separate. The preview Context ID should never appear in production configuration, as it exposes unpublished content. However, neither can be scoped to a specific site within the environment, so a leaked Context ID exposes the full content of that environment. Sitecore's own best practice guidance recommends proxying Edge queries through serverless functions or Next.js API routes rather than calling Edge directly from the browser, specifically to avoid exposing tokens to the client.
Note that Sitecore's Context ID documentation warns that using NEXT_PUBLIC_SITECORE_EDGE_CONTEXT_ID exposes your Context ID on the client. If client-side Edge requests are required for analytics tracking, understand that this value is visible to anyone viewing your site and plan accordingly.
Over-scoped tokens are common because they're convenient. A single admin-level token works everywhere, which is exactly why teams reach for them. But a single leaked admin token compromises everything it has access to. If a Context ID is compromised, it can be regenerated through the Deploy app, but all apps and services using the old ID will need to be updated and redeployed.
Rotate on a schedule
Long-lived tokens that never rotate are a standing liability. A credential leaked six months ago and never rotated is still valid today. Build rotation into your operational cadence. Rotate Edge Context IDs and preview tokens at least quarterly. Rotate custom API tokens issued to third parties whenever a team member leaves or a vendor relationship changes.
Most platforms make this straightforward. The barrier is usually process, not tooling.
Enable push protection
GitHub Advanced Security and similar tools can scan commits for credential patterns before they reach the remote repository. Push protection rejects the push if a secret is detected. It's a last-resort catch that prevents the most common path to credential exposure.
This doesn't replace a vault or proper secrets management. But it catches the mistake before it becomes permanent.
Before moving on, run a quick audit of your current state:

- Search the repository, including git history, for committed .env files and hardcoded keys.
- View the source of your production site and check the bundle for public-prefixed variables that carry secrets.
- Grep the codebase for hardcoded fallbacks on process.env lookups.
- Confirm the preview Context ID does not appear in production configuration.
- Check when each Context ID and API token was last rotated.
- Confirm push protection is enabled on your repositories.
Most teams find at least one issue immediately. That's expected. The important part is mapping the exposure before someone else does.
The next post covers WAF configuration: why default OWASP rulesets rarely align with modern headless traffic patterns and how to move from log mode to active blocking without generating a wave of false positives.
Control 3: WAF Configuration, Moving from Passive Observer to Active Control