
Evidence-Sealed Authorization

A Framework for Cryptographically Verifiable Security Compliance Infrastructure

A proposal for modernizing software security authorization through machine-generated evidence and automated control evaluation.

14 March 2026

Abstract

The current practice of software security authorization in government and regulated environments relies heavily on narrative documentation, manual assessment, and periodic review cycles. Under the NIST Risk Management Framework, achieving an Authority to Operate typically requires months of labor-intensive effort, costing hundreds of thousands to millions of dollars per system. Even modern continuous authorization approaches, while faster, still depend on fragmented tool outputs with limited cryptographic integrity.

This paper proposes an alternative architectural paradigm: Evidence-Sealed Authorization. In this model, compliance evidence is derived directly from machine-generated observations of software structure, build provenance, and runtime environment configuration. A deterministic policy engine evaluates that evidence against security control frameworks. The resulting compliance artifacts are then cryptographically sealed into portable, tamper-evident evidence packages that can be independently verified by assessors and authorizing officials.

The proposed architecture eliminates the majority of manual documentation effort, protects proprietary source code, and enables continuous compliance monitoring through signed environment snapshots. Preliminary economic modeling suggests cost reductions of 85 to 95 percent compared to traditional authorization processes, with timelines compressed from months to days. The architecture also introduces a new concept of compliance closeness, replacing binary pass/fail status with quantifiable alignment metrics and cryptographically recorded drift detection.

This paper describes the full framework, compares it against existing models, provides economic projections, and outlines implementation considerations for government programs, defense organizations, and commercial software vendors.

1. Background

1.1 The NIST Risk Management Framework

The Risk Management Framework published by the National Institute of Standards and Technology provides the authoritative process for security authorization of federal information systems. NIST Special Publication 800-53 defines the catalog of security controls that systems must implement, and the RMF lifecycle prescribes a structured sequence of steps: categorization, control selection, implementation, assessment, authorization, and continuous monitoring.

At the center of this process is the Authority to Operate. An Authorizing Official evaluates a package of evidence and documentation to determine whether the residual risk of operating a system is acceptable. The core artifacts in this package typically include a System Security Plan, Security Assessment Report, and Plan of Action and Milestones.

1.2 The Documentation Burden

In practice, the ATO process has become synonymous with extensive documentation. System Security Plans routinely span 300 to 800 pages of narrative text describing how each security control is implemented. These documents are largely written by hand, often by dedicated compliance consultants, and must be reviewed, revised, and maintained over the life of the system.

Industry estimates for a single ATO range from $300,000 to over $2 million depending on system complexity. Timelines of 6 to 24 months are common.

These figures represent not only direct labor but opportunity cost, as engineering teams divert effort from development to compliance support.

1.3 Continuous Authorization Approaches

Recognizing these challenges, programs such as Platform One and Kessel Run within the Department of Defense have pioneered continuous authorization models. These approaches integrate security scanning tools directly into CI/CD pipelines, producing automated evidence of vulnerability management, dependency health, and configuration compliance.

Continuous ATO pipelines reduce assessment timelines to 3 to 6 months and lower costs to the $250,000 to $750,000 range. They represent a meaningful improvement. However, as discussed in the following section, they leave several structural gaps unaddressed.

2. Limitations of Current Compliance Models

2.1 Document-Based Evidence

Traditional ATO evidence consists primarily of written narratives. These documents describe intended behavior rather than observed reality. They are expensive to produce, difficult to verify, and rapidly become stale as systems evolve.

2.2 Fragmented Tool Outputs

Continuous authorization pipelines produce evidence from many disconnected sources with different formats and trust models. No cryptographic binding exists between them. Assessors must trust each tool independently with no unified chain of custody.

2.3 Weak Artifact Integrity

Neither traditional nor continuous models produce evidence with strong integrity guarantees. Documents can be edited after the fact. Scanner outputs can be regenerated. There is typically no cryptographic proof that evidence corresponds to a specific system version at a specific point in time.

2.4 Vendor Friction

Traditional accreditation often requires exposing architecture details, dependency information, or source code to assessors. This creates friction for vendors with proprietary technology. Many companies avoid government markets entirely because of compliance burden and IP exposure risks.

2.5 Assessment Labor Costs

Security Control Assessors spend the majority of their time reconstructing system context from documentation rather than evaluating actual security posture. Manual evidence collection, cross-referencing of artifacts, and narrative interpretation consume assessment cycles. This labor-intensive model does not scale.

3. Evidence-Sealed Authorization Architecture

This paper proposes a new paradigm, Evidence-Sealed Authorization, built on cryptographically verifiable compliance infrastructure. The core premise is that security authorization should be derived from machine-generated evidence of system reality rather than human-authored documentation.

3.1 Design Principles

1. Observable Evidence. Evidence should be generated deterministically from observable system properties, not written by humans.

2. Policy-Driven Evaluation. Compliance evaluation should be performed by policy engines against structured evidence, not inferred from narratives.

3. Cryptographic Integrity. All artifacts should be cryptographically sealed to ensure integrity, provenance, and non-repudiation.

4. Source Code Protection. Proprietary source code should never need to leave the vendor environment.

3.2 Architectural Components

SSEE (Software Structural Evidence Extractor)

A deterministic analysis engine that maps a software codebase into a machine-readable structural model. It describes components, dependencies, interfaces, privilege requirements, cryptographic usage, network behavior, and file system access patterns without exposing raw source code.

REOP (Runtime Environment Observation Probe)

A minimally privileged observer deployed inside a target environment to capture deterministic evidence about system configuration and security posture. It records network ingress/egress rules, identity and access configuration, encryption settings, logging configuration, runtime protections, and inherited platform controls. The probe is read-only and cannot execute arbitrary code or modify host configuration.

CCES (Continuous Compliance Evidence Stream)

A time-ordered stream of sealed evidence observations that enables continuous monitoring, drift detection, and compliance closeness measurement over the operational lifecycle of a system.

ACEE (Automated Control Evaluation Engine)

A policy-driven evaluation system that maps software and environment evidence against security control frameworks such as NIST SP 800-53, the NIST Secure Software Development Framework, or organizational overlays. It uses reusable policy packs rather than hardcoded rules.

MCAG (Machine-Readable Compliance Artifact Generator)

A generator that composes structured compliance artifacts from evaluation results. Outputs include System Security Plan fragments, assessment results, component definitions, and remediation records in machine-readable formats such as OSCAL.

CESL (Cryptographic Evidence Sealing Layer)

A sealing layer that signs, timestamps, and binds all upstream artifacts into tamper-evident evidence bundles. It ensures the entire chain from source snapshot to deployment receipt can be independently verified.

CCEP (Cryptographic Compliance Evidence Package)

The portable, tamper-evident authorization artifact containing all signed evidence, evaluation results, and compliance documents necessary for authorization review.

SSEE → REOP → CCES → ACEE → MCAG → CESL → CCEP

Evidence generation through sealing pipeline

4. End-to-End Authorization Workflow

4.1 Software Structure Evidence Generation

The vendor runs the Software Structural Evidence Extractor against their codebase within their own environment. The extractor produces a deterministic inventory of system modules, APIs, data flows, external dependencies, cryptographic usage, network behavior, and privilege requirements. This output becomes the foundation for the system component definition. The source code never leaves the vendor environment. Only the derived structural description is exported.
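To make the extraction step concrete, the sketch below stands in for the SSEE, which is a proposed component rather than an existing tool. Using only Python's standard `ast` and `hashlib` modules, it derives a module's import surface and a content digest, so that only the derived record, never the source text, leaves the vendor environment.

```python
# Illustrative stand-in for the proposed SSEE: derive a structural
# record (imports + content digest) from source without exporting it.
import ast
import hashlib

def extract_structure(filename: str, source: str) -> dict:
    """Map one module to a deterministic structural record."""
    tree = ast.parse(source)
    plain = {alias.name for node in ast.walk(tree)
             if isinstance(node, ast.Import) for alias in node.names}
    from_ = {node.module for node in ast.walk(tree)
             if isinstance(node, ast.ImportFrom) and node.module}
    return {
        "module": filename,
        "imports": sorted(plain | from_),  # the module's dependency surface
        "sha256": hashlib.sha256(source.encode()).hexdigest(),  # binds record to exact source
    }

record = extract_structure("app.py", "import os\nfrom json import loads\n")
print(record["imports"])   # → ['json', 'os']
```

Run over every module, records like this compose into the component definition described above, and the digest lets a later build receipt prove exactly which source each record describes.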

4.2 Build and Supply Chain Attestation

During compilation, the build pipeline generates provenance records documenting the source snapshot hash, dependency hashes, compiler version, build environment metadata, and final artifact hash. A Software Bill of Materials is produced alongside the provenance record. Both artifacts are sealed by the Cryptographic Evidence Sealing Layer, establishing a verifiable software origin.
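The shape of such a provenance record can be sketched as follows. The field names are illustrative, in the spirit of SLSA-style provenance but not conforming to any published schema, and the builder metadata values are placeholders.

```python
# Hedged sketch of a build provenance record linking the shipped
# artifact back to its exact inputs. Field names are illustrative.
import hashlib

def sha256(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def provenance_record(source_snapshot: bytes, dependencies: dict,
                      builder_metadata: dict, artifact: bytes) -> dict:
    return {
        "source_sha256": sha256(source_snapshot),
        "dependency_sha256": {name: sha256(blob)
                              for name, blob in dependencies.items()},
        "builder": builder_metadata,          # compiler version, environment, etc.
        "artifact_sha256": sha256(artifact),
    }

record = provenance_record(
    source_snapshot=b"<repo tarball>",
    dependencies={"libexample": b"<dependency archive>"},   # hypothetical dependency
    builder_metadata={"compiler": "example-cc 1.0", "environment": "ephemeral-ci"},
    artifact=b"<compiled binary>",
)
```

Because every field is a digest of a concrete input, an assessor holding the binary and the record can recompute and confirm each link in the chain.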

4.3 Environment Baseline Capture

Before deployment, the Runtime Environment Observation Probe is installed in the target environment. The probe passively inspects local configuration reality: network rules, identity models, filesystem expectations, cryptographic configuration, logging settings, and inherited platform controls. It produces a deterministic environment record that is signed locally and exported.
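A minimal sketch of the probe's output follows. HMAC over a symmetric key stands in for the probe's local signing key; a production probe would more plausibly use an asymmetric or hardware-backed key, and the configuration fields are invented for illustration.

```python
# Read-only environment snapshot, canonicalized and signed before export.
# HMAC is a stand-in for a real signing mechanism.
import hashlib
import hmac
import json

PROBE_KEY = b"illustrative-probe-key"   # placeholder, not a real key-management scheme

def observe_environment(config: dict) -> dict:
    """Canonicalize the observed configuration and sign it deterministically."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":")).encode()
    return {
        "record": config,
        "signature": hmac.new(PROBE_KEY, canonical, hashlib.sha256).hexdigest(),
    }

snapshot = observe_environment({"tls_min_version": "1.2", "egress_allowed": False})
# Observing an identical configuration reproduces the identical signature,
# which is what makes later baseline comparison meaningful.
```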

4.4 Automated Compliance Evaluation

Three categories of evidence now exist: software structure evidence, build provenance evidence, and environment configuration evidence. The Automated Control Evaluation Engine ingests these artifacts and compares them against the applicable policy pack. For each security control, the engine determines whether the control is directly evidenced, partially inferred, inherited from the platform, requiring external attestation, or not applicable.
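The evaluation step can be sketched as a policy pack of predicates over structured evidence. SC-8, AU-2, and PE-3 are real NIST SP 800-53 control identifiers, but the rules attached to them here are invented for illustration (the version check, for instance, is a naive string comparison).

```python
# Minimal policy-pack evaluation sketch. Control IDs are real 800-53
# identifiers; the rule predicates are illustrative only.
POLICY_PACK = {
    "SC-8": lambda ev: ("directly evidenced"
                        if ev["env"].get("tls_min_version", "") >= "1.2"
                        else "not evidenced"),
    "AU-2": lambda ev: ("directly evidenced"
                        if ev["env"].get("logging_enabled")
                        else "requiring external attestation"),
    "PE-3": lambda ev: "requiring external attestation",  # physical security: outside automated scope
}

def evaluate(evidence: dict) -> dict:
    """Apply every rule in the pack to the structured evidence."""
    return {control: rule(evidence) for control, rule in POLICY_PACK.items()}

results = evaluate({"env": {"tls_min_version": "1.2", "logging_enabled": True}})
```

Because the pack is data rather than hardcoded logic, swapping in a different framework or organizational overlay means swapping the pack, not the engine.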

4.5 Evidence Sealing and Package Assembly

The complete evidence bundle is assembled and sealed. The Cryptographic Evidence Sealing Layer signs the package, binding together the artifact identity, policy pack version, environment observation digest, evaluation results, and timestamp. If the build environment was ephemeral, a destruction attestation is included confirming that no source code was retained.
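The binding step can be sketched as follows, again with HMAC standing in for a real digital signature and all names being illustrative. The essential property is that the seal covers every field, so any later edit to any artifact digest, policy version, or timestamp breaks verification.

```python
# Sealing sketch: bind artifact digests, policy pack version, and a
# timestamp into one tamper-evident receipt. HMAC stands in for a
# real signature scheme.
import hashlib
import hmac
import json

def seal_package(artifacts: dict, policy_version: str,
                 sealed_at: str, key: bytes) -> dict:
    digests = {name: hashlib.sha256(blob).hexdigest()
               for name, blob in artifacts.items()}
    package = {"artifacts": digests, "policy": policy_version, "sealed_at": sealed_at}
    canonical = json.dumps(package, sort_keys=True, separators=(",", ":")).encode()
    package["seal"] = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return package

def verify_package(package: dict, key: bytes) -> bool:
    """Recompute the seal over everything except the seal itself."""
    body = {k: v for k, v in package.items() if k != "seal"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(package["seal"], expected)

KEY = b"illustrative-sealing-key"
package = seal_package({"binary": b"<artifact>", "sbom": b"<sbom>"},
                       "example-pack/1.0", "2026-03-14T00:00:00Z", KEY)
assert verify_package(package, KEY)   # any later edit breaks verification
```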

4.6 Security Control Assessor Review

The Security Control Assessor receives the sealed evidence package. Instead of manually reconstructing the system from narrative documentation, the assessor reviews structured implementation statements, assessment results, and evidence references. Cryptographic signatures allow the assessor to verify artifact integrity and replay policy evaluations.

4.7 Authorizing Official Decision

The Authorizing Official receives the complete authorization package: System Security Plan, Security Assessment Report, any Plan of Action and Milestones, and the signed evidence receipts. The AO evaluates system purpose, residual risk, and mitigation strategy. They do not require access to proprietary source code. Their decision rests on the cryptographic evidence chain.

5. Continuous Authorization Model

5.1 Continuous Evidence Streams

After initial authorization, the system continues generating evidence. Environment probes periodically capture configuration state. Build pipelines produce provenance records with each release. Dependency monitoring tracks supply chain changes. Each observation is sealed and appended to the authorization evidence stream, creating a time-ordered record of system security posture.

5.2 Drift Detection

By comparing current environment observations against the approved baseline, the system can detect configuration drift with precision. Rather than waiting for periodic reassessment, drift is identified as it occurs. Sealed environment snapshots provide cryptographic proof of what changed, when, and by how much.
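The comparison itself is simple once snapshots are structured records. The sketch below, with invented setting names, reports each changed setting together with its before and after values.

```python
# Drift detection sketch: compare the approved baseline snapshot with
# the current observation; report every changed setting.
def detect_drift(baseline: dict, current: dict) -> dict:
    return {
        key: (baseline.get(key), current.get(key))   # (approved, observed)
        for key in sorted(baseline.keys() | current.keys())
        if baseline.get(key) != current.get(key)
    }

baseline = {"tls_min_version": "1.2", "egress_allowed": False, "audit_log": True}
current  = {"tls_min_version": "1.2", "egress_allowed": True,  "audit_log": True}
drift = detect_drift(baseline, current)
print(drift)   # → {'egress_allowed': (False, True)}
```

Because both snapshots are sealed, the diff itself is backed by cryptographic evidence of what each side actually contained.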

5.3 Compliance Closeness

The architecture introduces a new concept: compliance closeness. Instead of expressing authorization as a binary pass/fail status, the system computes a quantifiable alignment metric.

▪ 96% alignment to approved baseline

▪ 2 configuration drifts detected

▪ No prohibited egress paths observed

▪ Binary provenance matched to approved build receipt
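One simple way to compute such a metric is the fraction of monitored settings that still match the approved baseline. The 50-setting example below, with hypothetical setting names, mirrors the figures above: 48 of 50 unchanged yields 96% alignment with 2 drifts detected.

```python
# Compliance closeness sketch: fraction of monitored settings that
# still match the approved baseline.
def closeness(baseline: dict, current: dict) -> float:
    keys = baseline.keys() | current.keys()
    matched = sum(baseline.get(k) == current.get(k) for k in keys)
    return matched / len(keys)

baseline = {f"setting_{i}": "approved" for i in range(50)}          # 50 monitored settings
current = dict(baseline, setting_3="drifted", setting_7="drifted")  # 2 drifts
print(f"{closeness(baseline, current):.0%}")   # → 96%
```

Real implementations would likely weight settings by risk rather than counting them equally; the point is that alignment becomes a measurable quantity rather than a binary verdict.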

5.4 Authorization State Monitoring

The cumulative effect is a continuously computed Authorization State Record. Authorizing Officials can review system security posture at any time without commissioning a full reassessment. Humans intervene when risk thresholds change, anomalies appear, controls fail, or major system changes occur. The default mode is automated evidence accumulation with exception-based human review.

6. Economic Impact

6.1 Cost Comparison by Authorization Model

Model | Cost Range | Timeline | Evidence Integrity
Traditional RMF | $750K–$2M | 6–24 months | Low
Continuous ATO | $250K–$750K | 3–6 months | Moderate
Evidence-Sealed | $75K–$250K | 2–6 weeks | High
Self-Service (Mature) | $15K–$75K | Days to weeks | High

6.2 Efficiency Gains by Activity

Activity | Traditional Baseline | Estimated Reduction
Documentation labor | 35–40% of total cost | 90–95%
Assessment preparation | 20–25% of total cost | 80–90%
Evidence collection | Manual, scattered | 90%
Vendor onboarding | High friction | 70–80%
Authorization timeline | 6–24 months | 80–95%

6.3 Marginal Cost Dynamics

A critical property of the architecture is that marginal authorization costs decrease sharply with scale. Once the evidence infrastructure is established, reusable policy packs, environment profiles, and evidence pipelines reduce the cost of accrediting each additional system. The economic model resembles CI/CD infrastructure: the first pipeline is expensive to build, but the hundredth is nearly free.

The practical cost floor for a mature self-service implementation is estimated at $10,000 to $50,000 per system — approximately 95% reduction from traditional baselines.

6.4 Unlocking Distributed Capability Development

The economic effects described above compound into a strategic outcome that extends well beyond cost savings. When the authorization barrier drops from hundreds of thousands of dollars and many months of effort to tens of thousands of dollars and days, the population of organizations capable of delivering accredited software to the government expands dramatically. Compliance ceases to function as a filter that selects for large prime contractors and instead becomes a lightweight qualification step that any competent engineering team can satisfy.

This changes the acquisition model itself. Government programs could issue capability-development challenges or bounties for specific mission needs, knowing that the compliance infrastructure required to authorize the resulting software is no longer a prohibitive barrier for participants. Small teams, independent developers, nontraditional defense companies, and academic research groups could build, submit, and achieve authorization for purpose-built tools at a pace and cost that current processes make impossible.

Consider the implications for rapid prototyping in operational environments. A field unit identifies a capability gap. A distributed network of developers responds with candidate solutions. Each submission arrives with a sealed evidence package that an assessor can verify in hours rather than months. The best solution is authorized and deployed on a timeline measured in weeks.

This is not speculative. Every component of the workflow exists or is emerging. The only missing element is the compliance infrastructure that makes the authorization step fast enough to keep pace with the development step.

The self-service maturity state of this architecture effectively converts authorization from a bottleneck into a throughput multiplier. Instead of compliance slowing capability delivery, the evidence infrastructure becomes the mechanism that enables a broader, faster, more competitive ecosystem of builders to serve government missions. It transforms the economics of defense software from a procurement problem into a participation problem, and solves the participation problem by removing the barrier that kept most capable builders out.

7. Security and Trust Model

The architecture strengthens trust in compliance evidence through several mechanisms that are absent from current models.

7.1 Artifact Integrity

Every evidence artifact in the chain is cryptographically signed at the time of generation. This includes software structural maps, build provenance records, environment observations, evaluation results, and the final compliance package. Modification of any artifact after signing is detectable.

7.2 Supply Chain Provenance

Build provenance records link the deployed binary to the source snapshot that produced it, the dependencies that were resolved, and the build environment that compiled it. This chain is consistent with emerging frameworks such as SLSA and in-toto.

7.3 Environment Verification

The Runtime Environment Observation Probe provides direct evidence of deployment reality rather than documented intent. Assessors can verify that the environment in which software operates matches the approved security posture.

7.4 Replayable Evaluation

Because the Automated Control Evaluation Engine operates on structured evidence with versioned policy packs, evaluations are deterministic and replayable. An assessor can re-execute the same evaluation and obtain identical results, making the assessment auditable and reproducible.
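Replayability reduces to computing the same evaluation receipt over canonicalized inputs. The sketch below, with illustrative pack and evidence values, shows that an assessor re-running the same policy version over the same evidence reproduces the identical digest.

```python
# Replayable-evaluation sketch: a receipt digest over canonicalized
# inputs. Same policy version + same evidence → same receipt.
import hashlib
import json

def evaluation_receipt(policy_version: str, evidence: dict, results: dict) -> str:
    payload = json.dumps(
        {"policy": policy_version, "evidence": evidence, "results": results},
        sort_keys=True, separators=(",", ":"),
    ).encode()
    return hashlib.sha256(payload).hexdigest()

first  = evaluation_receipt("example-pack/1.0", {"tls": "1.2"}, {"SC-8": "observed"})
replay = evaluation_receipt("example-pack/1.0", {"tls": "1.2"}, {"SC-8": "observed"})
assert first == replay   # independent replay, identical receipt
```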

7.5 Ephemeral Environment Trust

For vendors using ephemeral build or evaluation environments, the architecture supports destruction attestations. These records cryptographically confirm that source code entered a controlled environment, evidence was generated, and the environment was destroyed. Only signed evidence survives. This enables proprietary code evaluation without persistent exposure.

8. Implementation Considerations

8.1 Policy Pack Design

The architecture depends on well-structured policy packs that encode control expectations for specific frameworks and deployment contexts.

NIST 800-53 Rev 5 baseline configurations
SSDF vendor attestation requirements
Air-gapped deployment constraints
Organizational overlays

Policy packs should be versioned, reusable, and independently auditable.

8.2 Machine-Readable Frameworks

The Open Security Controls Assessment Language (OSCAL) provides the natural expression layer for machine-readable compliance artifacts. OSCAL supports component definitions, System Security Plans, assessment plans, assessment results, and Plans of Action and Milestones. Adoption of OSCAL as the artifact format enables interoperability between evidence generators, evaluation engines, and assessment tools.

8.3 Evidence Confidence Classification

Not every security control can be fully evaluated from automated evidence. The architecture must classify each finding into confidence levels:

Directly observed
Partially inferred
Inherited from platform controls
Requiring external human attestation
Not applicable

Many controls depend on operational procedures, personnel practices, physical security, or governance structures that exist outside the scope of automated observation. Accurate confidence classification prevents false assurance and maintains credibility with assessors.
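The classification above can be represented directly in the evaluation engine. In the sketch below, PE (physical and environmental protection) and PS (personnel security) are real NIST SP 800-53 family mnemonics, while the routing logic itself is invented for illustration.

```python
# Confidence classification sketch. Family mnemonics PE/PS are real
# 800-53 families; the routing rules are illustrative.
from enum import Enum

class Confidence(Enum):
    OBSERVED = "directly observed"
    INFERRED = "partially inferred"
    INHERITED = "inherited from platform controls"
    ATTESTATION = "requiring external human attestation"
    NOT_APPLICABLE = "not applicable"

def classify(control_id: str, machine_evidence: bool,
             platform_provided: bool) -> Confidence:
    family = control_id.split("-")[0]
    if family in {"PE", "PS"}:        # people/facilities: never machine-observable
        return Confidence.ATTESTATION
    if platform_provided:
        return Confidence.INHERITED
    return Confidence.OBSERVED if machine_evidence else Confidence.INFERRED

print(classify("PE-3", machine_evidence=False, platform_provided=False).value)
# → requiring external human attestation
```

Labeling findings this way is what prevents false assurance: the package advertises exactly which controls the machine evidence does and does not cover.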

8.4 Environment Scoping

The architecture must explicitly define system boundaries and control inheritance. For a compiled binary running in an air-gapped environment, many cloud-native and network-oriented controls may be out of scope or inherited from the enclave platform. Proper scoping prevents false compliance burden and produces more useful evidence. The difference between a containerized cloud application and a standalone binary on a classified network is substantial, and the evidence infrastructure must reflect that difference.

8.5 Human Oversight

Human risk acceptance remains a necessary component of any authorization decision. The architecture automates evidence generation and evaluation but does not eliminate the Authorizing Official role. Instead, it transforms that role from document reviewer to evidence verifier, allowing human judgment to focus on residual risk assessment, mission context, and exception management. The goal is not to remove humans from authorization decisions but to provide them with stronger, more trustworthy evidence on which to base those decisions.

9. Call to Action

The current state of software security authorization imposes significant cost, delay, and friction on the organizations that can least afford it. Defense programs, critical infrastructure operators, and government agencies all operate under compliance requirements designed for a previous era of software development. The architectural paradigm described in this paper offers a path toward modernization that strengthens assurance while dramatically reducing burden.

For Government Programs

Programs pursuing DevSecOps modernization should experiment with machine-generated compliance evidence alongside existing authorization processes. Pilot programs that accept structured evidence packages in addition to traditional documentation would provide valuable data. Programs should consider how low-cost, self-service authorization infrastructure could enable capability-development challenges and bounty models that open government technology problems to a dramatically wider base of builders.

For Security Assessors

Assessors building automated assessment pipelines should explore the consumption of cryptographically sealed evidence packages. The shift from document interpretation to evidence verification represents a fundamental improvement in assessment efficiency and consistency. Assessors who develop competency in evaluating structured, machine-readable evidence will be positioned for the future of compliance evaluation.

For Software Vendors

Vendors serving government and regulated markets should invest in build provenance, software bill of materials generation, and deterministic codebase analysis. These capabilities reduce compliance burden regardless of which authorization model a customer uses. Vendors who can provide signed evidence packages alongside their software products will have a meaningful competitive advantage.

For Standards Bodies

The development of open standards for compliance evidence packaging, artifact signing, and evidence exchange formats will accelerate adoption. OSCAL provides a strong foundation. Complementary standards for provenance attestation, environment observation records, and evidence bundle formats would complete the ecosystem. Alignment with SLSA and in-toto is essential to avoid fragmentation.

The transition from document-driven to evidence-driven authorization is not a theoretical future. The component technologies exist. The standards are emerging. The economic incentive is clear. What remains is the will to assemble these capabilities into a coherent infrastructure and the willingness of authorization communities to accept machine-generated evidence as a basis for trust.

The architecture described in this paper provides a concrete proposal for that assembly. It is offered as a starting point for experimentation, collaboration, and refinement across the defense, cybersecurity, and software engineering communities.
