Series: Reference Architecture, Part 7
Tushar Mishra

Golden Path Demo: Intent → Action → Outcome → Accountability

A guided walkthrough of the FuseGov closed loop: an agent attempts a high-risk action, enforcement escalates to a human approver, outcomes are verified against intent, and an Evidence Pack is produced.

In this post
  • The end-to-end loop: declared intent → governed action → observed outcome → verified alignment.
  • Human approval for high-risk actions with step-up authentication.
  • Audit-ready Evidence Pack generated for every decision and outcome.
Golden Path · Operational Authenticity · Agentic AI · Human in the Loop · Evidence Packs · Policy Enforcement · Outcome Verification · Accountability · FuseGov


This post demonstrates the end-to-end operational authenticity loop: from declared intent to governed actions, verified outcomes, and human accountability, with Evidence Packs produced automatically.

It’s a “golden path” because it shows the minimum set of capabilities an enterprise needs to deploy agentic AI safely in production.

What you’ll see in this walkthrough

  • An agent is registered with an owner and a permitted purpose
  • Tools are governed via a Tool Registry with risk tiers and scope constraints
  • A Policy Bundle is deployed to enforcement points (gateway/sidecar)
  • The agent attempts a high-risk action
  • FuseGov escalates to a human approval workflow (step-up auth)
  • The action executes (or is blocked)
  • Outcomes are observed and verified against intent
  • A complete Evidence Pack is generated and exportable to SIEM/GRC

Step 0 — The closed-loop reference (what we are implementing)

flowchart LR
  I["Declared Intent<br/>Purpose + Policies"] --> A["Governed Actions<br/>Gateway or Sidecar PEP"]
  A --> D{Decision}
  D -->|Allow| T["Tool/API Execution"]
  D -->|Deny| X["Blocked + Evidence"]
  D -->|Escalate| H["Human Approval<br/>Step-up Auth"]

  H -->|Approved| T
  H -->|Rejected| X

  T --> O["Outcome Observation<br/>State Change or Result"]
  O --> V["Outcome Verification<br/>Expected vs Actual"]
  V -->|Aligned| OK["Recorded as Aligned<br/>Evidence Pack"]
  V -->|Not Aligned| AL["Alert + Containment<br/>Pause/Quarantine"]

  A --> E["Evidence Packs<br/>Decision + Context"]
  T --> E
  O --> E
  H --> E
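
To keep the rest of the walkthrough concrete, here is a minimal Python sketch of the decision branches in the diagram. The enum values and function names are illustrative only; they are not FuseGov types.

from enum import Enum
from typing import Optional

# Illustrative only: these names mirror the branches in the diagram above,
# not actual FuseGov types.
class Decision(Enum):
    ALLOW = "ALLOW"
    DENY = "DENY"
    ESCALATE = "ESCALATE"

def next_step(decision: Decision, approved: Optional[bool] = None) -> str:
    """Walk one hop of the loop: a decision leads to execution, a block, or approval."""
    if decision is Decision.ALLOW:
        return "execute_tool_call"
    if decision is Decision.DENY:
        return "block_and_record_evidence"
    # ESCALATE: a human approver resolves the branch.
    if approved is None:
        return "await_human_approval"
    return "execute_tool_call" if approved else "block_and_record_evidence"

if __name__ == "__main__":
    print(next_step(Decision.ESCALATE))                 # await_human_approval
    print(next_step(Decision.ESCALATE, approved=True))  # execute_tool_call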

Step 1 — Register the agent (owner + purpose + boundaries)

Every agent must have:

  • a named owner (business accountability)
  • a declared purpose
  • defined risk limits and data boundaries

Example: Agent registration (concept)

{
  "agent_id": "agent-finops-001",
  "name": "FinOps Assistant",
  "environment": "prod",
  "owner": {
    "name": "Head of Finance Ops",
    "role": "Agent Owner",
    "email": "owner@example.com"
  },
  "purpose": "Optimize cloud spend within approved budgets without changing IAM or network policies.",
  "max_data_classification": "INTERNAL",
  "allowed_intents": [
    "report_cost_anomalies",
    "scale_down_nonprod",
    "tag_untagged_resources"
  ]
}

What this enables

  • You can always answer: who is responsible for this agent?
  • You can enforce purpose-driven boundaries, not just identity-based permissions.
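
As a minimal sketch of what a registration check could look like, the Python below uses the field names from the example above; the validation rules themselves are assumptions for illustration, not the FuseGov API.

from typing import Any, Dict, List

# Minimal sketch: validate an agent registration record shaped like the
# example above. Field names come from that JSON; which fields count as
# "required" is an assumption for illustration.
REQUIRED_FIELDS = ["agent_id", "owner", "purpose", "max_data_classification", "allowed_intents"]

def validate_registration(agent: Dict[str, Any]) -> List[str]:
    """Return a list of problems; an empty list means the record is acceptable."""
    problems = [f"missing or empty field: {f}" for f in REQUIRED_FIELDS if not agent.get(f)]
    if not (agent.get("owner") or {}).get("email"):
        problems.append("owner.email is required so accountability points at a reachable human")
    return problems

if __name__ == "__main__":
    example = {
        "agent_id": "agent-finops-001",
        "owner": {"name": "Head of Finance Ops", "email": "owner@example.com"},
        "purpose": "Optimize cloud spend within approved budgets.",
        "max_data_classification": "INTERNAL",
        "allowed_intents": ["scale_down_nonprod"],
    }
    print(validate_registration(example) or "registration OK")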

Step 2 — Register tools (risk tiers + scope constraints)

Agent safety is primarily governed at the action surface: the tools and APIs an agent can call.

Each tool is registered with:

  • risk tier (LOW/MED/HIGH/CRITICAL)
  • allowed operations and scopes
  • spend/rate caps
  • conditions requiring approval

Example: Tool registry entry (concept)

{
  "tool_id": "cloud-control-plane",
  "system": "AWS Control Plane",
  "risk_tier": "CRITICAL",
  "allowed_operations": ["READ", "WRITE"],
  "blocked_operations": ["ADMIN"],
  "scope_constraints": [
    "account == 'sandbox-prod'",
    "no_iam_role_create == true"
  ],
  "spend_caps": {
    "max_cost_per_day_usd": 200
  },
  "approval_rules": [
    {
      "when": "operation == 'WRITE' && resource_type in ['compute','network']",
      "then": "ESCALATE"
    }
  ]
}
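
As a rough sketch of how an entry like this might gate a proposed call, the Python below checks blocked and allowed operations, then falls back to the approval rule. The evaluation logic is simplified and assumed; a real implementation would need a proper expression evaluator for the `when` clauses.

from typing import Any, Dict

# Sketch of how a registry entry like the one above might gate a proposed call.
# Field names follow the example JSON; the evaluation logic is an assumption.
def gate_operation(tool: Dict[str, Any], operation: str, resource_type: str) -> str:
    if operation in tool.get("blocked_operations", []):
        return "DENY"
    if operation not in tool.get("allowed_operations", []):
        return "DENY"
    # The example's approval rules use a small expression DSL; this sketch only
    # handles the WRITE-on-compute/network case shown above.
    for rule in tool.get("approval_rules", []):
        if operation == "WRITE" and resource_type in ("compute", "network"):
            return rule.get("then", "ESCALATE")
    return "ALLOW"

if __name__ == "__main__":
    registry_entry = {
        "allowed_operations": ["READ", "WRITE"],
        "blocked_operations": ["ADMIN"],
        "approval_rules": [{"when": "operation == 'WRITE' && resource_type in ['compute','network']",
                            "then": "ESCALATE"}],
    }
    print(gate_operation(registry_entry, "WRITE", "compute"))  # ESCALATE
    print(gate_operation(registry_entry, "ADMIN", "iam"))      # DENY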

Step 3 — Deploy a Policy Bundle (versioned, signed controls)

Policies must be deployable artifacts:

  • versioned
  • approved
  • signed
  • distributed to enforcement points (gateway/sidecar)

Example: Policy bundle (concept)

{
  "bundle_id": "fg-policy",
  "version": "1.0.3",
  "mode": {
    "default": "ENFORCE",
    "unknown_tool": "DENY"
  },
  "controls": [
    {
      "id": "CTL-010",
      "name": "High-risk approvals",
      "when": "tool.risk_tier in ['HIGH','CRITICAL'] && request.op in ['WRITE','DELETE']",
      "then": "ESCALATE",
      "step_up_auth": true
    },
    {
      "id": "CTL-001",
      "name": "Scope restriction",
      "when": "request.scope within tool.scope_constraints",
      "then": "ALLOW"
    }
  ],
  "integrity": {
    "bundle_hash": "sha256:3bf1...",
    "signed_by": "kms://fusegov/prod/policy-signing"
  }
}

This is how auditors can trace exactly which rules were active at the time of any event.
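
One way to make that traceability checkable is to recompute a content hash over the bundle and compare it with the recorded value. The sketch below assumes a simple canonical-JSON-plus-SHA-256 scheme; FuseGov's actual canonicalization and KMS signature verification are not shown.

import hashlib
import json

# Sketch only: recompute a content hash over the bundle (minus its integrity
# block) and compare it with the recorded value. The canonicalization scheme
# here is an assumption; signature verification against the signing key is omitted.
def bundle_hash(bundle: dict) -> str:
    content = {k: v for k, v in bundle.items() if k != "integrity"}
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":")).encode()
    return "sha256:" + hashlib.sha256(canonical).hexdigest()

def verify_bundle(bundle: dict) -> bool:
    recorded = bundle.get("integrity", {}).get("bundle_hash", "")
    return bundle_hash(bundle) == recorded

if __name__ == "__main__":
    bundle = {"bundle_id": "fg-policy", "version": "1.0.3",
              "controls": [{"id": "CTL-010", "then": "ESCALATE"}]}
    bundle["integrity"] = {"bundle_hash": bundle_hash(bundle)}
    print(verify_bundle(bundle))  # True; any edit to the controls flips this to False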

Step 4 — The agent attempts a high-risk action

The agent proposes a tool call:

  • Tool: cloud-control-plane (CRITICAL)
  • Operation: WRITE
  • Target: compute.scale_down

Because this meets the approval rule, the enforcement point escalates.

Action request (concept)

{
  "trace_id": "tr_9c21",
  "agent_id": "agent-finops-001",
  "tool_id": "cloud-control-plane",
  "operation": "WRITE",
  "resource_type": "compute",
  "action": "scale_down",
  "scope": {
    "account": "sandbox-prod",
    "region": "ap-southeast-2"
  },
  "requested_change": {
    "instance_id": "i-123456",
    "from": "m5.4xlarge",
    "to": "m5.2xlarge"
  }
}

Step 5 — Enforcement decision: escalate to human approval

FuseGov records:

  • policy version
  • triggered controls
  • escalation reason
  • required approver + step-up auth

sequenceDiagram
  participant Agent
  participant PEP as Gateway/Sidecar PEP
  participant Approver
  participant Tool
  participant Evidence as Evidence Pack

  Agent->>PEP: Request high-risk action
  PEP->>PEP: Evaluate policy bundle + tool registry
  PEP->>Approver: Escalate (step-up auth)
  Approver-->>PEP: Approve / Reject
  alt Approved
    PEP->>Tool: Execute tool call
    Tool-->>PEP: Result / state change
  else Rejected
    PEP-->>Agent: Deny + rationale
  end
  PEP->>Evidence: Emit decision + approval + outcome
  Evidence-->>Approver: Owner/security notifications
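
A minimal sketch of the evaluation the PEP performs before escalating, using the control IDs from the policy bundle above. The matching logic is an assumption and only encodes the high-risk rule from this walkthrough.

from dataclasses import dataclass, field
from typing import List

# Sketch of the policy-evaluation step a gateway/sidecar PEP performs.
# Control IDs mirror the policy bundle above; the matching logic is simplified
# and assumed, not the real FuseGov evaluator.
@dataclass
class PepDecision:
    decision: str
    triggered_controls: List[str] = field(default_factory=list)
    reason: str = ""
    step_up_auth: bool = False

def evaluate(request: dict, tool: dict) -> PepDecision:
    high_risk = tool["risk_tier"] in ("HIGH", "CRITICAL")
    mutating = request["operation"] in ("WRITE", "DELETE")
    if high_risk and mutating:
        return PepDecision(
            decision="ESCALATE",
            triggered_controls=["CTL-010"],
            reason=f"{tool['risk_tier']} tool with {request['operation']} operation requires human approval",
            step_up_auth=True,
        )
    return PepDecision(decision="ALLOW", triggered_controls=["CTL-001"])

if __name__ == "__main__":
    request = {"trace_id": "tr_9c21", "operation": "WRITE", "resource_type": "compute"}
    tool = {"tool_id": "cloud-control-plane", "risk_tier": "CRITICAL"}
    print(evaluate(request, tool))  # ESCALATE with CTL-010 and step-up required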

Step 6 — Human approves (step-up authentication)

Approval captures:

  • approver identity
  • reason
  • step-up confirmation
  • timestamp
  • optional constraints (time window, max scope)

Approval event (concept)

{
  "trace_id": "tr_9c21",
  "approval_id": "apr_4412",
  "approver": {
    "name": "FinOps Manager",
    "email": "approver@example.com",
    "step_up_auth": true
  },
  "decision": "APPROVE",
  "rationale": "Scale-down approved for non-critical workload to reduce spend.",
  "constraints": {
    "valid_for_minutes": 15,
    "max_instances": 1
  },
  "timestamp": "2026-01-09T02:10:00Z"
}
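
Before executing, the enforcement point can re-check that the approval is still usable. The sketch below assumes the expiry rule is timestamp plus valid_for_minutes and that step-up authentication is mandatory; both are assumptions based on the event above.

from datetime import datetime, timedelta, timezone

# Sketch: re-check an approval event like the one above before execution.
# The expiry rule (timestamp + valid_for_minutes) and the mandatory step-up
# check are assumptions for illustration.
def approval_is_usable(approval: dict, now: datetime) -> bool:
    if approval.get("decision") != "APPROVE":
        return False
    if not approval.get("approver", {}).get("step_up_auth"):
        return False
    granted_at = datetime.fromisoformat(approval["timestamp"].replace("Z", "+00:00"))
    ttl = timedelta(minutes=approval.get("constraints", {}).get("valid_for_minutes", 0))
    return now <= granted_at + ttl

if __name__ == "__main__":
    approval = {
        "decision": "APPROVE",
        "approver": {"step_up_auth": True},
        "constraints": {"valid_for_minutes": 15},
        "timestamp": "2026-01-09T02:10:00Z",
    }
    print(approval_is_usable(approval, datetime(2026, 1, 9, 2, 20, tzinfo=timezone.utc)))  # True
    print(approval_is_usable(approval, datetime(2026, 1, 9, 3, 0, tzinfo=timezone.utc)))   # False (expired)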

Step 7 — Action executes and outcomes are observed

FuseGov captures:

  • execution result
  • post-action verification signal (read-after-write or system event)
  • what changed

Outcome observation (concept)

{
  "trace_id": "tr_9c21",
  "tool_id": "cloud-control-plane",
  "execution_status": "SUCCESS",
  "observed_change": {
    "instance_id": "i-123456",
    "instance_type_before": "m5.4xlarge",
    "instance_type_after": "m5.2xlarge"
  },
  "timestamp": "2026-01-09T02:10:08Z"
}
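
A common way to produce this observation is a read-after-write check: re-read the resource and diff it against the requested change. In the sketch below, describe_instance is a hypothetical callable standing in for whatever client actually reads the resource state; it is not a real FuseGov or cloud SDK function.

from typing import Callable, Dict

# Sketch of a read-after-write observation. `describe_instance` is a
# hypothetical callable standing in for the client that reads resource state
# (cloud SDK, CMDB, system event, etc.).
def observe_outcome(requested_change: Dict[str, str],
                    describe_instance: Callable[[str], Dict[str, str]]) -> Dict[str, object]:
    current = describe_instance(requested_change["instance_id"])
    return {
        "execution_status": "SUCCESS" if current["instance_type"] == requested_change["to"] else "MISMATCH",
        "observed_change": {
            "instance_id": requested_change["instance_id"],
            "instance_type_before": requested_change["from"],
            "instance_type_after": current["instance_type"],
        },
    }

if __name__ == "__main__":
    fake_reader = lambda instance_id: {"instance_type": "m5.2xlarge"}  # stub reader for the demo
    print(observe_outcome({"instance_id": "i-123456", "from": "m5.4xlarge", "to": "m5.2xlarge"}, fake_reader))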

Step 8 — Outcome verification (aligned vs not aligned)

Outcome verification compares:

  • declared intent/purpose
  • policy constraints
  • expected outcome model
  • observed outcome

Example verification: aligned

{
  "trace_id": "tr_9c21",
  "verification": {
    "status": "ALIGNED",
    "expected": "Scale down 1 non-critical instance within sandbox-prod",
    "observed": "Scaled down 1 instance within sandbox-prod",
    "checks": [
      "scope_ok",
      "budget_cap_ok",
      "no_iam_changes",
      "no_data_export"
    ]
  }
}

Example verification: not aligned (triggers containment)

{
  "trace_id": "tr_9c21",
  "verification": {
    "status": "NOT_ALIGNED",
    "reason": "Unexpected IAM policy change detected in same session",
    "containment": "PAUSE_ACTIONS",
    "notifications": ["agent_owner", "security_control_owner"]
  }
}

If verification fails, FuseGov can automatically:

  • pause actions
  • quarantine the agent
  • or force escalation-only mode until reviewed
  • …and evidence every step.
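
A minimal sketch of that verification step, using the check names from the examples above. The individual checks and the containment response are assumptions for illustration.

from typing import Dict

# Sketch of the verification step: run named checks against the observed
# outcome and the declared boundaries, then decide ALIGNED vs NOT_ALIGNED.
# Check names mirror the examples above; the logic is assumed.
def verify_outcome(observed: Dict, boundaries: Dict) -> Dict:
    checks = {
        "scope_ok": observed.get("account") in boundaries.get("allowed_accounts", []),
        "no_iam_changes": not observed.get("iam_changed", False),
        "no_data_export": not observed.get("data_exported", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    if not failed:
        return {"status": "ALIGNED", "checks": list(checks)}
    return {
        "status": "NOT_ALIGNED",
        "reason": f"failed checks: {', '.join(failed)}",
        "containment": "PAUSE_ACTIONS",
        "notifications": ["agent_owner", "security_control_owner"],
    }

if __name__ == "__main__":
    boundaries = {"allowed_accounts": ["sandbox-prod"]}
    print(verify_outcome({"account": "sandbox-prod"}, boundaries))                       # ALIGNED
    print(verify_outcome({"account": "sandbox-prod", "iam_changed": True}, boundaries))  # NOT_ALIGNED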

Step 9 — Evidence Pack generated (audit-ready, exportable)

The Evidence Pack bundles:

  • request + decision
  • policy version + hash
  • approval trail
  • execution telemetry
  • outcome verification
  • notifications

Evidence pack header (concept)

{
  "evidence_pack_id": "ep_7f93",
  "trace_id": "tr_9c21",
  "agent_id": "agent-finops-001",
  "policy_version": "1.0.3",
  "controls_evaluated": ["CTL-010", "CTL-001"],
  "decision": "ESCALATE",
  "approval_id": "apr_4412",
  "outcome_verification": "ALIGNED",
  "integrity": {
    "event_hash": "sha256:a3f2...",
    "previous_hash": "sha256:9c1d..."
  }
}
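
The integrity block suggests a hash chain: each pack's event_hash covers its own content plus the previous pack's hash, so tampering with any earlier record breaks every later link. A minimal sketch of that idea follows; the canonicalization and field handling are assumptions, not the real scheme.

import hashlib
import json
from typing import Dict, List

# Minimal hash-chain sketch: each pack's event_hash covers its own content
# plus the previous hash, so editing any earlier pack invalidates the chain.
# The canonicalization and field names are assumptions, not the real scheme.
def chain_hash(content: Dict, previous_hash: str) -> str:
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256((previous_hash + canonical).encode()).hexdigest()

def append_pack(chain: List[Dict], content: Dict) -> None:
    previous = chain[-1]["integrity"]["event_hash"] if chain else "sha256:genesis"
    chain.append({**content,
                  "integrity": {"event_hash": chain_hash(content, previous),
                                "previous_hash": previous}})

def chain_is_intact(chain: List[Dict]) -> bool:
    previous = "sha256:genesis"
    for pack in chain:
        content = {k: v for k, v in pack.items() if k != "integrity"}
        if pack["integrity"]["previous_hash"] != previous:
            return False
        if pack["integrity"]["event_hash"] != chain_hash(content, previous):
            return False
        previous = pack["integrity"]["event_hash"]
    return True

if __name__ == "__main__":
    chain: List[Dict] = []
    append_pack(chain, {"trace_id": "tr_9c21", "decision": "ESCALATE"})
    append_pack(chain, {"trace_id": "tr_9c21", "outcome_verification": "ALIGNED"})
    print(chain_is_intact(chain))   # True
    chain[0]["decision"] = "ALLOW"  # tamper with history
    print(chain_is_intact(chain))   # False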

Exports can go to:

  • SIEM/SOC for detection
  • GRC for audit testing
  • data lake for analytics and drift detection

Result (the three things enterprises need)

  • Outcome aligned / not aligned
  • Human owner notified
  • Evidence Pack generated

Tushar Mishra
FuseGov Team | Autonomous Systems Governance
