The ISSM Test — Building a System I Would Have to Audit

Travis D. Butera  ·  ISSM, U.S. Navy Senior Chief

For 18+ years, I have walked into other people’s systems and told them what was wrong. I have found Plan of Action and Milestones (POA&M) entries that had not been touched in two years. I have found Assured Compliance Assessment Solution (ACAS) scans run on schedule but never read. I have found Key Management Infrastructure (KMI) programs that looked compliant on paper and were indefensible in person.

Last year, I became the engineer. I built Combat Information Center (CIC) — a production artificial intelligence (AI) orchestrator that runs my home infrastructure. And because I cannot turn off 18 years of Information System Security Manager (ISSM) instinct, I did something most personal project developers never do: I ran an honest assessment of my own work against the same standard I apply to operational systems.

Here is what I found.

The System

CIC is not a hobby script. It is a production system: 10 containerized services managed via docker-compose and systemd, running on a Hyper-V virtual machine on bare metal, with a FastAPI backend, PostgreSQL 16 database, and live integrations to Cloudflare and OPNsense firewall application programming interfaces (APIs). The primary function is an AI chat interface powered by Anthropic Claude tool-use that can interrogate and control my home network in natural language.

The repository is public at github.com/tdbutera/cic. Every commit, every pull request, every CI/CD gate is visible. That transparency was intentional — I wanted a system I could not hide behind.

Control 1: AC-6 — Least Privilege

Risk Management Framework (RMF) control AC-6 requires that users and processes receive only the permissions necessary to perform their function. In operational environments, I have seen this violated constantly: service accounts with domain admin, technicians with root on every box, audit users who can write to the audit log.

When I built CIC’s Cloudflare integration, my first instinct was to generate an API token with full account access — easy, convenient, and exactly what I would flag in an assessment. Instead I stopped, went back to the Cloudflare dashboard, and created a scoped token with exactly two permissions: Zone:DNS:Edit for a single zone and Zone:Cache:Purge. Nothing else. The OPNsense integration gets a dedicated user account with only the API privileges that service actually needs.
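The least-privilege habit can also be enforced in code rather than trusted to memory. Here is a minimal sketch of a startup guard that refuses to run if the integration's token carries more scopes than expected. The scope names, function names, and the idea of receiving a flat list of granted scopes are all illustrative assumptions, not Cloudflare's actual permission-group schema or CIC's implementation:

```python
# Hypothetical startup guard: fail fast if the API token is over-scoped.
# Scope names are illustrative, not real Cloudflare permission-group names.
from typing import Iterable

# The only permissions this service is allowed to hold.
ALLOWED_SCOPES = {"zone.dns.edit", "zone.cache.purge"}


def token_is_least_privilege(granted_scopes: Iterable[str]) -> bool:
    """True only if every granted scope appears in the allowlist."""
    return set(granted_scopes) <= ALLOWED_SCOPES


def assert_least_privilege(granted_scopes: Iterable[str]) -> None:
    """Raise at startup rather than run with excess privilege."""
    extra = set(granted_scopes) - ALLOWED_SCOPES
    if extra:
        raise PermissionError(
            f"Token over-scoped; unexpected grants: {sorted(extra)}"
        )
```

Called once at service startup, a guard like this turns a silent AC-6 violation into a hard failure, which is the direction you want the failure mode to point.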

The uncomfortable truth: scoping permissions properly took three times longer than using broad credentials would have. That is why commands do not do it. It requires a discipline that, in most organizations, only shows up when someone is checking.

The civilian hiring objection I hear most often is: “Can military IT people actually do the work, or just manage it?” The answer is in the commit history.

Control 2: Continuous Monitoring — The CI/CD Pipeline as ConMon

Under the RMF, continuous monitoring (ConMon) is the ongoing process of verifying that your security posture matches your documentation. In practice, it means running scans, reviewing findings, and having humans accountable for what the system reports.

The CIC Continuous Integration/Continuous Deployment (CI/CD) pipeline enforces something structurally similar. Nine parallel quality gates run on every pull request: static analysis, strict type checking with mypy --strict, unit tests, dependency vulnerability scanning, secrets scanning, container linting, and more. No merge occurs without a 100 percent pass rate.
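Gates like these might look something like the following in a GitHub Actions workflow. This is a hypothetical sketch, not CIC's actual pipeline; the job names, paths, and tool choices (ruff, mypy, pytest, pip-audit) are illustrative assumptions:

```yaml
# Hypothetical sketch of parallel PR quality gates (not CIC's actual workflow).
name: quality-gates
on: [pull_request]
jobs:
  lint:                       # static analysis
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx run ruff check .
  typecheck:                  # strict type checking
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install mypy && mypy --strict .
  tests:                      # unit tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install pytest && pytest
  dep-audit:                  # dependency vulnerability scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx run pip-audit
```

Jobs run in parallel by default; combined with branch protection that requires every check to pass, this is what enforces the 100 percent gate before merge.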

The analogy is not perfect — a CI/CD pipeline is not a security accreditation. But the discipline is identical: automation that makes it harder to merge insecure code than to fix it. That is exactly what a mature ConMon program is supposed to accomplish. Running the scan is not the mission. Acting on what it finds is the mission.

Control 3: Data Classification — The Privacy Routing Layer

One of the harder problems in classified environments is data boundary enforcement: ensuring that information does not leave its authorized boundary. I have seen this failure in the wild — personnel who sent Controlled Unclassified Information (CUI) through personal email because it was faster, or who used a public AI chatbot to summarize a sensitive document because the approved tool was slow.

CIC has this problem too. My home network contains information I do not want traversing Anthropic’s API: network topology, device inventory, firewall rules, credentials. When a user query requires that context, I do not send it to Claude. The routing layer intercepts the query, identifies the data classification tier, and processes privacy-sensitive requests entirely locally using a smaller model with no external API call.
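The routing idea reduces to a small decision function: classify the query into a data tier, then keep sensitive tiers on the local model. The sketch below is illustrative only; the tier names, regex patterns, and backend labels are assumptions standing in for a real classifier, not CIC's implementation:

```python
# Hypothetical sketch of classification-tier routing: sensitive queries
# never leave the box. Patterns and names are illustrative, not CIC's code.
import re
from enum import Enum


class Tier(Enum):
    PUBLIC = "public"        # safe to send to a commercial API
    SENSITIVE = "sensitive"  # must be processed locally


# Toy patterns standing in for a real classifier.
SENSITIVE_PATTERNS = [
    r"\bfirewall\b",
    r"\bcredential",
    r"\btopology\b",
    r"\bdevice inventory\b",
    r"\b\d{1,3}(\.\d{1,3}){3}\b",  # IPv4 literal
]


def classify(query: str) -> Tier:
    q = query.lower()
    if any(re.search(p, q) for p in SENSITIVE_PATTERNS):
        return Tier.SENSITIVE
    return Tier.PUBLIC


def route(query: str) -> str:
    """Return which backend handles the query."""
    return "local-model" if classify(query) is Tier.SENSITIVE else "cloud-api"
```

The important design property is that the decision happens before any network call is constructed, so a misclassification fails closed on the local side rather than leaking outbound.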

In Sensitive Compartmented Information Facility (SCIF) environments and Department of Defense (DoD) cloud enclaves, commercial AI APIs are prohibited for exactly this reason. The architecture I built at home is directly applicable to those environments — not because I planned it that way, but because the threat model is structurally the same.

Control 4: Audit Trail — What I Did Not Do Well

Here is where the honest ISSM assessment diverges from the confident builder narrative.

My audit logging is insufficient. I have systemd journal logs and FastAPI request logs, but I do not have a centralized, tamper-evident audit trail with correlation IDs that would satisfy an assessor. If I were auditing CIC as an operational system, I would write a POA&M finding: Audit trail does not meet AU-3 (Content of Audit Records) or AU-9 (Protection of Audit Information) requirements. Planned remediation: structured logging to append-only store with integrity verification.
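The planned remediation, structured logging to an append-only store with integrity verification, can be sketched as a hash-chained log: each record carries the hash of its predecessor, so any in-place edit breaks verification from that point forward. This is a minimal in-memory illustration of the technique, not the production design:

```python
# Hypothetical sketch of a hash-chained, tamper-evident audit log.
# Any edit to a past record invalidates every hash after it.
import hashlib
import json
from typing import List


def _digest(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditLog:
    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: List[dict] = []

    def append(self, event: str, actor: str, correlation_id: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"event": event, "actor": actor,
                  "correlation_id": correlation_id}
        self.entries.append({**record, "hash": _digest(prev, record)})

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = self.GENESIS
        for e in self.entries:
            record = {k: v for k, v in e.items() if k != "hash"}
            if e["hash"] != _digest(prev, record):
                return False
            prev = e["hash"]
        return True
```

A real implementation would also ship the chain head off-box (AU-9's protection requirement), since an attacker who can rewrite the whole store can rebuild the whole chain.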

That finding is on my backlog. It is not closed. I know this because I ran the assessment honestly rather than marking the control “implemented” and moving on.

The most dangerous thing in an RMF package is not a missing control. It is a control marked “Implemented” that nobody can defend. I learned that by watching other people do it, and confirmed it by almost doing it myself.

What Building This Taught Me

Three things:

First, security discipline is a habit, not a compliance event. The developers I assess who do security well are not following a checklist during a pre-assessment sprint. They built the habit into their workflow. Scoped tokens every time. Secrets in environment files every time. Pull requests with gates, every time. It costs nothing once it is the default.

Second, the gap between “compliant” and “secure” is visible from the inside. CIC would probably pass a surface-level assessment. The audit logging gap is real but not immediately apparent. The only reason I know about it is that I looked for it. That is what ISSMs are supposed to do — not just verify that boxes are checked, but understand the system well enough to know what the boxes are actually measuring.

Third, the skills transfer. The same threat modeling, the same control mapping, the same honest assessment process that I apply to operational submarine networks is directly applicable to commercial infrastructure. The context changes. The discipline does not.

The repository is at github.com/tdbutera/cic. The commit history is the evidence.

Travis D. Butera

Travis D. Butera
U.S. Navy Senior Chief & ISSM with 18+ years executing Department of Defense (DoD) cybersecurity, RMF/ATO lifecycles, and enterprise IT programs across seven operational submarines. Builder of CIC — production AI infrastructure at github.com/tdbutera/cic. Navy Enlisted Classification (NEC) 741A (ISSM), NEC 742A (NSVT). Active TS/SCI. Available October 2027.

travis@buteranet.com  ·  buteranet.com