How CIOs can combine procedural and semantic scaffolding to make AI reliable across missions and markets.

The fragility of AI brilliance
Artificial intelligence (AI) is already reshaping the way government agencies analyze intelligence, monitor threats and deliver services. Today’s frontier models can sift through massive text collections, draft reports in minutes and summarize field data with speed no human team could match. GDPval, a benchmark OpenAI released in September to measure how well AI models perform on professional tasks tied to US economic output, confirms that these systems are beginning to deliver professional-level work on tasks that map directly to economic activity and national security missions.
Yet alongside this brilliance is fragility. The same models that generate situational reports in record time can misinterpret basic instructions, misformat key documents or hallucinate facts that undermine trust. A model may produce a border security report but mislabel a lawful crossing as a breach. It may summarize FEMA’s disaster assessments but confuse damaged infrastructure with unaffected facilities.
These are not trivial errors. In the homeland security environment, they carry operational and political consequences. The tension between dazzling performance and brittle reliability is what makes scaffolding essential.
Insights from GDPval
GDPval did more than measure headline accuracy. It showed how prompt scaffolding, which structures workflows so models must check and re-check their own work, directly improves reliability. Rendering outputs into documents to verify formatting caught avoidable errors. Using best-of-N sampling with a separate judge model raised quality ratings. These methods did not make the models inherently more intelligent, but they made them more disciplined.
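For illustration, a best-of-N loop with a separate judge can be sketched in a few lines of Python. The draft_report and judge_score functions below are hypothetical stand-ins for calls to a generator model and a judge model; the crude formatting heuristic in the judge stub simply keeps the example runnable.

```python
# A minimal sketch of best-of-N sampling with a separate judge, in the spirit of the
# GDPval-style scaffolding described above. Both model calls are illustrative stubs.

def draft_report(prompt: str, seed: int) -> str:
    # Stub for a generator-model call; returns one candidate report.
    return f"[candidate {seed}] SITUATION REPORT\nSummary: ...\nSources: ..."

def judge_score(prompt: str, candidate: str) -> float:
    # Stub for a judge-model call; a simple formatting check stands in for a
    # real quality rating (more required sections present => higher score).
    required = ("SITUATION REPORT", "Summary:", "Sources:")
    return sum(section in candidate for section in required) / len(required)

def best_of_n(prompt: str, n: int = 5) -> str:
    # Generate N independent candidates and keep the one the judge rates highest.
    candidates = [draft_report(prompt, seed=i) for i in range(n)]
    return max(candidates, key=lambda c: judge_score(prompt, c))

print(best_of_n("Draft today's border situation report."))
```

The pattern adds no new knowledge to the model; it simply spends extra computation on generating alternatives and selecting among them, which is exactly the kind of discipline GDPval found valuable.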
According to OpenAI’s GDPval announcement, “In the real world, tasks aren’t always clearly defined with a prompt and reference files… we plan to expand GDPval to include more occupations, industries and task types, with the long-term goal of better measuring progress on diverse knowledge work.”
Even the strongest systems stumble when they misapply domain concepts or ignore reference data. These are exactly the kinds of failures that ontologies prevent.
Similar risks appear in the private sector, too. Banks deploying AI for fraud detection, insurers automating claims triage and healthcare systems summarizing patient records all face the same gap between procedural accuracy and semantic understanding.
The same structural weaknesses surface across industries. In finance, models trained for fraud detection may flag legitimate transactions as anomalies when semantic context, such as account type or jurisdiction, is missing. In healthcare, generative systems that draft clinical summaries can conflate diagnosis categories or misclassify procedures, creating compliance risks under HIPAA and payer rules. In manufacturing, predictive maintenance models may misinterpret vibration data if ontology-based asset hierarchies are not in place, leading to unnecessary downtime or safety issues. In media and legal services, AI summarization systems may blur distinctions between client privilege, intellectual property and public data when meaning is not properly constrained.
Building reliability: Procedural and semantic scaffolding
Prompt scaffolding is procedural. It tells the model how to behave. Instead of relying on a single raw prompt, scaffolding builds a process in which the model generates, checks and revises. It is the checklist you would give a junior analyst. This discipline catches formatting errors and skipped sections but does not address what the model knows.
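In code, that checklist is little more than a generate-check-revise loop. The sketch below assumes a hypothetical call_model function and a handful of illustrative checks; it shows the pattern, not a production implementation.

```python
# A minimal sketch of a generate-check-revise procedural scaffold. The model call
# and checklist items are hypothetical placeholders.

CHECKLIST = [
    ("has_summary", lambda text: "Summary:" in text),
    ("has_sources", lambda text: "Sources:" in text),
    ("no_placeholder_text", lambda text: "TBD" not in text),
]

def call_model(prompt: str) -> str:
    # Hypothetical model call; replace with a real API request.
    return "Summary: ...\nSources: ...\n"

def generate_with_checks(prompt: str, max_revisions: int = 2) -> str:
    draft = call_model(prompt)
    for _ in range(max_revisions):
        failures = [name for name, check in CHECKLIST if not check(draft)]
        if not failures:
            break
        # Feed the failed checklist items back to the model and request a revision.
        draft = call_model(f"{prompt}\n\nRevise to fix: {', '.join(failures)}\n\n{draft}")
    return draft
```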
The deeper failures of AI are semantic. A model may follow every step and still get the meaning wrong, confusing a humanitarian encounter with a narcotics seizure or misclassifying debris clearance as infrastructure repair. Semantic modeling with ontologies can address this. Ontologies create formal representations of a domain, defining entities, distinguishing categories and encoding the rules that govern valid states of affairs. Frameworks aligned with the ISO/IEC 21838 top-level ontology standard provide the formal foundations for such semantic scaffolding; one example is the suite of domain frameworks aligned with the Common Core Ontologies (CCO), developed in coordination with the National Institute of Standards and Technology (NIST) and grounded in that standard.
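A toy example makes the idea concrete. The category hierarchy and the single rule below are illustrative inventions, not drawn from CCO or any agency data model, but they show how an ontology can reject an output that follows every procedural step yet gets the meaning wrong.

```python
# A minimal sketch of semantic scaffolding: a toy ontology of event categories plus
# one rule constraining valid classifications. Categories and rule are illustrative.

# Subclass relations: each category maps to its parent category.
SUBCLASS_OF = {
    "HumanitarianEncounter": "BorderEncounter",
    "LawfulCrossing": "BorderEncounter",
    "NarcoticsSeizure": "EnforcementAction",
    "BorderEncounter": "Event",
    "EnforcementAction": "Event",
}

def is_a(category: str, ancestor: str) -> bool:
    # Walk the subclass hierarchy to test whether `category` falls under `ancestor`.
    while category in SUBCLASS_OF:
        if category == ancestor:
            return True
        category = SUBCLASS_OF[category]
    return category == ancestor

def validate_classification(event: dict) -> list[str]:
    # Rule: an event with no recovered contraband cannot be an enforcement action.
    errors = []
    if is_a(event["category"], "EnforcementAction") and not event.get("contraband"):
        errors.append("EnforcementAction requires recovered contraband")
    return errors

# A model output that confuses a humanitarian encounter with a seizure is rejected.
print(validate_classification({"category": "NarcoticsSeizure", "contraband": []}))
```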
As John Beverley of the University at Buffalo and the National Center for Ontological Research explains, “Prompts can shape how an AI system behaves, but they can’t ground what it knows. Ontologies do that. They give structure to meaning itself. Without that layer of semantic discipline, you’re building on sand, no matter how refined your prompts are.”
Together, procedural scaffolding governs process while semantic scaffolding governs meaning. One prevents careless shortcuts, the other prevents conceptual confusion.
Ontologies are not a replacement for modern machine-learning techniques such as fine-tuning or retrieval-augmented generation. They complement them. Procedural improvements like model-evaluation pipelines, human-in-the-loop review and prompt orchestration can strengthen discipline, but they do not resolve meaning. Ontologies fill that final gap by formalizing context and ensuring consistency across systems that learn differently. The most reliable AI architectures will combine these approaches rather than choose between them.
Real-world consequences
On the border, Customs and Border Protection may deploy AI to generate daily situation reports. Without an ontology, the system may confuse a smuggling event with a humanitarian encounter, distorting the operational picture for decision makers.
At FEMA, AI could draft impact assessments but may misclassify critical infrastructure damage, treating a flooded hospital as equivalent to a flooded road, skewing resource allocation.
At the TSA or Coast Guard, AI might be used to evaluate compliance reports but could conflate international regulatory requirements with internal guidelines, leading to errors with legal and diplomatic consequences.
These are not stylistic mistakes. They strike at the heart of mission effectiveness.
Whether the mission is public or commercial, the types of failure are the same. Procedural scaffolds catch surface errors, while semantic scaffolds prevent deeper misunderstandings. CIOs building AI for customer service, risk analysis or logistics need both. The governance structures used in defense and emergency management, such as checklists, validation steps and ontological frameworks, translate directly into enterprise practices like model cards, data catalogs and compliance ontologies. What changes is the vocabulary of the domain, not the underlying logic of reliability.
Leadership implications for CIOs and the path to scaffolded intelligence
For homeland security leaders, the implications are immediate. Model size produces capability, not trust. Trust comes from the structures you build around the model. Relying on prompts alone is like hiring a promising intern and giving them only a checklist. They may follow the process, but without subject-matter understanding, they will still make fundamental mistakes. An ontology is the professional education that ensures comprehension.
The same principle applies in the private sector. A global bank, a hospital system or a logistics enterprise may all deploy AI at scale, but without semantic governance, each risks elegant failure: outputs that are procedurally correct yet semantically wrong.
The path forward lies in hybrid architectures where statistical models are embedded within scaffolded systems. Models generate hypotheses, procedural scaffolds enforce process discipline and ontologies validate outputs against domain rules. This not only improves accuracy but also moves AI closer to auditable reasoning. When a system reaches a conclusion, leaders can trace it back to a formal relation in the ontology rather than a statistical guess. That auditability is what makes AI trustworthy in high-stakes national missions.
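A hedged sketch of that hybrid loop might look like the following, with hypothetical stubs standing in for the model call, the procedural checklist and the ontology rule. The point is the audit trail: a rejected draft points back to a named rule, not a statistical hunch.

```python
# A minimal sketch of the hybrid architecture described above: a statistical model
# proposes an output, a procedural scaffold checks form, an ontology-derived rule
# checks meaning, and every decision is recorded. All stubs are illustrative.

def propose(prompt: str) -> dict:
    # Hypothetical model call returning a structured draft plus a classification.
    return {"report": "Summary: ...\nSources: ...", "category": "NarcoticsSeizure", "contraband": []}

def procedural_checks(draft: dict) -> list[str]:
    # Process discipline: required sections must be present.
    return [f"missing section: {s}" for s in ("Summary:", "Sources:") if s not in draft["report"]]

def semantic_checks(draft: dict) -> list[str]:
    # Ontology rule: an enforcement action must have recovered contraband.
    if draft["category"] == "NarcoticsSeizure" and not draft["contraband"]:
        return ["violates rule: EnforcementAction requires recovered contraband"]
    return []

def scaffolded_pipeline(prompt: str) -> dict:
    draft = propose(prompt)
    audit = {"procedural": procedural_checks(draft), "semantic": semantic_checks(draft)}
    # The audit trail lets a reviewer trace a rejection to a specific formal rule
    # rather than to an opaque statistical judgment.
    return {"draft": draft, "accepted": not (audit["procedural"] or audit["semantic"]), "audit": audit}

print(scaffolded_pipeline("Draft today's border situation report."))
```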
The future of AI — whether in homeland security, finance or healthcare — will not be defined by scale but by scaffolding. Leaders who invest in both procedural and semantic scaffolding will shape the systems we can depend on to secure borders, respond to disasters and protect the nation.
This article is published as part of the Foundry Expert Contributor Network.