Practical Magic

Standard methods for checking AI work can spot lies but are completely blind to the facts the AI leaves out.

April 26, 2026

Original Paper

The Omission Gap: Defining the Professional Standard of Care for Human Review of AI Outputs

SSRN · 6631319

The Takeaway

Professionals usually verify AI summaries by taking the output and trying to find each of its claims in the original source. This output-to-source check catches hallucinations but fails to notice when the AI drops critical information that was present in the source. A truly safe review requires a source-to-output check, in which every material piece of the original document is accounted for in the final summary. Most current business and legal workflows miss a large portion of the risk because they verify in the wrong direction. The real danger of AI is not just what it makes up; it is the silent omissions that no one is looking for.
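The two directions of verification can be sketched in a toy Python example. This is my illustration, not code from the paper: facts are represented as plain strings, which sidesteps the genuinely hard problem of extracting atomic claims from real documents.

```python
def output_to_source_check(summary_facts, source_facts):
    """The common review direction: flag summary claims absent from
    the source. Catches hallucinations, says nothing about omissions."""
    return {f for f in summary_facts if f not in source_facts}


def source_to_output_check(source_facts, summary_facts):
    """The omission check: flag source facts that never made it into
    the summary. This is the direction most workflows skip."""
    return {f for f in source_facts if f not in summary_facts}


# Hypothetical contract facts, purely for illustration.
source = {"deadline is 1 May", "penalty clause applies", "fee is 2%"}
summary = {"deadline is 1 May", "fee is 2%", "governed by UK law"}

hallucinations = output_to_source_check(summary, source)
# {'governed by UK law'} - invented by the AI, caught by output-to-source
omissions = source_to_output_check(source, summary)
# {'penalty clause applies'} - silently dropped, invisible to output-to-source
```

The point of the contrast: a reviewer who only runs the first check would approve this summary even though it buries a penalty clause, which is exactly the omission gap the paper describes.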

From the abstract

Across the UK architecture, engineering and construction sector, professional indemnity insurers and regulators increasingly require “human review” or “output review” of AI-generated outputs. As currently stated, the requirement is not operationally defined: it rarely specifies what kind of review is expected, and it provides no reliable protection against one of the most consequential categories of AI error, omission of material information from source documents. This paper identifies […]