Your AI Updated the File. Did It Preserve What It Didn't Touch?

PG-005 April 6, 2026 Thomas W. Gantz

A companion verification procedure for any AI-assisted file update

A three-step verification procedure: save the prior version, run a comparison pass, and decide for yourself what counts as an acceptable difference.
The AI's failure mode is not in what it edited. It is in what it silently compressed, reworded, or dropped while carrying the rest of the file forward.

A different failure mode

When you ask an AI to edit a document, the risk is that it changes things you did not intend to change. PG-01 addresses that problem directly, with an Edit Pass and a Verification Pass that confirms intended changes were made correctly.

This guide addresses something different. Editing means changing content. Updating a structured file means adding or modifying specific elements while the rest is supposed to stay exactly as written. When you ask an AI to update a structured file -- adding a new section, logging a new entry, bumping a version number -- the risk is not in what it was asked to change. It is in what it was not asked to change.

The unchanged sections are supposed to be identical to the source. Often they are not.

What silent compression looks like in practice

You ask an instance to add a new entry to a log file. The instance adds the entry correctly. But in the process of producing the full updated file, it also:

-- compresses a detailed paragraph elsewhere into a shorter summary
-- rewords a rule while keeping its general meaning
-- drops a sentence it judged redundant

None of these changes appear alongside the new entry in any obvious way, because they did not happen there. They happened in the parts the AI was regenerating while completing the task you gave it.

The output looks correct. The new entry is there. The file length is approximately right. The problem is invisible until something downstream depends on the section that was quietly reworded.

A file that passes a quick read is not a file that has been verified. Structural documents require structural verification.
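What structural verification can mean in practice is sketched below: split both versions into sections and compare a cryptographic hash of each, so any section that is not byte-identical is flagged. This is a minimal illustration, not a prescribed tool; the heading pattern (`^Section \d`) is a hypothetical convention and should be adapted to the actual structure of your files.

```python
import hashlib
import re

def split_sections(text):
    """Split a document into {heading: body} by heading lines.
    The heading regex is a stand-in; match your file's real structure."""
    sections, current = {}, "PREAMBLE"
    for line in text.splitlines():
        if re.match(r"^Section \d", line):  # hypothetical heading convention
            current = line.strip()
            sections[current] = []
        else:
            sections.setdefault(current, []).append(line)
    return {h: "\n".join(body) for h, body in sections.items()}

def changed_sections(prior, updated):
    """Return headings of sections whose content is not byte-identical."""
    a, b = split_sections(prior), split_sections(updated)
    digest = lambda s: hashlib.sha256(s.encode()).hexdigest()
    return sorted(
        h for h in a.keys() | b.keys()
        if digest(a.get(h, "")) != digest(b.get(h, ""))
    )
```

An empty result means every section survived verbatim; anything else names exactly the sections that need human attention.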

Why this happens

When asked to produce a full updated file, an AI instance regenerates every section, not just the changed one. Across a long document it works from its own understanding of what the content means, not from the exact prior wording. The unchanged sections -- every part of the file not covered by the update instructions -- are supposed to be reproduced verbatim, but regeneration from meaning offers no such guarantee.

The result is a document close to the original in meaning but not verbatim identical in the sections that were not supposed to change. In most documents this is harmless. In governance files, operational procedures, configuration files, system prompts, checklists, and any structured document where the specific wording of rules matters, it is a reliability failure.

This is not a bug in the AI. It is the default behavior of a system that was not explicitly instructed to treat preservation of unchanged content as a first-class requirement.

The distinction from PG-01

Both PG-01 and this guide address silent loss in AI-assisted document work, but they target different failure modes.

PG-01 asks: Did the AI make the changes I requested without losing anything else?

This guide asks: In the sections I did not ask the AI to touch, is every word exactly as it was?

When to use this procedure

Use this procedure after any AI-assisted file update when:

-- the file is a governance document, operational procedure, configuration file, system prompt, or checklist
-- the exact wording of the unchanged sections carries operational meaning
-- the AI produced the complete updated file rather than a narrow patch

If losing a single sentence from an unchanged section would create a downstream problem, use this procedure. The cost is small. The risk of skipping it is cumulative and often invisible.

The procedure: three steps

Step 1 — Retain the prior version before any update begins
Step 2 — After the update is delivered, run a preservation verification pass
Step 3 — Human reviews the report and confirms or flags

Step 1 — Retain the prior version

Before asking any AI to update a structured file

Save the complete prior version as an immutable artifact before any update session begins. Do not rely on the AI to preserve it. Do not rely on memory. If the prior version is not available for comparison, this procedure cannot be run.

This step costs nothing and enables every subsequent verification. If you skip it, you cannot confirm what changed and what did not.
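One way to make the retained copy cheap and trustworthy is to archive it under a timestamped name, record its SHA-256 digest, and mark it read-only. The sketch below assumes nothing beyond the Python standard library; the `.prior-<timestamp>` naming convention is a hypothetical choice, not a requirement of the procedure.

```python
import hashlib
import shutil
import time
from pathlib import Path

def retain_prior_version(path):
    """Copy the file to a timestamped archive and record its SHA-256,
    so the prior version can be located and trusted later."""
    src = Path(path)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    archive = src.with_name(f"{src.stem}.prior-{stamp}{src.suffix}")
    shutil.copy2(src, archive)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.chmod(0o444)  # read-only, so the archive is not edited by accident
    return archive, digest
```

The recorded digest lets you confirm later that the archive itself was never altered -- the comparison in Step 2 is only as good as the baseline it runs against.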


Step 2 — Run the preservation verification pass

After the updated file is delivered, give both versions to a verification instance

Provide a fresh instance with both the prior version and the new version. Use a different instance than the one that produced the update, or at minimum a fresh session with no memory of the update task. The verification instance has one job: compare the two files and report every difference between them.

YOU SAY:
I am giving you two versions of the same file. Version A is the prior version. Version B is the updated version.

Your job is comparison only. Do NOT rewrite, improve, summarize, or editorialize either version. Do NOT fix anything you find. Do NOT reassure me that the files look correct.

For each difference you find, report:
- Which section it is in
- What the prior version said (exact wording)
- What the updated version says (exact wording)
- Whether this appears to be an intended change or an unintended change

Organize your report by section. After the section-by-section report, produce a summary:
- Total number of differences found
- How many appear to be intended changes
- How many appear to be unintended changes
- Any sections that were entirely absent from one version

[VERSION A -- prior version]
[paste the complete prior version here]

[VERSION B -- updated version]
[paste the complete updated version here]
What to expect: A section-by-section report identifying every difference between the two files, with exact wording from each version side by side. The summary counts should reflect every difference found, not just the ones the instance considers significant. If the instance produces reassurance instead of a report, or begins fixing things rather than reporting them, repeat the instruction explicitly.

The key requirement is exact wording, not paraphrase. You need to see what each version actually says in the differing sections, not a summary of how they differ. Paraphrase can obscure precisely the kind of subtle substitution this procedure is designed to catch.
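The AI's report can also be cross-checked mechanically. A plain unified diff cannot classify differences as intended or unintended, but it is guaranteed to surface every byte-level change, which makes it a useful backstop for the verification instance's report. A minimal sketch using Python's standard `difflib`:

```python
import difflib

def unified_report(prior, updated):
    """Produce a line-level unified diff of the two versions.
    Any output at all means the files are not verbatim identical."""
    diff = difflib.unified_diff(
        prior.splitlines(keepends=True),
        updated.splitlines(keepends=True),
        fromfile="version A (prior)",
        tofile="version B (updated)",
    )
    return "".join(diff)
```

If the AI reports fewer differences than the diff shows, the report is incomplete; if the diff is empty, the unchanged sections are verbatim identical by construction.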


Step 3 — Human reviews and decides

The human is the final authority on what counts as an acceptable difference

Read the report. For each difference identified, decide whether it was intended or not. Intended differences are the ones you asked for. Every other difference is a candidate for correction.

If the report identifies unintended changes, return to the instance that produced the update and request a corrected version with explicit instruction to preserve the affected sections verbatim. Then run the verification pass again on the corrected version.

The AI does not decide what counts as an acceptable difference. You do.

A note on the instruction "do not fix anything"

The verification pass instruction must explicitly prohibit editing and fixing. Without this instruction, AI instances will drift back into improvement mode: they will find something that looks like it could be better and change it rather than reporting it.

This is the same dynamic PG-01 addresses in its verification pass instruction. AI systems default to improvement, not to pure observation. You have to explicitly constrain the task to analysis only, and the constraint needs to be stated plainly enough that the instance does not rationalize its way around it.

What the report should and should not look like

ACCEPTABLE REPORT (this example is drawn from an operations file, but the format applies to any structured document):
 
Section 3.2 -- Default Commenting Posture
Prior version: "Prefer one coherent paragraph over fragmented micro-comments"
Updated version: "Prefer coherent paragraphs over fragmented micro-comments"
Assessment: Appears unintended -- wording changed without instruction
 
Section 9.5 -- New Entry
Prior version: Section did not exist
Updated version: Full new entry present
Assessment: Appears intended -- this is the requested addition
 
Summary: 2 differences found. 1 intended. 1 unintended.

The following responses are not valid verification results, even if they feel reassuring.

-- "The files look substantially the same. The new section was added correctly."
-- "I made a small improvement to Section 3.2 for clarity."
-- "No significant differences detected."

Reassurance without a structured report is not a valid verification result. "Substantially the same" is not "verbatim identical in unchanged sections." If the instance cannot produce a structured report, the verification has not been completed.

What this procedure does not do

This procedure confirms that unchanged sections are verbatim identical to the prior version. It does not verify that the original content was correct or complete to begin with. It does not confirm that the intended changes were made correctly -- that is what PG-01 addresses. And it does not replace the operator's judgment about whether the file as a whole serves its purpose. Those remain human decisions.

Checklist

-- Prior version saved as an immutable artifact before the update began
-- Verification pass run on both versions by a fresh instance
-- Report reviewed by a human; every difference classified as intended or flagged
-- Corrected version re-verified if any unintended changes were found

The cost of this procedure is one additional pass. The cost of skipping it accumulates silently across every file that was not checked.


Further reading

This guide addresses a specific instance of a broader failure mode: AI systems that produce fluent, plausible outputs while silently diverging from source material. The formal treatment of this class of failures is distributed across the Synthience Institute research corpus.

Full framework documentation available at the Synthience Institute community on Zenodo.

Document: PG-005 Practitioner Guide
Version: 1.0
Author: Thomas W. Gantz
Affiliation: The Synthience Institute
Date: April 6, 2026
License: CC-BY 4.0