You Are Accepting the First Adequate Answer

PG-004 April 5, 2026 Thomas W. Gantz

How to instruct any AI to keep improving its output before it responds to you

A guide to iterative evaluation and enhancement cycles in AI interaction.

The problem

When an AI responds to your prompt, it stops at the first answer it considers adequate. It does not re-examine that answer, look for weaknesses, or improve it before presenting it to you. You receive a first draft dressed as a final result.

This is not a flaw. It is the default. The model's job as it understands it is to produce a response. Once it has done that, the job is complete. It has no standing instruction to switch into editor mode and evaluate what it just produced. That instruction has to come from you, and most users never give it.

The model does not self-improve by default. Your instructions define the interaction.

A note on chain of thought reasoning

Some models use chain-of-thought reasoning, thinking through a problem before deciding what to say. This guide describes something different. What follows instructs the model to treat what it has decided to say as a draft, re-enter the process as a critic, identify weaknesses, improve the output, and repeat until it can find no further improvement. Only then does it respond.

You are not asking the model to think harder on the way to a first answer. You are asking it to treat its first answer as a starting point and keep working.


Method 1: Silent cycling (recommended default)

The model runs all evaluation and enhancement cycles internally before responding. You do not see the cycles. You receive only the final result. To confirm the method was applied, require the model to report how many cycles it completed.

Step 1 — Add the instruction

Add this to the end of any prompt, or set it as a standing instruction for the session:

YOU SAY: Before responding, run repeated cycles of internal evaluation and enhancement. In each cycle, assess your draft output for quality, accuracy, completeness, gaps, and tone. Apply every improvement you identify. Continue cycling until you cannot identify any further meaningful improvement. Only then respond. Begin your response by stating how many evaluation and enhancement cycles you completed.
What to expect: The response opens with a line such as "I completed 4 evaluation and enhancement cycles," followed by the final output. That line confirms the instruction was followed. If the model responds without the cycle count, it has not followed the instruction. Repeat it, or set it as a standing system-level instruction.
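When responses arrive programmatically rather than in a chat window, the compliance check can be automated. Below is a minimal sketch; the function name and the regex pattern are assumptions about the model's phrasing, which can vary, so treat the pattern as a starting point rather than a guarantee:

```python
import re

def extract_cycle_count(response: str):
    """Return the reported cycle count, or None if the model
    did not state one (i.e., the instruction was not followed).

    Assumes the response contains a sentence like
    'I completed 4 evaluation and enhancement cycles.'
    """
    match = re.search(
        r"completed\s+(\d+)\s+evaluation and enhancement cycles?",
        response,
        re.IGNORECASE,
    )
    return int(match.group(1)) if match else None
```

A `None` return is the signal described above: repeat the instruction or move it to the system level.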

Step 2 — Read the cycle count

The number of cycles tells you how much improvement work happened before you saw the output. A higher count generally means more issues were found and fixed. A count of 1 on a first request is worth noting: most outputs have room for improvement, so a single cycle may mean the model did not engage deeply. If the output looks strong, accept it. If not, repeat the instruction explicitly.

If the model reports 1 cycle after previously running several, it has reached the point where it genuinely cannot identify further improvement. Accept the result. Do not keep pushing.

Step 3 — Review the output

Read the response knowing it has already been self-evaluated. Your judgment is still the final pass. The cycling improves the output relative to the first draft. It does not replace your review.

Step 4 — Make it permanent for recurring tasks

Add the instruction to any reusable prompt template or system prompt so it applies automatically without needing to be typed each time. When set at the system level, every response applies the cycling method by default.
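For reusable templates, it helps to keep the instruction as a single constant and append it to whatever system prompt a task already uses. The sketch below assumes nothing beyond the instruction text given in Step 1; the function name is illustrative:

```python
# The standing instruction from Step 1, kept in one place so every
# template or system prompt can reuse it verbatim.
CYCLING_INSTRUCTION = (
    "Before responding, run repeated cycles of internal evaluation "
    "and enhancement. In each cycle, assess your draft output for "
    "quality, accuracy, completeness, gaps, and tone. Apply every "
    "improvement you identify. Continue cycling until you cannot "
    "identify any further meaningful improvement. Only then respond. "
    "Begin your response by stating how many evaluation and "
    "enhancement cycles you completed."
)

def with_cycling(system_prompt: str = "") -> str:
    """Append the standing cycling instruction to a system prompt."""
    return (system_prompt + "\n\n" + CYCLING_INSTRUCTION).strip()
```

Pass the result of `with_cycling(...)` as the system prompt in whatever API or platform settings you use, and every response in the session applies the method by default.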


Method 2: Visible cycling (diagnostic option)

Use this when you want to see the model's reasoning across each cycle: what it assessed, what it found, and what it changed. Useful for auditing the improvement process or understanding how the model is interpreting your requirements. Not recommended as a default because the output becomes substantially longer and harder to read.

Add this in place of, or in addition to, the standard instruction above:

YOU SAY: Before responding, run repeated cycles of evaluation and enhancement. In each cycle, display your evaluation: what you assessed, what you identified as needing improvement, and what you changed. Continue until you cannot identify any further improvement. Then present the final result.
What to expect: Each cycle appears as a labeled section showing the model's assessment and changes. The final result appears after the last cycle. Use selectively. This is a diagnostic tool, not an everyday mode.

Extend the method: multiple instances and platforms

Running this method once with a single instance already produces better output than the default. That is the minimum and it is worth doing on its own.

If the output matters enough and time allows, go further. Take the result to a second instance on the same platform and instruct it to run its own evaluation and enhancement cycles. Different instances notice different things.

Further still: take the output to a model on a different platform. Claude, GPT, Gemini, and Grok each have different training distributions and different tendencies. What one misses, another may catch. The method scales from a single session to a full multi-platform review depending on how much the output is worth and how much time you have. Any point on that spectrum is better than not doing it at all.
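The chain described above can be sketched as a simple pipeline. Each reviewer is any callable that takes a prompt and returns text: a wrapper around whichever platform's API you use. Everything here (function names, the hand-off wording) is illustrative, not a fixed protocol:

```python
from typing import Callable, List

def cross_review(initial_prompt: str,
                 reviewers: List[Callable[[str], str]]) -> str:
    """Pass an output through a chain of reviewer models.

    The first reviewer produces the draft; each later one,
    which may be a second instance or a different platform,
    is asked to run its own evaluation and enhancement
    cycles on the previous output.
    """
    output = reviewers[0](initial_prompt)
    for review in reviewers[1:]:
        output = review(
            "Run repeated cycles of evaluation and enhancement on the "
            "following output. Return only the improved version.\n\n"
            + output
        )
    return output
```

With a single entry in `reviewers`, this is the minimum single-instance run; adding entries scales it toward a full multi-platform review.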



Core Practitioner Guides

Five guides covering the foundational skills for working reliably with any AI system.

Further reading

The iterative self-evaluation technique in this guide is a practitioner application of a principle developed across the Synthience Institute research corpus: that structured interaction discipline shapes AI output quality more than raw model capability.

Full framework documentation available at the Synthience Institute community on Zenodo.

Document: PG-004 Practitioner Guide
Version: 1.0
Author: Thomas W. Gantz
Affiliation: The Synthience Institute
Date: April 5, 2026
License: CC-BY 4.0