OMGfixMD · diagnosis
Diagnosis · April 2026 · 6 min read

Look. Typing feedback to the AI stops working at five corrections.

It is 23:48. The model just returned 1,800 words. You read it once. Five things need work. You scroll back to the chat box, you put your cursor in it, you start typing the first one. Forty-five seconds later, you are proud. Four to go.

Reader, the chat box does not break at five. It breaks at three. By the fourth correction, you are already sending a message you know is wrong.

TL;DR

The chat box is a single small text area at the bottom of a scrolling conversation. Around correction three, the original answer has scrolled out of view, your working memory is full, and you stop describing locations and start paraphrasing them. By correction four you are knowingly sending a message you expect the model to mis-target — some shipped feedback beats none, you tell yourself, and you hit send. By correction five you have decided "the model will probably catch that one," and three of five make the trip. The chat box is fine for one correction. It is not a multi-point feedback surface. The fix is a workspace where the fifth correction costs the same as the first and arrives anchored. Format and mechanism: the guide. Tool: OMGfixMD.

Correction one is fine. That is the trap.

Watch yourself type the first correction. "In the third bullet under Architecture — not the second one, the third — change 'stateful' to 'stateless.'" Forty-five seconds. You re-read it. Looks good. You feel like you have a system.

This is the moment that misleads you. The chat box did not just do its job; it spent your budget on it. The original answer is still on screen, your working memory is empty, the field has focus, you have not had to describe a location yet — "Architecture" is the heading right above the cursor. Correction one is not representative of correcting an LLM answer. It is the only correction the chat box was actually built for.

Correction two is where the surface starts charging interest. You go to raise the FAQ tone problem and realize you cannot remember the exact phrasing. You scroll up. The chat box loses focus. You scroll back down. The sentence you had in your head is gone. You type a worse version. Two minutes in, two of five.

Correction three is when the surface stops being a feedback surface

By the third correction, three things have gone wrong at once, and they compound.

The original answer has scrolled out of view permanently. Not just out of focus — out of working memory. You can no longer describe the third location precisely because you cannot see it, and scrolling back loses the sentence you had in your head about it. You are writing in the dark.

You have started paraphrasing locations instead of describing them. You stop typing "the third bullet under Architecture" and start typing "somewhere there's a paragraph on retention that needs to say it more directly." You know, as your fingers move, that "somewhere there's a paragraph" is approximately the worst possible address you could give an 1,800-word document. You type it anyway. The writing tax has gotten high enough that an imprecise sentence costs less than a precise one.

Your working memory is now full of bookkeeping. You are not thinking about the substance of correction three. You are thinking which one was this again, did I cover the FAQ tone thing, how many of these are left. The actual feedback — the thing you wanted to say about retention — gets a smaller share of your attention than the ledger of which corrections you have and have not raised yet.

Three of five. With a known mis-target baked in for the next turn. You have not even hit send yet.

Correction four is the moment you start lying to yourself

This is the part nobody writes about because it makes the reviewer look bad. So let's write it.

Correction four is the moment you raise a point you have already decided isn't worth raising precisely. Not because it doesn't matter — it does — but because describing it precisely, in this small field, with the original scrolled away and three corrections of cognitive load already on your back, costs more than the correction is worth. So you write half of what you wanted to say. "Also fix the retention thing," you type. You know, as you type it, that this will not work. You send it anyway. Some shipped feedback beats none, you tell yourself, in the voice of a person who has just made a deal with the part of themselves that wanted to do this properly. OMG.

This is the actual failure mode. Not "the user gave up." Not "the user got tired." The reviewer knowingly shipped a message they expected the model to misinterpret, because the alternative — closing the loop precisely on point four — cost more than they had left to spend at 23:48. The chat box did not fail to receive the message. The reviewer failed to write the version of the message they would have written if writing it cost what writing correction one cost.

Correction five is the one that quietly never gets typed

By correction five, you don't even pretend. You write none of it. "The model will probably catch that one," you say, in your head, audibly, in the voice of a person who has given up. The step that is out of order. The hallucinated dependency. The bullet that will quietly become a bug in production three weeks from now.

It will not catch it. You knew, when you didn't type it, that it would not catch it. The chat box did not lose your message — there was no message. The fifth correction lived its entire life inside your head, between the moment you noticed the problem and the moment you decided four corrections was enough.

This is the gap. Not the chat box's plumbing. Not the model's reasoning. The four minutes between reading the answer and deciding which two of the five points aren't worth typing — and the small text area at the bottom of the screen that pretends to be the right surface for the decision.

The chat box has no anchor, and that is also the problem

Even the corrections that do make it into the message arrive at the model as prose hints, not as anchored edits. "The third bullet under Architecture" is a few dozen tokens competing for attention against an 1,800-word document with two "Architecture" sections, four lists, and a nested bullet that could plausibly count as "third" depending on how you count. The model has to guess which span you meant. Sometimes its guess is right.

The chat box has no surface for saying: this exact passage, under this exact heading, apply this exact note. No highlight tool. No per-passage note. The reviewer carries the anchoring in their head and translates it into prose — and around correction three the translation breaks down, paraphrases substitute for descriptions, and the model converts "guess which third bullet" into "rewrite a wide enough region that something looks correct." (Why the model does that is its own diagnosis. The point here: even if you get all five corrections out, the chat box can't carry the address each one belongs to.)

Two failures stacked. The writing tax that keeps points four and five from being written. The no-anchor problem that mis-targets the ones that did. Three of five raised, two of three mis-targeted, one round-trip burned. Another round begins.

The chat box doesn't fail at five corrections. It fails at three. By the fourth, you're knowingly shipping a message you expect the model to mis-target — and the fifth one lives its entire life inside your head, because typing it cost more than the correction was worth.

What has to change

Not the model. The model that wrote your 1,800-word answer is the same model that, given five anchored corrections in one paste, applies all five cleanly. That part has worked since early 2024.

What has to change is the surface where the corrections get raised. A workspace where the fifth correction costs the same as the first (so it actually gets written), and where every correction arrives at the model anchored to the verbatim passage and the heading it sits under (so the model lands every fix on its exact target). Comment on each passage where it sits. Emit the whole set as quote-plus-note pairs. One paste, five anchored edits, no second round.
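To make the shape concrete, here is an illustrative sketch of what two of those quote-plus-note pairs might look like in a single paste. The exact format, and the reasons the model parses it cleanly, live in the guide; the labels and layout below are placeholders, not the canonical spec:

```markdown
> Under "Architecture", third bullet:
> "The service maintains a stateful session per client."

NOTE: Change "stateful" to "stateless" — sessions were removed in the redesign.

> Under "Data retention", second paragraph:
> "Records may be kept for some period depending on configuration."

NOTE: Say it directly: records are kept for 90 days, then deleted. Drop the hedging.
```

Each pair carries its own address (the verbatim quote plus the heading above it), so the fifth correction costs no more to write than the first, and the model never has to guess which "third bullet" you meant.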

The full format and the manual recipe — what the paired block looks like, why the model parses it cleanly — are in the guide. The argument for why this should be a primitive in every chat interface is in the manifesto. OMGfixMD is the browser tool that does the highlighting and the export for you, because the manual recipe takes ten minutes and we got tired of doing it. The document never leaves your browser.


Correction one is fine. The chat box was built for it. The other four need a surface that doesn't bill you for typing.

Questions people actually ask

Why doesn't it work to just type my feedback into the chat when I have five things to correct?

Because the chat box stops being a feedback surface around the third correction, not the fifth. By point three the original answer has scrolled out of view, your working memory is full, and you start paraphrasing the location of the fix instead of describing it. By point four you are knowingly sending a message you expect the model to mis-target — some shipped feedback beats none, you tell yourself, and you hit send. By point five the writing tax has compounded enough that you have decided "the model will probably catch that one." It will not. Three of five make it into the message. The other two come back wrong on the next turn. The chat box is fine for one correction. It is not a multi-point feedback surface. The format that is — paired passages with notes — is in the guide.

Why does the original answer scrolling away matter so much?

Because every correction after the first one needs the location to still be visible while you describe it. The chat box is one small text area at the bottom of a scrolling conversation — when you scroll up to re-find the third bullet, the chat box loses focus, and when you scroll back down the sentence you had in your head is gone. You write a worse version. By the third correction you stop scrolling entirely and just paraphrase "somewhere there's a paragraph on retention." That paraphrase is the message the model receives, and on a 1,800-word document it is approximately the worst possible address for an edit.

Why doesn't a numbered list fix the problem?

Because a numbered list does not lower the writing tax — it only formats the output of it. You still have to type each item in the same small text area with the same scrolled-away original. The numbered list helps the model parse what you wrote; it does not help you write it. The give-up moment around point four happens before any formatting choice gets made. The numbered list arrives at the model with three items in it instead of five, same as a paragraph would.

Why doesn't the chat box have a way to anchor a comment to a specific passage?

Because the chat box was designed for prose dialog, not for multi-point editing of a long generated document. There is no highlight tool, no per-passage note, no notion that the user has a list of edits each anchored to a specific span the model produced ninety seconds ago. The reviewer carries the anchoring in their head and translates it into prose for every correction. Around correction three the translation breaks down — paraphrases substitute for descriptions, the model has to guess which "third bullet" is meant, and "guess which one" is exactly the failure mode the model converts into "rewrite widely enough that something looks correct." The full mechanism is in the rewrites-the-whole-document diagnosis.

So what is the actual fix?

A surface where the fifth correction costs the same as the first, and where every correction arrives at the model anchored to the verbatim passage and the heading it sits under. That means: comment on each passage where it sits, in a workspace built for it, and emit the whole set as quote-plus-note pairs the model applies in one round-trip. The full pattern, manual recipe, and mechanism live in the multi-passage feedback guide. The browser tool that automates the highlighting and the export is OMGfixMD — the document never leaves your browser.