OMGfixMD · playbook
Playbook · April 2026 · 6 min read

A prompt that gets Claude to only edit the passages you named

Every prompt you've tried has the same shape: some variant of "Only edit the bit about X, leave the rest alone, don't rewrite everything." It works maybe 40% of the time. The other 60%, you get a different document back, sigh audibly, and try a sharper variant. Which also works maybe 40% of the time. Brace yourself: there is no magic phrase.

The thing that works is not a better prompt. It is a format.

TL;DR

Prose instructions like "only edit X" are weak signals on long documents. The reliable recipe pairs a short instruction with verbatim quoted passages: "Apply these edits. Leave everything else unchanged." Then each block is a quote of the exact passage, followed by your note, separated by ---. Copy-pasteable templates below for Claude, ChatGPT, Cursor, and Gemini — the format is identical across all of them. Full explanation in the guide; the browser tool that builds these blocks in one click is OMGfixMD.

The prompts you've already tried, ranked by reliability

Honest disclaimer: The percentages below are pattern-recognition after fifty-plus runs across Claude, ChatGPT, and Gemini on long documents — not a benchmark with a fixed corpus and statistical-significance bars. Treat them as the order of magnitude, not the decimal point. The shape of the curve is the load-bearing claim.

On a long document (call it 1,800+ words), here is roughly what each variant gets you. These are the patterns anyone who has tried this fifty times will recognize.

The variants, worst to best:

1. "Only edit X." Works ~30%. "Only" is a soft word. No positive target. The model decides what counts.
2. "Edit X. Leave everything else unchanged." Works ~45%. Two constraints; still no coordinate. On a long doc the model regenerates for "consistency."
3. "Do not rewrite any paragraph you are not explicitly editing." Works ~50%. A negative. Negatives are weaker than positive targets in attention.
4. "Respond with only the edited version of the passage, not the full document." Works ~70%. Constrains output shape, which helps. Still lets the model choose which passage "the passage" is.
5. "Apply these edits, leave everything else unchanged." plus a verbatim quote of the passage. Works ~98%. The quote is the coordinate. Match is by exact text, not prose interpretation.

The jump from 70% to 98% is the one that matters. It is not a prompt-wording jump. It is a format jump — the moment you stop describing the passage and start quoting it, the reliability ceiling moves.

The model isn't disobeying. It isn't inferring poorly. It is doing exactly what a prose instruction asked it to do. The instruction just didn't include a target — and no amount of rephrasing adds one.

If you want the full argument for why this is a format problem and not a prompting problem — including the attention mechanics under the hood — the piece is here. The rest of this playbook is templates you can paste.

The template (one passage, one edit)

Apply this edit to your last answer. Leave everything else unchanged.

"<paste the exact passage, verbatim, including punctuation>"
<your note about what should change>

Three things make this reliable: the verbatim quote gives the model an exact-string coordinate instead of a description to interpret; "Apply this edit" is a positive target, which attention handles better than a negative; and "Leave everything else unchanged" pins the scope of the output.

Verdict: This single template covers 80% of "only edit this" requests. Copy it, paste it, fill in the two slots, go.
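The template is mechanical enough to script. A minimal Python sketch of its shape — the function name and structure are illustrative, not part of any tool:

```python
def make_edit_prompt(passage: str, note: str) -> str:
    """Build the single-passage edit prompt: instruction prefix,
    then the verbatim quote, then the note about what should change."""
    return (
        "Apply this edit to your last answer. "
        "Leave everything else unchanged.\n\n"
        f'"{passage}"\n'
        f"{note}"
    )

prompt = make_edit_prompt(
    "The system leverages a cross-functional synergy touchpoint",
    '[Off tone] Rewrite as "connects to X". Delete the corporate phrasing.',
)
```

Two slots, one string. The quote is pasted verbatim, punctuation and all, never paraphrased.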

The template (five passages, one message)

When the list gets long, the same pattern scales — just add a separator between pairs so the model cannot bleed context between them:

Apply these edits to your last answer. Leave everything else unchanged.
---

"The system leverages a cross-functional synergy touchpoint"
[Off tone] Rewrite as "connects to X". Delete the corporate phrasing.

---

"delight velocity"
[Delete] We do not measure this. Remove the phrase entirely.

---

"The FAQ opens with three questions about pricing."
Move this paragraph to the end of the product-overview section.

---

"moreover,"
[Delete] Third "moreover" in this section. Pick one, delete the rest.

---

"Users will onboard via a Slack-first flow"
[Factually wrong] We do not have a Slack integration. Change to "via an email-first onboarding flow."

---

Five passages. One message. One round-trip. No "not that one, the other one" follow-up.

The [Label] in brackets is optional — it is a shorthand the model parses easily (Delete, Off tone, Made it up, Too vague, Too long). You can skip it and just write the note. The quote and the note are the load-bearing parts.
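The batched form is the same pattern in a loop. A sketch, again with hypothetical names, that joins (passage, note) pairs with the --- separator:

```python
def make_batch_prompt(edits: list[tuple[str, str]]) -> str:
    """Join (quoted passage, note) pairs into one message.
    Each block is fenced by --- so context cannot bleed between edits."""
    header = ("Apply these edits to your last answer. "
              "Leave everything else unchanged.")
    blocks = [f'"{passage}"\n{note}' for passage, note in edits]
    return header + "\n---\n\n" + "\n\n---\n\n".join(blocks) + "\n\n---"

msg = make_batch_prompt([
    ("delight velocity",
     "[Delete] We do not measure this. Remove the phrase entirely."),
    ("Users will onboard via a Slack-first flow",
     '[Factually wrong] Change to "via an email-first onboarding flow."'),
])
```

Five passages or fifty, the message shape never changes; only the list grows.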

If you have system-prompt control

If you're hitting the model through the API, a custom GPT, a Claude Project's instructions, or a Cursor .cursorrules — you have one extra lever the chat-only user doesn't. A short system-prompt rule lets you skip the per-message "leave everything else unchanged" ritual.

When the user provides feedback as paired blocks of "quoted passage" + note,
separated by ---, treat each block as an atomic edit:
- Locate the quoted passage in the document by exact string match.
- Apply the note's instruction to that passage only.
- Leave every byte outside quoted passages unchanged.
- If a quoted passage cannot be matched verbatim, skip it and report the failure
  at the end. Do not approximate.

When the user does not use this format, you may apply edits more liberally
but must summarize what you changed.

This is the kind of rule that lives well in a .cursorrules file at the repo root, in a Claude Project's "instructions" field, or in a custom GPT's system prompt. It generalizes the paired-passage discipline across every conversation, so you stop typing the prefix manually. It's the highest-leverage change in this whole post if you have access to the system layer.
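To see why the exact-match rule is enforceable at all, here is its deterministic core in Python. One simplification: unlike the free-form notes above, the "note" here is a literal replacement string, and the names are illustrative:

```python
def apply_exact_edits(document: str,
                      edits: list[tuple[str, str]]) -> tuple[str, list[str]]:
    """Apply each (passage, replacement) pair by exact string match.
    Unmatched passages are skipped and reported, never approximated."""
    failures = []
    for passage, replacement in edits:
        if passage in document:
            # replace only the first occurrence; everything else is untouched
            document = document.replace(passage, replacement, 1)
        else:
            failures.append(passage)
    return document, failures

doc = "Users will onboard via a Slack-first flow."
doc, failed = apply_exact_edits(doc, [
    ("a Slack-first flow", "an email-first onboarding flow"),
    ("delight velocity", "(removed)"),  # not present: reported, not guessed
])
```

That skip-and-report branch is the whole discipline. A model following the rule behaves like this function; a model improvising behaves like a fuzzy matcher, which is the failure mode you are paying to avoid.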

Per-model notes

Claude (Sonnet / Opus)

Handles the format cleanly out of the box. On very long documents (10k+ words in a single turn), append "Return the full edited document after applying all edits." if you want the whole thing back; otherwise Claude sometimes returns just the edited passages, which is often what you actually wanted. Anthropic's own prompt-engineering guide covers the underlying preference for explicit, structured instructions in more depth.

ChatGPT (GPT-4 / GPT-5 class)

Works identically. One quirk: ChatGPT has a slight bias toward "improving" adjacent sentences for flow. The "Leave everything else unchanged" prefix is not optional — drop it and the behavior creeps back in.

Cursor (chat mode)

Works in chat mode. In Composer or Agent mode the model has file-editing tools and will sometimes apply the edit as a diff you have to accept. That is usually what you want. For long markdown files specifically, there is a separate playbook because Cursor's tool-call behavior on prose vs code is worth its own piece.

Gemini (Pro / Ultra)

Works. Gemini occasionally returns the full document with the edits applied even when you didn't ask — this is usually fine, sometimes annoying. Adding "Only show the edited passages in your response" to the prefix gets you the compact form.

What can still go wrong
The format is strict by design, and its failures are strict too. The most common: your quote no longer matches byte-for-byte, because a smart quote, trimmed punctuation, or a silently fixed typo crept in while pasting. Next: the passage appears more than once, like the "moreover," block above; quote enough surrounding words to make the match unique, or say which occurrence you mean. Last: you copied from an earlier turn and the model's most recent answer no longer contains that text. In all three cases the right behavior is the one in the system rule above: skip, report, never approximate.

The cruel twist

Claude, ChatGPT, Cursor, and Gemini will all, almost certainly, ship a native "click here to leave a comment on this passage" surface inside of a year. Probably some of them in the next quarter. The prompt-as-a-format workaround in this post will then look quaint — the way "save as HTML, open in Word, use Track Changes" looks quaint now.

Fine. Until then, the templates above are what work. If you're tired of typing them by hand, OMGfixMD is a browser-only tool that builds the paired-passage block in one click — paste the model's answer, highlight every passage, attach a note per highlight, press ⌘⇧C.


OMG.

Questions people actually ask

What is the best prompt to get Claude to only edit specific passages?

There is no single magic phrase. The reliable pattern is a format, not a sentence: prefix your message with "Apply these edits to your last answer. Leave everything else unchanged." Then paste each passage verbatim in quotation marks, follow each quote with your note, and separate blocks with ---. The quote is the coordinate the model needs; the prose prefix alone is too weak a signal on long documents.

Does "leave everything else unchanged" actually work as a prompt?

It works about 30-60% of the time when used alone on a long document. It works close to 100% of the time when paired with a verbatim quote of the passage to edit. The prose instruction is a hint; the quote is the target.

Why does "only edit the third bullet" not work reliably?

Because "the third bullet" is a description, not a coordinate. On a long document, the model has to infer which bullet you mean from context — and when ambiguity is present, frontier models regenerate the neighborhood for internal consistency rather than risk a surgical-but-wrong edit. A verbatim quote removes the ambiguity. See the diagnosis piece for the full mechanism.

Is this prompt format specific to Claude, or does it work for ChatGPT and Cursor too?

It works identically across Claude, ChatGPT, Gemini, Cursor's chat mode, and any local model large enough to hold the document in context. The pattern is about how transformer attention handles structured input, not about a specific vendor's prompt handling.

What's the single shortest version I can memorize?

"Apply this edit. Leave everything else unchanged." Then on new lines: the quoted passage in quotation marks, your note beneath it. That is the minimum viable version. Scale up to multiple blocks with --- separators when you have more than one passage to edit.