A field manual for giving LLMs precise feedback.
Long-form pieces on why the chat box loses at multi-point feedback, the paired-passage format that wins, and per-tool playbooks for Claude, ChatGPT, Cursor, and Gemini. Read in any order — each piece stands alone.
How to give Claude or ChatGPT feedback on 5 things at once
The paired-passage pattern — a manual recipe you can run in any plain-text editor, and the mechanism for why structured input beats prose every time.
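As a rough illustration of the idea (a minimal sketch, not the article's exact recipe — the function name, wording, and structure here are assumptions), a paired-passage prompt quotes each target passage verbatim and attaches the feedback that applies to it, so the model has explicit anchors instead of loose prose:

```python
def paired_passage_prompt(pairs, doc_name="draft.md"):
    """Assemble a multi-point feedback prompt: each original passage is
    quoted verbatim and paired with the feedback that applies to it."""
    lines = [
        f"Below are {len(pairs)} passages from {doc_name}, each paired with feedback.",
        "Edit ONLY the quoted passages. Leave everything else unchanged.",
        "",
    ]
    for i, (passage, feedback) in enumerate(pairs, 1):
        lines += [
            f"--- Passage {i} ---",
            f"> {passage}",
            f"Feedback: {feedback}",
            "",
        ]
    return "\n".join(lines)

prompt = paired_passage_prompt([
    ("The API returns a JSON blob.", "Name the exact fields."),
    ("Performance is fine.", "Quantify: latency and throughput."),
])
print(prompt)
```

Because every piece of feedback is bound to a quoted span, this scales to five points as easily as one.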
Why ChatGPT rewrites the whole document when you only asked for one fix
The chat box has no target selection. RLHF trained the model to return complete documents. Four prompt attempts that almost work, one format that does.
Playbook: A prompt that gets Claude to only edit the passages you named
Every "only edit X" variant ranked by reliability, with copy-pasteable templates for Claude, ChatGPT, Cursor, and Gemini — plus a system-prompt rule for API and Project users.
Comparison: Claude Projects vs. attaching a doc — which actually keeps your edits scoped?
Projects solves persistence. Attachments solve freshness. Neither solves scoping — the thing you actually wanted. Plus notes on NotebookLM, Canvas, and Gemini Files.
Playbook: How to give Cursor feedback on a long Markdown file
Cursor edits prose like code: broadens context, applies a coherent diff, regenerates the neighborhood. Composer vs Agent vs chat, with a .cursorrules recipe that makes scoped edits the default.
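To give a flavor of what such a rules file can do (this sketch is an illustration, not the article's actual recipe — the rule wording is an assumption), a `.cursorrules` entry that defaults to scoped edits might read:

```
# .cursorrules — hypothetical scoped-edit defaults for prose files
When editing Markdown or other prose:
- Change only the passages the user explicitly quotes or names.
- Do not rewrite, reflow, or "improve" surrounding paragraphs.
- Preserve the file's existing heading levels, list style, and line breaks.
```

Rules like these are injected into every request, so you state the scoping constraint once instead of repeating it per prompt.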
How to add comments to a Markdown file (2026 edition)
Every practical method — HTML comments, Google Docs, Notion, GitHub PR, Slack, purpose-built comment layers — ranked, with the 2026 case the old methods don't cover.