OMGfixMD · Field Manual · April 2026

How to Add Comments to a Markdown File (and Why It's Harder Than It Sounds)

The short answer: Markdown has no native comment syntax. The closest thing is the HTML comment — <!-- like this --> — which technically works and fails in a dozen ways the moment a real human tries to use it.

The long answer: depends on who's commenting, why they're commenting, and whether the .md file has to survive a round trip through a language model.

What follows is every practical method, ranked — and when each one is the right call.

TL;DR — the ranked table

| Method | Best for | Verdict |
| --- | --- | --- |
| HTML comments (<!-- -->) | Solo notes in a file one person owns | Works once. Scales to zero. |
| Google Docs | One-off review from a non-technical reader | Breaks the file on the way in and on the way out. |
| Notion | Teams already living in Notion | Same breakage as Docs. Nicer wallpaper. |
| GitHub PR review | Engineers reviewing engineers | Correct for the narrow audience it was built for. Hostile to everyone else. |
| Slack threads + line numbers | When no better option exists | The fact that this is a category is itself the indictment. |
| Purpose-built comment layer (e.g. OMGfixMD) | Everything else | The thing Markdown should have had since 2014. |

If you want the argument for why this is the state of the art, read the manifesto. If you want the walkthrough for which method to actually use, keep reading.

Method 1: Inline HTML comments

Here's the move every engineer tries first:

```markdown
## Introduction

Welcome to the product. <!-- @tom — too corporate? -->
```

When it works. When you're leaving notes to yourself in a file you'll never hand off. When the tooling downstream strips HTML comments cleanly — most static site generators do. When no one else has to find or respond to the note.

When it breaks. The moment a second human gets involved. There is no threading, no author, no timestamp, no resolved-state. You cannot search for "all unresolved comments by Priya." You cannot tell if a comment is stale. If two reviewers drop notes on the same paragraph, they become indistinguishable. And the first time a reviewer forgets to delete their note before publishing, you learn that <!-- --> is very much visible in your RSS feed, your email preview, and your search index.
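If you do publish from a file that carries HTML comments, the usual mitigation is a pre-publish strip step. A minimal sketch in Python (a hand-rolled pass, not part of any particular static site generator; it only handles comments that open and close on the same line, and it deliberately leaves fenced code blocks alone, where <!-- --> is literal text):

```python
import re

def strip_html_comments(markdown_text: str) -> str:
    """Remove <!-- ... --> notes outside fenced code blocks."""
    out = []
    in_fence = False
    for line in markdown_text.splitlines():
        if line.lstrip().startswith("```"):
            # Entering or leaving a fenced code block.
            in_fence = not in_fence
            out.append(line)
            continue
        if not in_fence:
            # Only handles comments that open and close on one line.
            line = re.sub(r"<!--.*?-->", "", line)
        out.append(line)
    return "\n".join(out)
```

Note what this can't do: it strips the reviewer's note, but it can't tell you who left it, whether it was addressed, or whether it's stale. That's the ceiling of the approach.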

Verdict. HTML comments are a sticky note. They work on a file one person owns. They do not scale to a review.

Method 2: Paste into Google Docs

The second-most-common workflow on the planet, and the one that costs your team the most time without anyone noticing. An engineer writes a spec in Markdown; a PM pastes it into a Google Doc to leave comments; the engineer copies it back out. Five days later, the formatting is a graveyard.

When it works. When the comments are the only output that has to survive. When the final home of the document is a Google Doc anyway. When neither side cares whether the source file preserves its structure.

When it breaks. The round trip. Google Docs doesn't know the content was Markdown; it treats the pasted text as rich text and coerces it to Docs' house style. When you copy it back into a .md file, the headings become pseudo-headings (bold lines, not #). Code fences become styled monospace blocks with no backticks. Nested lists lose their indentation. Tables turn into tab-separated chaos.

The subtler cost: you now have two documents that claim to be the same thing. One with comments, one without. When the author edits one, the other goes stale in silence. Somebody at a retro next month will say "I thought we'd already fixed that paragraph" and every person in the meeting will be holding different evidence.

Verdict. Works for a single round of review. Fails the second anyone treats the .md as authoritative.

Method 3: Notion

Same fundamental problem as Google Docs — Markdown gets converted to Notion's block format, which is not-quite-Markdown. The visible breakage is smaller (Notion handles headings, lists, and code blocks better than Docs). The invisible breakage is worse. Notion database references, toggles, and colored callouts round-trip to .md as junk text that no downstream tool knows how to interpret.

When it works. When the team has already accepted Notion as the source of truth and the .md export is a one-way dump.

When it breaks. When anyone downstream expects clean Markdown. When the document has tables or nested code blocks. When you need the output to be readable in a Git diff, which is never going to happen for Notion-originated content.

Verdict. Same verdict as Docs, different brand. If your team reviews in Notion and ships Markdown, you're paying the ETL tax twice.

Method 4: GitHub PR review

For engineers, this is the good-and-correct answer. Open a PR, hit Files changed, leave line-anchored comments, resolve them, merge.

When it works. When the file lives in a repo, the reviewer has a GitHub account, the reviewer knows how to read a diff, and nobody involved is afraid of the PR queue.

When it breaks. When the reviewer isn't an engineer. Technical writers, PMs, marketers, designers, lawyers — none of these people want to review prose by clicking Files changed and squinting at red and green bars. Empirically: they won't do it. They will export the file to Google Docs, where it will promptly break, because their tool is their tool.

GitHub PR review is also structurally hostile to any feedback that isn't line-level. "The whole section on authentication should come before the one on permissions" has no native place to live in a PR conversation.

Verdict. Excellent for the narrow audience it was built for. Useless for everyone else.

Method 5: Slack threads that reference line numbers

Somebody posts the file in a channel. Somebody else replies, "line 47 — the retention bit — feels off." Somebody else replies in a different thread, "actually line 52, after the latest edit, sorry." The file gets edited. Line numbers shift. A third reviewer joins. A senior person asks "wait, is anyone capturing this." Nobody is.

When it works. It doesn't, really. This is the fallback when every other option has already been declared worse.

When it breaks. Every time the file is edited. Every time a reviewer refers to a passage by its content instead of its line number. Every time anyone needs a source-of-truth list of open issues.

Verdict. The existence of this as a widely used pattern is itself an argument for why purpose-built tools exist.

Method 6: Purpose-built comment layers

A small category. The approach: treat Markdown as the source of truth, layer comments on top by highlighting the exact text range, keep the underlying file clean.

OMGfixMD is one such tool. (Full disclosure: we built it — in, as noted elsewhere, an understandable fit of frustration.) Other tools in this space are rare enough that this is a genuinely unclaimed category. If you know of others, write us.

When it works. When you want every reviewer — engineer or not — to be able to leave comments without needing a GitHub account, a PR, a repo clone, or a Google Doc round-trip. When the .md has to stay clean for publishing, for models, or for Git history. When the comments themselves need to be extractable in a format a language model can parse.

When it breaks. Real-time multi-user commenting is not yet solved in this category. One reviewer at a time is the current ceiling.

Verdict. The right primitive for most 2026 review loops. The category is young.

The primary case: giving Markdown back to the model that wrote it

This used to be a special case. In 2026 it is the most common reason anyone reaches for a Markdown review tool at all, and it deserves its own treatment rather than a footnote.

You ask Claude, or ChatGPT, or Cursor, or Gemini, to draft a spec. The model returns 1,800 words of Markdown. You want to fix three specific sentences.

If you describe the edits in prose — "the second bullet under Architecture, not the retention one, the other one" — the model rewrites the wrong bullet. If you quote the sentence verbatim, the model gets closer but often rewrites around your quote instead of at it. If you paste the whole document back with inline annotations, you lose the ability to track which edits the model actually applied and which it quietly ignored.

The fix is the same as the human-review case: highlight the exact range, attach the note, hand over the paired block. A good comment export looks like this:

```markdown
# My Feedback:
---

"the system can leverage a cross-functional synergy touchpoint"
[Rewrite] too corporate. "this connects to X" is plenty.

---

"delight velocity"
[Delete] we do not measure this.

---
```

When the model sees that format, the mapping between your feedback and the passages is unambiguous. On the next turn, you can see which edits landed and which the model skipped. It turns a coin-flip review loop into a deterministic one.
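The same determinism helps on your side of the loop: because the format is regular, you can pull it back into structured data and diff it against the model's next draft. A sketch parser in Python, assuming exactly the layout shown above (blocks separated by --- lines, each holding a "quoted passage" followed by an [Action] note; the function name is ours, not part of any tool's API):

```python
import re

def parse_feedback(export: str):
    """Turn a quote-plus-note feedback export into a list of edits.

    Each edit is {"quote": ..., "action": ..., "note": ...}.
    Blocks without a quoted passage (like the header) are skipped.
    """
    edits = []
    for block in export.split("---"):
        # A quoted passage, then an [Action] tag, then the note text.
        m = re.search(r'"(.+?)"\s*\[(\w+)\]\s*(.+)', block, re.DOTALL)
        if m:
            quote, action, note = m.groups()
            edits.append({"quote": quote, "action": action, "note": note.strip()})
    return edits
```

With the edits in hand, checking whether a revision landed is a substring test: the quoted passage should be gone (or rewritten) in the model's next draft.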

This is the most common use of OMGfixMD, full stop. Human reviewers — PMs, engineers, marketing leads — use the same workflow. Same export format, same clean paste-back. The tool does not care whether the next reader has a pulse.

How to pick

Three questions:

  1. Is the next reader a language model, a human, or both? (For models, the tool has to emit clean Markdown with quoted passages. Nothing else works.)
  2. Does the .md file have to survive a round trip? (Is it the source of truth, or is it disposable?)
  3. Who's commenting? (Engineers only, or a mixed audience?)
|  | Engineers only | Mixed audience (incl. LLM) |
| --- | --- | --- |
| File is source of truth | GitHub PR review | Purpose-built comment layer |
| File is disposable | HTML comments or Slack | Google Docs (one round only) |

If the next reader is a language model, use a purpose-built comment layer regardless. Nothing else emits feedback in a shape the model can act on.

That's most of it. The rest is taste.


Markdown has no native comment syntax. The good workarounds aren't workarounds at all — they're purpose-built comment layers on top of the format. Everything else is a tax your reviewers pay, measured in minutes per round and documents per quarter.

Frequently asked

Can I use HTML comments inside a Markdown code fence?

No. Inside a fenced code block, HTML comments render as literal text. They only work outside code blocks.
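A quick illustration, assuming a CommonMark-compliant renderer:

````markdown
This note is hidden in the rendered page: <!-- invisible -->

```
<!-- inside a fence, this line renders as literal text -->
```
````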

Do HTML comments pass through to rendered HTML?

Yes. They appear in the generated HTML output as HTML comments. Most browsers don't display them; most scrapers and feeds do see them. Don't put anything in a Markdown HTML comment that you'd mind appearing in an RSS feed.

Is there a native Markdown comment syntax?

No. CommonMark has no native comment syntax, and no proposal to add one has been adopted. There have been historical attempts (%% like this %% delimiters, abusing the link-reference trick [//]: #), none accepted. The working assumption is that Markdown will never get one; tooling on top of Markdown is the path.

What's the best way to review Markdown output from Claude or ChatGPT?

A highlight-and-export pattern: select the exact passage that needs fixing, attach a note, and send the model both the quote and the note as a paired block. This gives the model an unambiguous mapping between your feedback and the passages — the kind of precision you can't achieve with prose descriptions like "the second bullet under Architecture, not that one, the other one." OMGfixMD is one implementation of that pattern.

What is the best tool to comment on a Markdown file?

It depends on who's commenting and whether the .md file has to survive a round trip. For engineers reviewing engineers in a repo: GitHub PR review. For mixed audiences on a file that has to stay clean: a purpose-built comment layer like OMGfixMD. For solo notes in a file one person owns: HTML comments. For everything else — Google Docs, Notion, Slack threads — you're paying an ETL tax on every round of review.