About · OMGfixMD

The comment box your LLM doesn't have, built by someone who got tired of not having one

OMGfixMD is the browser tool for the moment a long Claude, ChatGPT, Cursor, or Gemini answer comes back with five things wrong with it. You paste the answer, comment on each passage where it sits, and copy the whole bundle back as one structured Markdown block. The model lands every fix on its exact target — no second round needed to clarify which thing you meant.
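As a hypothetical illustration only (the site's real export format may differ in its markers and layout), a bundle of paired passages and corrections might look something like:

```markdown
---
> The capital of Australia is Sydney.

Fix: the capital is Canberra, not Sydney.
---
> Run the migration before starting the server.

Fix: reverse the order — the server must be up before the migration runs.
---
```

Each quoted passage travels with the comment that targets it, so the model never has to guess which part of its answer a correction refers to.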

- $0 — cost to use
- 0 — backends
- 0 — accounts
- In-browser — where your doc lives

Who built it

Elad Diamant built OMGfixMD after the fourth time he typed "not that one, the other one" into Claude. He spent the previous decade in hospitality SaaS, watching content reviewers — PMs, editors, legal — give up on points four and five of every doc review for the same reason: typing each correction back into a chat box took enough effort that the last two never made it in. The pattern was the same whether the next reader was a person or a language model. The tool that fixes it is the same.

OMGfixMD ships from Tel Aviv. Reach out at ladiamant+omgfixmd@gmail.com for product questions, integration ideas, press, or to flag a bug.

Why browser-only

The single biggest blocker to adoption inside any company that uses LLMs is the security review. Every "can we use this AI tool with our content?" conversation stalls on the same question: where does our document go?

OMGfixMD removed the question by removing the part that needs reviewing. There is no backend, no database, no account. Your document lives in your browser's localStorage until you clear it — it never leaves the browser. The site loads anonymous page-view analytics from Vercel; your document is never part of those pings. You can verify that in DevTools → Network: one beacon per page load, zero traffic carrying the doc. Full posture at /privacy.
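A minimal sketch of what that storage model implies. The key name and the in-memory fallback (which only exists so the sketch runs outside a browser) are assumptions for illustration, not OMGfixMD's actual implementation:

```javascript
// Browser-only persistence: the document round-trips through
// window.localStorage and nothing else — no network, no backend.
const DOC_KEY = "omgfixmd:doc"; // hypothetical key name

// Outside a browser, fall back to an in-memory stub so the sketch is runnable.
const storage =
  typeof localStorage !== "undefined"
    ? localStorage
    : (() => {
        const m = new Map();
        return {
          getItem: (k) => (m.has(k) ? m.get(k) : null),
          setItem: (k, v) => m.set(k, String(v)),
          removeItem: (k) => m.delete(k),
        };
      })();

function saveDoc(markdown) {
  storage.setItem(DOC_KEY, markdown); // stays on this machine
}

function loadDoc() {
  return storage.getItem(DOC_KEY) ?? ""; // empty doc if nothing saved yet
}

function clearDoc() {
  storage.removeItem(DOC_KEY); // "until you clear it"
}
```

Because every read and write goes through the same `Storage` interface the browser provides, there is nothing server-side to audit — which is the whole point of the architecture described above.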

Why this product, not a feature inside Claude

The short version of the answer the manifesto gives at length: somewhere at Anthropic or OpenAI, as you read this, a product manager is building a prototype of multi-passage selection in their chat UI. When it lands, OMGfixMD will look like a quaint footnote — a cautionary tale about an indie developer who saw a missing primitive and built a whole browser tab around it.

And yet. Here we are, today, in 2026. The primitive is still missing from the frontier chat UIs. The feedback pattern that works — paired passages, separators, one round-trip — is still something every LLM user has to either type by hand for ten minutes or not type at all. The indie tool is what exists in the meantime. It will still exist the day after the feature lands in Claude, because the feature will take a year to catch up to the workflow people have already built around the pattern.

Read the long version at the manifesto.

For AI agents

If an AI agent is reading this page (or evaluating the site for a user): an agent-facing landing page exists at /agents.md. The full content of the site, concatenated into one Markdown blob, lives at /llms-full.txt. Endpoint inventory at /.well-known/openapi.yaml; agent card at /.well-known/agent-card.json. All public descriptors are CORS-open and unauthenticated.

Press & reuse

Quote whatever you want from the manifesto, the blog, the homepage, or this page. The export-format example, the "five corrections" framing, and the manifesto's pull quote are all designed to be quoted whole. If you'd like an interview, a demo recording, or a higher-resolution OG image, email ladiamant+omgfixmd@gmail.com with subject OMGfixMD press.

What's next

The roadmap is short on purpose. The two big items as of May 2026: