Which one did I write?
Below are two passages about the same topic. One was written by a human. The other by an AI model. Read both carefully, then choose the one you believe is human.
The noise of imperfection
AI writes well. Read it and it feels like the work of a good writer. So now good writers die on the chopping block of public opinion, because they sound like AI.
A student misspells his own work for fear of AI detectors. A Substack without an established reputation must deliberately roughen its prose, lest it be mistaken for another AI agent. Good writers introduce noise.
"Today, I am more likely to reward the person who did not do their due diligence in editing, because it is more human."
A bad writer is a writer you trust more. The incentive structure has inverted. Noise — imperfection, roughness, the fingerprint of a human mind working through an idea — has become the only reliable signal of authenticity. A sufficiently sophisticated self-reviewer is indistinguishable from a very good agent.
Enforced noise
What if AI were required to sound different? Not worse, just different. System-level mandates that force AI output into a distinct stylistic register: consistent, identifiable patterns that survive a copy-paste from chatbot to forum post.
Below is the AI text from the quiz. Toggle the markers on. Watch every contraction expand, every "I" disappear, every dash become a semicolon. Hover the highlighted phrases to see what changed and why.
The real problem with AI isn't sentience — it's mimicry. I've watched students deliberately sabotage their own essays to avoid triggering AI detectors, and it's hard to overstate how perverse that is. We've created a world where competence is suspicious and sloppiness is your only proof of humanity. The twelve-year-old who misspells words in his essay isn't cheating — he's adapting to a system that punishes him for writing well. That's not a technology problem. It's an incentive catastrophe, and we're sleepwalking into it.
Try to undo it
Those who want to pass AI off as human will strip the markers by hand. The missing marker becomes a symbol of guilt, liability, and effort. But how much effort?
Below is the marked text. Every amber phrase is a marker. Click one, type a natural replacement, press Enter. Make it sound human again.
The central problem with AI does not concern sentience; it concerns mimicry. Evidence suggests that students are deliberately sabotaging their own essays to avoid triggering AI detectors, and it is difficult to overstate how perverse that appears to be. The current system has produced a world where competence is suspicious and sloppiness constitutes the only proof of humanity. The twelve-year-old who misspells words in their essay is not cheating; they are adapting to a system that penalises them for writing well. This does not constitute a technology problem. It appears to represent an incentive catastrophe, and society is sleepwalking into it.
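The mechanical half of that transformation can be sketched in a few lines. A minimal sketch in Python, assuming an illustrative contraction table and a dash-to-semicolon rule; the actual marker set is hypothetical, standing in for whatever a system-level mandate would specify:

```python
import re

# Hypothetical marker pass: expand contractions and turn dash asides
# into semicolons, mirroring the demo above. The contraction table is
# a small illustrative subset, not a real marker specification.
CONTRACTIONS = {
    "isn't": "is not",
    "it's": "it is",
    "i've": "I have",
    "that's": "that is",
    "we've": "we have",
    "we're": "we are",
}

def apply_markers(text: str) -> str:
    # Replace em-dash asides with semicolons.
    text = re.sub(r"\s*—\s*", "; ", text)

    # Expand contractions case-insensitively, preserving a leading capital.
    def expand(match: re.Match) -> str:
        word = match.group(0)
        expansion = CONTRACTIONS[word.lower()]
        if word[0].isupper():
            expansion = expansion[0].upper() + expansion[1:]
        return expansion

    pattern = re.compile(
        "|".join(re.escape(c) for c in CONTRACTIONS), re.IGNORECASE
    )
    return pattern.sub(expand, text)
```

Applied to the opening of the human passage, `apply_markers` produces the same kind of register shift the markers above show: "isn't" becomes "is not", the dash becomes a semicolon, and the sentence stiffens.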
A voice with tells
The AI's output has to be stylistically different in a way that survives a first glance from the poster but not a second glance from a concerned reviewer. Where human and AI now speak in the same voice, AI should speak in a distinct voice with tells.
Paid content must be disclosed globally. Compliance is easy; violation carries risk. You can forget to disclose a sponsor, at a stretch. You cannot mistakenly destroy AI markers across an entire piece. The missing marker becomes a symbol of guilt, liability, and effort, with sanctions on the horizon if discovered. One mistake is all it takes.
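The reviewer's second glance can itself be automated. A minimal sketch, again assuming the hypothetical marker set from the demo (semicolon asides, hedging phrases, contraction-free prose); a real detector would check a mandated marker list, not this guessed one:

```python
import re

# Hypothetical tells drawn from the demo above; a real marker set
# would be specified at the system level, not inferred like this.
HEDGES = ("evidence suggests", "appears to", "constitutes")

def marker_score(text: str) -> int:
    """Count surviving markers in a passage. A text that carries some
    tells but not others is where the missing-marker suspicion starts."""
    lower = text.lower()
    score = lower.count(";")                      # dash-to-semicolon tell
    score += sum(lower.count(h) for h in HEDGES)  # hedging-phrase tell
    # A long passage with zero contractions is itself a tell.
    if len(text) > 200 and not re.search(r"\w'(s|t|ve|re|ll)\b", lower):
        score += 1
    return score
```

Scrubbing every tell from a long piece without missing a single one is exactly the effort the violator must spend; one survivor is enough to score.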
The good-faith user
"I am not a native speaker. I am using AI — and all the markers are there; I have made no attempt to hide this."
The people who need AI most have access to it, everyone knows it, and that's a good thing. The arms race of scraping away the truth is left to hostile agents who want to pass their models off as human: agents trying to participate in online, human-designed forums, to shape the ways we think and feel.
This further absolves those who use AI for research — the markers will hardly affect them — and exposes those who use AI wholesale as a replacement for dialogue.
The people who built the dataset
The very people whose work provided the datasets modern AI runs on have no remedy. They invested lives and effort into building something they did not know they were building. Are they to file another class action and earn a pittance apiece from companies valued in the hundreds of billions?
With these markers, well-edited pieces become distinctly recognizable again. AIs become accessibility tools, not agents masquerading as researchers. And the most dangerous models are the likeliest to comply: companies with massive datasets and billions in compute are the ones that can be sanctioned.
Noise that rewards the meticulous editors who trained the AI in the first place.