But I use em-dashes/academic language/"highlighting" all the time! Why are you calling me AI/taking away my precious words?
No one is calling you AI or taking away your words. You are projecting that onto people who are not talking about you.
If you are seriously worried that people might mistake your original thoughts for AI, ask yourself why.
You're just witch-hunting based on no proof!
True; unless someone outright states they used AI, it's impossible to have proof. However, it's very possible to have strong evidence. AI writing has distinctive tells that are seen far less frequently in human writing; it tends to say the exact same things, over and over again, no matter the subject. With apologies to Justice Potter Stewart: you know it when you see it.
This is backed up by a great deal of research. Since AI has been affecting the world for years, researchers have had years to study it. Some findings include:
- People familiar with AI writing can achieve over 90% accuracy rates individually and over 99% accuracy collectively in spotting it, even when that text was "humanized" or paraphrased.
- The specific words and language patterns overused by AI were far, far less common in academic writing before the technology was widespread.
But AI detectors get it wrong all the time! They call the Declaration of Independence AI!
This is outdated; as of 2025, top-of-the-line AI detectors are also achieving up to 99% accuracy on newly written or newly generated nonfiction text, as opposed to famous public-domain documents like the Declaration of Independence, which are reproduced all over the internet for AI training sets to hoover up.
In general, though, if you're looking for AI-generated writing, you probably don't need a detector. If you're writing something yourself and an AI detector flags a sentence as "AI-generated" when it's not, just rewrite the sentence. As the old writing adage goes, to be a good writer you must be willing to "kill your darlings."
You're just calling all good writing AI!
AI writing is not actually good, no matter how polished its grammar or how neatly outlined its structure. AI tools, both by their statistical nature and by AI companies' flawed human-feedback training and system prompting, usually generate statements that sound superficially "smart" but are ultimately vapid and subtly promotional. In any situation, they default to the most generic, most banal, least interesting possible things to say:
LLMs (and artificial neural networks in general) use statistical algorithms to guess (infer) what should come next, based on a large corpus of training material. They therefore tend to regress toward the mean: the output drifts toward whatever is most statistically likely across the widest variety of cases. This is simultaneously a strength and a "tell" for detecting AI-generated content. For example, LLMs are usually trained on internet text in which famous people are described with positive, important-sounding language, so the model sands down specific, unusual, nuanced facts (which are statistically rare) and replaces them with generic, positive descriptions (which are statistically common). The specific detail "invented the first train-coupling device" might thus become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person while the portrait itself fades from a sharp photograph into a blurry, generic sketch: the subject becomes simultaneously less specific and more exaggerated.
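To make the "regression to the mean" point concrete, here is a toy, purely illustrative sketch, not any real model or detector: if a generator always picks the single most probable next word from its learned statistics, rare but specific continuations never survive. The probability table below is invented for illustration.

```python
# Toy illustration only: a hand-made "next word" probability table standing in
# for the statistics an LLM learns from its training corpus. Not a real model.
next_word_probs = {
    "he": {"was": 0.60, "invented": 0.15, "patented": 0.25},
    "was": {"a": 0.70, "an": 0.20, "widely": 0.10},
    "a": {"revolutionary": 0.50, "train-coupling": 0.10, "pioneering": 0.40},
    "revolutionary": {"titan": 0.55, "figure": 0.30, "engineer": 0.15},
    "titan": {"of": 1.0},
    "of": {"industry": 0.80, "engineering": 0.20},
}

def most_likely_continuation(start_word, steps):
    """Greedily append the single most probable next word at every step."""
    words = [start_word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(most_likely_continuation("he", 6))
# Prints: "he was a revolutionary titan of industry"
# The rarer, more specific path ("invented ... train-coupling ...") is never
# chosen, because at each step the generic continuation is more probable.
```

Real models sample rather than always taking the top word, but the pull toward the statistically common phrasing is the same; that is what produces the generic, exaggerated tone described above.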
This essay in the Chronicle of Higher Education also does a fantastic job of illustrating this.
You're just anti-AI!
As surprising as this may sound, I'm actually not; I think there are valid use cases for large language models. I just don't think nonfiction writing is one of them.