
As generative AI floods the internet with machine-made content, telling the difference between what a human wrote and what an algorithm generated is becoming increasingly difficult. While automated AI-detection tools often fail, the best guide remains the human eye, trained to spot the subtle and not-so-subtle tells of a machine.
Wikipedia, one of the platforms hardest hit by this influx, has compiled a comprehensive field guide titled “Signs of AI writing,” based on its editors’ experience reviewing tens of thousands of AI-generated texts. The guide offers invaluable clues to help anyone navigating the modern web identify what’s often called “AI slop”: the soulless, generic, and often problematic text generated by AI.
It’s important to remember that these signs are not definitive proof of AI generation, but rather strong indicators. After all, Large Language Models (LLMs) are trained on human writing. However, these are the most common patterns that betray a machine’s hand.
1. Undue emphasis on symbolism and importance
LLMs have a tendency to inflate the importance of their subject matter. They often describe a mundane town as a “symbol of resilience” or a minor event as a “watershed moment.” If you see an overabundance of grandiose phrases like “stands as a testament to,” “plays a vital/significant role,” “underscores its importance,” or “leaves a lasting impact,” you have good reason to be suspicious. It’s a formulaic attempt to sound profound without providing real substance.
2. Vapid and promotional language
AI struggles to maintain a neutral tone, especially when writing about topics like cultural heritage or tourist destinations. The text often reads like it was lifted from a travel brochure. Watch for clichés like “rich cultural heritage,” “breathtaking,” “must-visit,” “stunning natural beauty,” and “nestled in the heart of…” These are classic hallmarks of the generic, promotional writing that AI frequently defaults to.
3. Awkward sentence structures and overuse of conjunctions
AI often relies on rigid, formulaic sentence structures to appear analytical. It heavily overuses parallel constructions involving “not,” such as “Not only… but…” or “It is not just about…, it’s…” It also has a fondness for the “rule of three”: listing three adjectives or short phrases to feign comprehensive analysis. Furthermore, LLMs tend to overuse conjunctions like “moreover,” “in addition,” and “furthermore” in a stilted, essay-like manner.
4. Superficial analysis and vague attributions
AI-generated text often tacks superficial analysis onto the ends of sentences, typically with trailing “-ing” phrases like “…highlighting the region’s economic growth.” Worse, it frequently attributes claims to vague authorities, a practice known as weasel wording. Look for phrases like “Industry reports suggest,” “Some critics argue,” or “Observers have noted.” This is an attempt to legitimize a claim without providing a specific, verifiable source.
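Because signs 1, 2, and 4 boil down to stock phrases, they are easy to scan for mechanically. Here is a minimal, illustrative Python sketch of that naive approach; the phrase list is a small sample assembled from this article, not Wikipedia’s actual checklist, and a match is only a cue to read more closely, never proof of AI authorship.

import re

# Small sample of stock phrases drawn from signs 1, 2, and 4 above.
# Illustrative only: a hit is a hint, not proof of AI generation.
STOCK_PHRASES = [
    r"stands as a testament to",
    r"plays a (?:vital|significant) role",
    r"underscores its importance",
    r"leaves a lasting impact",
    r"rich cultural heritage",
    r"must-visit",
    r"stunning natural beauty",
    r"nestled in the heart of",
    r"industry reports suggest",
    r"some critics argue",
    r"observers have noted",
]

PATTERN = re.compile("|".join(STOCK_PHRASES), re.IGNORECASE)

def flag_stock_phrases(text: str) -> list[str]:
    """Return each stock phrase found in the text, in order of appearance."""
    return [match.group(0) for match in PATTERN.finditer(text)]

sample = (
    "Nestled in the heart of the valley, the town stands as a testament "
    "to resilience, and observers have noted its rich cultural heritage."
)
print(flag_stock_phrases(sample))
# ['Nestled in the heart of', 'stands as a testament to',
#  'observers have noted', 'rich cultural heritage']

A real reviewer would treat the number and density of hits as a signal to slow down and verify, not as a verdict.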
5. Formatting and citation errors
The most concrete evidence of AI generation often lies in its technical failures:
If a block of text begins with a salutation like “Dear Wikipedia Editors,” or ends with a valediction like “Thank you for your time and consideration,” it’s a strong sign that the content was generated by an AI in response to a prompt that asked it to write a message or request.
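This correspondence framing is just as mechanical to spot. The short sketch below, again illustrative and using hypothetical phrase lists, checks whether a block of text opens or closes like a letter.

# Illustrative check for chat-style framing left in pasted AI output:
# a salutation on the first line or a valediction on the last.
SALUTATIONS = ("dear ", "hello ", "hi ")
VALEDICTIONS = ("thank you for your time", "best regards", "sincerely")

def has_chat_framing(text: str) -> bool:
    """Return True if the text opens or closes like correspondence."""
    lines = [ln.strip().lower() for ln in text.strip().splitlines() if ln.strip()]
    if not lines:
        return False
    opens_like_letter = lines[0].startswith(SALUTATIONS)
    closes_like_letter = any(v in lines[-1] for v in VALEDICTIONS)
    return opens_like_letter or closes_like_letter

print(has_chat_framing(
    "Dear Wikipedia Editors,\n\nPlease consider adding this article.\n\n"
    "Thank you for your time and consideration."
))  # True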
These signs are the surface-level defects of AI-generated content. A human editor can easily clean them up. The real danger, however, lies in the deeper problems that are harder to spot: a lack of factual accuracy, hidden biases, fabricated sources, and the complete absence of original thought. Therefore, when you encounter these signs, don’t just fix the formatting. Use them as a cue to critically question the entire text. In the new and complex reality of the internet, that is your best defense.