I spent three months publishing purely AI-generated content on a test site. 45 articles, all generated via Claude with minimal editing. Engagement metrics: fine. Rankings: garbage.

The articles ranked position 15-25 for their target keywords. Not position 5. Not position 10. Bottom of the second page.

Then I started rewriting them manually, pulling out the specific details, adding real experience, removing the semantic bloat. Same keyword targets, same backlink profiles, different writer. Those posts climbed to position 3-8 within 60 days.

Google isn't penalizing AI writing explicitly. But it is ranking content that sounds like someone actually did something over content that just sounds informed.

The pattern

AI-generated content fails at SEO for one specific reason: it's indistinguishable from 50,000 other AI-generated posts covering the same topic.

You ask Claude: "Write a blog post about keyword research tools." Someone else asks Claude: "Write a blog post about keyword research tools." Another person asks the same thing.

Google sees content A, content B, content C. They're 85% semantically identical. They have the same structure, similar examples, the same depth of research. So Google ranks one of them (probably the one with more backlinks) and buries the rest.

The content is correct. It's well-written. It's SEO-friendly. It's just... indistinct.
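To make the near-duplicate problem concrete, here's a toy sketch of how content overlap can be scored. This is not how Google actually works (search engines use semantic models, not raw word overlap), and the sample strings are made-up stand-ins; it just illustrates why three prompts with the same instruction produce posts that cluster together while an experience-based post stands apart.

```python
# Toy illustration only: Jaccard (shared-word) similarity as a crude
# proxy for content overlap. Real ranking systems use semantic
# embeddings, but the principle is the same: near-identical texts
# score high against each other and get treated as interchangeable.

def jaccard(a: str, b: str) -> float:
    """Similarity between two texts as a shared-word ratio (0.0 to 1.0)."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical snippets from three different sites:
post_a = "semrush is a popular keyword research tool that many marketers use"
post_b = "semrush is a popular keyword research tool used by many marketers"
post_c = "i found a low competition keyword and ranked position 2 in 60 days"

print(f"A vs B: {jaccard(post_a, post_b):.2f}")  # high: near-duplicate phrasing
print(f"A vs C: {jaccard(post_a, post_c):.2f}")  # low: distinct, experience-based
```

The two generic posts score far closer to each other than either does to the experience-based one, which is the whole problem: there's nothing for a ranking system to prefer between A and B.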

What actually works

The posts that climbed to position 3-8 had these in common:

1. A specific story or example

AI version: "There are many keyword research tools available. SEMrush is one popular option that many marketers use."

Real version: "I've been using SEMrush for six months. Found a niche keyword with 40 searches/month and basically no competition. Wrote a 1,200-word post on it. It's ranking position 2 now and drives about 50 visits/month. Here's what I did differently."

The specific number (40 searches), the real outcome (position 2), the tangible impact (50 visits) — none of that is in the AI version. Those details are what set the post apart.

2. A contrarian take or a limitation the AI won't mention

AI: "SEMrush is great for competitive analysis. It offers comprehensive features."

Real: "SEMrush is overkill if you're just starting out. The interface is overwhelming, and half the features you won't touch. I'd recommend starting with the free tier of Ubersuggest. Once you have budget and you know what you're analyzing, upgrade."

AI won't give you a reason not to use a tool because it's trained to be positive. Humans recommend things with caveats. That feels real.

3. Context that only comes from doing the thing

AI doesn't know what it's like to spend 8 hours on keyword research and find nothing. It doesn't know the frustration of a tool being slow during peak hours. It doesn't know which tools have customer service that actually helps vs. chatbots that deflect.

Real writers mention this stuff because they've lived it. The posts that ranked best had three or four throwaway comments about tool limitations or quirks that only someone who'd actually used the tool would know.

The formula that works

I call it the "70/30 rule":