
Skill · blader

humanizer.

Strips the AI tells out of your text. 29 patterns, voice calibration, before-and-after every time.

$ git clone https://github.com/blader/humanizer.git ~/.claude/skills/humanizer

humanizer exists because two years of working with the model is enough for its tells to leak back into your own writing. Every draft now goes through humanizer before it ships, not because the prose was bad, but because the boundary between "your voice" and "the model's version of your voice" gets harder to see the longer you write together.

Most "humanize this" tools are regex over a stop-list of em-dashes and the word delve. humanizer is different in two ways. First, the patterns come from WikiProject AI Cleanup, the people who clean up AI slop on Wikipedia at scale and who have therefore seen more of it than anyone alive. Second, it accepts a sample of your own writing and calibrates to your rhythm rather than producing generic clean text.

§01What it does

The skill knows 29 patterns, grouped into four families. Content tells include significance inflation, promotional language, vague attributions, name-dropping, superficial -ing analyses ("symbolizing... reflecting... showcasing..."), and the formulaic despite-challenges framing. Language tells include AI vocabulary such as testament, landscape, delve, and navigate; copula avoidance (serves as, features, boasts); the rule of three; synonym cycling; false ranges of the from X to Y shape; and passive-voice fragments.

Style tells include em-dash overuse, boldface overuse, Title Case Headings, emojis, inline-header lists, and signposting announcements such as Let's dive in. Tone and filler tells include chatbot artifacts (I hope this helps!), sycophancy (Great question!), hedging stacks (could potentially possibly), and generic conclusions like the future looks bright.
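Some of the language and style tells above have surface forms you can catch mechanically. As an illustration only (the actual skill is prompt-based, not a regex pass, and this stop-list is a deliberately tiny subset):

```python
import re

# A tiny illustrative subset of the tells described above.
# The real skill is a prompt, not a regex scanner.
TELLS = {
    "AI vocabulary": re.compile(r"\b(testament|landscape|delve|navigate)\b", re.I),
    "copula avoidance": re.compile(r"\b(serves as|boasts)\b", re.I),
    "false range": re.compile(r"\bfrom \w+ to \w+\b", re.I),
    "hedging stack": re.compile(r"\b(could|might) (potentially|possibly)\b", re.I),
    "signposting": re.compile(r"let'?s dive in", re.I),
}

def flag_tells(text: str) -> dict:
    """Map each tell name to the phrases in `text` that triggered it."""
    hits = {}
    for name, pattern in TELLS.items():
        found = [m.group(0) for m in pattern.finditer(text)]
        if found:
            hits[name] = found
    return hits
```

Running it on a sentence like "This serves as a testament to progress. It could potentially help. Let's dive in." flags four of the five tells. The point of the sketch is the limit it demonstrates: a stop-list catches vocabulary and stock phrases, but significance inflation or superficial -ing analysis needs a model reading for meaning.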

After the first rewrite, the skill runs a second pass with the prompt "What makes the below so obviously AI generated?", answers it briefly, and then rewrites again to fix what it found. That second pass is what catches the lingering tells the first rewrite missed.

§02Voice calibration is the trick

Most editing tools have one output voice: clean, neutral, generic. That works for technical documentation and fails for anything with a byline. humanizer has an optional step where you paste two or three paragraphs of your own writing first, and the skill analyzes:

  • Sentence length distribution. Short and punchy, long and flowing, or mixed.
  • Word-choice register. Casual, academic, somewhere between.
  • Punctuation habits. Commas, dashes, semicolons, parenthetical asides.
  • Recurring phrases and verbal tics.
  • How transitions are handled. Explicit connectors versus just starting the next point.

It then applies those patterns to the rewrite, instead of importing a generic "good prose" template. The difference is real. Without calibration, the output sounds like every other humanized text on the internet. With calibration, it sounds like you on a good day.
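The measurable half of that calibration amounts to a small stylistic fingerprint. A minimal sketch of the quantifiable parts, sentence-length distribution and punctuation habits (the skill does this analysis in-prompt, not in code, and register and verbal tics are left to the model here):

```python
import re
from collections import Counter
from statistics import mean, pstdev

def voice_fingerprint(sample: str) -> dict:
    """Crude stylistic stats from a writing sample: sentence-length
    distribution and punctuation habits."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", sample.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    punct = Counter(ch for ch in sample if ch in ",;:()-")
    return {
        "sentences": len(sentences),
        "mean_len": round(mean(lengths), 1) if lengths else 0.0,
        "len_spread": round(pstdev(lengths), 1) if len(lengths) > 1 else 0.0,
        "punctuation": dict(punct),
    }
```

A high `len_spread` with heavy semicolon use describes a very different writer than uniformly short sentences and bare commas, which is exactly the difference the rewrite needs to preserve.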

§03The before / after

Here is a paragraph about AI coding assistants in two voices. Same idea both times. The first is what comes out of the model on a default prompt. The second is what humanizer produces.

Before, with every AI tell in one paragraph:

AI coding assistants serve as an enduring testament to the transformative potential of large language models, marking a pivotal moment in the evolution of software engineering. In today's rapidly evolving technological landscape, these groundbreaking tools — nestled at the intersection of research and practice — are reshaping how engineers ideate, iterate, and ship. At its core, the value proposition is clear: streamlining workflows, enhancing collaboration, and fostering alignment. It's not just about autocomplete; it's about unlocking creativity at scale. While details are limited, it could potentially be argued that these tools might have a meaningful impact. Despite challenges typical of emerging technologies, the ecosystem continues to thrive.

Read it once and every pattern is at work: testament, landscape, nestled, em-dashes around an aside, the rule of three (ideate, iterate, ship and streamlining, enhancing, fostering), the negative parallelism (it's not just X, it's Y), the cutoff disclaimer, the formulaic despite challenges, the generic conclusion. None of the sentences carry information. Every one of them is doing performance.

After, humanizer's pass:

AI coding assistants are good at the boring parts of the job: boilerplate, config files, and the glue you don't want to write. They're worse at judgment. The dangerous part is how confident the suggestions look — I've shipped code that compiled, passed lint, and missed the point, because I stopped reading carefully. If you treat it like autocomplete and review every line, it's useful. If you use it to skip thinking, it will help you ship bugs faster.

Same word count, roughly. The new version drops every pattern from the first one, and it gains an actual position. That is the point of the tool.

§04Setup

# Claude Code
mkdir -p ~/.claude/skills
git clone https://github.com/blader/humanizer.git ~/.claude/skills/humanizer

# OpenCode (or skip; OpenCode also reads ~/.claude/skills/)
mkdir -p ~/.config/opencode/skills
git clone https://github.com/blader/humanizer.git ~/.config/opencode/skills/humanizer

Usage in either tool is the same. Invoke /humanizer, paste the text, and let it run. For voice calibration, paste a sample of your own writing in the same prompt before the text you want rewritten. Two or three paragraphs is enough.

◆ pull quote

LLMs use statistical algorithms to guess what should come next. The result tends toward the most statistically likely result that applies to the widest variety of cases. — WikiProject AI Cleanup

§05Caveats

  • It is an editor, not a writer. If the source paragraph does not carry an idea, humanizer cannot invent one. The before/after above only works because the second version had something to say.
  • Voice calibration needs real samples. Pasting in someone else's writing as your "voice" will apply their patterns to your text. The skill is honest about doing what you tell it.
  • Some AI patterns are also good writing. Em-dashes, parallel construction, and the rule of three exist for a reason. The skill tilts away from them deliberately because the model overuses them. If you have earned a rule of three, leave it in.
  • It will not save bad ideas from sounding bad. A clean version of an empty paragraph is still an empty paragraph. Edit the thinking before you edit the text.
◇ summary · field notes
$ vibgineer summarize humanizer
  1. 01
    Content tells
    • significance inflation
    • promotional language
    • vague attributions
    • notability name-dropping
    • superficial -ing analyses
    • formulaic challenges
  2. 02
    Language tells
    • AI vocabulary
    • copula avoidance
    • rule of three
    • synonym cycling
    • false ranges
    • passive voice
  3. 03
    Style tells
    • em dash overuse
    • boldface overuse
    • title case headings
    • emojis
    • inline-header lists
    • signposting announcements
  4. 04
    Tone & filler tells
    • chatbot artifacts
    • sycophantic tone
    • cutoff disclaimers
    • filler phrases
    • excessive hedging
    • generic conclusions
✓ 29 patterns · plus a final "obviously AI generated" audit pass.