Rougher Isn't Smarter
What the Anti-AI Crafting Trend Gets Right and What It’s Missing
Summary: Every 2026 design trend report says the same thing: AI output looks generic, and the answer is craft, texture, imperfection. The diagnosis is right. But the explanation underneath it is incomplete. AI doesn’t look the same because it’s too polished. It looks the same because the models converge on narrow defaults and we’re drowning in the sheer volume of output. And the craft response, when it skips the intentionality, is just a different kind of shortcut.
I’ve been reading a ton of 2026 design trend reports. After a while, they all start to feel like everybody got the same memo.
Creative Bloq says this year is defined by “the deliberate rejection of AI’s hyper-polished aesthetic.” (Creative Bloq, December 2025) Kittl’s trend report talks about designers using imperfection “to inject soul into the algorithm.” (Kittl, January 2026) Graham Sykes at Landor coined “Anti-AI Crafting.” The consensus is everywhere, and the diagnosis isn’t wrong. AI output really does have a particular aesthetic problem. But I think the conversation keeps getting stuck on the surface when the interesting part is happening underneath.
Right About the What. Wrong About the Why.
AI-generated visuals have a recognizable sameness. That smooth, glossy, odd quality. You scroll through a feed and everything blurs together. Not necessarily ugly. Just... generic. Hyper-polished in a way that feels empty. Nick Foster, who’s explored the future for Apple, Google, Nokia, and Dyson, told Dezeen he’s tired of design’s “homogenous gloss” where everything is “neat, tidy, polite and free from anything gritty, confrontational or different.” (Dezeen, January 2026)
Most of us feel that. But the trend reports frame it as AI being “too polished” or “too perfect,” and the solution as embracing human imperfection.
AI doesn’t default to that sterile aesthetic because it’s too good. It defaults there because of how the models are trained. They converge on overlapping data, average out the edges, and land on a narrow range of visual defaults. The sameness isn’t a sign of quality. It’s a sign of statistical tendency. Combine that with the massive volume of AI output flooding every visual platform, and you get a landscape where everything feels like it came from the same place. Because… it did.
And that changes what the answer looks like. If you think the problem is that AI is “too polished,” the answer is to make things rougher. If you understand that the problem is narrow aesthetic defaults and volume, the answer is to bring stronger creative direction to the process, whether that means handcraft, AI-assisted work, or anything in between.
The Beauty Mark Problem
The instinct behind Anti-AI Crafting is good. Designers reaching for texture, physical materials, hand-built work, visible process marks? I’m into that. Burberry’s Cross-Stitch Knight Life campaign merged craftsmanship with fashion in a way that felt intentional. Madalena Studio cultivated bacteria on a cork logo for the brand Crucible and documented its organic growth. (Creative Bloq, December 2025) That kind of work resonates with me because every decision in it was deliberate.
But Elizabeth Goodspeed, writing for It’s Nice That, raised a great follow-up question. Most designers don’t have the time, tools, or support to do fully analogue work. The infrastructure isn’t there. So what happens instead is strategic mimicry: the market wants “handmade” cues, and designers simulate them digitally. She compared it to penciling in a beauty mark. An intentional imperfection designed to signal authenticity, not the result of an actual process.
Think about what that means. Slapping a rough texture on a design to signal “not AI” and generating a hyper-polished image without thinking about it are two sides of the same coin. Both skip the creative decision-making that actually makes design meaningful. The aesthetic is different, but the thoughtlessness is the same.
It Was Never About the Tools
The designers getting the best results right now, whether they’re working with their hands or with AI, are the ones bringing strong intentionality to the process. The medium isn’t what matters. The thinking behind it is.
We had sameness before generative AI. Template culture, stock photo dependency, Dribbble-driven homogeneity. The tools change. The underlying problem doesn’t: when we optimize for speed and volume without creative direction, EVERYTHING converges.
AI just made the convergence faster and more visible. Which means the opportunity is also more visible. If you’re willing to push past the defaults, to treat AI output as raw material instead of finished product, or to invest the time in craft, the gap between intentional work and everything else has never been wider.
The sameness problem is real. But the solution isn’t as simple as making things rougher. It’s about being more deliberate.