Batman and the Case of the Possibly AI-Generated Synopsis
Did DC Comics use AI to write copy on the back cover of a Batman TPB collection? Does it matter?
Last week, a photograph of the back cover of the collected edition trade paperback of Batman: Dark Patterns began making the rounds online. I first saw it on the r/comicbook subreddit in a post titled “Is DC generating the synopsis on the back of their TPBs?”
In the post and the comments, most folks seem, at least at first, reluctant to say with certainty that any AI was used to write the copy, which also appears on the page for the Kindle digital edition of the book. That hesitation is understandable given that “this reads/looks like AI” has quickly become shorthand for “I don’t like this” regardless of the subject’s actual origin.
And, to that point, the first paragraph of the blurb seems to beat the accusations since it does exactly what you expect: praises the creators, compares the book to other well-regarded Batman comics, and lets prospective buyers know what kind of story they’re in for. It does all of this using language you'd expect in such a blurb ("grounded," "beloved classics," "talented," "striking," "gripping").
That first portion of the back cover text also closely mirrors the book's synopsis as found on the print edition's Amazon listing, in DC’s solicitations, and on the DC Universe subscription platform. But then there’s the second paragraph and, well, read for yourself:
“The series masterfully underscores Batman's dual identity as a methodical sleuth and shadowy beacon of hope, skillfully steering clear of cosmic and supernatural distractions. This intentional focus reassures readers of a steadfast commitment to a more authentic, detective-driven experience, highlighting relatable mysteries that brilliantly showcase the Dark Knight as a healer cloaked in darkness.”
“Masterfully underscores” and “intentional focus” read like the kind of wan phrasing that AI loves. “Authentic” to what “experience,” exactly? Why use the clunky phrasing “detective-driven” after already having used the more natural “mystery-driven” in the previous paragraph? And while the other solicitation also mentions that the story lacks “cosmic” and “supernatural” elements, this one describes them as “distractions,” which feels like unusually negative phrasing given how many other Batman titles featuring just such elements DC has also published and would like readers to buy.
This certainly sets off AI warning bells in my mind, and judging by the Reddit comments, I'm not the only one. Tossing this text into three different AI detectors (which are, admittedly, not always accurate) returns three results that are highly confident that the copy contains at least some AI text.
Bleeding Cool has already posted about this blurb and suggests that someone may have used Claude’s lengthen-to-fit feature to rewrite the blurb to the desired length. That seems plausible and would explain the mixture of human and AI-generated text in the copy, as well as the similarities to the other version of the synopsis. I haven't reached out to DC for comment, but Rich Johnston at BC did and didn't receive a response, so I assume they don't want to talk about it.
Taken all together, this feels like more than enough to justify asking some questions about the situation. And if DC isn't taking questions, we're left to ask questions of ourselves. In this case, the question is, if AI was used here, as seems plausible though not confirmed, are we okay with that?

Before making any attempt to answer that question, let me pause here to say what goes without saying: none of this is meant to reflect poorly on the actual comic book itself. Batman: Dark Patterns (by Dan Watters, Hayden Sherman, and Triona Farrell) is, for my money, the best Batman story that DC has published in years (yes, better than Absolute Batman). You should read it. It's great.
Now, with that out of the way, let’s revisit some of what Jim Lee, President, Publisher, and Chief Creative Officer of DC Comics, said about the use of AI at DC during his retailer day speech at New York Comic Con last year:
Let me make one prediction I know I can stand by: DC Comics will not support AI-generated storytelling or artwork. Not now. Not ever—as long as Anne Depies and I are in charge. Because what we do—and why we do it—is rooted in our humanity.
It’s that fragile, beautiful connection between imagination and emotion that fuels our medium—the thing that makes our universe come alive. It’s the imperfect line, the creative risk, the hand-drawn gesture that no algorithm can replicate. When I draw, I make mistakes—lots of them. But that’s the point. The smudge, the rough line, the hesitation—that’s me in the work. That’s my journey. That’s what makes it come alive.
That’s why human creativity matters. AI doesn’t dream. It doesn’t feel. And it doesn’t make art—it aggregates it. Our job—as creators, as storytellers, as publishers—is to make people feel something real. That’s why we create. That’s why we’re still here.
These comments were cheered in the room and roundly praised online, especially coming soon after Marvel Editor Tom Brevoort sent out a newsletter detailing his own experiments with AI. Lee drawing such a clear line in the sand felt like a breath of fresh air in the face of such equivocation and tacit acquiescence.
But do Lee’s comments really apply here? In his speech, Lee was obviously talking about the storytelling done in the comics, not the storytelling done to sell the comics. Not to mention, some commenters in that Reddit thread say they’re surprised anyone even read that blurb, so who cares if the editor or assistant editor or intern or whoever at DC Comics was responsible for turning in that copy decided to get an AI assist?
I could enumerate the many social, environmental, economic, and creative costs that come with even the most basic and casual uses of AI. However, while those costs are real, they have been covered thoroughly elsewhere and are readily available online. At this point, I assume you either already know about them or don’t care to know.
It really gets back to that equivocation and acquiescence I mentioned earlier. In both my professional and personal life, I’ve interacted with plenty of people who use AI, and many of those same people claim to have very real concerns about AI, even, in some cases, considering themselves to be anti-AI, on balance.
So then, how do they justify using AI? Well, they're using AI “the right way,” of course. What is the right way to use AI? I've found that the line between right and wrong, amongst those who care enough to draw it at all, tends to be drawn between tasks seen as innately creative (writing, drawing) and tasks seen as separate and comparatively mundane, even when they support those creative activities (research, ideation). I personally believe this is a false dichotomy because the human consciousness that feeds creative works cannot be so easily partitioned, but that's probably a larger, more philosophical discussion better left for another time.
In practice, though, the right way to use AI is the way the person you’re talking to uses AI. Maybe that’s just ideation. Maybe it’s only research. Maybe it's a little more. This tautological self-justification means that the set of AI activities deemed morally correct can grow to match one's immediate needs or shrink to enhance one's sense of superiority with ease.
And this is how the AI companies could, eventually, win: by achieving a critical mass of people using AI in their self-defined “right way." Because every person who uses AI the “right way” is compromised. Every person who uses AI the “right way” becomes that much less likely to call out others who use AI the “wrong way” because maybe one person's “wrong way” is just another person's “right way." After all, if I’m using AI, who am I to judge the way someone else uses it?
Maybe using just a pinch of AI to lengthen a blurb isn't such a big deal. And in that case, maybe using AI to write the entire blurb is even better. After all, with AI doing all this blurb writing, the editors who used to do these mundane tasks now have more time to do the real, creative heavy lifting of being an editor.
Of course, that may mean the company needs fewer editors overall, but that's good for the company’s bottom line anyway, right? A real win-win.
Yes, I know what a slippery slope argument is. However, in an era where Marvel Studios can lay off its entire visual development team because it thinks AI has the goods to imitate and iterate that team's work indefinitely without anyone knowing or caring, I feel it's a relevant enough point to make.
So here we are again, left to question where the line is, if there is one at all. Is it ad copy? Solicitations? Or full-on AI translations of manga and other foreign material?
Hopefully, Batman is on the case...