This was originally intended to be a brief comment addressing A.B. Timothy’s thoughtful reply to my review of the May 2025 BookBub report “How Authors are Thinking About AI”, but it got a wee bit too long, so I’m posting it here.
Thanks for the response, A.B.
I appreciate your honesty about using Grok for marketing strategy while feeling conflicted about other AI uses—and especially your acknowledgment that you’re “throwing stones from a glass house.” That takes integrity.
Your concerns about “humanity in art” pushed me to be more specific about where I draw the line with AI use in writing.
My original article was about market reality—AI use is already widespread and it works. But you’re asking something deeper: does certain AI use betray the creative contract between author and reader?
We both agree that it can; where we differ, I think, is where that line is drawn.
My concern isn’t “AI bad, humans good.” It boils down to authorship fraud, where the credited author didn’t actually write the work. AI-generated prose passed off as human-written? That’s fraud. Absolutely.
But what about undisclosed ghostwriters working under NDA? I also believe that’s fraud, same with “co-authors” taking credit and royalties for work they didn’t do.
They all violate the same principle: readers buying “Author X’s book” deserve to get Author X’s creative work.
I’m not pro-AI. I’m pro-right-tool-for-the-job. AI is just another tool—appropriate for some tasks, catastrophically wrong for others. This is why I’m transparent about my AI use but also specific about what that means:
AI writing prose: I would never do this. This is fraud. The story must be written by me—my voice, my characters, my plot decisions. I’ve tested AI extensively for creative work, and it fails catastrophically. As I demonstrated in my June article, AI butchered Scott Lynch’s prose, making absurd critiques like saying his writing “lacks emotional depth.” AI isn’t intelligent, isn’t smart, and certainly isn’t creative—it’s a pattern-matching machine trained on a metric buttload of complete shit writing. But even if it worked perfectly, using it for prose would be lying about authorship.
AI for marketing materials: As a freelance graphic designer, I art direct these the same way I direct a commissioned cover artist. Nobody thinks I personally painted the covers I design. I’m a designer; I commission illustrators. And if the client wants AI visuals, I’ll craft them.
AI for manuscript analysis: I use it like a research assistant, finding patterns and organizing data—like when I used Claude Sonnet 4 to systematically analyze my 136,000-word manuscript for audiobook production, creating pronunciation guides, character breakdowns, and content warnings in hours instead of weeks. The creative decisions are all mine.
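For the curious, the mechanical half of that workflow is easy to sketch. This is a hypothetical illustration, not my actual pipeline: it scans a manuscript for recurring capitalized terms (likely character names and invented words) to produce a candidate list for a pronunciation guide, which a human pass, or an LLM pass, then curates.

```python
import re
from collections import Counter

# Sentence-starters and other common words to ignore (illustrative, not exhaustive)
COMMON = {"The", "A", "An", "I", "He", "She", "They", "It", "But", "And",
          "When", "Then", "There", "This", "That", "What", "If", "In", "On"}

def pronunciation_candidates(text, min_count=3):
    """Return recurring capitalized terms worth flagging for an
    audiobook pronunciation guide, most frequent first."""
    words = re.findall(r"\b[A-Z][a-z]+\b", text)
    counts = Counter(w for w in words if w not in COMMON)
    return [w for w, n in counts.most_common() if n >= min_count]
```

Crude, yes, but it shows the division of labor: the machine does the counting; the creative and editorial judgment stays with the author.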
AI for brainstorming: I use it as a sounding board, exactly like you use Grok for marketing strategy. The ideas are mine; the AI just helps me think through possibilities.
What bothers me most is that the indie publishing world has normalized multiple forms of authorship fraud that nobody talks about.
Professional ghostwriting: Ghostwriters earn around $0.35-0.75/word, about $28,000-$60,000 for an 80,000-word novel. The ghostwriter writes the prose and gets no credit or royalties. The credited author may (but doesn’t always) create the story, setting, and characters.
Exploitative ghostwriting: Even worse, some bestselling indie authors pay poverty wages—$0.10/word or less—essentially $8,000 or under per novel. They pump out multiple books per year on the backs of exploited writers working under NDA (while bragging about their bullshit “productivity hacks”).
Fraudulent “co-authoring”: Then there are the authors who produce several “co-authored” books per year but don’t contribute meaningfully to the creative work. They find writers to create entire novels, put both names on the cover, and take a significant share of ongoing royalties despite contributing nothing. The actual writer does 100% of the creative work but only gets a portion of the income—forever. The “co-author” is just exploiting their platform and audience.
All of these are authorship fraud—someone else wrote it, but the credited author is taking credit (and a significant chunk of the money).
Someone will inevitably leave me a comment that ghostwriting is different because “at least a human wrote it.” But this argument only makes the exploitation worse—especially at poverty wages of $0.10/word or less. When you commission a ghostwriter for $8,000 to write your 80,000-word novel, you’re exploiting a human writer’s creativity: paying them far below professional rates (~$40,000), giving them no credit, no royalties, and no future income, taking credit for THEIR human creativity, and profiting from THEIR labor while they get poverty wages and absolutely fuck-all to add to their own bibliography.
At least AI-written prose doesn’t exploit a human being—though it’s still authorship fraud against readers who think they’re buying your creative work.
My principle is simple: the credited author must actually write the prose. Whether a human or AI wrote it doesn’t change the fraud—it only changes whether you’re also exploiting another human writer.
Both AI prose and ghostwritten prose should require disclosure—but currently, only AI does. Amazon requires authors to flag AI-generated content, but ghostwriting remains invisible. The industry’s acceptance of undisclosed ghostwriting doesn’t make it ethical. It makes the industry hypocritical.
And let’s be honest. The indie world has a long history of various forms of fraud. AI-written prose is just the latest version of an old problem.
Book Stuffing: Authors like Chance Carter padded new releases with multiple already-published books, pushing them toward Kindle Unlimited’s 3,000-page maximum and earning about $13.50 per book through fraudulent page reads; some authors made over $100,000 per month. They included links directing readers to the last page for giveaways, registering thousands of fraudulent page reads and robbing honest authors of their share of the KDP Select fund.
Click Farms: Bad actors used AI bots or racks of cheap devices in developing countries, in some cases with human-trafficking victims walking the racks doing page-turns, to generate fraudulent Kindle Unlimited reads. Click farming seized a greater share of the KDP Select Global Fund, with the money stolen from other indie authors, not from Amazon.
Review Manipulation: In 2023, debut author Cait Corrain was exposed for creating fake Goodreads accounts to one-star other debut authors’ books (mostly authors of color) while upvoting her own book.
The common thread in all these scandals: taking credit and/or money for work you didn’t actually do—whether page reads you didn’t earn, reviews you didn’t receive legitimately, or prose you didn’t write.
Amazon sued authors for book stuffing. Amazon banned authors caught using click farms. The industry condemns review manipulation.
But the industry largely accepts ghostwriting and fraudulent “co-authoring” as fair play.
Why the double standard? If the principle is “don’t commit fraud to make money in publishing,” then all of these should be equally condemned.
But that’s a much longer essay for another day.
I’m not an AI evangelist who thinks the technology solves everything. I’ve documented extensively where AI catastrophically fails:
AI as Gatekeeper: My article earlier this month showed how publishers using AI screening would have auto-rejected Beloved, The Handmaid’s Tale, Catch-22, and Slaughterhouse-Five. AI can’t distinguish satire from racism, condemnation from glorification, or character voice from author endorsement. AI sensitivity screening doesn’t just give bad feedback—it acts as a gatekeeper that rejects manuscripts before human judgment can be applied, and it’s worst for exactly the kinds of books that need to be published.
AI for Creative Feedback: As my June article proved, AI makes absurd critiques, contradicts itself session-to-session based on prompt variations, and will make you a worse writer if you blindly follow its advice.
AI for Prose: The Authors Guild is clear: “AI-generated text is not your authorship and not your voice… When you use AI to generate text that you include in a work, you are not writing—you are prompting. Choosing to be a professional prompter is not the same as being a writer.”
I oppose AI prose as strongly as you do—because it’s fraud. I also oppose AI gatekeeping because it’s destroying literature. And I oppose AI for creative feedback because it makes writers worse.
However, AI is an incredibly useful tool for several scenarios:
Systematic analysis: Finding plot holes, tracking character speech patterns, identifying inconsistencies, analyzing manuscript structure, calculating POV percentages—tasks that took weeks now take hours.
Data organization: Creating character databases, pronunciation guides for multiple distinct linguistic systems, world-building documentation, content categorization.
Pattern recognition: Processing information at scale that would be impossible manually.
Marketing assets: Concept art, blurbs, cover art, promotional graphics, book trailers, and more.
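To make the “systematic analysis” item above concrete: calculating POV percentages is, at bottom, just counting. Here’s a minimal sketch, assuming scene breaks are marked and each scene is tagged with its POV character (my hypothetical convention here, not any standard format):

```python
from collections import Counter

def pov_percentages(scenes):
    """Given (pov_character, scene_text) pairs, return each POV
    character's share of the manuscript's total word count."""
    words = Counter()
    for pov, text in scenes:
        words[pov] += len(text.split())
    total = sum(words.values())
    return {pov: round(100 * n / total, 1) for pov, n in words.items()}
```

Feed it a tagged manuscript and you instantly see whether a supposed dual-POV novel is actually 80/20. The point stands: the tool tabulates; the author decides what the numbers mean.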
You say you feel “betrayed” when writers use AI for covers or outlines. I’m curious as to why.
If someone uses AI to write their prose and passes it off as their own: Yes, that’s betrayal. That’s authorship fraud. They didn’t write the book.
If someone uses AI to generate marketing materials: How is this different from commissioning a cover artist? You’re doing exactly that—commissioning a human artist for your cover. Neither of us is claiming to have personally painted our covers. The only difference is your vendor is human and (presumably) expensive. Both of us are art directing. Both of us are making creative decisions about concept, composition, mood. The vendor executing our vision is different, but the creative role is the same.
If someone uses AI to brainstorm or outline: You use Grok for marketing strategy—literally AI helping you think through how to present yourself. How is that different from using AI to explore plot possibilities? In both cases, the human makes the final creative decisions. Authors use all sorts of tricks to brainstorm plots. Hell, some use “story dice.” How is AI any different?
If someone hires an undisclosed ghostwriter: This should trigger the same “betrayal” feeling as AI-written prose—but does it? Because this is already common in indie publishing, and there’s far less outrage, if any at all.
You’re commissioning a human artist for your cover. That’s a perfectly valid choice, and I respect it. But let’s be clear about what’s happening: you’re providing creative direction to someone else who executes it. You’re art directing.
When I use AI for marketing materials, I’m also art directing—providing creative vision, curating outputs, making aesthetic judgments. The vendor is different (AI vs. human illustrator), but the creative role is the same.
Ultimately the “humanity” in both our approaches is in the creative direction and judgment, not in who/what holds the brush. And as for “soul,” man, I’ve been in the commercial art and advertising business for three decades. Madison Avenue has no soul.
Readers buy books for the story—the prose, characters, emotional journey created by the credited author.
They don’t buy books for how the cover was made, what tools were used for marketing, whether the author used human or AI research assistants, what brainstorming methods were employed, or whether they used Grammarly (which is an AI-powered tool, don’t forget).
As long as the credited author actually wrote the story, the production process is just business decisions. As far as I’m concerned the betrayal only happens when someone claims authorship of work they didn’t create—whether that’s AI-generated prose or ghostwritten novels.
I’ve documented elsewhere that despite stated ethical concerns about AI, readers don’t actually boycott AI covers, because frankly most readers just don’t care. I’ll allow that some do care about AI use in covers, not because the quality is worse, but because they value human labor in art creation or have ethical concerns. (I won’t get into the weeds on the various ethical arguments against AI, such as training data sources, environmental impact, and labor displacement, because I just wrote about them extensively yesterday.) If readers do care, that’s a valid preference, just as some consumers prefer organic produce or fair-trade products. But those preferences don’t make the alternatives unethical. The distinction matters: readers who care can seek out human-illustrated covers. Readers and authors who don’t care shouldn’t be told their choice is morally wrong.
You’ve acknowledged using Grok for marketing strategy. You’ve commissioned a human artist for your cover. Under what principle is AI for marketing strategy acceptable, but AI for marketing execution (covers, ads) betrayal?
The only consistent principle I can see is: the story must be written by the credited author. Everything else—marketing, tools, vendors—is just business operations.
Since you blogged about it, I’m assuming you disclose your Grok use to your X followers. If you don’t disclose, is that different from authors who use AI cover art but don’t disclose it?
I think the disclosure issue points to the real problem: there’s a stigma around AI use that exists regardless of whether the use is appropriate. The one in three authors hiding their AI use aren’t necessarily hiding AI-written work; many are using AI for legitimate purposes like marketing materials, research, and analysis. They’re hiding because disclosure triggers backlash, not because the work itself fails. The Bob the Wizard cover scandal proves this: 2,500 fantasy readers couldn’t identify the AI art even when primed to scrutinize it. The backlash came after disclosure, not from the work itself (which was dope, incidentally; I loved that cover and bought the book because of it).
The Future You Fear
No disrespect intended, but your apocalyptic framing—“gleaming-enticing piles of AI garbage,” and eventual “jihad” to reclaim humanity—is the same Pavlovian “slop” response I get from randos on Twitter every day when I post concept art generated by AI. What makes AI-generated imagery “garbage” as opposed to kitbashed stock photography or pennies-on-the-dollar waifu crap from a Fiverr artist in Elbonia?
I’ve been experimenting with generative AI images and video for years now, and the quality is progressing exponentially. In 2022-2023? Sure, AI art was garbage: malformed hands, six fingers, eyes pointing in different directions. That was real. That was also over two years ago. Current models like Midjourney V7, Stable Diffusion XL, and Flux generate professional-quality imagery, and in blind tests people preferred the AI-generated artworks when unlabeled and could only detect them at 53-55% accuracy.
Literally a coin flip.
And that was over a year ago. The technology has only gotten better. And will continue to do so.
Bad AI art will hurt your sales. So will bad stock photos. Bad 3D renders. Bad Fiverr designs. The word “bad” is doing the heavy lifting here, not “AI.”
There won’t be a “jihad to reclaim humanity.” People said the same thing about photography replacing painting. Digital art replacing traditional media. CGI replacing practical effects. Photoshop replacing airbrush. Now they’re all ubiquitous tools. AI visual imagery will be too. It’s no more soulless than a 3D model from Daz Studio. It’s just another tool.
I oppose AI prose (June article). I oppose AI gatekeeping (October article). I oppose AI making creative or moral judgments.
What I support is AI leveraged to execute human vision—the same way a commissioned artist, a research assistant, or a marketing consultant does. This doesn’t diminish human creativity; it democratizes access to professional production values so more authors can focus on what matters: writing compelling human stories.
As for AI books flooding the market with garbage—KDP already publishes over 1.4 million books annually, roughly 117,000 per month. The market’s been flooded with human garbage for years. AI just changes the source of some of the shit, not the volume. Quality problems are structural to platform economics, not unique to AI technology.
AI as a tool promises to level the playing field for independent creators and allow them to compete with the big players. Tradpub spends $10K+ on cover budgets, professional designers, and marketing teams. With AI tools, indies can create legitimately professional-looking covers and marketing assets instead of using clip art or stock photography, which just screams “amateur unmitigated garbage.” I’ve been experimenting with professional-grade promo videos for Instagram and TikTok. The quality has to be seen to be believed. And again, it will only get better.
And at the end of the day this isn’t about competing with traditional cover artists—because frankly your average indie author could never afford one anyway. It’s about democratizing opportunity. Competing with publishers and indie content mills with six-figure advertising budgets.
As far as I’m concerned the bottom line is the credited author must write the story. The prose, characters, plot, voice—this can’t be outsourced to AI or ghostwriters without disclosure. That’s authorship fraud.
Everything else is vendor services and tools. Marketing materials, analysis, brainstorming assistance—these aren’t the product readers are buying. They’re business operations.
Transparency matters for authorship, not for process. Readers deserve to know who wrote the story. They don’t deserve to audit the tools you used in marketing—or even editing, frankly. And like I said earlier, even tools like Grammarly use AI extensively. Even spell-check does. Hell, everything does now, whether you know it or not, or like it or not. The distinction between “AI-generated” (created by AI) versus “AI-assisted” (edited/refined by AI) is crucial. Using AI to edit your human-written prose shouldn’t require disclosure. Using AI to write the prose does.
I oppose AI-written prose as strongly as you do—because it’s fraud. But I also oppose undisclosed ghostwriting and exploitative “collaborative partnerships” for the same reason. All violate the authorship contract with readers.
Using AI for marketing materials, analysis, brainstorming, etc. is legitimate use that doesn’t require disclosure because it’s not authorship. You use Grok for strategy. You commission human art. Both are legitimate business decisions that don’t require disclosure because you’re still writing your own stories.
The only betrayal is when the credited author didn’t actually write the work.
That’s the line that matters.
I’ve spent years testing AI’s limits. I know exactly where it fails (prose, feedback, gatekeeping) and where it helps (analysis, organization, marketing visuals). I’m not an AI evangelist—I’m someone who’s documented both its catastrophic failures and its legitimate uses.
You use Grok to optimize your marketing approach. I use AI to execute marketing materials and analyze manuscripts. We’re both leveraging AI for auxiliary tasks so we can focus on writing compelling human stories. The only difference is you feel emotional betrayal about some AI uses while practicing others.
For me the principle that protects what truly matters is simple and has nothing to do with AI or any other technology: The prose readers buy must be correctly attributed to the true author. That’s it. That’s my position.
Your prose is human. My prose is human. That’s the sacred line. Which tools an author uses for pre- and post-production and packaging is frankly irrelevant.
Hey Ryan,
I just published my response to this article, I appreciate this back and forth, hope you enjoy reading my response as much as I enjoyed reading yours.
Warm Regards,
A.B. Timothy, Author