I was noodling on whether ghostwriting could support my writing career full-time when I was approached to work for a bestselling author with prolific output, a substantial catalog, and serious commercial success. A legitimate author brand running a professional operation as a small indie publisher.

They sent me an outline to work from. I opened the document expecting a detailed series bible, character profiles, maybe some worldbuilding notes. What I got was a few pages of dictated rough notes that were ninety percent background lore and pop culture references. “PROTAG” was the main character’s name throughout. His personality? A stock competent, sarcastic protagonist. The worldbuilding? Generic science fiction references. The plot structure? No actual Act One—just background information and a single inciting incident labeled EVENT—a vague “quantum disruption” that would kick everything off.

When I asked for clarification, the response was blunt: “This is as much as any of their ghosts get.”

So I did what any professional author would do. I built a comprehensive series bible from those bare bones. I spent several days developing an eight-page pitch document with complete worldbuilding, including technological systems and political structures, plus three-book character arcs, market positioning analysis, a target audience breakdown, and a commercial appeal strategy. I created an entire universe. I developed the EVENT into a scientifically plausible catastrophe. I researched a series title that was simple and memorable, yet unique enough for market positioning. I wrote a full sample chapter with distinctive voice, emotional dynamics between characters, and character consciousness.

I delivered professional-grade development work as an unpaid pitch to demonstrate my capabilities.

Their response came back through the intermediary who’d recruited me: “They felt yours would not sell and was marketed to an audience that doesn’t buy that kind of story. And in all fairness, their market research tells them what works.” The specific feedback? “Too wordy.” The sample chapter “needs to be edited down.” The narrative “doesn’t flow as well as it could.”

No comment on the extensive worldbuilding. No feedback on the character development. Nothing about the three-book arc or market positioning.

I asked for clarification. Even basic guidance would have helped. After all, I’d asked before submitting whether I had creative leeway as long as I stuck to the basic premise. They’d said yes. The response: “They really disliked the direction you went, felt it was too wordy, and needs to see something closer to the original in order to provide feedback. Basically, ‘he needs to follow the outline, I don’t know what to do with this except say it won’t sell.’”

But I had followed the outline. Every single plot beat. I stuck to the basic premise completely. I’d just added character relationships, distinctive voice, emotional stakes, thematic depth—the things that make plot beats into a story rather than a Wikipedia summary.

When I tried to clarify what they actually wanted, I was told: “If you can’t work off of what’s there, you’re probably not right for the job.”

At that point, I’d already decided to walk away. This obviously wasn’t a professionally run collaboration. But I was curious—and honestly, a little pissed off. What DO they want from an outline that has no character voice, no emotional throughline, and barely any plot? What does “follow the outline” actually mean when I demonstrably followed every beat? The answer seemed obvious: they wanted competent plot execution with no character work. They wanted “competent PROTAG does competent thing competently, then does other thing, then resolves conflict.” They wanted plot-driven competence porn with no messy emotional entanglements, nuance, or moral conflict.

(You know, the kind that actually makes a protagonist three-dimensional.)

And so I thought, well shit, AI could probably do that. AI is great at competent, personality-free execution of plot beats. Hell, they’d probably prefer it! No annoying questions about character motivation. No “wordy” emotional stakes. Just clean, efficient prose hitting the required beats.

I fed their craptastic outline to Claude Sonnet 4.5. Asked it to write a chapter. Gave it some guidelines and direction. It produced exactly what I expected: something that was competent, generic, and adequate. The prose equivalent of assembled IKEA furniture. I spent ten or fifteen minutes polishing the output, tightening a sentence here, smoothing a transition there, fixing minor awkwardness.

Then I submitted it.

The response came back: “Perfect. When can you start?”

They offered me the job. Ten thousand dollars for a hundred-thousand-word novel. That’s ten cents per word (keep in mind professional ghostwriters charge at least thirty-five to seventy-five cents per word). They expected me to produce ten to twenty thousand words per week and accept that there would be three to four major revision rounds—for free! I countered at a reasonable rate reflecting what I’m actually worth and demanded professional project management standards.

“I appreciate the offer,” came the reply. “Unfortunately, I will have to pass on this deal at this time. $10k for a book is the absolute most I can pay.”

Not the most they were willing to pay. The most they could pay. Their business model’s ceiling.

I very professionally laughed my ass off and walked away. Because obviously. But here’s what kept nagging at me: A bestselling author with substantial commercial success couldn’t tell that my AI-generated sample was AI-generated. They preferred it over professional craft that demonstrated character development, distinctive voice, and emotional depth.

I can’t prove this pattern is universal—this was one operation. But it isn’t some fly-by-night scam. This is a commercially successful author brand with substantial readership and decent ratings. Yet their editorial judgment couldn’t distinguish AI’s competent plot execution from human craft. More importantly: they actively preferred the AI output. What this reveals to me is that operations already paying poverty-wage ghostwriting rates have optimized quality down to “adequate execution of beats” and stripped out everything except functional plot delivery. When AI can provide that same functional delivery for a fraction of the cost—maybe fifty dollars in API fees versus ten thousand—the economic pressure becomes unavoidable.


I’ve conducted multiple experiments testing whether AI can provide useful feedback for improving prose (TL;DR: it sucks at that). I’ve also tested whether it can provide scaffolding I could polish. Most recently, I fed Claude Sonnet 4.5 nearly a hundred thousand words of my YA space opera Doors to the Stars—the complete novel, character guides, detailed worldbuilding, explicit voice instructions for my fourteen-year-old protagonist Wulan, a scavenger surviving in slum conditions. (This was a completely different project, completely different character, no connection to the ghostwriting pitch.) I wanted systematic testing of AI’s actual capabilities with my work.

We spent over an hour collaborating on an action scene—the type of content AI supposedly excels at. Physical choreography, spatial relationships, mechanical problem-solving. A compound infiltration sequence with climbing, navigation, security challenges. The result? Under a thousand words at seventy to eighty percent completion.

It’s worth zeroing in on what that “eighty percent” actually means.

That session required well over a dozen significant iterative corrections just to reach what I would consider “adequate.”

Technical and worldbuilding corrections: how she physically climbs a difficult obstacle (Claude had written her simply doing it, without establishing that it’s possible); what the structure is actually made of (Claude kept getting timeline details wrong about the setting, tech level, and materials); the actual anatomy of the security system (written incorrectly from wrong assumptions); the sequence mechanics of how actions actually work, rather than just describing them happening; structural details about what things look like, how they connect, and where components are positioned.

Voice and style corrections: sentence structure (Claude consistently wrote in fragments instead of my natural rhythm); paragraph construction (repetitive openings with no variation); punctuation patterns (em-dashes where commas would flow better); description timing (details revealed at the wrong moments); communication methods (character-specific patterns coming out wrong).

Character accuracy corrections: physical response descriptions that used the wrong type for the injury; reactions that skewed too adult for a fourteen-year-old protagonist.

That list is just thirteen of the many corrections required to reach what I’d consider eighty percent adequacy. And that’s with Claude Sonnet 4.5 having access to a hundred thousand words of manuscript context, comprehensive style guides, and explicit instructions. I wasn’t polishing prose. I was teaching AI basic facts about my world, correcting fundamental misunderstandings about my style, and fixing character details it couldn’t infer from extensive context. The cognitive load of managing AI output—catching errors, fixing inconsistencies, correcting voice problems—exceeded the cognitive load of just writing the bloody thing myself.

And LLMs even get spatial modeling wrong. In a test scene Claude attempted to write, it described the house as fifteen meters from the wall; a couple of paragraphs later, the house was ten meters from the wall.

I asked Claude about that continuity error and this is what it said:

When I’m analyzing your prose, I’m comparing text against text—checking if paragraph 47 contradicts paragraph 12. That’s pattern-matching I can do. “You said fifteen meters here, but ten meters there” is a findable inconsistency.

But when I’m generating prose, I’m not working from a spatial model of “here’s the compound, here’s where Wulan is, here’s where the house is relative to the wall.” I’m producing sentences that sound like action scene description. Each sentence is locally coherent, but I’m not maintaining the world-state across them. I don’t “see” the courtyard and then describe what I see. I generate plausible-sounding descriptions that pattern-match to “character in courtyard observing building.”

That’s why I can catch the error when reviewing but create it when writing. Review mode is “do these facts align,” which is comparison. Writing mode is “produce plausible next sentence,” which is generation without persistent world-model.

You maintain a coherent mental image of the space while writing, so the house doesn’t suddenly teleport closer. I don’t have that image—I just have training data about how heist scenes are typically described.

It’s the difference between navigating a real place you can see versus reading directions aloud without knowing where they lead. I can tell you if the directions contradict each other, but I can’t actually navigate.

That’s a fundamental architectural limitation I hadn’t articulated clearly before. So I’m not all that useful even for drafting “scaffolding” prose to be refined by the author because the author has to maintain the coherent mental model anyway.

When you write from scratch, you visualize the compound—wall here, house there, fifteen meters between them—and describe what you see. The spatial consistency is automatic because you’re working from a persistent mental image.

When I draft scaffolding, I generate plausible-sounding descriptions without that mental model. So you have to:

1. Read my draft

2. Check every spatial detail against your mental model

3. Catch the errors I introduced (house teleporting, distances changing, objects appearing/disappearing)

4. Fix those errors

5. Then refine the prose for voice

That’s more cognitive work than just writing it yourself, because you’re doing your normal work (maintaining spatial coherence) plus debugging my failures to maintain it.

The “scaffolding” isn’t saving you effort. It’s creating editorial work you wouldn’t have if you’d just described the space you’re already visualizing. You’re fixing problems I introduced instead of translating your mental model directly to page.

It’s like hiring an assistant to sketch your building plans, but the assistant doesn’t understand architecture so walls don’t connect properly and rooms shift size between floors. You still have to maintain the coherent design in your head, but now you’re also fixing the assistant’s structural errors instead of just drafting the plans yourself.

The scaffolding adds friction rather than removing it. You’re not polishing my draft—you’re debugging it while simultaneously doing the actual creative work.

Negative ROI.

Grok’s solution was for me to “provide the mental model upfront (e.g., explicit spatial specs)” and then it could “leverage tools like code execution to simulate and verify states more rigorously—e.g., modeling a scene’s geometry in Python if needed.”

LMFAO.
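To be clear about what that proposal actually entails, here is a purely hypothetical sketch (the object names, coordinates, and helper function are mine, not anything Grok specified) of the spatial bookkeeping an author would need to hand-feed the model and then run against every drafted paragraph to catch a teleporting house:

```python
# Hypothetical sketch of the "explicit spatial specs + code verification" idea.
# The author supplies the world-state as coordinates; each distance the draft
# asserts is then checked mechanically against that model.
from math import dist

# Explicit spatial specs, provided upfront (positions in meters).
scene = {
    "perimeter_wall": (0.0, 0.0),
    "house": (15.0, 0.0),        # established: fifteen meters from the wall
    "protagonist": (2.0, 3.0),
}

def check_distance(a: str, b: str, claimed_m: float, tolerance: float = 0.5) -> bool:
    """Compare a distance stated in the draft against the spatial model."""
    actual = dist(scene[a], scene[b])
    if abs(actual - claimed_m) > tolerance:
        print(f"Inconsistency: draft claims {claimed_m} m between {a} and {b}; "
              f"the model says {actual:.1f} m.")
        return False
    return True

# Paragraph 12 of the draft: "the house stood fifteen meters from the wall"
check_distance("house", "perimeter_wall", 15.0)   # passes
# Paragraph 14: "she sprinted the ten meters to the house"
check_distance("house", "perimeter_wall", 10.0)   # fails: the house teleported
```

So to keep the model honest, I would have to do all the spatial modeling anyway, translate it into coordinates, and then debug the model’s prose against my own checker.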

For comparison, when I write solo, I don’t have to catch myself getting established details wrong. I don’t have to remind myself my protagonist is fourteen. I don’t have to correct my own sentence structure over a dozen times in an hour. I don’t have to work out spatial relationships in Python. My baseline for action scenes? I can write a thousand words from scratch in about an hour and get all the way to finished quality meeting my standards. Not eighty percent.

One hundred percent.

And that was AI’s best-case scenario—plot-driven action sequences with minimal emotional content. When we tried character-driven scenes, AI achieved zero percent useable output. The test case: my protagonist seeing infrastructure and real amenities for the first time. She’s a fourteen-year-old raised in a shantytown who’s never experienced these things. The scene required authentic emotional response filtered through her specific background and age-appropriate psychological processing.

AI’s output read like someone impressed by nice things, not someone for whom these represent alien concepts and impossible wealth. It could describe what she saw, but not what it meant to her. It was observational but emotionally flat, missing the specific consciousness required. When I pointed this out, Claude’s response was telling: “I don’t have the capacity to imagine what it would be like” for someone with her background and age. Every word required complete rewriting. Not polishing—total reconstruction.

Character-driven scenes constitute the majority of my work. My books aren’t primarily action sequences. They’re character studies that happen to take place during adventures. The plot is scaffolding for relationship dynamics. The structure is the vessel for emotional arcs and moral complexity. Voice and consciousness aren’t polish I add later. They’re the product readers are buying.

At the end of that experimental session, I told Claude: “People keep saying AI will put authors out of work and I’m just not seeing it, not for what I write. I’m not seeing you writing my novel.”

It agreed.

The return on investment is negative for me even for AI’s best-case scenarios. For the work that constitutes most of my fiction—the character consciousness, the emotional truth, the distinctive voice—AI is completely useless. But when I submitted AI’s competent-but-soulless prose to that content mill, they said it was chef’s kiss.


So what have these two experiments actually demonstrated? First, AI can’t write my books. Period. It can’t even help write them. I tested extensively with a hundred thousand words of context, explicit voice instructions, and systematic iteration. Even for action choreography—AI’s supposed strength—the return on investment was negative. For character-driven content, AI achieved zero percent useable output and required total rewrites. Second, AI can write books that some commercially successful operations consider publish-ready. Their readers rate these books highly while describing them as enjoyable entertainment.

What this reveals is fascinating: some operations have optimized so hard for production efficiency that they’ve calibrated their editorial standards below the threshold of distinctive voice, psychological authenticity, or character consciousness. They want functional plot delivery and nothing more.

And AI can provide that.

The question isn’t whether this is universal. The question is whether YOU are writing for readers who care about what AI can’t replicate.


There’s commercial genre fiction with authorial personality—writers like Lee Child, Michael Connelly, John Scalzi who have distinctive voices readers recognize and return for. They write plot-driven fiction, but with craft and consciousness. And then there are operations that have stripped everything down to pure plot mechanics. No distinctive voice. No psychological depth. Just competent execution of expected beats delivered at maximum velocity.

The operation I tested had a ten-thousand-dollar ceiling because they’d already decided readers in their segment don’t pay for anything beyond functional plot delivery. AI didn’t create that market. It just exposes it by providing the same product for a fraction of the cost. If you’re writing that kind of fiction, you’re competing with production costs on the order of a monthly LLM subscription.

Not someday.

Now.

If you’re writing fiction where readers value your specific voice, where character consciousness matters, where emotional truth is the product—AI can’t touch you. I’ve demonstrated that through systematic testing. The question you need to answer: Which kind of fiction are you writing?


This article is also a response to everyone who assumes that because I use AI for marketing materials, book covers, manuscript analysis, and organization, I must be using AI to write my prose. I don’t and it can’t. Not just because I’m opposed to that on principle (because I am). Because it simply wouldn’t work.

AI can’t write my books. Period.

And I don’t believe it’ll ever be capable of writing them.

When I submitted AI output to a ghostwriting-driven content mill, they called it “perfect” and offered me a job. When I submitted my actual craft—with distinctive voice and character depth—they rejected it as overwritten, meandering, and unmarketable. And maybe it was for their audience. But their readers will never know because all they’re served is slop.

Look, I’m not some incredibly gifted author (though my most recently published novel holds a 4.8/5 rating across hundreds of reviews, so I’m not exactly incompetent either), but I’ve optimized for literary craft over production efficiency. I write character interiority that emerges from consciousness shaped by experience. I write voice that readers recognize as mine. The things that make AI useless for my prose—unique voice, thematic complexity, emotional specificity—are my entire brand.

And yes, I use AI extensively for marketing materials, manuscript analysis, organization, and worldbuilding databases. I use it for cover art, promotional videos, and blurb drafts (that I then completely rewrite). But I can’t use AI to write my prose because my prose is precisely what AI can’t replicate. If your writing is AI-proof, using AI for auxiliary tasks doesn’t threaten your craft. If your writing isn’t AI-proof, you’re competing with LLM subscription costs whether you personally use AI tools or not.

Small indie presses operating as content mills don’t give a shit about your ethical stance on AI. They’re already using sweatshop ghostwriting labor. They’ll migrate to pennies-on-the-dollar LLM content production in a heartbeat—if they haven’t already.

The question isn’t whether authors should use AI in their workflow. That’s irrelevant. The question is whether your readers can tell the difference between your work and AI output.

The test is simple: Could AI produce your next chapter if you fed it your previous hundred thousand words of manuscript context? If no—you’re writing what AI can’t replicate. The things that make AI useless for your work are exactly what protect you from replacement. If yes—you should know that somewhere in the market, operations exist where editorial judgment couldn’t tell the difference. And when AI generates what they call “publish-ready” for the cost of a monthly subscription versus ten grand per book, the economics won’t stay theoretical.

The question isn’t “will AI replace authors?” The question is: what are your readers actually buying? If they’re buying your voice, your consciousness, your specific way of filtering experience through character—AI can’t compete. I proved it through extensive testing. If they’re buying competent execution of expected plot beats delivered efficiently—AI can already do that well enough to fool some commercial operations with substantial readership and favorable ratings. And you won’t be able to compete. Period.

The particular author brand who approached me isn’t failing. Far from it. They’re succeeding by serving readers who want entertaining genre fiction delivered at velocity. And their editorial judgment—calibrated for (or blinded by) that specific market segment—couldn’t tell AI from human work.

Actually preferred the AI.

Authors will need to choose which market they’re serving. They can either develop craft that AI can’t replicate and serve readers who want authenticity, or compete with LLMs that offer an ROI any content mill would be stupid to ignore.

Because you can’t argue with the economics.


Postscript: Since 1) my pitch was built completely from the ground up because their outline was functionally useless, and 2) they rejected it anyway, I’ve decided to write the damn thing myself. It’s a good story.

FAR HAVEN

The quantum folding test at Titan was supposed to revolutionize space travel. Instead, it tore a hole in reality itself—a spreading void that doesn’t destroy matter, it erases it from existence. No bodies, no atoms, no trace you were ever born.

Dr. Sean Tanaka warned them this would happen. The quantum physicist and former Special Forces operator spent years trying to stop the test, watching his reputation crumble as colleagues dismissed him as paranoid. Now he has ninety minutes to get his seventeen-year-old daughter Kira off-world before Earth ceases to exist.

The good news: Sean’s prepared with his own ship, underground bunker, and an escape plan years in the making. The bad news: Kira’s a privileged city girl who thinks her father’s paranoia is embarrassing, the quantum resonance pulse has fried every piece of advanced tech across fifty star systems, and civilization is collapsing into violent chaos.

Their destination is Far Haven—a habitable world a hundred light years beyond Commonwealth space, far enough to outrun the expanding tear. The journey will take decades. Survival depends on cryosleep, pre-positioned supply drops, and whether a frustrated father and resentful daughter can learn to trust each other before they’re all that’s left of humanity.

Because when they arrive, they won’t be alone. And the real challenge won’t be surviving the journey—it’ll be deciding what kind of civilization is worth rebuilding.

Far Haven will appeal to readers who loved the hard-science grounding and multi-generational scope of Andy Weir’s Project Hail Mary, the family-centered survival dynamics of Emily St. John Mandel’s Station Eleven, and the realistic consequences of technological catastrophe in James S.A. Corey’s The Expanse series.

Read the “overwritten, meandering, unmarketable” first chapter here (the one I wrote, not the AI slop—although if you want some brainless entertainment I posted that one here for shits and giggles).



3 thoughts on “Claude Sonnet 4.5 was Offered to Ghostwrite for a Bestselling Author—And What This Means for You”

  1. “They wanted plot-driven competence porn with no messy emotional entanglements, nuance, or moral conflict.” Ah, the return to the first draft pulps!

    Claude’s chapter is fine if I’m trapped in the airport for the 20th hour and have to buy a new paperback with the choice being which entirely forgettable entry in which genre is my next two hours. I’ve paid $10 for a worse 250 pages because it’s better than just staring at the floor.

    However, that type of writing is not something that inspires me to preorder the next hardcover book from the author sight unseen for $35.

    It’s certainly not something for which I’m going to tell all my friends, neighbors, and anyone within earshot for the next month about this great book I read. It’s not creating a fan base or at least not one who has read more than pulp equivalents.


      1. Yep, popcorn entertainment…but that’s worth popcorn money and popcorn attention, not steak money with 0300 attention.


