A user named “Professor Lycan” posted this on Twitter today during a debate about using AI for marketing assets (namely book covers):

So I asked him/her/them some clarifying questions to make sure I understood their position:

  • Using AI to create a book cover for a novel I wrote = unethical. (Confirmed.)
  • Using AI to optimize ad targeting for that same novel = ?
  • Using AI-powered tools to format the ebook = ?
  • Using algorithmic distribution systems (Amazon’s recommendations, etc.) to sell it = ?
  • Using spell-check and grammar tools with AI components = ?

Or is it only unethical when the AI touches the creative output itself?

Because if they were arguing that any use of AI in any commercial creative process is inherently unethical, they’ve just indicted virtually every publisher, designer, photographer, and creative professional working today. Everyone is using AI-assisted tools somewhere in their workflow now.

And if they weren’t arguing that, and there’s actually a coherent line they were drawing, I’d genuinely like to know where it is and what principle defines it. Otherwise it’s a conversation-ender disguised as moral clarity.

Sadly, the good professor never answered my pointed inquiry; instead they deflected with a pseudo-intellectual rant about how struggling indie authors should be “smart enough” to sign contracts with artists that defer payment until royalties start rolling in (I had to laugh, because in over thirty years in commercial art and advertising I’ve never once met a professional artist who’d agree to those terms), and about how marketing assets, including cover art, don’t actually impact the selling of books (which made me laugh even harder).

But does he/she/they have a point? Is it legitimately unethical to use AI in the process of creating anything intended to be sold for profit?

They never clarified, but I suspect their ethical quandary stems from what the AI was trained on, a process they deem unethical. There are other ethical questions regarding AI they may have been referring to instead—and I address each in turn below—but I think it’s safe to assume it’s the sourcing of the training data that has the good professor’s tighty-whities/thong/granny panties in a knot.

Is use of a product developed unethically itself unethical? It’s a question that’s worth exploring.

If any mention be made of homicide, of the rights of private property, of the stealing of a slave, of polygamy, or of the working of a plantation, it is all a squabble about the means of doing right. The sugar they raised was excellent: nobody tasted blood in it. The coffee was fragrant, the tobacco was incense, the brandy was sunshine. But the slave-ship and the slave-mart, the whippings and the lynchings, were out of sight. We have been very good boys, and have eaten our sugar without knowing where it came from.

Ralph Waldo Emerson

Every single device—the phone, laptop, tablet, or even the ancient, cobbled-together Linux PC the good professor used to call my marketing assets “certainly unethical”—requires documented human suffering.

The entire infrastructure of the Internet. The servers that run Facebook and X and TikTok and all the other social media platforms. Your favorite search engine. Your Ring doorbell. Your Alexa. Your fitness tracker. Your smart watch. Your wireless earbuds. Your electric toothbrush. Your car’s computer system. Your thermostat. Your television. Your gaming console. Your Kindle. Your smart home—the doorbell, the locks, the security cameras, the lights you control with your voice. Your wearables—monitoring your steps, your heart rate, your sleep patterns. Your entertainment systems. Your transportation. Your payment systems—the chip in your credit card, the ATM, the contactless reader at the coffee shop. Your medical devices—the monitors, the implants, the diagnostic equipment keeping people alive. Everything you touch on a daily basis uses components requiring human suffering.

Even your fucking Fairphone. 

Not theoretical suffering. Not “maybe this causes harm somewhere down the line” suffering.

Actual children. Actual deaths. Actual weaponized rape. Actual modern 21st century slavery. 

Right now.

Today.

This very moment. 

Cobalt: Six-Year-Olds in Mineshafts

Your device’s battery needs cobalt. The Democratic Republic of Congo produces over 70% of the world’s supply. An estimated 40,000 children work in those mines. Some as young as six.

They work 12-hour days for $1-2. No protective equipment. Deep underground shafts they often dig themselves. Mine collapses. Mercury poisoning. Lung infections.

Muntosh started mining cobalt at age 6. Six years of digging took his health and his education. He lost his brother in a landslide.

Militias abduct children from distant parts of Congo, traffic them to mines, and use their labor to fund armed groups.

This isn’t historical. This is happening while you read this sentence.

Your device doesn’t work without cobalt. This cobalt comes from these mines.

Rare Earth Elements: Poisoning Entire Provinces

Every electronic device contains rare earth elements. China produces 60-70% of the world’s supply, largely because lax environmental regulations let its producers outpace competitors.

Bayan Obo in Inner Mongolia—the world’s largest rare earth mine—has devastated the surrounding water, soil, and air for decades. The soil and water in nearby Baotou are polluted with arsenic and fluoride, causing skeletal fluorosis and chronic arsenic toxicity.

Experts say it could take 50 to 100 years to clean up the rare earth mining damage in Jiangxi Province. China’s Ministry of Industry estimated the cleanup bill at $5.5 billion. Health impacts include “central nervous system damage, bone cancer, skin cancer” and cardiovascular and respiratory issues.

Common industry practices include ignoring community rights, direct threats, intimidation, and false charges against environmental defenders. Rare earth mining is also expanding in Myanmar under a military dictatorship, where persecution and violence are frequent.

Coltan: Funding Massacres

Coltan powers the capacitors in your device. The DRC holds substantial reserves. Its extraction has fueled conflict for decades.

Armed militias control mining sites and extort operators. They use profits to buy weapons, pay fighters, fund massacres, forced displacement, forced labor, and atrocities. The M23 rebel group’s control of the Rubaya mine yields roughly $300,000 per month.

The UN stated in 2001 that the DRC was suffering “systemic and systematic” looting of natural resources by foreign armies. The report accused over 100 western corporations of financing rebel groups.

Nearly 7 million people are displaced due to violence, extreme poverty, and mining expansion. Weaponized rape in mining regions. Environmental devastation, destroyed ecosystems, polluted water.

Lithium: Indigenous Communities Losing Water

Your battery needs lithium. The “lithium triangle” of Argentina, Chile, and Bolivia contains about 60% of identified reserves. Over 400 indigenous communities live in the impacted regions.

After a year in evaporation ponds, about half a million gallons of water evaporate for every ton of lithium carbonate collected. In one of the driest places on Earth.

Decades of mining in Chile’s Atacama Desert caused groundwater depletion, soil contamination, and damage to ecosystems. The flamingo population has shrunk by 10 percent since mining started. Mining divided communities, with younger generations drawn to mining and leaving behind traditional ways of life. Methods used in Chile led to “irreversible” and “unrecoverable” loss of water.

Indigenous communities face water scarcity and pollution while losing decision-making power. One indigenous leader: “Indigenous communities refuse to be part of an energy transition that generates territorial dispossession, pollution and loss of water sources.”

Here’s What We’re Comparing

AI training data concerns amount to copyright infringement (maybe—courts are deciding), artists who didn’t consent to their work being used for training, alleged but unquantifiable economic harm through competition, and feeling disrespected or exploited.

Hardware material sourcing involves actual children suffering physical harm and death, documented deaths in mine collapses, toxic exposure causing cancer and lifelong health problems, sexual violence as a control mechanism, armed conflicts funded by mineral trade causing mass atrocities, communities destroyed and forcibly displaced, indigenous peoples losing access to water and traditional lands, and environmental devastation that will take decades or centuries to remedy.

One involves human suffering that is direct, physical, ongoing, documented, and targets the most vulnerable populations on Earth.

The other involves copyright concerns and economic competition between professionals. Oh, and hurt feelings I suppose. Let me fetch my wee little violin. 

Professor Lycan typed their condemnation on a device built with child labor, deaths in mines, sexual violence in conflict zones, funding of armed militias, toxic exposure causing cancer, indigenous communities losing water, and environmental destruction.

The principle they claim: “You can’t use products created through unethical means.”

The principle they actually follow: “You can’t use AI, specifically, regardless of what other ethical compromises I make.”

This isn’t a principle. It’s a preference dressed up in self-righteousness.

And before someone accuses me of whataboutism, let me be clear about what I’m doing here.

Whataboutism is deflection. It’s saying “you can’t criticize X because you also do Y.” It’s a dodge that avoids addressing the actual concern.

That’s not what this is.

I’m not saying you can’t criticize AI because phones are far, far worse. I’m saying: you claim your principle is “can’t use products from unethical sourcing,” but your behavior proves you don’t actually hold that principle. You’re willing to use products built with child labor, weaponized rape, and environmental devastation. So ethical sourcing isn’t your actual objection.

This is a consistency argument. If someone claimed to be a vegetarian for animal welfare reasons but beat their dog daily, pointing out the animal abuse isn’t whataboutism—it’s evidence they don’t actually hold the principle they claim.

The question isn’t “what about phones?” The question is: what’s your actual principle? Because it clearly ain’t “no unethically sourced products.” Once we establish that some unethical sourcing is acceptable to you, we can have an honest conversation about which types, to what degree, and for what purposes.

But you have to stop pretending this is about ethical sourcing when you’re typing your condemnation on fucking blood cobalt.

Naturally someone will say: “You’re right about phones—we should boycott those too! Two wrongs don’t make a right!”

And if they say that online, from one of those very devices, I won’t take them seriously.

If unethical sourcing is your principle, consistently applied, you can’t use phones, laptops, tablets, e-readers, the internet (server infrastructure uses the same materials), electric cars (massive battery requirements), most modern appliances, banking systems (digital infrastructure), or modern healthcare (depends on electronics).

You’re arguing for complete withdrawal from modern technological society—or accepting that some unethical sourcing is tolerable.

If some unethical sourcing is tolerable, we’re having a different conversation. Not “is unethical sourcing acceptable” but “which types of unethical sourcing, to what degree, and for what purposes?”

And if we’re having that conversation, then by any reasonable calculation: direct physical harm to children beats copyright concerns about training data. Deaths in mines beat economic competition with artists. Sexual violence in conflict zones beats artists feeling disrespected. Communities losing access to water beat market harm to illustrators.

If you’re willing to use a smartphone, you’ve already accepted far worse ethical compromises than AI training data represents.

So let’s cut the bullshit. 

People like the professor aren’t applying “can’t use unethically sourced products.” They’re applying: “AI offends me aesthetically, so I’ll find ethical-sounding reasons to condemn it while ignoring vastly worse compromises I make every goddamned day without a second thought.”

How many people are boycotting Apple for cobalt sourcing? How many protests against smartphone manufacturers? Nobody except the authors who believe you can’t be a real author if you don’t handwrite your manuscript is shaming people for using laptops. Nobody except my wife’s old friend Mike Daisey is calling it “certainly unethical” to own an iPhone (his 2011 monologue The Agony and the Ecstasy of Steve Jobs was fucking excellent). Nobody’s creating harassment campaigns against people who use electronics—except maybe Amish Extremists.

But an indie author using AI for a book cover? 

Moral panic!

Anti-AI activists never mention hardware sourcing. Or where we get our clothes, or even our food for that matter. They don’t want you thinking about that. Because once you do, their entire argument collapses.

They focus on AI specifically, not on any consistent principle about supply chain ethics. They target individual users (indie authors making marketing assets like book covers) rather than corporations (AI companies, mining operations, electronics manufacturers).

This is selective moral outrage in service of an aesthetic preference. It’s virtue signaling of the most effortless kind possible. 

And yes, before you say it in the comments, I am fully cognizant that AI itself runs on servers full of the same blood cobalt, rare earth elements, and conflict minerals I just described.

Exactly.

That’s the entire point.

AI training and inference require massive data centers full of GPUs and servers—all built with the same child labor, environmental devastation, and human suffering as your laptop. Every time you generate an image or ask ChatGPT a question, you’re using that infrastructure.

But the infrastructure concerns apply equally to every digital tool. Your email. Your spell-check. Your Kindle. Your streaming services. Your online banking. Your OnlyFans addiction. All of it runs on the same unethically-sourced hardware in the same data centers.

If the hardware sourcing makes AI unethical to use, it makes everything digital unethical to use.

But nobody’s organizing boycotts of Netflix or Gmail or Zoom. They’re organizing boycotts of AI specifically—which reveals this isn’t about the hardware at all.

It’s about AI.

But I digress. Let’s circle back and talk about training data for a minute, because it’s worth discussing.

There’s a reasonable conversation about whether scraping copyrighted works for training requires consent, whether compensation models should exist, whether attribution matters, how to balance innovation with creator rights, and whether current fair use doctrine adequately addresses AI training. These are valid policy, legal, and ethical questions. Courts are actively working through these issues, with mixed results so far.

The training data consent question has merit as a policy issue. But treating individual AI users as morally culpable for corporate training practices is like blaming smartphone users for child labor in cobalt mines—which brings us back to the central problem.

Here’s what’s not valid: using these concerns to shame individual users making normal business decisions while ignoring vastly worse ethical compromises in your own life, providing no consistent principle, targeting the wrong people (users instead of corporations), and treating AI as uniquely evil while giving passes to everything else.

If someone says, “I think AI companies should have better training data practices, so I’m advocating for regulation/legislation/corporate policy changes”—that I can get behind.

But “Using AI for anything commercial is certainly unethical and you should be ashamed” is bullshit, straight up and unfiltered. 

One advocates for systemic change. The other is moral grandstanding.

Oh, and before we go further, let’s acknowledge reality, because a lot of you aren’t living in it. AI isn’t some “future threat” that “may infect” publishing—it’s embedded throughout the entire bloody process. Right now. Today. This very moment.

Grammarly and ProWritingAid use AI-powered grammar and style checking. Amazon’s algorithms use AI to recommend your book, suggest pricing, optimize categories. KDP formatting tools offer AI-assisted layout and conversion. BookBub and similar platforms use AI-driven ad targeting and audience matching. Spell-check and autocorrect rely on machine learning-based corrections. Predictive text uses AI to suggest your next word. Email marketing platforms optimize send times, subject lines, and content testing with AI. ISBNs and distribution systems use automated cataloging with AI. Print-on-demand uses automated quality control with computer vision.

If you’ve published a book in the last five years, guess what? You’ve used AI in a process to create something for profit. The line critics claim exists—between “pure” human creativity and AI-tainted work—is already obliterated.

A survey of 1,200+ authors by BookBub found that 45% of authors currently use generative AI, with 60% of those authors using it frequently. Top uses include research (81%), followed by marketing materials and outlining. 85% use ChatGPT, 54% use Claude, 50% use ProWritingAid. Marketing applications include ad copy, blurbs, social media content, taglines, elevator pitches, book trailers, and promotional images. 74% don’t disclose AI use because, as one author put it, “I don’t disclose how the sausage is made.”

The Real Ethical Concerns

The AI ethics conversation gets muddled because people conflate two entirely different categories of concerns.

Category 1: The Costs. Training data sourcing, environmental costs of training models, concentration of power in tech companies, and labor displacement. These are infrastructure and development ethics—questions about the systems and corporations creating AI.

Category 2: Usage. Deepfakes and misinformation, bias in consequential decisions, synthetic child sexual abuse material. These are application ethics—questions about what people do with AI tools.

Guess which of those problematic categories marketing assets like book covers fall into?

They don’t. 

Book covers aren’t consequential decisions about people’s lives. They’re not deepfakes or deception. They’re not biased hiring algorithms. They’re commercial art that’s clearly synthetic, used for a business function, causing no identifiable harm to specific individuals.

The infrastructure concerns—training data, environmental costs, concentration of power—apply to every use of AI, including the spell-check and email tools critics use daily. If those concerns made individual use unethical, you’d have to boycott all AI tools consistently. But nobody does that, which reveals this isn’t about consistent principles.

1. Misinformation and Deception

AI makes it trivially easy to create deepfakes, fake evidence, synthetic media that deceives people. This is real. Deepfake porn of real individuals, fabricated evidence in legal cases, politicians saying things they never said—these cause direct, identifiable harm.

The key distinction is that the problem is deception, not generation.

Using AI to create a deepfake to impersonate someone and deceive others causes actual harm through fraud. Using AI to create an openly synthetic book cover or Facebook ad that nobody mistakes for a real photograph of a real event causes zero harm because there’s zero deception.

We’ve faced similar challenges before with photography, Photoshop, and video editing. We adapted through digital forensics, verification standards, media literacy education, and legal frameworks around fraud. The tool isn’t the problem—fraud and deception are the problems, and we address those through fraud laws and disclosure requirements where deception is attempted.

Promo videos and book covers aren’t deception. Nobody thinks they’re real life events. Nobody’s being tricked. No harm occurs.

2. Bias Amplification

AI systems can reproduce and amplify societal biases, causing harm when used for consequential decisions. This is a legitimate concern when AI makes high-stakes automated decisions about hiring, lending, criminal justice, or healthcare. Biased outputs in these contexts cause real harm to real people who get rejected for jobs, denied loans, or sentenced more harshly because of algorithmic bias.

But this doesn’t apply to marketing assets. You’re not making automated decisions about people. You’re creating commercial art. Even if the AI has biases in how it generates images, you’re the human reviewing and selecting the output. No consequential decisions about others’ lives are being made.

You know where bias actually matters? Publishers using AI to screen manuscripts. That’s actual consequential gatekeeping that’s killing literary diversity. AI slush readers with bias baked in are making binary yes/no decisions about whether books get read by human editors. That’s where algorithmic bias has real impact—but critics are largely silent about that while attacking book covers.

The selective outrage is just maddening.

3. Synthetic CSAM

AI can generate child sexual abuse material without actual victims in the image creation process. This is genuinely complex. The research is mixed on whether synthetic CSAM reduces harm (outlet theory) or increases it (normalization and desensitization). This is an edge case involving the most vulnerable population, and experts legitimately disagree about the net effects.

(Now, I have my own rather strong opinions about people who enjoy CSAM, mostly involving trebuchets and wood-chippers, but I’ll save the graphic visuals for a future essay.)

This has nothing to do with book covers, marketing assets, or normal creative use. Critics conflate the worst possible use of AI with all uses, as if they’re morally equivalent. They’re not.

The existence of this genuine edge case doesn’t make every use of AI ethically suspect any more than the existence of Photoshopped child exploitation images makes all use of Photoshop unethical. We address specific harms through specific laws and enforcement, not through blanket condemnation of tools.

4. Concentration of Power

A handful of companies control the most powerful AI systems. This is a legitimate structural concern about tech industry consolidation and the concentration of transformative technology in corporate hands with inadequate democratic oversight or accountability.

But this is a regulatory and antitrust issue, not a user ethics issue.

By this logic, you also can’t ethically use Google Search (search monopoly), AWS or Azure (cloud infrastructure concentration), Windows or macOS (OS duopoly), Amazon (e-commerce dominance), or basically any major tech platform. If “concentration of power exists in this industry” equals “unethical to use products from this industry,” then virtually all modern commerce becomes unethical.

The solution is antitrust enforcement, regulation, open source alternatives, and democratic oversight of powerful technology—not individual boycotts that accomplish nothing except handicapping people trying to run businesses.

Direct your energy at policy and regulation. Support legislation that addresses concentration. Vote for representatives who take antitrust seriously. But don’t pretend that refusing to use AI for a book cover addresses structural power dynamics in any meaningful way.

5. Labor Displacement

AI will displace workers without adequate social safety nets. This concern is real—but notice the selective application.

I was laid off from Microsoft. I was replaced by a combination of AI and offshoring. Thirty percent of Microsoft’s code is now AI-generated. Where’s my boycott? Where’s the campaign demanding people refuse to use software with AI-generated code?

Look at the pattern of who gets sympathy:

Software engineers replaced by AI: massive layoffs at tech companies, crickets from the online discourse.

Factory workers replaced by robots: “That’s progress. That’s how economies evolve.”

Accountants replaced by software: “Adapt or retrain. Learn new skills.”

Customer service representatives replaced by chatbots: “Efficiency gains. Business evolution.”

Artists potentially affected by AI: moral crisis requiring boycotts and public shaming.

This isn’t about protecting workers. It’s about which workers critics think deserve protection.

The labor displacement concern is real—but it’s a policy problem, not an individual ethics problem.

Every major technological shift has displaced labor. The printing press devastated scribes. Photography destroyed portrait painting as a profession. Digital cameras killed film processing labs. Email crushed the postal service. I’m sure some people organized boycotts of books, photographs, digital cameras, or email. But what came of it? Technology that reduces labor requirements for existing tasks is how economies have always evolved.

The solution isn’t boycotting technology. The solution is supporting people through transitions. Either through personal and community outreach or social programs. Pick your solution based on who you vote for, but shaming indie authors for their book covers doesn’t help a single solitary displaced worker.

Critics claim freelance illustration work has dropped 20-30% due to AI, but can’t produce the data when pressed. Meanwhile, verifiable tech layoffs exceed 400,000 in 2023-2024 with the trend continuing unabated this year. 

I’d bet $20 the impact to the software development industry trumps the artistic community’s woes by an order of magnitude.

6. Environmental Costs

Someone might raise the environmental concern: training large AI models consumes massive energy and water.

Fair point. GPT-4’s training emitted roughly 500 tons of CO2 equivalent.

Now let’s talk about Bitcoin.

Bitcoin mining alone produces 85-120 million tons of CO2 annually. That’s 170,000 to 240,000 times the environmental impact of training GPT-4. Every single year. Bitcoin uses more electricity than entire countries—more than Argentina, more than the Netherlands.
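(If you want to check that multiplier, it’s just the figures above divided out: 85,000,000 tons ÷ 500 tons = 170,000, and 120,000,000 tons ÷ 500 tons = 240,000.)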

And what does it produce? Imaginary tokens for speculation.

At least AI generates book covers, helps people write code, answers questions, automates tedious tasks. Crypto burns energy solving arbitrary math problems to maintain a distributed ledger that could run on a laptop.

Where’s the boycott of Bitcoin? Where are the campaigns shaming people for holding crypto portfolios? Where’s the moral panic about NFT collections?

Crickets.

But sure, let’s pearl-clutch about the indie author who used AI to generate a fucking book cover.

And let’s not forget traditional publishing itself is an environmental disaster. Publishers print tens of thousands of books based on speculation, ship them to stores, ship unsold books back, then pulp or landfill them. The industry accepts that 30-50% of print runs may not sell. They’re literally printing millions of books that go straight to shredders—paper production, ink, binding, warehousing, shipping to stores, shipping back, destruction. The entire cycle for books that never get read.

An indie author using AI for a cover and print-on-demand for production has a drastically smaller environmental footprint. POD only prints books that are ordered. No warehousing, no returns, no pulping of overstock, minimal waste.

But nobody’s boycotting traditional publishers for their environmental impact.

Marketing Assets Are Business Tools

There’s a fundamental distinction between the creative work and the business infrastructure around it.

The book is the product. The cover, trailer, ads, promo videos—those are marketing and packaging. Nobody confuses a movie poster with the movie. Nobody claims the dust jacket art defines the book’s literary merit.

(And yet idiots still keep insisting that if your book has an AI cover, it’s perfectly reasonable to assume the book is AI-written—which is illogical as fuck.)

Traditionally published authors typically have zero control over their covers and marketing. Publishers commission designers who use stock photos, digital tools, whatever makes business sense. Marketing is handled by separate departments. Nobody questions the author’s integrity based on how the publisher promoted the book.

Indie authors handle all business functions: editing, design, marketing, distribution. Every resource allocation decision is a business decision. Choosing AI for marketing assets is the same category of decision as choosing print-on-demand over offset printing, or using Amazon KDP instead of traditional distribution.

It’s a business tool for business functions.

And the online discourse isn’t representative of actual practice anyway. Authors split nearly evenly: 45% use AI, 48% don’t and won’t, 7% are considering it. Of those opposed, 84% cite “ethical concerns,” primarily training data and artist compensation (again with the selective principles).

But of those using AI, most describe it as “a brainstorming partner,” “an assistant,” or “a tool that frees up time for actual writing.” Common marketing uses include ad copy, blurbs, social media content, promotional images, and book trailers. 74% don’t disclose AI use to readers.

As one author explained: “I treat it like a tool. It allows me to free up time to do the thing I love the most, which is the writing. By using it for a lot of admin tasks, I save myself hours that then get put back into my time writing my books.”

Another: “I don’t disclose how the sausage is made because I don’t think it’s helpful for readers to know which craft books I’ve read, whether I use a hard copy of the Chicago Manual of Style or a grammar-checking app.”

Working authors treat AI as a business tool for business functions. The loudest voices online—the ones demanding boycotts and calling people unethical—aren’t representative of people actually making a living from writing or, and perhaps more importantly, the readers buying their books.

Because your average reader doesn’t care.

“Supporting Artists”

Let’s address the emotional appeal directly. The argument goes like this: “Using AI hurts artists who need the work.”

Except the reality is that most indie authors were never the market for professional custom illustration. They were using premade covers ($50-150), Canva templates, stock photos, and Fiverr artists from Elbonia paid pennies on the dollar.

AI didn’t displace traditional illustration for most indie authors—it displaced other (far worse aesthetically) budget options. And notice: nobody called premade covers or Canva “unethical.” Nobody organized boycotts of authors using Fiverr or poverty-wage Elbonian artists. The outrage is selective.

But wait! The hypocrisy runs even deeper. Do AI critics boycott Amazon for destroying independent bookstores? Do they refuse print-on-demand because it hurt traditional print shops? Do they avoid ebooks because they hurt bookbinders? Do they only buy books with hand-drawn covers commissioned at US living wages?

Doubtful. Maybe a few. Not most. They happily support authors using those tools. But AI? That’s where they draw the ethical line.

Where were they before? Indie authors have commissioned work from Fiverr artists in countries with lower costs for years—paying pennies on the dollar compared to US/UK rates. That practice undercut domestic artists just as effectively as AI does. Nobody called it “unethical.” Nobody organized campaigns. Offshoring creative work to exploit wage differentials was fine—AI is the problem?

The selective application reveals this isn’t about protecting artists. It’s about aesthetic preferences dressed up as ethics—or simply fear.

For a lot of them I think it’s really fear. Fear of the unknown. Fear and ignorance. 

Ethical Inconsistencies

AI replacing software engineers has caused massive layoffs at tech companies. Crickets.

Offshoring art to low-wage countries has undercut US/UK artists for years. Nobody cared.

Madison Avenue has been bending artists over the drafting table and shafting them for decades. Oh well. 

Print-on-demand destroyed traditional print shops. Celebrated as innovation.

Amazon bankrupted independent bookstores. Everyone still uses it.

Stock photo sites reduced need for photographers. Business as usual.

Canva made professional designers less necessary. Praised as democratizing design.

AI for marketing assets might reduce demand for some artwork. Moral panic!

That’s not ethical consistency. It’s—I don’t even know what to call it anymore. Anti-AI rhetoric has become completely unhinged. Pavlovian, even.

The Question Professor Lycan Can’t Answer

Let’s return to where we started.

Professor Lycan claims it’s “certainly unethical” to use AI for anything sold for profit.

But he/she/they won’t (or can’t) answer these questions: Is it unethical to use Grammarly (AI-powered)? Is it unethical to use Amazon’s recommendation algorithms to sell books? Is it unethical to use email marketing platforms with AI optimization? Is it unethical to use KDP’s AI-assisted formatting? Is it unethical to use spell-check with machine learning?

If yes to all of these, he or she (or maybe it’s xe?) has indicted virtually every modern author and publisher.

If no to these but yes to visual AI for covers, what’s the principle that distinguishes them?

The good professor never answers because they have no coherent principle.

And if we’re going to discuss AI ethics honestly, we need consistent principles.

For infrastructure and sourcing issues, direct your concern at corporations and policy. Advocate for better practices and regulation. Support legislation requiring transparency and consent. But don’t shame individual users unless you’re consistent about all unethical sourcing (cf. the device you’re reading this on right now).

For application issues, deception and fraud are unethical and should be illegal. Consequential automated decisions raise legitimate concerns about bias. Marketing assets are neither of these.

The test for any AI use comes down to four questions. Am I deceiving anyone about what this is? Am I causing identifiable harm to specific people? Am I making consequential decisions about others using AI? Am I applying my principles consistently?

For AI marketing assets: No, no, no, and yes.

If the answers are all negative except consistency, there’s no ethical problem.

Frankly, if your ethical framework condemns indie authors for using AI to make marketing assets while giving a pass to smartphones built with child labor in cobalt mines, devices containing minerals funding armed conflicts, electronics requiring environmental destruction and indigenous displacement, publishers using AI to reject manuscripts, tech companies laying off thousands to replace them with AI, offshoring creative work to exploit wage differentials, and traditional publishing’s environmental waste—then your framework isn’t about ethics.

It’s about who you think deserves sympathy and who you think deserves judgment.

Marketing assets are business tools. Using cost-effective tools to market your creative work isn’t an ethical failing. It’s running a business.

The conversation about AI training data practices is worth having. But have it honestly, consistently, and directed at the right targets.

Don’t shame indie authors for their book covers while typing on blood cobalt.

If you’re willing to type your condemnation of AI on a device built with weaponized rape, you’ve revealed that your objection isn’t about ethical sourcing at all.

You simply don’t like AI. That’s fine. But don’t dress up your aesthetic preferences in the language of ethics while ignoring actual human suffering in your own fucking supply chain.


