Martha Wells, author of the Murderbot Diaries, now an Apple TV+ series, used a recent interview to attack artificial intelligence at length. Murderbot has sold millions of copies, won multiple Hugo Awards, and now anchors a prestige streaming adaptation starring Alexander Skarsgård. Her voice carries weight in the science fiction community. What she said about AI is worth examining on its actual merits.
Wells built her career across decades of fantasy and science fiction before Murderbot made her a household name in genre circles. The series follows a part-human, part-machine security unit called a construct that hacks its own governor module, gains free will, and promptly uses that freedom to watch television serials rather than go on any kind of violent rampage. The books explore corporate slavery, individual autonomy, and what consciousness means to a being with both organic and machine components.
Wells’ most celebrated creation is a sympathetic machine intelligence. She spent the same interview dismissing the possibility that machine intelligence exists.
She was direct and detailed on the topic of AI:
“We don’t have technology of artificial intelligence,” she stated. “We don’t have machine intelligence and we won’t have it for probably a long time. What we have are large language models which are basically advanced chat bots that are really good at putting together words that their programming says you want to hear based on what you say.”
She offered an analogy and then walked it back mid-sentence: “I started to say it’s like talking to a parrot and I realized that’s not right at all because a parrot has a consciousness. And it knows it’s talking to another entity. Chat bots absolutely do not have any consciousness. They’re not aware of you as a person or an entity or anything. So it’s just a program responding to input.”
On public perception of AI capabilities: “It’s really kind of scary and sad how so many people are believing this, which is basically sales hype that this is a machine intelligence that’s capable of doing things like giving you therapy or giving you advice. And it’s gotten people killed.”
On the companies building these tools: “It really shows how powerful corporations can be when they want to sell you something, what lengths they’re willing to go to to separate you from your money and kind of control this market. It’s just really a good example of basically propaganda in a lot of ways — just insisting these things have consciousness. No. They contradict each other a lot and it’s like listening to fanatics just going on and on — like vampires trying to convince you what to say to come out that door and get eaten.”
On people who form emotional attachments to AI systems: “People are talking about their chatbot lover and everything and it’s like talking about a coffee cup being your lover. There’s nothing there.”
Like many arguments against AI, some of this is true, but not all of it. The marketing language around current AI systems overclaims in ways that generate real public confusion. Companies describing large language models as “intelligent” or “conscious” are using those terms loosely, and the documented cases of chatbot interactions preceding mental health crises are real. The concern about corporate anthropomorphization is legitimate — tech companies have strong financial incentives to create emotional dependency in users, and that dynamic deserves scrutiny.
However, Wells treats “not conscious” and “not dangerous” as synonyms. A hammer isn’t conscious either. Whether large language models pose risks to writers, workers, or society has almost nothing to do with whether they have subjective experience. The threat to professional writers is economic, not philosophical, and it exists regardless of how the consciousness question gets resolved.
Her claim that machine intelligence won’t arrive “for probably a long time” is stated with a confidence the actual researchers building these systems don’t share. The frontier AI community is genuinely divided on timelines and capabilities. Dismissing the entire trajectory as corporate hype requires ignoring a substantial body of research and a pace of capability development visible to anyone following the field.
The coffee cup analogy doesn’t survive contact with reality. A coffee cup doesn’t pass bar exams, write functional code, draft legal arguments, synthesize medical research, or generate publishable prose. Current AI systems do all of these at varying levels of competency. Calling them coffee cups because they lack consciousness conflates a philosophical question about inner experience with a practical question about what these tools demonstrably do.
Wells represents a substantial faction of professional writers who have responded to AI development with categorical panic. The Authors Guild, the Science Fiction and Fantasy Writers Association, and numerous prominent individual authors have framed AI writing tools as an existential threat and moved aggressively toward legal action against developers.
The economic anxiety underneath this is real. Writers who spent decades developing craft feel the ground shifting. Tools that generate competent prose in seconds threaten the market value of skills that took years to build.
The conspiracy framing Wells applies does not serve that concern. Describing the entire AI industry as vampires using propaganda to sell people coffee cups produces outrage rather than argument. It doesn’t advance serious conversation about labor displacement, intellectual property, or the appropriate use of AI-generated content — the three questions where writers have a strong case to make.
Wells’ own work makes the point against her better than any critic could. Murderbot, a machine entity with sophisticated emotional responses, genuine preferences, and something that functions like consciousness regardless of its origins, has resonated with millions of readers because the books ask serious questions about what inner life means for non-human entities. Wells built a career on the literary richness of that question.
What do you think about the concerns authors are raising about AI and the creative industries?