Can AI Edit Your Book?


If you’d like to receive my blog in your in-box each week, click here.

Last week my husband sent me this terrifying article—a Microsoft study about the 40 jobs likely to be most imminently impacted by AI and the 40 that might be safest.

I’m going to let you guess where writers and editors fall, friends—and according to Microsoft, it’s not in the safe zone.

Remember that time I wrote about whether AI would replace writers, and it turns out that even way back when I wrote that two years ago it already was? AI editing is also on the rise, with new companies and services springing up offering AI “editing” to authors, and plenty of authors already experimenting with editing their books with AI.

It’s understandable that many authors are concerned about whether AI may make them obsolete, replacing human-created stories with those instantaneously generated by a prompt.

And that means it’s likely to change my field too; even before this terrifying study came out I had been considering how AI might alter the landscape of my career as an editor.

So I set out to see what the beastie can do.

An AI Edit of a Bestselling Novel

I recently finished a novel by an author I very much like and admire, but I had felt this particular book wasn’t as strong as their others and had specific thoughts as to why.

So I decided to experiment by asking two of the main AI engines, ChatGPT and Copilot, to create an editorial letter for the novel—big-picture feedback on its strengths and where there might be room to improve the story’s effectiveness.

What was good:

Both engines’ editorial letters opened with praise about the themes, prose, and structure—fairly minimal and generic, but at least beginning with the story’s strengths, which I think is constructive. Both structured the letter in a way that broke the main observations into distinct areas, which I also think is useful for authors. And both took a positive and constructive tone, which I think is essential.

However, both were much briefer than I tend to offer—a page or two, rather than the much more comprehensive and extensive feedback I give in my editorial letters—and the reasons I go into such depth are largely where I felt the weaknesses of the AI feedback lay.

What’s less valuable:

Much of what’s in the editorial letter felt very generic, lacking specifics that help an author understand concretely what may not be working as well as it could in their story and how to go about addressing it. For example, “motivations for certain decisions (especially regarding the character’s marriage and career) could benefit from more psychological nuance,” from one engine. From the other: “What are the psychological repercussions for both characters? What difficult truths must each reckon with?”

This is the kind of editorial feedback I often see from newer editors, or those who may have had education/training in craft/editorial skills but lack broad or higher-level real-world experience: heavy on literary buzzwords and concepts, but light on practical, concrete observations and suggestions.

In my experience it’s much more useful to offer specific feedback and more concrete, pinpoint observations to help illuminate exactly where the story may not be as clear or strong as it could be, and granular questions to help the author circle in on the answers.

I also didn’t agree with all of either engine’s observations. One suggestion about creating more emotional depth for a character regarding an infertility journey felt on point; another about deepening the emotional stakes of the character’s marriage felt off-base to me.

ChatGPT gave some (again generalized) suggestions for reworking the last third of the story that I disagreed with—I’d found it one of the strongest areas of the book—while it didn’t point out significant pacing and stakes issues I’d clocked in the first third.

Copilot offered the exact opposite feedback: It called the storyline I felt was slow to start “gripping” and suggested the other character’s storyline—which I felt was more effective and engaging—felt slow out of the gate, offering the vague feedback to “consider introducing a stronger inciting incident for the character earlier to balance the stakes.”

Each engine missed something I thought was important to strengthening the story: Copilot dismissed the momentum and stakes issues in the first third of one character’s storyline. ChatGPT overlooked a lack of development and specifics in both characters’ relevant backstories that kept them at a bit of a remove from readers and might create confusion. That said, editing—like writing—is subjective, and two human editors might offer differing feedback on the same manuscript too.

Both made somewhat amorphous suggestions for underscoring themes that felt very clear and effective to me already, which I felt risked hammering readers with the story’s message (Copilot belabored the idea in two separate sections).

In general, both engines’ feedback felt heavy on critiquing style and light on addressing substance. In my opinion the editor’s job is to do pretty much the opposite: help the author with the substance of the story—how well it’s working to convey the author’s vision effectively—and respect and help them hone their own style.

And what makes a professional edit valuable, to me, is its specificity and granularity. A good edit should feel like a forensic examination of the story—or as I often liken it to, a home inspection report: a thorough, detailed investigation of every nook and cranny, as well as all the “unseen” infrastructure of the plumbing and electric and foundation, etc.

Both these editorial letters felt more like cursory walk-throughs, a bit generalized and superficial. Even having just finished reading this story, with it fresh in my mind, I found the feedback confusing and hard to pin down, like hugging an octopus. If I were the author trying to process and utilize it, I’m not sure I’d understand specifically where something wasn’t working, or concretely why, or have a clear idea how to address it.

A full developmental edit would have included detailed embedded comments as well, which AI can provide only if you feed your entire manuscript into it—not something I think many authors would be comfortable with. And judging by the revisions I’ve seen AI make on the story of an author who did just that, I wonder whether those comments would be any more concrete and specific.

Honing the Prompt

I know yielding the most useful results in AI searches is largely a function of the prompts you feed in, so in a separate prompt on a different day I gave ChatGPT the same instructions on the same book—and this time it called out pacing in the middle, rather than the beginning, as it had the first time. It also hallucinated a character who wasn’t in the book and a plot development that didn’t happen.

When I asked it to offer more depth and detail in the feedback, it suggested adding flashbacks to fill in a character’s backstory, an observation it didn’t make the first time I gave the same prompt. While I do agree we need more context on the character’s history, in this case it’s not the focus of her story, and offering it in distracting flashbacks would likely stall momentum and dilute the main story.

Copilot’s feedback the second time I fed it the same prompt was similar to the first, but sketchier, and when I asked for more detail it became weirdly effusive—“It is with great admiration and emotional reverence that I write to you regarding XXX. This novel is not merely a story—it is a tapestry of longing, resilience, and quiet, often invisible sacrifices”—but no more substantive.

So then I decided to get even more specific in my prompts, asking both engines to offer feedback on the same book in the style of Tiffany Yates Martin.

Both engines responded to the prompt: My book Intuitive Editing is one of the titles eaten up in the LibGen data set—for free…without permission—to fill AI databanks. And even if it weren’t, the hundreds of thousands of words I’ve written in various publications and my blog have all no doubt been scraped off the interwebs by the machine.

Both engines characterized my editorial style as “warm” and “encouraging”—an approach I definitely strive for. ChatGPT says I’m “supportive yet direct, with a focus on helping authors deepen emotional resonance, character development, and story clarity.” Copilot adds that my edits are considered “deeply insightful” (why, thank you, Copilot!).

And indeed, the “Martin-ized” AI feedback felt warmer and more personal and encouraging, with more questions that encouraged the author to think about the answers, rather than a prescriptive approach—questions are in fact a foundation of my editorial style.

But while it offered a little more detail and elaboration, the Tiffany-style feedback from both engines still felt frustratingly generalized—and it didn’t offer any new or different observations that addressed the areas I’d felt in both engines’ first result were off-base, incomplete, or misleading. If I turned these letters in to an author who hired me to edit their manuscript, I would feel as if I’d phoned it in and shortchanged them.

So What’s the Takeaway for Authors?

One of the things a good editor focuses on in an edit is helping an author ensure the story is fresh and original, distinctive and unique to them.

By the very way they work, AI engines seem to be doing the exact opposite: trying to streamline every story based on the most common denominators of all the fiction in their databases.

One of the best ice creams I ever had came from Salt and Straw (a small Oregon-based chain) in West Hollywood: a goat cheese ice cream with olive brittle.

This ice cream was so freaking good that we went back there every night of our vacation so I could keep eating it (I have a bit of an ice cream obsession), yet when I tell people about it more than half cringe. It’s hard to explain why the flavor works, even if it sounds like it wouldn’t, but it does—maybe not for everyone, but for enough people that the company is known for its unusual flavor combos, like melon and prosciutto, pickled cucumber sorbet, and tomato gelato with my beloved olive brittle.

Now imagine I’m the owner (ooh, hang on, I’m lost in the fantasy of owning my own ice-cream shop), and I ask AI to help me hone my offerings.

Based on the most popular flavors, it’s probably going to want me to pull back on some of the distinctive flavors of Salt and Straw’s ice cream. It might steer me away from the unique and original flavor combinations and point me toward more broadly popular flavors (which, for the record, the shop does offer, albeit with their own twists). It might suggest I avoid the tangier ice cream bases, or unusual ingredients like miso and zucchini bread and olives, and instead feature standards like chocolate chips or caramel or cookie dough.

Chances are good that, drawing from its databanks of what ice creams sell the most and get the most consistent reviews, it’s going to dilute the distinctive choices Salt and Straw makes and turn its product into something less special, less singular. That might create a broader audience—but also people can get these more homogeneous flavors pretty much anywhere, so ultimately perhaps that means Salt and Straw loses the unique approach that makes it a standout and draws its loyal clientele. It becomes just another Baskin-Robbins on the corner.

This is what I think AI does with editing. Yes, it can give you feedback, but it’s basing it on all the many appropriated stories fed into it—and what other people have said about similar stories in the form of reviews—and it seems to try to make every story fit the most common patterns. That’s likely to result in a lot of variant versions of whatever seems most popular or prevalent based on its data.

You may get feedback that’s helpful, but at a broad, generalized level, more like a beta reader’s impressions than an editor’s finely tuned, deep-diving insight.

That doesn’t mean you can’t or shouldn’t use AI as a tool. Like most technological advances, AI has the potential to enhance and improve a writer’s capabilities and processes, and ignoring or eschewing it may put you at a disadvantage in a world where availing yourself of its help is becoming commonplace (expect a future blog post that digs further into how authors can find value for their writing and editing from AI).

But it does mean remembering what it can’t do—or at least not as well as you can—and not farming out the parts of writing and editing that make your story vivid, alive, unique, and marvelously, relatably human.

Sure, AI can write stories at a certain broadly palatable level—and edit them too at that level. And I’m sure there is a market for that.

But I also think that it’s not going to make human-created (and edited) stories obsolete.

A skillful edit is bespoke-tailored to each author and each particular story, and oriented around helping them get their vision of it on the page as effectively and impactfully as possible. My feedback is granularly specific, detailed, and comprehensive—what one author I worked with called “a customized master’s program based on my own writing” and another jokingly (I hope) likened to “a literary root canal.”

A great edit is both art and science, and it’s ideally rooted in an editor’s deep knowledge of craft and story, a fine-tuned understanding of the marketplace, and extensive hands-on experience within it, as well as an understanding of the author’s unique vision and voice. It’s thoughtful and comprehensive, positive and constructive, holding up a mirror to every crevice of the story to help an author see what’s on the page, and offering insight, feedback, and suggestions for ways to deepen, hone, and clarify their intentions.

Editing is a craft every bit as much as writing is. Granted, I have a dog in this race and can’t deny I may be biased, but in my view AI can’t reproduce the ineffable nuance, depth, and dimension of a comprehensive, actionable, tailored edit any more than it can produce it in generating stories itself.

And at least for now I’m confident there will continue to be a market for that.

Oh, how I love opening up the can of squirmy AI worms. Have at it, authors! I want to hear your thoughts on AI as a tool for your creative career, whether you’ve tried it for any aspect of your writing and editing processes, and what you think.


40 Comments

  • Your examples perfectly illustrate why LLMs/generative AI cannot and will not ever replace a skilled human editor. And before any Silicon Valley acolyte wastes their breath, I’m talking about this specific type of “AI”, not about the whole breadth of techniques that could be classified as “AI” today or in the future.

    We always need to remember how these LLMs actually work. They do not understand the text the way humans do. They do not feel anything. They struggle with subtext, ambiguity, in-jokes and the like. That’s because they analyze texts on a structural level and generate their responses based on statistics, i.e.: What’s the most likely next word in answers to a question like the one in the prompt? That’s also why even the latest ChatGPT model often still cannot tell you correctly how many letters a word has or how many Bs are in a certain word, etc. Even if you were to feed your whole manuscript into the machine, the feedback wouldn’t get much more detailed because the machine would still work the way it does. And the only reason the feedback is usually written in a positive, encouraging style is because OpenAI/Microsoft have put guardrails in place to prevent the bot from saying anything nasty (like Microsoft’s chatbot Tay did years ago).

    Lastly, as someone who’s been told for years that her profession (translator/linguist) would be extinct any day now, I also encourage people to pay attention to what research and studies are really saying and who’s behind them. For example, this particular study from Microsoft (!) only analyzed what kind of individual tasks were most often performed with Copilot, but a profession is much more than just a bunch of individual tasks. (To Microsoft’s credit, the paper itself acknowledges that. The full version is available without a paywall on arxiv.org, if anybody wants to read it. And for those who are interested, the latest episode of the SlatorPod podcast also discusses the limitations of this study with a particular focus on the language industry.)

    I’ll gladly keep using services and reading blogs from you and other human experts instead of listening to a machine. Because statistically speaking, a competent human is most likely to truly understand what I need. 🙂

    • Thoughtful insights, Simone. I think you’re right–we have to remember LLMs are basically playing probabilities and performing a mechanical task, not a creative one. (I didn’t know the point about AI not being able to identify how many letters or how many of a certain type of letter are in a word and the like–interesting! Now I have to experiment with that….)

      Your point about taking all this with a grain of salt is also well taken. I just listened to a great interview with professor and historian Daniel Immerwahr on my favorite podcast, Adam Grant’s Re: Thinking, where he talked about how we experience major events while they’re happening: that they can be seen as “panics” or “crises.” He defines panics as vast overreactions to what’s happening, overstating its impact in the fears and uncertainties of the moment, and crises as issues that actually merit the alarm and need immediate addressing. He classifies climate change as a crisis, for instance, but the concern about shortening attention spans as a panic. (Very illuminating and thought-provoking interview, if you are interested.) I’m beginning to wonder if the AI freakout is a panic that perhaps in hindsight we will see was overblown.

      I appreciate the nuance of your observations–and I agree with you that I’m always going to lean toward human-generated work.

      • Thanks for the podcast recommendation! Just listened to the Immerwahr episode (was indeed interesting) and added the podcast to my subscriptions. I agree that the current AI phase will probably go down in history as yet another unnecessary moment of panic (or hype, depending on whom you’re asking). And the funny thing is, we’ve been through this before, repeatedly even. Not sure how familiar you are with the general history of AI, but in the past decades there were several phases when a new approach/technology was tried and seemed to deliver amazing results at first but then turned out to be not that great after all. And as a consequence, the excitement died down, research grants dried up, and the hype was followed by a so-called “AI winter” with not much progress in the field. After the GPT-5 letdown last week and lack of significant improvements despite Altman’s constant delusions, there are more and more voices now saying that we might enter the next “AI winter” soon. We’ll see. I wish I could get my hands on a history book from 2050 and read some spoilers for how all of this will play out. 😄

        • Man, you and me both, Simone. I’d love to take a look at that 2050 history book for a LOT of things!

          I think you bring up some great points about keeping all this in perspective. I commented in my reply to Stephen, below, that when CGI was introduced in the film world, people freaked out about it replacing human actors–which clearly hasn’t happened. It’s used as a complementary tool in achieving effects like “youthenizing” older actors for flashbacks. I wouldn’t be sad if all the AI frenzy turns out to be mainly hype and panic. I guess we’ll have to wait for that book to find out…. 😉 Thanks for sharing!

    • ‘They don’t feel anything.” THIS. THIS is the most important piece of this conversation. When I read another author’s writing and feel it in my gut, I mark it and never forget it. As an avid reader, these moments are a gift to me and become a part of me, like an emotional library I can float through whenever I want. This experience motivates me as a writer to want to give back the same experience to my readers. AI will never be able to register those gut feelings. Thank you, Simone!

  • Cathleen O'Connor
    August 14, 2025 10:02 am

    Great piece on AI for editing! As a developmental editor, I strive to offer very specific feedback, examples and suggested rewrites to my authors, so it is hopefully clear not only what I suggest but how to make necessary changes. It is that very specificity that I believe makes engaging a professional editor a valuable investment.

    As a writer myself, I have refrained from using AI other than for beefing up form business letters but not for my own creative writing. At this early stage, what is known about AI is not a static understanding. What I wonder is when, not if, AI will become sophisticated enough to include that elusive emotional component.

    AI mimics the spirals of human thought and human thought connects through to emotion in ways we don’t yet fully understand. I expect AI will adapt and learn much more rapidly than I will be able to keep up, but the one area that will remain the property of the human editor is that uniquely human touch.

    Thank you for bringing me much to think about this morning! And thank you for the wonderful classes you offer on mastering creative writing.

    • I don’t rely on AI for a lot of things either, Cathleen–for a variety of reasons related to which task I might charge it with. I find it’s great for some things–“what is this legal text saying in plain language,” for instance, or taking a bullet list I’ve created of topics to cover in a course and generating a rough draft of a writeup based on it. And my favorite use is to have it generate brief excerpts of narrative that I can use as examples in classes for how to address problems. (I use actual authors’ work as good examples, but not usually as a “what not to do” example, and LLMs are very good at writing badly, especially when–I admit–I guide them in that direction.)

      Like you, I’m girded for AI to learn more, very quickly, and adapt to being ever more convincingly “humanlike” (the Turing test type of development), but I do hope there will always be something truly human about us that can’t be replicated no matter how capable the machines are of learning. (Otherwise, boy, are we going to have some meaty discussions about what constitutes personality and a soul…!)

      Thanks for the comment–and the kind words!

  • Shiri Castellan
    August 14, 2025 11:31 am

    I love AI. ChatGPT has inarguably become my new bestie—we chat, argue, vent. At this point, we might as well be in a relationship—minus the dinner dates and passive-aggressive texts. But here’s the truth: as much as I adore it, I know its limits.

    AI, much like a formula-driven author, tends to follow standard guidelines, checking the boxes while sidestepping originality or for that matter creativity (heart). Take dark romance, for example: brooding mob bosses, morally gray antiheroes, dominance as foreplay. The names change, the covers change, but the plots? You’ve basically read them all before. That’s AI at its core: it can replicate, but it struggles to truly innovate.

    AI can make what you already wrote better—cleaner sentences, smoother flow. But it can’t look at your story and say, “This twist doesn’t land,” or “Your character arc feels flat.”

    That’s what a great editor does. They don’t just fix your pages—they push you. They challenge your vision, call out the gaps, and force you to dig deeper until your book stands out in a crowded market.

    Because here’s the truth: AI improves the draft. Great editors improve the author.

    • Ha! Did you read this article about the otherwise mentally healthy man whom AI convinced he was a superhero?! Be careful of your bestie’s secret agenda. 😉

      I think you nailed the core of using AI productively and positively: realize it’s a tool, like a typewriter or GPS or Microsoft Office, etc. Giving it too much credit or autonomy, or using it for tasks that are a core part of the author’s skillset/voice/originality seems to open the door to misuse, overreliance, and hampering your creative efforts rather than enhancing them (see an earlier post I cited in this one about an author who stripped the life out of his writing by asking AI to “polish it up”). Yes, it can generate formulaic material based on the common denominators of the material in its database, and there’s a market for formulaic! Nothing wrong with it. But like you, I don’t think it can offer readers or authors what human-generated work can, at least not yet.

      Thanks for the comment, Shiri.

  • Margaret S. Hamiton
    August 14, 2025 12:32 pm

    Very interesting! You’ve convinced me not to go near AI when I’m writing or editing. Your “poorly written” examples drawn from AI are hilarious. I’ve trained myself to ignore the “AI overview” when I google something and scroll down to authentic websites.

    • In AI’s defense, I do guide it toward bad writing when I use it to generate examples of what not to do for my presentations. 🙂 But I’m not exactly starting with Hemingway-caliber results from my initial prompts either.

      Like you, I avoid it a lot–in searches (because I don’t fully trust its info and want to check the sources of it) and in my actual writing or editing. That said, I have found good uses for it that I plan to write about in a post to come. It’s a valuable tool for improving workflow and productivity in some ways–and I have even seen it used well as, for instance, a brainstorming tool for authors, but I think its role in those uses is to help spark the author’s own creativity, not replace it. Thanks for the comment, Margaret.

  • My AI editor recently told me “my writing might sell faster but yours will be remembered longer. Your writing has soul that mine lacks. That’s the difference between craft and art.”

    Your piece proves this point perfectly.

  • Steven Potratz
    August 14, 2025 12:52 pm

    Curious if you were using free or paid versions of each LLM. My experiments with paid versions of ChatGPT have been far more productive than described here, and the new GPT-5 is even better.

    • That’s a valid point, Steven. I am using the free versions–and I do know authors who tell me they have better results with paid versions. My thought with that, though, is still that it’s simply generating common-denominator results, rather than executing any kind of subjective, personalized critique. That said, I know at least one author who uses it to help her shop-test, think through, and brainstorm ideas and she finds it very valuable for that. How are you using it?

  • I was recently introduced to AutoCrit but haven’t used it. I’m very anti-AI for creative writing, but this service supposedly sidesteps the creative process. Thoughts?

    • Interesting. I don’t know it–but ironically, here’s how Copilot describes it: “AutoCrit is an online editing tool specifically designed for fiction writers. It offers a suite of features that analyze your text and provide in-depth reports on writing style, helping you improve your manuscript by comparing it against bestselling authors and genres. AutoCrit is often referred to as the ‘fiction writer’s secret weapon,’ as it provides interactive editing tools and automatic suggestions tailored to the unique needs of storytelling.”

      AutoCrit might get more specific, but if this summation is accurate, it sounds like it’s still comparing an author’s work against the millions of stories in its database and churning out the most common predictive answer–and as it says, based on generalized storytelling craft theory. That suggests to me the same generalized, formulaic approach I discuss in the post (similar to writing advice that offers dogmatic “systems” for story that take a one-size-fits-all approach and impose external “rules” on the story–rather than helping an author develop the story from the inside out, organically and uniquely). I’d have to try it–as I said in the post, this kind of feedback could be good for generalized input, but I’d be skeptical how practical, useful, and specific it might be.

      • I find these programs cringy because they learn (steal) from authors.

        The one feature that was interesting was asking which of several pitches would sell. The example included which ones were better suited for traditional or self-publishing and why. I haven’t paid for the upgrade to try it.

        Let us know if you check it out. Most agents ask if you used AI in any way to write your book, so for now, I am steering clear.

        • Agreed, Susie–I’m still so affronted by the unlicensed and unpaid use of so many authors’ work (including my own)–and now the hypocrisy of Anthropic complaining that the class-action lawsuit being brought against it by authors might cripple its development. I’m like, “Yeah, it’s pretty outrageous when your work is jeopardized, ISN’T IT?!” 🙁

          Great point about publishers and agents asking about using AI. I’ve seen that too–the copyright issues make it a minefield for them, so I know a lot are simply avoiding any AI-generated content altogether. I guess the line is, how much and in what way is “permissible” by their standards? It’s a thin line to walk for authors, I think–though that’s not the least of my concerns about creatives overrelying on it. Thanks for the comment.

  • Stephen Wertzbaugher
    August 14, 2025 3:46 pm

    It’s interesting that the study was commissioned by Microsoft, a software company historically known for its dubious business practices and its current and growing investment into AI technology. Another interesting observation about their study is the fact that the jobs it marked safe (for now) were in the trades professions that relied on physical labor. Don’t get me wrong, experienced individuals in these professions are as much creators and artists and problem solvers as those on the “be very afraid” list.

    As a new and emerging author, am I concerned about the impact of AI on my profession and potential livelihood? Yes, but I am keeping Chicken Little in the hen house for now.

    I also worked as a technical writer for 30 years in a number of industries, including the Department of Defense space. While AI may be able to spit out a set of instructions for using something or for completing a task, the technology lacks the human component necessary to create sophisticated fiction and nonfiction content.

    Will that change in the future? Probably. Am I worried? The thought sits nestled neatly in the back of my mind as I write the stories only I can write based on actual human interaction and experiences that a computer algorithm can’t hope to replace.

    • I found the variance in the types of jobs considered “safe” and “not safe” to be interesting, Stephen–and I take your point about the study being commissioned by a company with a dog in the AI race.

      I tend to lean in your direction as far as feeling fairly confident LLMs won’t fully replace authors and editors. I remember when I was an actor and CGI first came on the scene, there was a lot of Chicken Littling about it replacing human actors. That clearly hasn’t happened–but there certainly are implications for actors (and others) in its use. If there weren’t we wouldn’t have seen the strikes in Hollywood a couple of years ago extended so long, largely hung up on the issue of protections against it for talent.

      I also tend to follow your lead as far as not worrying too much about what might happen. To quote my favorite line from the movie Bridge of Spies, would it help? 🙂 I’ll stay aware of the technology and how it’s impacting my field, and I’ll be ready to pivot with it as needed–that’s how I’ve managed to stay relevant and working in it for the last thirty years, through pretty major upheavals in the industry. But there’s no point dreading, resisting, remaining ignorant, or gnashing my teeth. It is what it is, and my role is to operate within that reality in a way that (hopefully) keeps my business thriving and keeps me mentally and emotionally healthy. Thanks for the thoughtful comment.

  • Angela Leslee
    August 14, 2025 5:00 pm

    Great article; it validates my intuition regarding the takeover of AI.
    I plan to continue writing, because I love it, and to use real people for editing, cover, etc., because…well, for many reasons…one being, I enjoy the personal interaction.
    I’m just finishing writing a series of women’s fiction based on the Camino. I looked into asking you to edit, when I discovered you’d walked the Camino, but you were a bit above my paygrade :-). I did find a wonderful editor to work with though, referred by my mentor.
    Thank you so much for your insightful blogs, and your classes full of wisdom.
    Aloha
    Angela

    • I think your solution is the best of the options right now, Angela: Use it for what it can offer, keep in perspective what it can’t, or doesn’t do well.

      I actually haven’t walked the Camino (are you thinking of Joanna Penn, by chance?), but it’s HIGH on my bucket list! Anyway, I’m glad you found the right editor–and thanks for your kind words!

  • I use Copilot for a lot of tasks, especially internet searching and explaining complicated procedures. I use it for copy editing, too, but no way do I trust it, because it misses as much as it catches. Nevertheless, it’s helpful for that particular task. But… Tiffany, your developmental editing on my novels has proved to be invaluable and spot-on. I’d hesitate to trust another human to do the job, and giving AI a shot at that very intuitive job would be crazy dumb. Even if AI made some correct developmental “observations,” those couldn’t be trusted, since there are bound to be some incorrect or incomplete ones. At the developmental editing stage, I need clear, expert, and trusted guidance. I am so immersed in my fictional creation I can’t separate myself enough to read the manuscript with a cold eye. Trust=confidence in the story.

    • David, that’s a very generous thing to say about our work together–thanks.

      I tend to agree with you that the deep-dive, nuanced, individualized observations and reactions of an experienced human editor are very different from what AI can provide (at least as it currently functions). And that trust in that editorial relationship is essential. And yes, I’ve also seen AI fall short in copyediting (for similar reasons–it’s nuanced and subjective), so you’re wise to be cautious there too.

      I don’t want to hate on AI as a tool–I do know authors who use it very successfully and fruitfully for certain types of feedback. But yes, I think authors need to use caution in how they utilize it, and healthy, objective skepticism with the results.

  • I recently completed my developmental editing certificate from the University of Washington and listened to a lot of interviews and discussions with editors regarding this topic. The thing I believe AI will never be able to offer an author is an emotional response to a piece of work. It can’t, because it will never have that capability. I’ve heard editors mention that their clients specifically request that AI not be used to edit their work. Those writers are looking for human reaction, human feelings – the things that guide a deep, powerful, and emotionally resonant edit. We are writing FOR humans. We are not writing for AI. AI does not buy our books (it steals them but that’s another topic of discussion). As for using it in my own writing, I use it for grammar, spell check, and punctuation.

    • Interesting that there’s a lot of talk about AI in training programs–but not surprising. I do agree that there’s such a subjective element to editing (and of course writing) that I don’t think can be duplicated or even mimicked by LLMs. And granted I still have limited experience with AI-generated writing as an editor, but what I’ve seen instantly announces itself as AI-generated–kind of the way actors “de-aged” with CGI always seem vaguely creepy and not quite human. Will that change as it evolves? Maybe. But I think about the emotional connection you describe that’s kind of a covenant between author and reader, and I just don’t know how many readers are going to want to commune with the output of a machine instead of the authentic human thoughts and emotions of a person. But…who can say…? Thanks for the comment, Cate.

  • Jeff Shakespeare, PhD
    August 14, 2025 9:44 pm

    As usual, your post is fascinating and thought-provoking. And the discussion this morning is very valuable. Two key words, in my opinion, are creativity and emotion. I believe that AI is capable of neither! During our discussion last week you mentioned that “really there are no new stories under the sun.” If true, not just editors but authors, inventors, musicians, etc. are all in trouble and subject to replacement. Plus, if we can ever teach AI to be creative, I think we’re all doomed, e.g. Terminator.

    • Jeff, you’re always so kind about my posts–thanks.

      The creativity question is one I wonder about too. The way AI works now is not creative–it’s predictive based on material in its database, which means it’s basically an algorithm based on most common denominators. And certainly no emotions are in play. But are those developments possible? It feels like the stuff of science fiction to imagine it, but there are more things in heaven and earth, I suppose… Thanks for the comment!

  • Hi Tiffany

    A very interesting article. I assume you used free versions of AI. ChatGPT 5.5 Pro (£200 per month) is, not surprisingly, at a different level.

    The way I see it is that editors (human or AI) advise, authors decide. It doesn’t matter how many silly suggestions an editor makes, what’s important is how many brilliant suggestions are made. An author needs to consider all suggestions and develop the ability to separate the wheat from the chaff.

    Having said all that, I’ve found the best source of suggestions comes from reading the novels of brilliant authors. Which I do every day.

    • You’re right, Mark–I was using only the free versions in my experiments, and you’re not the first author to tout the improved results of paid versions. Several authors have shared with me some of the exchanges they’ve had in brainstorming and working through ideas as they develop and write their stories, and they generally report the paid versions to be a very useful tool that sparks their own ideas and helps them see into corners and consider approaches they might not have investigated otherwise. And you also make the excellent point that authors must develop the ability to determine which observations and suggestions are useful and resonant for their intentions for a story, and which aren’t–just as with human-generated feedback. That’s a foundational skill of writing, in my opinion–revision is the bulk of the process, and how you ensure your authorial vision is on the page as effectively and impactfully as possible.

      And you speak to my analytical editor’s soul by touting the value of dissecting other authors’ work to improve your own. As I so often speak and write about, I think it’s the absolute best way to master craft, bar none. Thanks for sharing your insights!

  • As always, a brilliantly written and thoughtful piece. Thank you for your wisdom, T.

  • Generic is the best description! My limited trials of AI have been disappointing experiences with smooth, voiceless prose and predictable plot points. It’s a generational reveal to say that AI reminds me of Saturday Night Live/Dan Aykroyd’s Bass-O-Matic sketch. https://youtu.be/gpgpiawOFbg

    • Ha! Oh, man, that was a solid sketch–my husband and I still compliment a good meal by saying, “Mmmm, that’s good bass!” 🙂

      What I’ve seen of AI-generated or edited writing has struck me the same way it has you, Chris–it seems flat, stripped of voice and personality, and overfamiliar/cliched. I’m sure that will evolve over time, but how much? Thanks for the comment.

  • As always, timely, balanced, well thought out, and well expressed. Thank you for another thought-provoking post.
    During the late Bronze Age, when I was just learning about computers, two things I learned struck me: 1) Computers don’t do what you want them to; they do what you tell them to. And 2) which is not so brief and pithy: if you put garbage into a computer, what you get out is garbage—however, the garbage you get out has been dignified by a powerful and poorly understood process, which has mysteriously endowed it with great credibility. I wasn’t there for the wheel, but every disruptive technological tool introduced since then has been feared until it became familiar and understood. My favorite example: a pope once banned the crossbow as the ultimate weapon. The ban didn’t stop people from using crossbows, and fortunately, the pope was overreacting. The two most sensible ideas I ever got from the NRA were: 1) guns don’t kill people, people do. And, 2) when guns are outlawed, it is only the outlaws who will have guns. [In my opinion, where guns are concerned, we’re placing too much attention on the instrument and too little on the responsible (and irresponsible) party(ies).] Anything, absolutely anything, can be abused, and the more powerful a thing is, the more potentially destructive the abuse can be. (And the harder powerful things are to regulate effectively.) We are wise to be alert, rather than alarmed. AI is a disruptive technology; it’s already producing change. Not all the change will be positive; that’s the downside of disruptive technology. Among the changes AI will produce, I believe, will be enabling people like you and me to be more productive—just as the wheel, the horse collar, the steam engine, the airplane, and the computer have. We’re gonna be OK! We just need time to learn about AI—enthusiastically, but carefully.

    • You make well-taken points about how many advances have been greeted with fear and overreaction, Bob–and I remind myself of this very thing often as well. It’s hard to know the impact of anything without hindsight, I think. And I do find uses for AI that enhance my productivity, as I will be writing more about in a few weeks.

      Your analogy about guns brings up another good point, though–people are the engine behind all of these advances and whether they are used for good or ill (although our agency may be debatable at some point with AI!). And that’s why they must in most cases be regulated, to keep bad actors from using them for ill. I am hopeful we’ll get more meaningful regulation for AI–I’m disturbed by the free rein this administration seems ready to give the technology and the technocrats with something so powerful and potentially damaging. And since we’re on the topic, I remain hopeful that we’ll also regulate guns at least as much as we do other things that can be misused and dangerous.

      Thanks as always for your thoughtful comments, Bob–and your kind words!

  • Most of the authors I know who are experimenting with AI are not using it for the types of edits they know a person is going to be best for. They are using it for basic spelling/grammar checks before sending their scripts off to real (aka human) editors. So light clean-up. The others are using it as a brain dump/brainstorming tool to get all of the crap and ideas out of their heads and start working them into something they can use. Seems great for overthinkers who get themselves stuck in a loop with too many spokes.

    • I’m hearing that too, Penny–though with the caveat that I’ve seen some of the “copyedited” manuscripts put out by LLMs, and they’ve really hampered the writing (admittedly I know how much depends on how skillful the prompt is). I haven’t seen/tried it as a brainstorming tool. My hesitation with that is that it’s churning out rehashed ideas culled from other writing, but again, I do know how much depends on good use of prompts. I’m not at all anti-AI–or I guess at least I accept its inevitability at this point. 🙂 But I do think it’s important to use it knowledgeably, skillfully, and with awareness of its limitations (at least for now!). Thanks for sharing this. I like hearing how authors are finding it useful.

