I first wrote about LLMs (large language models, like ChatGPT and Copilot) in January of 2023. That post seems a bit quaint to me now—which kind of freaks me out, given that it was not quite three years ago. Things, to paraphrase Ferris Bueller, are moving pretty fast—the speed at which AI has become ubiquitous makes my head spin.
It shouldn’t. Think about how quickly Facebook and other social media foundationally altered our social lives (and psyches), how fast smartphones transformed society, and how quickly Zoom (and COVID) reshaped how we do business.
AI was the unthinkable bogeyman that sent the world into a tizzy. Now it’s…well, a bogeyman that has become a casual part of many people’s daily lives and work.
I don’t want to demonize the technology, or catastrophize its potential impact. Like any other technological advance, AI is a tool—an incredibly powerful one that can be useful to authors, even enhance their careers and creativity.
And I am not remotely an expert in this topic—or anything else having to do with technology. Frankly, it kind of surprises me that I wrote about AI as early as I did—and as often (later posts covered whether AI would replace writers, authors’ work being misappropriated into AI engines without compensation, and using AI in your writing and your editing).
But given what we know about how easily these tech tools can creep into an outsize place in our lives, it’s worth being mindful and intentional about how we’re using them. As I wrote in last week’s post on being deliberate about where we focus our attention, the more we think through how we avail ourselves of AI’s capabilities in our writing and careers, the better we can create a relationship with it that allows us not only to tap the potential benefits but to safeguard our work against the possible drawbacks and dangers.
Know What You Really Want
Rather than going right away to AI to find answers, it’s worth first considering whether we’re asking the right questions.
And I’m not talking about prompts, but about our motivations and our values—just as I often suggest doing with any other element of your writing career—in determining which uses of AI might be right for you, your writing, and your career.
Read more: “Reassessing Your Writing Career”
Years ago my husband and I had a house with an acre lot, and when we first moved in he wanted to hire a lawn service. “Absolutely not,” I said, having just sold my own home in Florida, where I regularly mowed and edged and cared for my own yard.
Acceding to my wishes, he dutifully purchased a riding lawn mower and tackled the yard, only to come in after his first attempt and say he was going to get a lawn service.
Scrappy DIYer that I tend to be, I pooh-poohed it and superciliously offered to be the one who handled lawn care, and the next time it needed mowing I was the one out there on the rider.
Where I spent four-plus hours in the brutal Texas heat, painstakingly maneuvering the mower and edger around the countless fussy planter beds the previous owners had created (and managing to run over the pool drainage hose in the process).
“Get us a lawn service, please,” I said when I came in, and we sold our fancy (and barely used) mower.
It didn’t bother me one bit to farm out this chore. I was invested in the result, not the process, so once I realized the time and effort involved, it made the most sense to get it done in the most expedient and least taxing way, one that allowed me more time to focus on things that mattered more to me.
But my creative work is more about the process than the product. Yes, I want to generate well-crafted articles and books and courses and keynote speeches, but what matters to me most is the doing of them, the creation itself.
That’s how I learn and deepen and master my own craft, and my singular, authentic experience and perspective, I think, is the unique value I can offer to authors. So while it might be far more expedient to go to AI and have it generate a course curriculum or outline an article or book or even compose any part of those creative projects, that feels like it’s undermining the purpose and reward of them for me—and the value of them to others.
Know What You Value
Much of what guides my choices in my career is a core question: Does this feel like the right thing to do?
I don’t just mean the most correct or productive or effective thing, but also the ethically right thing. One of the things I’m most proud of in my business is that I’ve strived to create a reputation as someone authors, publishers, agents, and others in the industry can trust.
I’ve done that by adhering to some codes of conduct that feel honest and ethical and right to me. It’s why I spell out the terms of every contract crystal-clearly so authors know exactly what they’re paying and what they’re getting and when before we work together on the first word of their story. It’s why I don’t engage in affiliate relationships, and why I offer so many resources for free on my website.
Like a lot of us, I learned in college that the surefire way to fail an assignment or a course was to plagiarize. That meant copying another author’s work or ideas, whether from research articles or any other source material, and passing it off as one’s own.
For me this is one of the main reasons I don’t use AI-generated copy for anything except mechanical tasks. Not just because I have a constitutional aversion to anything that resembles plagiarism, and AI-sourced writing comes too close to it for me, given that so much of the material fed into LLMs was created by other authors and is being regenerated and regurgitated from their work.
But also because it undermines the central purpose and essence of what my creative work is. By definition AI-generated writing is not my creative work. Representing it as such, even in part, feels dishonest to me, unethical.
That’s not a judgment on how anyone else uses it, just my own personal code. A lot of people run their businesses differently from my model, for instance engaging in affiliate relationships, and I don’t condemn them for that. It’s just not how I want to run mine, consistent with my own priorities and values.
I’m not suggesting anyone avoid ever using anything AI-generated. As I mentioned, I use it for certain menial tasks that don’t feel like falsely representing myself or my work, for example generating catalog description verbiage based on my own original outline for a writing course. But I am encouraging you to consciously define your own values and thoughts about your work and creative work in general, and ensure that your use of AI aligns with them.
There are also practical considerations to take into account. Many agents and publishers, for instance, have started asking authors whether any portion of their work is AI-generated (as Jane Friedman reports, paid sub required). In most cases this isn’t out of prejudice, but because AI-generated material is not copyrightable, and publishers are as invested in protecting their copyright as authors are.
Some agencies and publishers allow leeway in how much AI use is acceptable and for what (though some, Friedman reports, automatically block any queries that indicate AI use in the work), but it’s worth thinking about how much you want to have to explain or justify, or whether you want to create yet another potential hurdle for yourself in the already competitive, crowded, challenging process of seeking representation or publication.
There are also reader and market reactions and perceptions to consider. Not all readers are comfortable with or interested in reading AI-generated stories, and some can feel duped or put off if they suspect a work might be. In fact the Authors Guild recently introduced a Human Authored Certification in no small part for that reason.
And as someone who has spotted AI-generated writing within paragraphs of an author’s story, I’d also suggest that in general you’re impoverishing your story, style, and voice by using AI-generated material in your manuscripts. From what I’ve seen it tends to be clumsy, clichéd, and flat.
How Can AI Serve Your Work?
So given all of these considerations, how have I decided to use AI?
First off, lightly. I don’t rely on it very much at all, preferring to tackle most areas of my creative career and business myself. I’m sure that has no small amount to do with control issues I may or may not have (but I do) and an unfortunate inability to delegate responsibility, but it also has a lot to do with my pride of authorship, my creative process, and my own values, as well as my personal preferences and proclivities: I’ve always been a reluctant adopter of most technology. (My husband suspects that I might be Amish.)
But I have found excellent uses for AI that have greatly streamlined and enhanced certain aspects of my work. For instance, I’ve joked in presentations for a while now that most of the examples I use in my workshops to show what not to do, so that we can look at ways to improve our writing, are generated by AI, because it turns out it’s really good at bad writing. I’m not entirely joking, though I admit I do stack the deck against it in my prompts to elicit clumsy, clunky prose.
I’ve also found it’s wonderful for helping me summarize novels and generate scene breakdowns that I can consult as a resource when discussing story arcs and other story-level concepts—though I use it only with books I’ve read and know well enough to ascertain whether the engines are accurate or offering me hallucinations.
I’ve found it helpful for book recommendations when I’m looking for material to analyze and explore in illustrating some topic I’m tackling in an article or course, for instance when I’m seeking stories with strong voice, or multiple-storyline novels, or novels with twists and reveals. But again, I know not to fully trust it (thanks to disturbing stories like this, and this), and I use its suggestions only as a jumping-off point for my own reading and analysis of the novels it recommends.
I’ve also found AI helpful as a checklist and brainstorming tool to spark and expand my own ideas, as many authors use it. For instance, after I have developed and outlined a new course, I might then consult AI to ask things like, “What might authors want to know about X topic?” or “What tends to confuse authors about X?” When it inevitably offers to create the course or article or outline for me after yielding its answer, I close the browser. That’s my work to do; I just want to make sure I’m addressing authors’ potential needs.
It’s an assistant I entrust with low-level tasks, grunt work whose execution can serve my own higher-level efforts and save me time, without encroaching on my creativity, originality, and insight.
Weighing Whether to Use AI
With each task you take to an AI engine, first think about why you’re using it—and the potential effects. I’ve noticed even in my minimal, menial uses of it that AI fosters a certain laziness in me; it’s so very tempting to simply ask it, “Write an outline for a course for writers about X” or “Write an editorial letter based on these observations.” Doing so would save me countless hours of focused, concerted effort in thinking about, developing, and formulating my own theories and concepts and thoughts.
But that’s not what people are hiring me for. It’s not how I built the reputation I hope I have for deep, thoughtful, quality work.
And it’s not why I wanted to do this work in the first place.
This weekend my husband sent me a post on LinkedIn from ghostwriter Adam Knorr, who considered creating an AI bot of himself to lighten his workload and allow him to automate much of his work using the virtual version.
That’s an option that I confess I had been discussing with my tech-savvy hubs as a supplemental offering to authors in my own editing business: creating a “Tiffany-bot” that authors could license to get “me”-style feedback on their work even in the periods when I’m fully booked for months and closed to clients.
Knorr hired a human AI consultant to discuss the possibility, and the consultant offered unexpected advice: “He acknowledged that there are AI automations that could improve my workflow, but cautioned that getting caught up in trying to AI-ify everything would take me away from what I’m best at—writing, storytelling, building brands,” Knorr reported. “Leaning more on AI for the writing is just going to result in worse content and reduce my value to clients.”
His conclusion matches my own takeaway about using AI. I’m confident in the value of my work—and more than that, I enjoy the process of doing it.
As Knorr said of his decision to keep his business human-generated, “Would you rather be in the top 50% of AI pros? Or the top 1% of what you do?”
I think the latter way is harder—as things of worth often are. But knowing what I value in my work, my life, and my relationships makes my decisions about how I use AI easy.
Okay, once again I know I’m opening Pandora’s box—but let’s talk about how you use AI, why, and what you think about using it in general. Do you have parameters or guardrails? What considerations do you take in determining what and how much use of AI fits with your goals, ethics, and values? What concerns do you have about using it?
If you’d like to receive my blog in your in-box each week, click here.
34 Comments
I teach criminology at a tier 1 university. Each semester I receive piles of requests for a letter of recommendation for students going on to grad school or seeking their first career-level job. I am happy to do it, but it has traditionally been one of the most time-consuming tasks. Now I generate first drafts using notes I supply based on my experience with each student, as well as their resume. This turns each letter into a ten minute job, even with me editing to put my own spin on the final draft.
I’ve heard about people using AI for this, David, and it sounds like an excellent use of the efficiency the tech can offer. If you’re using your own notes/thoughts and simply having the LLMs draft a letter that you edit, that still feels authentic to me, but must save you so much time.
The tech is here and it can advantage us, I think–if we’re mindful of how we use it. Thanks for sharing this.
Pretty much agree with everything. And your lawnmowing analogy works well and could even be taken further. Once you have determined that you need or want help with a menial task like lawnmowing, you have to decide: Do you want to hire a properly registered lawnmowing company with modern functional mowers that are regularly maintained, make your grass look good, and are handled by qualified employees who are paid a decent living wage? Or do you want to outsource your task to a shady black market provider with underpaid gig workers and second-hand mowers that wouldn’t pass any safety inspection and are known to regularly make people lose their toes and fingers?
Similarly, there are decent providers of digital services (whether GenAI or otherwise) who are transparent about how they created their service/product, how it works, and how they respect your rights and privacy.
And then there are providers like OpenAI…
As both creatives and consumers, we can let companies know what kind of services we need and what kind of business practices we approve. Let’s hope people (learn to) spend their money wisely. 🙂
Love this point, Simone–the ethics of not just how we use AI but which engines we use are an important consideration in my book too. And like you, I hope we can influence some of these practices by the choices we make in which engines/services we avail ourselves of. I hope that makes a dent in how they choose to access data, and how they operate. Thanks for the comment.
I write historical fantasy fiction. There’s a good deal of historical fact in it, and I use ChatGPT for much of my research, although I do often double-check its responses. But it’s so useful to ask, for example, “What happened in Europe in AD 1274?” And then I can go from there. Other research questions include things like, “Were Scots pine trees indigenous to Wales in the 13th century?” And I also use it to write my blurbs/back cover text, because I hate doing that and I honestly think I’m no good at it. I get ChatGPT to write six or seven versions, based on my synopsis of the plot. Then I make my own version based on snippets from its various offerings. For me, this works, and it makes an impossible task doable!
I hear this type of use a lot, Rory–especially from authors in your genre and others that tend to be research-heavy. I’m always concerned about factuality with these LLMs, but if you’re checking it, then I can see where its ability to cross-reference a lot of info/detail could be really helpful in doing book research.
I have also heard about authors utilizing it for things like marketing copy, synopses, and, as you point out, back cover copy/summaries. Do you feed the whole manuscript into it or just the basics? I have done something similar with “shopping” multiple results and culling from them into one document, edited and revised by me (for instance with “catalog” type descriptions for new courses). It can definitely be a time saver! Thanks for sharing.
Some of the instances and peripheral uses you talk about sound legit, where your concept for a summary or an outline for teaching is everything and the wording itself is less important. But to me, for actual writing — if I can’t write it better myself than a machine, I should go get a job at 7-11 and let the AI have it. More importantly, if I don’t BELIEVE I can write it better myself, using AI to sub for me is only going to make that problem worse.
I’m kind of on that same page with you, Claudia. Using it to actually write anything I’m presenting as “my work” feels off and dishonest to me–and like you, I worry that leaning on it will weaken my own creativity. I try to balance those concerns with taking advantage of the time- and effort-saving tasks it can perform–as much as I’m leery of it, I also believe that it may turn into a disadvantage in the market not to use it at all. As with all technology, I think we have to just find what works for us–and not rely too heavily on it or let it take over more than we want it to. Thanks for sharing!
Thanks for this post, Tiffany. It’s very timely for me; I’ve been considering which administrative tasks I might be able to offload to it to free up my time for actual writing (I think I sometimes procrastinate with admin tasks, LOL). I’m curious: What are some of the “mechanical” tasks you use it for? For example, I’ve dabbled with using it to help me with research, so I don’t spend hours going down Internet rabbit holes (much as I’d like to). However, I always double check — and sometimes ignore — what it gives me. I’m even uncomfortable using it for brainstorming, as tempting as it might be, because it’s just not very good at it or any other kind of creative writing.
I hear a lot of authors say they use AI for research–I can see where it could offer really helpful, arcane info (and like you, I’d still always double-check its “facts,” as I’ve seen too many iterations of it getting them wrong).
I mentioned some of the tasks I use it for: book summaries and recommendations, scene-by-scene synopses, “bad” writing examples, course-catalog writeups, etc. Like you, I’m still figuring out what I might farm out to it that is helpful and reliable, and also doesn’t walk that “icky” line where it feels unethical to me.
I know a lot of authors who use it to brainstorm. I haven’t tried that yet, I admit. There’s something in me that feels like I need to rely on my own ideas for now–but the more I see it used successfully to augment others’ creativity, the more I wonder if there’s a way for me to try it for those purposes that doesn’t feel like cheating or a shortcut to me.
Great, nuanced post, Tiffany!
I can’t imagine using AI for drafting a manuscript. About midway through a first draft, my characters ‘wake up.’ That’s when the emotional heart of the story really starts to take shape. This only comes from a deep creative involvement with the characters and their world, almost an enmeshment state. The loose plot map I’ve used up until that point changes with the characters. It’s a messy way to write and it takes time, but maybe it’s supposed to be like that. I have to wonder if bypassing this process by using AI would suck the emotional heart out of the story?
Outsourcing marketing copy to AI was a tempting proposition. Apparently, the paid, thinking versions can craft good marketing copy, but only if you feed your book into it—something I’m not comfortable doing. The free versions spat out bland rubbish, even after several detailed prompts. And ChatGPT limits the number of times you can prompt in a given period. Instead, I made myself write the descriptive blurbs for the books I reviewed each month at my blog. Over time, I got better. It’s become much easier for me to write the blurbs for my own work. It took some time to learn, but I’ve gained new skills in the process, and my marketing materials have a voice that is uniquely my own.
And I’ll always want to work with a human editor and cover designer.
I’m interested to see how Eleven Labs comes along with its voice cloning, as human-narrated audiobooks are prohibitively expensive. Will more narrators license their own voices? Could I clone and license my own?
As for writing, I’d much rather do it myself.
Like you, Linda, I always found my characters in the writing of them, in seeing them in action on the page. And like you, that often meant a lot of “wasted” pages in my discard file–but for me that is the process, as you say. It’s part of what I love about it, too, despite how frustrating it can sometimes be: I feel as if I get to know the characters as I see what they do and how they react: just as we get to know people in our lives. Not only can I not imagine shortcutting that process, but then the result would feel…kind of pointless to me. The work itself–the creation of it–is why I wanted to do this: not to magically have a finished book in my hands. Though that might be nice in theory, in reality I think it would feel empty and unsatisfying.
For other “technical” writing, maybe–but again, as you point out, I am very careful and wary of that too–especially since much of my work now IS “technical”: the theories and work of writing about writing. I think those ideas must come from me and my experience. And I am leery as hell of feeding anything of my own into that sucker too. 🙂
To your point about cloning a voice–I know Joanna Penn has already experimented with that, and she reports pretty good results. I think I could probably see doing that–it does feel like a technical task in some ways. But then again…it was important to me to narrate my own audiobooks for my craft books, because the writing is so much in my own voice and personality. I wanted that to come across, and to feel I was connecting with readers/authors directly. Would it feel artificial or misleading if I’d used an AI version of my voice? I think maybe…?
Sometimes it’s wild to me that we’re at a place where we are even considering questions like this. What a world… Thanks for sharing your thoughts on this.
I love this line: “I feel as if I get to know the characters as I see what they do and how they react: just as we get to know people in our lives.” That’s exactly what it feels like to me, too.
I really appreciate your willingness to engage in these potentially divisive discussions. Your article on using AI for editing was instrumental in getting me to think more deeply about whether or not to use AI in my own writing. I’d experimented a bit with the free LLMs and felt they would hinder my creative process rather than help it. Joanna Penn sees Claude and ChatGPT as creative collaborators, and feels that working with AI really enhances her writing. We’re all different.
Agreed, Linda–ask a half dozen authors how they use it (or don’t) and you’ll get half a dozen different answers and reasons why. I know Joanna has always been an early adopter of technological advances, and she’s very open and vocal about her use of AI and other technologies in her writing (and audiobooks–she’s recently created an AI version of her voice to narrate her books and reported great success with it).
I think each of us has to determine how we’re comfortable using AI–if at all–in accordance with our goals and values. These tools aren’t inherently “bad” or “good”–it’s how we decide to use them, and I think we’re all still feeling our way with it. Thanks for the comment!
Great column, Tiffany. I appreciate your nuanced observations about a touchy topic for many writers.
I record meetings with the authors I work with, generate a transcript with Rev, and use ChatGPT to generate meeting summaries for everyone. Of course, I get participants’ permission to record, and I let them know that the summary is generated by AI. In my prompt to ChatGPT for the summary, I give it detailed instructions on what I’m looking for. (Even so, I still scribble notes during the meeting out of sheer habit!)
I edit the summary before distributing it, so I know if it’s starting to hallucinate, and if I think it missed something, I will ask ChatGPT to add that material. This process saves me a lot of time, and I’m occasionally surprised at what ChatGPT picked up that I might have missed.
If it’s an interview for a memoir (I’m a ghostwriter), I generate the transcript through Rev and then go through it line by line, sometimes playing back the recording if something doesn’t ‘look’ right. (Funny tip about Rev – it won’t transcribe swear words.)
Thanks, Jacqueline.
I like the transcription feature you’re using–I’ve used Otter for some time, and other transcription tools, which save me a TON of technical time–and I’ve often written about how much of my writing I dictate and have transcribed. And like you, I go back through and verify that it’s correct (I trust the tech very little). I do often get additional nuance from rereading interviews/exchanges–where in the moment I may not have picked up fully on something, as you say.
And the swear-word censoring annoys me no end! I don’t need AI monitoring my language, FFS. 😉 Thanks for sharing how you use it.
A thoughtful post. Thank you.
I’m a creative. Yes, I’d love to earn money from what I do, but I’d write regardless. I thrive on the process, on brainstorming, and on worldbuilding. I even make maps. I’ve no desire to allow AI to do any of those things (aside from mundane tasks, which I don’t yet do). If I allow AI to assist creatively, I’ve stepped over the line from a creator who markets, to an entrepreneur who sells soulless books.
Humanity is not ready for this, but the genie is out of the bottle and has powerful backing. Okay, fine, then we should control it, except this conversation says we aren’t. The intellectual property already stolen says we aren’t.
AI has the power to benefit the world, but humanity has proven again and again that given a powerful toy it’ll abuse it. Planes built to aid travel obliterated two cities in Japan. Guns are afforded more protections than lives. Parents walk their children while staring at videos on their phones.
Automobiles provided freedom, but people died because there were no traffic rules. My biggest issue is that anytime I think about using AI, I wonder why every drop of power it’s using isn’t being devoted to helping end suffering or saving our planet.
Okay, yes, I’m an idealist. That’s what I write, just as there were science fiction writers who foresaw this road we’re on, who foresaw the utopian internet becoming the lawless domain that it’s become.
End of rant. 😃
I’m coming to terms with the fact that human nature, bless us, is flawed and imperfect–and that we’re as likely to act for good as for its opposite, if the circumstances are right. I’m hopeful–as I seem to always be by nature–that we will find ways to use this tech, as with all advances, for good, to better our lives. But I’m also braced for those who won’t, and for effects that may be less desirable. I guess all we can do is control our own use of it, ensure we’re acting for the good–our own and hopefully on a broader scale too–and try to keep from letting it take over too much of our lives…just like with social media or the internet or any other advance.
Here’s hoping. 🙂 Thanks for your comment, Christina.
Tiffany, great article. Thank you for clarifying how AI works well and not so well. I am using AI for research in my book sequel: details of the Civil Rights Movement, Woodstock ’69, and the Vietnam War, specific to 1969-1970. Invaluable for that. I use it to brainstorm phrases I write and get an analysis. A one-sentence ending to the sequel story that I wrote and thought was perfect had already been taken, I found when I checked with AI; it’s a line from the Princeton University tribute for the late civil engineer Erik Vanmarcke. I just added two more words to it and now it’s mine again. Generally I use Microsoft Copilot, but Google gives me AI-generated answers too. Brainstorming with AI gives me an answer about which way of saying something sounds better, and I ask grammar questions and get correct answers. Searching the internet pales next to any of this. One negative about AI is the wordiness and overly descriptive words it uses, and the lack of an emotional element. I just take the good, even if it’s just one word, and say thank you, AI. Just use good sense in using it. It cuts down time in research, and double-checking is important; AI has made mistakes, and I double-check historical events with another question. It finally gets it right. Christine 🙂
I’ve heard a lot of writers say that AI has been really useful to them with this kind of research, and I can understand that (but as you say, I am always cautious about its “facts,” given its spotty track record of accuracy). Also like you, I have found that it sometimes offers me descriptions (as with writing course catalog summaries) that shake me out of my own rut of vocab and phrasing. I don’t use its ideas wholesale (I’m mindful that its results are being scraped from other authors’ work), but I do let it push me to find fresh ways of phrasing a description or considering other approaches.
It’s a fine line–and potentially a slippery slope, at least in my view, which is why I am minimal and cautious about it. But I do think each of us has to find what’s comfortable for us–and what we’re comfortable with. And I wonder about disclosure of AI-assisted writing–as a reader, I think I’d want to know if something was generated wholly by a human, or in some part by LLMs. It feels like it makes a difference to me. A friend recently told me of a conversation she had with what she thought was a human customer service agent for a business, right down to the sound of computer keys clacking in the background, but she gradually began to suspect it was an AI bot. So she asked it–and it fessed up. She felt a bit manipulated by it–especially with the insertion of the keyboard noises, as if the company were trying to mislead people into believing they were engaging with a live human. I think that’s how I feel about AI-generated writing–even if I engage with/respond to it, it feels a little disingenuous at best, dishonest at worst. I’m not saying my reaction is everyone’s–but it’s why I keep my use of it to a minimum, and mainly to technical “grunt work.”
Thanks for sharing your thoughts! It’s fascinating to me to hear the different perspectives others have on this technology. We’re stuck with it now, so I think we all need to find a way to make our peace with it, and hopefully use it to our benefit.
Tiffany, thanks for your thoughtful response. I’m probably safe in how I use it, for facts mostly. Examples: How does my main character get from the U.S. (CT) to Vietnam in 1969? What types of planes did she take? What were the stops along the way (in other countries)? What was the interior of a specific Huey helicopter like getting into Da Nang airport? What did the landscape look like approaching the Da Nang hospital? AI was a godsend with details. None of this info was available on the Internet. Right now it’s a benefit to me. And self-publishing on Amazon asks to what degree you used AI: Was it AI-generated or AI-assisted? It’s clearly defined in KDP Publishing. Mine is AI-assisted; even brainstorming is included if you created the text yourself. You don’t have to disclose if content was AI-assisted, only if it was AI-generated. Christine 📚🎶
I can imagine AI is invaluable for that kind of detailed research. And you’re not the first person I’ve heard who uses it for brainstorming! (I haven’t tried that yet but want to–though I hear the paid versions are far better at it than the free ones.)
Honestly, I think you’re “safe” in using it however you decide to–I don’t mean to prescribe what’s right for anyone else. I think it’s just worth being deliberate and mindful about how we use it (which it sounds like you are) so we don’t slide into a rabbit hole, as with social media. Thanks for sharing your thoughts, Christine.
If it doesn’t bring you joy then sub it out is my motto! (I have a gardener for the weeds so I can enjoy looking at the flowers!)
I’m very late to AI and haven’t really explored what it can do for me, but maybe in time there’ll be a use.
In other news: Righteous Gemstones – what a recommendation!!! LOVE IT! We’ve also just started watching Landman, which gives a fab insight into life in West Texas (I’m in the UK). Have you seen it?
Great motto to live by, Syl! (Though the practical side of me insists that not everything can bring us joy…but maybe the reminder is to look for it even in the picayune or challenging moments?)
LOVE that you are liking Gemstones! And you’re not the first person to recommend Landman. I have to admit the trailer doesn’t really strike a chord with me, but maybe we’ll try an episode anyway, given so many positive reviews. Thanks, Syl!
Just one episode and I bet you’ll be hooked! I’m sure ANY of the characters will become inspo for you as well 🙂
If they do, you’ll find out because I will no doubt write about it in the blog. 😉
As usual, your post as well as the comments are immensely useful and informative. I do use AI for my research and even for some pitch copy in my engineering consulting. But here is the issue. AI is a bit like using a calculator instead of working the problem by hand on paper: it just speeds up the process. But I’ve never seen AI come up with an innovative way to solve a differential equation, for example, the way some poor grad student might after thinking about it for six years. It just chugs; it’s not creative, although it can stimulate creativity, as the comments from your readers suggest. So far, real creativity is the demarcation between authors and machines. And it’s the creativity in my writing that I enjoy.
I want to point out, though, that in the not too distant future we will have incredibly creative AI, just as machines eventually beat grand masters in chess. How will this happen? I have come to believe that creativity happens when unrelated things come together in our minds and are then tested against a problem or circumstance we are trying to resolve. For example, a group of undergraduate students at MIT were walking in the woods one day and noticed burrs from plants along the trail stuck to their clothes. When they looked more closely they saw tiny hooks on the ends of the burrs. When they considered this in light of trying to find a better way to put two pieces of cloth together, Velcro was invented. Totally creative, new and a vast improvement over buttons, shoelaces, and zippers. Can a machine do that? Not now, but probably some day. And then, just like Thomas Edison’s invention of the research laboratory, we will have a very powerful tool for innovation, if well managed!
Just my thoughts. Thank you for your post and, as always, your amazing critical thinking skills.
And as usual, Jeff, your kind replies give me a surfeit of credit (but also a very warm glow…). 🙂
You make interesting points about what AI can and can’t do–authentic, unique creativity versus predictive generative outcomes. (I LOVE the Velcro story! I had never heard it.) Author Jami Gold made a similar point directly about our craft in today’s Writers Helping Writers blog. Both your thoughts are comforting and thought-provoking–but I tend to agree with you that it’s just a matter of time before AI can learn very much the way we learn, and that it is likely to result in more creative and original works. Will it be able to fully plumb/mimic the “human condition”? I don’t know. A while ago I read an article that offered some AI-generated writing that was uncannily “human” and engaging–I cited it somewhere in one of my past AI posts. I’m not remotely confident that we can get comfortable about it never being able to do what we do–at least well enough for the majority of readers.
Like you, I don’t think it’s necessarily the devil or the end of the world. There seem to be a lot of wonderful applications for AI–just as with any other technological advance or tool. It’s up to us flawed humans, though, to use it carefully, ethically, and wisely. Here’s hoping our better angels prevail.
Thanks, as always, for your interesting perspective, Jeff.
As someone many close to me have described as a luddite, I’ve been surprised by how much I’ve come to embrace AI. A primary area I’ve used it for is to help me generate content for my social media channels. I set a goal of being more active on social media this year, and AI has helped me reduce the amount of time it takes me to create and post.
The other area I’ve found it useful is asking questions while I’m writing. I’m currently working on a memoir, and instead of stopping to look something up, I’ll ask AI. This can be pretty random, like how many additional deaths there were in the pandemic that were not from COVID, or what goes back and forth between mother and child when a child is in the womb. I would typically spend ages looking up these kinds of things while writing and then go down rabbit holes. But I can get a good-enough answer from AI to keep moving forward with my work, since I’m less looking for exact numbers than for general ideas to support what I’m writing.
But my FAVORITE area of using AI has become handling non-creative things that would consume my time. For example, we got caught in a moving scam recently, and I was writing letters of complaint to various places. I was able to draft a single letter, have AI edit it, and then rewrite it multiple times based on the parameters of the sites where I was submitting my complaints. That was fantastic. It was also able to tell me exactly where to complain.
For years I wrote longhand even though computers were everywhere and I could actually type well. I honestly didn’t think my creative brain would work if I tried to start a story on the computer. I didn’t switch to drafting on a computer until someone observed that I was simply creating more work for myself, since I was going to have to put the work into the computer anyway. I guess I didn’t want to get left behind this time!
Oh, good for you for pushing past your resistance to AI (or any tech, if you’re anything like me!). I tend to be a Luddite as well, but like you, I realize I’ll get left behind if I don’t stay current with advances that affect our industry–and AI sure does.
I’ve heard other authors say they use it like you do–for questions. I’ve done that myself and it’s helpful and a time saver (though I double-check everything it tells me). I also like that you use it so you don’t stall out your writing with research–such a common pitfall.
Yes, I have also found it’s great for “grunt work” admin tasks like the letters you describe. That’s one of my favorite uses too–it’s such a time saver of something that’s nothing but annoying busy work for me, not related to my creative output, as you say.
I’m told the paid versions are better, and I expect AI will get more and more integrated into people’s workflow–and also will continue to improve to where it may do things we *don’t* want it to be capable of. But like all tech, I’m trying to remember it’s a tool and use it as such, and just try to prepare as much as possible for how it might affect our industry. Thanks for sharing your thoughts.
Hi Tiffany, I recall when we met recently at the Writer’s Digest Conference that you were planning on writing a blog piece on this topic 🙂 I enjoyed reading your thoughts here on this hot topic, as well as the comments from other writers. Totally agree you don’t need to run away from AI completely; just use it ethically and with a light touch! Another perspective I wanted to offer that others might find useful is using AI as an educational tool to “upgrade” your writing technique. As an example, as I work on my current novel draft I’ll throw a paragraph I wrote into AI and ask it to rewrite it three different ways, usually a maximalist style, minimalist style, and for general improvement. This helps me see my writing in a new light, including areas where I could tighten up or improve. Correct me if there are any ethical or IP issues doing this.
Hi, Jessica! Nice to see you here.
I don’t know about the ethics of how you use AI, but I have a couple of thoughts about it. One, more and more publishers are drawing a hard line against stories where AI was used in the actual writing (copyright issues), so that might be a problem for you if you want to trad pub. But also, I’ve seen a few instances of AI-generated (or AI-assisted) writing and revision in authors’ work, and in general it tends to strip the prose of life and voice and reads rather obviously as AI writing, so that’s perhaps something to be mindful of in how authors use its results in their writing.
I think we’re all finding our way, right? It’s a wild world out there, and changing impossibly fast.
Hi Tiffany, thank you for your thoughtful response, including bringing up the issue of publishing concerns that are important for writers to be aware of. To clarify, I don’t copy/paste any text from AI into my manuscript. Rather, I like to learn from AI’s “dissection” of my writing so I can continue improving my repertoire. (FYI – I recently picked up your book ‘Intuitive Author’ and am about halfway through and loving it!)
I’m glad to hear the book is useful!
I think we’re all having to figure out how we want to use AI–and what it may mean for us. Articles like this aren’t terribly reassuring…. 🙁