As AI apps flood the internet, how do they affect authors and editors? Is AI really such a massive deal? Is there a real concern for writers and book professionals? Cate Baum finds the answers are not what you might think.

This week, after I spent four hours rewriting an author’s blurb for his new novel, he accused me of using AI to write it. I was horrified for three reasons: firstly, I absolutely had not used AI to write it. Secondly, the accusation seemed to come from a place of fear and misinformation. But thirdly, and most horrifying for me as a professional writer: my position was entirely indefensible.

Let me explain.

Human vs. Robot

When I rewrite an author’s book blurb to publicize their book, I do so with two decades of knowledge of what is acceptable on Amazon and Goodreads. I cut down the plot, make the blurb quick to read, and check that the language is consistent and original and that it contains keywords for genre and categories, so that readers can speed-read it and make a purchasing decision.


ChatGPT cannot replace human experience – yet

This only comes with experience. It’s something I trained to do, first as a book editor with a certified institution. I have a Master’s with Distinction in Creative Writing for Novels from a top London university. I trained in SEO and PPC, and worked in media industries for years. I am now an agented author with a book coming out next year. AI never even occurred to me as a tool for the work I do every day in my profession.

I suppose I should take it as a compliment. It means my blurb came out to a standard that makes it look slick and clear. But what worries me is the accusation itself: the idea that AI is a bad and scary thing that might change the texture of reality itself. (Several years back, authors similarly freaked out about using “clickfarms” to promote books – another bogeyman in the closet of outrage that didn’t really exist.)

What AI does is amazing, for sure. But having looked into various apps for this article, I found massive issues with even attempting to use AI apps to rewrite prose. Firstly, there is no style to any of the rewrites; they sound obviously robotic and plain. Secondly, AI apps do not have access to the specific data needed to write book-based content – the content of the book itself, for a start, or the author’s specifications – so it would be impossible to use them to write copy that promotes a book as effectively as a human can.

For the same reason, AI apps cannot write reviews of books either: they cannot read eBook content.

A Place of Fear and Misunderstanding – Forums and Message Boards

It’s easy to stir outrage online. People make a living from it. There are many forums and message boards for authors where snippets about the disaster of AI can be found. But what authors need to be aware of is that when certain content creators (those who use clicks to make money – we do not) publish these so-called articles on social media, they are writing titles designed as “clickbait.” They want you to click on their link to earn money, and care little for what you take away from the usually poorly written and poorly researched content.

There’s a more sinister angle here. You can read about what clickbait does to your brain here. Atticus Dewey writes, “Websites and newspapers have started blatantly lying and passing it off as fact just to get a click... The anger from these articles prompts an immediate action. In your mind, there’s no time for fact-checking because the immediate response is to hit retweet and to tell everyone we know about this so that everyone can be as enraged as we are.”

So, they title any old lukewarm bit of writing with an extreme headline: “The End Of Writing? All About AI” or “AI Will Destroy The World, Expert Says.” In this way, we read these headlines over and over and come to believe them. Maybe AI could be the end of everything we know now; maybe it will disrupt certain systems in the sense that things will change. But change is not an end. Where tech is concerned, it’s usually progress.

It’s easy to soak in information from peers who have shared these scary snippets, but honestly, you can Google anything and find a sensational headline for things we know in our bones to be untrue. Try it.

  • “Cats eating fish dangerous” – many ‘articles’ saying fish can be toxic to cats
  • “Face cream toxic to skin” – many ‘articles’ saying you can get cancer from skin cream
  • “Water is not good for you” – the first thing that appears is an ‘article’ claiming that water is bad for the kidneys

These ‘articles’ in Google are often just some bored person on Reddit spouting an opinion that is not even checked. These are not facts. We have imagination as human beings, and a tendency to fill in the gaps. In psychology, this is known as “closure”: the tendency to take partial hints at facts and make up the connections in our brains to fit a narrative. This is what is happening worldwide in politics, gossip columns, and science and health debates, because the way the internet delivers these messages to us every day is changing our brains. The Ohio State University studied this, and found that “excess internet use has been associated with a higher risk for depression and anxiety, and can make us feel isolated and/or overwhelmed.” When we are bombarded with these high-octane messages daily, we face a world in which everything is super-dramatic and crashing down.

The truth is, AI for writing is in its infancy. It’s simply not good enough to use if you have worked with books and stories for a long period of time. Nobody I know in the industry is using these tools for writing, and probably won’t be for years to come, unless they improve in leaps and bounds.

Does this issue with AI matter right now?

Not really. Either it will remain unusable, as it is now, and we won’t be using it for copy at all; or it will become usable in the future, but you’ll still need someone like me to check it over and make sure it’s correct. Either way, you’d have to know the best ways to use it, and the nuances of the tool’s prompts. In essence, if AI ever becomes that good (which I doubt – remember Blu-Ray, holograms, and polyurethane tires?), we won’t need to worry about it, because it will do what I do and you won’t be able to tell the difference, as long as I have worked out the best ways of using it in my job. But I don’t think AI will reach those heights. I really doubt it. I can see the issues with it now, and it’s janky to say the least. And so I won’t be using it for the foreseeable future.

It’s worth noting, too, that Amazon has used a form of AI for years now in its algorithms – machine learning, the same technology that AI apps are built on. So has Meta, if you run ads on Facebook or Instagram. So we are already successfully using AI in some forms without ever really having acknowledged it. There’s not much outrage surrounding these platforms about AI, because it’s been almost invisibly rolled in over time.

It could be argued that we already use writing tools similar to AI. Microsoft Word’s spellchecker and dictionary automatically check our writing. So developments in AI are not something to be feared.

But… you can’t know what you don’t know… and neither can AI

So let’s get to the horror part of this. If an author accuses me of using AI apps to rewrite their blurb, how can I defend myself? The way AI apps such as ChatGPT work is by generating unique, one-off content on demand. If I say “rewrite this”, the app will do so. That copy will have never existed in the world in that version, and once I leave the app, it will disappear. I have no proof I didn’t use AI. But then, the author has no proof I did. This is terrible. Neither of us can prove it.

Here’s the thing, though. While the author can’t know what they don’t know, I know. Because rewriting copy using AI only does that: it rewrites what you had using similar words.

“The cat sat on the mat” becomes “The feline was reclining on the rug”

“The quick brown fox jumps over the lazy dog” becomes “The fast muddy vulpine leaps across the sleepy pooch”
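
If you’re curious what that “rewrite this” request actually looks like behind the scenes, here is a minimal sketch, assuming OpenAI’s Python client; the model name and the prompt wording are placeholders I’ve made up for illustration, not a recommendation.

    # A minimal sketch of a "rewrite this" prompt, assuming the OpenAI Python client.
    # The model name and prompt wording are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    blurb = "The cat sat on the mat."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"Rewrite this: {blurb}"}],
    )

    # The rewrite is generated on demand and exists only in this response.
    print(response.choices[0].message.content)

Notice what the sketch has no access to: the rest of the book, the genre, the keywords, or the author’s intentions. All the tool sees is the sentence it is asked to rewrite, which is exactly why the result tends to be a synonym swap like the ones above.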

That’s fine if you need a sentence thesaurus service (and maybe a little bit helpful, right?), but useless if you are faced with rewriting a text that needs to be cut down for length, punctuation, and impact, in the style of a book author the AI knows nothing about. For example, a horror novel needs to cut to the core with scary words, while a thriller needs fast-paced, slick language. I need to think about removing repeated phrasing or story points. I need to add keywords I know will help the book in Amazon’s search.

And this is how I can prove I haven’t used AI.

Conclusions

While an author might think I could easily do my job by running a “rewrite this” prompt on ChatGPT, it’s impossible to get a good or full result that way, it’s counterproductive, and it’s simply not what I trained to do. And it’s rather insulting, honestly.

Because I’m a human writer with decades of experience working with humans, I promise you this: AI tools will be of assistance in the near future. But for now, I have RI. Real intelligence. And I’m sticking with that for my clients.

