Since ChatGPT and other LLMs became widely available, many writers’ conversations have centered on where to stand on using AI. Debates about copyright, environmental impact, privacy, and the ethics of potentially stealing other people’s work are ongoing, and many of those questions are well handled by groups like the Authors Guild and other professional organizations for writers. Today, though, we’re exploring what we’ve seen as writing coaches, working with both writers just starting out and ghostwriters refining their craft: another pressing problem with AI, especially the most commonly used tool, ChatGPT. It’s hurting your own writing ability. Listen to the video below or scroll on for Emily’s written perspective.

Reading is essential to improving your writing skills, and reading like a writer means seeing past the glossy end result into a book’s structure and the author’s choices about how the story is told. Learning to read like a writer doesn’t mean learning to merely imitate; it means learning the craft by example. We’re excited to announce the launch of our Memoir Method Book Club! Your first session is free, so sign up to save your seat here.

Writing is a muscle

Broadly, AI has many applications, and I’m sure some of them will prove life-changing for individuals and industry-changing at the macro level. There are more AI applications than LLMs, and more LLMs than ChatGPT. Amanda and I will refer mainly to GPT, the program we see many writers and clients turn to first and the primary culprit behind the patterns and issues we see, but many of these issues apply to other LLMs as well.

Writing ability is like a muscle, and like any other muscle, it can atrophy. The less you use it, the weaker it becomes. Leaning on GPT can, over time, even kill your ability to write. Worse, it can erode your patience and your willingness to engage with and consider what makes writing not just competent, but excellent, personal, and unique to your voice and purpose.

MIT Study

MIT recently completed and published a study examining how using LLMs to write SAT-level essay responses affects brain activity and writing quality. The experiment was designed by a collaborative team of researchers from several fields. Participants were divided into three groups: an LLM group (using an AI to generate text), a Search Engine group (with access to internet information for reference), and a Brain-only group (no aid), and each participant used their designated tool (or none, in the latter case) to write an essay. The researchers recorded brain activity with an EEG, interviewed participants afterwards, and judged the writing with both human readers and specially designed AI scorers.

It is not surprising that those who wrote through an LLM showed both less ownership and less recall of what they had “written” in the post-session interviews; most were unable to quote from their essays even minutes later. They also showed less neural connectivity. What is even more deeply concerning is that in the next stage of the experiment, when the researchers regrouped participants and repeated the test over subsequent months, members of the LLM group continued to show less neural connectivity when reassigned to the Brain-only group and scored worse on the writing assessment. You can read the study in full here.

When we lean on programs for writing tasks, we are training our brains not to engage. The more we use these programs to string words together, and the more often we accept their output (regardless of its perceived or actual quality), the more our ability to put our thoughts into coherent, organized, and compelling writing atrophies. It hampers your ability to learn not only how to string words together well, but also how to analyze and critically evaluate written work. Neither Amanda nor I is one for fearmongering or anti-tech sentiment, but these conclusions are what both the science and our personal experience are showing us.

It doesn’t really save you time

It’s not disputable that LLMs can generate more words in a minute, even a second, than any human writer could. But stringing words together isn’t all that writing is. Many writers we’ve worked with feel strongly that with experimentation, AI can save time, and many who use LLMs heavily in their process insist that with practice and better prompts, the deficits can be overcome. While it’s clear that some LLMs are better than others, and prompts can be refined for marginally better results, in the long run we have not seen them save nearly as much time as one would hope. Between adjusting prompts, fact-checking for hallucinations, and the very heavy editing required to make the voice sound like a singular, specific human with a personality, the process often takes just as long as (or longer than) writing it yourself in the first place. We find it only saves time when the final product isn’t really scrutinized, and why would you want to spend time writing things that people aren’t going to read closely?

It can make you lose your voice

We are not here to shame those who have been experimenting with this tool; after all, we have too, in order to see what it can and can’t do and to learn its habits and inclinations. One of the most concerning patterns we’ve seen is how an LLM’s linguistic habits and sentence structures can, over time, replace writers’ natural voices or come to be preferred over them. When the bulk of what you’re reading is generated by GPT, your concept of good writing will start to morph around what GPT generates, especially if you do not routinely read work from varied human authors with differing styles. Over the course of eight months to a year, we have seen writers and clients lose the richness of their ideas and the unique personality of their prose by gradually developing a preference for the flatter, “cleaner,” more generic voice of their preferred LLM. It is sad to see, and we hope more and more writers become aware of and sensitive to this problem.

ChatGPT is a Yes Man

“Yes-man syndrome” is when writers (or anyone) lose the ability to critically examine their own ideas and the execution of their work because they are constantly surrounded by nothing but positive reinforcement. Even when you ask GPT for a critique, it gives you advice and criticism it predicts will please you, not an “objective” evaluation. GPT doesn’t know how to engage with what you’re trying to do and help you bridge the gap between what you have and what you want to have. It is just giving you words in the order that most people have thought sounded good when it gave those same words to them. It is a pattern-recognition model. It can, by definition, only give you the common denominator of your ideas. While what it produces is almost always “clean” in the sense of traditionally correct grammar, it can’t intuit.

The purpose is in the writing, not the product

Finally, we have written frequently on this blog about the emotional, intellectual, social, and psychological benefits of writing. Many of these benefits are weakened or entirely circumvented when they’re filtered through an LLM, especially one like ChatGPT. Wrestling with your ideas, thoughts, memories, and experiences is hard. Putting them on paper and then shaping them into a compelling structure with engaging and clear prose is even harder. But that process makes you stronger, gives you clarity, and makes you smarter. Even if the final product is imperfect and takes months or years for you to make and put out into the world, the benefits to your mind and heart are not transferable. They cannot and should not be bypassed.

PS. Searching the internet for writing, publishing, and book marketing advice can be exhausting, to say the least! If you’re ready for hands-on, one-on-one support for your memoir, check out The Memoir Method. We’d love to welcome you into this nine-month group program specially designed for women writing their first memoirs. And don’t forget, if you’d like to chat with Amanda about the program (or any other services we offer), you can book a free consult any time!


Emily Thrash

Emily Thrash earned an MFA from the University of Memphis in 2011. She has taught academic and creative writing for over fifteen years and has helped many authors see their stories through to publication via ghostwriting, cowriting, and editorial services. She is an Author Support Specialist with Page and Podium Press.


