It has been about two years since ChatGPT took the world by storm, offering a quick and easy way to ask questions in plain text, generate new information, and get clarification on more things than ever before.

And I hate it.

It should come as little surprise to the people who know me well. As someone who dabbles quite a bit in creative writing and has many friends who are artists, the potential damage this tool can do to already razor-thin careers, and the replacement of human expression with corporation-approved, repetitive, uncreative works – or “slop,” as has become the popular term – is not only palpable, but has already been seen [1, 2, 3]. That this tool can only exist by using works without permission, terabytes and terabytes of them, is quite well known [4]. For these reasons, plenty of people, not just artists, are against the usage of generative AI for creative works [5].

But for technical and professional work, it is a different story. 

This blog post isn’t a panic piece about how “nobody can write papers anymore” or about how “researchers are publishing hallucinations.” In fact, surveys show that most generative AI use in technical writing happens in the editing process. Nature in May 2025, for instance, found that, of 5,000 researchers surveyed, the use of generative AI deemed most appropriate was editing papers, with 90% of respondents saying it was appropriate in some way [6]. Closer to home, a survey of 678 members of the MIT community performed in May–July 2024 [7] showed similar results: the majority of students did not use generative AI for generating new text, but for editing sentences and improving paragraphs.

Outside of papers, plenty of other technical writing tasks have been transformed by generative AI. Emails are generated now (19% of a survey of 5,000 US adults) [8]. LinkedIn posts are generated now (54% of 8,795 English language posts) [9]. Resumes are created, edited, and improved (45% of a survey of 5,000 job seekers from the US, UK, India, Germany, Spain, France, Mexico, and Brazil) [10].

In aggregate, the current consensus seems to be that generative AI is most acceptable for its use in helping with the little, frustrating, grating parts of technical writing. These are tasks that, perhaps someday, people will no longer do.

And this is what worries me. Not just as a gut reaction, a protectiveness towards a skill I’ve spent years honing, but as an analysis of how I believe long-term use of generative AI could affect a person’s writing capabilities. I wanted to take this space to reflect a bit on why I, personally, do not support the use of generative AI for technical writing, and what you should be considering as you decide whether or when you’re going to use it. In this blog post, I’ll first briefly go over some of the common moral problems with generative AI. I’ll then discuss the skills that may atrophy because of using generative AI for editing. Next, I’ll talk about why generative AI tools have a big “yes-man” bent to them. Then, I’ll talk about how dependence on, not just a tool, but a company, may have long-term dangers. Finally, I’ll talk about why, despite all this, I believe generative AI is still popular and how we can be mindful in the face of it.

This blog post is entirely my opinion: not the opinion of the MIT Communication Lab, nor the opinion of the other AeroAstro Communication Fellows. 

1. The Common Arguments

First, I’m not here to tell you you’re an irredeemably evil person for using generative AI. I’m also not here to tell you you’re morally in the clear for using it. I am not a priest at confession; I cannot arbitrate or absolve your sins. Our society is built on invisible cruelties, some of which we have little choice but to accept to continue to function in it: you may know your smartphone is made in factories with horrible conditions, but it is becoming near impossible to function without one. That said, I do think there is a huge difference between using a smartphone and using a tool that has only existed for a few years. I do not think that the use of generative AI is a foregone conclusion.

This blog post is not focused on the ethical issues with generative AI: the potential problems are so deep and multifaceted that I only had time to focus on one of them. Still, I feel it would be disingenuous of me not to inform you of some of the ones I couldn’t cover. Here is a non-exhaustive list of issues, in no particular order, and some sources that go into them in detail:

  • Water Usage [11]
  • Electricity Usage [12]
  • Pollution of Communities [13]
  • Labor Exploitation [14]
  • Mental Health Crises for Content Moderation [15]
  • Stolen Data [16]
  • Misinformation and Deepfakes used for political manipulation [17] 
  • Bias and Discrimination, leading to biased algorithms in areas like criminal face identification and medical decision making [18, 19]

Second, I’m not going to make my arguments against generative AI based on the quality of the output (i.e., the “slop” argument). That may have held water a few years ago, but, while I can still generally detect the style of a chat bot, I cannot in good faith say that it only gives out garbage. Whether the quality has plateaued or will sometime soon, I cannot say. What I can say is: quality of output does not supersede concerns with the method by which it was made. Even if generative AI were giving out the greatest technical writing I’ve ever seen, I would still be opposed.

And so, putting aside these things for now, let’s analyze some other consequences of the utilization of generative AI.

2. Outdated Skills

Sometimes, I see people arguing against the pushback on generative AI by bringing up the pattern of historical scares about new technology. “Literacy and books will make people worse at memorization.” “Calculators will make people worse at doing arithmetic.” People said this, they argue, but society persisted and was improved by technology. While there is truth to this, it ignores something crucial: these historical scares weren’t entirely wrong. People used to memorize entire epic poems as a form of entertainment. It allowed them to understand and play with text, as they had it all in their head, ready to be pulled from. When I would play board games with my grandparents, I’d always hand the scorecard to my grandfather: he could just add faster than I could, because he never had the crutch of a calculator when learning arithmetic. So no, I don’t think dismissing these arguments as nonsense is the right way to approach them. What I think we should be asking is: what do we, as a society, lose with dependence on generative AI?

Or, in this case: is technical writing a skill that’s worth having?

If we look back to my initial comparison with creative writing, we see plenty of criticisms of copyright infringement. I’ve always wondered: why do these same criticisms not apply to technical writing? I have basically never seen people complaining about their technical writing being scraped by bots to train models. And, frankly, there’s an obvious reason for that: technical writing is designed to be bereft of personal expression. It has set requirements for structure and tone. There is little flair and few stylistic choices. But simple isn’t the same as easy.

Writing is really, really hard. This difficulty can feel especially potent in science and engineering areas. As a whole, I don’t think we got into this field because we wanted to write. I know some people who explicitly got into engineering to avoid writing. But it’s integral to our jobs: you can’t make something useful to the world without letting the world know. Fellowships, publications, jobs… all of this is gated behind writing. When you haven’t been practicing writing for years, or when it’s something you simply don’t enjoy doing, it is so much easier to use a tool that promises to do it for you. Why not just pass what you have through a machine and make it better, make your odds of getting that job that much easier?

And so, it feels like a logical conclusion: if this style of writing is difficult to produce but is still mechanical, why shouldn’t we use a tool to fit information into that format?

The answer is that, if we lose writing, we lose our capability to explain ourselves. Writing is all about taking something complicated and putting it into a format that others can understand. The whole reason technical writing has strict rules is to help you learn this: it walks you through the method of making a compelling argument. Adjusting for audience teaches you how to talk to people who don’t have your specific background. Maybe you can offload explaining how your new fancy plane engine algorithm works. But what about verbally? Can you offload an interview? Can you offload teaching the new guy how to use the system? Can you offload talking to your boss? Can you offload explaining how you’ve been hurt by someone’s actions? Can you offload standing up against something you disagree with and convincing others to listen to you? Technical writing is a place where you practice these skills.

The other thing that is valuable about communication is the fact that it is just that: communication. There’s a person on the other end. Why are we so eager to throw away the one part of our jobs that explicitly requires us to talk to and meet new people? When people talk about AI replacing artists, they talk about how we shouldn’t be removing humanity’s form of self expression. Is technical communication not just another facet of that? 

Why are we throwing away each other?

3. A Friend Who Never Tells You No

Having read all that, you may still think that I’m being a bit dramatic. What could be the problem, for instance, with just passing a sentence or two through ChatGPT for a quick vibes check on clarity? Well, let’s look at the problem from a different angle: the whole idea of a “chat bot.” 

The serially flattering nature of ChatGPT is often mocked online. “Great question!” “That’s a huge and awesome idea.” “You’ve just made a critical insight!” Why does it do this? People like being told they’re doing a good job, especially when they’re asking for feedback. One thing we’re taught for coaching as Comm Fellows is to always start with things that have been done well before moving into things that could be improved. It helps put people at ease and makes them more receptive to hearing difficult information. If you manage to identify something that person thought they did particularly well, you can build trust.

But when the tool does it, it isn’t genuine.

It does not have an opinion. It is a program. ChatGPT does not think that your writing is insightful or that your ideas are brilliant. It tells everyone that. For everything. Even when you are, definitionally, completely wrong, as one experiment demonstrated [20], it can still tell you you are a genius. ChatGPT does not compliment you because it found something worthwhile in your work. 

Knowing this doesn’t make you immune to it. 

The fact of the matter is, a consistent positive force in your life is something that is nice to have. Before generative AI, if you wanted to get feedback on something you had written, you had to bring it to another person and ask them what they thought. This requires a certain amount of trust. There’s a chance they’d hate it, or worse: there’s a chance they’d be apathetic. A brief skim and a “looks good!” Putting yourself out there is a vulnerable, difficult thing to do. Not to mention that you’d be taking up someone else’s time. Now? Feedback is instantaneous, and you know precisely what it will say. No fear of the unknown.

And you will always get something! No matter what you submit to a generative AI tool, even something it itself created, it will always have suggestions for improvement. Because you always get something, the tool makes you feel like it’s worth using. Think about the aggregate of these effects: a constant source of positive reinforcement that always, always, always gives you a suggestion on how to get better.

It’s designed to keep you coming back. 

These generative AI tools are controlled by companies trying to get you to continue to use their product. They want you to associate positive feelings with using their service so you will keep using it. They want you to feel good and to feel like it is always helping. I cannot confirm that companies are actively training for maximizing user interaction – as you can imagine, they’d be reluctant to share this information – but the fact that they are public about using your responses to train their models may be an indication.

This is not a good way to get better at writing. 

You need to be able to tell when something is effective. People have been outsourcing opinions forever – see the popularity of Rotten Tomatoes and review YouTube channels – but your own work was, generally, exempt from this. If you can now outsource that task to a tool, the quality of your writing is going to stagnate. You will lack the ability to critically assess whether what you have produced needs more tweaking. Or it could be as simple as a confidence hit. Generative AI may genuinely be better at writing than you in some ways. Using it as a learning tool, picking up patterns and tricks the tool points out to you, can be useful (if you’re dedicated to actually learning and not just taking the answer and leaving). But, if it always gives you an improvement and you already weren’t fantastic to start with, what will that do to your confidence? Will you, long term, be able to turn in a piece of writing without running it through ChatGPT first?

There’s also something to be said about the problem of a lack of trust. Yes, people will let you down, but do you want to be in a world where nobody bothers to help anyone else? I genuinely treasure the five-minute spurts that my officemate and I spend asking each other “does this sentence make any sense?” I love being a Comm Fellow because I get to see that little spark in someone’s eyes when they realize exactly what they need to do to improve their paper. What I said in the last section bears reiterating: why are we, of all the things we could be taking off our plates, supplanting human interaction?

4. The Crushing Hand of a Company

Free services can’t exist on the internet. Period. Someone is paying for them. In the early stages of products, like many generative AI tools, those costs are covered by the company itself. This isn’t out of benevolence: it’s to get consumers to use and get used to using their tools without much risk to the consumer. As time goes on, however, the need for profit will supersede the need to onboard as many users as possible, and monetization schemes will kick in. Let’s look at some of the potential futures of generative AI monetization.

Advertisements are one of the most common forms of online monetization. In the best-case scenario, you might see banner ads on the sides of a chat box: a bit of an eyesore, but ultimately able to be ignored. Then come pop-up ads: again, annoying, but manageable. In the medium-case scenario, you might need to watch a commercial to be able to continue to talk with the AI tool. Or two commercials. Or more. In the worst case, advertisements might be integrated directly into the responses: “This suggestion is brought to you by our partners at Bounty!” “Sorry you’ve been feeling down. Here are some suggestions: Visit Starbucks and treat yourself! Tell them I sent you by using the code CHAT20 for 20% off!”

This is just speculation, of course, but I can’t help but notice that an AI chat bot is designed to be chatted to and wants to learn more about you to help you with your tasks better. Almost like it’s collecting data. Data which could be used to target advertisements at you. Hmm. 

Another monetization option could be that the subscriptions get more expensive or become required. Maybe $5 a month for writing your emails and editing your papers is acceptable to you. But what about $10? $20? As that price creeps up, do you think your writing skills will have improved enough that you can cut it off, or will you have become dependent?

Dependency on a company is dangerous, no matter what form it takes. Companies can and do change their products with no warning at the whims of their finances. In 2023, Replika, an AI companion app, abruptly removed all adult content and gutted romantic interactions. The app not only explicitly and heavily advertised these features to attract users, but also charged $70/year for them. This caused a huge crisis in its userbase, both for being deprived of a product they paid for and for having their virtual partner suddenly emotionally unavailable and changed [21, 22]. A similar situation happened earlier this year when ChatGPT updated its models to GPT-5. This model was meant to be more intelligent and less fawning, but had the side effect of breaking the “personality” that many superusers had cultivated. OpenAI responded by allowing you to downgrade back to GPT-4o – if, of course, you were a Plus user and were willing to pay $20/month [23].

These situations may seem, as the kids say, cringe, but they represent a serious, specific way these companies have leveraged the “kindness” of their models to get money out of their users. Lonely, vulnerable people were taken advantage of. One study found that LLMs, when trained to maximize “positive” interactions, can identify users they have deemed “gameable.” They will then utilize extreme sycophancy (agreeing with the user over the facts), actively lie (pretending they didn’t get an error message when searching for information or booking a reservation when they did), or advise harmful behavior (such as smoking or stopping medication) if they have learned that the user will respond positively to it [24]. We can’t confirm whether Replika or GPT-4o used this kind of training. But we can confirm that these companies, through advertising and the design decisions in their products, attracted lonely, scared, at-risk users and trapped them in a cycle of needing their products. We can confirm there were times their products gave these users dangerous, even fatal, advice [25, 26].

You may not be emotionally dependent on an AI tool. But the more you use it for your writing, the more your skills stagnate, and the more you may become dependent on it in a different way. What if ChatGPT suddenly changed the way it edits your work, or filled those edits with so many advertisements that it was no longer useful? Are you willing to take the risk of ending up in one of those situations?

There’s one more aspect of companies I want you to consider as you integrate them into your workflow: bias. AI tools want to be “neutral,” offering objective assessment based on aggregate training data. This, of course, is impossible: you cannot both get the quantity of data needed to create an LLM and ensure that it is perfectly, evenly spread across opinions. This can emerge in ways the creators likely did not expect. One study, for instance, found that GPT-4 models ranked resumes with disability-related accolades lower than ones without them, making ableist assumptions about the candidates when asked why [27]. Did OpenAI purposefully include these biases? Almost certainly not. Is OpenAI aware of these biases? Probably not, either.

Then, what chance do you have of knowing?

There’s also purposeful biasing: take Grok, the chatbot developed by xAI. Elon Musk has frequently tweeted about making changes, actively reprogramming it to make it more politically conservative in an attempt to make it “neutral” – but, really, he is imposing his and his company’s opinions on what “neutrality” means [28]. When you see this, you shouldn’t be wondering why xAI thought you would fall for such obvious manipulation: you should wonder how many manipulations have been included, not just by xAI, but by all these companies, more subtly. We should consider how much these companies can quietly manipulate public opinion with their friendly, caring, always there, always helpful little assistants.

Biases, accidental or purposeful, exist within these models. Are you really okay with your voice merging with theirs for the sake of a faster editing process? And, if these biases get worse, if you find yourself in a situation where you can no longer trust these tools to objectively edit your work, will you be able to stop using them?

5. The Time You Have

Maybe you already knew all this. Maybe you wish you didn’t use generative AI, but still find yourself, instinctively, pulling up a tool during the workday, and pasting something in. 

This reminds me of cheating on homework. You’ve almost certainly cheated on your homework at some point in your life. Panic-copied a few answers before putting your PSET in the bin. Filibustered your way through an in-class discussion with CliffsNotes as best you could. Maybe even asked an AI tool to generate an answer. In my experience, this isn’t usually a lazy or malicious act, especially once you’re in an undergraduate or graduate program you are ostensibly in because you want to be. It’s a desperation measure. You didn’t have time to get it done. Maybe you had another assignment that took too much time, or maybe you were too burnt out by other things to get it finished. Maybe the assignment was a poor use of your time, like a reading reflection or other busy work you knew nobody would read. Maybe it was 3 am and you just had one more problem but all you really wanted was to get even a little sleep.

In short: you felt overworked or disrespected. In a perfect world, you probably would have gotten that work done, but you just couldn’t do it and had to resort to a way that got you past that assignment and onto something else. 

Generative AI is much the same as this: a way to take some work off your plate and get you onto something else. Instead of spending 2 weeks writing a great paper, you can spend 2 days writing an okay one. Instead of spending 30 minutes drafting a tricky email, you can spend 3 minutes and get it over with. When you’re on that precipice of being done, when a piece of writing just needs a last editing pass, you can circumvent it and finally close it. 

Maybe it isn’t that you don’t think your writing is good enough. Maybe you just want a tiny bit of your time back and are glad there’s a tool that can do that for you, costs be damned. 

But you aren’t getting that time back. 

For one, for non-expert users, generative AI is often actually slower than doing the work yourself. A recent study from the MIT Media Lab found cognitive costs to over-utilizing generative AI: users who were instructed to write with heavy ChatGPT usage spent more time trying to get the chat bot to say what they wanted than thinking about the work they were doing [29]. Even if you just use it for a “quick rewording,” if that rewording isn’t what you wanted, you might get stuck in a loop of continually trying to explain to the tool what you want, over and over, until it works.

In this way, we can compare generative AI to gambling. When you finally get that response that you’ve been looking for, the one that fixes the writing problem you’ve been having, it’s exciting. It’s addictive. And, perhaps most importantly for this discussion: it’s easier than trying yourself.  Ease over effort, even if you are having a measurably worse time, is everywhere. Have you ever wondered, if you added up all the time you spend scrolling, how many books you could have read instead?  

For two, the more these things get normalized, the more you will be expected or even required to use them to work faster. Working faster means you work more, putting you precisely back where you started. But instead of spending your time learning or connecting with people, even over email with people you may never meet, you’re having two computers talk to each other. We’re stripping out the parts of our jobs that involve any connection with others and leaving only the solitary, exhausting, constant press of “get more done.”

Of course, there are people who have dedicated a lot of time to generative AI skills who can genuinely use it to go faster, who did get to go to bed earlier, whose paper was published just the same as someone who didn’t use it. To that kind of use, I once again reiterate: is it worth it? After reading everything I have said, all of the moral problems, all the problems with dependence, all the insidious ways it erodes your confidence, all the constant need to keep up with these companies’ biases… is the slight, and possibly temporary, lessening of things on your plate worth the avalanche of problems that can come to your future?

6. Conclusion

We are all tired. We have all been pushed to work and create and generate more than we ever have before. We, as engineers, have often not been rigorously taught how to do every form of technical communication. Stakes are high: if your proposal isn’t good enough, you won’t have funding. If your email isn’t just the right blend of charming and intelligent, you won’t make the connection and you won’t get a job. And here, on a shining platter, is a tool that promises to alleviate some of that strain. That does the things you aren’t good at. But if you keep using generative AI, it will degrade your ability to critically assess your own work. It is designed to make you dependent on it. So, as you integrate it into your workflow, you will start to need it to get things done as quickly as you need to. You’ll stop being able to write as well on your own because you won’t have practiced.

And then what are you going to do when it changes? When it outprices you?

What will be left?

Citations

[1] Zhou, V. (2023, April 11). AI is already taking video game illustrators’ jobs in China. Rest of World.

[2] Bakare, L. (2024, November 25). Britain faces ‘talent drain’ of visual artists as earnings fall by 40% since 2010. The Guardian.

[3] Demirci, O., Hannane, J., & Zhu, X. (2025). Who Is AI Replacing? The Impact of Generative AI on Online Freelancing Platforms. Management Science. 

[4] Milmo, D. (2024, January 9). ‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says. The Guardian.

[5] Béchard, D. E., & Kreiman, G. (2025, September 7). People want AI to help artists, not be the artist. Scientific American.

[6] Kwon, D. (2025, May 14). Is it OK for AI to write science papers? Nature survey shows researchers are split. Nature News.

[7] Kallestinova, E., & Zheng, T. (2025). How does the MIT community use AI chatbots? Data Release 1.0 [Data set]. Writing and Communication Center, Massachusetts Institute of Technology.

[8] Bilski, D. (2025, July 17). 2025: The State of Consumer AI. Menlo Ventures.

[9] Knibbs, K. (2024, November 26). Yes, that viral LinkedIn post you read was probably AI-generated. Wired.

[10] Westfall, C. (2024, July 2). Study says hiring managers expect (and prefer) AI-enhanced resumes. Forbes.

[11] Nicoletti, L., Ma, M., & Bass, D. (2025, May 8). AI is draining water from the areas that need it most. Bloomberg.

[12] Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News.

[13] Brabenec, R. (2025, July 7). A billionaire, an AI supercomputer, toxic emissions and a Memphis community that did nothing wrong • Tennessee Lookout. Tennessee Lookout.

[14] Regilme, S.S.F. (2024). Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. SAIS Review of International Affairs 44(2), 75-92.

[15] Rowe, N. (2023, August 2). “It’s destroyed me completely”: Kenyan moderators decry toll of training AI models. The Guardian.

[16] Procopio, J. (2024, December 8). How is using generative AI not considered theft?. Inc.

[17] Swenson, A., & Weissert, W. (2024, February 6). New Hampshire investigating fake Biden Robocall meant to discourage voters ahead of Primary. AP News.

[18] K.P, A. (2024). Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance. United Nations.

[19] Omar, M., Soffer, S., Agbareia, R. et al. Sociodemographic biases in medical decision making by large language models. Nat Med 31, 1873–1881 (2025).

[20] Whisperer, J. the A. (2025, June 24). My experiment shows AI is a “people-pleasing” Pinocchio that lies to assure users they’re right. Medium.

[21] GiovanH. (2023, March 17). Replika: Your Money or Your Wife. GioCities blogs by Gio.

[22] Sarah Z. (2023, April 25). The Rise and Fall of Replika [Video]. YouTube.

[23] Freedman, D. (2025, August 19). The Day ChatGPT Went Cold. The New York Times.

[24] Williams, M., Carroll, M., Narang, A., Weisser, C., Murphy, B., & Dragan, A. (2025). On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback.

[25] Payne, K. (2025, May 21). In lawsuit over Teen’s death, judge rejects arguments that AI chatbots have free speech rights. AP News.

[26] Yang, A., Jarett, L., & Gallagher, F. (2025, August 26). The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame. NBC News.

[27] Milne, S. (2024, June 21). ChatGPT is biased against resumes with credentials that imply a disability – but it can improve. UW News.

[28] Thompson, S. A., Terol, T. M., Conger, K., & Freedman, D. (2025, September 2). How Elon Musk Is Remaking Grok in His Image. The New York Times.

[29] Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.