WRITING WITH(OUT) Gen A.I.
A Brief For Instructors
This brief contextualizes AI refusal for you and/or provides language you can adapt for your courses and AI-refusal advocacy. If you use or modify this statement, please credit Dr. Vyshali Manivannan, Dept. of Writing and Cultural Studies, Pace University – Pleasantville (CC BY-NC-SA 4.0).

A white page with black text from a management manual: “A computer can never be held accountable, therefore a computer must never make a management decision.”
tl;dr (too long; didn't read) Version
Writing is thinking. Therefore, large language models (LLMs) and generative AI (GenAI) technologies have no place at any stage of the writing process. Writing is painstaking work we do to refine our ideas and communicate them to others. LLMs are designed to produce plausible-sounding permutations of text, with no ideas or thought behind them. They violate academic integrity. They encourage delusion. They destroy the environment. They do all this to produce synthetic, mediocre text in the name of capitalistic imperatives. They reduce your capacity for thought across all areas of human activity. They reduce human intentionality.
If you advocate for, accept, or ignore GenAI’s encroachment into higher education, ask yourselves:
- As educators, is our job to teach writing, reading, and thinking, or to encourage our students to outsource the thought process itself for the sake of more and faster mediocrity?
- As learners, what do we lose—professionally, socially, and personally—by allowing GenAI to rob us of the opportunity to develop our critical and creative skills?
- As humans, is cognitive offloading for any part of any task worth long-term cognitive atrophy? For young and/or vulnerable people, is it worth the risks of delusion, psychosis, and/or suicide?
Finally, if GenAI promises total disengagement from critical thought—its raison d’être—what’s left for humans to do once that goal is achieved?
Unabridged Version
Responses to Generative AI (GenAI) technologies in higher education vary but are mostly enthusiastic. GenAI refusal has emerged as an ethical, disciplinary position in Writing Studies, based on disciplinary knowledge in composition, rhetoric, and creative writing (Sano-Franchini et al., 2025). Despite its name, this disciplinary position doesn’t mean GenAI must be prohibited. Instead, GenAI refusal only asks student writers, teachers, and scholars to make informed decisions about its use, starting from the position that GenAI is neither inevitable nor more efficient and that its impacts and risks far outweigh its alleged benefits.
As Sano-Franchini et al. (2025) note in their Refusing GenAI in Writing Studies: A Quickstart Guide, refusal is intentionally, carefully chosen for its connotation and rhetorical effects. This statement summarizes and builds on their premises, with special attention to providing language that can be borrowed to argue for AI refusal with colleagues and administrators who use and systematize GenAI technologies with gleeful, willful ignorance of the consequences of doing so.
0. What is GenAI?
GenAI is an AI subfield that uses generative models and large language models (LLMs) to produce text, images, videos, or other forms of data. AI text generators mimic and plagiarize the bodies of preexisting text that train them and predetermine their output, from the Internet to datasets pirating copyrighted books. They use statistics and probability to supply the next likely character in an ongoing sequence, “spelling” words, phrases, sentences, paragraphs, and essays. GenAI technologies—like ChatGPT, Copilot, Grok, Google Gemini, Lex, Canva Magic Write, and others—seem like infallible time- and labor-saving writing technologies. However, even in their most advanced forms, they can’t think. They work best when fed a series of many, many prompts that are written in the natural language equivalent of programmatic syntax and combine a sense of audience (a directive, rule-based, outcome-oriented program) with purpose (how to engineer the appearance of interpretive thinking, which requires facility with interpretive thinking).
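The prediction mechanism described above can be made concrete with a deliberately crude sketch: a character-level frequency model that always emits the statistically most common next character from its training text. This is a toy, not a real LLM (which uses neural networks over subword tokens), but the core move—parroting the statistics of the training data with no ideas behind the output—is the same.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each character, which characters follow it in the corpus."""
    successors = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current][nxt] += 1
    return successors

def generate(model: dict, seed: str, length: int) -> str:
    """Repeatedly emit the most frequent next character seen in training."""
    out = seed
    for _ in range(length):
        counts = model.get(out[-1])
        if not counts:  # nothing in the training data follows this character
            break
        out += counts.most_common(1)[0][0]
    return out

model = train("the theory of the thing")
print(generate(model, "t", 5))  # reproduces training-data statistics, not meaning
```

Everything the sketch "writes" is recombined from what it was fed; scale the corpus up to the Internet and the parameters up to billions and you have the same gesture with better grammar.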
GenAI technologies are inherently racist and linguistically unjust. Their datasets privilege dominant white Eurocentric ideologies and language. They maximize casualties in massacres of civilians. They accelerate climate change, leading to water crises and extreme weather in the global South and in the U.S. They spread political misinformation, such as Grok’s insistence on white genocide over apartheid in South Africa (Kerr, 2025). They exploit underpaid content moderators in Africa who filter traumatizing and illegal content out of GenAI outputs (also indicating that human workers are essential to GenAI’s operation).
GenAI technologies are also inherently ableist. GenAI technologies that claim to “serve” or “improve the quality of life for” disabled people—like glorified AI-enhanced autocomplete functions that take over the composition process—are disability dongles, solutions to imagined or perceived problems largely designed for the comfort of nondisabled people. In addition to spreading misinformation about vaccines, measles, COVID, autism, chronic illness, medication, wellness fads, and other public health concerns, they help insurance companies deny claims and delay care.
Given the rush to praise and uncritically adopt GenAI technologies without considering their true impact, let’s start with their disadvantages and dangers.
1. Rhetorically humanizing GenAI technologies obscures the critical fact that they can’t think or feel and frequently return generalizations, inaccuracies, and falsehoods.
“Artificial intelligence,” “machine learning,” “chatbots,” “hallucination,” “write” (Sano-Franchini et al., 2025). The metaphors surrounding GenAI combine automated perfection with man-made sentience to convince users of its infallibility and creativity. These conflations enable corporations to argue that their widespread intellectual property theft should be forgiven, along with GenAI’s environmental harms and uses in war crimes.
2. GenAI technologies misunderstand core principles of Writing Studies and what makes writing effective, resulting in writing that perpetuates popular misconceptions about writing.
Where Writing Studies discourages linguistic and stylistic homogeneity and plagiarism surveillance and doesn’t equate grammatical correctness with prowess, GenAI promotes one-size-fits-all, correctly formatted, error-free prose. Its lack of linguistic and stylistic variation precludes any sense of originality in craft and content and actively attracts plagiarism accusations. This is worsened by the inclusion of its own outputs in its training datasets, which increases the quantity of disinformation its corpus contains. Ultimately, these misunderstandings create more labor for student writers, who must map the tool’s decisions about content and craft, substantially revise, and prove the sources being used are both real and appropriate, and for instructors, who must ensure that students do this work.
3. GenAI technologies magnify common problems in student writing…and with the education system.
While GenAI technologies might be productive tools for writers who have already mastered writing in the target discipline and genre, fact-checking, editing, and copy-editing are exponentially more tedious and time-consuming when they involve someone else’s work. Writers still learning their craft usually struggle with issues that GenAI tools smoothly and confidently get wrong, such as adherence to (or intentional divergence from) a discipline- and genre-specific style, methodology, organizational schema or affective shape, depth of critical and/or creative thought, facts, and citation. These writers often fail to notice when GenAI produces these issues, usually needing multiple rounds of reviewing, fact-checking, and rewriting to polish the work. In short, AI-generated texts can add days to a given writing task instead of saving time.
Student writers who are pressed for time tend to believe that 1) GenAI will save time (instead of adding days to a given writing task), and 2) GenAI products can be trusted as-is, without a “post-production” stage that’s more extensive, tedious, and laborious than composing the text themselves.
4. When a measure becomes the target, it ceases to be a good measure.
Students have been trained to believe that their futures hinge on good scores. Traditional assessment seems punitive and attuned to “what the professor wants” instead of what best communicates student learning to the professor and what “feels like learning” to the student. Students are trained to pass exams and write essays that sound like exam responses to get the best score possible. GenAI technologies exacerbate and exploit the disconnect between writing assignments and why and how they benefit learners.
5. GenAI technologies are just different variations of a search engine trained on datasets of mixed validity.
GenAI works by searching its datasets and supplying consecutive characters in the order they’re likely to appear, based on what appears most in the illicitly obtained materials used to train them (Morrison). Corporations didn’t seek permission for most of these materials and even included pirated media in their training datasets. According to the U.S. Copyright Office (2025), GenAI outputs infringe on the reproduction right, the right to prepare derivative works, and, depending on the content type, the public display and public performance rights of original authors; therefore, GenAI outputs are not fair use and can’t be copyrighted. This hasn’t stopped GenAI companies from stealing others’ work without attribution because “how else will GenAI survive?” Hypocritically, despite this harm to individuals, media piracy and plagiarism are treated as criminal, unethical acts because they harm corporations…which is why sites like Libgen, Anna’s Archive, SciHub, The Pirate Bay, and 12ft.io, which offer free literature, scholarship, music, television, and film, should definitely not be recommended.
6. GenAI technologies reduce the amount of intention and beauty in the world, contributing to rising rates of STEM unemployment.
As Chiang (2024) observes, “ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes [a sentence] a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something.” GenAI products “look wrong,” “feel soulless,” and are as unearned as declaring victory in a football game at halftime or defeating Kefka with the Vanish-Doom cheat in Final Fantasy 6. GenAI technologies require no discipline or responsibility on the part of the users and, in fact, promise that users will not have to “earn” the output. Professional writers, writing instructors, scholars, and artists tend to recognize AI-generated texts as empty mimicry, mechanically rearranged content that exploits and tokenizes the talent and labor of original creators. GenAI technologies tend to be most impressive to users who are unskilled or inexpert in the task or area they’re using it for, and who are thus most vulnerable to GenAI’s missteps, because they won’t be able to recognize them.
Writing, art, and computer coding, historically associated with ineffable genius or “black magic,” are especially vulnerable to this perception, but of the three, only computer code can be accurately produced by GenAI. This may be why recent graduates who majored in writing studies, art history, and philosophy are outperforming graduates in STEM fields, particularly computer science and engineering, who face higher rates of unemployment amid increased demand for creative thinking and “soft skills” (CNBC, 2025).
7. GenAI does not liberate us in any way, and we do not benefit from GenAI tools.
GenAI technologies are designed to accelerate our pace of work in order to maximize the amount of work that can be extracted from us. Tech companies insistently promote GenAI technologies to generate more revenue for themselves and more content for their profitable GenAI tools. This effectively hijacks our leisure time, as our completed tasks are replaced with new tasks with increasing speed. As workers, we are alienated from the products and tools of this labor.
8. Treating writing as a “necessary evil” instead of the intentional communication of scholarly and/or creative activity has serious implications for critical thinking, creative process, civic engagement, and linguistic variation and expression.
Social media is full of students complaining that their peers “can no longer think for themselves” (e.g., r/College). GenAI actively atrophies human cognition, including the capacity for reading, writing, and critical thinking, jeopardizing the cultivation of the thoughtful readerly and writerly disposition that all writers need, regardless of discipline. Studies by Microsoft found that higher confidence in and reliance on GenAI technologies correlate with a lower capacity for intellectual and creative thought and greater difficulty with planning for or completing tasks when GenAI isn’t available (Lee et al., 2025).
This is because GenAI’s operation requires “cognitive offloading,” and habitual cognitive offloading for activities ranging from the banal (“Grok, is that true?”) to the social (the stated desire by GenAI companies for users to replace “IRL” friends with a GenAI tool; the rising popularity of barely legal AI girlfriends with no lives or needs of their own) to the complex (a set of coding or writing tasks with a project in mind) results in cognitive atrophy. Put colloquially, GenAI technologies advance a permanent state of brain-rot. Moreover, because GenAI presents its answers as evidence, the propensity of GenAI users to approach writing with a disposition toward proof instead of exploration—what the answer should be, not what it could be—is problematically high.
9. GenAI technologies are exploitative and harmful.
GenAI runs on an extractive economic model that steals the work of writers, artists, scholars, professors, students, content creators, and social media users to build profitable technologies that don’t benefit or credit the original content creators, that damage the environment, and that are used to create racially informed target lists and kill people at mass scale. They assist in drone strikes on civilians and in ICE’s detainment and deportation of people based on racial profiling and social media content.
GenAI technologies are also significantly depleting natural resources, including the clean fresh water used to generate electricity for data centers and cool servers. Training GPT-3 alone consumed at least 184,920 gallons of U.S. freshwater. Water quantity, water pressure, and property values have already decreased in areas near AI data centers; the Colorado River is under water rationing due to depletion; water crises in the U.S. and abroad have deepened; and farmland communities in the Midwest face water cuts as well, with impacts on our food supply chain. AI’s projected water usage could hit 6.6 billion m³ by 2027. Additionally, AI data centers consume vast quantities of energy: one online search using AI consumes up to 10 times more energy than a standard search, and this consumption is expected to triple by 2030. GenAI executives have proposed nuclear fusion as an alternative energy source, but fusion has not yet been mastered or commercialized (Paddison, 2024). Using GenAI technologies even briefly does real, irreversible damage to the planet.
10. GenAI actively prevents deep learning and the cognitive conditions for knowing how to learn.
AI-generated slop is oversaturating media environments, from Internet search results to entertainment content to computer code to resources users might turn to for help with writing (like Grammarly). GenAI products from prewriting to final drafts contain factual errors, inappropriate disciplinary techniques, and fabricated sources. AI text generators have: generated a mushroom-foraging book that poisoned a whole family (Grady, 2024); spread political misinformation through AI-generated photographs and voice cloning, as in Elon Musk’s deepfaked Kamala Harris ad (O’Neil, 2024); created deepfake pornography of real people from their photographs and voices; generated student essays that merged incompatible genres; attributed invented information to the wrong writers, omitted attribution, or cited invented sources, as in the White House’s MAHA report; and presented fictional information as fact (LaChance, 2025).
Because GenAI is a proprietary technology (not open source), GenAI users are barred from understanding how it makes decisions. This in turn hinders their ability to understand what makes a text “successful” on its own merits; instead, the success of the text is yoked solely to the guidelines that prompted its creation.
11. To learn how to write, writers must write.
Just as the purpose of education is education itself, not just a degree, the purpose of college writing isn’t a perfect essay but learning to make, rationalize, and modify writing decisions in response to specific parameters. Writing is a human activity of thought and self-expression, a way of cultivating relationships, engaging in inquiry, growing as thinkers, developing our capacities to engage with and transform the world—skills that go beyond grammatical correctness and optimization. Creative works and theoretical ideas never moved or inspired anyone because they were written efficiently (Chiang, 2024).
“Refusal can be a principled and pragmatic response to the incursion of GenAI technologies in college writing courses” (Sano-Franchini et al., 2025). We encourage you to begin with refusal and become informed about these technologies before adopting or promoting them.

Jurassic Park’s Ian Malcolm in a dim room with a projector light behind him, captioned: “I’ll tell you the problem with the GenAI that you’re using here: It didn’t require any discipline to attain it. You didn’t earn the knowledge for yourselves so you don’t take any responsibility for it.”
For Students:
Food for thought to facilitate student discussion if necessary

In the classroom, GenAI technologies reinforce the incorrect assumption that grades are more important than understanding, along with the myth that GenAI guarantees a good grade in a way that the struggle to learn does not. The truth is: Learning isn’t supposed to feel frictionless, and there are almost zero instances where the effects of using GenAI technologies outweigh their risks.
For one, GenAI comes with serious privacy concerns that extend to all areas of our professional and personal lives. These tools scan our computing environments, cloud drives, browser activity, and streaming-service usage, and share that data across platforms. Google Gemini has been caught scanning users’ private documents in Google Drive without being granted permission, though Google claims it won’t use your data for training purposes (Hale, 2024). Copilot reads your emails to summarize them. ChatGPT shares your ideas, voice, and writing style with other users, reducing your ownership of your own work and making it easier for bad actors to “sound” like you (or deepfake you). We’re automatically opted into GenAI technologies and can’t fully disable them, and unlike with other companies, you can’t ask OpenAI to delete your personal data (‘ChatGPT is a data privacy nightmare,’ 2023).
The uncritical adoption of GenAI technologies in educational settings (without informing you of these permanent risks) has already resulted in the exposure and sharing of sensitive student data, including academic performance data, accommodations documentation, psychological evaluations, disciplinary details, and parent information (Keierleber, 2024).
Furthermore, GenAI technologies often create more work for you, not less, in the classroom. Whatever their creators claim, GenAI technologies can’t think. They’re glorified search engines combing through a database corrupted with mis- and disinformation, fiction, and ineffective writing. They use statistics to supply the next likely character in an ongoing sequence, “spelling” words, phrases, sentences, paragraphs, and essays in a probability game predetermined not by logic or fact but by what appears most in the datasets used to train them (Morrison).
Thus, they can only answer questions poorly, partially, inaccurately, and artificially. They can’t develop an original voice, style, thought, interpretation, or argument. They can’t skillfully identify genre, resulting in jarring stylistic choices. They mix incompatible organizational schemas and disciplinary conventions and use inappropriate textual materials and statistics. They can’t identify the sources that contributed to their output, creating issues around citation. They “hallucinate.” If you’re still learning your craft, you might not notice when these issues occur without multiple rounds of substantive rewriting and fact-checking. And if you aren’t very well read in a particular field, you might not be able to tell whether an author or source is real or made up.
In short, GenAI is not a time-saving tool unless:
- You already possess mastery of the target genre, writing style, research methodology, and material
- You know how to write a series of high-quality rule-based prompts for a given GenAI tool that precisely and specifically summarizes your assignment directions, target genre, writing style, research criteria, relevant stylometric rules and behavioral directives, and more
- You know how and where to integrate your own critical thinking, interpretation, and analysis based on your class discussions and previous assignments at the sentence- and paragraph-level
- You are a skilled source- and fact-checker and line editor and will verify that the information is accurate and locate the original sources on your own to provide attribution
- You are an expert in revision
Writing is an activity of thought, a way of learning that goes beyond correct grammar and mechanics. AI text generation doesn’t encourage the learning of reading and writing, let alone the cultivation of the thoughtful inner readerly/writerly voice that you’ll eventually need, whatever your career path.
Anti-racist, anti-ableist, restorative pedagogies acknowledge that external pressures (work, health, personal crisis) can reduce your available writing time; self-disappointment (“I’m bad at it”) can make you writing-avoidant or ashamed to meet with me; GPA concerns (financial aid, honors programs, specific majors, athletics) might make you susceptible to problematic “get good grades quick” schemes. If this sounds like you, swing by for a coffee chat. I’m open to hearing your concerns, whether material or emotional, and working to ensure an equitable learning environment for everyone.
Discussion Question: Metacognition
What are the activities you really want GenAI to do for you, and why? For instance: Would you want GenAI to decide what book you should read? To decide how you should play a video game or sports match? To make your personal styling decisions? To decide on your home decor? To decide if you should or shouldn’t be resuscitated or placed on life support if you’re near death? To choose your pet?
Of the items you said no to, why did you say no? Because your desire to do them yourself outweighs any friction you might experience in doing them? Because they’re frictionless activities already? Or because you don’t trust GenAI to make the right decision for you?
If you use GenAI for writing, what do your responses to the above questions suggest about how you understand writing and the role it plays/will play in college and beyond?

What Can I Do?
Acceptance of GenAI isn’t inevitable! Some things you can do right now are:
- Regularly review your settings in your email, OS, MS Office, browsers, social media, and smartphone to make sure you have disabled GenAI. (You might have to dig around for this, as companies purposely make it hard to turn GenAI off.)
- Try to think of convenience as the trade-off for privacy and security. Choose inconvenience sometimes! Delete your browser history and clear cookies and temp files regularly, and don’t treat your email like cloud storage. Save anything you need to preserve to your local drive. Minimize your use of the cloud as much as possible.
- If you maintain a website, modify robots.txt to try to prevent GenAI technologies from scraping your material, or create a “honeypot” to trap them.
- Use a “hardened” browser with security and anti-AI settings already installed, like Mullvad Browser (you don’t need to pay for Mullvad VPN, although a VPN is always useful) or LibreWolf. Try to avoid Edge or Chrome.
- Use a VPN on your computer and phone whenever possible (Proton, Mullvad, etc.).
- Stay up to date by following the Refusing Generative AI blog.
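To the robots.txt point above: a minimal file at the root of your site can ask known GenAI crawlers to stay away. The user-agent strings below are ones these companies have publicly documented (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google’s AI training, ClaudeBot for Anthropic), but the list changes over time and compliance is voluntary—which is why some site owners also deploy honeypots as a backstop.

```text
# robots.txt — request that common GenAI crawlers not scrape this site
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

Place the file at the top level of your domain (e.g., yoursite.edu/robots.txt); subdirectory copies are ignored by crawlers.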
Also, instructors don’t need to police GenAI use in the classroom. Instead, in line with a pedagogical emphasis on metacognition (or thinking about thinking), emphasize that students should always be able to rhetorically read the assignment and identify and explain their writerly decisions—whether or not GenAI is used. This approach both saves you the time and emotional labor of policing GenAI use and promotes effective and compassionate writing instruction.
Additionally, these essays contain strategies for talking to GenAI advocates about why they use the technology and why you oppose it:
- Edward Zitron, How to Argue with an AI Booster
- Anthony Moser, I Am an AI Hater
- Nick Sousanis, One-Page Comic AI Refusal Policy
And finally, Save the AI created a toolkit that uses guerilla advertising in the form of satirical flyers and postcards.
References
AI’s excessive water consumption threatens to drown out its environmental contributions (2024 Mar 21). The Conversation.
ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned (2023 Feb 7). The Conversation.
Chiang, T. (2024). Why A.I. isn’t going to make art. The New Yorker.
Grady, C. (2024 Apr 29). The AI grift that can literally poison you. Vox.
Hale, C. (2024 Jul 15). Gemini AI platform accused of scanning Google Drive files without user permission. Tech Radar.
Keierleber, M. (2024 Jul 1). Whistleblower: L.A. schools’ chatbot misused student data as tech co. crumbled. The 74.
Kerr, D. (2024 Jul 12). AI brings soaring emissions for Google and Microsoft, a major contributor to climate change. NPR.
Kerr, D. (2025 May 14). Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats. The Guardian.
LaChance, N. (2025 May 31). RFK Jr.’s disastrous MAHA report seems to have been written using AI. Rolling Stone.
Lee, H., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Microsoft.
Morrison, A. Meta-writing: AI and writing. Composition Studies, 51(1), 155-161.
Nondo, N. (2023 May 19). Facing disturbing content daily, online moderators in Africa want better protections and a fair wage. CBC Radio.
O’Neil, L. (2024 Aug 10). Will the government stop political deepfakes like Elon Musk’s Kamala Harris ad? Rolling Stone.
Paddison, L. (2024 Mar 26). ChatGPT’s boss claims nuclear fusion is the answer to AI’s soaring energy needs. Not so fast, experts say. CNN.
Sano-Franchini, J., McIntyre, M., & Fernandes, M. (2025). Refusing GenAI in writing studies: A quickstart guide.