AI and Writing: Statement and Lesson Framework
Version: 2024-08-19
If you use or modify this statement, please credit Dr. Vyshali Manivannan, Dept. of Writing and Cultural Studies, Pace University – Pleasantville (CC BY-NC-SA 4.0).
AI text generators (ChatGPT, Google Gemini, Canva Magic Write, Grok, Lex, and others) seem like infallible time- and labor-saving writing technologies. There’s been a rush to praise and uncritically adopt them without considering their true impact, so let’s flip that and start with their disadvantages and dangers before discussing their advantages.
OpenAI is significantly depleting water and fossil fuel resources.
Using OpenAI even briefly does real damage to the planet. OpenAI may have helped with research into the world’s water problems, but its water and energy consumption vastly outstrip these contributions. Training GPT-3 consumed at least 184,920 gallons of U.S. freshwater. Where a single Google search requires 0.5 mL of water (mere drops), ChatGPT consumes 500 mL of water for every 5 to 50 prompts, and one user can average 20 to 90 prompts per day (“AI’s excessive water consumption,” 2024). Water quantity, water pressure, and property values have decreased in areas near OpenAI data centers; the Colorado River is under water rationing due to depletion; and farmland communities in the Midwest face water cuts as well, with impacts to our food supply chain. Additionally, AI data centers consume vast quantities of electricity: an online search using AI consumes up to 10 times more energy than a standard search, and this consumption is expected to triple by 2030. OpenAI has proposed nuclear fusion as an alternative renewable energy source, despite the fact that nuclear fusion has not been mastered or commercialized (Kerr, 2024; Paddison, 2024).
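The per-user arithmetic implied by the figures above can be sketched out explicitly. This is a rough back-of-the-envelope estimate using only the numbers cited here; the ranges are illustrative, not measured:

```python
# Back-of-the-envelope estimate from the cited figures:
# ~500 mL of water per 5-50 ChatGPT prompts; users average 20-90 prompts/day.
ML_PER_BATCH = 500.0
BATCH_MIN, BATCH_MAX = 5, 50        # prompts covered by each 500 mL
PROMPTS_MIN, PROMPTS_MAX = 20, 90   # prompts per user per day

# Best case: efficient batches (50 prompts/500 mL), light use (20 prompts/day).
low_ml = PROMPTS_MIN / BATCH_MAX * ML_PER_BATCH    # 200 mL/day
# Worst case: inefficient batches (5 prompts/500 mL), heavy use (90 prompts/day).
high_ml = PROMPTS_MAX / BATCH_MIN * ML_PER_BATCH   # 9000 mL/day (9 liters)

print(f"{low_ml:.0f} mL to {high_ml:.0f} mL of water per user per day")
# Compare: 90 Google searches at 0.5 mL each would use only 45 mL.
```

Even at the low end, one user’s daily ChatGPT habit consumes several times the water of an equivalent number of conventional searches.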
OpenAI uses others’ work without attribution by default (leaving you to attribute its sources).
AI-generated texts come from the databases that trained them, many of which were illegally obtained. It’s important to remember that, whatever their creators claim, these systems can’t think, even in their most advanced forms. Instead, when prompted to write a draft, AI text generators plagiarize from the bodies of preexisting copyrighted text used to train them, from pirated books to newspaper articles to public forums like Reddit. They use statistics to supply the next likely character in an ongoing sequence, “spelling” words, phrases, sentences, paragraphs, and essays in a probability game predetermined not by logic or fact but by what appears most often in the datasets used to train them.
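The “probability game” described above can be illustrated with a toy model. This is a deliberately tiny sketch for teaching purposes, not how commercial systems are built (real models use neural networks over tokens, not raw character counts), but the underlying principle is the same: the output is whatever the training data makes statistically likely.

```python
from collections import Counter, defaultdict

def train(text, k=2):
    """Count which character follows each k-character context in the text."""
    counts = defaultdict(Counter)
    for i in range(len(text) - k):
        counts[text[i:i + k]][text[i + k]] += 1
    return counts

def generate(counts, seed, n=20, k=2):
    """Extend the seed by repeatedly emitting the most frequent next character."""
    out = seed
    for _ in range(n):
        ctx = out[-k:]
        if ctx not in counts:
            break
        out += counts[ctx].most_common(1)[0][0]  # most likely continuation
    return out

# The "model" can only ever recombine what it was trained on.
model = train("the cat sat on the mat. the cat sat on the hat.")
print(generate(model, "the c"))
```

Notice that the generator never decides anything is true or false; it only replays the statistics of its training text, which is why scale alone can’t eliminate fabrication.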
OpenAI makes things up and gets things wrong, requiring heavy fact-checking: you need to be as knowledgeable as a fact-checker.
AI text generators have:
- generated a mushroom foraging book that poisoned a whole family (Grady, 2024);
- spread political misinformation through AI-generated photographs and voice cloning, as in Elon Musk’s deepfaked Kamala Harris ad (O’Neil, 2024);
- created deepfake pornography of real people from their photographs and voices, a service that OpenAI is considering offering to its users;
- generated student essays that merged incompatible genres (academic essay, blog post, and commercial sales pitch);
- attributed information to the wrong writers or provided no attribution at all; and
- presented fictional information as fact.
This is because AI text generators don’t “know” how to develop an original voice, style, thought, interpretation, or argument. They answer questions partially, inaccurately, and artificially. They can’t skillfully identify genre, resulting in subtle shifts between an article summary and a book sales pitch. They use incompatible organization schemas pulled from different genres and disciplines, incongruous disciplinary conventions, and inappropriate textual materials and statistics. They can’t reliably identify the sources that contributed to their output, creating issues around citation. They “hallucinate.” Writers still learning their craft might not notice these issues, which can take multiple rounds of substantive rewriting and fact-checking to catch and correct.
In the writing process, OpenAI works best for outlining, and only if you think like a coder.
AI-generated outlines are the least likely to be plagiarized and don’t require too much extra research and writing time from you (though they still need to be fact-checked), and the draft you produce from such an outline will be your work. OpenAI works best when fed a series of prompts — not just one! — that are written like code: directive, rule-based, outcome-oriented, using the natural language equivalent of programmatic syntax. If you’ve had experience designing games, this might be intuitive for you. Otherwise, be aware that copy-pasting an assignment prompt into ChatGPT is going to produce a generic result that is likely to contain markers of plagiarism or fabrication.
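What a “directive, rule-based, outcome-oriented” prompt series might look like can be sketched as follows. The role, genre, and constraints below are hypothetical examples invented for illustration, not a guaranteed recipe; the point is the structure, explicit rules stated up front, then a narrow task, then a follow-up, rather than a pasted assignment sheet:

```python
# Hypothetical sketch of a rule-based prompt series for outlining.
# All rule text and field names here are illustrative examples.
RULES = [
    "Role: you are an outline assistant, not a draft writer.",
    "Genre: 1,500-word academic literature review for a writing course.",
    "Output: a nested outline only; no prose paragraphs.",
    "Constraint: flag any claim that needs a citation with [CITE].",
    "Constraint: do not invent sources; leave source slots blank.",
]

def build_prompts(topic, rules):
    """Return a series of prompts: rules first, then the task, then a follow-up."""
    return [
        "\n".join(rules),
        f"Task: outline a literature review on the topic: {topic}.",
        "Follow-up: list the outline sections that require fact-checking.",
    ]

for prompt in build_prompts("AI water consumption", RULES):
    print(prompt)
    print("---")
```

Each prompt in the series constrains the next response, which is what a single copy-pasted assignment prompt can’t do.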
Due to lawsuits, ChatGPT may finally implement the “AI content” watermark that AI text generators like Google Gemini already include.
OpenAI is now being forced to consider implementing an invisible and inaudible AI watermark that will identify its content as AI-generated (whether downloaded or copy-and-pasted), both as a good-faith measure in current lawsuits and to avoid further legal action. (Other AI generators, like Google Gemini, already include such a watermark to minimize their legal liability, leaving you at risk.) It may no longer make sense to use OpenAI in classes where AI-generated texts are considered an academic integrity violation.
You might feel like “I have nothing to hide,” but everyone has sensitive data and digital property they want sole ownership of.
AI technologies scan your documents, local drive, cloud drive, and browser data and share that data across platforms. Google Gemini has been caught scanning users’ private documents in Google Drive without being granted permission, though Google claims it won’t use your data for training purposes (Hale, 2024). ChatGPT shares your ideas, voice, and writing style with other users, reducing your ownership of your own work and making it easier for bad actors to “sound” like you (or create deepfakes of you, if you share selfies and personal details with these technologies).
According to OpenAI’s privacy policy, ChatGPT retains your IP address, browser type and settings, and data on how you interact with the site – including the type of content you engage with, features you use, and actions you take. It also stores information about your browsing activities over time and across websites and claims the right to share your personal information with unspecified third parties, without informing you or asking for your consent, for their own business objectives. While you can turn off chat history, the option is buried under several layers of menus, and disabling history also limits the usefulness of conversations. Unlike with other companies, you can’t ask OpenAI to delete your personal data (“ChatGPT is a data privacy nightmare,” 2023).
The uncritical adoption of AI chatbots in educational settings (without informing you of the permanent risks to your privacy) has already resulted in sensitive student data being accidentally shared and subjected to ransomware attacks, exposing students’ academic performance data, accommodations documentation, psychological evaluations, disciplinary details, and their parents’ personal information as well (Keierleber, 2024).
AI-generated essays actually magnify common problems in student writing.
Student writers often struggle with:
- uncertainty about genre conventions and appropriate style;
- confusing organization;
- inconsistent paragraphing;
- generalizations;
- logical fallacies;
- repetitiveness;
- flawed research and citation;
- inconsistency with their voice in class discussion and on other assignments; and
- problems addressing the assignment instructions.
In short, ChatGPT is not a time-saving tool unless:
- You already possess mastery of the target genre, writing style, research methodology, and material
- You know how to write a series of high-quality, rule-based prompts for ChatGPT that precisely and specifically summarize your assignment directions, target genre, writing style, research criteria, relevant stylometric rules, behavioral directives, and more
- You know how and where to integrate your own critical thinking, interpretation, and analysis based on your class discussions and previous assignments at the sentence- and paragraph-level
- You are a skilled source- and fact-checker and line editor and will verify that the information is accurate and locate the original sources on your own to provide attribution
- You are an expert in revision
Writing-enhanced courses aim to teach you that writing is an activity of thought, a skill that goes beyond correct grammar and mechanics. AI text generation doesn’t encourage the learning of reading and writing, let alone the cultivation of the thoughtful inner readerly/writerly voice that you’ll eventually need, whatever your career path. Also, AI text generation assumes that you don’t have problems with any of the above and can easily identify and correct AI-generated text.
If you possess the above skillset, AI text generators can be helpful in the prewriting and paraphrasing process. Like any tool, however, they must be used with competence and care. It’s your responsibility to center your original thinking and contributions when using AI for brainstorming or organization; ensure accuracy and substantive quality (not merely grammatical correctness) and remove offensive or problematic statements in your work; do the extra research to attribute ideas to scholars the AI doesn’t cite; disclose the specific ways AI was used, citing the system, dates, and prompts in your documentation; and triple-check for accuracy. When writing with AI, your references will always include both the AI you used (e.g., ChatGPT, Lex), cited in standard MLA or APA format, and a works cited for the AI-generated output, which you’ll have to locate yourself since the AI doesn’t cite its sources.
The MLA-CCCC Joint Task Force on Writing and AI offers initial guidance for using and evaluating AI-integrated scholarly and creative writing, as does the Association for Writing Across the Curriculum (WAC). WAC states outright:
Writing to learn is an intellectual activity that is crucial to the cognitive and social development of learners and writers. This vital activity cannot be replaced by AI language generators.
You might want to keep this in mind if you use AI language generators to help you brainstorm or organize.
THE BOTTOM LINE IS THIS: Please come to me if you’re facing circumstances that are making you desperate enough to plagiarize!
I’d much rather work out an alternate arrangement that relieves some of the pressures you’re facing than have you turn in AI-generated tripe. Restorative justice pedagogy acknowledges that external pressures (work, health, personal crisis) can reduce your available writing time; self-disappointment (“I’m bad at it”) can make you writing-avoidant or ashamed to meet with me; and GPA concerns (financial aid, honors programs, specific majors, athletics) might make you susceptible to problematic “get good grades quick” schemes.
If this sounds like you, swing by for a coffee chat. I’m open to hearing your concerns, whether material or emotional, and working to ensure an equitable learning environment for everyone.
References
AI’s excessive water consumption threatens to drown out its environmental contributions. (2024 Mar 21). The Conversation.
Association for Writing Across the Curriculum. (2023 Jan). Statement on artificial intelligence writing tools in writing across the curriculum settings.
ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned. (2023 Feb 7). The Conversation.
Grady, C. (2024 Apr 29). The AI grift that can literally poison you. Vox.
Hale, C. (2024 Jul 15). Gemini AI platform accused of scanning Google Drive files without user permission. TechRadar.
Keierleber, M. (2024 Jul 1). Whistleblower: L.A. schools’ chatbot misused student data as tech co. crumbled. The 74.
Kerr, D. (2024 Jul 12). AI brings soaring emissions for Google and Microsoft, a major contributor to climate change. NPR.
MLA-CCCC Joint Task Force on Writing and AI. (2024). Initial guidance for evaluating the use of AI in scholarship and creativity.
O’Neil, L. (2024 Aug 10). Will the government stop political deepfakes like Elon Musk’s Kamala Harris ad? Rolling Stone.
Paddison, L. (2024 Mar 26). ChatGPT’s boss claims nuclear fusion is the answer to AI’s soaring energy needs. Not so fast, experts say. CNN.