Of all the generative AI workshops and webinars I’ve attended in my professional capacities as a writer and educator, only a few have addressed the limitations and drawbacks of this technology. But if we use the technology without understanding its drawbacks, we can’t do our jobs well. The following list of sources is intended to provide concrete information, case studies, and facts to balance out the conversation.
This bibliography is by no means comprehensive, and reflects my own interests and biases; for example, there are far more resources about text content than visual content. It is annotated so that you can get the gist and source of each link without clicking through, and you can search this page (ctrl + F) to find specific topics. I will update this page with additional sources as the story of generative AI develops.
[Last updated: December 15, 2025]
If you find this resource useful to you in your work, I would be glad to hear about it. If you have favorite resources, I would like to hear about those too.
Table of contents
- Quality issues
- Inefficiency
- Pseudo-reasoning
- Implicit bias
- Disinformation
- Model collapse
- Copyright and labor issues
- Copyright lawsuits
- Other forms of redress
- Exploitation of gig workers
- Ethics of use
- Environmental impacts
- Long-term risks
- Consumer trust
- GenAI and the US government
- Essays and op-eds
- Additional resources
Quality issues
Many resources I’ve encountered agree that AI-generated content shouldn’t be public-facing. That is to say: for quality assurance, a human being should review and revise any AI-generated drafts before publication–whether the content in question is a press release or just an email to a colleague. But some of the same resources encourage users to work with generative AI models to save time at the beginning or end of the writing process.
Using large language models to proofread or polish work may raise some security issues–for example, Grammarly uses data even from its paying customers to train its own AI.
Column: These apps and websites use your data to train AI. You’re probably using one right now. (LA Times, August 16, 2023)
In addition, MIT researchers observed that LLM users exhibited weaker neural connectivity than their peers and consistently underperformed at neural, linguistic, and behavioral levels.
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (MIT Media Lab, June 10, 2025)
However, this section focuses primarily on the quality issues raised by AI-generated first drafts and preliminary work, such as the exclusion of vital information or inclusion of false information.
Inefficiency
In an Upwork survey in spring 2024, 77% of employees reported that AI tools have added to their workload, rather than saving time.
Upwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence (Upwork, July 23, 2024)
Companies That Tried to Save Money With AI Are Now Spending a Fortune Hiring People to Fix Its Mistakes (Futurism, July 6, 2025)
Pseudo-reasoning
The Australian Securities and Investments Commission (ASIC) conducted a trial to see how well Meta’s open-source AI summarized official documents. The commission concluded that AI summaries would create more work for human employees, not less, because of the need to fact-check and cross-reference.
AI worse than humans in every way at summarising information, government trial finds (Crikey, September 3, 2024)
A new study from six Apple engineers shows that even advanced large language models are not capable of logical reasoning; they can only attempt to replicate the reasoning steps observed in training data.
Apple study exposes deep cracks in LLMs’ “reasoning” capabilities (Ars Technica, October 14, 2024)
A study from MIT refutes a common claim that as AI becomes increasingly sophisticated, it develops “value systems.”
MIT study finds that AI doesn’t, in fact, have values (TechCrunch, April 9, 2025)
Implicit bias
A WIRED investigation, which included a review of hundreds of AI-generated videos, has found that Sora’s model perpetuates sexist, racist, and ableist stereotypes in its results.
OpenAI’s Sora Is Plagued by Sexist, Racist, and Ableist Biases (Wired, March 23, 2025)
In late 2023, an IBM survey showed that 42% of global IT companies were using AI screening of job candidates. Marginalized candidates tend to fall through the cracks of these screens.
AI hiring tools may be filtering out the best job applicants (BBC, February 16, 2024)
A multi-author study shared on arXiv shows that ChatGPT and Alpaca (an LLM developed by Stanford University) use very different language to describe imaginary male and female workers.
ChatGPT Replicates Gender Bias in Recommendation Letters (Scientific American, November 22, 2023)
A paper authored by researchers at the Allen Institute for Artificial Intelligence studied how racial biases in large language models have evolved as the models grow larger and organizations set up more ethical guardrails.
As AI tools get smarter, they’re growing more covertly racist, experts find (The Guardian, March 16, 2024)
Disinformation
A study from Columbia Journalism Review’s Tow Center for Digital Journalism finds serious accuracy issues with generative AI models used for news searches.
AI search engines cite incorrect news sources at an alarming 60% rate, study says (Ars Technica, March 13, 2025)
Another study, released in October 2025, found that AI assistants routinely misrepresent news content. Gemini performed worst, with significant issues in 76% of responses, more than double the rate of the other assistants.
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory (BBC, October 22, 2025)
Swedish researchers Jutta Haider, Kristofer Rolf Söderström, Björn Ekström, and Malte Rödl observed an uptick in research papers on Google Scholar with signs of undisclosed GPT use (in particular, specific phrases that are commonly found in GPT-generated content, such as “as of my last knowledge update”).
GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation (Misinformation Review, September 3, 2024)
Climate Visuals stresses the importance of authenticity in photographs of climate change. They critique AI-generated images for relying on cliches and overly familiar visual metaphors rather than real impacts on real people.
On AI images and climate change photography (Climate Visuals, March 6, 2024)
In a paper published in JAMA Ophthalmology, researchers used ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim. Their aim was to show how easy it is to fabricate a convincing data set unsupported by real data.
ChatGPT generates fake data set to support scientific hypothesis (Nature, November 22, 2023)
Model collapse
Aaron J. Snoswell, Research Fellow in AI Accountability at Queensland University of Technology, explains model collapse: a hypothetical scenario “where future AI systems get progressively dumber due to the increase of AI-generated data on the internet.” Snoswell believes that model collapse is unlikely, predicting instead a more heterogeneous landscape of content produced by human creators and AI platforms, but argues that the flood of AI-generated content carries other risks.
What is ‘model collapse’? An expert explains the rumours about an impending AI doom (The Conversation, August 19, 2024)
This interactive article shows how generated content deteriorates in quality when AI is trained on its own output.
When A.I.’s Output Is a Threat to A.I. Itself (New York Times [gift link], August 25, 2024)
To stave off model collapse, generative AI has to be trained on new data created or reviewed by human beings. This would be a costly proposition if the people supplying and handling that data were compensated fairly.
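As a rough illustration of the mechanism, the sketch below (my own toy example, not drawn from the articles above) repeatedly fits a simple Gaussian model to samples of its own previous fit, a stand-in for retraining a model on its own output. Over successive generations the fitted distribution tends to drift away from the original human data, and its spread tends to shrink.

```python
# Toy sketch of model collapse (hypothetical illustration, not from the cited sources):
# each "generation" of the model is fit only to data generated by the previous one.
import numpy as np

rng = np.random.default_rng(42)

human_data = rng.normal(loc=0.0, scale=1.0, size=50)  # original human-made data
mean, std = human_data.mean(), human_data.std()       # "generation 0" model

print(f"generation  0: mean={mean:+.3f}, std={std:.3f}")
for generation in range(1, 21):
    synthetic = rng.normal(loc=mean, scale=std, size=50)  # the model generates data
    mean, std = synthetic.mean(), synthetic.std()         # the next model trains on it
    print(f"generation {generation:2d}: mean={mean:+.3f}, std={std:.3f}")
# The estimated spread tends to decay and the mean tends to wander, so later
# generations represent the original data less and less faithfully.
```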
Copyright and labor issues
This section examines some of the legal and/or ethical issues posed by generating content with AI. Many creators whose work is used to train large language models are not compensated or even notified of this usage. Furthermore, the implementation of “guardrails” (which are necessary to curb issues such as implicit bias) and data labeling are typically carried out by underpaid and insufficiently protected gig workers. In addition, workers in some creative fields are organizing to protect their labor from being digitally replicated or replaced.
Copyright lawsuits
OpenAI in particular has faced multiple lawsuits for training its GPT models on copyrighted material. Baker & Hostetler LLP tracks ongoing copyright litigation:
Case Tracker: Artificial Intelligence, Copyrights and Class Actions
For example, the Authors Guild and 17 authors filed a class-action suit against OpenAI on behalf of fiction writers whose works have been used to train GPT models without their knowledge or permission. The complaint draws attention to the fact that the plaintiffs’ books were downloaded from pirate ebook repositories and then copied into the fabric of GPT-3.5 and GPT-4, which power ChatGPT–making it possible for AI tools to generate soundalike content and profit off of a known author’s existing reputation.
The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI (The Authors Guild, September 20, 2023)
An ongoing account of the Authors Guild’s legal action may be found here: Artificial Intelligence
Eight daily newspapers including The New York Daily News and The Chicago Tribune have sued OpenAI and Microsoft for using copyrighted articles to train their large language models.
Eight newspapers sue OpenAI, Microsoft for copyright infringement (NPR, April 30, 2024)
In response to one of the many lawsuits raised against it (this one from the New York Times), OpenAI has said that it cannot train large language models without access to copyrighted work.
‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says (The Guardian, January 8, 2024)
The first major AI copyright case in the United States was won by the plaintiff Thomson Reuters, a media and technology conglomerate that claimed the legal AI startup Ross Intelligence reproduced materials from its legal research business.
Thomson Reuters Wins First Major AI Copyright Case in the US (Wired, February 11, 2025)
Anthropic copyright settlement
In August 2025, a judge ruled that training AI on legally acquired books constitutes “fair use.” However, he also determined that the AI startup Anthropic had pirated hundreds of thousands of books, which does not fall under fair use. The settlement called for a $1.5 billion award to be split among the rightsholders of the roughly 500,000 titles included in the class, which comes out to approximately $3,000 per title.
What Authors Need to Know About the $1.5 Billion Anthropic Settlement (The Authors Guild, September 5, 2025)
Other forms of redress
Creative professionals who have the benefit of a guild or union are bringing concerns about artificial intelligence to the bargaining table.
After a 148-day strike, the Writers Guild of America reached an agreement with the Alliance of Motion Picture and Television Producers (AMPTP) which included language intended as a guardrail against replacing the labor of screenwriters with AI. The agreement specifies that AI-generated material will not be considered source material (and therefore AI-generated material cannot undermine a writer’s credit), and while a writer can choose to use AI, they cannot be required to use AI.
Summary of the 2023 WGA MBA (WGA Contract 2023, September 25, 2023)
Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) went on strike in July 2024; among other things, the union asked for protections against “exploitative uses” of artificial intelligence that would replace voice actors, motion-capture performers, and other professionals with AI-generated voices and visuals.
Video game performers are going on strike over AI concerns. Here’s what to know (PBS, July 25, 2024)
Alex N. Press, a staff writer at Jacobin who covers labor organizing, reports on how a few different industries (including but not limited to creative fields) are addressing AI in their workplaces.
US Unions Take on Artificial Intelligence (Jacobin, November 8, 2024)
Amazon’s Kindle Direct Publishing has always been a pipeline for unauthorized summaries or knock-offs of traditionally published books, but there has been an uptick since the release of ChatGPT–and it is difficult and time-consuming for authors to seek redress. Some knock-offs use summarized or revised content under the original author’s name, without their permission:
I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires) (Jane Friedman, August 7, 2023)
Others publish the summaries under a different author name, which is tougher to address, since titles technically can’t be copyrighted.
Scammy AI-Generated Book Rewrites Are Flooding Amazon (Wired, January 10, 2024)
Exploitation of gig workers
The Brookings Institution surveys the current state of data labor, including a few attempts by workers to organize and challenge exploitation.
Reimagining the future of data and AI labor in the Global South (The Brookings Institution, October 7, 2025)
The Verge’s Josh Dzieza spoke with two dozen annotators from around the world about the repetitive, low-paid work behind the scenes of training LLMs.
AI Is a Lot of Work (The Verge, June 20, 2023)
Adrienne Williams, Milagros Miceli and Timnit Gebru of the Distributed AI Research Institute report that AI technology depends on gig workers like data labelers, delivery drivers, and content moderators who perform repetitive tasks under precarious labor conditions.
The Exploited Labor Behind Artificial Intelligence (Noema, October 13, 2022)
A Time investigation revealed the working conditions of underpaid Kenyan data labelers tasked with removing sexual abuse, hate speech, and violence from ChatGPT training data.
Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (Time, January 18, 2023)
Ethics of use
Australian news outlet Crikey points out the various tangled ethical issues in training generative AI to produce inauthentic artwork in the style of Indigenous artists.
AI is producing ‘fake’ Indigenous art trained on real artists’ work without permission (Crikey, January 19, 2024)
Environmental impacts
On the user end, generating AI content is relatively quick and cheap, often free. This frictionless access conceals the fact that all of that computation happens on physical servers in the real world, which draw resources from the environment.
Of course, all internet usage requires the existence of brick-and-mortar data centers. But the energy cost of generative AI is substantially greater than a typical web search:
- Google’s AI could soon consume as much electricity as Ireland, study finds (The Next Web, October 11, 2023)
- Making an image with generative AI uses as much energy as charging your phone (MIT Technology Review, December 1, 2023)
- Generative AI’s environmental costs are soaring — and mostly secret (Nature, February 20, 2024)
As major companies like Microsoft build additional data centers to support generative AI, they are seeking out additional power sources–which is slowing (or, in some cases, impeding) progress toward global decarbonization goals.
- AI Needs So Much Power That Old Coal Plants Are Sticking Around (Bloomberg, January 25, 2024)
- Three Mile Island nuclear reactor to restart to power Microsoft AI operations (The Guardian, September 20, 2024)
And, predictably, the reliance on fossil fuel energy contributes measurably to pollution and harms public health.
- Air Pollution and the Public Health Costs of AI (Caltech News, December 10, 2024)
- Elon Musk’s xAI in Memphis: 35 gas turbines, no air pollution permits (E&E News, May 1, 2025)
The new data centers also generate a lot of heat, and need water for cooling:
- Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water (AP News, September 9, 2023)
- How much water does AI consume? The public deserves to know (OECD.AI Policy Observatory, November 30, 2024)
- A bottle of water per email: the hidden environmental costs of using AI chatbots (The Washington Post, September 18, 2024)
- Revealed: Big tech’s new datacentres will take water from the world’s driest areas (The Guardian, April 9, 2025)
In addition to these environmental impacts, new data centers are driving up the costs of consumer electric bills.
- How AI infrastructure is driving a sharp rise in electricity bills (PBS, September 5, 2025)
- AI Data Centers Are Sending Power Bills Soaring (Bloomberg, September 29, 2025)
In January 2025, a Chinese tech startup called DeepSeek released a free AI assistant that it says requires less money and computing power to operate. Claims about DeepSeek’s environmental impact are still being investigated.
DeepSeek might not be such good news for energy after all (MIT Technology Review, January 31, 2025)
DeepSeek claims to have cured AI’s environmental headache. The Jevons paradox suggests it might make things worse (The Conversation, January 31, 2025)
For more in-depth reading about the environmental impacts of AI:
In the midst of what the UN labels a worldwide water crisis, only 41% of data center operators reported on any water usage metric at all. The lack of accountability sparks concerns about data centers in drought-stricken Catalunya.
Why We Don’t Know AI’s True Water Footprint (Tech Policy Press, November 26, 2024)
A look at the scope 2 and scope 3 emissions for Amazon, Apple, Google, Meta, and Microsoft. Technically companies are not required to disclose scope 3 emissions, which is partly why the data is so messy–and worth looking at.
Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse? (The Guardian, September 15, 2024)
The Washington Post examines the various energy sources harnessed by Google, Meta, and Microsoft for newly built data centers–including coal plants and nuclear facilities that were scheduled to go offline–and their more speculative plans, such as harnessing nuclear fusion.
AI is exhausting the power grid. Tech firms are seeking a miracle solution. (Washington Post, June 21, 2024, gift link)
HEATED interviews Michael Khoo, climate disinformation program director at the nonprofit Friends of the Earth, about the difference in energy costs between typical internet use (web searches, video streaming, etc.) and generative AI use.
Are your internet habits killing the planet? (HEATED, May 28, 2024)
Paul Schütze, a researcher at the Ethics and Critical Theories of Artificial Intelligence research group at the University of Osnabrück, argues that the pursuit of sustainable AI is not driven by the desire to create resource-friendly and ethical solutions, but by capital interests.
The Problem of Sustainable AI: A Critical Assessment of an Emerging Phenomenon (The Weizenbaum Journal of the Digital Society, April 5, 2024)
A concise explainer from Friends of the Earth:
Report: Artificial Intelligence A Threat to Climate Change, Energy Usage and Disinformation (FOE, March 7, 2024)
Felippa Amanta of the University of Oxford’s Environmental Change Institute points out several ways that AI use increases energy consumption.
AI is supposed to make us more efficient – but it could mean we waste more energy (The Conversation, January 26, 2024)
Earth.org compiles research from several different sources to look at the carbon footprint, waste disposal, ecosystem impact, and transparency issues of AI technologies.
The Green Dilemma: Can AI Fulfill Its Potential Without Harming the Environment? (Earth.org, July 18, 2023)
Long-term risks
Is AI a boom, or a bubble?
Early reports suggested that ChatGPT was extremely expensive to maintain:
You won’t believe how much ChatGPT costs to operate (Digital Trends, April 20, 2023)
In January 2024: “AI-related companies lost $190 billion in stock market value late on Tuesday after Microsoft (MSFT.O), Alphabet (GOOGL.O) and Advanced Micro Devices (AMD.O) delivered quarterly results that failed to impress investors who had sent their stocks soaring.”
AI companies lose $190 billion in market cap after Alphabet and Microsoft report (Reuters, January 30, 2024)
And in August 2024: “Shares of both Google and Microsoft dipped following their earnings reports, a sign of investors’ discontent that their huge AI investments hadn’t led to far-better-than-expected results.”
Has the AI bubble burst? Wall Street wonders if artificial intelligence will ever make money (CNN, August 2, 2024)
The head of stock research at Goldman Sachs called for caution.
Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm. (New York Times [gift link], September 23, 2024)
When DeepSeek released an AI assistant in January 2025 that claimed to match the performance of OpenAI at a fraction of the cost, Nvidia (producer of the expensive chips used by OpenAI, Meta, and other tech giants) took a substantial loss. Long-term impacts remain to be seen.
DeepSeek sparks AI stock selloff; Nvidia posts record market-cap loss (Reuters, January 27, 2025)
Brian Merchant spoke to the scholars who wrote the book on tech bubbles for their assessment.
AI Is the Bubble to Burst Them All (Wired, October 27, 2025)
Consumer trust
Those of us who work in creative industries may not have invested billions in generative AI, but its ubiquity is still impacting our relationships with our own stakeholders: readers, students, consumers.
The Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.
AI use damages professional reputation, study suggests (Ars Technica, May 8, 2025)
University of Minnesota researchers found that 80% of their study respondents believed that news organizations should alert readers when AI is used to generate content–and 78% wanted an explanatory note describing how.
Most readers want publishers to label AI-generated articles — but trust outlets less when they do (Nieman Labs, December 5, 2023)
Author Kester Brewin argues that authors should be more transparent with readers about whether large language models were used to generate, suggest, improve (via Grammarly or similar), or correct (via spell check) text.
Why I wrote an AI transparency statement for my book, and think other authors should too (The Guardian, April 4, 2024)
The Digital Education Council Global AI Student Survey showed that 55% of respondents believed overuse of AI within teaching devalued education, and 52% said it negatively impacted their academic performance.
Students Worry Overemphasis on AI Could Devalue Education (Inside Higher Ed, August 9, 2024)
GenAI and the US government
In early January 2025, the Federal Trade Commission examined real and potential cases of consumer harm caused by AI: e.g. commercial surveillance, enabling fraud and impersonation, and perpetuating illegal discrimination. Their report was originally posted here:
AI and the Risk of Consumer Harm (FTC Blog, January 3, 2025)
But that blog post has since been removed.
FTC removes Lina Khan-era posts about AI risks and open source (TechCrunch, October 20, 2025)
Since the inauguration on January 20, 2025, the second Trump administration has been a proponent of unregulated AI–in word and deed.
On July 23, 2025, the Trump administration released an AI Action Plan and laid out its priorities in spoken remarks, including its resistance to regulating AI.
Trump Says He’s ‘Getting Rid of Woke’ and Dismisses Copyright Concerns in AI Policy Speech (Wired, July 23, 2025)
The administration has also shown a fondness for posting AI-generated content on White House social media accounts; for example, a video of Trump flying a plane over New York and dumping fecal matter on No Kings protesters.
Donald Trump Is the First AI Slop President (Wired, October 29, 2025)
DOGE
During the first 100 days of the second Trump administration in 2025, an initiative called the Department of Government Efficiency (DOGE) gained control of government agency information systems. Generative AI was an instrumental part of this organization’s strategy for mass layoffs, contract terminations, and dismantling of federal agencies.
For Tech Policy Press, Eryk Salvaggio argues that technology cannot replace representative politics–and shows how the automated processes employed by DOGE attempt to do just that.
Anatomy of an AI Coup (Tech Policy Press, February 2025)
This lengthy Wired article summarizes the rise and early impacts of DOGE, ending on the launch of a chatbot intended to replace 1,500 General Services Administration employees.
Inside Elon Musk’s ‘Digital Coup’ (Wired, March 13, 2025)
Also from Wired:
DOGE Put a College Student in Charge of Using AI to Rewrite Regulations (Wired, April 30, 2025)
Banning states from regulating AI
H.R. 1, a budget reconciliation bill pushed through Congress in July 2025, originally stipulated a 10-year ban on states regulating AI. That provision was cut from the final version.
Republicans push for a decadelong ban on states regulating AI (The Verge, May 13, 2025)
Senate strikes AI regulatory ban from GOP bill after uproar from the states (AP News, July 1, 2025)
Later that year:
Trump Signs Executive Order That Threatens to Punish States for Passing AI Laws (Wired, December 11, 2025)
Essays and op-eds
This section sets opinion pieces apart from the traditional reporting found elsewhere in this bibliography. Still, if you read the linked pieces in only one section, read these. Data is important, but there is nothing like an intelligently reasoned and compellingly written argument.
Emily M. Bender and Alex Hanna enumerate instances of governments using genAI tools in ways that harm and mislead constituents.
Government officials are letting AI do their jobs. Badly (Bulletin of the Atomic Scientists, May 30, 2025)
Linguist Emily M. Bender takes a critical look at a guide for using ChatGPT in classrooms–released by OpenAI–and explains why it won’t work.
ChatGPT Has No Place in the Classroom (Mystery AI Hype Theater 3000, November 22, 2024)
Ed Zitron argues that we’re not just approaching an AI bubble burst but an AI crisis, created by the extremely high costs of operating the technology coupled with the relatively low value it offers consumers.
The Subprime AI Crisis (Where’s Your Ed At?, September 16, 2024)
For an even longer, more in-depth and updated look from the same author:
The Hater’s Guide To The AI Bubble (Where’s Your Ed At?, July 21, 2025)
Tech columnist John Herrman pokes fun at the terrible text and email summaries his Google and Apple applications served up.
The Future Will Be Brief (Intelligencer, August 12, 2024)
According to Glasgow lecturers Joe Slater, James Humphries, and Michael Townsen Hicks, “bullshitting” is a philosophical term. “When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true.”
ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting! (Scientific American, July 17, 2024)
UK lecturers James Muldoon, Mark Graham and Callum Cant draw from their forthcoming book Feeding the Machine: The Hidden Human Labor Powering A.I. to explain that the seemingly autonomous technology is powered by data annotators, content moderators, machine learning engineers, data center technicians, writers and artists.
Opinion: What’s behind the AI boom? Exploited humans (LA Times, July 12, 2024)
Software engineer Nikhil Suresh does not sugarcoat his low regard for generative AI’s potential as an instrument of efficiency or optimization.
I Will Fucking Piledrive You If You Mention AI Again (Ludicity, June 19, 2024)
NPR’s Linda Holmes responds to an ad depicting a father using Google’s Gemini on behalf of his daughter to generate a fan letter to an Olympic athlete.
The antithesis of the Olympics: Using AI to write a fan letter (NPR, July 30, 2024)
Science fiction author Cory Doctorow suggests that not enough people are thinking about what can be salvaged when the AI bubble pops.
What Kind of Bubble is AI? (Cory Doctorow, December 18, 2023)
Climate justice professor and bestselling author Naomi Klein critiques the mischaracterization of AI “hallucinations,” along with some of the implausible claims made by AI proponents (e.g. AI will solve the climate crisis).
AI machines aren’t ‘hallucinating’. But their makers are (The Guardian, May 8, 2023)
Science fiction writer Ted Chiang explains the difference between a summary written by a student and one generated by ChatGPT–and what is lost in the latter.
ChatGPT Is a Blurry JPEG of the Web (New Yorker, February 9, 2023)
A fantastic profile of five women in tech who raise awareness of how different AI technologies impact marginalized communities.
These Women Tried to Warn Us About AI (Rolling Stone, August 12, 2023)
Additional resources
Will Alpine, a former Microsoft software engineer, sounds the alarm about how the AI race negates his former employer’s climate goals.
We’re Wrong About AI – Will Alpine (Ignite Seattle, February 24)
AI Colonialism
An MIT Technology Review series, supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center, investigating how AI is enriching a powerful few by dispossessing communities that have been dispossessed before.
AI + Planetary Justice Alliance is a global collective of researchers, activists, and artists. Their Observatory of Planetary Justice Impacts of AI tracks the socio-environmental impacts of artificial intelligence across its entire lifecycle.
Mystery AI Hype Theater 3000, hosted by linguist Emily M. Bender and sociologist Alex Hanna, helps break down the AI hype and separate fact from fiction. The two also co-authored The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.
Refusing GenAI in Writing Studies is a collective of rhetoric, composition, and writing studies scholars making the case for refusal as a disciplinary and principled response to the emergence of generative AI technologies.
Rhetorica by Marc Watkins is an excellent resource for teachers interested in or concerned about AI use in the classroom. In issues like Our Era of Generated Deception (August 23, 2024) and First Drafts In The AI Era (August 9, 2024), he not only raises pressing issues in AI but also proposes strategies for helping students navigate this technology.