A year in AI news

Since fall 2023, I have spent about 15 hours in webinars and in-person workshops learning about artificial intelligence–specifically about generative AI and how it could impact my work as a professional writer, a published creative writer, and a creative writing instructor. A few of these workshops offered a balanced look at the technology, with information about how it works and how it could be used along with a realistic look at its drawbacks. These have been helpful to me; I think that if we are going to accept this technology in our jobs and classrooms, then we need to understand its limitations as well as its potential. I have endeavored to pass the relevant information on to my colleagues and students.

Other workshops offered little more than sales pitches, uncritically repeating the claims made by generative AI creators and vendors. Such as: Generative AI is cheap, it costs pennies on the dollar to use; generative AI is getting smarter all the time, and soon it will be able to match human cognitive abilities; generative AI is the future, and in less than a year it will be everywhere. Many of these claims are challenged by peer-reviewed research and data. I bookmark news coverage on these topics whenever I see it, and share the most interesting articles, explainers, and opinion essays in my monthly reading round-ups. After a year of collecting AI news, I wanted to step back and get a big-picture view of what we know.

The annotated bibliography that follows is by no means comprehensive, and reflects my own interests and biases; for example, there are far more resources about text content than visual content. This bibliography will also be outdated the moment I post it, so this version will remain on my site as a blog post, but I will also publish my sources on a separate page that I will continue to update as the story of generative AI develops.

If you find this resource useful to you in your work, I would be glad to hear about it. If you have favorite resources, I would like to hear about those too.

Table of contents

  • Quality issues
    • Implicit bias
    • Disinformation
    • Model collapse
  • Copyright and labor issues
    • For creators
    • For gig workers
    • For creative professionals
  • Environmental impacts
  • Long-term risks
    • Consumer trust
  • Essays and op-eds
  • Other sources

Quality issues

Most resources and workshops I’ve encountered agree that AI-generated content shouldn’t be public-facing. That is to say: for quality assurance, a human being should review and revise any AI-generated drafts before publication–whether the content in question is a press release or just an email to a colleague. Instead, we have been encouraged to use generative AI models to save time at the beginning or end of the writing process. Using large language models to proofread or polish work may raise some security issues–for example, Grammarly uses data even from its paying customers to train its own AI–but this section focuses on the quality issues in content drafts generated by AI.

In an Upwork survey in spring 2024, 77% of employees reported that AI tools have added to their workload, rather than saving time.
Upwork Study Finds Employee Workloads Rising Despite Increased C-Suite Investment in Artificial Intelligence (Upwork, July 23, 2024)

The Australian Securities and Investments Commission (ASIC) conducted a trial to see how well Meta’s open-source AI summarized official documents. They concluded that AI summaries would create more work for human employees, not less, because of the need to fact-check and cross-reference.
AI worse than humans in every way at summarising information, government trial finds (Crikey, September 3, 2024)

Implicit bias

In late 2023, an IBM survey of global IT professionals found that 42% of companies were using AI to screen job candidates. Marginalized candidates tend to fall through the cracks of these screens.
AI hiring tools may be filtering out the best job applicants (BBC, February 16, 2024)

A multi-author study shared on arXiv shows that ChatGPT and Alpaca (an LLM developed at Stanford University) use very different language to describe imaginary male and female workers.
ChatGPT Replicates Gender Bias in Recommendation Letters (Scientific American, November 22, 2023)

A paper by researchers at the Allen Institute for Artificial Intelligence studied how racial biases in large language models have evolved as models grow larger and organizations add more ethical guardrails.
As AI tools get smarter, they’re growing more covertly racist, experts find (The Guardian, March 16, 2024)

Disinformation

Swedish researchers Jutta Haider, Kristofer Rolf Söderström, Björn Ekström, and Malte Rödl observed an uptick in research papers on Google Scholar with signs of undisclosed GPT use (in particular, specific phrases that are commonly found in GPT-generated content, such as “as of my last knowledge update”).
GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation (Misinformation Review, September 3, 2024)
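
At its core, the researchers’ approach is a search for telltale chatbot boilerplate left behind in published text. Here is a minimal sketch of that kind of phrase-matching; apart from the phrase quoted above, the phrase list and the sample abstract are my own illustrations, not the study’s actual search terms or data.

```python
# Minimal sketch: flag text containing boilerplate phrases chatbots often
# leave behind. Only the first phrase comes from the article above; the
# rest of the list and the sample abstract are illustrative inventions.
TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "i don't have access to real-time data",
]

def flag_gpt_boilerplate(text: str) -> list[str]:
    """Return every telltale phrase found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample_abstract = (
    "Sea levels continue to rise; however, as of my last knowledge update, "
    "regional projections remain uncertain."
)
print(flag_gpt_boilerplate(sample_abstract))
# -> ['as of my last knowledge update']
```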

Climate Visuals stresses the importance of authenticity in photographs of climate change; AI-generated images tend to rely on cliches and overly familiar visual metaphors, rather than real impacts on real people.
On AI images and climate change photography (Climate Visuals, March 6, 2024)

In a paper published in JAMA Ophthalmology, researchers used ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim. Their aim was to show how easy it is to create a data set that is not supported by real data.
ChatGPT generates fake data set to support scientific hypothesis (Nature, November 22, 2023)

Model collapse

Aaron J. Snoswell, Research Fellow in AI Accountability at Queensland University of Technology, explains model collapse: a hypothetical scenario “where future AI systems get progressively dumber due to the increase of AI-generated data on the internet.” Snoswell believes that model collapse is unlikely, predicting a more heterogeneous landscape of content produced by human creators and AI platforms, but argues that the flood of AI-generated content poses other risks.
What is ‘model collapse’? An expert explains the rumours about an impending AI doom (The Conversation, August 19, 2024)

This interactive article shows how generated content deteriorates in quality when AI is trained on its own output.
When A.I.’s Output Is a Threat to A.I. Itself (New York Times [gift link], August 25, 2024)

To curtail model collapse, generative AI has to be trained with new data–created or reviewed by human beings. This would be a costly proposition, at least if human data sources and handlers were compensated fairly.
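
To make the feedback loop concrete, here is a toy simulation of the scenario Snoswell describes, assuming the “model” is nothing more than a one-dimensional Gaussian repeatedly refit to its own samples. It is my own sketch, not code from any of the linked pieces.

```python
# Toy "model collapse" loop: each generation is trained only on samples
# drawn from the previous generation's model, with no fresh human data.
# The model here is just a 1-D Gaussian (a mean and a standard deviation),
# standing in for a real generative model.
import random
import statistics

mean, stdev = 0.0, 1.0        # generation 0: rich, varied "human" data
samples_per_generation = 20   # small training sets exaggerate the drift

for generation in range(1, 51):
    # Synthetic training data comes entirely from the previous model...
    data = [random.gauss(mean, stdev) for _ in range(samples_per_generation)]
    # ...and the next generation is fit to that synthetic data alone.
    mean = statistics.fmean(data)
    stdev = statistics.stdev(data)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread = {stdev:.3f}")

# On most runs the spread drifts downward and the outputs grow ever more
# uniform; the fix, as noted above, is to keep adding data created or
# reviewed by human beings.
```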

Copyright and labor issues

This section examines some of the legal and/or ethical issues posed by generating content with AI. Many creators whose work is used to train large language models are not compensated or even notified of this usage; furthermore, the implementation of “guardrails” (which are necessary to curb issues such as implicit bias) and data labeling is typically carried out by underpaid and insufficiently protected gig workers. In addition, workers in creative fields are organizing to protect their labor from being digitally replicated and replaced.

For creators

Australian news outlet Crikey points out the various tangled ethical issues in training generative AI to produce inauthentic artwork in the style of Indigenous artists.
AI is producing ‘fake’ Indigenous art trained on real artists’ work without permission (Crikey, January 19, 2024)

Amazon’s Kindle Direct Publishing has always been a pipeline for unauthorized summaries or knock-offs of traditionally published books, but there has been an uptick since the release of ChatGPT–and it is difficult and time-consuming for authors to redress. Some knock-offs use summarized or revised content under the original author’s name, without their permission:
I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires) (Jane Friedman, August 7, 2023)
Others publish the summaries under a different author name, which is tougher to address, since titles technically can’t be copyrighted.
Scammy AI-Generated Book Rewrites Are Flooding Amazon (Wired, January 10, 2024)

OpenAI in particular has incurred multiple lawsuits for training its GPTs on copyrighted material.

The Authors Guild and 17 authors filed a class-action suit against OpenAI on behalf of fiction writers whose works have been used to train GPT models without their knowledge or permission. The complaint draws attention to the fact that the plaintiffs’ books were downloaded from pirate ebook repositories and copied into GPT-3.5 and GPT-4, the models that power ChatGPT–making it possible for AI tools to generate soundalike content that users may attempt to pass off as human-written, profiting off of a known author’s existing reputation.
The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI (The Authors Guild, September 20, 2023)

Eight daily newspapers including The New York Daily News and The Chicago Tribune have sued OpenAI and Microsoft for using copyrighted articles to train their large language models.
Eight newspapers sue OpenAI, Microsoft for copyright infringement (NPR, April 30, 2024)

In response to yet another lawsuit (this one from the New York Times), OpenAI has said that it cannot train large language models without access to copyrighted work.
‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says (The Guardian, January 8, 2024)

For gig workers

The Verge’s Josh Dzieza spoke with two dozen annotators from around the world about the repetitive, low-paid work behind the scenes of training LLMs.
AI Is a Lot of Work (The Verge, June 20, 2023)

Adrienne Williams, Milagros Miceli and Timnit Gebru of the Distributed AI Research Institute report that AI technology depends on gig workers like data labelers, delivery drivers and content moderators who perform repetitive tasks under precarious labor conditions.
The Exploited Labor Behind Artificial Intelligence (Noema, October 13, 2022)

For creative professionals

Creative professionals who have the benefit of a guild or union are bringing concerns about artificial intelligence to the bargaining table.

After a 148-day strike, the Writers Guild of America reached an agreement with the Alliance of Motion Picture and Television Producers (AMPTP) which included language intended as a guardrail against replacing the labor of screenwriters with AI. The agreement specifies that AI-generated material will not be considered source material (and therefore AI-generated material cannot undermine a writer’s credit), and while a writer can choose to use AI, they cannot be required to use AI. 
Summary of the 2023 WGA MBA (WGA Contract 2023, September 25, 2023)

Video game performers represented by SAG-AFTRA (Screen Actors Guild-American Federation of Television and Radio Artists) went on strike in July 2024; among other things, they asked for protections around “exploitative uses” of artificial intelligence that would replace voice actors, motion-capture performers, and other professionals with AI-generated voices and visuals.
Video game performers are going on strike over AI concerns. Here’s what to know (July 25, 2024)

Environmental impacts

On the user end, generating AI content is relatively quick and cheap, often free. The frictionless access to AI conceals the fact that all that massive computing must take place on a physical server that exists somewhere in the world–taking up physical space that might otherwise be a meadow or wetland–and requires electricity for power and water for cooling.

Of course, all internet usage requires the existence of brick-and-mortar data centers. But the computing cost of generative AI is substantially greater than that of a typical web search, and the new data centers generate a lot of heat and need water for cooling. Here’s a concise explainer from Friends of the Earth:

  • AI systems require an enormous amount of energy and water, and consumption is expanding quickly. Estimates suggest a doubling in 5-10 years.
  • Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year where climate policy will be central to the debate. 
  • The current AI policy landscape reveals a concerning lack of regulation on the federal level, with minor progress made at the state level, relying on voluntary, opaque and unenforceable pledges to pause development, or provide safety with its products.

Report: Artificial Intelligence A Threat to Climate Change, Energy Usage and Disinformation (FOE, March 7, 2024)

For more in-depth reading into the environmental impacts:

A look at the scope 2 and scope 3 emissions for Amazon, Apple, Google, Meta, and Microsoft. Technically companies are not required to disclose scope 3 emissions, which is partly why the data is so messy–and worth looking at.
Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse? (The Guardian, September 15, 2024)

The Washington Post examines the various energy sources harnessed by Google, Meta, and Microsoft for newly built data centers–including coal plants and nuclear facilities that were scheduled to go offline–and their more abstract plans, such as harnessing atomic fusion.
AI is exhausting the power grid. Tech firms are seeking a miracle solution. (Washington Post, June 21, 2024, gift link)

HEATED interviews Michael Khoo, climate disinformation program director at the nonprofit Friends of the Earth, about the difference in energy costs between typical internet use (web searches, video streaming, etc.) and generative AI use.
Are your internet habits killing the planet? (HEATED, May 28, 2024)

Paul Schütze, a researcher at the Ethics and Critical Theories of Artificial Intelligence research group at the University of Osnabrück, argues that the pursuit of sustainable AI is not driven by the desire to create resource-friendly and ethical solutions, but by capital interests.
The Problem of Sustainable AI: A Critical Assessment of an Emerging Phenomenon (The Weizenbaum Journal of the Digital Society, April 5, 2024)

Felippa Amanta of the University of Oxford’s Environmental Change Institute points out several ways that AI use increases energy consumption.
AI is supposed to make us more efficient – but it could mean we waste more energy (The Conversation, January 26, 2024)

Earth.org compiles research from several different sources to look at the carbon footprint, waste disposal, ecosystem impact, and transparency issues of AI technologies.
The Green Dilemma: Can AI Fulfill Its Potential Without Harming the Environment? (Earth.org, July 18, 2023)

Long-term risks

Is AI a boom, or a bubble?

Even early reports suggested that ChatGPT was extremely expensive to maintain:
You won’t believe how much ChatGPT costs to operate (Digital Trends, April 20, 2023)

In January 2024: “AI-related companies lost $190 billion in stock market value late on Tuesday after Microsoft (MSFT.O), Alphabet (GOOGL.O) and Advanced Micro Devices (AMD.O) delivered quarterly results that failed to impress investors who had sent their stocks soaring.”
AI companies lose $190 billion in market cap after Alphabet and Microsoft report (Reuters, January 30, 2024)

And in August 2024: “Shares of both Google and Microsoft dipped following their earnings reports, a sign of investors’ discontent that their huge AI investments hadn’t led to far-better-than-expected results.”
Has the AI bubble burst? Wall Street wonders if artificial intelligence will ever make money (CNN, August 2, 2024)

Consumer trust

Those of us who work in creative industries may not have invested billions in generative AI, but its ubiquity is still impacting our relationships with our own stakeholders: readers, viewers, students, etc.

University of Minnesota researchers found that 80% of their study respondents believed that news organizations should alert readers when AI is used to generate content–and 78% wanted an explanatory note describing how.
Most readers want publishers to label AI-generated articles — but trust outlets less when they do (Nieman Lab, December 5, 2023)

Author Kester Brewin argues that authors can be more transparent with readers about whether large language models were used to generate, suggest, improve (via Grammarly or similar), or correct (via spell check) text.
Why I wrote an AI transparency statement for my book, and think other authors should too (The Guardian, April 4, 2024)

The Digital Education Council Global AI Student Survey showed that 55% of respondents believed overuse of AI within teaching devalued education, and 52% said it negatively impacted their academic performance.
Students Worry Overemphasis on AI Could Devalue Education (Inside Higher Ed, August 9, 2024)

Essays and op-eds

Throughout this bibliography I have tried to keep traditional reporting separate from opinion pieces, which are collected here. Still, if you read the linked pieces in only one section of this bibliography, read these. Data is important, but there is nothing like an intelligently reasoned and compellingly written argument.

Ed Zitron argues that we’re not just approaching an AI bubble burst but an AI crisis, created by the extremely high costs of operating the technology coupled with the relatively low value it offers consumers.
The Subprime AI Crisis (Where’s Your Ed At?, September 16, 2024)

Tech columnist John Herrman pokes fun at the terrible text and email summaries his Google and Apple applications served up.
The Future Will Be Brief (Intelligencer, August 12, 2024)

According to Glasgow lecturers Joe Slater, James Humphries, and Michael Townsen Hicks, “bullshitting” is a philosophical term. “When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true.”
ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting! (Scientific American, July 17, 2024)

UK lecturers James Muldoon, Mark Graham and Callum Cant draw from their forthcoming book Feeding the Machine: The Hidden Human Labor Powering A.I. to explain that the seemingly autonomous technology is powered by data annotators, content moderators, machine learning engineers, data center technicians, writers and artists.
Opinion: What’s behind the AI boom? Exploited humans (LA Times, July 12, 2024)

Software engineer Nikhil Suresh does not sugarcoat his low regard for generative AI’s potential as an instrument of efficiency or optimization.
I Will Fucking Piledrive You If You Mention AI Again (Ludicity, June 19, 2024)

NPR’s Linda Holmes responds to an ad depicting a father using Google’s Gemini on behalf of his daughter to generate a fan letter to an Olympic athlete.
The antithesis of the Olympics: Using AI to write a fan letter (NPR, July 30, 2024)

Science fiction author Cory Doctorow suggests that not enough people are thinking about what can be salvaged when the AI bubble pops.
What Kind of Bubble is AI? (Cory Doctorow, December 18, 2023)

Climate justice professor and bestselling author Naomi Klein critiques the mischaracterization of AI “hallucinations,” along with some of the implausible claims made by AI proponents (e.g. AI will solve the climate crisis).
AI machines aren’t ‘hallucinating’. But their makers are by Naomi Klein (The Guardian, May 8, 2023)

Science fiction writer Ted Chiang explains the difference between a summary written by a student and one generated by ChatGPT–and what is lost in the latter process.
ChatGPT Is a Blurry JPEG of the Web by Ted Chiang (New Yorker, February 9, 2023)

A fantastic profile of five women in tech who raise awareness of how different AI technologies impact marginalized communities.
These Women Tried to Warn Us About AI (Rolling Stone, August 12, 2023)

Other sources

Webinars and in-person sessions

Virtual presentation on AI. College of Liberal and Professional Studies Staff Summer Retreat. August 7, 2024. In-person event and discussion.

Create Your Next Digital Campaign with AI Assistants + GPTs. EduWeb Summit. July 9, 2024. In-person masterclass.

Epic Content Marketing for Higher Education. EduWeb Summit. July 9, 2024. In-person keynote.

Understanding ChatGPT and AI Implications for Your Teaching. Penn Arts & Sciences Online Learning. April 23, 2024. Webinar.

Artificial Intelligence: Revolution and Opportunity in Trade Publishing. Publishers Weekly. September 27, 2023. Web conference. Available on YouTube.

Understanding ChatGPT and AI Implications for Your Teaching. Penn Arts & Sciences Online Learning. September 22, 2023. Webinar.

Can A Robot Run Your Marketing?: AI in Climate Tech Marketing. Alder & Co. September 7, 2023. Webinar. 

Newsletters

Rhetorica by Marc Watkins is an excellent resource for teachers interested in or concerned about AI use in the classroom. In issues like Our Era of Generated Deception (August 23, 2024) and First Drafts In The AI Era (August 9, 2024), he not only raises pressing issues in AI but also proposes strategies for helping students navigate this technology.
