ChatGPT Wrapped (An AI’s Year in Review)

Casey Fiesler
8 min read · Nov 30, 2023

--

Image Credit: OpenAI’s DALL-E and countless uncredited and uncompensated artists whose work contributed to training this model.

OpenAI’s ChatGPT launched officially on November 30, 2022, and over the past year has often dominated news cycles for days at a time. In addition to my frequently updated spreadsheet of AI ethics and policy news, I have also been keeping a more systematic eye on ChatGPT. This spreadsheet contains five news stories (via a Google News search) about ChatGPT for each week for the past year, starting December 1, 2022.[1] What follows is a non-comprehensive look at major stories over the past year about ChatGPT, with an eye towards the evolution of the technology and its criticisms. (And what a season finale!)

Week 1: Enter chatbot. Within 24 hours of ChatGPT’s launch we were seeing viral Twitter threads of experiments that might seem quaint now (explain zero point energy in the style of a cat) but that at the time made people understand quite quickly what it could do, and that it could seem both brilliant and weird. But people were also immediately talking about how easily it could go from entertaining (explain how to remove a peanut butter sandwich from a VCR in the style of the King James Bible) to problematic (explain how to make a molotov cocktail).

Week 2: What is this thing? In addition to countless explainers, we began to see a whole lot of “ChatGPT wrote this!” (and similar local news segments), and even ChatGPT writing the explainers. But the “what is this?” was combined with a lot of “what does this mean?” with immediate think pieces about the potential for disruption.

Weeks 3–4: But wait… With time for some of the initial novelty to begin to wear off, we saw more deep dives into ChatGPT’s limitations. Was it “smart enough” to justify the immediate concerns around job loss? And sure, it could do some of your schoolwork, but would it be a good student? (And with final exam season upon us, there was a lot of reflection about what this meant for education.) We also started to hear about the inevitable oncoming AI tech race.

Weeks 5–7: Mitigation and misinformation. In the first weeks of 2023, the conversations about education picked up steam as new school terms started and schools had to decide what to do. This resulted in a number of outright bans of the technology in schools. School bans were an attempt to mitigate cheating, but also reflected broader recognition of misinformation and inaccuracy from the chatbot (and the difficulty in spotting it). Academic venues also began to address use of ChatGPT. But even amidst warnings about other types of potential risks, there was also still a great deal of enthusiasm for the usefulness and possibilities of the technology as more people began to use it.

Weeks 8–11: Microsoft and Google enter the chat. In late January 2023, Microsoft announced a multibillion-dollar investment in OpenAI, and shortly after, integrated OpenAI’s tech into its Bing search engine. Meanwhile, Google unveiled Bard, its rival to ChatGPT. ChatGPT also hit 100 million users, making it the fastest-growing user base ever. But as the AI race sped up, we also continued to hear about the ethical implications, particularly when both the Bing and Bard launches included hallucinations and factual errors.

Weeks 12–14: Picking up speed. ChatGPT continued to “blow minds” while we heard about many specific use cases of the technology — from the good (learning assistance for visually impaired students) to the bad (church sermons that “lack soul”) to the “what do we think about this?” (the boom of AI-written books). Chinese apps removing ChatGPT also pointed to the potential for a global AI race, and the release of ChatGPT’s API opened the chatbot floodgates.

Weeks 15–17: Growing up. At three months old, ChatGPT hit a growth milestone with the announcement of GPT-4, which many found immediately impressive, though we continued to hear about some significant growing pains, like security flaws and a negative impact on the workforce. Meanwhile, Google officially released Bard and the head-to-heads began, with many users putting ChatGPT on top.

Weeks 17–21: The need for rules. Regulatory pressure heated up considerably, as did overall pushback against AI. ChatGPT was banned in Italy over privacy concerns, the EU created a privacy taskforce, and in the U.S., President Biden and Senator Schumer both made calls for AI regulation. OpenAI also faced its first defamation lawsuit and criticism over security risks. The “AI Pause” letter called for a temporary pause on development of more powerful models, citing the potential future risk to humanity, which also sparked criticism over a lack of attention to current AI harms.

Weeks 22–25: No seriously, rules. Regulatory pressure continued, with OpenAI attempting to assuage concerns. CEO Sam Altman testified before Congress. Italy allowed ChatGPT to return, with promises of new privacy protections. There was an arrest in China over a use of ChatGPT that violated its law around deepfakes. And as we neared the end of the spring school term, we heard that ChatGPT was killing homework-assistance site Chegg, and that many teachers were trying hard (sometimes inappropriately) to catch students using ChatGPT to cheat. OpenAI also released a ChatGPT iPhone app and a subscription service.

Weeks 26–30: Risks near and far. Over the month of June 2023, we saw recognition of current ethical issues with ChatGPT and similar AI systems, as well as discussion of long-term existential risks. A lawyer was caught using ChatGPT to write legal filings. There was another security breach and another defamation lawsuit. People were losing jobs due to companies using ChatGPT to automate work. Microsoft and OpenAI were hit with a huge privacy lawsuit. At the same time, there was another open letter, this one warning about the “risk of extinction” from AI. CEO Sam Altman weighed in on risks versus benefits of his technology, focusing on long-term threats rather than current harms. Also at the end of June we saw more movement globally as China’s Baidu claimed that its rival chatbot Ernie was outperforming ChatGPT.

Weeks 31–32: Copycat. Multiple lawsuits related to ChatGPT’s training data emerged: one as a proposed class action from authors against OpenAI for copyright infringement, another led by Sarah Silverman against both OpenAI and Meta, and another from anonymous plaintiffs against OpenAI for “unprecedented theft of private and copyrighted information.” ChatGPT’s growth also showed a decline for the first time at the beginning of July.

Weeks 33–35: Investigation and competition. The U.S. Federal Trade Commission launched an investigation over potential consumer harms and OpenAI’s security practices, and meanwhile, researchers experimented with bypassing ChatGPT’s safety guardrails. Elon Musk announced a new company, xAI, intended to compete with ChatGPT; Apple’s stock rose in response to reports that it was developing a ChatGPT competitor; and Meta announced Llama 2, the latest version of its LLM.

Weeks 36–38: School and work. As a new school year began, schools continued to grapple with how best to handle ChatGPT. But this time, there were more reports of schools deciding to repeal bans or even actively embrace the technology, experimenting with teaching students to use AI systems. Meanwhile, in the workforce, some companies considered banning ChatGPT due to concerns over data security, while others made increasing use of AI, even as alarms continued to sound over job loss. But also, listings for AI-related jobs increased dramatically.

Weeks 40–41: Usage. As OpenAI released its enterprise version of ChatGPT, promising expansion, Baidu’s chatbot Ernie jumped to the top of the Chinese app store. And though ChatGPT’s traffic dropped for the third month in a row, we also started to hear about the impacts of resource consumption in the community near OpenAI’s data centers.

Weeks 42–45: Seeing and hearing. A late September update to ChatGPT gave the system the ability to analyze images (“see”), understand voice commands (“hear”), and use text-to-voice (“speak”). And in the wake of these advancements in the tech, we also saw more conversation speculating about where the tech might be going next. In the meantime, concerns over security continued and prompted the US Space Force to halt use of AI tools, and OpenAI was hit with its most high-profile copyright lawsuit yet, from the Authors Guild, featuring writers like George R.R. Martin, Jodi Picoult, and John Grisham.

Weeks 46–48: Research findings. Over time we’ve been seeing more published research on the impacts and capabilities of large language models. One study warned that chatbots could be tricked into producing malicious code. Another found that ChatGPT and other LLMs propagate debunked race-based medical information, though yet another suggested the potential for ChatGPT to produce less biased insights for doctors (while acknowledging ethical concerns). Meanwhile, media organizations released a scathing indictment of large language models as “illegal rip-offs.”

Weeks 49–50: New stuff and downtime. One of the announcements during OpenAI’s Dev Day in early November was a new service called GPTs and a GPT store — the idea being that anyone can create custom chatbots (and commercialize them) without coding experience. Around this same time, Elon Musk also announced his own chatbot, intended to compete with “politically correct” AI systems (and not everyone was impressed by its humor). And shortly after Dev Day, ChatGPT went down for a couple of hours due to a DDoS attack.

Weeks 51–52: ChatGPT’s season one finale looked like an episode of Succession. On November 17, the OpenAI board announced that Sam Altman had been removed as CEO for being “not consistently candid in his communications.” CTO Mira Murati became interim CEO. Former Twitch CEO Emmett Shear then became OpenAI’s third CEO in as many days, Microsoft announced that it had hired Altman to lead his own AI division, and almost the entire workforce of OpenAI threatened to quit. There was rampant speculation about the reasons behind Altman’s removal, much of it focusing on the tension between AI commercialization and speed, and AI safety. By November 22, Altman had returned as CEO of OpenAI, with most of the board replaced as well. There have since been reports that this shake-up was due to internal concern over a rumored AI breakthrough at OpenAI. In a letter to the company published the day before ChatGPT’s first birthday, Altman touted the company’s resilience and stated priorities towards “advancing our research plan and further investing in our full-stack safety efforts” and “building out a board of diverse perspectives.” As of November 30, the new OpenAI board consists entirely of white men.

I will not be updating the spreadsheet of ChatGPT articles any further (consider this single year “wrapped!”), but if you are interested in my broader news round-ups, I maintain a much more comprehensive spreadsheet of AI ethics and policy news, and post periodic video news round-ups on Instagram and TikTok, and occasionally other places. You can find all of my social media here: https://casey.prof/ My public education this academic year is supported by the Notre Dame IBM Tech Ethics Lab!

[1] A note on methodology. The database of ChatGPT articles is based on a Google News search (for “chatgpt”) in incognito mode for each seven-day period. I added the top articles that appeared (in whatever way Google ranks relevance), leaving out things like press releases and blogs. The dataset itself is obviously non-comprehensive (both because of what came up in search and because of what the media reports and how they report it), so the narrative above is as well — I used the spreadsheet to construct narrative chunks, though occasionally supplemented with related links from my broader AI ethics news spreadsheet. And of course, though I was attempting to be relatively neutral in describing what the media covered, the narrative above is colored by my own positionality as an ethics researcher and therefore how I perceived the most salient events.


Written by Casey Fiesler

Faculty in Information Science at CU Boulder. Technology ethics, social computing, women in tech, science communication. www.caseyfiesler.com
