6 Where Did Generative AI Come From? An Exercise in Co-Writing

To demonstrate how co-writing and academic research can work together, I co-created this outline about the history of generative artificial intelligence. You’ll learn more about how we got to this point, and you’ll also be able to see some of the strengths and limitations of using chatbots for research. The chatbot initially left out several important developments, such as movies like The Terminator and The Matrix. It even left out the development of the Internet, which was critical to amassing the data needed to train large language models. But working together, the generative AI tool and I were able to create a more comprehensive outline, and I was able to use the results to locate the research linked below and in the references, using keywords the chatbot provided. The important thing to remember is that this is not just one query and one response. Interacting with the chatbot involved a lengthy conversation.

It seems like everywhere we look, we see people talking about artificial intelligence. From news headlines to deepfakes to student papers, generative AI is dominating the discourse. Our World in Data has a robust interactive history of artificial intelligence, including graphs and timelines. Here’s a brief, high-level timeline of the development of artificial intelligence (AI), including both technological and cultural milestones. I have also included a section for further reading that links to some of the foundational research papers that led to the development of large language models like ChatGPT and Google Gemini.

1920s-1950s: Early Visions of AI and Technical Foundations

  • 1920: R.U.R. (Rossum’s Universal Robots), a science fiction play by Karel Čapek, introduces the word “robot” and explores themes of artificial beings rebelling against their creators.
  • 1927: Metropolis, a silent film by Fritz Lang, features a robot named Maria, one of the earliest depictions of AI in cinema, symbolizing the potential dangers of technology.
  • 1943: Warren McCulloch and Walter Pitts publish a paper on artificial neurons, laying the groundwork for neural networks.
  • 1950: Alan Turing proposes the Turing Test in his paper “Computing Machinery and Intelligence,” which questions whether machines can think.
  • 1950: I, Robot by Isaac Asimov (a collection of stories) introduces Asimov’s Three Laws of Robotics, setting a framework for ethical AI in literature.
  • 1955: The term “artificial intelligence” is coined by John McCarthy in the proposal for the Dartmouth Conference (held in 1956), marking the official birth of AI as a field of study.

1960s: The Rise of AI Research

  • 1961: The first industrial robot, Unimate, is introduced, demonstrating the potential for AI in manufacturing.
    Image: Unimate pours coffee for a human, by Frank Q. Brown, Los Angeles Times, https://digital.library.ucla.edu/catalog/ark:/21198/zz0002vfhd, CC BY 4.0.
  • 1965: Dune by Frank Herbert introduces a distant future where AI and “thinking machines” have been banned.
  • 1966: ELIZA, an early natural language processing program, is developed by Joseph Weizenbaum, simulating conversation with a human.
  • 1968: 2001: A Space Odyssey, directed by Stanley Kubrick and based on Arthur C. Clarke’s work, introduces HAL 9000, an AI that controls a spaceship and turns against its human crew. HAL became an iconic representation of the dangers of AI.
  • 1968: Do Androids Dream of Electric Sheep? by Philip K. Dick, later adapted into the film Blade Runner (1982), explores the line between humans and androids, raising questions about identity and consciousness.
  • 1969: Shakey the Robot, created by SRI International, becomes the first robot capable of reasoning about its actions.

1970s: AI Winter and Slow Progress; Early Media

  • 1972: The programming language Prolog is developed, which becomes a key language for AI development.
  • 1973: The first “AI winter” begins as funding and interest in AI research decline due to unmet expectations.
  • 1973: Westworld, a film written and directed by Michael Crichton, depicts a theme park where AI-controlled robots malfunction and threaten the guests. The concept was revisited in the 2016 HBO series of the same name. Westworld was also the first feature film to use digital image processing.
  • 1977: Star Wars: Episode IV – A New Hope, directed by George Lucas, introduces C-3PO and R2-D2, AI characters that became beloved icons, presenting AI in a more positive, helpful light.
  • 1979: The Stanford Cart (a project begun in 1960) successfully navigates a room filled with obstacles, marking an early achievement in computer vision and robotics.

1980s: Expert Systems and the Rise of AI in Popular Media

Image: movie poster for The Terminator (1984), which can be found at http://www.impawards.com/1984/terminator.html.
  • 1980: The introduction of expert systems, such as XCON developed by John McDermott and implemented at Digital Equipment Corporation, shows the practical application of AI in business.
  • 1982: Japan’s Fifth Generation Computer Systems project begins, aiming to develop computers with AI capabilities.
  • 1982: Blade Runner, directed by Ridley Scott, further popularizes the themes of AI and artificial life, emphasizing the moral and philosophical implications of creating sentient beings.
  • 1984: The Terminator, directed by James Cameron, depicts a dystopian future where AI (Skynet) becomes self-aware and attempts to exterminate humanity, creating one of the most enduring images of AI as a threat.
  • 1984: Neuromancer by William Gibson introduces the concept of cyberspace and AI in a cyberpunk setting, influencing countless works of fiction in both literature and film.
  • 1987: The second AI winter begins due to the collapse of the expert system market.
  • 1989: Ghost in the Shell (manga) by Masamune Shirow, later adapted into an anime film in 1995, explores AI, cybernetics, and the nature of consciousness in a cyberpunk world.
  • 1989: Sir Tim Berners-Lee submits “Information Management: A Proposal” at CERN, describing what would become the World Wide Web.

1990s: AI Revival, the Internet, Blockbusters, and Deeper Philosophical Reflection

  • 1990: Jurassic Park, a novel by Michael Crichton and later a film in 1993, explores themes of technology, including AI, and the consequences of playing God.
  • 1993: The rapid growth of the World Wide Web (WWW) accelerates global connectivity, laying the groundwork for the explosion of data that would become essential for AI development.
  • 1997: IBM’s Deep Blue defeats world chess champion Garry Kasparov, marking a significant milestone in AI. Deep Blue used brute computing power to perform 11.38 billion floating point operations per second.
  • 1997: Contact, based on Carl Sagan’s novel, delves into themes of AI and extraterrestrial communication, blending science with philosophical inquiry.
  • 1999: AI starts gaining commercial use in various applications, including speech recognition, recommendation systems, and more.
  • 1999: The Matrix, directed by the Wachowskis, becomes a cultural phenomenon, depicting a dystopian future where AI enslaves humanity in a simulated reality, raising questions about reality, freedom, and control.

2000s: The Internet, Data, and AI Convergence

  • 2000-2002: The dot-com bust leads to a more cautious approach to AI funding and development.
  • 2001: Steven Spielberg’s film A.I. Artificial Intelligence explores themes of consciousness, human-AI relationships, and the ethical implications of creating AI with human-like emotions and desires. It reflects the growing cultural interest in AI and its potential to challenge our understanding of humanity, empathy, and morality.
  • 2002: Minority Report explores the ethical and moral implications of predictive technology powered by AI.
  • 2004: The rise of social media platforms, powered by AI algorithms, significantly impacts user behavior and content curation.
  • 2005: Computer scientist and futurist Raymond Kurzweil writes The Singularity Is Near: When Humans Transcend Biology, predicting that by the mid-21st century, AI will reach a point of superintelligence, leading to the “Singularity”—a moment when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
  • 2006: Geoffrey Hinton and his team revive neural networks with the concept of deep learning, leading to a revolution in AI capabilities.
  • 2006: The launch of Amazon Web Services (AWS) enables the widespread use of cloud computing, providing the computational resources needed for large-scale AI development.
  • 2008: WALL-E portrays AI’s role in both the decline and redemption of civilization, reflecting concerns about technology and the environment.

2010s: AI in Everyday Life and Ethical Reflections

Image: IBM Watson computer, by Clockready, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15891787.
  • 2011: IBM’s Watson wins Jeopardy!, showcasing advances in natural language processing, driven by Internet data and computational power.
  • 2012: The Google Brain project demonstrates the power of deep learning by training a neural network to recognize cats in YouTube videos.
  • 2013: Her presents an intimate depiction of AI, focusing on the emotional and relational dynamics between humans and AI.
  • 2014: Ex Machina explores the complexities of AI consciousness and ethics.
  • 2015: AI as a Service (AIaaS) becomes mainstream, allowing businesses to integrate AI into their operations via cloud-based services.
  • 2016: Westworld (HBO series) revisits the concept of AI in a theme park setting, diving deeper into consciousness, morality, and free will.
  • 2016: Coder and AI researcher Joy Buolamwini forms the Algorithmic Justice League to highlight the systemic ways that algorithms harm certain groups.
  • 2017: Researchers at Google introduce the Transformer architecture in the paper “Attention Is All You Need.” By enabling more efficient and powerful models, the Transformer revolutionizes natural language processing (NLP) and lays the foundation for subsequent large language models like GPT-2 and BERT. (A minimal sketch of the attention mechanism at the heart of the Transformer appears after this list.)
  • 2018: Companies and research institutions begin developing AI systems to assist with the early detection of diseases like cancer and diabetic retinopathy, with some systems matching or surpassing human-level performance on specific tasks.
  • 2018: Development and testing of autonomous vehicles intensifies, as companies like Waymo, Tesla, and Uber advance their self-driving technologies.
  • 2019: OpenAI releases the full version of GPT-2, whose capabilities further demonstrate both the potential and the risks of AI-generated content.
  • 2019: As AI technologies become more powerful and pervasive, governments, academic institutions, and tech companies begin formulating guidelines and frameworks for the responsible development and deployment of AI.
  • 2019: Reinforcement learning, the technique behind AlphaGo and AlphaGo Zero, continues to advance.
  • 2019: AI makes significant inroads into creative fields, with tools that can generate art, music, and even scripts.
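
To give a concrete sense of the mechanism behind the Transformer entry above, here is a minimal sketch of scaled dot-product attention, the core operation described in “Attention Is All You Need.” It is an illustrative toy example in Python (NumPy) with arbitrary made-up dimensions, not code from any production model; in a real Transformer the queries, keys, and values come from learned projections, and the computation is repeated in parallel across many attention “heads.”

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value vector in V by how similar its key is to each query."""
    d_k = Q.shape[-1]                                # size of each query/key vector
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # blended, context-aware vectors

# Toy "sentence" of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))

# For brevity we reuse the token vectors as queries, keys, and values;
# real models apply separate learned linear projections first.
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one updated vector per token
```

Because every token attends to every other token in a single matrix operation, the model can relate words anywhere in a passage without reading it one step at a time, which is a large part of why Transformers proved so much easier to scale than earlier recurrent architectures.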

2020s: Maturation, Ethical Considerations, and AI’s Societal Impact

  • 2020: The Internet facilitates global discussions on AI ethics, with growing concerns about privacy, bias, and the societal impact of AI.
  • 2020: GPT-3, another language model by OpenAI, pushes the boundaries of AI text generation with 175 billion parameters, made possible by the vast data available on the Internet.
  • 2021: AI plays a critical role in healthcare, such as in the development of vaccines and medical diagnostics during the COVID-19 pandemic, leveraging global data networks.
  • 2022: Image generators built on diffusion models, like Midjourney and DALL-E 2, demonstrate impressive capabilities in generating images from textual descriptions, reflecting the fusion of AI and Internet-driven creativity.
  • 2022: OpenAI releases ChatGPT, built on GPT-3.5, setting off an immediate wave of competition to release large language models.
  • 2023: Ongoing advancements in AI continue to impact various industries, with increased focus on ethical considerations, regulation, and responsible AI development.
  • 2024: Concerns about return on investment, election interference, and deepfakes begin to spread.

This timeline highlights key developments that have shaped the field of AI, reflecting its evolution from theoretical concepts to practical applications that permeate everyday life (OpenAI, 2024).

References/Further Reading

Anderson, N. (2014). “Only We Have Perished”: Karel Čapek’s R.U.R. and the catastrophe of humankind. Journal of the Fantastic in the Arts, 25(2/3 (91)), 226–246. http://www.jstor.org/stable/24353026 (ISU graduate!)

Buolamwini, J. (2024). How to protect your rights in the age of AI [Video]. TED. https://www.ted.com/talks/joy_buolamwini_how_to_protect_your_rights_in_the_age_of_ai?subtitle=en

Chatterjee, A. (2022). Art in an age of artificial intelligence. Frontiers in Psychology, 13, 1024449. https://doi.org/10.3389/fpsyg.2022.1024449

Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. https://arxiv.org/abs/1810.04805

Feigenbaum, E., & Shrobe, H. (1993). The Japanese national Fifth Generation project: introduction, survey, and evaluation. Future Generation Computer Systems, 9(2), 105-117. https://doi.org/10.1016/0167-739X(93)90003-8

Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189–208. http://ai.stanford.edu/users/nilsson/OnlinePubs-Nils/PublishedPapers/strips.pdf

Gutiérrez-Jones, C. (2014). Stealing Kinship: Neuromancer and Artificial Intelligence. Science Fiction Studies, 41(1), 69–92. https://doi.org/10.5621/sciefictstud.41.1.0069

Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527–1554. https://doi.org/10.1162/neco.2006.18.7.1527

Matheson, T.J. (1992). Marcuse, Ellul, and the science-fiction film: Negative responses to technology. Science Fiction Studies, 19(3), 326–339. http://www.jstor.org/stable/4240180

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133. https://doi.org/10.1007/BF02478259

McDermott, J. (1980). R1: An expert in the computer systems domain. Proceedings of the First AAAI Conference on Artificial Intelligence (pp. 269–271). AAAI Press.

McLellan, H. (1988). Computers, artificial intelligence, and human imagination. Journal of Thought, 23(3/4), 28–44. http://www.jstor.org/stable/42589270

Padhy, S. K., Takkar, B., Chawla, R., & Kumar, A. (2019). Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian Journal of Ophthalmology, 67(7), 1004–1009. https://doi.org/10.4103/ijo.IJO_1989_18

Peterson, D. J. (2018). On the faith of droids. In T. Peters (Ed.), AI and IA: Utopia or Extinction? (pp. 107–116). ATF (Australia) Ltd. https://doi.org/10.2307/j.ctvrnfpwx.9 

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9. https://insightcivic.s3.us-east-1.amazonaws.com/language-models.pdf

Schwartzman, R. (1999). Engenderneered machines in science fiction film. Studies in Popular Culture, 22(1), 75–87. http://www.jstor.org/stable/23414579

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762. https://user.phil.hhu.de/~cwurm/wp-content/uploads/2020/01/7181-attention-is-all-you-need.pdf

Weizenbaum, J. (1983). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 26(1), 23–28. https://doi.org/10.1145/357980.357991