September 12, 2024

The History of Artificial Intelligence: Key Events from 1920 to 2024



The artificial intelligence boom is unfolding and expanding in real time. From research labs like OpenAI to tech juggernauts like Google, most companies are ramping up their artificial intelligence efforts — Big Human included. We recently launched Unhuman, our collection of AI products, and Literally Anything, our text-to-web-app tool.

But AI is not a new idea; this global surge, however massive, is just one more stage in the metamorphosis of machine-modeled human intelligence. AI’s origins can be traced as far back as 380 BC — and philosophers, researchers, analysts, scientists, and engineers have been iterating on it ever since.

As the past frames our present and enlightens our future, learning its history is the key to fully understanding artificial intelligence and how it might evolve. With AI having another moment of glory, we’re studying what artificial intelligence is and the major events that shaped its rise.

What is artificial intelligence?

Artificial intelligence has been categorized as both a science and a technology. Computer scientist, mathematician, and AI pioneer John McCarthy coined the term “artificial intelligence” in 1955. He described it as “the science and engineering of making intelligent machines,” relating it to “using computers to understand human intelligence.” This definition is the basis for how we characterize AI today: the theory and practice of training computers to perform complex tasks that usually require human intelligence, enabling machines to simulate and even improve on human capabilities.

Today, artificial intelligence is typically thought of as a technology: a set of machine-learning tools for data collection and analytics, natural language processing and generation, speech recognition, personalized recommendation systems, process automation, and more. Some of AI’s better-known contemporary applications are chatbots, self-driving cars, and virtual assistants like Amazon’s Alexa and Apple’s Siri.

How long has AI been around? When was AI made?

The pursuit of artificial intelligence began roughly 2,400 years ago, around 380 BC, but it was more of an intellectual concept ruminated on by philosophers and theologians. AI was a hypothetical that felt so otherworldly it was used as a storyline in mythological folklore.

Like so many enduring ideas, artificial intelligence first surfaced in Ancient Greece. (“Automaton” is derived from automatos, the Greek word for “acting of oneself.”) As the myth goes, Hephaestus, the god of crafts and metalworking, created a set of automated handmaids out of gold and bestowed them with the knowledge of the gods. Before that, Hephaestus made Talos, a mechanical “robot” assigned to protect Crete from invasions. The start of AI’s practical applications came around 250 BC, when inventor and mathematician Ctesibius built the world’s first automatic system: a self-regulating water clock.

Developments in math, logic, and science from the 14th to 19th centuries are definitive markers of artificial intelligence’s climb. Philosophers and inventors at the time may not have known they were early proponents of robotics and computer science, but they laid the groundwork for future AI advancements. In 1308, theologian Ramon Llull completed Ars magna (or Ars generalis ultima, The Ultimate General Art), detailing his method for a paper-based mechanical system that connects, organizes, and combines information to form new knowledge. A forerunner of AI research, Ars magna was a framework for analyzing logical arguments in order to draw logical conclusions. Expanding on Llull’s work, philosopher and mathematician Gottfried Leibniz asserted in his 1666 paper Dissertatio de arte combinatoria (On the Combinatorial Art) that all new ideas are combinations of existing concepts. Leibniz went on to devise an alphabet of human thought: a universal rulebook for evaluating and automating knowledge by breaking reasoning down into logical operations.

Physical progress in artificial intelligence didn’t stop at Ctesibius in Greece. Over the course of his life in the 12th and 13th centuries, polymath and engineer Ismail al-Jazari invented over 100 automated devices, including a mechanized wine servant and a water-powered floating orchestra. In 1206, al-Jazari wrote The Book of Knowledge of Ingenious Mechanical Devices; as the first record of programmable automation, it later earned him the title of Father of Robotics. It’s also rumored that al-Jazari influenced one of the most prolific inventors in history: Leonardo da Vinci. The Renaissance man was known for his expansive research in automation, going on to design (and possibly build) a mechanical armored knight in 1495.

From the early 1600s to the late 1800s, artificial intelligence and technology were given artistic spins in poems, books, and plays. In 1726, Jonathan Swift published Gulliver’s Travels, in which a machine called “The Engine” is often cited as one of the earliest fictional references to a computer-like device. Then in 1872, Samuel Butler anonymously published Erewhon, one of the first novels to explore the idea of artificial consciousness. Butler also suggested Charles Darwin’s theory of evolution could be applied to machines.

It wasn’t until the 20th century that we started seeing substantial strides in artificial intelligence, setting the foundation for how we view and use it today. In 1912, inventor, civil engineer, and mathematician Leonardo Torres Quevedo built an autonomous chess player, later debuting it at the University of Paris in 1914. The first machine able to play a game of chess on its own, the electromechanical device marked the start of modern AI development.

Timeline of Artificial Intelligence

The following timeline of artificial intelligence delves into its most significant developments over the last 100-plus years, from the first digital computers to the establishment of AI as a formal, regulated discipline.

1920-1949: AI tests its capabilities

The pace of technological advancement picked up at the turn of the century. Taking cues from the film and literature of the time, scientists began experimenting with machines and wondering about their capabilities and potential uses.

Important Dates:

  • 1920: Leonardo Torres Quevedo made improvements to his original chess-playing machine to further test the possibilities of his general automatics theory. The automaton used an algorithm to dictate its moves and electromagnets to shift the chess pieces.

  • 1921: Rossum’s Universal Robots, a play by Karel Čapek, premiered in Prague, telling the story of artificial people made in a factory. The play introduced the word “robot” (coined by Čapek’s brother Josef), which soon entered English and led others to apply the word and the idea to art and research.

  • 1927: The science fiction movie Metropolis was released; it features a robot built in the likeness of a woman named Maria that wreaks havoc in a dystopian 2026. This was a significant early portrayal of a robot in cinema, later serving as the inspiration for C-3PO in the Star Wars films.

  • 1929: After seeing Rossum’s Universal Robots, biologist Makoto Nishimura built Japan’s first functional robot, Gakutensoku (meaning “learning from the laws of nature”). The robot could move its body and even change its facial expressions.

  • 1939: Looking for ways to solve equations more quickly, inventor and physicist John Vincent Atanasoff constructed the first digital computing machine with graduate student Clifford Berry. The Atanasoff-Berry Computer wasn’t programmable, but it could solve up to 29 simultaneous linear equations, earning Atanasoff the title of Father of the Computer.

  • 1949: When Edmund Berkeley published Giant Brains, or Machines That Think, he detailed how machines are adept at handling large amounts of information, concluding that machines can think (just not in the exact same way humans do).

1950-1959: AI hits the mainstream

The 1950s signaled the transformation of the theoretical and imaginary into the empirical and tangible. Scientists began using rigorous research to develop and test hypotheses about practical applications of artificial intelligence. During this time, Alan Turing, John McCarthy, and Arthur Samuel proved themselves to be AI trailblazers.

Important Dates:

  • 1950: Mathematician and logician Alan Turing released “Computing Machinery and Intelligence,” questioning whether or not machines could manifest human intelligence. His proposal came in the form of The Imitation Game (better known today as The Turing Test), which evaluated a machine’s ability to think as humans do. The Turing Test has since become the cornerstone of AI theory and its evolution. 

  • 1952: Possibly inspired by Claude Shannon’s 1950 paper “Programming a Computer for Playing Chess,” computer scientist Arthur Samuel created a checkers-playing computer program that could determine the probability of winning a game. It was the first program to learn how to autonomously play a game.

  • 1955: John McCarthy used “artificial intelligence” in a proposal for a summer computing workshop at Dartmouth College. When the workshop took place in 1956, he was officially credited with creating the term. 

  • 1955: Economist Herbert Simon, researcher Allen Newell, and programmer Cliff Shaw wrote the Logic Theorist, which is lauded as the first AI computer program. The program could prove mathematical theorems, simulating a human’s ability to problem-solve.

  • 1958: Reinforcing his status as the Father of Artificial Intelligence, John McCarthy developed LISP, a computer programming language. LISP’s popularity waned in the 1990s, but it’s seen a recent uptick in use.

  • 1958: The United States Department of Defense formed the Advanced Research Projects Agency, later renamed Defense Advanced Research Projects Agency (DARPA). Its purpose is to research and invest in technology, including AI, for national security.

  • 1959: Arthur Samuel originated the phrase “machine learning,” defining it as “the field of study that gives computers the ability to learn without explicitly being programmed.” (A tiny illustration of this idea follows this list.)
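
To make Samuel’s definition concrete, here is a minimal, hypothetical Python illustration of “learning without being explicitly programmed”: instead of hard-coding a rule, the program fits one from made-up example data using ordinary least squares. The data and variable names are invented for this sketch and have nothing to do with Samuel’s checkers program.

```python
# Minimal sketch: "learn" a rule from data instead of hard-coding it.
# Toy, made-up data: hours of practice vs. games won.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 1.9, 3.2, 3.9, 5.1]

# Fit y ~ a*x + b by ordinary least squares (closed-form solution).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x

print(f"learned rule: wins = {a:.2f} * hours + {b:.2f}")
print(f"prediction for 6 hours of practice: {a * 6 + b:.2f} wins")
```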

1960-1969: AI propels innovation

With the strong groundwork scientists, mathematicians, and programmers established in the 1950s, the 1960s saw accelerated innovation. This decade brought in a slew of new AI research studies, programming languages, educational programs, robots, and even movies.

Important Dates:

  • 1961: General Motors began using Unimate, the first industrial robot, in its assembly lines. In his original 1954 patent, inventor George Devol described a “programmed article transfer” machine, an autonomous device that could perform systematic digital commands. At General Motors, Unimate was assigned to extract hot metal castings from another machine, a job too hazardous for humans.

  • 1964: Computer scientist Daniel Bobrow built the AI program STUDENT to solve word problems in high school algebra books. Written in LISP, STUDENT is considered an early example of natural language processing.

  • 1965: Computer scientist Edward Feigenbaum and molecular biologist Joshua Lederberg began building DENDRAL, the first “expert system”: a program designed to replicate the reasoning and decision-making of human experts. This feat earned Feigenbaum the title of the Father of Expert Systems.

  • 1966: Joseph Weizenbaum developed the world’s first “chatterbot,” a technology we now refer to as a “chatbot.” Examining how mankind could communicate with machines, the computer scientist’s ELIZA program used pattern matching and scripted responses, an early form of natural language processing, to simulate human conversations. (A minimal sketch of this style of pattern matching appears after this list.)

  • 1968: Referred to as the Father of Deep Learning, mathematician Alexey Ivakhnenko published a paper called “Group Method of Data Handling.” In it, Ivakhnenko posited a new approach to AI that used inductive algorithms to sort and validate data sets. His layered, statistical approach is now recognized as a precursor to deep learning.

  • 1968: When director Stanley Kubrick released 2001: A Space Odyssey, he put sci-fi back in mainstream media. The film features HAL (Heuristically programmed Algorithmic computer), a sentient computer that manages the Discovery One spacecraft’s systems and interacts with its crew. A malfunction turns a friendly HAL hostile, kicking off a debate about the relationship mankind has with technology.
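
As a rough idea of how ELIZA-style pattern matching works, here is a minimal Python sketch. The rules, patterns, and responses below are invented for illustration and are not Weizenbaum’s original DOCTOR script.

```python
import re

# Hypothetical rules: each regex pattern maps to a templated reflection.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    """Return a canned reflection for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # generic fallback, much like ELIZA's

print(respond("I am feeling anxious about work"))
# -> "How long have you been feeling anxious about work?"
```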

1970-1979: AI loses its authority

Artificial intelligence’s focus shifted toward robots and automation in the 1970s. Still, innovators struggled to get their projects off the ground as their respective governments did little to fund AI research.

Important Dates:

  • 1970: Japanese researchers at Waseda University began building WABOT-1, the first full-scale anthropomorphic robot (completed in 1972). The robot had functional limbs and semi-functional eyes, ears, and a mouth, which it used to communicate with people in Japanese.

  • 1973: Mathematician James Lighthill’s report may have been a major reason governments reduced their support of AI. In his report to the British Science Research Council, he criticized past artificial intelligence discoveries, arguing they weren’t as impactful as scientists had promised.

  • 1979: Hans Moravec (then a Ph.D. student, later a computer scientist) added a camera to mechanical engineer James L. Adams’s 1961 remote-controlled Stanford Cart. This allowed the machine to successfully move around a chair-filled room on its own, becoming one of the earliest examples of an autonomous vehicle.

  • 1979: The Association for the Advancement of Artificial Intelligence (AAAI, formerly the American Association for Artificial Intelligence) was founded. The nonprofit scientific organization is dedicated to promoting AI research, widening its scientific and public understanding, and ethically guiding its future developments.

1980-1989: AI revives government funding

In 1980, AAAI’s first conference rekindled interest in artificial intelligence. The 1980s saw breakthroughs in AI research (particularly deep learning and expert systems), prompting governments to renew their support and funding.

Important Dates:

  • 1980: Digital Equipment Corporation began using XCON (eXpert CONfigurer) in one of its plants, marking the first time an expert system was put to commercial use. John P. McDermott wrote the XCON program in 1978 to help DEC configure orders for its computer systems by automatically choosing components based on customers’ needs.

  • 1981: In one of the largest AI initiatives at the time, the Japanese Ministry of International Trade and Industry committed $850 million (equivalent to more than $3 billion today) to the Fifth Generation Computer Systems project over the course of 10 years. The goal was to create supercomputers that could use logic programming and knowledge-based processing to reason as humans do.

  • 1984: AAAI warned of an impending “AI Winter,” fearing artificial intelligence developments wouldn’t live up to the increasing frenzy of the time. The foreshadowed AI Winter would dramatically decrease funding and appeal.

  • 1986: Aerospace engineer Ernst Dickmanns and his team at Bundeswehr University of Munich unveiled the first self-driving car. Using computers, cameras, and sensors, the Mercedes van could reach up to 55 MPH on empty roads.

  • 1986: Computer scientist and cognitive psychologist Geoffrey Hinton, psychologist David Rumelhart, and computer scientist Ronald J. Williams released a paper that popularized backpropagation, a machine learning algorithm that trains artificial neural networks by propagating output errors backward through the network to correct its weights. It’s now a fundamental part of modern AI systems, securing Hinton’s status as the Godfather of AI. (A toy numeric sketch of the idea appears after this list.)

  • 1987: The market for specialized LISP-based hardware collapsed, as those companies could no longer compete with more accessible and affordable general-purpose computers from the likes of Apple and IBM.
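
To give a rough sense of the error-correcting idea behind backpropagation, here is a toy Python sketch that trains a single sigmoid neuron on one made-up data point. The numbers, learning rate, and loss function are arbitrary choices for illustration; the 1986 paper covers full multi-layer networks.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Made-up training example and arbitrary starting parameters.
x, target = 1.5, 1.0
w, b, lr = 0.2, 0.0, 0.5  # weight, bias, learning rate

for step in range(3):
    # Forward pass: compute the prediction and the squared error.
    y = sigmoid(w * x + b)
    loss = 0.5 * (y - target) ** 2

    # Backward pass: chain rule from the output error back to the parameters.
    dloss_dy = y - target   # derivative of 0.5*(y - target)^2 w.r.t. y
    dy_dz = y * (1 - y)     # derivative of the sigmoid
    grad_w = dloss_dy * dy_dz * x    # z = w*x + b, so dz/dw = x
    grad_b = dloss_dy * dy_dz * 1.0  # and dz/db = 1

    # Gradient descent: nudge the parameters against the gradient.
    w -= lr * grad_w
    b -= lr * grad_b
    print(f"step {step}: loss={loss:.4f}, w={w:.4f}, b={b:.4f}")
```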

1990-1999: AI encounters a downturn

Just as AAAI had cautioned, the 1990s brought artificial intelligence setbacks. Though public and private interest waned in the face of AI’s high costs and low returns, earlier research paved the way for new innovations at the end of the decade, integrating AI into everyday life.

Important Dates:

  • 1991: The U.S. military developed the Dynamic Analysis and Replanning Tool (DART), a DARPA-funded AI program that coordinates and optimizes the transportation of supplies and personnel, and other logistics. DART automates processes to help the military assess logistical feasibility, which decreases the time and cost of making decisions.

  • 1997: In a highly publicized six-game match, IBM’s Deep Blue computer defeated world chess champion Garry Kasparov. It was the first time a computer program beat a reigning world champion under standard tournament conditions.

  • 1997: Dragon Systems released Dragon NaturallySpeaking, the first commercial continuous speech recognition software. Compatible with Microsoft’s Windows 95 and Windows NT, it could transcribe about 100 words per minute, and its framework is still used in more modern versions today.

  • 1998: The famed Furby could be considered an early form of a domestic “robot.” The toy initially spoke its own language (Furbish) and then appeared to gradually learn English words and phrases. Some may argue Furby was just a toy, though, as its interactive capabilities were limited and largely preprogrammed.

2000-2009: AI expands common use

After the Y2K panic died down, artificial intelligence saw yet another surge of interest, especially in the media. The decade also brought more routine applications of AI, broadening its future possibilities.

Important Dates:

  • 2000: The leadup to the new century was fraught with concerns about the “Millennium Bug,” a class of computer glitches tied to the formatting of calendar dates. Because much of the software written in the 1900s stored years as two digits to save space, omitting the preceding “19,” experts feared computers would misread the year 2000 as 1900, causing failures in date-dependent programs. In the end, after years of remediation work, most systems rolled over with little difficulty. (A quick illustration of the two-digit problem appears after this list.)

  • 2000: Dr. Cynthia Breazeal, then an MIT graduate student, designed Kismet, a robot head that could recognize and recreate human emotions and social cues. An experiment in social robotics and affective computing, Kismet was equipped with input devices that mimicked human visual, auditory, and expressive abilities.

  • 2001: Steven Spielberg’s sci-fi flick A.I. Artificial Intelligence followed David, an android with human feelings disguised as a child. As David tries to find a place where he belongs and feels loved, the movie examines whether or not humans can coexist with artificial, anthropomorphic beings.

  • 2002: iRobot released the Roomba, an autonomous vacuum regarded as the first commercially successful household robot. The company has continued to iterate on its tech and scale its product line.

  • 2003: NASA launched two Mars Exploration Rovers, Spirit and Opportunity, to learn more about past water activity on Mars. When the rovers landed on the planet in 2004, they navigated semi-autonomously, analyzing rocks and soil and performing scientific experiments. Both far outlived their planned 90-day missions by several years.

  • 2006: Along with computer scientists Michele Banko and Michael Cafarella, computer science professor Oren Etzioni added another term to the AI vernacular. “Machine reading” gives computers the ability to “read, understand, reason, and answer questions about unstructured natural language text.”

  • 2006: Companies like Facebook, Twitter, and Netflix began incorporating AI into their advertising and recommendation algorithms. These algorithms now power most, if not all, of the social media channels we use today.
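
As a quick, made-up illustration of the two-digit-year problem described in the 2000 entry above, here is a Python snippet showing how dates stored as “YY” compare incorrectly once the century rolls over. The field names are invented for the example.

```python
# Years stored as two digits, as much legacy software did to save space.
loan_issued = "99"   # meant to be 1999
loan_due = "00"      # meant to be 2000

# Comparing the raw two-digit strings gets the order backwards:
print(loan_due > loan_issued)   # False: 2000 looks "earlier" than 1999

# Interpreting "00" as 1900 + YY makes the error explicit:
print(1900 + int(loan_due))     # 1900 instead of 2000

# One common remediation: widen to four digits via a pivot year.
year = int(loan_due)
print(2000 + year if year < 50 else 1900 + year)  # 2000
```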

2010-2019: AI becomes part of the everyday

Tech and gaming companies built on the common-use foundations set in the early 2000s, using AI to create more interactive experiences. It’s hard to find a smart device that doesn’t have intelligent functions, intensifying AI’s rise and cementing it as a substantial part of our everyday lives. 

Important Dates:

  • 2010: Microsoft released the Kinect for the Xbox 360, a motion-sensing gaming device whose sensors could track and interpret body movement as playable commands. With microphones for speech recognition and voice control, the Kinect contributed to the growth of the Internet of Things (IoT), a network of connected devices that communicate with each other.

  • 2011: Apple’s launch of Siri on the iPhone 4S sparked a trend in virtual assistants, most notably Amazon’s Alexa and Microsoft’s Cortana, both released in 2014. The rough early 2011 version of Siri has since been refined and integrated into other Apple products.

  • 2015: Wary of a global AI arms race, over 3,000 people signed an open letter calling on governments worldwide to ban offensive autonomous weapons. Among the signatories were influential scientists and innovators, including Stephen Hawking, Steve Wozniak, and Elon Musk.

  • 2016: Hanson Robotics caused an uproar with its humanoid robot Sophia, whose likeness closely resembles a real human being. Later granted Saudi Arabian citizenship and deemed the world’s first “robot citizen,” Sophia describes herself as a “human-crafted science fiction character depicting where AI and robotics are heading.”

  • 2016: Google’s AlphaGo mastered Go, a strategic board game more complex than chess, and defeated world champion Lee Sedol. The AI system’s neural networks proved that machines can plan ahead and learn how to solve complex problems on their own.

  • 2017: Facebook’s Artificial Intelligence Research lab trained two chatbots to negotiate with each other. As their exchanges went on, the chatbots drifted away from standard English and developed a shorthand of their own without any human intervention.

2020-Present: AI prompts concerns

The scope of artificial intelligence continues to grow, and so do questions about its safety. While AI experts are in high demand, some of the world’s biggest proponents of artificial intelligence regulation are the scientists and engineers who contributed to its rise.

Important Dates:

  • 2020: OpenAI released GPT-3, a large natural language model capable of generating human-like text. Developers and software engineers were among the first adopters. A year later, OpenAI took another giant leap in generative AI with DALL-E, a program that produces realistic art and images based on user prompts.

  • 2021: As reported by McKinsey, AI experienced more widespread adoption. Companies across every industry — notably service operations — began integrating it into their workstreams. 

  • 2022: OpenAI launched ChatGPT, a chatbot built on its GPT-3.5 models. One of the most advanced chatbots to date, ChatGPT can answer philosophical questions, write code, pen essays, and more. (An earlier model, GPT-2, was trained largely on text from links shared on Reddit and was eventually released openly.)

  • 2022: The number of AI-related job postings grew significantly, and companies most often hired software engineers, data engineers, and AI data scientists. To secure more AI talent, organizations began training both technical and non-technical employees, showing greater integration of AI in the workplace. 

  • 2023: Geoffrey Hinton left Google after 10 years as a vice president and engineering fellow, warning the public about AI’s dangers and voicing regret for his role in its advancement. The following year, he gave a lecture on how artificial intelligence could be used to spread misinformation and replace human workers.

  • 2024: World governing bodies called for stricter AI regulations. The European Union adopted the Artificial Intelligence Act to monitor AI usage. As the world’s first comprehensive AI law, it provides legal guidelines for the ethical development and implementation of AI throughout Europe.

  • 2024: Citing AI’s rapid growth as the reason for the update, the North Atlantic Treaty Organization (NATO) made revisions to its original 2021 artificial intelligence strategy; the main goal is to promote and defend the safe use of artificial intelligence. 

What is the future of artificial intelligence?

Today’s artificial intelligence landscape is evolving with unprecedented speed. With a market that’s expected to grow to $826 billion by 2030, AI is changing industries across the board — from eCommerce to healthcare and cybersecurity. While we can only speculate on what AI has in store, there are a few trends that’ll define the next decade.

Increased Adoption

AI is already ingrained in many of our devices, so interactions between mankind and artificial intelligence will only become more commonplace — both at home and in the workplace. About 72% of organizations integrated AI capabilities in 2024, up from 55% in 2023. 

Job Shifts

By 2033, the U.S. Bureau of Labor Statistics expects a 17% increase in employment for software developers. Though there’s growing demand for AI-related jobs, it’s also changing the way non-technical roles operate. Companies are using artificial intelligence in customer service, accounting, data analytics, system automation, content creation, and more.

Democratization of AI

The open-source movement is fueling the democratization of AI technology and development. The movement advocates for the free, widespread use of computer software, placing the power of AI in everyone’s hands.

Stricter Regulations

Artificial intelligence is leading technological innovation, but its rapid growth raises mounting concerns. There are issues surrounding AI’s accuracy, data privacy, bias, and misuse, with AI scams becoming more and more frequent. Artificial intelligence operated as a largely unregulated industry for most of its existence, but lawmakers are now drafting accountability and safety policies.

At the end of the day, no one can definitively predict the future of artificial intelligence, but if its history is any indication, we’re strapping in for quite the rollercoaster.

Looking to step foot into the world of AI? Send us a message.

