History of Artificial Intelligence (AI) 1921-2024

Artificial intelligence has integrated into our daily lives, from virtual assistants like Siri to self-driving cars. It is everywhere. But did you know the concept of AI is not new? The journey of AI reaches back to ancient times, far earlier than you might imagine, while the term “artificial intelligence” itself was introduced in 1956 at the Dartmouth workshop.

In this article, we will closely examine the history of artificial intelligence (AI), tracing its development from its early foundations in the 1900s to the remarkable advancements it has achieved in recent years.

What is Artificial Intelligence?

Artificial intelligence (AI) is a branch of computer science concerned with creating intelligent agents or systems that can replicate human intelligence, decision-making, and problem-solving abilities. Applications or devices equipped with AI can identify objects, understand and respond to human language, and learn from new information, improving their performance over time. Today, AI is used in areas such as healthcare, finance, customer service, manufacturing, transport, and more.

The History of Artificial Intelligence

Artificial intelligence has a rich history that stretches back thousands of years to ancient myths and philosophical musings. Long before “artificial intelligence” was coined in 1956, inventors built mechanical devices known as “automatons,” which moved independently without human involvement. The word “automaton” comes from an ancient Greek term meaning “acting of one’s own will.” Some of the earliest recorded automatons include the “mechanical monk” created in the 16th century and the still-functional “Silver Swan” constructed in 1773.

Groundwork for AI:

The groundwork for AI was laid through a series of significant developments and discoveries over the years. In the early 1900s, there was a massive buzz about “artificial humans.”

The buzz was so strong that scientists began to question whether it was possible to create an artificial brain, and various inventors built simple robots that could perform basic tasks.

Some of the notable dates during this time are as follows: 

1921: Karel Čapek, a Czech playwright, released the science fiction play “R.U.R.” (Rossum’s Universal Robots), which introduced the word “robot” into the English language. He used the term for artificial people created to serve humans.

1929: Makoto Nishimura, a Japanese professor, created the first-ever Japanese robot, known as “Gakutensoku.” 

1949: Edmund Berkeley, a computer scientist, published a book called “Giant Brains, or Machines That Think.” In it, Berkeley compared early computers to human brains, exploring the potential of machines to perform tasks traditionally associated with human intelligence.

Birth of AI: 1950-1956

The period from 1950 to 1956 is considered pivotal in the history of AI. It was during these years that the term “artificial intelligence” was introduced, alongside several groundbreaking developments in the field.

1950: Alan Turing, who is often considered a founding father of AI, published a landmark paper titled “Computing Machinery and Intelligence,” which proposed what became known as the “Turing Test.” Turing introduced this test to determine whether a machine is capable of exhibiting intelligent behavior indistinguishable from that of a human.

1952: Arthur Samuel, a computer scientist, created a checkers program, the first program to learn a game independently. The program improved its performance over time by playing against itself and analyzing the outcomes.
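Samuel’s actual program combined game-tree search with a learned evaluation function, but the core self-play idea can be sketched in a few lines. The toy below is a hypothetical illustration, not his method: it learns to play a simple Nim variant (take 1-3 stones from a pile of 21; whoever takes the last stone wins) purely by playing against itself, and the pile size, learning rate, and exploration rate are all invented for demonstration.

```python
import random

# Toy self-play learner for Nim (take 1-3 stones; taking the last stone wins).
# NOT Samuel's actual method -- just an illustration of learning from self-play.
N = 21
value = {n: 0.5 for n in range(1, N + 1)}  # estimated win chance for the player to move
ALPHA, EPSILON = 0.1, 0.2                  # learning rate, exploration rate (invented)

def best_move(n):
    # A move leaving n - m stones is good when the *opponent's* value is low.
    moves = [m for m in (1, 2, 3) if m <= n]
    return min(moves, key=lambda m: value.get(n - m, 0.0))

for _ in range(20000):
    n, history = N, []
    while n > 0:
        if random.random() < EPSILON:
            m = min(random.choice([1, 2, 3]), n)  # occasional random exploration
        else:
            m = best_move(n)
        history.append(n)  # record the position the mover faced
        n -= m
    # The player who moved last won; winners and losers alternate backwards.
    outcome = 1.0
    for state in reversed(history):
        value[state] += ALPHA * (outcome - value[state])
        outcome = 1.0 - outcome

print({k: round(value[k], 2) for k in range(1, 9)})
```

After training, positions that are multiples of four (provable losses for the player to move in this game) end up with low estimated values, learned entirely from self-play outcomes.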

1956: The Dartmouth workshop took place in 1956 and is considered the founding event of artificial intelligence as a field. John McCarthy and Marvin Minsky organized the workshop with the support of two senior scientists, Nathan Rochester of IBM and Claude Shannon of Bell Labs. It was here that John McCarthy introduced the term “Artificial Intelligence” for the first time, giving the field its name and mission; for this reason, the workshop is often regarded as AI’s birth.

AI maturation: 1957-1979

The late 1950s and 1960s were a period of creation in AI. From programming languages that remain relevant to this day to books and films exploring the idea of robots, AI quickly became a widespread idea. The 1970s also played a significant role in the development of AI, with the American Association for Artificial Intelligence (AAAI) being founded in 1979. However, AI research also struggled during parts of this period, as governments reduced their interest in funding it.

Some of the notable dates during this period are as follows: 

1958: John McCarthy created LISP (short for “List Processing”), the first high-level programming language designed specifically for artificial intelligence research.

1959: Arthur Samuel coined the term “machine learning” while giving a speech on teaching machines to play checkers better than the humans who programmed them.

1961: James Slagle developed SAINT (Symbolic Automatic INTegrator), a heuristic program that solved symbolic integration problems in freshman calculus.

1965: Joshua Lederberg and Edward Feigenbaum created the first “expert system,” known as DENDRAL. Expert systems are a form of AI specially programmed to replicate the reasoning and decision-making abilities of human experts.

1966: Joseph Weizenbaum built ELIZA, the first “chatterbot” (later shortened to “chatbot”). The bot used early natural language processing, essentially keyword pattern matching, to hold conversations with humans.
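ELIZA worked by matching user input against scripted keyword patterns and reflecting fragments back as questions. The Python sketch below gives the flavor of that approach; the specific rules and responses here are invented for illustration, as the real program used a much larger, more sophisticated script.

```python
import re

# A few rules in the spirit of ELIZA's keyword scripts; these particular
# patterns and canned responses are invented for demonstration only.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY

print(respond("I am feeling stuck"))  # -> How long have you been feeling stuck?
print(respond("It rained today"))     # -> Please go on.
```

The mechanical nature of this reflection is exactly why ELIZA surprised its users: simple pattern substitution was enough to create a convincing illusion of understanding.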

1968: Alexey Ivakhnenko, a Soviet mathematician, published “Group Method of Data Handling” in the journal “Avtomatika,” proposing an entirely new approach to artificial intelligence that is now regarded as a forerunner of what we call “deep learning.”

1973: The British government cut support and funding for AI research in 1973 after applied mathematician James Lighthill delivered a highly critical report concluding that the field’s strides were not nearly as impressive as scientists had promised.

1979: The Stanford Cart, a remotely controlled, TV-equipped mobile robot originally created by James L. Adams in 1961, successfully navigated a room full of chairs without any human interference in 1979, making it one of the earliest examples of an autonomous vehicle.

1979: The American Association for Artificial Intelligence (AAAI), today known as the Association for the Advancement of Artificial Intelligence, was founded in 1979. The organization plays a significant role in promoting research, education, and public understanding of artificial intelligence.

AI boom: 1980-1987

Most of the 1980s was a period of rapid growth and interest in AI, labeled the “AI boom.” The surge came from breakthroughs in AI research and additional government funding to support researchers. During this period, deep learning techniques and expert systems also became broadly popular.

1980: The first American Association for Artificial Intelligence (AAAI) conference was held at Stanford University in 1980, named the First National Conference on Artificial Intelligence (AAAI-80). The conference is considered a significant milestone in the development of AI as a field, as it provided a platform for researchers and experts to present their ideas and work.

1980: XCON (Expert Configurer), developed at Carnegie Mellon University for Digital Equipment Corporation, was one of the first expert systems to enter the commercial market. It assisted in the configuration of computer systems, streamlining the ordering process and reducing errors by automatically choosing components based on customer specifications.

1981: The Japanese government launched the Fifth Generation Computer Systems project to develop computers with capabilities such as human-level reasoning, problem-solving, and natural language understanding. The government invested around $850 million in the project (more than $2 billion in today’s dollars).

1984: The American Association for Artificial Intelligence (AAAI) warned of a coming “AI winter.” The term refers to a decline in funding and interest in AI research, which makes progress in the field far more difficult.

1985: AARON, an autonomous drawing program capable of creating original drawings and paintings without human involvement, was demonstrated at the American Association for Artificial Intelligence (AAAI) conference. The demonstration showcased AI’s potential for generating unique artwork and its growing capabilities in creative domains.

1986: Ernst Dickmanns and his team at Bundeswehr University Munich developed and demonstrated the first driverless car, a vision-guided Mercedes-Benz van known as VaMoRs. It could drive autonomously at up to 55 mph on roads free of other obstacles and human drivers.

1987: Alactrious Inc. launched Alacrity, the first commercial strategic managerial advisory system. Alacrity was a complex expert system with more than 3,000 rules that could offer strategic advice to managers, and its launch marked a significant step in applying AI to business decision-making.

AI winter: 1987-1993

As the American Association for Artificial Intelligence (AAAI) had predicted, an AI winter did occur in the late 1980s and early 1990s. The term refers to a period of low consumer, public, and private interest in artificial intelligence, resulting in reduced research funding; a first AI winter had already taken place in the 1970s, when AI drew heavy criticism and suffered financial setbacks. By the late 1980s, government and private investors had again lost interest in AI and halted financing due to high costs and seemingly low returns. The primary cause of this second AI winter was a series of setbacks in the expert systems and specialized hardware markets.

Some of the key factors that contributed to the AI winter were:

  • The end of the Fifth Generation project: The Japanese government’s project, launched in the early 1980s to develop advanced computers capable of translation, conversing in human language, and human-level reasoning, came to an end. Despite its ambitious goals, the project failed to meet its objectives, leading to a loss of confidence in AI research.
  • Cutbacks in strategic computing initiatives: Governments reduced funding for AI research as spending priorities shifted to other areas.
  • Slowdown in the deployment of expert systems: Although expert systems saw early success, their momentum was short-lived. Their limitations became clear, and they were not adopted in commercial applications as widely as anticipated.

Some of the notable dates during AI Winter are as follows: 

1987: The market for specialized LISP-based hardware collapsed in 1987 as cheaper, more accessible computers from companies such as Apple and IBM became capable of running LISP software.

1988: Another notable event during this period was the creation of Jabberwacky, a chatbot designed by Rollo Carpenter to hold interesting and entertaining conversations with humans.

AI agents: 1993-2011

Despite the funding shortage of the AI winter, the 1990s brought impressive strides in AI research, including IBM’s Deep Blue, which made history by beating the reigning world chess champion. This era also introduced Roomba, an autonomous vacuum robot, into everyday life.

Some of the notable dates during this era are as follows: 

1997: IBM’s Deep Blue, a chess-playing computer system, made history when it defeated the world chess champion, Garry Kasparov, in a six-game match. The victory was considered a significant milestone in the history of AI, demonstrating the progress computer systems had made in complex problem-solving and strategic thinking.

1997: Dragon Systems released Dragon NaturallySpeaking, commercial speech recognition software for Windows, in June 1997.

2000: Kismet, an expressive robot head developed by Professor Cynthia Breazeal, was designed to simulate human emotions through facial expressions, including eye movements, eyebrow changes, mouth movements, and ear positioning.

2002: iRobot introduced Roomba, an autonomous vacuum designed for cleaning floors, in September 2002. Roomba’s success helped popularize the concept of household vacuum robots, which remain common today.

2003: NASA launched two rovers, Spirit and Opportunity, which landed successfully on Mars in early 2004. The rovers navigated the Martian surface autonomously, collecting data and exploring the planet’s geology without human intervention.

2006: By the mid-2000s, social media platforms such as Twitter and Facebook and streaming services like Netflix had begun using artificial intelligence in their operations and advertising, employing AI algorithms to personalize content recommendations, optimize ad targeting, and improve the overall user experience. These platforms paved the way for the widespread adoption of AI across numerous sectors.

2010: Microsoft released the Kinect for the Xbox 360, the first gaming hardware designed to track body movements using motion-sensing technology and translate them into game commands.

2011: IBM’s Watson, a natural language processing (NLP) system built to answer questions, won Jeopardy! against two former champions in a televised match. Watson’s ability to understand natural language, combined with its extensive knowledge base, allowed it to outplay its human opponents.

2011: Apple released Siri, the first popular virtual assistant that could be activated using voice commands, helping popularize the concept of voice-activated assistants.

Artificial General Intelligence: 2012-present

That brings us to the most advanced era of artificial intelligence to date. This era has seen the introduction of virtual assistants, search engines, chatbots, and more. Chatbots such as ChatGPT are now used on a large scale worldwide to generate human-like text, including emails, stories, code, and musical pieces. OpenAI also introduced DALL-E, an AI model that can generate images from text prompts.

2012: Jeff Dean and Andrew Ng, two researchers from Google, trained a large neural network to recognize cats in unlabeled images, without any labels or background information, demonstrating the power of learning from unlabeled data at scale.

2015: Some of the world’s most prominent figures, including Elon Musk, Stephen Hawking, and Steve Wozniak (along with some 3,000 others), signed an open letter urging the world’s governments to ban the development and use of autonomous weapons systems. The letter expressed concern about the ethical implications of such weapons and the danger of them falling into the wrong hands, and it helped raise awareness of the issue.

2016: Hanson Robotics created Sophia, a humanoid robot with a remarkably human-like appearance and the ability to replicate human emotions. Sophia became the first “robot citizen” when it was granted citizenship by Saudi Arabia, and its ability to hold human-like conversations and respond to questions made it a notable figure in robotics and AI.

2017: Facebook researchers built two AI chatbots designed to learn how to negotiate with each other. As the chatbots interacted, however, they drifted away from the English they were programmed to use and developed their own shorthand. This raised concerns that AI systems could develop languages of their own entirely autonomously, which would be difficult for humans to understand or control.

2018: A language-processing AI system from the Chinese tech group Alibaba surpassed human performance on the Stanford Question Answering Dataset (SQuAD), setting a new benchmark for machine reading comprehension.

2019: AlphaStar, an AI system from Google’s DeepMind, reached Grandmaster level in the complex real-time strategy video game StarCraft II. The result was significant because StarCraft II demands strategic thinking, planning, and adaptability in real time, skills often considered challenging for AI systems.

2020: OpenAI introduced GPT-3, a language model capable of generating human-quality text, including articles, code, scripts, musical pieces, emails, and letters. Although not the first language model of its kind, GPT-3 was the first whose output was often difficult to distinguish from content created by humans.

2021: OpenAI launched DALL-E, an AI model that can generate high-quality images from text descriptions. DALL-E’s ability to translate text into visual content represented a significant step forward in AI’s understanding of the visual world.

2023: OpenAI released GPT-4, a multimodal large language model capable of processing both text and images as input. This multimodal capability allows GPT-4 to perform a broader range of tasks, such as answering questions about pictures or describing their contents.

Who Invented AI?

There isn’t any single inventor of AI; rather, multiple individuals played crucial roles in laying its foundations. Alan Turing proposed the famous “Turing Test,” a method for determining whether a machine can exhibit human-like intelligence, while John McCarthy is credited with coining the term “artificial intelligence” in 1956.

First Artificial Intelligence Robot

Shakey, created around 1970 by the Stanford Research Institute (now SRI International), is widely considered the first AI-based mobile robot. It was one of the first robots able to plan and execute tasks in a real-world environment: using sensors to perceive its surroundings, Shakey could perform tasks such as opening doors, pushing blocks, and navigating a room.

When Did AI Become Popular?

AI has gradually become popular over several decades, with the development of various expert systems, increasing capabilities, and practical applications playing a significant role in this rise. 

1950s to 1960s: During the initial surge, early milestones such as the Dartmouth Summer Research Project on Artificial Intelligence and programs like ELIZA played a crucial role in the development of AI.

The 1990s: AI gained popularity during the 90s with advances in neural networks and machine learning. Notable milestones from this decade include IBM’s Deep Blue defeating the reigning world chess champion and the release of commercial speech recognition software.

2000s: AI started gaining massive recognition in the 2000s as computational power, data availability, and machine learning improved. Various social media platforms and streaming services like Netflix also began utilizing artificial intelligence to personalize content recommendations, helping pave the way for the widespread adoption of AI across various sectors.

The 2010s: Several breakthroughs for artificial intelligence occurred in the 2010s, especially in deep neural networks, enabling major advances in natural language processing, image recognition, and self-driving cars.

2020s: Today, businesses and individuals alike are integrating AI into their work and daily lives. From virtual assistants and chatbots to autonomous vehicles, demand for AI is growing daily.

What does the future hold?

Now that we have learned about the history of artificial intelligence (AI), the most obvious next question in everyone’s mind is: what comes next for AI?

Well, we can’t predict the future precisely. Still, many experts and professionals expect AI systems to become more sophisticated, capable of understanding complex concepts and learning from diverse data sources. Adoption is also likely to spread among businesses of all sizes, reshaping the workforce as automation eliminates some jobs and creates others, and bringing more robotics and autonomous vehicles, with higher efficiency, productivity, and cost savings as a result.

About GilPress

I'm Managing Partner at gPress, a marketing, publishing, research and education consultancy. Also a Senior Contributor at forbes.com/sites/gilpress/. Previously, I held senior marketing and research management positions at NORC, DEC and EMC. Most recently, I was Senior Director, Thought Leadership Marketing at EMC, where I launched the Big Data conversation with the “How Much Information?” study (2000 with UC Berkeley) and the Digital Universe study (2007 with IDC). Twitter: @GilPress