A Brief History of AI


The quest to create artificial intelligence may be the greatest endeavour in human history: an attempt to understand our own consciousness by recreating it. While some revolutions change how we live, the AI revolution asks us to reconsider what it means to be alive, to think, and to understand.

 

1936: The Universal Machine

In 1936, young British mathematician Alan Turing asked a simple yet profound question: Could a single machine solve any computational problem? His answer, the 'Universal Machine,' wasn't just a mathematical concept. It was the blueprint for every computer you've ever used.

Before Turing, machines were one-trick ponies - calculators could only calculate, typewriters could only type. Turing envisioned something revolutionary: a machine that could become any machine, simply by changing its instructions.

This wasn't about building a better calculator. It was about redefining what machines could be. Today, when you switch from writing an email to editing a photo on your device, you're living Turing's vision. One machine, infinite possibilities.

This was the birth of programming. Turing never built his Universal Machine. He didn't need to. The concept itself was the game-changer. It introduced a radical idea: software.

Instead of building new machines for new tasks, we could simply write new instructions. It's why your smartphone can be a camera, a music studio, and a gaming device - all through software.
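The idea of one machine becoming many can be sketched in a few lines. Below is an illustrative (not historically faithful) Turing-machine simulator in Python: the `run` function plays the role of the fixed hardware, and the instruction table passed to it is the software.

```python
# A minimal Turing-machine simulator: one fixed "machine" that becomes
# different machines depending on the instruction table it is given.

def run(program, tape, state="start"):
    """Run a Turing-machine program until it reaches the 'halt' state.

    `program` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")  # "_" marks a blank cell
        write, move, state = program[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# One instruction table turns the machine into a bit-flipper...
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run(flip, "1011"))  # -> 0100
```

Hand `run` a different instruction table and the same "hardware" performs a completely different task: that swap is the essence of software.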

 

1939: The Enigma Challenge

The story shifts to World War II. Turing found himself at Bletchley Park, Britain's code-breaking headquarters, facing the Enigma - Nazi Germany's supposedly unbreakable encryption machine.

The stakes? Thousands of lives. Every day the Enigma remained unbroken meant more Allied ships sunk, more supplies lost, more soldiers killed. Turing's response was the Bombe - not just a machine, but a new way of thinking about how machines could solve complex problems.

While Turing's brilliance was crucial, the Bletchley Park breakthrough came from teamwork. Joan Clarke's mathematical insight, Hugh Alexander's strategic thinking, and Gordon Welchman's engineering innovations all played vital roles. The lesson? Great innovation comes from combining diverse talents.

The Bombe didn't just help win the war - historians estimate it saved millions of lives and shortened the conflict by years. More importantly, it showed how machines could solve problems previously thought impossible.

📌 Learn more via the Bletchley Park website.

 

1942-1956: The Dawn of AI

Asimov’s Three Laws of Robotics


Before the term 'Artificial Intelligence' existed, science fiction writer Isaac Asimov introduced his Three Laws of Robotics in 1942. The laws first appeared in his short story “Runaround” and subsequently became hugely influential in the sci-fi genre. These laws weren't just story elements - they became the foundation for thinking about AI ethics:

  1. A robot may not harm a human or allow harm through inaction.

  2. A robot must obey human orders, except where they conflict with the First Law.

  3. A robot must protect itself, unless this conflicts with the First or Second Laws.


🎥 Watch the BBC’s video ISAAC ASIMOV's 3 laws of ROBOTICS.

The Turing Test

In 1950, Turing shifted his focus to a new frontier: artificial intelligence. His landmark paper Computing Machinery and Intelligence posed one question: "Can machines think?" Rather than getting lost in philosophical debates, Turing proposed a practical test that still shapes AI development today.

The test was elegant in its simplicity. If a machine could convince a human it was human through conversation alone, it passed. This wasn't about processing power or memory size - it was about behaviour and interaction.

The test redefined how we measure machine intelligence, moving the conversation from abstract theories to practical demonstration.

🎥 Watch the Ted-Ed video The Turing test: Can a computer pass for a human? to learn more.

The Dartmouth Conference

But the real birth of AI came at the 1956 Dartmouth Conference.

A group of visionaries - including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon - gathered with an audacious goal: to explore how to build machines that could think.

Their proposal stated: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

This wasn't just optimism - it was a declaration that would launch a new field of study.

 

The Golden Age (1956-1973)


The years following Dartmouth were marked by breakthroughs that would define AI's future.

ELIZA: The Birth of Conversational AI

In 1966, Joseph Weizenbaum created ELIZA, the first program that could simulate human conversation. By mirroring a psychotherapist's technique of turning statements into questions, ELIZA achieved something remarkable: emotional engagement. Users found themselves forming genuine connections with the program, even knowing it was a machine.

This raised profound questions about human-machine relationships that still resonate today.
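ELIZA's mirroring technique - match the user's statement against a pattern, then reflect it back as a question - can be sketched in a few lines of Python. The rules below are invented for illustration; Weizenbaum's original "DOCTOR" script was far richer.

```python
import re

# ELIZA-style rules: a regex that matches the user's statement and a
# template that turns the captured fragment back into a question.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

# Swap first-person words for second-person ones before echoing them back.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(statement):
    text = statement.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when no rule matches

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?
```

No understanding is involved anywhere in this loop - which is precisely what made users' emotional reactions to ELIZA so unsettling.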

🎥 Watch the YouTube video Before Siri and Alexa, there was ELIZA to learn more.

 

SHRDLU: Understanding Our World

Terry Winograd's SHRDLU (1968) mastered something humans take for granted: understanding context.

Operating in a virtual world of coloured blocks, it could follow complex commands like "Pick up the big red block" and respond to questions about its actions. More importantly, it could remember previous commands and use that context to resolve ambiguities - a fundamental aspect of human communication.

 

Shakey: AI Gets Moving

In 1969, Shakey became the first robot to combine logical reasoning with physical action. While its movements were far from graceful (hence the name), Shakey could do something unprecedented: plan routes, navigate rooms, and manipulate objects based on visual input. It laid the groundwork for everything from self-driving cars to warehouse robots.

🎥 Watch the YouTube video Shakey the Robot: The First Robot to Embody Artificial Intelligence to learn more.

 

PROLOG: The Language of Logic

1972 saw another breakthrough: PROLOG (from the French programmation en logique, "programming in logic"). Created by Alain Colmerauer and Philippe Roussel, this new programming language transformed how machines could reason.

Unlike traditional languages that followed step-by-step instructions, PROLOG let computers work with logical rules and relationships - much closer to how humans think. It became the foundation for expert systems and natural language processing, influencing AI development for decades to come.
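The declarative style can be illustrated without PROLOG itself. The Python toy below (a sketch of the idea, not real PROLOG syntax) states family facts and one rule, then derives new facts by forward chaining: we describe what is true, and the engine works out the consequences.

```python
# A toy forward-chaining engine in the spirit of PROLOG: state facts
# and a rule, then let the machine derive every conclusion they imply.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def rule_grandparent(facts):
    # In PROLOG: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Apply the rule until no new facts appear (a fixed point).
new = rule_grandparent(facts)
while not new <= facts:
    facts |= new
    new = rule_grandparent(facts)

print(("grandparent", "alice", "carol") in facts)  # -> True
```

Note that nowhere did we write step-by-step instructions for finding grandparents; the relationship was declared once and the engine found every instance.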

 

MYCIN: AI Meets Medicine

MYCIN (1972) brought AI into healthcare, diagnosing blood infections with accuracy rivaling human experts.

Using about 450 rules to analyse symptoms and lab results, it could explain its reasoning in plain English - crucial for building trust in AI medical systems. Perhaps more importantly, it proved AI could handle life-critical decisions.
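MYCIN's style of reasoning - IF these findings THEN this conclusion, with a certainty factor and a recorded justification - can be sketched as follows. The single rule here is invented for illustration and is not from MYCIN's actual rule base.

```python
# A toy MYCIN-style diagnostic rule: IF conditions THEN conclusion,
# with a certainty factor and a plain-English explanation.

RULES = [
    {
        "if": {"gram_stain": "negative", "morphology": "rod"},
        "then": ("organism", "e_coli"),
        "cf": 0.7,  # certainty factor: how strongly the evidence supports it
    },
]

def diagnose(findings):
    conclusions = []
    for rule in RULES:
        # Fire the rule only if every condition matches the findings.
        if all(findings.get(k) == v for k, v in rule["if"].items()):
            attr, value = rule["then"]
            why = " and ".join(f"{k} is {v}" for k, v in rule["if"].items())
            conclusions.append((attr, value, rule["cf"], f"because {why}"))
    return conclusions

for attr, value, cf, why in diagnose({"gram_stain": "negative", "morphology": "rod"}):
    print(f"{attr} = {value} (certainty {cf}) {why}")
```

The stored `why` string is the key design choice: because every conclusion carries its own justification, the system can answer "why do you believe that?" - the feature that made clinicians willing to trust it.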

 

MacHack: The Chess Champion

In 1967, MIT's MacHack became the first computer program to defeat a human in a chess tournament.

While its victory came against a novice player, it demonstrated something profound: computers could master complex strategic thinking. This was a glimpse of what would come three decades later, when IBM's Deep Blue defeated world champion Garry Kasparov.

 

The AI Winter (1973-1980)

In 1973, AI faced its first major challenge. As researchers tackled more complex problems, they discovered a fundamental limitation: computational complexity. Problems that humans solve intuitively often required exponentially more computing power as they grew in size. This discovery marked the end of AI's first golden age. The field had made remarkable progress, but the dream of machines that could match human-level thinking would require new approaches and more powerful computers.

 

The Lighthill Report

In 1973, Professor James Lighthill delivered a report (the Lighthill Report) to the British government that would cast a long shadow over AI research.

His message? AI had promised too much and delivered too little.

The result was devastating: funding cuts that spread from the UK to the US, creating what became known as the "AI Winter."

 

The Chinese Room Argument

The winter deepened in 1980 when philosopher John Searle introduced his "Chinese Room" argument.

Imagine a person who doesn't speak Chinese in a room with a rulebook for responding to Chinese messages. They can provide correct responses without understanding Chinese. Searle's point? A machine following rules isn't the same as true understanding.


🎥 Watch the Open University video The Chinese Room - 60-Second Adventures in Thought to learn more.

 

The Hard Questions

These challenges forced the AI community to confront fundamental questions:

  • Can machines truly understand, or are they just following complex rules?

  • How do we measure "intelligence" in machines?

  • What's the difference between simulating understanding and actually understanding?

These weren't just theoretical debates - they shaped how we approach AI development even today.

 

The Renaissance (1980-2000)

Just as the Italian Renaissance emerged from the Dark Ages with new ideas and innovations, AI experienced its own rebirth. But this time, the focus shifted from grand promises to practical achievements.

 

Speaking Like Humans

In 1987, Terrence Sejnowski and Charles Rosenberg created NETtalk, a system that learned to speak like a child. Starting with babbling sounds, it gradually improved until it could pronounce words clearly - showing how machines could learn and adapt rather than just follow programmed rules.

🔊 Listen to the audio examples of NETtalk.

 

Learning From Mistakes

1986 brought a breakthrough that would transform AI: backpropagation. This technique, popularised by David Rumelhart, Geoffrey Hinton, and Ronald Williams, let neural networks learn from their errors. Think of it like a teacher giving feedback - the network adjusts its approach based on what works and what doesn't.
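The "teacher giving feedback" loop can be shown with a single sigmoid neuron: run a forward pass, measure the error, propagate its gradient backwards through the chain rule, and nudge the weights downhill. This is a deliberately tiny sketch - real networks repeat the same idea across millions of weights.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.0        # initial weight and bias
x, target = 1.0, 1.0   # one training example
lr = 1.0               # learning rate

for step in range(100):
    y = sigmoid(w * x + b)        # forward pass: the network's guess
    error = y - target            # how wrong the guess is
    grad = error * y * (1 - y)    # backward pass: chain rule through sigmoid
    w -= lr * grad * x            # adjust in the direction that reduces error
    b -= lr * grad

print(round(sigmoid(w * x + b), 2))  # close to the target of 1.0
```

Each pass, the error signal flows backwards to tell every parameter how it contributed to the mistake - that credit assignment is what "backpropagation" names.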

🎥 Watch IBM’s video What is Back Propagation to learn more.

The Power of Common Sense

Doug Lenat's CYC project, started in 1984, tackled an ambitious goal: giving computers common sense. Instead of just processing data, CYC aimed to understand the basic facts humans take for granted - like "water is wet" or "people don't like pain." While controversial, it highlighted the importance of context in artificial intelligence.

🎥 Watch the video Cyc: The big dream of AI to learn more.

 

A Historic Match

The renaissance reached its peak in 1997 when IBM's Deep Blue defeated world chess champion Garry Kasparov. This wasn't just about chess - it demonstrated that machines could outperform humans in tasks requiring deep strategic thinking.

🎥 Watch the video of Deep Blue defeating Garry Kasparov.

📌 Learn more via IBM’s website.

 
 

A New Era (2000-Present)

 

2000s: A new millennium


The decade's most consequential development came near its end: ImageNet, launched in 2009 by Fei-Fei Li's team, was a vast library of labelled images that would revolutionise machine learning. Like teaching a child by showing them picture books, ImageNet gave AI systems millions of examples to learn from. This wasn't just a database - it was the key that unlocked modern AI vision.

In 2002, AI made its first mass-market debut with Roomba, the robotic vacuum cleaner. While not as dramatic as Deep Blue's chess victory, it marked something significant: AI moving from research labs into our daily lives.

2005 saw Stanford's Stanley win the DARPA Grand Challenge, navigating 131 miles of desert without human intervention. Meanwhile, Boston Dynamics' BigDog showed how robots could maintain balance on rough terrain. These weren't just technical achievements - they were the first steps toward autonomous vehicles and advanced robotics.

📌 Learn more via Stanford’s website.

🎥 Watch Boston Dynamics BigDog Overview video to learn more.


2011 brought Siri to our pockets, making AI a daily companion for millions. That same year, IBM's Watson won Jeopardy!, showing machines could understand and respond to natural language with human-level accuracy.


🎥 Watch IBM’s Watson and the Jeopardy! Challenge video to learn more.


Then, from 2012 onwards, AI development really began to pick up the pace.

 

2012: Breaking Visual Barriers

2012 marked a turning point when deep learning networks dramatically improved image recognition.

A neural network called AlexNet shattered existing records at the ImageNet competition, cutting error rates nearly in half. This wasn't just an incremental improvement - it launched the deep learning revolution.


📌 Learn more via Pinecone’s website.

 

2016: Masters of the Game

In March 2016, DeepMind's AlphaGo accomplished what many thought impossible: defeating world champion Lee Sedol at Go.

Unlike chess, Go requires intuition and creative strategy. AlphaGo didn't just win - it made moves that experts called "beautiful" and "creative," challenging our understanding of machine intelligence.


🎥 Watch Google DeepMind’s AlphaGo - The Movie to learn more.

 

2018: The Language Breakthrough

2018 brought BERT from Google, transforming how machines understand language. That same year, OpenAI released the first of its GPT models - a line that would culminate in ChatGPT (2022), demonstrating AI that could write, explain, and engage in natural conversation at an unprecedented level. These weren't just chatbots - they were systems that could understand context and nuance in human communication.

📌 Learn more via Google’s website.

📌 Learn more via OpenAI’s website.

 

2022: AI Gets Creative

2022 saw an explosion in AI creativity. DALL-E 2, Midjourney, and Stable Diffusion turned text descriptions into stunning artwork. But this raised new questions about creativity, copyright, and the nature of art itself.

📌 Learn more about DALL-E 2 via OpenAI’s website.

📌 Learn more about Midjourney.

📌 Learn more about Stable Diffusion.

By 2023, the pace of AI development shifted from a steady march to a sprint. New breakthroughs began emerging not yearly or monthly, but weekly. AlphaFold's protein predictions, GPT-4's reasoning capabilities, and Claude's nuanced understanding were just the beginning.

The rate of progress became so rapid that listing every advancement became impossible. Each week brought new models, new capabilities, and new ways of thinking about artificial intelligence. What was groundbreaking one day became the foundation for something even more remarkable the next.

 

Looking Forward

We stand at a unique moment in AI's history. From Turing's theoretical machine to today's transformative AI systems, each breakthrough has built upon the last.

But this isn't just about technological progress – it's about reshaping how humans and machines work together.

The questions we face now are both technical and deeply human:

  • How do we ensure AI remains a tool for human empowerment rather than replacement?

  • How do we maintain the pace of innovation while ensuring ethical development?

  • How do we prepare for a future where the line between human and machine intelligence becomes increasingly complex?

The story of AI isn't just about machines becoming smarter – it's about humans becoming wiser in how we create and ultimately use these tools.
