What is Artificial Intelligence?

Machines that mimic human intelligence.

Artificial Intelligence (AI) has become one of the most transformative technologies of our time, revolutionising industries and reshaping our daily lives. At its core, AI refers to the development of computer systems capable of performing tasks that typically require human intelligence. These tasks include visual perception, speech recognition, decision-making, and language translation.

How could you describe AI to a five-year-old?

Artificial intelligence is like a really smart robot friend.

It helps people by listening, talking, drawing pictures, and even playing games.
It learns new things just like you do when you try again and again.
So, AI is like a magic helper that lives inside computers and phones, making life easier and more fun!

Explain how Large Language Models work for non-technical people.

Large language models are smart computer programs that know lots and lots of words and facts.
They read huge amounts of text—like books, articles, and websites—so they can learn how people talk and write.

When you ask them a question or give them a task, they think really fast and put together words that make sense, almost like a super-powered predictive text.

You can imagine it like a recipe:

  • The LLM mixes up all the reading it’s done

  • It guesses what sounds best to say next

  • It repeats this again and again, super quickly, until you get a full answer!

They don’t “think” like humans, but they’re great at finding the right words based on what they’ve “read.”

It’s like having a helpful assistant who’s read a million books and can answer almost any question you have in a friendly way.
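
If you’re curious what that “guess the next word, then repeat” loop looks like in code, here is a toy Python sketch. The little word table is invented purely for illustration; a real LLM learns far richer patterns with a neural network trained on enormous amounts of text, not a hand-written lookup table.

    import random

    # A toy "next word" table: for each word, some words that often follow it.
    # (Invented for illustration; a real LLM learns these patterns itself.)
    PATTERNS = {
        "the": ["cat", "dog"],
        "cat": ["sat", "ran"],
        "sat": ["down", "quietly"],
    }

    def generate(prompt, steps=3):
        words = prompt.split()
        for _ in range(steps):
            options = PATTERNS.get(words[-1])
            if not options:
                break
            # Guess what sounds best to say next, then repeat.
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down"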

The Evolution of AI

The concept of AI has a rich history dating back to the mid-20th century:

  • 1950: Alan Turing proposes the Turing Test, a method for determining whether a machine can exhibit intelligent behaviour.

  • 1956: The term "Artificial Intelligence" is coined at the Dartmouth Conference, marking the birth of AI as a field of study.

  • 1960s-1970s: Early AI research focuses on symbolic methods and rule-based systems.

  • 1980s: Expert systems gain popularity, demonstrating AI's potential in specific domains.

  • 1990s: Machine learning begins to emerge as a promising approach to AI.

  • 2000s: Advances in computing power and data availability lead to significant progress in AI capabilities.

  • 2010s: Deep learning techniques achieve breakthrough results in various AI tasks.

  • 2020s: Large language models and generative AI capture public imagination and find widespread application.

  • 2022: ChatGPT launches, making conversational AI accessible to millions and sparking global mainstream adoption of generative AI.

  • 2023: The first AI Safety Summit is held, with 29 nations signing a joint declaration on safe and ethical AI development. Multimodal AI models, like OpenAI’s GPT-4 and Google Gemini, allow systems to understand and generate not only text but also images, audio, and more.

  • 2024: Broad industry use of AI in healthcare, finance, education, and entertainment accelerates; AI becomes central to drug discovery, climate modelling, and advanced cybersecurity. Efforts for AI transparency and regulation grow.

  • 2025: OpenAI releases GPT-5, bringing sharper generative capabilities and deeper contextual understanding. AI’s role expands further into daily life, with greater focus on ethical guidelines and international cooperation.

Types of AI

Machine Learning

Machine Learning (ML) is a subset of AI that focuses on creating systems that can learn and improve from experience without being explicitly programmed. ML algorithms use statistical techniques to find patterns in large datasets and make predictions or decisions based on those patterns.

Key types of machine learning include:

1. Supervised Learning: Algorithms learn from labelled data to make predictions on new, unseen data.

2. Unsupervised Learning: Algorithms discover hidden patterns in unlabelled data.

3. Reinforcement Learning: Algorithms learn through trial and error, receiving rewards or penalties for their actions.
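
To make the first of these concrete, here is a minimal supervised-learning sketch in Python, assuming the scikit-learn library is installed. The tiny dataset is invented for illustration.

    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Labelled examples: [hours studied, hours slept] -> pass (1) or fail (0).
    X = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]
    y = [0, 0, 1, 1, 0, 1]

    # Hold back some data to test the model on examples it has never seen.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier().fit(X_train, y_train)  # learn from labels
    print(model.score(X_test, y_test))  # accuracy on the held-out examples
    print(model.predict([[7, 7]]))      # prediction for a new, unseen student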

Computer Vision

Computer Vision is an interdisciplinary field that aims to enable computers to gain high-level understanding from digital images or videos. It involves tasks such as:

  • Image classification

  • Object detection and recognition

  • Facial recognition

  • Scene reconstruction

Computer vision has numerous applications, from autonomous vehicles to medical imaging and surveillance systems.
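
As a rough sketch of the first task, image classification, the snippet below labels a photo with a pretrained ResNet-18 model. It assumes PyTorch and a recent torchvision are installed, and "photo.jpg" stands in for any local image file.

    import torch
    from PIL import Image
    from torchvision import models

    # Load a model pretrained on ImageNet, in inference mode.
    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()

    # Preprocess the image exactly as the model expects (resize, normalise).
    preprocess = weights.transforms()
    batch = preprocess(Image.open("photo.jpg")).unsqueeze(0)

    with torch.no_grad():
        scores = model(batch)

    # Map the highest-scoring output to its human-readable label.
    print(weights.meta["categories"][scores.argmax().item()])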

Natural Language Processing

Natural Language Processing (NLP) focuses on the interaction between computers and human language. It encompasses tasks such as:

  • Speech recognition

  • Machine translation

  • Sentiment analysis

  • Text summarisation

NLP has enabled the development of virtual assistants, chatbots, and language translation services.
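
As one concrete example, sentiment analysis takes only a few lines with the Hugging Face transformers library (assuming it is installed; a default model is downloaded on first use):

    from transformers import pipeline

    # The pipeline picks a reasonable default sentiment model.
    classifier = pipeline("sentiment-analysis")
    print(classifier("This chapter makes AI easy to understand!"))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]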

Generative AI

Generative AI refers to systems that can create brand-new content—text, images, music, video, and even computer code—from simple prompts.

Recent years have seen stunning breakthroughs:

  • Text, audio, and video generation: Multimodal AI models like GPT-5 and Google Gemini generate fluent text, conversational audio, and realistic voices, while dedicated video models now produce high-quality video content across a huge range of topics and styles.

  • Image and design creation: Tools such as DALL-E 3 and Midjourney instantly produce striking images and visual art from text prompts, sketches, or a mix of both.

  • Music and code composition: AI not only composes original music in nearly any genre but also writes, reviews, and debugs complex software code.
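
To see how approachable these systems are to program against, here is a small text-generation sketch using the Hugging Face transformers library. GPT-2 is used simply because it is small and freely downloadable, not because it matches the frontier models named above.

    from transformers import pipeline

    # Load a small, openly available generative model.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("Artificial intelligence will", max_new_tokens=20)
    print(result[0]["generated_text"])  # GPT-2 continues the prompt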

Generative AI is transforming industries, from marketing and education to software and the arts, by empowering creativity, boosting productivity, and enabling personalisation at scale.


However, as these systems evolve, so do challenges around deepfakes, misinformation, and ethical use, driving new regulations and safety standards globally.