
Artificial Intelligence (AI)

What artificial intelligence is, how it actually works, what it can and cannot do, and what questions it raises for education, work, fairness, creativity, and what it means to be human. AI is already shaping the world — understanding it is one of the most important skills of the 21st century, whether or not you ever work in technology.

Key Ideas at This Level (Early Years)
1 Computers follow instructions made by people.
2 Some computers can learn from examples — this is a simple way to understand AI.
3 AI can do some things very well and other things not at all.
4 The people who make AI make choices about how it works.
5 AI is a tool — it depends on how people use it.
Teacher Background

AI at Early Years level is about building the foundational understanding that computers follow instructions made by people, and that some computers can learn patterns from examples in ways that make them seem clever. The concept of a rule or instruction is the right entry point — young children understand rules from their daily life, and a computer program is essentially a set of rules. From there, the idea that some programs learn new rules from examples — which is the basic idea of machine learning — becomes accessible. In low-resource and low-connectivity settings, this teaching does not require any technology. The activities below use physical movement, sorting, pattern recognition, and discussion. The goal is not technical literacy but foundational conceptual understanding: AI is made by people, it follows patterns in data, it can be wrong, and the choices made in designing it reflect the values of the people who made it. This foundation supports everything else in the curriculum. The most important message at this level: AI is not magic, it is not thinking like a person, and it is not inevitable — people make choices about it.

Skill-Building Activities
Activity 1 — Following instructions: you are the computer
Purpose: Children understand what a computer program is — a set of instructions — by experiencing what happens when instructions are followed exactly, even when the result is silly.
How to run it: Tell children: a computer does exactly what it is told. It cannot think for itself — it follows the instructions it is given, step by step. We are going to try this. Ask a child to pretend to be a robot and stand very still. Now give instructions very precisely: move your left foot forward thirty centimetres. Stop. Move your right hand up to the level of your shoulder. Stop. Now say: pick up the cup. The robot says: what cup? Where is it? How high? With which hand? The children will see that the instruction was not precise enough — the robot cannot guess. Computers are the same. Every instruction must be completely precise. Now try a different approach: ask children to write instructions for a very simple task — making a sandwich, opening a door, drawing a square — and then follow another child's instructions exactly as written. The errors that result are funny and illuminating: "open the door" means nothing without specifying which door, where it is, or which direction it opens. Debrief: what does this tell us about how hard it is to write instructions for a computer? What would happen if the instructions were wrong?
💡 Low-resource tip: No technology needed. This activity works in any space with no materials. It is one of the most powerful and enjoyable introductions to computational thinking available and can be repeated with different scenarios. The children's laughter when the robot does something silly is itself a learning moment.
Activity 2 — Learning from examples: teaching the class to sort
Purpose: Children experience the basic mechanism of machine learning — learning patterns from examples — without any technology, understanding that this is how many AI systems work.
How to run it: Collect a set of natural objects — leaves, seeds, stones, sticks, or any available items. Mix them together. Now tell the class: I am going to teach you a rule by showing you examples, not by explaining the rule. Sort some of the objects into two groups without explaining why. Leave some unsorted objects nearby. After sorting ten to fifteen items, ask: can you see the rule? What is it? Now ask children to sort the remaining items using the rule they think they have identified. Check together: did they get it right? Were there any they were not sure about? Were there any the rule did not clearly apply to? Introduce the concept: this is similar to how many AI systems learn. Instead of being told rules, they are shown thousands or millions of examples and they learn to identify patterns. Ask: what could go wrong with this approach? What if the examples were all from the same place and missed important variation? What if the person who sorted the examples made mistakes? Connect to AI in everyday life: photo-recognition on phones learns from millions of labelled photos to identify what is in a new photo. It works well for things it has seen many examples of and less well for things it has seen fewer examples of.
💡 Low-resource tip: Any available objects work. Natural objects from the immediate environment are ideal. The rule can be anything visually identifiable — size, colour, shape, texture, whether it floats. No technology needed.
Activity 3 — What can AI do and what can it not? The AI strengths game
Purpose: Children develop a realistic and nuanced understanding of AI capabilities — moving beyond both fear and uncritical excitement towards informed understanding.
How to run it: Read out a list of tasks and ask children to vote: could a computer do this well, do this badly, or not do it at all? Recognise a dog in a photo — very well. Write a story — it can try, but not the same as a person. Understand when a friend is sad and know the right thing to say — not really. Play chess — very well. Feel happy when it wins — no. Notice that something is unfair — only if it is told what unfair means. Learn a new language — it can learn patterns, but does it understand? Remember every book ever written — yes. Have a favourite colour — no. Know when it is making a mistake — often not. After each item, discuss: why? What does this tell us about what AI is and is not? Introduce the key insight: AI is very good at finding patterns in large amounts of data, and very bad at understanding meaning, feeling, and context the way people do. It does not know what things mean — it knows which things tend to appear together. Ask: does this change how you feel about AI? Is it more impressive, less impressive, or differently impressive than you thought?
💡 Low-resource tip: No technology needed. Works entirely as a class discussion and vote. Use a show of hands or children physically moving to one side of the room for yes, the other for no. The discussion after each item is more valuable than the vote itself.
Reflection Questions
  • Q1: What do you think a computer can do that a person cannot? What can a person do that a computer cannot?
  • Q2: If a computer makes a mistake, whose fault is it?
  • Q3: Have you ever used a computer or a phone to do something? What did it help you with?
  • Q4: Would you trust a computer to make an important decision about you? Why or why not?
  • Q5: If someone made a computer that was as clever as a person, how would you know?
Practice Tasks
Drawing task
Draw a robot or a computer helping a person do something. Write or say: the computer is helping by __________, and a person is still needed to __________.
Skills: Building balanced understanding of AI as a tool that works with people rather than replacing them — and that requires human judgment in important areas
Model Answer

Any drawing showing a human and a computer working together on a task, with the completion naming both a genuine AI capability and a genuine area where human judgment remains essential. Celebrate answers where the human role is specific and genuine — not just watching but doing something the computer cannot.

Marking Notes

Ask: what would happen if the person was not there? This question reveals whether the child understands the human role as genuinely necessary or just decorative.

Question task
Write or say two questions you would want to ask an AI if you could talk to it, and two questions you would ask the person who made it.
Skills: Developing inquiry and critical thinking about AI — distinguishing between what an AI system does and the choices made by the people who created it
Model Answer

Questions for the AI: Do you know that you are a computer? Can you ever be wrong? Questions for the person who made it: How did you decide what it should learn? What happens when it makes a mistake that hurts someone?

Marking Notes

The most interesting answers will show the child already understands the difference between the AI as a system and the human choices behind it. Celebrate questions that probe decision-making, accountability, and the limits of AI knowledge.

Common Mistakes
Common misconception

AI is like a human brain — it thinks and feels the way people do.

What to teach instead

AI systems process information according to patterns learned from data. They do not think, feel, or understand the way human beings do. A language model that produces fluent text does not understand what the words mean — it has learned which words tend to follow which other words in large amounts of human text. The appearance of understanding is real; the understanding itself is not. This distinction is important because it affects when we should trust AI and when we should not.

Common misconception

If a computer says something, it must be true.

What to teach instead

AI systems make mistakes — sometimes very confidently. Language models in particular can produce statements that sound authoritative and are completely wrong — a phenomenon called hallucination. AI systems also reflect the biases in the data they were trained on. A computer that says something is not more trustworthy than the person or data it learned from. Critical evaluation of information from any source — including AI — is essential.

Common misconception

AI decides things on its own — people are not involved.

What to teach instead

Every AI system is designed, built, and trained by people who make countless decisions about what it should learn, what data to use, what outcomes to optimise for, and when to use it. AI does not decide on its own — it reflects the decisions of the people who built it. This means that when AI makes a mistake or causes harm, people are responsible — not the AI. Understanding this is important for thinking clearly about accountability.

Key Ideas at This Level (Primary)
1 How machine learning works — learning patterns from data
2 What AI is good at and what it is not good at
3 Bias in AI — how unfairness gets built into systems
4 AI in everyday life — where students already encounter it
5 Data and privacy — what AI needs and what we give up
6 Human judgement and AI — who should decide what
Teacher Background

AI at primary level means helping students understand the actual mechanisms of artificial intelligence — specifically machine learning — at a conceptual level, and beginning to think critically about where it is used, how it can go wrong, and who is responsible for those failures.

Machine learning

Most contemporary AI systems are built on machine learning — a technique in which a system learns patterns from large amounts of data rather than being explicitly programmed with rules. A facial recognition system, for example, learns to recognise faces by being shown millions of labelled images of faces. A spam filter learns to identify spam by being shown thousands of emails labelled spam and not spam. A language model learns to generate text by being trained on billions of words of human writing. The system learns statistical patterns — which inputs tend to be associated with which outputs — and applies these patterns to new inputs. The key point is that the system learns from the data it is given — and if the data is incomplete, biased, or unrepresentative, the system will be too.
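To make the mechanism concrete, here is a minimal sketch in Python of a system that learns from labelled examples rather than from explicit rules, in the spirit of the spam-filter description above. The messages and the simple word-counting method are invented for illustration; real systems use far larger datasets and far more sophisticated statistics.

```python
# A toy "spam filter" that learns from labelled examples.
# All messages are invented for illustration.
from collections import Counter

# Labelled training data: (message, label) pairs.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to tuesday", "not spam"),
    ("homework due on friday", "not spam"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for message, label in training:
    word_counts[label].update(message.split())

def classify(message):
    """Label a new message by which category's words it shares more of."""
    scores = {
        label: sum(counts[word] for word in message.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize waiting"))       # -> spam
print(classify("tuesday homework update"))  # -> not spam
```

Change the training examples and the classifier's behaviour changes with them: the learned "rules" are nothing more than counts extracted from the data, which is why incomplete or biased data produces an incomplete or biased system.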

Bias in AI

AI bias refers to systematic errors in AI output that reflect biases in the training data, in the design of the system, or in the choices made about what the system should optimise for. Documented examples include facial recognition systems that are significantly less accurate for darker-skinned faces (because training datasets contained more images of lighter-skinned faces), hiring algorithms that discriminated against women (because they were trained on historical hiring data from male-dominated industries), and healthcare algorithms that allocated less care to Black patients (because cost was used as a proxy for need, and Black patients had historically had less spent on their care). These are not hypothetical future risks — they are documented present harms.
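The healthcare case turns on a mechanism worth seeing directly: a proxy variable that carries historical inequality into the prediction. Below is a minimal simulation, with all numbers invented, of how ranking patients by historical cost under-allocates care to a group whose care was historically under-funded, even though true need is identical across groups.

```python
# Toy simulation of "cost as a proxy for need" bias.
# Groups, needs, and spending multipliers are all invented.
import random

random.seed(0)
patients = []
for group in ("A", "B"):
    for _ in range(100):
        need = random.uniform(0, 10)  # true health need: same distribution for both groups
        # Group B's care was historically under-funded, so its recorded costs are lower.
        cost = need * (1.0 if group == "A" else 0.6)
        patients.append({"group": group, "need": need, "cost": cost})

# The "algorithm": give the 50 extra-care slots to the patients with the
# highest historical cost, i.e. use cost as a stand-in for need.
selected = sorted(patients, key=lambda p: p["cost"], reverse=True)[:50]

for group in ("A", "B"):
    slots = sum(1 for p in selected if p["group"] == group)
    print(f"group {group}: {slots} of 50 slots")
# Group A receives most of the slots despite identical need, because the
# proxy (cost) faithfully encodes the historical inequality.
```

The mathematics does exactly what it was asked to do; the unfairness enters through the choice of proxy, which is a human design decision.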

AI in context

The word AI covers an enormous range of technologies — from simple rule-based systems to sophisticated language models. Teachers should help students be specific about which kind of AI they are discussing, because the capabilities, limitations, and ethical implications differ significantly. Importantly, in many low-connectivity and low-resource contexts, the AI systems most likely to affect students' lives are not the sophisticated language models that receive most media attention but automated systems in healthcare, finance, and public services that make decisions about resource allocation with much less human oversight.

Key Vocabulary
Artificial intelligence (AI)
Computer systems designed to perform tasks that normally require human intelligence — such as recognising images, understanding language, making recommendations, or making decisions. AI is not one technology but a broad family of related approaches.
Machine learning
A type of AI in which a system learns patterns from large amounts of data rather than being explicitly programmed with rules. Most modern AI systems — including image recognition, language models, and recommendation systems — use machine learning.
Training data
The large collection of examples used to teach a machine learning system. The quality, diversity, and representativeness of training data directly determine the quality and fairness of the AI system produced.
Algorithm
A set of rules or instructions that a computer follows to complete a task or solve a problem. AI systems are built on algorithms — but unlike traditional algorithms, machine learning algorithms develop their own rules from data rather than having all rules specified in advance.
AI bias
Systematic errors or unfairness in AI output — typically caused by biased training data, design choices that reflect the assumptions of developers, or decisions about what the system should optimise for that disadvantage certain groups.
Hallucination
When an AI system — particularly a language model — produces a confident, fluent statement that is factually incorrect or completely invented. Hallucination is a serious limitation of current language models and a major reason they cannot be trusted without verification.
Automation
Using machines or computers to perform tasks that were previously done by people. AI enables new forms of automation — particularly in tasks that previously required human judgment — raising important questions about work, employment, and what humans do best.
Data privacy
The right to control information about yourself — who collects it, what they use it for, and who they share it with. AI systems depend on data and often collect more than users realise or intend to share.
Skill-Building Activities
Activity 1 — Training an AI: the sorting simulation
Purpose: Students experience the process of machine learning — training a system on examples and then testing it — understanding what works, what goes wrong, and why data quality matters.
How to run it: Divide the class into two groups: trainers and the AI. The AI group must learn to distinguish two categories — for example, sentences that express positive feelings versus negative feelings, or descriptions of healthy versus unhealthy meals — based only on examples shown by the trainers, with no explanation of the rule. The trainers prepare fifteen to twenty example cards (written on paper or spoken aloud): five clear positive examples, five clear negative examples, and five ambiguous ones they hold back. The AI group sees the first ten examples and tries to identify the pattern. The trainers then test the AI with the five ambiguous examples. How well does the AI do? Now introduce complications: what if the trainer had mostly shown examples from one context — only happy sentences about sport? What happens when the AI meets a happy sentence about food? What if some of the training examples were mislabelled? Debrief: what does this tell us about why the quality, diversity, and labelling of training data is so important? Where might real-world training data be incomplete or biased? A short code parallel for teachers follows this activity.
💡 Low-resource tip: Works entirely without technology. Paper cards are ideal but spoken examples also work. The ambiguous test cases are the most valuable part — they reveal what the system learned and what it missed. Any two clearly distinguishable categories work: formal versus informal language, questions versus statements, local versus imported foods.
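For teachers who want to see the same failure mode outside the classroom simulation, the sketch below mirrors it in code: a word-overlap "classifier" trained only on sentences about sport has no basis for judging a happy sentence about food. Sentences and method are invented for illustration.

```python
# Code parallel to the training simulation: training data from one
# context (sport) fails on another (food). Sentences are invented.
from collections import Counter

training = [
    ("we won the match today", "positive"),
    ("great goal in the game", "positive"),
    ("we lost the match badly", "negative"),
    ("terrible game no goals", "negative"),
]

word_counts = {"positive": Counter(), "negative": Counter()}
for sentence, label in training:
    word_counts[label].update(sentence.split())

def classify(sentence):
    """Score by words shared with each category's training sentences."""
    scores = {
        label: sum(counts[word] for word in sentence.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# A happy sentence about food -- a context the "AI" has never seen.
# It shares no words with the training data, so both scores are zero
# and the answer is effectively arbitrary.
print(classify("this meal tastes wonderful"))
```

This is the activity's debrief point in miniature: the system learned the sports vocabulary of its examples, not the feeling the trainers intended.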
Activity 2 — AI bias: when the data is unfair
Purpose: Students understand how AI bias arises from biased data and design choices — and connect this to real documented examples of AI causing harm to specific groups.
How to run it: Introduce the core problem: AI systems learn from data, and data reflects the world — including the unfairness in the world. If you train a hiring AI on twenty years of hiring records from a company that mainly hired men, the AI will learn that men are the right people to hire. It is not deliberately unfair — it is accurately reflecting an unfair history. Now present three documented cases of AI bias. Case 1 — Facial recognition: in 2018 a study found that some commercial facial analysis systems had error rates of 35 percent for dark-skinned women compared to 0.8 percent for light-skinned men. Ask: why might this happen? What could be done about it? Case 2 — Healthcare: an algorithm used in US hospitals to allocate healthcare resources was found to systematically allocate less care to Black patients than to white patients with equal health needs. The algorithm used cost of care as a proxy for need — but Black patients had historically had less spent on their care, so lower cost was misread as lower need. Case 3 — Language: many AI language systems perform less well in languages with fewer speakers, because less training data is available. Ask: what does this mean for people whose languages are under-resourced? After each case: who was harmed? Who was responsible? What should have been done differently? What should be done now?
💡 Low-resource tip: The cases can be presented verbally without any technology or printed materials. Teachers should adapt the examples to include any documented AI bias relevant to their local context — cases from the Global South, from local governments or companies using automated decision-making, are more powerful than only US or European examples.
Activity 3 — Who decides? Human judgement and AI decisions
Purpose: Students think carefully about which decisions should and should not be delegated to AI — developing the evaluative framework they need to be informed participants in debates about AI governance.
How to run it: Present a spectrum of decision types and ask students to place each one on a line from "fully AI is fine" to "must be human". Medical diagnosis — whether a scan shows cancer. Choosing which news stories to show someone. Deciding whether someone is granted bail (released from prison before trial). Recommending which students should get extra support in school. Deciding who gets a job interview. Setting the price of insurance. Deciding whether a social media post should be removed. Deciding who gets a loan. After placing each decision, ask: what are the arguments for AI involvement? What are the risks? Who would be most harmed if the AI made a mistake? Is there a case where AI assistance is fine but AI decision-making is not? Now introduce the concept of meaningful human control: the idea that for decisions with significant consequences for people's lives, a human being must remain genuinely responsible — not just rubber-stamping an AI recommendation but actually understanding and being accountable for the decision. Ask: what would meaningful human control look like for each of the decisions on the list? What conditions are needed for it to be genuine rather than performative?
💡 Low-resource tip: Works entirely through discussion. The spectrum line can be drawn on the board. Students can vote by raising hands or moving physically. The discussion of what meaningful human control means is the most important part — it is directly relevant to any context where automated decision-making affects people's lives.
Reflection Questions
  • Q1: If an AI makes a decision that harms someone — a wrong medical diagnosis, an unfair loan refusal — who is responsible? The AI? The company that made it? The person who used it?
  • Q2: Should AI be used to make decisions about students — which class they go in, what support they get, how their work is graded? What would be gained and what would be lost?
  • Q3: What information about yourself do you think you have already given to AI systems, knowingly or unknowingly?
  • Q4: If an AI system is more accurate on average than a human decision-maker, should we use it — even if it is less accurate for some groups than others?
  • Q5: What is the difference between AI helping a doctor make a diagnosis and AI making the diagnosis itself? Does this distinction matter?
  • Q6: Could AI ever be used to make your country more or less democratic? How?
Practice Tasks
Task 1 — Investigate an AI system
Choose one AI system that you know is used somewhere in the world — in healthcare, education, criminal justice, hiring, or social media. Write: (a) what the system does; (b) what data it uses to learn; (c) one way it could cause harm if it is biased or wrong; (d) who would be most affected by that harm; (e) what safeguards you think should exist. Write 4 to 6 sentences.
Skills: Applying AI concepts — training data, bias, harm, and accountability — to a specific real-world system
Model Answer

The system I am looking at is an algorithm used by some hospitals to predict which patients are likely to need extra medical care, so that care can be allocated in advance. It uses data about each patient — their medical history, their previous visits to hospital, and how much has been spent on their care in the past. It could cause harm if it has been trained on historical data that reflected racial or economic inequality — for example, if patients from poorer communities had less spent on their care in the past, the algorithm might predict they need less in the future even when they actually need more. The people most affected would be those who already face the greatest barriers to healthcare — poor patients, patients from minority communities, and patients in rural areas. The safeguards I think should exist are: the algorithm's predictions should always be reviewed by a doctor, the algorithm should be regularly tested for differential accuracy across different patient groups, and patients should be able to know that the algorithm is being used in their care and to challenge its decisions.

Marking Notes

Award marks for: a specific and real AI system rather than a vague or invented one; a genuine understanding of what data the system uses; a harm scenario that is specific and plausible rather than generic; clear identification of who bears the greatest risk; and safeguards that are specific and meaningful rather than a generic "the AI should be fair". Strong answers will connect the harm to a specific mechanism — not just "this could be biased" but here is specifically how the bias would arise and who it would hurt.

Task 2 — Letter to an AI developer
Write a letter to the team developing a new AI system that will be used to recommend which students get extra academic support in schools in your country. Tell them: (a) what you want them to get right; (b) what risks you are most worried about; (c) three specific questions you want answered before the system is used; (d) what meaningful human oversight should look like. Write the letter formally and specifically — as if it will actually be read.
Skills: Applying AI ethics to a specific educational context — practising the civic skill of holding technology developers accountable
Model Answer

Dear Development Team, I am writing as a student who would be affected by this system. I want it to be fair and to genuinely help students who need support — not to label students and limit their opportunities. My biggest worry is that the system will learn from historical data that reflected which students teachers already saw as needing help, which may have been shaped by bias. Students from poorer families, students who speak a different language at home, and girls in subjects like mathematics may have historically been offered less support than they needed — and a system trained on that data will repeat the same errors. My three questions are: what data exactly is the system trained on? Has it been tested for equal accuracy across students from different backgrounds? And can a student or their family challenge a recommendation the system makes? For meaningful oversight, I think every recommendation the AI makes should be reviewed by a teacher who knows the student and who can override it, the system's recommendations should be explained in plain language, and the school should keep records of how often recommendations are overridden and why. I hope you will take this seriously.

Marking Notes

Award marks for: genuine engagement with specific AI risks rather than general worry; a clear understanding of how bias arises in training data; three questions that are specific, answerable, and reveal genuine critical thinking; and a model of oversight that is meaningful and specific rather than just saying people should check it. Strong answers will demonstrate that the student understands the difference between AI assistance (fine in this context with appropriate oversight) and AI decision-making (not appropriate for educational decisions without genuine human review).

Common Mistakes
Common misconception

AI is objective and unbiased because it is based on data and mathematics rather than human opinion.

What to teach instead

AI systems inherit the biases of the data they are trained on and the design choices of the people who built them. Data is not neutral — it is a record of a world that contains inequality, historical injustice, and systematic discrimination. Mathematics applied to biased data produces biased results with mathematical precision. The objectivity of the method does not neutralise the bias of the inputs. In fact, the apparent objectivity of AI can make its biases more dangerous — because they are harder to challenge than openly subjective human decisions.

Common misconception

AI will take all the jobs and there will be nothing for people to do.

What to teach instead

AI is changing the nature of work significantly, but the history of technological change suggests that new technologies tend to change which tasks people do rather than eliminating the need for people entirely. Tasks requiring physical presence, human relationship, ethical judgment, creative originality, and contextual understanding in complex real-world situations are much harder to automate than tasks involving pattern recognition in large datasets. The more important question is not whether AI will take all jobs but which jobs, in which communities, and what will replace them — and whether the people affected have access to the education and support they need to adapt.

Common misconception

If you are not a programmer or scientist, AI has nothing to do with you.

What to teach instead

AI systems are increasingly involved in decisions about healthcare, education, employment, credit, criminal justice, housing, and social media for people all over the world — regardless of whether those people understand or have chosen to use AI. A student whose teacher uses an AI grading tool, a person whose loan application is assessed by an algorithm, a patient whose treatment is recommended by a diagnostic AI — all of these people are affected by AI whether or not they are involved in building it. Understanding AI — what it does, how it fails, and who is responsible — is a civic literacy skill for everyone, not only for technologists.

Common misconception

The countries and companies at the frontier of AI development are making decisions that are good for everyone.

What to teach instead

AI development is concentrated in a small number of countries and companies, and the decisions made about which AI systems to build, what data to use, what to optimise for, and where to deploy are driven primarily by commercial and geopolitical interests, not by the interests of the people most affected. Many of the people most affected by AI systems — including communities in the Global South where AI systems are increasingly deployed but rarely developed — have very little voice in those decisions. This is not an argument against AI but a reason why democratic governance of AI is important, and why digital literacy and civic engagement around technology matter globally, not only in wealthy countries.

Key Ideas at This Level (Secondary)
1 How large language models and generative AI work — and what their limits are
2 AI safety and alignment — the problem of making AI do what we actually want
3 AI governance — who should make decisions about AI and how
4 AI and global inequality — how AI development and deployment affects different parts of the world
5 The philosophical questions AI raises — consciousness, intelligence, and what makes us human
6 AI and education — what it changes and what it does not
Teacher Background

Secondary AI teaching engages students with the deeper technical concepts, the philosophical questions, and the political dimensions of AI — preparing them to be informed participants in one of the most significant technological transitions in human history.

Large language models

The most significant recent development in AI is the emergence of large language models (LLMs) such as GPT, Gemini, and Claude. These systems are trained on enormous quantities of text and learn to predict which words and sentences are likely to follow which others. This produces systems that can generate fluent, apparently knowledgeable text on almost any topic.

Important things to understand

LLMs do not know what words mean — they know which words tend to appear near which other words. They do not have beliefs, intentions, or knowledge — they have learned statistical patterns in language. They can hallucinate confidently and fluently. Their outputs reflect the biases of their training data. They are extraordinarily good at producing text that sounds authoritative and is wrong.
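A toy model makes "statistical patterns in language" concrete. The sketch below, over a tiny invented corpus, counts which word follows which and predicts the most frequent follower. A real LLM uses a neural network trained on billions of documents, but the core idea is the same in miniature: it generates likely continuations rather than retrieving meaning.

```python
# A toy next-word predictor: pure word-pair counts, no understanding.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # -> 'cat' (the most common follower of 'the')
print(next_word("sat"))  # -> 'on'
# The model has no idea what a cat is. It knows only that certain words
# tend to appear after certain other words -- which is also why, asked
# about something outside its counts, it can only guess.
```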

AI safety and alignment

One of the most important problems in AI research is alignment — ensuring that AI systems do what their designers actually intend rather than optimising in ways that produce harmful unintended consequences. This is harder than it sounds: specifying what you want an AI to do precisely enough that it cannot find a technically compliant but practically harmful way to do it is one of the deepest challenges in computer science. The alignment problem is both a technical problem and a values problem — you cannot align an AI with human values unless you can specify human values precisely and consistently, which human beings find very difficult.
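A tiny invented scenario shows why precise specification is hard. The optimiser below is given a measured objective (a sensor's cleanliness score) as a stand-in for the real goal (a clean room), and it correctly maximises the wrong thing.

```python
# Toy illustration of objective misspecification. The scenario and
# numbers are invented: the stated objective is the sensor's score,
# the intended objective is actual cleanliness.
actions = {
    #                      (measured score, actual cleanliness)
    "sweep the floor":     (7, 7),
    "scrub everything":    (9, 9),
    "cover the sensor":    (10, 0),  # maximises the metric, cleans nothing
}

# The optimiser can only see the stated objective.
best = max(actions, key=lambda a: actions[a][0])
print("optimiser picks:", best)  # -> cover the sensor
measured, actual = actions[best]
print("measured:", measured, "actual:", actual)
# The system did exactly what it was told -- maximise the measured
# score -- which turned out not to be what its designers wanted.
```

Real alignment failures are subtler, but the structure is the same: a technically compliant solution to the stated objective that defeats the intended one.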

AI governance

Who should make decisions about how AI is developed and deployed? Currently these decisions are made primarily by a small number of large technology companies and a small number of governments. Many of the people most affected — in lower-income countries, in marginalised communities, in the Global South — have little or no voice in these decisions. The question of democratic AI governance is one of the most important political questions of the coming decades.

AI and education

AI raises specific questions about learning and assessment that are directly relevant to students. If AI can write essays, solve problems, and produce creative work, what does this mean for how students should be taught and assessed? The honest answer is that this is genuinely uncertain — and that the most useful educational responses are those that develop human capacities AI cannot replicate (depth of understanding, genuine creativity, ethical judgment, contextual wisdom) rather than those that simply try to prevent AI use.

Key Vocabulary
Large language model (LLM)
A type of AI trained on enormous amounts of text to predict and generate language. LLMs can produce fluent, apparently knowledgeable text on almost any topic — but they do not understand meaning and can hallucinate confidently and fluently.
Neural network
A computational architecture loosely inspired by the structure of the brain — consisting of layers of connected nodes that process information. Most modern AI systems, including LLMs and image recognition systems, are built on neural networks.
Alignment
The problem of ensuring that an AI system does what its designers actually intend — that it pursues goals that are genuinely beneficial rather than technically compliant but practically harmful. Alignment is one of the central unsolved problems in AI safety.
AI governance
The systems, institutions, laws, and norms through which decisions about AI development and deployment are made and held accountable. AI governance includes technical safety standards, legal regulations, industry self-regulation, and democratic oversight.
Explainability
The degree to which an AI system's decisions can be understood and explained — by the people who built it and by those affected by it. Many powerful AI systems are black boxes — they produce outputs without any accessible explanation of how those outputs were reached.
Generative AI
AI systems that can generate new content — text, images, audio, video, or code — rather than only classifying or predicting. Generative AI raises specific questions about authorship, originality, misinformation, and what human creativity involves.
Digital divide
The gap between those who have reliable access to digital technology and those who do not — across countries, communities, and demographics. AI risks widening the digital divide by concentrating its benefits in already-advantaged communities.
Automation bias
The tendency of people to over-trust automated or AI recommendations — deferring to what a system says even when their own judgment or other evidence suggests it is wrong. Automation bias is one of the most important risks in human-AI collaboration.
Surveillance capitalism
A term coined by Shoshana Zuboff for the economic system in which personal data is collected at scale and used to predict and influence behaviour, with the resulting predictions sold to advertisers. Many major commercial AI systems are supported by and contribute to surveillance capitalism.
Epistemic autonomy
The ability to form your own beliefs through your own reasoning — independently of manipulation or undue influence. AI systems — particularly recommendation algorithms and generative AI — raise important questions about epistemic autonomy: whether they help or hinder people in thinking for themselves.
Skill-Building Activities
Activity 1 — What do large language models actually do? Understanding the technology
Purpose: Students develop accurate conceptual understanding of what LLMs are and are not — moving beyond both uncritical enthusiasm and uninformed fear towards the specific, grounded understanding needed for informed civic engagement.
How to run it: Begin by asking students what they think a large language model like ChatGPT or Claude does when it answers a question. Collect answers — they will range from "it thinks" to "it searches the internet" to "it looks things up in a database". Now explain what it actually does: an LLM learns statistical patterns from enormous amounts of text — billions of documents from the internet, books, and other sources. When given a prompt, it predicts, word by word, which text is most likely to follow. It does not know what the words mean. It does not have beliefs or knowledge. It does not search for the right answer — it generates the most statistically likely sequence of words given the input. Now explore the implications. Implication 1 — Hallucination: because it is generating likely-seeming text rather than retrieving accurate facts, it will sometimes produce confident, fluent statements that are completely wrong. Ask: what are the practical implications of this? Implication 2 — Bias: because it has learned from text produced by people, it has absorbed the biases, assumptions, and errors of that text. Ask: whose text is over-represented in training data and whose is under-represented? Implication 3 — Authority: because LLM output is fluent and confident, people tend to trust it more than they should. Ask: what skills do you need to evaluate LLM output critically? End with: knowing this, how should LLMs be used in education? In journalism? In healthcare? In law?
💡 Low-resource tip: Works entirely without technology. If a device is available, a live demonstration of an LLM hallucinating confidently is extremely powerful — ask it a very specific factual question about a local event or person and check the output. Without technology, described examples of hallucinations work effectively. The conceptual discussion is more important than the demonstration.
Activity 2 — AI governance: who should decide and how?
Purpose: Students engage with the question of how AI should be governed — who makes the rules, how those rules are enforced, and how people who are affected but not at the table can have a voice.
How to run it: Present the current situation: most decisions about which AI systems are built, how they work, and where they are deployed are made by a small number of large technology companies — primarily in the United States and China — and to a lesser extent by national governments. The people most affected by these decisions — including communities in countries with little AI development, marginalised communities in wealthier countries, and future generations — have very little voice. Now examine three different approaches to AI governance. Approach 1 — Industry self-regulation: companies set their own standards and are responsible for ensuring their systems are safe and fair. Ask: what are the strengths and limits of this? What incentives do companies have to govern their AI well, and where do those incentives fail? Approach 2 — National regulation: governments pass laws setting standards for AI safety, transparency, and accountability. Ask: how effective can national regulation be when AI systems operate globally? Which countries have the most leverage, and why? Approach 3 — International governance: an international body — like a global equivalent of the International Atomic Energy Agency for nuclear technology — sets standards that all countries and companies must meet. Ask: what would make this effective? What makes it hard? Now ask: whose voices are currently missing from AI governance? What mechanisms could include them? What would you advocate for if you were representing your community's interests in a global AI governance conversation?
💡 Low-resource tip: Works entirely through discussion. The three governance approaches can be written on the board. Use locally relevant examples of AI governance — or the absence of it — where possible. Students in countries with active AI governance debates will have direct experience to draw on.
Activity 3 — AI and what makes us human: the philosophical questions
Purpose: Students engage seriously with the philosophical questions AI raises about consciousness, intelligence, creativity, and what is distinctive about human experience — developing the reflective depth that technical AI literacy alone cannot provide.
How to run it: Present the Turing Test: in 1950, Alan Turing proposed that if a machine could carry on a conversation indistinguishable from a human, we should consider it intelligent. Ask: is this a good test? What does it actually measure? Could a machine pass this test without being intelligent, conscious, or experiencing anything at all? Introduce the Chinese Room thought experiment by John Searle: imagine a person in a room who receives questions in Chinese, looks up responses in a book of rules, and passes back correct Chinese answers — without understanding any Chinese at all. Does the room understand Chinese? Does the person? Ask: does the Chinese Room show that even very sophisticated AI cannot genuinely understand — only simulate understanding? Or does it show something else? Now turn to the specific questions AI raises about human distinctiveness. Question 1: if AI can produce creative work that people find moving, funny, or beautiful, what does this tell us about what creativity is? Question 2: if AI can apparently express empathy and care, what does this tell us about what empathy is? Question 3: if AI can make better decisions than humans in some domains, what does this tell us about the value of human judgment? Connect to the creativity skills topic if students have engaged with it.
💡 Low-resource tip: Works entirely through discussion. No technology needed. The philosophical questions are the most important part — they develop the depth of thinking that will make students genuinely prepared for an AI-shaped world, not just technically literate about how it works.
Reflection Questions
  • Q1: Large language models produce fluent, confident text that is sometimes completely wrong. What does this mean for how we should use AI in education, medicine, journalism, and law?
  • Q2: AI systems are primarily developed by a small number of large companies in a small number of countries. What are the implications of this concentration of power for the rest of the world?
  • Q3: If an AI system is on average more accurate than a human decision-maker, but less accurate for certain groups, should it be used? What conditions would need to be met for its use to be ethically acceptable?
  • Q4: John Searle's Chinese Room argument suggests that even the most sophisticated AI systems cannot genuinely understand anything — they only process symbols. Do you find this argument convincing? What would change if it is right?
  • Q5: Should AI systems used in education be transparent about what they are? Is there a meaningful difference between a student using AI assistance and a student submitting AI-generated work as their own?
  • Q6: What would a genuinely democratic approach to AI governance look like — one that included the voices of communities most affected, not only those with the most technical or economic power?
Practice Tasks
Task 1 — AI policy proposal
You have been asked to write a short policy proposal for how AI should be regulated in one specific area: healthcare, education, criminal justice, or hiring. Write: (a) what specific AI uses you would permit, with what conditions; (b) what specific AI uses you would prohibit or restrict; (c) what transparency and explainability requirements you would impose; (d) how affected people could challenge AI decisions; (e) how the policy would be enforced. Write 300 to 400 words.
Skills: Applying AI ethics and governance concepts to a specific policy domain — practising the civic skill of translating values into concrete policy
Task 2 — Essay: AI and humanity
Choose ONE of the following questions and write a 400 to 600 word essay. (a) AI cannot be truly creative or truly intelligent — it can only simulate these things. What hangs on whether this is true? (b) The development of AI is being driven by the interests of a small number of wealthy countries and companies. What should the rest of the world do about this? (c) AI will transform education — some of those changes will be beneficial and some will be harmful. What determines which it will be?
Skills: Constructing a reasoned argument about AI that engages with technical, ethical, and political dimensions
Common Mistakes
Common misconception

AI systems that pass the Turing Test — that seem human in conversation — are genuinely intelligent and conscious.

What to teach instead

The Turing Test measures whether a system can produce human-seeming text — not whether it understands, is conscious, or experiences anything. Current large language models can pass Turing-style tests in many contexts without any understanding, consciousness, or inner experience. This matters because it means impressive AI performance in conversation should not be taken as evidence of intelligence or consciousness in the philosophically significant senses of those terms. The appearance of understanding — which is real — should not be confused with understanding itself.

Common misconception

AI regulation will slow down beneficial AI development and leave us worse off.

What to teach instead

This argument — most often made by those who benefit from unregulated AI development — conflates all AI development with beneficial AI development. Regulation that prevents harmful, biased, or unsafe AI does not slow beneficial development — it redirects development away from harmful directions. Aviation, pharmaceuticals, food, and finance are all heavily regulated industries that nonetheless continue to innovate. The question is not whether to regulate but how — and what values should guide the rules. Unregulated development optimises for what is profitable, not for what is beneficial.

Common misconception

Using AI is always cheating in education.

What to teach instead

Whether using AI constitutes cheating depends entirely on what the educational task is intended to develop and assess. If the goal is to develop a student's ability to think, research, and argue independently, then submitting AI-generated work as your own defeats that purpose. But using AI as a tool — to check grammar, to explore different perspectives, to get feedback on a draft — may be entirely appropriate and may enhance learning. The honest question is not whether AI is used but whether the student has genuinely engaged with the learning the task is designed to produce. This requires teachers and students to have honest conversations about purpose and process rather than simple rules about tools.

Common misconception

AI is a global technology that benefits everyone equally.

What to teach instead

AI development is geographically concentrated, and its benefits and harms are distributed very unequally. The communities most likely to benefit are those in wealthy countries with reliable internet access, in languages with large amounts of training data, and with the technical skills to use AI tools effectively. The communities most likely to bear harms — from biased algorithmic decision-making, data extraction without consent, surveillance, and the disruption of labour markets — are often those least involved in AI governance. Treating AI as uniformly beneficial ignores the significant evidence that it is reproducing and sometimes amplifying existing global inequalities.

Further Practice & Resources

Key texts and resources:
• Kate Crawford, Atlas of AI (2021, Yale University Press) — the most comprehensive and readable account of the material, labour, and political dimensions of AI; essential reading for teachers and strong students.
• Safiya Umoja Noble, Algorithms of Oppression (2018, NYU Press) — documents how search algorithms and AI systems reproduce and amplify racial and gender bias, with specific documented examples.
• Shoshana Zuboff, The Age of Surveillance Capitalism (2019, PublicAffairs) — the most complete treatment of how data and AI are used in commercial surveillance; ambitious but important.
• Philosophy of AI — John Searle's original Chinese Room paper (1980, freely available online) and the responses to it provide the most important philosophical debate about AI and understanding.
• Brian Christian, The Alignment Problem (2020, W. W. Norton) — the most readable introduction to AI safety and the challenge of making AI systems do what we want.
• Stuart Russell, Human Compatible (2019, Viking) — makes the case for a new approach to AI development from one of the field's founders.
• AI and global inequality — the AI Now Institute (ainowinstitute.org) publishes freely available research on the social impacts of AI; the African Observatory on Responsible AI (aorai.org) provides specifically African perspectives on AI governance.
• Practical AI literacy — the MIT Media Lab's Moral Machine project explores AI ethics through accessible online exercises; Google's Teachable Machine (teachablemachine.withgoogle.com) allows students to train simple AI models without any programming — a powerful hands-on learning tool where internet access is available.
• AI and education — the UNESCO report AI and Education: Guidance for Policy-Makers (2021) is freely available and directly relevant to global educational contexts.