
AI and Algorithms

What artificial intelligence and algorithms are, how they affect our lives, what can go wrong, and how societies are trying to make them fair and accountable.

Core Ideas
1 Computers can do some things really well and some things badly
2 A computer decision is not always fair
3 People should be able to ask for help from a real person
4 We can choose how much we use technology
5 Humans are in charge of computers, not the other way round
Background for Teachers

Young children encounter AI and algorithms every day — in phones, tablets, toys, games, and apps — often without knowing it. Children do not need the word 'algorithm' or 'artificial intelligence'. But they can understand the simple idea that computers follow instructions to make decisions, that these decisions can be helpful or unhelpful, and that people should always be able to ask for help from a real human when they need it. They can also learn that humans create computers — computers do not think for themselves — so it is humans who are responsible when things go wrong. These early ideas prepare children for a life in which algorithms will make ever more decisions about them, from what they see on screens to what jobs, loans, and services are offered to them. Help them stay curious about how things work and confident that a real human can always be asked. No materials are needed.

Classroom Activities
Activity 1 — Computers help with some things, not others
Purpose: Children notice that computers are good at certain tasks but not others.
How to run it: Ask: what do computers or phones or tablets help grown-ups with? Collect answers. Writing messages. Finding directions. Playing music. Finding information. Taking photos. Remembering appointments. Now ask: what are they not so good at? Listening when you are sad. Giving a cuddle. Knowing when you are really tired. Understanding a joke that depends on a look. Discuss: computers do some things well. They remember lots of facts. They do calculations very quickly. But there are things only a person can do — caring, listening properly, understanding feelings, using kindness when it is needed. A good life uses both. Ask: what would you rather share with a person than a computer?
💡 Low-resource tip: Discussion only. No materials needed.
Activity 2 — When computers get it wrong
Purpose: Children understand that computer decisions are not always right and that humans can help.
How to run it: Tell a simple story. A family uses an app to get a lift home. The app says the car will arrive in five minutes. But the car takes twenty minutes. The family waits, getting cold. Ask: did the app get it right? No. Now imagine if there were no real person to call — just the app. What would the family do? Just wait with no one to talk to. Now imagine there is a real person they can call or message. What can the real person do? Explain what happened. Help find another car. Apologise. Discuss: computers do lots of things well, but they also make mistakes. When that happens, there should be a real person to help. This is why shops, hospitals, and important services always keep a human available — computers are tools, not replacements for people.
💡 Low-resource tip: Tell the story verbally. No materials needed.
Activity 3 — We decide how to use computers
Purpose: Children understand that they can choose how much to use technology.
How to run it: Ask: are there times when you use a tablet, phone, or computer? Are there times when you do something else? Discuss: computers and apps can be fun. They can also be hard to stop using — they are often designed to keep you watching or playing. Ask: how do you feel after you have been on a screen for a long time? Some children say tired, grumpy, or like they have forgotten what they wanted to do. How do you feel after playing outside, reading a book with someone, or drawing? Often: happier, calmer, more relaxed. Discuss: we can choose. Screens can be part of life, but they do not have to fill all our time. Good things in life — family, friends, playing, making things, going outside — mostly happen off screens. Ask: what is one thing you love doing that does not involve a screen?
💡 Low-resource tip: Discussion only. No materials needed.
Discussion Questions
  • Q1: What are some things computers do very well?
  • Q2: What are some things only a person can do?
  • Q3: Have you ever had a computer or app give you the wrong answer? What happened?
  • Q4: What is one of your favourite things to do that does not use a screen?
  • Q5: Who do you think should be in charge — humans or computers?
Writing Tasks
Drawing task
Draw a picture of something computers help with, and something only a person can do. Write or say: Computers help with ___________. Only a person can ___________.
Skills: Distinguishing computer and human strengths
Sentence completion
Computers can be helpful when ___________. We should always be able to ask a real person when ___________.
Skills: Articulating appropriate use and the value of humans
Common Misconceptions
Common misconception

If a computer said it, it must be right.

What to teach instead

Computers make mistakes. Sometimes the information they have is old or wrong. Sometimes they do not understand what we really want. Sometimes they give answers that are not true. It is always okay to check with a real person — a teacher, a parent, a librarian — especially for important things.

Common misconception

Computers are magic and we cannot understand how they work.

What to teach instead

Computers are made by people. People wrote the instructions that computers follow. When a computer does something, it is because someone told it how. This means we can understand how computers work — at least the basics — and we can change how they work if we need to. Computers are tools, not magic.

Core Ideas
1 What algorithms and AI are
2 How AI is used today
3 The good things AI can do
4 Where AI goes wrong — bias and mistakes
5 AI and privacy
6 Humans in charge of technology
Background for Teachers

An algorithm is a set of step-by-step instructions for solving a problem or making a decision. Every recipe is an algorithm. Every set of assembly instructions is an algorithm. Computers follow algorithms extremely fast, on a huge scale. Artificial intelligence (AI) is a specific kind of algorithm designed to do tasks that usually need human intelligence — recognising faces, understanding language, making predictions, playing games. Modern AI learns from large amounts of data rather than being programmed step by step. The most powerful modern AI systems — like ChatGPT, image generators, and others — use 'machine learning' and specifically 'large language models' or 'deep learning' to find patterns in huge datasets and produce responses. Algorithms and AI are already deeply woven into daily life. Search engines decide what you see first. Social media feeds choose which posts you see. Maps apps choose your route. Online shops recommend products. Streaming services suggest films. Email systems filter spam. Phone cameras enhance photos. Banks use AI to approve or refuse loans. Companies use AI to sort job applications. Some doctors use AI to help read medical scans. Governments use AI in tax, welfare, and criminal justice decisions. AI can do wonderful things. It helps discover new medicines and predicts disease. It helps scientists understand the climate. It helps translators work across languages. It gives people with disabilities new ways to communicate. It can help teachers personalise learning. It makes many tools faster and cheaper. But AI also creates serious problems.

(1) Bias

AI learns from data. If the data reflects past discrimination, the AI reproduces it. Hiring algorithms trained on old hiring data may favour men over women. Facial recognition works better on white faces than on Black faces in many systems. Predictive policing tools have targeted minority neighbourhoods based on biased historical data.

(2) Mistakes

AI makes errors, sometimes very confidently. Medical AI misses diagnoses. Self-driving cars have crashed. Translation AI gets things wrong. AI may hallucinate — give confident wrong answers — especially with language models.

(3) Job change

AI is replacing some jobs and changing many others. This affects workers, families, and whole communities.

(4) Privacy

AI systems collect enormous amounts of personal data. Your voice, your face, your movements, your choices — all can be recorded, analysed, and sometimes sold.

(5) Misinformation

AI can generate fake images, videos, audio, and text that look completely real. This makes lies cheaper to produce and harder to detect.

(6) Power concentration

A few giant companies control the most powerful AI. This gives them enormous economic and political power.

(7) Accountability gaps

When AI makes a harmful decision, who is responsible? The company? The programmer? The user? The AI itself? Laws are still catching up.

(8) Safety concerns

Some researchers worry about long-term risks from more powerful AI systems. Governments and international bodies are beginning to regulate AI. The EU AI Act (2024) is the world's first comprehensive AI law, with different rules for different risk levels. The UK hosted an international AI Safety Summit in 2023. China has its own AI rules. The US has mixed federal and state-level approaches. UNESCO's Recommendation on the Ethics of AI (2021) provides a global framework. The key principle is that humans should remain in charge. Algorithms are tools. When they affect people's lives — their jobs, their loans, their freedom — humans must remain responsible and accountable. Transparency ('what is this AI doing?'), fairness (not discriminating), and the right to appeal (a human review) are central.

Teaching note

AI is changing fast. What is technically true this year may be out of date next year. Focus on the stable principles — transparency, fairness, accountability, human oversight — rather than specific technologies.

Key Vocabulary
Algorithm
A set of step-by-step instructions for solving a problem or making a decision. Computers follow algorithms very quickly and on a huge scale.
Artificial intelligence (AI)
Computer systems designed to do tasks that usually need human thinking — like recognising faces, understanding language, or making predictions.
Machine learning
A kind of AI that learns patterns from large amounts of data rather than being given step-by-step instructions. Most modern AI uses machine learning.
Bias
When an algorithm treats some people unfairly — usually because the data it learned from reflected past discrimination.
Transparency
Being able to see how a system works and how decisions are made. An important principle for trustworthy AI.
Accountability
The idea that someone must answer for decisions — including decisions made by AI. Important when things go wrong.
Deepfake
A fake video, image, or audio made by AI to look or sound real. A growing problem for truth and trust.
Data
Information collected and stored — numbers, words, images, sounds. AI needs huge amounts of data to learn.
Classroom Activities
Activity 1 — Algorithms in daily life
Purpose: Students notice how many algorithms already shape their lives.
How to run it: Ask students to walk through a typical day and identify where algorithms are at work. Morning: a phone alarm clock (simple algorithm — ring at the chosen time). Checking social media (complex algorithm — decides what to show). Navigation to school or the bus stop (algorithm — finds the shortest route). Looking up a fact (search engine algorithm — ranks results). Playing music (streaming algorithm — chooses songs). Online shopping for parents (recommendation algorithm — suggests products). Camera photos (enhancement algorithm — improves the picture). Homework with spell check (grammar algorithm — corrects errors). Evening video streaming (algorithm — suggests what to watch). Discuss: how many algorithms affect your day? Dozens — maybe hundreds. Most are invisible. Most are helpful. Some we never think about. Ask: which of these algorithms make important decisions? Social media feeds — shaping what you see. Search — shaping what you learn. Recommendation systems — shaping what you buy and watch. These shape not just small moments but, over time, your whole view of the world. Discuss: this is part of modern life and has brought real benefits — faster information, better tools, more entertainment. It has also raised new questions. Who decides what algorithms do? Who is responsible when they go wrong? Should we know more about how they work? These are questions societies are only beginning to answer. (A tiny code sketch of the alarm-clock algorithm, for teachers, follows this activity.)
💡 Low-resource tip: Teacher presents typical day verbally. Students discuss. No materials needed.
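The alarm clock mentioned above is the simplest kind of algorithm: check a condition, act on it. A minimal Python sketch for teachers who want to make 'a set of instructions' concrete; the function name and times are invented for illustration.

```python
# A toy "alarm clock" algorithm: follow fixed steps, compare, act.
def should_ring(current_time: str, alarm_time: str) -> bool:
    """Return True once the clock reaches the chosen alarm time."""
    return current_time == alarm_time

# The computer applies this one rule over and over, very quickly.
for time in ["06:58", "06:59", "07:00"]:
    if should_ring(time, "07:00"):
        print(f"{time}: RING!")
    else:
        print(f"{time}: stay quiet")
```

The point for students: nothing here 'thinks'. The machine just follows the written steps, exactly and fast.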
Activity 2 — When AI goes wrong
Purpose: Students understand specific ways AI systems can cause harm.
How to run it: Present real cases. Case 1: biased hiring. A major company built AI to sort job applications. Because the AI was trained on past hiring data from a company that had mostly hired men, it learned to prefer CVs that looked more like past successful (male) candidates. Women's CVs were downgraded. The company found the bias and stopped using the system — but only after many women had already been filtered out. Case 2: facial recognition and bias. Many facial recognition systems work less well on darker skin tones. In some tests, error rates for Black women have been up to 35% — for white men, under 1%. This matters when facial recognition is used by police, at airports, or in unlocking phones. Several people have been wrongly arrested because facial recognition falsely matched them to someone else. Case 3: welfare algorithms. In 2020, the Netherlands ended a government AI system that had wrongly accused thousands of families — mostly with immigrant backgrounds — of welfare fraud. Families lost benefits, homes, and children to state care. The scandal forced the Dutch government to resign. Case 4: medical AI mistakes. AI can help read X-rays and scans, but also makes mistakes. If a doctor relies on AI too much, real problems can be missed. Case 5: AI hallucinations. Modern language model AI (like ChatGPT) sometimes produces confident-sounding information that is completely wrong — including made-up references, fake quotes, and false facts. People have used such output in important work — legal briefs, news stories — with serious consequences. Discuss: what do these cases have in common? AI that learned from biased or incomplete data. AI that cannot be fully checked or understood. AI given too much power over important decisions, with too little human oversight. Ask: what should happen when AI systems cause harm? Responsibility must sit with the people and organisations using or making the AI. Testing must happen before systems are used. People affected must be able to appeal. Serious harms must trigger review or withdrawal. Discuss: AI can do amazing things, but it is not magic. It is built by people, using data chosen by people, for purposes decided by people. Making it fair requires people to design, test, and oversee it carefully.
💡 Low-resource tip: Teacher presents cases verbally. Students discuss. Handle sensitively. No materials needed.
Activity 3 — Making AI work for people
Purpose: Students understand the principles that should guide AI and who is trying to enforce them.
How to run it: Walk through the main principles that many experts agree on. (1) Transparency. We should be able to know when AI is being used on us and understand how it makes decisions. A hidden AI is hard to challenge. (2) Fairness. AI should not discriminate against people based on race, gender, age, or other characteristics. This requires careful design, testing, and fixing. (3) Accountability. When AI causes harm, real people and organisations must be responsible — not 'the algorithm'. Legal frameworks need to ensure this. (4) Human oversight. Important decisions affecting people's lives — welfare, loans, jobs, medical care, criminal justice — should have meaningful human review. A right to ask a real person to look at a decision. (5) Privacy. AI systems that collect personal data must handle it responsibly. People should know what is being collected. (6) Safety. Systems should be tested thoroughly before being used in ways that could cause harm. Especially important for autonomous vehicles, medical AI, and military uses. Present how societies are trying to enforce these. The EU AI Act (2024) is the world's first comprehensive AI law. It classifies AI by risk. Unacceptable risk systems (social scoring, manipulation) are banned. High-risk systems (in medicine, law enforcement, education) must meet strict rules. Lower-risk systems have lighter requirements. UNESCO's Recommendation on the Ethics of AI (2021) is a global framework agreed by 193 countries. It sets out principles and values. The UK, US, China, and many others have their own approaches. Some companies have internal AI ethics teams. Some do not. Ask: are these efforts enough? Most experts say we are at an early stage. AI is developing faster than regulation. Enforcement is uncertain. Big companies have enormous power. But progress is real — it is better to have the EU AI Act than not, better to have UNESCO principles than not. Discuss: who should decide what AI does? Governments? Companies? Scientists? Users? All of these have roles. But the key principle is that decisions about AI are decisions about society — they should not be made by a small number of companies alone. Everyone affected should have some voice.
💡 Low-resource tip: Teacher presents principles and frameworks verbally. Students discuss. No materials needed.
Discussion Questions
  • Q1: What are some ways AI or algorithms help you every day?
  • Q2: Have you ever noticed an AI making a mistake or being unfair? What happened?
  • Q3: Should there always be a way for a person to appeal a decision made by a computer? Why?
  • Q4: Who should decide the rules for how AI is used?
  • Q5: What are some jobs you think AI will change a lot in the next 20 years? Which jobs will stay mostly human?
  • Q6: If a computer system harms someone, who should be responsible?
Writing Tasks
Task 1 — Explain and give an example
Explain what an algorithm is and give ONE example of an algorithm you have used today. Write 4 to 6 sentences.
Skills: Defining a concept, connecting to personal experience
Task 2 — Short argument
Explain why bias in AI systems is a serious problem, and what can be done about it. Write 4 to 6 sentences.
Skills: Reasoning about technical and social problems
Common Misconceptions
Common misconception

If AI is used, the decision must be objective.

What to teach instead

AI decisions are not automatically objective. AI learns from data chosen by people, with algorithms designed by people, for purposes decided by people. Every step involves human choices that can build in bias or errors. 'The algorithm said so' does not remove responsibility — it just hides it. Systems that seem 'objective' can produce very unfair outcomes if the underlying data or design was biased. Objectivity is a goal to work toward, not an automatic property of computer decisions.

Common misconception

AI will soon be conscious like a human.

What to teach instead

Modern AI can do impressive things — play games, write text, recognise images, translate languages. But these systems work differently from human thinking. Current AI does not understand, feel, or want things in the way humans do. It processes patterns in data very quickly. Whether AI could ever be truly conscious is a serious philosophical question, but current systems are clearly not. Confusing impressive performance with consciousness can lead to bad decisions — relying on AI judgement in areas where real understanding matters.

Common misconception

AI is developing so fast that regulation cannot keep up, so we should not try.

What to teach instead

Yes, AI is developing fast. But this is an argument for faster regulation, not for giving up on it. The EU AI Act (2024), UNESCO's Recommendation (2021), and various national rules show that regulation is possible and does matter. History suggests that powerful technologies — cars, medicines, aviation, nuclear power — all benefited from careful regulation, not from being left alone. AI is similar. Getting rules right is hard but essential. Doing nothing would leave some of the most important decisions in our lives to companies with strong profit motives.

Core Ideas
1 From algorithms to modern AI
2 The power and limits of machine learning
3 Algorithmic bias and discrimination
4 AI in critical decisions — welfare, justice, hiring
5 Generative AI — deepfakes, language models, misinformation
6 Labour, economy, and the future of work
7 AI governance — EU AI Act, UNESCO, and emerging frameworks
8 Safety, alignment, and longer-term concerns
Background for Teachers

AI has become one of the most important technological and political issues of the 21st century. Understanding its foundations, current state, and governance challenges is essential for secondary teaching.

From algorithms to modern AI

An algorithm is a finite sequence of instructions for solving a problem. Simple algorithms have existed for thousands of years (Euclid's algorithm for the greatest common divisor). Modern computers execute algorithms at extraordinary speed and scale. AI as a field dates to the 1950s (Alan Turing's foundational work; the 1956 Dartmouth Conference that coined the term). Early AI used symbolic reasoning — programming explicit rules — and progress was slow. Since the 2010s, machine learning and especially deep learning have transformed the field. Modern AI systems 'learn' by finding statistical patterns in enormous datasets. Transformers (2017) enabled large language models. 2022-2023 saw consumer-visible AI breakthroughs with ChatGPT, image generators, and similar tools. The pace of development has accelerated.
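Euclid's algorithm, mentioned above, still makes a good concrete example: it fits in a few lines of modern Python. A minimal sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    when the remainder reaches zero, a holds the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

The same idea, a finite sequence of unambiguous steps, underlies everything from this two-line loop to the largest AI systems.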

Machine learning and its limits

Modern AI systems are statistical pattern matchers, not reasoners in the human sense. They can produce impressive outputs — natural-sounding text, realistic images, game mastery — but have distinct limitations. They can be confident about wrong answers ('hallucination'). They reflect biases in training data. They struggle with reasoning requiring understanding rather than pattern matching. They do not 'know' they are wrong when they are. These limitations are important — AI that appears superhuman in one task can fail in obvious ways in another.
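To make 'statistical pattern matcher' concrete, here is a toy sketch with an invented four-message training set: a word-counting 'spam filter' that learns from labelled examples instead of hand-written rules, and that misfires on input unlike its training data.

```python
# Toy "learning from data": score messages by word counts from labelled
# examples. No rule about spam is ever written down; patterns are counted.
from collections import Counter

train = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting moved to friday", "ham"),
    ("see you at lunch", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text: str) -> str:
    # Each label scores by how often its training messages used these words.
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize now"))            # "spam": matches a learned pattern
print(classify("grab a free lunch friday"))  # also "spam": a harmless message
                                             # flagged on surface patterns alone
```

The second result is the limitation in miniature: the system matches surface patterns confidently, with no understanding of what the message means.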

Algorithmic bias and discrimination

Cathy O'Neil's 'Weapons of Math Destruction' (2016) documented how algorithmic decisions can systematically harm vulnerable populations. Bias arises from several sources. Training data that reflects past discrimination produces AI that discriminates (hiring algorithms, facial recognition). Design choices can embed biases (what counts as 'qualified', 'suspicious', 'successful'). Proxy variables stand in for protected characteristics (postal code for race in many contexts).
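A minimal sketch of the proxy mechanism, loosely inspired by the hiring examples discussed in this section; the CVs and the scoring rule are invented for illustration. Gender never appears as a feature, yet one correlated word reproduces the bias.

```python
# Proxy bias in miniature: a scorer trained on past decisions from a
# male-dominated workplace. No feature says "gender", but a correlated
# word ("women's") acts as a proxy and drags scores down.
from collections import Counter

past_hires = [
    "software club captain chess",
    "robotics team lead chess",
    "software robotics mentor",
]
past_rejections = [
    "women's chess club captain",
    "women's robotics team lead",
]

hired = Counter(w for cv in past_hires for w in cv.split())
rejected = Counter(w for cv in past_rejections for w in cv.split())

def score(cv: str) -> int:
    # Words common among past hires add points; words common among
    # past rejections subtract them.
    return sum(hired[w] - rejected[w] for w in cv.split())

print(score("robotics team lead chess"))          # 2: resembles past hires
print(score("women's robotics team lead chess"))  # 0: same skills, lower score
```

Identical qualifications, different scores, and no protected characteristic in sight: this is why simply deleting the 'race' or 'gender' column does not make a system fair.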

Well-documented cases

The US COMPAS recidivism prediction tool was found by ProPublica (2016) to produce racially biased risk scores. Amazon's experimental hiring AI was scrapped in 2018 after discriminating against women. The Dutch childcare benefits scandal (2013-2020) used algorithmic profiling that disproportionately targeted families with immigrant backgrounds, resulting in thousands wrongly accused of fraud and causing the government to resign. The UK's A-level algorithm in 2020 downgraded students from disadvantaged schools. Australia's 'Robodebt' scheme wrongly pursued hundreds of thousands of welfare claimants. The pattern is consistent: systems promoted as objective replicate or amplify existing patterns of discrimination.

AI in critical decisions

AI is increasingly deployed in consequential areas. Welfare: automated eligibility checks and fraud detection. Criminal justice: risk assessment, predictive policing, sentencing recommendations. Hiring: CV screening, video interview analysis. Healthcare: diagnostic support, triage, imaging. Finance: credit scoring, insurance pricing. Education: admissions, grading.

Each context raises specific questions about fairness, accuracy, due process, and accountability. Major concerns include lack of transparency (people affected often do not know AI was used or how); inability to challenge decisions; automation bias (humans trusting AI too much); and the speed at which errors can scale.

Generative AI

2022-2023 saw consumer-facing breakthroughs. Large language models (ChatGPT, Claude, Gemini) generate human-like text. Image generators (DALL-E, Midjourney, Stable Diffusion) produce realistic images from prompts. Video and audio generation is advancing rapidly. Applications include productivity tools, education, creative work, accessibility.

Concerns include disinformation at scale (deepfakes, synthetic content); intellectual property (training on copyrighted work without permission); non-consensual sexual content; academic integrity; environmental cost of training; and concentration of power in a few large companies. The landscape is shifting rapidly; specific capabilities and risks change month by month.

Labour and economy

AI is affecting labour markets significantly. Automation of routine cognitive tasks (in addition to manual tasks) is newer than previous waves. Studies suggest substantial portions of white-collar work could be affected. Effects are uncertain — historically, automation has destroyed some jobs and created others, usually with disruption in between. AI may expand productivity enormously but concentrate gains unless distribution policies adjust.

Questions include: who benefits from productivity gains? What happens to workers displaced? How are gig workers and creators affected? Will AI deepen inequality or reduce it? Different futures are possible depending on policy choices.

AI governance

Regulation is emerging. The EU AI Act (2024) is the world's first comprehensive AI law. It categorises AI by risk: unacceptable (banned — social scoring, manipulative AI); high-risk (strict requirements — in medicine, justice, employment); limited-risk (transparency requirements — chatbots); minimal-risk (few rules). Full implementation is staged from 2025 to 2027.

The UK took a different approach — principles-based and sector-specific — and hosted the AI Safety Summit (Bletchley Park, 2023), bringing together major powers to discuss frontier AI risks. China has its own AI rules, including the Generative AI Services Regulations (2023). The US has used a mix of executive orders (Biden's 2023 AI executive order), state laws, and sector-specific action; proposed federal legislation has not passed. The UNESCO Recommendation on the Ethics of AI (2021) is a framework agreed by 193 member states. The OECD AI Principles (2019) have broad adoption, and the UN is developing its own AI framework. The picture is patchy — different countries are taking different approaches, with significant gaps in enforcement.
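Purely to illustrate the tiered logic of the EU AI Act (not a statement of the law, where classification is a legal judgement), the structure can be pictured as a lookup from system type to obligations; the example systems echo those named above.

```python
# Illustrative simplification of risk-based AI regulation: each system
# type maps to a tier and the obligations that tier carries.
RISK_TIERS = {
    "social scoring":       ("unacceptable", "banned"),
    "medical diagnosis AI": ("high", "strict requirements before deployment"),
    "hiring AI":            ("high", "strict requirements before deployment"),
    "customer chatbot":     ("limited", "transparency requirements"),
    "spam filter":          ("minimal", "few rules"),
}

for system, (tier, obligations) in RISK_TIERS.items():
    print(f"{system}: {tier}-risk -> {obligations}")
```

The design choice worth discussing with students: the rules attach to the use and its risk, not to the underlying technology.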

Safety and alignment

A distinct conversation concerns longer-term safety. Some researchers (Stuart Russell, Yoshua Bengio, Geoffrey Hinton, and others) argue that increasingly capable AI systems could pose existential or catastrophic risks.

Concerns include AI systems pursuing unintended goals; AI deception or manipulation; concentration of AI power in hostile hands; and AI systems becoming uncontrollable. Other researchers (Yann LeCun, Melanie Mitchell, and others) are more sceptical of near-term existential risk, arguing current AI is far less capable than feared. This debate is unsettled. Mainstream policy has begun to take safety concerns seriously — AI Safety Institutes have been established in the US, UK, and elsewhere. 'AI alignment' research aims to ensure advanced AI systems pursue human-beneficial goals. Whether current approaches are adequate is contested.

Teaching note

AI is changing rapidly. Focus on the enduring principles — transparency, accountability, fairness, human oversight — rather than specific products or services. Engage students as informed citizens of an AI-shaped world rather than passive consumers.

Key Vocabulary
Algorithm
A finite sequence of unambiguous instructions for solving a problem or performing a task. The foundation of all computing, from simple calculations to complex AI systems.
Artificial intelligence
Computer systems designed to perform tasks that typically require human intelligence — such as visual perception, language understanding, decision-making, and translation.
Machine learning
A branch of AI in which systems learn patterns from data rather than being explicitly programmed. The dominant paradigm in modern AI.
Deep learning
A specific machine learning approach using multi-layered neural networks. Behind most recent AI breakthroughs, including large language models.
Large language model (LLM)
A kind of AI trained on vast amounts of text, producing human-like language. ChatGPT, Claude, and Gemini are examples. Powerful but can hallucinate.
Algorithmic bias
Systematic unfairness in algorithmic systems, typically reflecting biases in training data, design choices, or deployment contexts. A major concern in AI ethics.
Generative AI
AI systems that produce new content — text, images, audio, video — rather than classifying or predicting. Includes LLMs and image generators.
Deepfake
Synthetic media in which a person's image, voice, or likeness is generated or manipulated using AI to appear authentic. A growing challenge to information integrity.
AI governance
The rules, institutions, and practices that regulate the development and deployment of AI. Includes law, standards, ethics guidelines, and organisational oversight.
AI alignment
The challenge of ensuring AI systems pursue goals that are actually beneficial to humans. A major field in AI safety research, particularly for more capable future systems.
Classroom Activities
Activity 1 — How algorithmic bias happens
Purpose: Students understand the mechanisms through which AI systems produce unfair outcomes.
How to run it: Set out the framework. Bias in algorithmic systems typically emerges from several sources — rarely intentional, often subtle. Walk through mechanisms. (1) Biased training data. AI learns patterns from data. If the data reflects past discrimination, the AI reproduces it. Hiring AI trained on past hiring decisions that favoured men will prefer male candidates. (2) Representation gaps. If some groups are underrepresented in training data, AI performs worse on them. Facial recognition trained mostly on lighter-skinned faces works less well on darker-skinned faces — sometimes dramatically worse. (3) Design choices. What gets measured and how shapes what the algorithm does. A 'success' measure that reflects existing advantages will produce AI that favours the already advantaged. (4) Proxy variables. Even when protected characteristics (race, gender) are excluded from training, correlated variables (postal code, name, speech patterns) can reproduce the same bias. (5) Feedback loops. AI decisions shape future data (a simulation sketch follows this activity). If an algorithm rarely approves loans for one group, little repayment data exists for that group, so the model never learns who would have repaid — and the original bias is reinforced. (6) Deployment context. Even a well-designed model can cause harm in the wrong context. Police predictive algorithms deployed on historical data about arrests, not crimes, will perpetuate over-policing of certain neighbourhoods. Work through specific cases. ProPublica on COMPAS (2016): the investigation found the criminal justice risk assessment tool falsely labelled Black defendants 'high-risk' at roughly twice the rate of white defendants, even though overall accuracy was similar across groups. This started a major debate about competing definitions of fairness. Amazon hiring AI (2014-2018): trained on 10 years of hiring data from a male-dominated tech industry. Downgraded CVs containing the word 'women's' (e.g., 'women's chess club'). Discontinued when the bias was found. Dutch childcare benefits scandal (2013-2020): risk profiling algorithms targeted families with dual nationality and lower incomes. Over 30,000 families wrongly accused of fraud. Thousands of children placed in state care. The government resigned in 2021 over the scandal. UK A-Levels 2020: an algorithm was used to estimate grades when exams were cancelled during COVID. It systematically downgraded students at poorer-performing schools (reflecting school history, not individual ability). After protests, the government abandoned the algorithm. Ask: what do these cases have in common? None were intentionally discriminatory. All produced systematic unfair outcomes. All affected vulnerable populations most. All required external pressure to expose and fix. Discuss: what can prevent these harms? Data audits before use. Diverse teams designing AI (homogeneous teams miss biases). External auditing and testing. Explainability — being able to see why the AI made a decision. Appeal mechanisms for affected individuals. Legal frameworks making organisations responsible. Discuss: responsibility matters. When a biased algorithm causes harm, there must be humans and organisations responsible — not 'the algorithm'. The EU AI Act and similar laws move in this direction, but enforcement remains a challenge.
💡 Low-resource tip: Teacher presents mechanisms and cases verbally. Students analyse. No materials needed.
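A minimal simulation of mechanism (5), the feedback loop, with invented numbers: two areas have identical true crime rates, but the area with more historical arrests gets more patrols, and patrols, not crime, generate the recorded data.

```python
# Feedback loop in miniature: biased history -> more patrols -> more
# recorded arrests -> more patrols. True crime rates never differ.
arrests = {"A": 60, "B": 40}   # area A was historically over-policed
TRUE_RATE = 0.1                # identical real crime rate in both areas

for year in range(1, 6):
    top = max(arrests, key=arrests.get)       # allocate patrols by the data
    for area in arrests:
        patrols = 70 if area == top else 30
        arrests[area] += patrols * TRUE_RATE  # patrols drive recorded arrests
    share = arrests["A"] / sum(arrests.values())
    print(f"year {year}: area A's share of recorded arrests = {share:.1%}")
```

Each year the gap widens, and each year the data appears to justify the allocation: the loop launders the original bias into seemingly objective numbers.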
Activity 2 — Generative AI, deepfakes, and truth
Purpose: Students engage with the challenges that generative AI poses to information integrity.
How to run it: Set out what generative AI can now do. Since 2022-2023, generative AI has produced remarkable outputs. Text: LLMs like ChatGPT produce essays, emails, code, arguments, and creative writing at human quality. Images: systems like DALL-E, Midjourney, and Stable Diffusion generate photorealistic images from short descriptions. Audio: voice cloning from a few seconds of someone's voice. Video: increasingly convincing synthesis, with text-to-video emerging rapidly. Present the risks to truth and trust. Deepfake videos: already used in political disinformation (the 2022 Zelensky deepfake urging surrender; the 2024 Biden robocall). Used in non-consensual sexual content (women targeted, including minors). Used in fraud (executive voice clones tricking employees into transferring money). Mass-produced misinformation: LLMs can generate unlimited amounts of plausible text. Disinformation operators use them to scale up. Slop content: AI-generated low-quality content flooding platforms. Loss of trust in real evidence: if everything could be faked, people may doubt genuine videos, audio, and photos. The 'liar's dividend' — actual wrongdoing can be dismissed as 'deepfake'. Present specific cases. The 2024 elections saw AI-generated content in multiple countries (US, India, Indonesia, UK). Some had limited impact; others contributed to specific controversies. The Hong Kong finance employee who transferred $25 million after a video call with deepfaked senior colleagues (February 2024). Taylor Swift deepfake imagery (January 2024) that spread rapidly on X before removal. Discuss responses. Technical responses: watermarking AI-generated content (mixed success so far); detection AI (cat and mouse with generation AI); provenance standards like C2PA identifying authentic content (a simplified sketch follows this activity). Legal responses: UK's Online Safety Act includes provisions on intimate image abuse. EU AI Act requires transparency on AI-generated content. US laws in some states against non-consensual deepfakes. Australia's proposed social media bans. Platform responses: removal policies, fact-checking partnerships, labelling, algorithmic de-amplification. User responses: media literacy, lateral reading, scepticism of emotionally striking content. Institutional responses: major news organisations developing authentication standards; scientific journals concerned about AI-generated papers. Ask: are these responses enough? Probably not for the scale of the problem. Technology is advancing faster than response. Legal frameworks are fragmentary. Platforms have mixed incentives. Users vary in literacy. Discuss: the long-term implications may reshape how society establishes truth. For most of human history, seeing was believing — photo, video, and audio evidence could anchor shared understanding. Generative AI may end this era. What replaces it? Institutional authentication (verified sources). Personal trust networks (trusted friends and journalists). Technical provenance (cryptographic verification of content origins). Some combination will likely emerge, but the transition may be difficult. Ask: what should be illegal? Most people would agree: non-consensual sexual deepfakes, election disinformation, fraud deepfakes. Less clear: satire, artistic work, political commentary. Legal frameworks need to balance free expression with harm prevention — a live debate.
💡 Low-resource tip: Teacher presents capabilities and cases verbally. Students discuss. Handle sensitively — some content involves abuse. No materials needed.
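The 'technical provenance' idea from the responses above can be sketched in a few lines: a publisher attaches a cryptographic tag when content is created, and any later alteration breaks verification. Real standards such as C2PA use public-key signatures and rich metadata; the key and messages here are invented for illustration.

```python
# Provenance in miniature: sign content at creation, verify it later.
import hashlib
import hmac

PUBLISHER_KEY = b"newsroom-secret-key"  # held only by the publisher

def sign(content: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(content), tag)

original = b"Video: mayor's speech, 2024-05-01"
tag = sign(original)

print(verify(original, tag))                   # True: content is as published
print(verify(b"Video: doctored speech", tag))  # False: alteration detected
```

Note the limits, which mirror the debate above: provenance proves where content came from and that it is unaltered; it cannot prove the original was true.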
Activity 3 — Governing AI — the emerging framework
Purpose: Students engage with how societies are trying to regulate AI.
How to run it: Set out the challenge. AI is developing faster than regulation. Different countries are taking different approaches. Present the main regulatory frameworks. EU AI Act (2024): the world's first comprehensive AI law. Risk-based categories. Unacceptable risk (banned): social scoring, manipulative AI, certain predictive policing uses, indiscriminate scraping for facial recognition databases. High-risk (strict obligations): AI in education, employment, critical infrastructure, law enforcement, migration, democracy, healthcare. Limited-risk (transparency requirements): chatbots, emotion recognition, AI-generated content. Minimal-risk (few rules): most other AI. Full implementation staged 2025-2027. Covers AI systems placed on the EU market regardless of origin — giving the EU global influence. UNESCO Recommendation on the Ethics of AI (2021): first global framework, agreed by 193 countries. Principles include human dignity, fairness, transparency, accountability, environmental sustainability. Not legally binding but influential. Supports countries in developing their own frameworks. OECD AI Principles (2019): widely adopted by member states. Principles include inclusive growth, human-centred values, transparency, robustness, accountability. Influential baseline. UK approach: principles-based, sector-specific. Hosted International AI Safety Summit at Bletchley Park (2023). Established AI Safety Institute. Prefers light-touch regulation of general AI, letting sector regulators handle domain-specific issues. US approach: mixed federal and state action. Biden executive order on AI (2023) set out principles and actions. Major federal agencies developing sector-specific rules. Some state laws (Illinois, Texas, California). Comprehensive federal legislation has not passed. China: Generative AI Services Regulations (2023); Algorithmic Recommendation Regulations (2022). More restrictive than Western approaches in some ways (content control, registration requirements) but similarly focused on harms. Concerns about surveillance uses. Other emerging frameworks: UN working on AI. African Union developing AI strategy. ASEAN guidance. Japan, South Korea, Singapore, and others developing own approaches. Discuss the approach differences. EU: rights-based, prescriptive. UK: innovation-friendly, principles-based. US: market-oriented, sectoral. China: state-directed, harm-focused. Each reflects broader legal and political traditions. Ask: what are the strengths of different approaches? EU has the most comprehensive coverage but may slow innovation. UK keeps regulation flexible but may undergovern. US leverages market competition but leaves gaps. China has specific rules but tied to authoritarian state. Ask: can AI be governed internationally? Some issues (safety of the most powerful systems, weapons, fraud) may need international coordination. Initial steps — UK AI Safety Summit, ongoing UN work — suggest this is possible but slow. Geopolitical tensions (particularly US-China) complicate cooperation. Evaluate specific debates. Should AI companies face strict liability for harms? Should open-source AI be restricted differently from proprietary? Should foundation model developers face specific obligations? Should biometric identification in public spaces be banned? These are contested. Discuss: governance is emerging but incomplete. AI is still growing rapidly. The next five to ten years will be formative. The principles being established now will shape the AI future for decades.
💡 Low-resource tip: Teacher presents frameworks verbally. Students discuss. No materials needed.
Discussion Questions
  • Q1: Cathy O'Neil called powerful discriminatory algorithms 'weapons of math destruction'. Is this too strong, or does it capture the reality of how some AI systems have harmed vulnerable populations?
  • Q2: Should there be a legal right to have an important decision about you reviewed by a human, even when AI was used? How would this work in practice?
  • Q3: The EU AI Act takes a prescriptive, risk-based approach. The US relies more on market forces and sectoral regulation. Which approach is likely to produce better outcomes, and why?
  • Q4: Generative AI can produce unlimited amounts of plausible content. Does this make media literacy more important, less effective, or both? What else is needed?
  • Q5: Deepfakes — especially non-consensual sexual content and political disinformation — have become major problems. Should creating or sharing them be criminalised, and what are the limits of such an approach?
  • Q6: AI is changing labour markets faster than previous technological waves. What responsibilities do governments, companies, and technology developers have to workers displaced or changed by AI?
  • Q7: Some prominent researchers argue that sufficiently advanced AI could pose catastrophic risks. Others consider such concerns overblown. How should policy respond given genuine uncertainty?
Writing Tasks
Task 1 — Extended essay
'AI should not be used to make decisions that significantly affect people's lives without meaningful human oversight.' To what extent do you agree? Write 400 to 600 words.
Skills: Thesis-driven argument, engaging with AI capabilities and governance, using cases
Task 2 — Analytical response
Explain how algorithmic bias happens and discuss one approach to addressing it. Write 200 to 300 words.
Skills: Explaining mechanisms, proposing solutions
Common Misconceptions
Common misconception

AI decisions are objective because they use mathematics and data.

What to teach instead

AI decisions are not automatically objective. Every step involves human choices. What data to collect reflects choices. How to label data involves judgement. What counts as a 'successful' outcome is a choice. Which proxy variables to use involves assumptions. Where to deploy the system reflects priorities. Mathematical sophistication does not remove subjectivity — it often hides it. Cathy O'Neil and others have documented how 'objective' algorithms systematically harm vulnerable groups. Treating AI output as objective is a failure of critical thinking, not a mark of it.

Common misconception

AI will soon be conscious and have human-like understanding.

What to teach instead

Current AI systems, including the most advanced large language models, are statistical pattern matchers. They can produce impressive human-like outputs without human-like understanding. The question of whether AI could ever become conscious is philosophically open, but current systems clearly are not. Confusing impressive performance with genuine understanding leads to bad decisions — over-reliance on AI in areas where real understanding matters. Experts like Melanie Mitchell have written extensively on how AI's apparent abilities can mask real limits.

Common misconception

Regulating AI will kill innovation, so we should allow AI to develop freely.

What to teach instead

This claim is often made but not well-supported. Previous powerful technologies — cars, medicines, aviation, nuclear power, financial services — have benefited from regulation, not been killed by it. Regulation provides the trust, liability clarity, and predictability that enables broader adoption. Jurisdictions with no AI regulation have not produced better AI than those with some; instead, they leave harms uncompensated and create uncertainty about liability. The EU AI Act, while contested, is not preventing AI development in Europe — it is providing a framework for responsible development. The genuine question is about specific rule design, not whether to have rules.

Common misconception

Current concerns about AI existential risk are science fiction and should not affect policy.

What to teach instead

This position is taken by some experts but is contested by many prominent AI researchers. Geoffrey Hinton (2024 Nobel Prize in Physics, formerly of Google), Yoshua Bengio (Turing Award), Stuart Russell, and others have expressed serious concerns about risks from advanced AI systems. Governments (UK, US, and others) have established AI Safety Institutes specifically to research these risks. Whether these concerns will prove justified is uncertain, but dismissing them as pure science fiction ignores serious scientific assessment. Policy reasonably takes uncertainty into account — investing in safety research without abandoning beneficial AI development.

Further Information

Key texts for students: Cathy O'Neil, 'Weapons of Math Destruction' (2016) — foundational modern work. Meredith Broussard, 'Artificial Unintelligence' (2018). Safiya Umoja Noble, 'Algorithms of Oppression' (2018), on search bias. Kate Crawford, 'Atlas of AI' (2021), on AI's material and political dimensions. Brian Christian, 'The Alignment Problem' (2020) — technical but accessible.

For policy: Stuart Russell, 'Human Compatible' (2019). Melanie Mitchell, 'Artificial Intelligence: A Guide for Thinking Humans' (2019). For foundational understanding: Alan Turing's 'Computing Machinery and Intelligence' (1950) remains important. On generative AI: Gary Marcus's ongoing commentary; OpenAI and DeepMind technical reports.

Key regulatory documents: the EU AI Act (full text at eur-lex.europa.eu); the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021); the OECD AI Principles (2019).

International bodies: UNESCO's AI ethics work; the UN AI Advisory Body; the OECD AI Observatory; the Global Partnership on AI. National bodies: UK AI Safety Institute (aisi.gov.uk); US AI Safety Institute; EU AI Office. Academic centres: Stanford HAI; the Oxford Institute for Ethics in AI; MILA; the Future of Humanity Institute archive.

Data sources: Stanford AI Index Annual Report; Epoch AI; the AI Incident Database (incidentdatabase.ai), which tracks real-world AI harms.