What artificial intelligence and algorithms are, how they affect our lives, what can go wrong, and how societies are trying to make them fair and accountable.
Young children encounter AI and algorithms every day — in phones, tablets, toys, games, and apps — often without knowing it. Children do not need the terms 'algorithm' or 'artificial intelligence'. But they can understand the simple idea that computers follow instructions to make decisions, that these decisions can be helpful or unhelpful, and that people should always be able to ask for help from a real human when they need it. They can also learn that humans create computers — computers do not think for themselves — so it is humans who are responsible when things go wrong. These early ideas prepare children for a life in which algorithms will make ever more decisions about them, from what they see on screens to what jobs, loans, and services are offered to them. Help them stay curious about how things work and confident that a real human can always be asked. No materials are needed.
If a computer said it, it must be right.
Computers make mistakes. Sometimes the information they have is old or wrong. Sometimes they do not understand what we really want. Sometimes they give answers that are not true. It is always okay to check with a real person — a teacher, a parent, a librarian — especially for important things.
Computers are magic and we cannot understand how they work.
Computers are made by people. People wrote the instructions that computers follow. When a computer does something, it is because someone told it how. This means we can understand how computers work — at least the basics — and we can change how they work if we need to. Computers are tools, not magic.
An algorithm is a set of step-by-step instructions for solving a problem or making a decision. Every recipe is an algorithm. Every set of assembly instructions is an algorithm. Computers follow algorithms extremely fast, on a huge scale. Artificial intelligence (AI) is a specific kind of algorithm designed to do tasks that usually need human intelligence: recognising faces, understanding language, making predictions, playing games. Modern AI learns from large amounts of data rather than being programmed step by step. The most powerful modern AI systems, like ChatGPT and image generators, use 'machine learning', specifically 'deep learning', and for text, 'large language models', to find patterns in huge datasets and produce responses. Algorithms and AI are already deeply woven into daily life. Search engines decide what you see first. Social media feeds choose which posts you see. Maps apps choose your route.
Streaming services suggest films.
Phone cameras enhance photos. Banks use AI to approve or refuse loans. Companies use AI to sort job applications. Some doctors use AI to help read medical scans. Governments use AI in tax, welfare, and criminal justice decisions. AI can do wonderful things. It helps discover new medicines and predict disease. It helps scientists understand the climate. It helps translators work across languages. It gives people with disabilities new ways to communicate. It can help teachers personalise learning. It makes many tools faster and cheaper. But AI also creates serious problems.
(1) AI learns from data. If the data reflects past discrimination, the AI reproduces it. Hiring algorithms trained on old hiring data may favour men over women. Facial recognition works better on white faces than on Black faces in many systems. Predictive policing tools have targeted minority neighbourhoods based on biased historical data.
(2) AI makes errors, sometimes very confidently. Self-driving cars have caused crashes. Translation AI gets things wrong. AI may 'hallucinate' (give confident wrong answers), especially with language models.
(3) AI is replacing some jobs and changing many others. This affects workers, families, and whole communities.
(4) AI systems collect enormous amounts of personal data. Your voice, your face, your movements, your choices: all can be recorded, analysed, and sometimes sold.
(5) AI can generate fake images, videos, audio, and text that look completely real. This makes lies cheaper to produce and harder to detect.
(6) A few giant companies control the most powerful AI. This gives them enormous economic and political power.
(7) When AI makes a harmful decision, who is responsible? The company? The programmer? The user? The AI itself? Laws are still catching up.
(8) Some researchers worry about long-term risks from more powerful AI systems.
Governments and international bodies are beginning to regulate AI. The EU AI Act (2024) is the world's first comprehensive AI law, with different rules for different risk levels. The UK hosted an international AI Safety Summit in 2023. China has its own AI rules. The US has mixed federal and state-level approaches. UNESCO's Recommendation on the Ethics of AI (2021) provides a global framework. The key principle is that humans should remain in charge.
When AI systems affect people's lives (their jobs, their loans, their freedom), humans must remain responsible and accountable. Transparency ('what is this AI doing?'), fairness (not discriminating), and the right to appeal (a human review) are central.
AI is changing fast. What is technically true this year may be out of date next year. Focus on the stable principles — transparency, fairness, accountability, human oversight — rather than specific technologies.
If AI is used, the decision must be objective.
AI decisions are not automatically objective. AI learns from data chosen by people, with algorithms designed by people, for purposes decided by people. Every step involves human choices that can build in bias or errors. 'The algorithm said so' does not remove responsibility — it just hides it. Systems that seem 'objective' can produce very unfair outcomes if the underlying data or design was biased. Objectivity is a goal to work toward, not an automatic property of computer decisions.
AI will soon be conscious like a human.
Modern AI can do impressive things — play games, write text, recognise images, translate languages. But these systems work differently from human thinking. Current AI does not understand, feel, or want things in the way humans do. It processes patterns in data very quickly. Whether AI could ever be truly conscious is a serious philosophical question, but current systems are clearly not. Confusing impressive performance with consciousness can lead to bad decisions — relying on AI judgement in areas where real understanding matters.
AI is developing so fast that regulation cannot keep up, so we should not try.
Yes, AI is developing fast. But this is an argument for faster regulation, not for giving up on it. The EU AI Act (2024), UNESCO's Recommendation (2021), and various national rules show that regulation is possible and does matter. History suggests that powerful technologies — cars, medicines, aviation, nuclear power — all benefited from careful regulation, not from being left alone. AI is similar. Getting rules right is hard but essential. Doing nothing would leave some of the most important decisions in our lives to companies with strong profit motives.
AI has become one of the most important technological and political issues of the 21st century. Understanding its foundations, current state, and governance challenges is essential for secondary teaching.
An algorithm is a finite sequence of instructions for solving a problem. Simple algorithms have existed for thousands of years (Euclid's algorithm for greatest common divisor). Modern computers execute algorithms at extraordinary speed and scale. AI as a field dates to the 1950s (Alan Turing's foundational work; the 1956 Dartmouth Conference that coined the term). Early AI used symbolic reasoning — programming explicit rules.
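To make 'a finite sequence of instructions' concrete for students, here is Euclid's algorithm written as a short Python function. This is a minimal sketch; any language, or pencil and paper, would do:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last non-zero value is the
    # greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12
```

The same steps could be carried out by hand; the computer simply executes them far faster and at far greater scale.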
Since the 2010s, machine learning and especially deep learning have transformed the field. Modern AI systems 'learn' by finding statistical patterns in enormous datasets. Transformers (2017) enabled large language models. 2022-2023 saw consumer-visible AI breakthroughs with ChatGPT, image generators, and similar tools. The pace of development has accelerated.
Modern AI systems are statistical pattern matchers, not reasoners in the human sense. They can produce impressive outputs — natural-sounding text, realistic images, game mastery — but have distinct limitations. They can be confident about wrong answers ('hallucination'). They reflect biases in training data. They struggle with reasoning requiring understanding rather than pattern matching. They do not 'know' they are wrong when they are. These limitations are important — AI that appears superhuman in one task can fail in obvious ways in another.
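One way to demonstrate 'statistical pattern matching' in class is a toy next-word predictor. The Python sketch below is a bigram model: vastly simpler than any real language model and purely illustrative, but it captures the core idea that the program predicts each next word from counts in its training text, with no grasp of meaning:

```python
import random
from collections import defaultdict

# Toy training text. For each word, record which words follow it.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly sampling a likely next word.
word = "the"
output = [word]
for _ in range(8):
    if word not in follows:
        break  # no recorded successor; stop generating
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug the dog sat"
```

Real systems use neural networks trained on vastly more data, but the output is still a prediction of what text is likely, not a statement the system knows to be true, which is why hallucination is possible.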
Cathy O'Neil's 'Weapons of Math Destruction' (2016) documented how algorithmic decisions can systematically harm vulnerable populations. Bias arises from several sources. Training data that reflects past discrimination produces AI that discriminates (hiring algorithms, facial recognition). Design choices can embed biases (what counts as 'qualified', 'suspicious', 'successful'). Proxy variables stand in for protected characteristics (postal code for race in many contexts).
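A sketch like the following can make the bias mechanism tangible. The data is invented for illustration (hypothetical candidates in groups 'A' and 'B'); the point is that a rule learned faithfully from biased decisions faithfully reproduces the bias:

```python
from collections import defaultdict

# Invented historical hiring records: (qualified, group, hired).
# In this imagined past, qualified "A" candidates were hired and
# equally qualified "B" candidates were not.
history = [
    (True,  "A", True),  (True,  "A", True),  (False, "A", False),
    (True,  "B", False), (True,  "B", False), (False, "B", False),
]

# "Training": record the historical hiring rate for each
# (qualified, group) combination.
counts = defaultdict(lambda: [0, 0])  # [times hired, total seen]
for qualified, group, hired in history:
    counts[(qualified, group)][0] += int(hired)
    counts[(qualified, group)][1] += 1

def predict_hire(qualified: bool, group: str) -> bool:
    # Recommend hiring if most similar past candidates were hired.
    hired, total = counts[(qualified, group)]
    return hired / total > 0.5

# Two equally qualified candidates get different recommendations,
# because the past data treated their groups differently.
print(predict_hire(True, "A"))  # True
print(predict_hire(True, "B"))  # False
```

Note that 'group' need not appear explicitly in the data: a proxy variable such as postal code that correlates with group membership produces the same effect.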
The US COMPAS recidivism prediction tool was found by ProPublica (2016) to produce racially biased risk scores. Amazon's experimental hiring AI was scrapped in 2018 after discriminating against women. The Dutch childcare benefits scandal (2013-2020) used algorithmic profiling that disproportionately targeted families with immigrant backgrounds; thousands were wrongly accused of fraud, and the government resigned over the affair. The UK's A-level algorithm in 2020 downgraded students from disadvantaged schools. Australia's 'Robodebt' scheme wrongly pursued hundreds of thousands of welfare claimants. The pattern is consistent: systems promoted as objective end up replicating or amplifying existing patterns of discrimination.
AI is increasingly deployed in consequential areas. Welfare: automated eligibility, fraud detection. Criminal justice: risk assessment, predictive policing, sentencing recommendations. Employment: CV screening, video interview analysis. Healthcare: diagnostic support, triage, imaging. Finance: credit scoring, insurance pricing. Education: admissions, grading. Each context raises specific questions about fairness, accuracy, due process, and accountability. Major concerns include lack of transparency (people affected often do not know AI was used or how); inability to challenge decisions; automation bias (humans trusting AI too much); and the speed at which errors can scale.
2022-2023 saw consumer-facing breakthroughs. Large language models (ChatGPT, Claude, Gemini) generate human-like text. Image generators (DALL-E, Midjourney, Stable Diffusion) produce realistic images from prompts. Video and audio generation is advancing rapidly. Applications include productivity tools, education, creative work, accessibility.
Concerns include disinformation at scale (deepfakes, synthetic content); intellectual property (training on copyrighted work without permission); non-consensual sexual content; academic integrity; environmental cost of training; and concentration of power in a few large companies. The landscape is shifting rapidly; specific capabilities and risks change month by month.
AI is affecting labour markets significantly. Automation of routine cognitive tasks, in addition to manual tasks, distinguishes this wave from previous ones. Studies suggest substantial portions of white-collar work could be affected. Effects are uncertain: historically, automation has destroyed some jobs and created others, usually with disruption in between. AI may expand productivity enormously but concentrate gains unless distribution policies adjust.
Who benefits from productivity gains? What happens to displaced workers? How are gig workers and creators affected? Will AI deepen inequality or reduce it? Different futures are possible depending on policy choices.
Regulation is emerging. EU AI Act (2024) is the world's first comprehensive AI law. It categorises AI by risk: unacceptable (banned — social scoring, manipulative AI); high-risk (strict requirements — in medicine, justice, employment); limited-risk (transparency requirements — chatbots); minimal-risk (few rules).
The UK took a different approach: principles-based and sector-specific. It hosted the AI Safety Summit (Bletchley Park, 2023), bringing together major powers to discuss frontier AI risks. China has its own AI rules, including the Generative AI Services Regulations (2023). The US has used a mix of executive orders (Biden's 2023 AI executive order), state laws, and sector-specific action. Proposed federal legislation has not passed. The UNESCO Recommendation on the Ethics of AI (2021) is a framework agreed by 193 member states. The OECD AI Principles (2019) have been broadly adopted. The UN is developing its own AI framework. The picture is patchy, with different countries taking different approaches and significant gaps in enforcement.
A distinct conversation concerns longer-term safety. Some researchers (Stuart Russell, Yoshua Bengio, Geoffrey Hinton, and others) argue that increasingly capable AI systems could pose existential or catastrophic risks.
Scenarios discussed include AI systems pursuing unintended goals; AI deception or manipulation; concentration of AI power in hostile hands; and AI systems becoming uncontrollable. Other researchers (Yann LeCun, Melanie Mitchell, and others) are more sceptical of near-term existential risk, arguing that current AI is far less capable than feared. This debate is unsettled. Mainstream policy has begun to take safety concerns seriously: AI Safety Institutes have been established in the US, UK, and elsewhere. 'AI alignment' research aims to ensure advanced AI systems pursue human-beneficial goals. Whether current approaches are adequate is contested.
AI is changing rapidly. Focus on the enduring principles — transparency, accountability, fairness, human oversight — rather than specific products or services. Engage students as informed citizens of an AI-shaped world rather than passive consumers.
AI decisions are objective because they use mathematics and data.
AI decisions are not automatically objective. Every step involves human choices. What data to collect reflects choices. How to label data involves judgement. What counts as a 'successful' outcome is a choice. Which proxy variables to use involves assumptions. Where to deploy the system reflects priorities. Mathematical sophistication does not remove subjectivity — it often hides it. Cathy O'Neil and others have documented how 'objective' algorithms systematically harm vulnerable groups. Treating AI output as objective is a failure of critical thinking, not a mark of it.
AI will soon be conscious and have human-like understanding.
Current AI systems, including the most advanced large language models, are statistical pattern matchers. They can produce impressive human-like outputs without human-like understanding. The question of whether AI could ever become conscious is philosophically open, but current systems clearly are not. Confusing impressive performance with genuine understanding leads to bad decisions — over-reliance on AI in areas where real understanding matters. Experts like Melanie Mitchell have written extensively on how AI's apparent abilities can mask real limits.
Regulating AI will kill innovation, so we should allow AI to develop freely.
This claim is often made but not well-supported. Previous powerful technologies — cars, medicines, aviation, nuclear power, financial services — have benefited from regulation, not been killed by it. Regulation provides the trust, liability clarity, and predictability that enables broader adoption. Jurisdictions with no AI regulation have not produced better AI than those with some; instead, they leave harms uncompensated and create uncertainty about liability. The EU AI Act, while contested, is not preventing AI development in Europe — it is providing a framework for responsible development. The genuine question is about specific rule design, not whether to have rules.
Current concerns about AI existential risk are science fiction and should not affect policy.
This position is taken by some experts but is contested by many prominent AI researchers. Geoffrey Hinton (2024 Nobel Prize in Physics, formerly of Google), Yoshua Bengio (Turing Award), Stuart Russell, and others have expressed serious concerns about risks from advanced AI systems. Governments (UK, US, and others) have established AI Safety Institutes specifically to research these risks. Whether these concerns will prove justified is uncertain, but dismissing them as pure science fiction ignores serious scientific assessment. Policy reasonably takes uncertainty into account, investing in safety research without abandoning beneficial AI development.
Key texts for students: Cathy O'Neil, 'Weapons of Math Destruction' (2016) — foundational modern work. Meredith Broussard, 'Artificial Unintelligence' (2018). Safiya Umoja Noble, 'Algorithms of Oppression' (2018) on search bias. Kate Crawford, 'Atlas of AI' (2021) on AI's material and political dimensions. Brian Christian, 'The Alignment Problem' (2020) — technical but accessible. For policy: Stuart Russell, 'Human Compatible' (2019). Melanie Mitchell, 'Artificial Intelligence: A Guide for Thinking Humans' (2019). For foundational understanding: Alan Turing's 'Computing Machinery and Intelligence' (1950) remains important. On generative AI: Gary Marcus's ongoing commentary; OpenAI and DeepMind technical reports. Key regulatory documents: EU AI Act (full text at eur-lex.europa.eu); UNESCO Recommendation on the Ethics of Artificial Intelligence (2021); OECD AI Principles (2019). International bodies: UNESCO AI Ethics work; UN AI Advisory Body; OECD AI Observatory; Global Partnership on AI. National bodies: UK AI Safety Institute (aisi.gov.uk); US AI Safety Institute; EU AI Office. Academic centres: Stanford HAI; Oxford AI Ethics Institute; MILA; Future of Humanity Institute archive. Data sources: Stanford AI Index Annual Report; Epoch AI; AI Incident Database (incidentdatabase.ai) tracks real-world AI harms.