What artificial intelligence is, how it actually works, what it can and cannot do, and what questions it raises for education, work, fairness, creativity, and what it means to be human. AI is already shaping the world — understanding it is one of the most important skills of the 21st century, whether or not you ever work in technology.
AI at Early Years level is about building the foundational understanding that computers follow instructions made by people, and that some computers can learn patterns from examples in ways that make them seem clever. The concept of a rule or instruction is the right entry point: young children understand rules from their daily life, and a computer program is essentially a set of rules. From there, the idea that some programs learn new rules from examples, which is the basic idea of machine learning, becomes accessible. In low-resource and low-connectivity settings, this teaching does not require any technology. The activities below use physical movement, sorting, pattern recognition, and discussion. The goal is not technical literacy but foundational conceptual understanding: AI is made by people, it follows patterns in data, it can be wrong, and the choices made in designing it reflect the values of the people who made it. This foundation supports everything else in the curriculum. The most important message at this level is that AI is not magic, it is not thinking like a person, and it is not inevitable: people make choices about it.
Any drawing showing a human and a computer working together on a task, with the completion naming both a genuine AI capability and a genuine area where human judgment remains essential. Celebrate answers where the human role is specific and genuine — not just watching but doing something the computer cannot.
Ask: what would happen if the person were not there? This question reveals whether the child understands the human role as genuinely necessary or merely decorative.
Questions for the AI: Do you know that you are a computer? Can you ever be wrong? Questions for the person who made it: How did you decide what it should learn? What happens when it makes a mistake that hurts someone?
The most interesting answers will show the child already understands the difference between the AI as a system and the human choices behind it. Celebrate questions that probe decision-making, accountability, and the limits of AI knowledge.
AI is like a human brain — it thinks and feels the way people do.
AI systems process information according to patterns learned from data. They do not think, feel, or understand the way human beings do. A language model that produces fluent text does not understand what the words mean — it has learned which words tend to follow which other words in large amounts of human text. The appearance of understanding is real; the understanding itself is not. This distinction is important because it affects when we should trust AI and when we should not.
If a computer says something, it must be true.
AI systems make mistakes — sometimes very confidently. Language models in particular can produce statements that sound authoritative and are completely wrong — a phenomenon called hallucination. AI systems also reflect the biases in the data they were trained on. A computer that says something is not more trustworthy than the person or data it learned from. Critical evaluation of information from any source — including AI — is essential.
AI decides things on its own — people are not involved.
Every AI system is designed, built, and trained by people who make countless decisions about what it should learn, what data to use, what outcomes to optimise for, and when to use it. AI does not decide on its own — it reflects the decisions of the people who built it. This means that when AI makes a mistake or causes harm, people are responsible — not the AI. Understanding this is important for thinking clearly about accountability.
AI at primary level means helping students understand the actual mechanisms of artificial intelligence — specifically machine learning — at a conceptual level, and beginning to think critically about where it is used, how it can go wrong, and who is responsible for those failures.
Most contemporary AI systems are built on machine learning — a technique in which a system learns patterns from large amounts of data rather than being explicitly programmed with rules. A facial recognition system, for example, learns to recognise faces by being shown millions of labelled images of faces. A spam filter learns to identify spam by being shown thousands of emails labelled spam and not spam. A language model learns to generate text by being trained on billions of words of human writing. The system learns statistical patterns — which inputs tend to be associated with which outputs — and applies these patterns to new inputs. The key point is that the system learns from the data it is given — and if the data is incomplete, biased, or unrepresentative, the system will be too.
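To make the mechanism concrete, the sketch below shows the spam-filter idea in a few lines of Python. It is a minimal illustration, not a real spam filter: the four labelled emails are invented, and it assumes the scikit-learn library is available. The point is simply that nobody writes rules defining spam; the model derives its own patterns from the labelled examples it is given.

```python
# A minimal sketch of learning from labelled examples (toy, invented data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labelled examples: the "training data" the system learns patterns from.
emails = [
    "win a free prize now",        # spam
    "claim your free money",       # spam
    "meeting moved to tuesday",    # not spam
    "notes from today's lesson",   # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn each email into word counts, then learn which words are
# statistically associated with which label.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(features, labels)

# The model applies the learned patterns to an email it has never seen.
new_email = ["free prize waiting for you"]
print(model.predict(vectorizer.transform(new_email)))  # likely ['spam']
```

The same structure, labelled examples in and learned patterns out, underlies the facial recognition and language model examples above, only at vastly larger scale.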
AI bias refers to systematic errors in AI output that reflect biases in the training data, in the design of the system, or in the choices made about what the system should optimise for. Documented examples include facial recognition systems that are significantly less accurate for darker-skinned faces (because training datasets contained more images of lighter-skinned faces), hiring algorithms that discriminated against women (because they were trained on historical hiring data from male-dominated industries), and healthcare algorithms that allocated less care to Black patients (because cost was used as a proxy for need, and Black patients had historically had less spent on their care). These are not hypothetical future risks — they are documented present harms.
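The healthcare example can be illustrated with a small worked calculation. The figures below are invented purely for illustration, but they show the mechanism: when historical spending stands in for medical need, two patients with identical need receive different priority scores, and the group that was underserved in the past is underserved again.

```python
# A minimal numerical sketch (invented figures) of how a proxy can encode bias:
# the algorithm predicts future cost, but cost is standing in for medical need.

# Two patients with the SAME underlying medical need...
patients = [
    {"name": "Patient A", "true_need": 8, "historical_spend": 9000},
    {"name": "Patient B", "true_need": 8, "historical_spend": 4500},
    # ...but Patient B's community historically had less spent on its care.
]

def predicted_priority(patient, spend_per_point=1000):
    # The system ranks patients by predicted cost, treating cost as "need".
    return patient["historical_spend"] / spend_per_point

for p in patients:
    print(p["name"], "true need:", p["true_need"],
          "priority score:", predicted_priority(p))
# Patient A scores 9.0, Patient B scores 4.5: equal need, unequal care,
# purely because of the biased proxy.
```

Nothing in the calculation is mathematically wrong; the harm enters entirely through the choice of proxy, which is why safeguards need to examine what a system is actually measuring.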
The word AI covers an enormous range of technologies — from simple rule-based systems to sophisticated language models. Teachers should help students be specific about which kind of AI they are discussing, because the capabilities, limitations, and ethical implications differ significantly. Importantly, in many low-connectivity and low-resource contexts, the AI systems most likely to affect students' lives are not the sophisticated language models that receive most media attention but automated systems in healthcare, finance, and public services that make decisions about resource allocation with much less human oversight.
The system I am looking at is an algorithm used by some hospitals to predict which patients are likely to need extra medical care, so that care can be allocated in advance. It uses data about each patient — their medical history, their previous visits to hospital, and how much has been spent on their care in the past. It could cause harm if it has been trained on historical data that reflected racial or economic inequality — for example, if patients from poorer communities had less spent on their care in the past, the algorithm might predict they need less in the future even when they actually need more. The people most affected would be those who already face the greatest barriers to healthcare — poor patients, patients from minority communities, and patients in rural areas. The safeguards I think should exist are: the algorithm's predictions should always be reviewed by a doctor, the algorithm should be regularly tested for differential accuracy across different patient groups, and patients should be able to know that the algorithm is being used in their care and to challenge its decisions.
Award marks for: a specific and real AI system rather than a vague or invented one; a genuine understanding of what data the system uses; a harm scenario that is specific and plausible rather than generic; clear identification of who bears the greatest risk; and safeguards that are specific and meaningful rather than a vague statement that the AI should be fair. Strong answers will connect the harm to a specific mechanism: not just "this could be biased" but "here is specifically how the bias would arise and who it would hurt".
Dear Development Team, I am writing as a student who would be affected by this system. I want it to be fair and to genuinely help students who need support — not to label students and limit their opportunities. My biggest worry is that the system will learn from historical data that reflected which students teachers already saw as needing help, which may have been shaped by bias. Students from poorer families, students who speak a different language at home, and girls in subjects like mathematics may have historically been offered less support than they needed — and a system trained on that data will repeat the same errors. My three questions are: what data exactly is the system trained on? Has it been tested for equal accuracy across students from different backgrounds? And can a student or their family challenge a recommendation the system makes? For meaningful oversight, I think every recommendation the AI makes should be reviewed by a teacher who knows the student and who can override it, the system's recommendations should be explained in plain language, and the school should keep records of how often recommendations are overridden and why. I hope you will take this seriously.
Award marks for: genuine engagement with specific AI risks rather than general worry; a clear understanding of how bias arises in training data; three questions that are specific, answerable, and reveal genuine critical thinking; and a model of oversight that is meaningful and specific rather than just saying people should check it. Strong answers will demonstrate that the student understands the difference between AI assistance (fine in this context with appropriate oversight) and AI decision-making (not appropriate for educational decisions without genuine human review).
AI is objective and unbiased because it is based on data and mathematics rather than human opinion.
AI systems inherit the biases of the data they are trained on and the design choices of the people who built them. Data is not neutral — it is a record of a world that contains inequality, historical injustice, and systematic discrimination. Mathematics applied to biased data produces biased results with mathematical precision. The objectivity of the method does not neutralise the bias of the inputs. In fact, the apparent objectivity of AI can make its biases more dangerous — because they are harder to challenge than openly subjective human decisions.
AI will take all the jobs and there will be nothing for people to do.
AI is changing the nature of work significantly, but the history of technological change suggests that new technologies tend to change which tasks people do rather than eliminating the need for people entirely. Tasks requiring physical presence, human relationship, ethical judgment, creative originality, and contextual understanding in complex real-world situations are much harder to automate than tasks involving pattern recognition in large datasets. The more important question is not whether AI will take all jobs but which jobs, in which communities, and what will replace them — and whether the people affected have access to the education and support they need to adapt.
If you are not a programmer or scientist, AI has nothing to do with you.
AI systems are increasingly involved in decisions about healthcare, education, employment, credit, criminal justice, housing, and social media for people all over the world — regardless of whether those people understand or have chosen to use AI. A student whose teacher uses an AI grading tool, a person whose loan application is assessed by an algorithm, a patient whose treatment is recommended by a diagnostic AI — all of these people are affected by AI whether or not they are involved in building it. Understanding AI — what it does, how it fails, and who is responsible — is a civic literacy skill for everyone, not only for technologists.
The countries and companies at the frontier of AI development are making decisions that are good for everyone.
AI development is concentrated in a small number of countries and companies, and the decisions made about which AI systems to build, what data to use, what to optimise for, and where to deploy them are driven primarily by commercial and geopolitical interests, not by the interests of the people most affected. Many of the people most affected by AI systems, including communities in the Global South where AI systems are increasingly deployed but rarely developed, have very little voice in those decisions. This is not an argument against AI but a reason why democratic governance of AI is important, and why digital literacy and civic engagement around technology matter globally, not only in wealthy countries.
Secondary AI teaching engages students with the deeper technical concepts, the philosophical questions, and the political dimensions of AI — preparing them to be informed participants in one of the most significant technological transitions in human history.
The most significant recent development in AI is the emergence of large language models (LLMs) such as GPT, Gemini, and Claude. These systems are trained on enormous quantities of text and learn to predict which word is likely to come next given the words that came before. Repeated over and over, this prediction produces systems that can generate fluent, apparently knowledgeable text on almost any topic.
LLMs do not know what words mean — they know which words tend to appear near which other words. They do not have beliefs, intentions, or knowledge — they have learned statistical patterns in language. They can hallucinate confidently and fluently. Their outputs reflect the biases of their training data. They are extraordinarily good at producing text that sounds authoritative and is wrong.
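A toy next-word predictor makes this concrete. The sketch below is vastly simpler than a real LLM (it only counts which word follows which in a tiny invented corpus, whereas LLMs use neural networks trained on billions of words), but the underlying principle is the same: the system generates text by predicting a plausible next word from statistics, with no representation of what any word means.

```python
# A toy next-word predictor (invented corpus; not a real LLM).
from collections import Counter, defaultdict
import random

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count, for every word, which words follow it in the training text.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        # Pick a likely next word according to the learned counts.
        word = random.choices(list(candidates),
                              weights=list(candidates.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat . the dog"
```

Scaling this idea up with neural networks and vastly more data is what makes LLM output fluent, but it does not add beliefs, intentions, or knowledge.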
One of the most important problems in AI research is alignment — ensuring that AI systems do what their designers actually intend rather than optimising in ways that produce harmful unintended consequences. This is harder than it sounds: specifying what you want an AI to do precisely enough that it cannot find a technically compliant but practically harmful way to do it is one of the deepest challenges in computer science. The alignment problem is both a technical problem and a values problem — you cannot align an AI with human values unless you can specify human values precisely and consistently, which human beings find very difficult.
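A deliberately simple, invented scenario can illustrate the specification problem. In the sketch below, the designer's intention is "help students learn", but the objective actually written down is "maximise the average test score", and an optimiser that only sees the written objective selects a technically compliant but harmful strategy.

```python
# A toy, invented illustration of the alignment/specification problem:
# the written objective captures only part of what the designer intends.

strategies = {
    "improve teaching quality":    {"avg_score_gain": 6,  "harms_students": False},
    "give more practice feedback": {"avg_score_gain": 5,  "harms_students": False},
    "exclude weakest students":    {"avg_score_gain": 11, "harms_students": True},
}

def objective(name):
    # The specification mentions only the score, so the optimiser has no way
    # to "know" that excluding students is unacceptable.
    return strategies[name]["avg_score_gain"]

best = max(strategies, key=objective)
print(best)  # 'exclude weakest students': compliant with the spec, not the intent
```

Real alignment failures are far subtler than this, but the structure is the same: the system optimises exactly what was specified, and the gap between the specification and the intention is where the harm arises.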
Who should make decisions about how AI is developed and deployed? Currently these decisions are made primarily by a small number of large technology companies and a small number of governments. Many of the people most affected — in lower-income countries, in marginalised communities, in the Global South — have little or no voice in these decisions. The question of democratic AI governance is one of the most important political questions of the coming decades.
AI raises specific questions about learning and assessment that are directly relevant to students. If AI can write essays, solve problems, and produce creative work, what does this mean for how students should be taught and assessed? The honest answer is that this is genuinely uncertain — and that the most useful educational responses are those that develop human capacities AI cannot replicate (depth of understanding, genuine creativity, ethical judgment, contextual wisdom) rather than those that simply try to prevent AI use.
AI systems that pass the Turing Test — that seem human in conversation — are genuinely intelligent and conscious.
The Turing Test measures whether a system can produce human-seeming text — not whether it understands, is conscious, or experiences anything. Current large language models can pass Turing-style tests in many contexts without any understanding, consciousness, or inner experience. This matters because it means impressive AI performance in conversation should not be taken as evidence of intelligence or consciousness in the philosophically significant senses of those terms. The appearance of understanding — which is real — should not be confused with understanding itself.
AI regulation will slow down beneficial AI development and leave us worse off.
This argument — most often made by those who benefit from unregulated AI development — conflates all AI development with beneficial AI development. Regulation that prevents harmful, biased, or unsafe AI does not slow beneficial development — it redirects development away from harmful directions. Aviation, pharmaceuticals, food, and finance are all heavily regulated industries that nonetheless continue to innovate. The question is not whether to regulate but how — and what values should guide the rules. Unregulated development optimises for what is profitable, not for what is beneficial.
Using AI is always cheating in education.
Whether using AI constitutes cheating depends entirely on what the educational task is intended to develop and assess. If the goal is to develop a student's ability to think, research, and argue independently, then submitting AI-generated work as your own defeats that purpose. But using AI as a tool — to check grammar, to explore different perspectives, to get feedback on a draft — may be entirely appropriate and may enhance learning. The honest question is not whether AI is used but whether the student has genuinely engaged with the learning the task is designed to produce. This requires teachers and students to have honest conversations about purpose and process rather than simple rules about tools.
AI is a global technology that benefits everyone equally.
AI development is geographically concentrated, and its benefits and harms are distributed very unequally. The communities most likely to benefit are those in wealthy countries with reliable internet access, in languages with large amounts of training data, and with the technical skills to use AI tools effectively. The communities most likely to bear harms — from biased algorithmic decision-making, data extraction without consent, surveillance, and the disruption of labour markets — are often those least involved in AI governance. Treating AI as uniformly beneficial ignores the significant evidence that it is reproducing and sometimes amplifying existing global inequalities.
Key texts and resources:
Kate Crawford's Atlas of AI (2021, Yale University Press) is the most comprehensive and readable account of the material, labour, and political dimensions of AI, and essential reading for teachers and strong students. Safiya Umoja Noble's Algorithms of Oppression (2018, NYU Press) documents how search algorithms and AI systems reproduce and amplify racial and gender bias, with specific documented examples. Shoshana Zuboff's The Age of Surveillance Capitalism (2019, PublicAffairs) is the most complete treatment of how data and AI are used in commercial surveillance; ambitious but important.
For the philosophy of AI: John Searle's original Chinese Room paper (1980, freely available online) and the responses to it provide the most important philosophical debate about AI and understanding.
For accessibility: Brian Christian's The Alignment Problem (2020, W.W. Norton) is the most readable introduction to AI safety and the challenge of making AI systems do what we want. Stuart Russell's Human Compatible (2019, Viking) makes the case for a new approach to AI development from one of the field's founders.
For AI and global inequality: the AI Now Institute (ainowinstitute.org) publishes freely available research on the social impacts of AI. The African Observatory on Responsible AI (aorai.org) provides specifically African perspectives on AI governance.
For practical AI literacy: the MIT Media Lab's Moral Machine project explores AI ethics through accessible online exercises. Google's Teachable Machine (teachablemachine.withgoogle.com) allows students to train simple AI models without any programming, a powerful hands-on learning tool where internet access is available.
For AI and education specifically: the UNESCO report AI and Education: Guidance for Policy-Makers (2021) is freely available and directly relevant to global educational contexts.