How to find, evaluate, and think critically about the information you encounter — in news, on social media, and in everyday sources. In a world of abundant information and deliberate misinformation, knowing how to tell the difference between reliable and unreliable sources is one of the most urgent skills of our time.
Media literacy at Early Years level is about building the foundational habit of questioning information before accepting it — recognising that not everything seen or heard is necessarily true, and that asking questions is a sign of intelligence, not doubt or distrust. Young children are learning to distinguish fantasy from reality, to understand that adults sometimes disagree, and to notice that the same event can be described very differently by different people. These are the building blocks of media literacy.

In low-connectivity contexts, the most relevant information environments for young children may not be social media but community talk, radio, local media, and the stories told by adults around them. Media literacy at this level should engage with these genuinely local information environments rather than assuming digital media is the only relevant context.

The most important message at this level: it is good and brave to ask "is that true?" It is wise to check before you believe, and wise to check before you repeat. Children who develop this habit early are significantly more resistant to misinformation throughout their lives. No materials are needed for any activity below.
Two figures, each with a speech bubble containing a different version of the same event. A strong completion shows a genuine difference between the two versions — not just different words but a genuinely different interpretation or emphasis. Celebrate drawings that show the tellers in different positions relative to the event (one was there, one was not; one is older, one is younger).
Ask: which version do you believe? How would you find out what really happened? The checking question is as important as the recognition of difference.
Before I believe something I hear, I will ask: did the person telling me see it themselves, or did they hear it from someone else? And before I repeat something, I will check: is this true and am I sure, or am I only passing on something I heard without knowing if it is right?
The specificity of the checking question is the most important thing. Vague commitments to check ("I will think about it") are less valuable than specific questions ("Did the person who told me see it themselves? Can I find out from another source?").
If many people believe something, it must be true.
Many people can believe the same false thing — especially when that thing spreads quickly through a community or when it confirms what people already want to believe. The number of people who believe something does not determine whether it is true. Throughout history, large numbers of people have believed things that turned out to be false. The question to ask is not how many people believe it but what evidence supports it.
Adults always tell the truth.
Adults are generally more reliable sources of information than rumour or guesswork — they often have more experience and knowledge. But adults also make mistakes, believe things that are not true, have reasons to tell stories in particular ways, and sometimes deliberately mislead. Respecting adults does not mean accepting everything they say without thought — it means giving their information appropriate weight while still thinking carefully about it. Questioning a claim is not the same as disrespecting the person who made it.
Checking whether something is true means you do not trust the person who told you.
Checking whether something is true is not an act of distrust towards any person — it is a good habit that good information-users practise regardless of who the source is. Even people we trust completely can be wrong — they may have received false information themselves, they may be misremembering, or they may not have had access to the full picture. Checking is a sign of intellectual care, not personal suspicion.
Media literacy at primary level introduces students to a structured framework for evaluating information — distinguishing between news, opinion, and advertising; asking critical questions about sources and motives; understanding how misinformation spreads and why it is so effective; and developing practical verification habits.
Students often struggle to distinguish between these three categories of content. News aims to report verifiable facts about events. Opinion aims to express and argue for a point of view. Advertising aims to persuade people to buy or do something. These three categories often appear together in the same newspaper, website, or broadcast — and the boundaries between them are often deliberately blurred. Students who can identify which category a piece of content belongs to are significantly better equipped to evaluate it appropriately.
The most important question about any piece of information is: "who is telling me this, and why?" Primary sources (people who witnessed or participated in an event) are generally more reliable than secondary sources (people reporting what someone else said happened). Expert sources (people with relevant knowledge and track records) are generally more reliable than non-expert sources on specialist topics. Sources with transparent funding and editorial standards are generally more reliable than anonymous or opaque sources. But all of these are heuristics, not rules — there are excellent anonymous sources and terrible expert sources.
Research by Sam Wineburg and the Stanford History Education Group shows that the most effective fact-checking technique is lateral reading — opening multiple browser tabs about an unfamiliar source and reading what others say about it, rather than reading the source itself deeply. Expert fact-checkers spend very little time reading the content of a suspicious source and much more time checking the source's reputation, funding, and track record. This is counterintuitive but highly effective.
Misinformation is false information spread without deliberate intent to deceive; disinformation is false information spread with deliberate intent to deceive. The distinction matters for moral assessment, but both can cause the same harms. False information spreads faster and further than true information, particularly on social media, because it tends to be more emotionally arousing — more surprising, more frightening, more outrageous. Understanding this helps students resist the pull of emotionally compelling false information.
The claim I heard was that a new medicine was being tested in our area that could cause serious side effects, and that families should refuse to allow their children to participate. The source was a message shared between parents on a community phone network — the original sender was not identified. The purpose appeared to be to warn the community, but the message created significant fear without offering specific information about what the medicine was or where the information came from. No evidence was offered — no named organisation, no document, no named person with medical knowledge. When I asked a health worker at the local clinic, she said she had no information about any such trial and that the message appeared to be false. My conclusion: this claim is unreliable. The anonymity of the source, the absence of specific verifiable details, and the strong emotional effect (fear) are all warning signs. The pause-and-check habit would have prevented unnecessary distress if applied before sharing.
Award marks for: a genuine and locally relevant claim; honest application of all four evaluation criteria; a conclusion that is consistent with the evidence gathered; and an explanation of the reasoning rather than just a verdict. Strong answers will identify the specific features that make the source more or less reliable rather than making a general judgment.
You can tell if something is false by whether it looks professional or well-written.
Some of the most effective disinformation is highly professional, well-written, and visually sophisticated — because producers of disinformation know that these qualities increase credibility. Conversely, reliable information can be informal, imperfectly presented, and locally produced. The quality of the presentation tells you nothing reliable about the accuracy of the content. The relevant questions are about source, evidence, and corroboration — not about how polished the presentation is.
Misinformation is mainly a problem created by unintelligent or uneducated people.
Research consistently shows that intelligence and education provide limited protection against misinformation — and in some cases, educated people are more susceptible to sophisticated misinformation because they are more confident in their ability to evaluate it. Misinformation exploits universal features of human cognition — confirmation bias, emotional responsiveness, social trust — that affect everyone. The people who are best at resisting misinformation are not necessarily the most intelligent but those who have developed specific habits of verification and those who are most aware of their own vulnerability.
Once you know that a piece of information is false, you stop believing it.
Research on the continued influence effect shows that corrections to false beliefs are often less effective than hoped — people continue to be influenced by information they know to be false, particularly when the false information was vivid, emotionally engaging, or consistent with pre-existing beliefs. Effective correction requires not only stating that the information is wrong but explaining why it was wrong, providing an accurate alternative narrative, and repeating the correction multiple times. A single correction is rarely sufficient.
Being media literate means being sceptical about everything.
Universal scepticism — doubting everything — is as problematic as naive credulity. Excessive scepticism prevents people from accepting reliable information, creates paralysis, and can itself be exploited by those who want to prevent belief in true and important facts (about climate change, public health, justice). The goal of media literacy is calibrated scepticism — applying scrutiny proportional to the importance of the claim, the reliability of the source, and the potential cost of being wrong. Strong evidence from reliable sources should produce genuine belief. Weak evidence from unreliable sources should produce scepticism.
Secondary media literacy engages students with the systemic dimensions of information — how information ecosystems are structured, who has power over them, how they can be manipulated, and what democratic information environments require. The information ecosystem: information does not flow neutrally through society. It is produced, selected, amplified, and suppressed by institutions and individuals with specific interests, values, and resources. Understanding the information ecosystem means understanding who produces information, who distributes it, who benefits from its spread, who is excluded from the conversation, and what structural conditions shape what information is available.
Social media platforms use algorithmic systems to determine which content each user sees — prioritising content that is most likely to keep the user on the platform (which tends to be emotionally arousing, identity-affirming, and novel). This produces filter bubbles (seeing only information consistent with existing beliefs) and echo chambers (interacting primarily with people who share your views). Research on filter bubbles is more mixed than the initial claims suggested — people are exposed to more cross-cutting information than the filter bubble hypothesis implies — but algorithmic amplification of emotionally arousing content (particularly outrage) is well-documented.
Deliberate disinformation campaigns — by governments, political actors, and commercially motivated agents — are well-documented features of the current information environment. The most effective disinformation campaigns exploit existing social divisions, amplify extreme voices, and create confusion rather than specific false beliefs. Documented examples include Russian information operations during the 2016 US election, disinformation about COVID-19 vaccines, and systematic false information about electoral integrity.
In most countries, the media is owned by a small number of individuals and corporations whose interests and values inevitably shape what is produced, emphasised, and ignored. The concentration of media ownership is associated with reduced diversity of perspective, less investigative journalism, and greater alignment of editorial content with owner interests. Public service media (funded by government but editorially independent) is one structural response to this problem.
Fact-checking organisations are the solution to the misinformation problem.
Fact-checking is valuable but has significant limitations as a response to large-scale misinformation. Fact-checks reach a much smaller audience than the original false claim — most people who see misinformation never see the fact-check. The continued influence effect means that even people who see a fact-check often remain influenced by the original false information. Fact-checks can sometimes amplify false claims by repeating them. And the volume of false information now far exceeds the capacity of any fact-checking infrastructure to address it. Fact-checking is one useful tool among many, not a comprehensive solution.
Mainstream media is always more reliable than independent or social media.
Mainstream media varies enormously in reliability, editorial standards, and independence from commercial and political pressure. Some mainstream media is excellent; some is unreliable, partisan, or captured by powerful interests. Some independent media has very high editorial standards. The relevant questions are about editorial independence, transparency, track record for accuracy, and correction practices — not about whether a source is classified as mainstream. In many countries, the most reliable journalism is done by organisations that are not part of the mainstream media establishment.
The solution to the misinformation problem is to teach everyone to be more sceptical.
Increasing general scepticism is as likely to be exploited by disinformation as to resist it. Disinformation campaigns often deliberately target trust in reliable institutions — encouraging scepticism of established science, legitimate media, and electoral systems — so that false claims seem as plausible as true ones. The goal of media literacy is not maximum scepticism but calibrated scepticism — trusting reliable sources appropriately and being sceptical of unreliable ones. This requires the ability to distinguish between sources, which is a specific skill, not a general attitude of distrust.
The misinformation problem is mainly caused by social media and would be solved by better platform regulation.
Misinformation long predates social media: it circulated through print, radio, and television for decades before the first social platforms existed. Social media has significantly accelerated the speed and scale at which false information spreads, but the underlying cognitive, social, and political conditions that make misinformation effective are not created by social media. Platform regulation can help reduce algorithmic amplification of false content, but it cannot address confirmation bias, social identity motivations for believing false information, or deliberate state-sponsored disinformation. A complete response to misinformation requires action at multiple levels — individual, platform, regulatory, and social.
Key texts and resources:

Sam Wineburg's Why Learn History (When It's Already on Your Phone) (2018, University of Chicago Press) is the most rigorous account of how professional fact-checkers actually evaluate information, with specific practical implications for education. His Stanford History Education Group's Civic Online Reasoning curriculum (sheg.stanford.edu) provides free, peer-reviewed media literacy lessons.

Kate Starbird's research on crisis misinformation is available through her University of Washington lab and in many accessible articles. Whitney Phillips's This Is Why We Can't Have Nice Things (2015, MIT Press) examines the origins of trolling and online harassment as disinformation precursors.

For the algorithmic dimension: Eli Pariser's The Filter Bubble (2011, Penguin) introduced the concept; Zeynep Tufekci's Twitter and Tear Gas (2017, Yale) provides the most nuanced analysis of social media and political mobilisation.

For media ownership: Robert McChesney's The Political Economy of Media (2008, Monthly Review Press) provides the most comprehensive structural analysis.

For prebunking: John Cook and Stephan Lewandowsky's research on inoculation theory is available through their respective university websites and in many freely available articles.

For journalism standards and global news data: the Reuters Institute Digital News Report (annual, freely available at digitalnewsreport.org) provides the most comprehensive global data on news consumption, trust, and misinformation.

For local African context: the Africa Check website (africacheck.org) is one of the most reliable fact-checking organisations on the continent and publishes free educational resources.

For practical classroom tools: the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims) developed by Mike Caulfield is freely available and widely used.