Democracy & Government

Media Literacy

What media literacy is, how to tell trustworthy information from misleading content, why this matters for democracy, and practical skills for navigating today's media landscape.

Core Ideas
1 Not everything we hear is true
2 It is good to ask questions
3 Different people may tell us different things
4 A good friend tells the truth
5 We can check with a grown-up we trust
Background for Teachers

Young children can begin to understand media literacy through the simple idea that not everything they hear is true, and that asking questions is a good habit. Children do not need the word 'media literacy'. But they can understand that different people might tell them different things, that stories can be made up, and that it is okay — even wise — to check. They know the feeling when a friend tells them something that sounds too good to be true. They know the difference between a story for fun and a fact about the world. Building these early habits — curiosity, questioning, checking — is the foundation of the critical thinking they will need to navigate a world full of information, much of it unreliable. No materials are needed.

Classroom Activities
Activity 1 — True stories and made-up stories
Purpose: Children understand that some stories are true and some are made up, and that both have their place.
How to run it: Ask: what is your favourite story? Collect answers. Some may be fairy tales; some may be about real things. Discuss: some stories are made up to teach us or to entertain us — like stories about dragons or princesses. Made-up stories are wonderful. Other stories are real — like stories about things that happened in your family, in your town, or in the world. Real stories are important too. Ask: what is the difference? Made-up stories often have magical things that do not happen in real life. Real stories stick to what really happened. Discuss: it is good to know which is which. When someone tells you a story, one useful question is: is this a made-up story, or a true story?
💡 Low-resource tip: Discussion only. Use stories children already know. No materials needed.
Activity 2 — Is it true? How do you know?
Purpose: Children learn the useful habit of asking questions about what they hear.
How to run it: Tell the children: sometimes people tell us things that are not true. Sometimes by accident. Sometimes on purpose. What can we do? Play a small game. Teacher says a claim — some true, some silly. 'The sun is bigger than the moon.' True. 'Cats can drive cars.' Not true. 'Birds have feathers.' True. 'Fish live in trees.' Not true. Ask after each: how do you know? Discuss: we often know because we have seen something ourselves, because a grown-up we trust told us, because a book we read said so, or because it just makes sense. Asking 'how do I know?' is one of the most important questions in life. It is okay not to be sure. It is wise to ask a grown-up when you are not sure.
💡 Low-resource tip: Discussion only. No materials needed.
Activity 3 — When stories are not kind or fair
Purpose: Children start to notice when stories are made to be unkind or unfair.
How to run it: Tell a gentle story. In a playground, one child tells another: 'All the children in the red class are mean.' The second child believes it and starts avoiding the red class. But actually, most red class children are lovely. One of them shares snacks and another helps when someone falls. Ask: was the first child's story fair? No. It was unfair to the whole red class. How should the second child have checked? By going to meet red class children. By asking a grown-up. By noticing that not everyone fits one label. Discuss: sometimes people tell us stories that make a group of people seem bad — all boys, all girls, all children from one school, all people from one country. These stories are almost never true. They miss the fact that every group is made up of different individuals. Real life is more complicated than simple stories.
💡 Low-resource tip: Tell the story verbally. No materials needed.
Discussion Questions
  • Q1: What is the difference between a story for fun and a true story?
  • Q2: Has anyone ever told you something that turned out not to be true? How did you find out?
  • Q3: Who do you go to when you are not sure if something is true?
  • Q4: Can you think of a time when a story made a group of people sound bad — but was not actually true?
  • Q5: Why is it good to ask questions instead of just believing?
Writing Tasks
Drawing task
Draw a picture of you asking a question. Write or say: I asked about ___________. It is good to ask questions because ___________.
Skills: Celebrating questioning as a habit
Sentence completion
When someone tells me something surprising, I can ___________. A good way to check is to ___________.
Skills: Articulating a habit of verification
Common Misconceptions
Common misconception

If someone says something with confidence, it must be true.

What to teach instead

Some people sound very sure even when they are wrong — and some people sound uncertain even when they are right. The way someone says something is not proof. The useful question is: how do they know? If the answer is 'someone told me' or 'I saw it somewhere', that is not always good enough. Real knowing usually comes from careful checking.

Common misconception

If you read something on a screen, it must be true.

What to teach instead

Anyone can put anything on the internet. Some of it is true; lots of it is not. A phone or computer screen is just a place where many people share what they say. Some of those people are careful; some are not; some are trying to trick you. Just because something shows up on a screen does not mean it is true. It always helps to ask: who made this? How do they know?

Core Ideas
1 What media is and how much of it there is
2 Different kinds of sources — trustworthy and unreliable
3 Misinformation, disinformation, and bias
4 Skills for checking information
5 How social media can mislead
6 Being a thoughtful consumer of information
Background for Teachers

Media literacy is the ability to find, understand, check, and use information wisely. It includes skills for reading news, watching videos, using social media, and recognising advertising. In the modern world, these skills are essential. People receive more information every day than previous generations saw in a month. Much of it is useful and true. Much of it is misleading, biased, or outright false. The challenge is knowing how to tell the difference.

Types of media

News media (newspapers, TV news, radio, online news) report on current events. Social media (Facebook, Instagram, TikTok, X/Twitter, YouTube) lets ordinary people and organisations share content. Messaging apps (WhatsApp, Telegram) spread content person to person. Entertainment media (films, games, music) often comes mixed with messages and advertising. Podcasts, blogs, and newsletters are growing rapidly. All of these can carry good information or bad information.

Key terms

Misinformation is false information spread without intent to deceive — such as rumours passed on by people who think they are true. Disinformation is false information spread on purpose to mislead — such as political propaganda or scams. Bias is a leaning toward one side or viewpoint, which may or may not be admitted. Advertising is content paid for to promote products, services, or ideas — sometimes clearly marked, sometimes disguised. The problem is real and serious. Misleading information spreads fast on social media — often faster than accurate information. False news stories have influenced elections, public health decisions, and international relations. 'Deepfakes' (faked videos or audio made by AI) are making it harder to trust what we see and hear. Many people spend hours a day on platforms designed to grab their attention, not to inform them well.

Basic skills for checking information

(1) Check the source. Who made this? Is it a known, trustworthy organisation? A random user? A fake account?
(2) Check the date. Is this story current, or from years ago? Old stories sometimes get recycled as new.
(3) Look for evidence. Does the story link to sources? Are the sources real?
(4) Read beyond the headline. Headlines are designed to grab attention and often oversimplify.
(5) Check with other sources. Can you find the same story reported by multiple trusted outlets?
(6) Be cautious of emotional reactions. Content designed to make you angry or frightened is often designed to bypass your thinking.
(7) Check your own biases. We tend to believe stories that match what we already think.

Practising media literacy does not mean being cynical about everything. It means being curious, careful, and willing to update your views when evidence changes.

Why this matters for democracy

Democracies depend on citizens being able to make informed choices. When people cannot tell trustworthy information from manipulation, democracy itself is threatened: people vote based on false beliefs, public health decisions are made on bad information, hostile foreign powers can influence elections, and hatred spreads through rumours. Media literacy is not a luxury — it is a survival skill for free societies.

Teaching note

Media literacy can be taught in many ways, and age-appropriate skills matter. Young children need to start with basic questioning habits. Older students can learn more technical skills. The key is building lifelong habits of curiosity and careful thinking, not teaching specific lists to memorise.

Key Vocabulary
Media
Ways of sharing information with many people at once — like newspapers, TV, radio, websites, and social media apps.
Media literacy
The ability to find, understand, check, and use information from many sources wisely.
Misinformation
False information that is shared without meaning to mislead — like rumours passed on by people who think they are true.
Disinformation
False information that is shared on purpose, in order to mislead. Used in political campaigns, scams, and hostile operations.
Source
The person or organisation that produced a piece of information. Checking the source is one of the most important media literacy habits.
Bias
A leaning toward one side of an issue. Every source has some bias; good journalism tries to be aware of it and fair.
Fact-checking
Carefully testing whether claims are true — by checking evidence, contacting sources, and comparing with other reports.
Deepfake
A fake video or audio made by computers to look or sound real. A growing danger for media trust.
Classroom Activities
Activity 1 — Where does it come from?
Purpose: Students learn to ask the first important question: who made this?
How to run it: Explain that the first question for any piece of information is 'where does it come from?' Walk through examples. (1) A story on a well-known newspaper's website, written by a named journalist. The source is clear: the newspaper. Journalists are trained and bound by rules. Newspapers can be sued for false stories. This does not mean every story is right, but it means there is accountability. (2) A post on social media by someone with a made-up name and no photo. The source is anonymous. There is no accountability. The post could be by anyone — a concerned neighbour, a bored teenager, a paid propagandist, or a hostile foreign operation. (3) A video from a government agency explaining a health topic. The source is clear and likely reliable on health facts (though political agencies may be less neutral on politics). (4) A message forwarded many times on WhatsApp, with no original source. Impossible to check. Such messages often contain misinformation. (5) An advertisement designed to look like a news story. The source is a company trying to sell something. Everything in it is designed to help sales, not to inform. Ask: which sources should you trust most? Trust levels vary, but the principle is simple: known, accountable sources are more reliable than anonymous ones. A story from the New York Times or BBC or a major national newspaper can be wrong, but it will usually be corrected. A WhatsApp forward cannot be corrected because nobody knows who wrote it. Discuss: this does not mean all traditional media is perfect or that everything on social media is wrong. Some social media posters are careful experts. Some traditional media outlets are biased. The rule is to think about who is speaking and what they might have to gain from saying it — then judge accordingly.
💡 Low-resource tip: Teacher presents examples verbally. No materials needed.
Activity 2 — Spotting tricks in misleading content
Purpose: Students learn to recognise common techniques used in misleading content.
How to run it: Tell students that people who spread misleading content use certain tricks. Learning these makes them easier to spot. Walk through common techniques. (1) Shocking headlines. If a headline is designed to make you feel angry, afraid, or shocked — stop and check before sharing. This is often the first sign. (2) Vague sources. 'Scientists say...' 'A doctor told us...' without naming which scientists or which doctor. Real science cites real researchers at real institutions. (3) Old or out-of-context images. An image from a different event is used to illustrate a new story. Photos from past wars, old disasters, or unrelated incidents are constantly recycled. A reverse image search (asking a search engine where an image came from originally) can often reveal this. (4) Too-good-to-be-true claims. Miracle cures. Enormous conspiracies. Simple solutions to complex problems. If it sounds too good to be true, or too shocking to be true, it usually is not true. (5) Emotional manipulation. Content designed to trigger strong emotions — especially anger or fear — is often designed to make you share it before thinking. Real information can also be emotional, but it usually focuses on evidence. (6) Targeting a group. Claims that 'all X people are Y' — 'all immigrants do this', 'all politicians are corrupt' — are usually oversimplifications at best and prejudice at worst. Real information recognises variety within any group. (7) No source or evidence. A claim with no link, no source, no evidence. A claim that 'some people are saying' without identifying who. Ask: have you seen content that uses these tricks? Most students will recognise examples. Discuss: recognising tricks does not mean being paranoid. Most content you see will not use these tricks. But knowing the warning signs lets you pause on the few pieces that do — and these are usually the ones most worth checking.
💡 Low-resource tip: Teacher presents techniques verbally. Use examples from students' own experience. No materials needed.
Activity 3 — The social media challenge
Purpose: Students understand how social media specifically shapes what they see and why this matters.
How to run it: Explain how most social media platforms work. When you use Facebook, Instagram, TikTok, or similar, you do not see everything posted by everyone you follow. You see what the platform's algorithm decides you will most likely engage with — click, like, share, comment. The algorithm learns from what you do. The more time you spend on content, the more you see of that type. Platforms make money from your attention — through ads that appear while you use the platform. So they are designed to keep you using the platform as long as possible. Discuss what this means. (1) You see more of what you already like. This is good for short-term enjoyment but bad for understanding different views. (2) You see more of what provokes strong emotion. Content that makes you angry, amazed, or outraged gets more clicks. Platforms learn this and show you more of it. This can push you toward more extreme views over time. (3) Misleading content that is emotional often spreads further than careful content. Studies have found that false stories spread faster than true ones on many platforms. (4) You do not usually see what you have blocked or disagreed with. This can create a 'bubble' where you only see one side of issues. (5) Ads, sponsored posts, and real content can look similar. Companies, political groups, and sometimes hostile foreign actors pay to show you specific content. Present some habits for healthier use. (1) Know that your feed is curated, not a fair sample of the world. (2) Follow some accounts that challenge your views. (3) Pause before sharing — especially if you feel a strong emotion. (4) Check sources before believing. (5) Take breaks. The platforms are designed to keep you scrolling; doing something else is a useful break. (6) Learn about the platform's tools — 'block', 'mute', 'report' — but do not expect the platform itself to protect you. Discuss: social media has real benefits — connection with friends and family, learning, creativity. The goal is not to avoid it but to use it wisely. Understanding how it works is the first step.
💡 Low-resource tip: Teacher presents concepts verbally. Adapt to local platforms. No materials needed.
Discussion Questions
  • Q1: Where do you get most of your information about the world? How reliable do you think each source is?
  • Q2: Have you ever shared something online that turned out not to be true? What did you learn?
  • Q3: Why do you think false stories sometimes spread faster than true ones?
  • Q4: How can you tell if a news story is trying to inform you or to make you feel strongly?
  • Q5: Should social media companies do more to stop misleading content? Why or why not?
  • Q6: What is one habit about media use you would like to change or improve?
Writing Tasks
Task 1 — Explain and give an example
Explain what media literacy is and give ONE reason why it matters in today's world. Write 4 to 6 sentences.
Skills: Defining a concept, showing its importance
Task 2 — Short argument
Explain how social media can sometimes mislead people even when it is not intentionally sharing false stories. Write 4 to 6 sentences.
Skills: Reasoning about system design
Common Misconceptions
Common misconception

If lots of people are sharing it, it must be true.

What to teach instead

The number of shares or likes tells you nothing about whether something is true. Studies show that false stories often spread faster and wider than true ones — because they are more emotional, more surprising, or more reinforcing of what people already believe. Popular is not the same as accurate. Judging information by how many people engage with it is one of the easiest ways to be misled.

Common misconception

If it has a picture, it must be real.

What to teach instead

Pictures can be misleading in many ways. They can be taken out of context — a real photo from another event used to illustrate a different story. They can be edited. They can be generated by AI. They can be arranged to tell a particular story. The rise of AI image generation is making this even more serious — images that look completely real can now be created from nothing. Always ask: where is this photo from? What is it actually showing?

Common misconception

Checking sources is for experts or for people who have lots of time.

What to teach instead

Basic source-checking takes only a few moments and anyone can do it. Asking 'who made this?', 'when was it made?', 'what is the source?' is quick. A quick search to see whether other outlets are reporting the same thing is also fast. You do not need to be an expert to be thoughtful. What matters is building the habit of pausing before you believe or share — especially when a story makes you feel strong emotions.

Core Ideas
1 Media ecosystems in the 21st century
2 Types of misleading content and their tactics
3 Cognitive biases and why we fall for misinformation
4 Algorithmic amplification and platform design
5 Fact-checking and verification
6 Disinformation as a political weapon
7 Media literacy as a democratic skill
8 Building critical habits for a lifetime
Background for Teachers

Media literacy has moved from a useful skill to an essential civic requirement in the digital age. Understanding its theoretical foundations and practical challenges is essential for secondary teaching.

Media ecosystems

The modern information environment differs fundamentally from what existed before 2000. Traditional media (newspapers, broadcast TV, radio) dominated 20th-century information. Their business model depended on mass audiences; their practice included editorial control, fact-checking, and professional standards. These were imperfect but created some shared framework for public discourse. The internet and social media have transformed this.

Anyone can publish

Distribution is instantaneous. Gatekeepers no longer control what reaches audiences. Business models shifted from circulation to attention. Platforms aggregate content from millions of sources and curate through algorithms. These changes have democratised publishing — giving voice to many previously excluded — but also enabled unprecedented spread of misinformation.

Types of misleading content

Claire Wardle and others have developed useful typologies.

Misinformation

False content spread without intent to deceive.

Disinformation

False content spread deliberately to mislead.

Malinformation

True information spread out of context to harm.

Specific tactics

Fabricated content (entirely false); manipulated content (real but altered); imposter content (false sources impersonating real ones); false context (real content presented in misleading context); misleading content (using information to frame issues misleadingly); satire misread as truth. Sophisticated disinformation combines these techniques.

Recent developments include

Deepfakes (AI-generated video/audio); synthetic media; coordinated inauthentic behaviour (networks of fake accounts); algorithmic manipulation (exploiting platform dynamics).

Cognitive biases

Understanding why people fall for misinformation requires psychology.

Confirmation bias

We favour information that confirms existing beliefs.

Availability heuristic

We judge probability by ease of recall (vivid memorable content seems more likely).

Emotional reasoning

Strong emotions override analytical thinking.

Group identity

We trust our ingroup more than outgroups.

Repeated exposure

We believe things more as we see them more (even when we know they are false — the 'illusory truth' effect).

Dunning-Kruger effect

Incompetent people often overestimate their understanding.

Attention economics

Limited cognitive bandwidth means we cannot check everything. Social media platforms exploit these biases — deliberately or through algorithmic selection for engagement.

Algorithmic amplification

Platforms use algorithms to decide what users see. Algorithms typically optimise for engagement — metrics like time spent, clicks, shares, comments. This has documented effects: amplification of emotional and outrage-inducing content; filter bubbles (people seeing more of what they already agree with); radicalisation pathways (recommendations leading to increasingly extreme content); mis/disinformation spreading faster than corrections. Research (Vosoughi et al., MIT 2018) found false news spreads significantly faster and wider than true news on Twitter. Platform design choices matter enormously. Meta's own research (leaked 2021, 'Facebook Files') showed awareness of harms including teen mental health impacts, ethnic violence amplification, and political polarisation.
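For teachers who want a concrete picture of engagement optimisation, the logic can be sketched in a few lines of Python. This is a deliberately oversimplified illustration, not any real platform's code: every field name and weight below is invented for the example. The point it demonstrates is structural — nothing in the scoring rewards accuracy, so emotionally provocative and belief-confirming content floats to the top.

```python
# Illustrative sketch of an engagement-ranked feed.
# NOT any real platform's algorithm: all field names and weights are invented.

def engagement_score(post, user_history):
    """Score a post by predicted engagement, not by accuracy."""
    score = 0.0
    score += 2.0 * post["outrage_level"]       # provocative content draws clicks
    score += 1.5 * post["matches_user_views"]  # confirmation feels good
    score += 1.0 * post["share_count"] ** 0.5  # popularity compounds
    # Note what is absent: no term rewards accuracy or careful sourcing.
    if post["topic"] in user_history:          # more of what was engaged with before
        score *= 1.5
    return score

def rank_feed(posts, user_history):
    """Order the feed purely by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: engagement_score(p, user_history), reverse=True)

posts = [
    {"topic": "health",  "outrage_level": 0.9, "matches_user_views": 1, "share_count": 400},
    {"topic": "science", "outrage_level": 0.1, "matches_user_views": 0, "share_count": 25},
]
feed = rank_feed(posts, user_history={"health"})
# The calm, accurate post ranks below the outrage-bait one.
```

Running the loop repeatedly with the user's clicks fed back into `user_history` is what produces the filter-bubble and radicalisation dynamics described above: each round of engagement makes similar content score higher the next time.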

Fact-checking and verification

Professional fact-checking has grown significantly. Organisations include PolitiFact, FactCheck.org, Snopes, Full Fact, AFP Fact Check, BBC Verify, and many others. The International Fact-Checking Network (IFCN) sets professional standards. Fact-checking has real effects — labels reduce sharing of false content; corrections do change some minds.

But fact-checking faces serious challenges

Can never keep pace with misinformation volume; often reaches audiences different from those spreading false claims; can be politicised.

Broader verification skills include

Reverse image search; metadata examination; source tracing; lateral reading (checking other sources about an unfamiliar site); using web archives. Stanford History Education Group research (2017) found students at all levels performed poorly on verification tasks — they judged sources by appearance rather than tracing the evidence. This led to redesigned curricula emphasising lateral reading.

Disinformation as a political weapon

State-sponsored disinformation is now a major international challenge. Russia has documented networks operating across multiple platforms and languages, with goals including undermining Western democracies, supporting specific political movements, and confusing information environments. The Internet Research Agency's activities around the 2016 US election are the most studied case. Other states (China, Iran, North Korea, various others) also conduct information operations. Domestic actors — politicians, movements, commercial operators — also run disinformation campaigns. The overall effect is to make trust harder everywhere.

Media literacy as democratic skill

Stanley Cohen, Neil Postman, and others long argued that mass media shapes democratic politics profoundly. In the digital era, media literacy has become essential for democratic participation. Citizens unable to distinguish reliable from unreliable information cannot make informed choices. The erosion of shared factual baselines makes democratic deliberation harder. Extensive research links media literacy education to civic engagement and resistance to misinformation. Finland has been a leader — its national media literacy programme, integrated across school curricula, is often cited as a model. Research suggests Finland's programme has produced measurable effects on students' ability to evaluate information.

Building critical habits

Skills alone are not enough; habits matter more.

Key habits include

Lateral reading (checking what other sources say about something before trusting it); the 'pause before sharing' discipline; actively following diverse sources; awareness of own biases; tolerance of uncertainty; willingness to update views. These are dispositions that must be practised, not just taught. Schools increasingly integrate media literacy across subjects rather than treating it as a separate topic.

Teaching note

Media literacy can sometimes become politicised if taught in ways that appear to favour one political side. Focus on universal skills applicable to any source — including sources that agree with one's own views. The best media literacy education is scrupulously non-partisan, teaching students to question all sources, including ones they trust.

Key Vocabulary
Media literacy
The ability to access, analyse, evaluate, and create media in a variety of forms. Includes skills for navigating news, social media, advertising, and other information sources.
Misinformation
False or misleading information shared without intent to deceive. Often spread by people who believe it is true.
Disinformation
False information deliberately created and spread to deceive. Used in political campaigns, propaganda, scams, and hostile state operations.
Malinformation
Genuine information deliberately shared out of context to cause harm. Includes leaking private information or presenting real statements selectively to mislead.
Deepfake
Synthetic media — typically video or audio — in which a person's image or voice is replaced or manipulated using AI. A significant challenge to verification.
Filter bubble
Eli Pariser's term for the state of intellectual isolation that can result from algorithmic personalisation, where users encounter mostly information that aligns with their existing views.
Echo chamber
A media or social environment in which people mostly encounter opinions that reinforce their own. Similar to filter bubbles but emphasises social rather than algorithmic causes.
Lateral reading
A verification technique: when encountering unfamiliar information, open other tabs to check the source rather than reading only what the source itself says. Recommended by fact-checking professionals.
Coordinated inauthentic behaviour
Networks of accounts (often fake) acting together to manipulate discussions, boost content, or create false impressions of public opinion. Major platforms now formally police this category.
Confirmation bias
The tendency to favour information that confirms existing beliefs and discount information that contradicts them. A major factor in susceptibility to misinformation.
Classroom Activities
Activity 1 — Analysing a piece of content
Purpose: Students apply critical analysis skills to an actual piece of media content.
How to run it: Present students with a piece of content — a news story, social media post, video, or advertisement. Ideally use real current examples, adapted to local context. Walk through a structured analysis. (1) Who created this? Look for named authors, publishers, organisations. Are they real? Are they trustworthy? (2) When was it created? Is this current, or is old content being recycled? (3) What is the claim? Separate factual claims from opinion. A factual claim is either true or false; an opinion is a judgement. (4) What evidence is offered? Linked sources, named experts, data, images. Is the evidence real and appropriate? (5) How does it make you feel? A strong emotional reaction is a warning sign — but not proof of anything. Pause and think. (6) What is the purpose? To inform, entertain, persuade, sell, or manipulate? Every piece of content has some purpose. (7) What is missing? Counter-arguments, context, complexity. (8) Can I check this elsewhere? Search for the same claim in other sources. If only one source has it, that is significant. Apply this to a specific example. Take a recent viral claim — health, politics, celebrity, whatever fits your context. Walk students through the analysis. Note where the content succeeds (clear sourcing, balanced presentation) and where it fails (anonymous claims, emotional manipulation, missing context). Discuss: this kind of analysis does not have to be exhaustive — even a quick version, done regularly, catches most serious problems. The goal is habit formation, not giving every piece of content 20 minutes of analysis. Practise with several more examples. Include a mix: some reliable, some misleading, some in between. Ask students to identify which is which and what the clues were. Discuss common patterns. Reliable content: clear authorship, traceable sourcing, measured tone, acknowledgement of complexity, available correction mechanisms. Unreliable content: anonymous or suspicious sources, emotional manipulation, simplistic claims, lack of evidence, designed to provoke sharing. Discuss: these are probabilities, not certainties. Reliable sources can be wrong; unreliable sources can occasionally be right. But over time, habits of critical analysis yield much better results than either blanket trust or blanket suspicion.
💡 Low-resource tip: Teacher selects content. Students analyse in groups. No materials needed beyond content to analyse.
Activity 2 — How algorithms shape what you see
PurposeStudents engage critically with the platform design driving their information environment.
How to run itSet out how major platforms work. Social media platforms (Facebook, Instagram, TikTok, X/Twitter, YouTube) are dominated by algorithmic feeds that decide what users see. The algorithm learns from each user's behaviour — clicks, time spent, shares, comments — and selects content likely to produce more of that behaviour. The explicit goal is engagement, not information quality. Present the research findings. Vosoughi et al. (MIT, 2018, Science): false news spread significantly faster, further, and deeper than true news on Twitter, particularly politically. Peel-Yunis (Stanford, 2021): YouTube recommendations drove viewers toward increasingly extreme content. Meta's internal research (leaked 2021, 'Facebook Files'): platform was aware that its design harmed teen mental health, enabled ethnic violence in multiple countries, and amplified political polarisation. Specific documented cases. Rohingya genocide in Myanmar (2017-onwards): Facebook acknowledged its platform played a significant role in spreading anti-Rohingya hate that contributed to ethnic cleansing. Brazilian political violence: WhatsApp forwarded misinformation at enormous scale in multiple elections. COVID-19 misinformation: anti-vaccine content and conspiracy theories spread widely on multiple platforms. Teen mental health: correlations between platform use (especially Instagram for girls) and depression, anxiety, and body image issues are well-documented, though causation is debated. Discuss mechanisms. Engagement-maximising algorithms tend to favour: emotionally provocative content (especially outrage); content confirming existing beliefs (for pleasure); extreme content (more stimulating than moderate); misinformation (often more striking than boring truth). Over time, this shapes users' information diet toward more extreme, more one-sided, and less accurate content. Discuss responses. 
Platform-level changes: algorithmic redesign (less emphasis on engagement, more on quality); content moderation (removing clearly harmful content); labels (indicating disputed or misleading content); sourcing transparency. Regulatory responses: the EU Digital Services Act (2022) requires large platforms to address systemic risks; the UK Online Safety Act (2023) takes a similar approach. User-level responses: understanding that the algorithm serves its goals, not yours; following diverse sources; pausing before engaging with emotional content; taking breaks; using external tools to block or filter. Ask: what responsibilities do platforms have? Strong view: if you design a system that amplifies harm, you are responsible for that harm. Platforms make billions from engagement; they should use those resources to address the consequences. Weaker view: platforms are neutral tools; users are responsible for their own choices. Most thoughtful positions combine the two: platforms should take reasonable steps to reduce known harms; users also need skills and agency. Discuss: can algorithmic amplification be fixed without destroying what people value about social media? Probably yes, but it requires trade-offs — less engagement in return for better quality. Some platforms (Reddit's older design, Mastodon) operate differently with different results. Major platforms generally resist significant changes because engagement is their business model.
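For teachers or older students who want to see the mechanism concretely, the engagement-driven ranking described above can be sketched in a few lines of code. This is an illustrative toy only: the scoring weights, field names, and signals are invented for teaching purposes, and real platform algorithms are proprietary and vastly more complex. The point it demonstrates is that nothing in the score measures whether a post is true.

```python
# Toy sketch of an engagement-maximising feed ranker.
# All weights and field names are invented for illustration;
# real platform ranking systems are proprietary and far more complex.

def engagement_score(post, user_history):
    """Predict engagement from behavioural signals, not accuracy."""
    score = 0.0
    # Reward topics the user already engaged with
    # (the mechanism behind 'content confirming existing beliefs').
    score += 2.0 * user_history.get(post["topic"], 0)
    # Emotionally provocative content tends to earn more reactions.
    score += 3.0 * post["outrage_level"]
    # Raw popularity signals: shares and comments beget more of both.
    score += 1.0 * post["shares"] + 0.5 * post["comments"]
    return score

def rank_feed(posts, user_history):
    """Order the feed by predicted engagement."""
    return sorted(posts, key=lambda p: engagement_score(p, user_history),
                  reverse=True)

posts = [
    {"topic": "cats", "outrage_level": 0.1, "shares": 10, "comments": 5},
    {"topic": "politics", "outrage_level": 0.9, "shares": 50, "comments": 40},
]
user_history = {"politics": 4}  # user previously engaged with political posts

feed = rank_feed(posts, user_history)
# The provocative, belief-confirming post ranks first; truth is
# nowhere in the calculation.
```

Students can experiment by changing the weights and asking which changes would make the feed healthier, and what a platform would lose by making them.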
💡 Low-resource tipTeacher presents research and cases verbally. Students discuss. No materials needed.
Activity 3 — Disinformation as political weapon
PurposeStudents engage with state-sponsored and political disinformation as a serious contemporary challenge.
How to run itSet out the phenomenon. Disinformation — deliberately false content spread to mislead — has become a major tool of political and international power. Governments, political movements, and commercial operators run sophisticated campaigns targeting specific audiences to shape beliefs and behaviour. Present well-documented cases. Russian operations around the 2016 US election: the Internet Research Agency operated hundreds of accounts across Facebook, Twitter, Instagram, and other platforms. Content included fake political pages attracting millions of followers, fake protests organised through social media, and coordinated disinformation targeting specific communities. The goal was not to support one candidate but to deepen social divisions and undermine confidence in democratic institutions. Other Russian operations: long-running campaigns in Europe targeting EU cohesion, NATO, and specific national elections. Operations in Africa targeting French and British influence. Operations in Latin America. In 2022-2023, major operations around the Ukraine invasion, including false claims of Ukrainian 'Nazis' and 'biolabs'. Chinese operations: increasingly active globally. Campaigns around the Hong Kong protests, Xinjiang, COVID origins, Taiwan. Generally more focused on promoting positive narratives about China than on active disinformation, but with exceptions. Iranian operations: active in Middle East, US, and other contexts. Domestic political disinformation: widespread globally. Political campaigns in many countries use misleading content about opponents. Commercial disinformation farms (often in Eastern Europe or South Asia) sell services to clients who want to influence specific debates. Discuss tactics. Bots and fake accounts: networks of accounts posing as ordinary users. Hijacking real people: hacking genuine accounts or creating fake ones with real names. Coordinated hashtags: making specific phrases trend artificially. 
Deepfakes: AI-generated video or audio of real figures saying things they did not say. Emotional content designed to spread: outrage and fear are the most effective. Appearing as ordinary citizens: fake 'grassroots' support (astroturfing) to make positions look popular. Discuss responses. Platform responses have improved: identification and removal of coordinated inauthentic behaviour; partnerships with fact-checkers; transparency reports. But the response remains insufficient for the scale of the problem. Government responses: counter-disinformation agencies (EU, UK, Canada, several others); election integrity measures; sanctions on state disinformation operators. Civil society responses: fact-checking organisations; digital literacy programmes; academic research groups (Stanford Internet Observatory, Oxford Internet Institute, Atlantic Council DFRLab, and many others). Citizen responses: developing media literacy; supporting reliable sources; reporting suspicious content. Ask: can disinformation be defeated? Probably not entirely. It is a persistent challenge, and many actors have incentives to engage in it. But it can be significantly reduced through combined platform, regulatory, educational, and civic responses. The alternative — ignoring it — is not viable. Discuss: disinformation threatens democracy directly. If voters cannot distinguish reality from manipulation, democratic choice is corrupted. Responses require defending information environments as civic infrastructure, as seriously as we defend physical infrastructure.
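One tactic above, coordinated bot networks, can be made concrete with a small sketch of how 'coordinated inauthentic behaviour' is sometimes detected: many distinct accounts posting identical text within a narrow time window. This is a simplified illustration; the thresholds, field names, and the single-signal approach are invented for teaching, and real detection systems combine many more signals (account age, posting cadence, network structure).

```python
# Illustrative sketch: flag identical text posted by many distinct
# accounts within a short window, a common bot-network fingerprint.
# Thresholds and field names are invented; real systems use far
# more signals than text matching alone.
from collections import defaultdict

def find_coordinated_posts(posts, min_accounts=3, window_seconds=60):
    """Return texts posted by at least `min_accounts` different
    accounts within `window_seconds` of each other."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"]].append(post)
    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = [p["timestamp"] for p in group]
        if (len(accounts) >= min_accounts
                and max(times) - min(times) <= window_seconds):
            flagged.append(text)
    return flagged

posts = [
    {"account": "a1", "text": "Vote NO on the treaty!", "timestamp": 0},
    {"account": "a2", "text": "Vote NO on the treaty!", "timestamp": 12},
    {"account": "a3", "text": "Vote NO on the treaty!", "timestamp": 30},
    {"account": "a4", "text": "Lovely weather today", "timestamp": 5},
]
flagged = find_coordinated_posts(posts)
# flagged contains only the slogan posted by three accounts within 30s
```

A useful discussion follow-up: real campaigns vary their wording slightly to evade exactly this kind of matching, which is why detection is an arms race rather than a solved problem.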
💡 Low-resource tipTeacher presents cases and tactics verbally. Students discuss. Handle sensitively in politically tense contexts. No materials needed.
Discussion Questions
  • Q1Research consistently finds that misinformation spreads faster and further than accurate information on social media. What does this tell us about the gap between platform design and democratic health?
  • Q2Meta's internal research revealed knowledge of significant platform harms that the company did not fully act on. What responsibilities should large platforms bear for the consequences of their design choices?
  • Q3Finland's media literacy programme is widely cited as a model. What specific elements should other countries try to adopt, and what might not translate?
  • Q4The concept of 'filter bubbles' is often invoked, but some researchers argue the phenomenon is overstated — people do encounter diverse views online. How strong is the evidence, and what are the implications?
  • Q5Deepfakes and synthetic media are rapidly improving. Within a few years, it may be impossible to distinguish real from fake video. What changes in media literacy, law, and institutions will be needed?
  • Q6State-sponsored disinformation (from Russia, China, and others) targets democratic processes worldwide. What level of response is justified, and what risks come with too strong a response?
  • Q7Some argue that algorithmic curation is not fundamentally different from editorial decisions made by traditional media. Others see it as qualitatively different. Which view is more accurate, and what follows from it?
Writing Tasks
Task 1 — Extended essay
'Media literacy is the most important civic skill of our time.' To what extent do you agree? Write 400 to 600 words.
Skills: Thesis-driven argument, engaging with democracy and information, balanced analysis
Task 2 — Analytical response
Explain why misinformation spreads so effectively on social media, referring to specific mechanisms of platform design and human psychology. Write 200 to 300 words.
Skills: Explaining interconnected mechanisms, using evidence
Common Misconceptions
Common misconception

Older people fall for misinformation; younger people are digital natives who are savvy.

What to teach instead

Research shows that younger people who grew up with digital media are often no better — and sometimes worse — at evaluating online information. The Stanford History Education Group's 2017 research found students across age groups struggled with verification tasks. 'Digital native' refers to familiarity with devices, not skills at evaluating information, which require explicit learning. Older adults may have more traditional research skills but less familiarity with platform dynamics. Both groups need media literacy education; neither group is automatically protected.

Common misconception

Fact-checking is effective against misinformation, so we should rely on it.

What to teach instead

Fact-checking is valuable but limited. Fact-checkers cannot match the volume of misinformation. Their audience often differs from those spreading false content. Research on corrections shows mixed effects — some people update beliefs; others reject corrections as biased. The 'backfire effect' (where corrections strengthen belief in false claims) is probably overstated but real in some cases. Fact-checking is one part of a broader response that must also include platform design, media literacy education, and regulation. Treating fact-checking as a complete solution overestimates what it can do alone.

Common misconception

The solution to misinformation is to trust only established mainstream media.

What to teach instead

Mainstream media can be wrong, biased, or manipulated. Treating it as automatically reliable produces its own errors. Established media outlets have sometimes spread misinformation, reported inadequately, or been captured by specific interests. The better approach is lateral reading: check claims across multiple sources of different kinds, including but not limited to mainstream media. Skilled media consumers trust no single source fully and verify important claims through multiple paths. Blanket trust in any source — mainstream or alternative — undermines media literacy.

Common misconception

Media literacy means being sceptical of everything.

What to teach instead

Universal scepticism is not media literacy — it is cynicism, and it produces its own problems. Cynics tend to be less, not more, accurate about reality. They often become convinced of specific alternative narratives (conspiracy theories) that they treat as uniquely true precisely because they are rejected by 'the mainstream'. Real media literacy involves calibrated trust — different sources earn different levels of confidence based on track record, transparency, accountability, and independent verification. It also involves tolerating uncertainty about things that genuinely cannot be known. The goal is better judgement, not wholesale rejection of sources.

Further Information

Key texts for students: Eli Pariser, 'The Filter Bubble' (2011) — foundational. Neil Postman, 'Amusing Ourselves to Death' (1985) — still relevant. Cass Sunstein, 'Republic.com 2.0' (2007) on political fragmentation. Zeynep Tufekci, 'Twitter and Tear Gas' (2017). Jonathan Haidt's work on social media and youth mental health. Jamais Cascio's writing on the disinformation landscape. For fact-checking: First Draft's resources (firstdraftnews.org); IFCN code of principles. For algorithms and platforms: Frances Haugen's disclosures; Mark MacCarthy, 'Regulating Digital Industries' (2023). Academic research: Renee DiResta, 'Invisible Rulers' (2024); Joan Donovan's work; MIT Media Lab. International documents: UNESCO's Media and Information Literacy Curriculum; OECD reports on digital citizenship. International bodies: First Draft / Shorenstein Center; Stanford Internet Observatory; Oxford Internet Institute; Atlantic Council DFRLab. Tools: Google Reverse Image Search; TinEye; Wayback Machine (archive.org); fact-checking sites (Snopes, PolitiFact, Full Fact); Bellingcat tutorials on open-source investigation. Data sources: annual reports from Reuters Institute (Oxford); Pew Research on media use.