How to use, evaluate, and stay safe with technology — a practical skill for every classroom, designed for low-cost environments where devices may be shared or limited and internet access may be inconsistent.
Digital literacy at Early Years level is about building three simple habits: pause before you touch or share, tell a trusted adult when something feels wrong, and remember that technology is a tool, not a world.

Many children in low-income settings first encounter technology through shared family phones, community radios, or televisions rather than personal devices. This is normal. You do not need devices in the classroom to teach digital literacy. Role-play, stories, drawings, and discussion are all highly effective.

Focus especially on the idea of private information. In many communities, personal safety concerns such as trafficking, exploitation, or community conflict make this a genuinely important protective lesson, not just a technical one.

Avoid making children afraid of technology. The goal is confidence and care, not fear. Celebrate children who ask questions and who tell adults when something confuses or worries them.
Any drawing of a named trusted adult with a completion such as "I go to my mother" or "I stop and tell my teacher." The goal is for children to identify a real person they can go to and connect that person to a concrete action.
Look for a specific named person and an active response verb such as tell, ask, or go to. Discuss: Who is this person? When did you last talk to them? Have you told them about this lesson?
Phone number and home address are private. Favourite colour and a picture of the sky are public. The child's name is a useful discussion point: context matters, and both answers can be valid depending on the situation.
Celebrate all thoughtful answers. The most important outcome is the discussion about why, not just correct placement. Use any disagreements to explore the idea that context changes what is safe to share.
If I can see it on a screen, it is real and true.
Screens can show real things and pretend things. We need to ask an adult we trust when we are not sure if something is real.
Technology is only for adults.
Children use technology every day — radios, phones, TVs, and more. Learning how to use it carefully is an important skill for everyone.
Sharing my name or photo online is the same as telling a friend.
When something is shared online, many people can see it — not just one friend. We should always ask a trusted adult before sharing anything about ourselves.
Digital literacy at primary level goes beyond safety to include information evaluation, responsible use, and the basics of how technology works. In many developing-world contexts, primary students encounter technology primarily through mobile phones, sometimes shared family devices, rather than personal computers or tablets. Teaching should reflect this reality.

Key local considerations:
- In many communities, mobile data is expensive and internet access is irregular. Students may be more familiar with SMS and WhatsApp than with web browsing.
- False information spreads very rapidly through messaging apps and is a genuine community concern in many regions.
- Online scams, especially those targeting money transfers, fake job offers, and false health claims, are common and cause real harm.
- Cyberbullying may look different in low-connectivity contexts: it often happens through SMS, voice messages, or on platforms used by adults as well as children.

All activities can be taught without devices. Where a device is available, it can be used as a demonstration tool, but this is not required. The thinking skills are more important than the technology.
A message came to our family WhatsApp group saying that drinking hot water with lemon every morning cures malaria. It was shared by a family member who was trying to help people stay healthy. I stopped before sharing it because I know malaria is a serious illness and I was not sure this was true. I checked by asking my teacher, who said this claim is not supported by medical evidence and that malaria needs proper medicine from a health clinic. I did not share the message and I explained to my family member what my teacher said.
Award marks for: identifying a specific real-world message; noting the source and a plausible reason for sharing; applying all three steps; and explaining the decision to share or not share with a reason. Strong answers will identify a genuine warning sign or a genuine checking step using a non-internet source such as a teacher, health worker, or radio programme.
Rule 1: Never share your password, even with a friend, because if they tell someone else, your account is not safe.
Rule 2: Ask a trusted adult before clicking a link, because some links download bad things onto the phone without you knowing.
Rule 3: Stop and check before you share a message, because false information can hurt people in your community.
Award marks for three specific, actionable rules (not just "be careful") with genuine explanations of why. The explanations should show understanding of cause and effect. Penalise vague rules such as "be safe online" with no specific action.
Free things online have no cost.
Free apps and websites often collect your personal information — who you are, what you do, who you know. Your data has value, even when you do not pay money.
If a message looks official, it must be real.
Anyone can make a message look official with logos, names, and formal language. Always check the sender's real contact details and never click links without asking a trusted adult.
What I do online is private because I use a nickname.
Nicknames do not make you invisible online. Apps, websites, and companies collect information about your device, your location, and your behaviour even if they do not know your real name.
I can trust information if many people have shared it.
False information can spread very quickly online. The number of shares or likes does not tell us whether something is true. We must check the source.
Secondary digital literacy requires students to engage critically with the systems behind technology — not just how to use it safely. At this stage, students can and should ask harder questions: Who controls this? Who benefits? Who is left out? In developing-world contexts, several issues are particularly urgent.
Students may have fewer opportunities to practise digital skills, which disadvantages them in further education and employment.
False job advertisements, pyramid schemes promoted on social media, and advance-fee fraud are serious threats in many communities. Mobile money fraud is widespread in parts of Africa and South and Southeast Asia.
False claims about vaccines, traditional medicine, and disease treatment spread rapidly and cause serious harm.
During elections, false information and coordinated manipulation of social media are documented in many countries. These are not hypothetical threats — they affect students' families and communities directly.
Many students will already interact with AI systems in translation apps, recommendation feeds, image filters, and exam-preparation tools. Teaching a realistic understanding of what AI can and cannot do, and where its biases come from, is increasingly essential. AI systems built primarily on data from wealthy, English-speaking countries may not work well for all languages, cultures, or contexts — and this is a justice issue, not just a technical one.
During the last election period in our community, a voice message spread on WhatsApp claiming that a candidate had been arrested for corruption and that voting for them would result in the whole community losing government support. The message used urgency pressure — it said to share immediately before it was deleted — and false authority, claiming it came from a senior official. To check it, I would search a trusted news website, listen to a national radio news programme, and contact the candidate's office directly. When speaking to someone who believes it, I would say: I heard this too and I wanted to check it before believing it. I could not find any trusted source that reported this. The technique of urgency — share before it is deleted — is a warning sign that someone does not want us to check. A true story does not disappear.
Award marks for: a specific and realistic example from the student's own context; correct identification of at least one technique with an explanation of how it works in this case; a realistic account of potential harm; at least two checking strategies, ideally including a non-internet option; a respectful and realistic response to the believer that addresses the concern underneath the false belief.
Deleting a post means it is gone forever.
Anyone can take a screenshot before you delete something. Companies also store data for long periods. Think carefully before you post — deletion is not always possible or complete.
Using private browsing or a VPN makes me completely anonymous.
Private browsing only stops your device from saving your history. Your internet provider, school network, and websites can still see what you do. A VPN adds some protection but is not total anonymity.
Artificial intelligence is neutral and has no bias.
AI systems are trained on data made by humans. If that data contains bias — about gender, race, or class — the AI learns that bias too. AI can make unfair decisions that affect real people.
Only careless people get scammed or hacked online.
Cybercriminals use advanced and convincing tricks. Many educated, careful people are successfully deceived. Staying safe requires knowledge and up-to-date habits, not just carefulness.
Key resources for teachers:
- The SIFT method (Stop, Investigate the source, Find better coverage, Trace claims to their original context) is documented at checkplease.cc and works well in low-connectivity contexts.
- First Draft (firstdraftnews.org) publishes free guides on misinformation in local contexts, including Africa, Asia, and Latin America.
- The Alliance for Affordable Internet (a4ai.org) tracks connectivity and access data by country.
- Mozilla Foundation digital literacy resources (foundation.mozilla.org) are free and available in multiple languages.
- For AI literacy, the AI4K12 initiative provides free, age-appropriate resources.
- For cybersecurity basics appropriate for low-resource contexts, the Electronic Frontier Foundation's Surveillance Self-Defense guide (ssd.eff.org) is free, practical, and available in many languages.
Your feedback helps other teachers and helps us improve TeachAnyClass.