Science and Society

How science and public life meet — what science is and is not, how we should use it in decisions, why trust in science matters, and what happens when science is ignored or misused.

Core Ideas
1 Science is how we learn about the world
2 It is good to ask questions and look for answers
3 Being willing to change your mind is smart, not weak
4 Scientists are ordinary people doing careful work
5 The world is full of things we do not yet understand
Background for Teachers

Young children are natural scientists. They ask why. They experiment with food, with toys, with their own bodies. They notice things adults have stopped noticing. At this age, the goal is to keep that curiosity alive and to build the basic attitudes that make science work — asking questions, looking carefully, being willing to change your mind when you learn something new. Do not introduce the scientific method formally. Do not teach them about policy. Focus on the joy of noticing and the habit of honest looking. Science as a civic topic at this age is simply: we find out about the world by asking, looking, and being willing to be surprised. No materials are needed, though if you have any simple objects to look at closely, children will engage.

Classroom Activities
Activity 1 — Asking why
Purpose: Children celebrate the habit of asking questions about the world.
How to run it: Ask: what are some things you have wondered about? Collect answers. Why is the sky blue? Why do my fingers go wrinkly in water? Where does the moon go in the day? Why do my feet get cold before my hands? Why do grown-ups like coffee? Why do leaves change colour? Celebrate every question. Tell the children: these are not silly questions. These are the kinds of questions scientists ask. Scientists are people who noticed something, wondered why, and tried to find out. Every answer we have about the world — about space, about our bodies, about animals, about weather — started because someone asked a question like yours.

Discuss: some questions have answers people have found. Some questions do not have answers yet — nobody has worked them out. Some questions cannot ever be answered by science (like 'what is the best colour?' — that is not really a question science can answer). Ask the children: what is a question you would like to find the answer to? Collect the wishes.

Finish with a simple idea: asking questions is the beginning of knowing. Never stop asking.
💡 Low-resource tip: Discussion only. No materials needed.
Activity 2 — What we see and what we think
Purpose: Children notice the difference between looking carefully and just assuming.
How to run it: Pick an object anyone can see — a leaf, a pen, your own hand. Ask the children to tell you everything they can see about it. At first they will say obvious things. Keep asking. Look again. What else? What shape? What colour, exactly? What does it feel like? What does it smell like? Is there something small you missed?

Discuss: most of us walk around noticing only a little of what is around us. Scientists practise looking more carefully. They look and they look and they look, and they notice things others have missed. Discuss: sometimes what we think we know is not what is really there. If I asked you how many windows are in our school building, you might give a number without looking. If you went and counted, you might find a different number. Looking and counting give us a real answer. Guessing gives us what we already thought. Science is mostly about not just trusting what we already think, but going and looking — or counting, or measuring, or testing.

Finish with a simple idea: careful looking is how we find out what is real. This is true in science. It is also true in life.
💡 Low-resource tip: Use any object at hand. No materials needed.
Activity 3 — Changing your mind is smart
Purpose: Children learn that updating beliefs based on new information is a strength, not a weakness.
How to run it: Tell a simple story. A child believed that all birds could fly. Then one day they went to the zoo and saw a penguin. Penguins are birds, but penguins cannot fly. At first the child was confused. They thought: maybe this is not really a bird. But then they learned: yes, it is a bird. Not all birds can fly. Their belief was too simple. They had to update it. Ask the children: was the child weak for changing their mind? No. They were learning. If they had kept saying 'all birds can fly' after seeing the penguin, that would have been silly.

Discuss: in science, and in life, we are always meeting things that make us change what we think. That is not bad. It is how knowing grows. The person who never changes their mind never learns anything new. Ask: has anyone here ever believed something and then learned they were wrong? Celebrate it. That is not a failure. That is a successful moment of learning.

Finish with a simple idea: it takes courage to say 'I used to think X, but now I know Y'. People who can do this are smart and brave. They are the kind of people who actually learn about the world.
💡 Low-resource tip: Tell the story verbally. No materials needed.
Discussion Questions
  • Q1: What is something you have always wondered about?
  • Q2: Is it okay not to know? What do you do when you do not know?
  • Q3: Have you ever changed your mind about something? What made you change it?
  • Q4: What is one thing you can look at closely today that you usually walk past?
  • Q5: Do you think anyone knows everything? Even grown-ups?
Writing Tasks
Drawing task
Draw a picture of something you are curious about. Write or say: I want to know ___________. One way I could find out is ___________.
Skills: Celebrating curiosity and the habit of investigation
Sentence completion
A good way to find out about something is ___________. Changing your mind when you learn something new is ___________.
Skills: Articulating investigation and open-mindedness
Common Misconceptions
Common misconception

Smart people always know the answer.

What to teach instead

Really smart people are usually the ones who say 'I do not know' the most. They say it because they know how much there is to find out. Someone who always thinks they know the answer usually has stopped looking. The smart thing — in science and in life — is to be happy to say 'I am not sure'. That is where new learning starts. A scientist who thinks they know everything is a bad scientist. A scientist who keeps asking is a good one.

Common misconception

Science is only something scientists do in special places with white coats.

What to teach instead

Science is something everyone can do. When you look at a snail and wonder how it moves, you are being a scientist. When you try a new way of kicking a ball and see if it works better, you are being a scientist. When you ask 'why?' and then try to find out, you are being a scientist. Real scientists in laboratories do careful, detailed work. But the way of thinking — being curious, looking carefully, testing, being willing to be surprised — is something every person can practise, starting from very young.

Core Ideas
1 What science is and how it works
2 Evidence — how we know what we know
3 Uncertainty and why it is honest
4 Science and public decisions
5 When science is trusted and when it is not
6 Disinformation and how to spot it
7 Science as a human activity
Background for Teachers

Science is the systematic study of the natural world through observation, experiment, and reasoning. It is both a body of knowledge (what we currently understand about physics, biology, chemistry, and other fields) and a way of finding things out (the scientific method, broadly understood). Neither is perfect; both have transformed human life.

Key features of scientific thinking

These include asking testable questions, gathering evidence, being willing to be wrong, and subjecting conclusions to challenge by others (peer review). Scientific knowledge grows not because scientists are always right, but because the process of challenge and testing weeds out mistakes over time. Individual scientists can be wrong. The scientific community, working over time, is remarkably good at converging on better understanding — though always with uncertainty.

Uncertainty is not a weakness of science but a feature. Good science is honest about what it knows and does not know. Probabilities, confidence intervals, and margins of error are ways of expressing honest uncertainty; a small worked sketch follows below. When the public hears 'scientists are not sure', this is sometimes taken to mean they do not know — when in fact expressing uncertainty is often a sign of careful, honest work.
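To make 'margins of error' concrete for teachers, here is a minimal sketch, assuming Python is available; the measurement values are invented for illustration. It computes a 95% confidence interval around the mean of repeated measurements, the standard way an estimate is reported together with its honest uncertainty.

```python
# Minimal sketch: expressing honest uncertainty as a confidence interval.
# The measurements below are hypothetical, for illustration only.
import math

measurements = [9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.2, 9.9]

n = len(measurements)
mean = sum(measurements) / n

# Sample variance (n - 1 denominator) and standard error of the mean.
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_err = math.sqrt(variance / n)

# 1.96 approximates the normal 95% interval; for a sample this small,
# a t-distribution value (about 2.36 for 7 degrees of freedom) would be stricter.
margin = 1.96 * std_err

print(f"Estimate: {mean:.2f} +/- {margin:.2f} (95% confidence interval)")
```

The point of the output format — an estimate plus or minus a margin — is that it communicates both what the data suggest and how far off the answer could plausibly be.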

Science and public decisions

Almost every significant public decision today involves scientific evidence — health policy, climate change, technology regulation, education, food safety, environmental protection. Science does not make these decisions alone; values, priorities, costs, and political realities also matter. But decisions made without scientific evidence are usually worse. Good policy uses science as one essential input, alongside democratic deliberation about what we want as a society.

Trust in science

Most people in most countries report significant trust in scientists — though with variations by country, topic, and political affiliation. Trust has been shaken by high-profile cases of scientific misconduct (rare), by commercial corruption (pharmaceutical companies hiding data, tobacco companies denying harm), by political interference in science, and by disinformation campaigns. The COVID-19 pandemic tested public trust in ways that are still being studied. Rebuilding trust requires transparency, acknowledgement of uncertainty, honest communication, and real independence from political and commercial pressure.

Disinformation

Disinformation about science has grown substantially. Climate denial, anti-vaccine movements, and many other anti-science narratives now reach large audiences through social media. Some is driven by ideology; some is commercially funded; some is foreign state disinformation. Teaching children to think critically about scientific claims — and to distinguish good evidence from bad — is now a basic civic skill.

Science as human

Scientists are people. They have biases, interests, and limitations. The strength of science is not that scientists are objective (they are not) but that the method subjects claims to challenge that reduces bias over time. The history of science shows both brilliant discoveries and shameful episodes — the Tuskegee experiments, eugenics, various research scandals. Acknowledging this history is part of a mature relationship with science, not an attack on it.

Teaching note

Treat science with respect but without reverence. Children should learn that science is a powerful and honest way of finding out about the world, while also learning that scientists are humans and the process has its failures. Beware of both 'science denial' (dismissing settled findings) and 'scientism' (treating science as able to answer every question, including moral and political ones it cannot answer). Good civic engagement with science combines respect for evidence with democratic deliberation about values.

Key Vocabulary
Science
The systematic study of the natural world through observation, experiment, and careful reasoning. Also the body of knowledge built up through this work over centuries.
Evidence
Information — from observation, measurement, or experiment — that supports or challenges a claim. Good science depends on gathering and testing evidence carefully.
Hypothesis
A possible explanation for something that can be tested. Good science involves forming hypotheses and then testing whether they are right or wrong.
Peer review
When other scientists carefully check the work of a researcher before it is published. Peer review helps catch mistakes and ensures work meets basic standards.
Uncertainty
The honest acknowledgement that we do not know something for certain, or that our answer could be a little higher or lower. Uncertainty in science is usually a sign of careful work, not of ignorance.
Scientific consensus
When the large majority of scientists working in an area agree on what the evidence shows. Consensus is not perfect but is usually the best guide to current understanding.
Disinformation
False information spread deliberately to mislead people. Science-related disinformation is widespread — on climate, vaccines, and many other topics.
Scientific method
The general way of working that most science uses — asking a question, forming a hypothesis, testing it through evidence, sharing results, and being willing to be wrong.
Classroom Activities
Activity 1 — How science actually works
Purpose: Students understand the basics of the scientific method without being misled about how neat it really is.
How to run it: Walk through what science actually involves. Someone notices something. Maybe an apple falls. Maybe a plant does not grow as expected. Maybe people in one village seem to get a certain illness more. They wonder why. This is the starting point of all science — curiosity about something that needs explaining. They form a possible answer. This is called a hypothesis. It is not a wild guess. It is a reasonable idea that could explain what they noticed and that they can test. They test. They do experiments, gather data, look carefully, measure. They find out whether their hypothesis fits the evidence or not. They share what they found. They write it up and other scientists check it. This is called peer review. If the work is good, it is accepted. If there are problems, they have to be fixed. Others try to confirm the finding. If many different scientists, in different places, get similar results, confidence grows. If the results cannot be repeated, confidence falls. (A short simulation sketch of this replication effect, for teachers, follows this activity.) Over time, this produces what we call scientific knowledge. Not perfect. Not final. But strong enough to trust for most practical purposes.

Discuss: this is how we know the age of the earth, how diseases spread, how atoms work, how medicines are tested, how climate change is measured, and much more. It is not magic. It is careful, slow, often boring work, done by thousands of people checking and correcting each other.

Discuss the realities. Scientists are people. They sometimes make mistakes. They sometimes cheat (rare but real). They sometimes miss things. Their findings can be affected by who funds them. But the system — many scientists checking each other, willing to prove each other wrong, publishing their work openly — catches most errors over time. Science works not because scientists are perfect, but because the system pushes for correction.

Ask: why does this matter for us as citizens? Because when politicians, companies, or social media say 'science says X', we need tools to check. Is it consistent with what most scientists in the area have found? Is it peer-reviewed? Is the research funded by people with a stake in the answer? Is it new and not yet widely tested, or is it strong consensus? Knowing how science works helps us use it wisely.

Finish with a point. Science is not a perfect machine for finding truth. It is a careful, imperfect, collective way of reducing error over time. That is enough to have built our understanding of the world. It is also why we should trust scientific consensus on well-studied questions — not blindly, but because the process has done its work.
💡 Low-resource tip: Teacher explains verbally. No materials needed.
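For teachers who want to make the replication point vivid, here is a minimal toy simulation, assuming Python is available (an optional extra, not part of the activity itself). The 'true effect' and noise level are invented; the point is only that a single noisy study can land far from the truth, while the average across many independent studies converges on it.

```python
# Toy model of replication: each "study" is one noisy, independent
# estimate of the same underlying quantity. All values are invented.
import random

random.seed(42)          # fixed seed so the demonstration is repeatable
TRUE_EFFECT = 2.0        # the hypothetical quantity being measured
NOISE = 1.5              # measurement error of any single study

def run_study():
    """Return one noisy, independent estimate of the true effect."""
    return TRUE_EFFECT + random.gauss(0, NOISE)

studies = [run_study() for _ in range(50)]

print(f"First study alone:     {studies[0]:+.2f}")
print(f"Average of 5 studies:  {sum(studies[:5]) / 5:+.2f}")
print(f"Average of 50 studies: {sum(studies) / 50:+.2f}")
print(f"True effect:           {TRUE_EFFECT:+.2f}")
```

Re-running with a different seed shows the same pattern: individual studies scatter widely, and confidence is earned through convergence, not through any single result.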
Activity 2 — Science in public decisions
Purpose: Students understand how science and public choices connect, and where they do not.
How to run it: Ask: what kinds of decisions should use scientific evidence? Collect answers. Work through examples. Health. Should children be vaccinated? How should food safety rules work? What medicines should be allowed? These need scientific evidence. A law that allowed poisonous food because officials thought it was safe would harm people. Environment and climate. How serious is climate change? What actions would reduce it most? What risks come from pollution? These need scientific evidence. A policy that ignored the physics of the atmosphere would fail. Technology. How should new technologies be tested before use? What risks come from AI? How should data be protected? These need scientific evidence, though decisions also need values. Education. How do children learn best? What teaching methods work? These benefit from scientific evidence, though much depends on context.

Now walk through the harder half. Science tells us what is the case — what is likely to happen, how things work. It does not tell us what should be. Values, priorities, and judgements matter too. An example. Science can tell us that burning coal causes pollution and climate change. It cannot tell us how much economic disruption we should accept to stop it. That is a question of values — how heavily we weigh future generations against present ones, rich against poor, human welfare against nature's health, speed of change against worker transitions. These are not questions science can answer. Science can tell us that a certain policy will probably reduce deaths from a disease by a specific amount. It cannot tell us whether the cost is worth it, or whether the specific policy is fair, or whether people should be required to follow it. These are political and moral questions.

Discuss: this means good public decisions need both. Without science, we decide in ignorance of how the world actually works. Without democratic deliberation about values, we hand decisions to experts who cannot tell us what is good — only what is possible.

Discuss mistakes. When science is ignored: decisions made without evidence often fail. COVID-19 responses in some countries that ignored scientific advice early produced worse outcomes than those that listened. Climate policies that denied physics have produced worse outcomes over decades. Health policies that ignored evidence (banning vaccines, promoting fake cures) have killed people. When science is treated as answering value questions: experts, however skilled, do not automatically know what is good. Pretending scientific findings settle value questions is a form of abdication — letting experts decide things only citizens can properly decide.

Finish with a point. A thoughtful citizen takes science seriously without treating it as an oracle. Ask: does our decision use the best available evidence? Are we clear about what science tells us and what it cannot? Are the value questions being deliberated properly by citizens and their representatives? Where these questions are answered well, societies tend to make better decisions. Where they are not, mistakes compound.
💡 Low-resource tip: Discussion only. Use examples relevant to students. No materials needed.
Activity 3 — Spotting bad information about science
Purpose: Students learn practical skills for telling honest science from disinformation.
How to run it: Start with a problem. The internet is full of claims about science. Some are true. Many are not. Some are honest mistakes. Some are deliberate lies. Learning to tell the difference is one of the most useful civic skills today.

Walk through warning signs of bad science information. Claims that 'scientists have discovered' something revolutionary. Real breakthroughs happen, but most claims of this kind turn out to be overblown news stories about tentative findings. Dramatic language. Good science is usually careful and cautious. 'Proof', 'cure', 'completely safe', 'absolutely certain' — these words are often a warning, not a guarantee. Attacks on 'the scientific establishment' or 'mainstream science'. Sometimes mainstream science is wrong. But nearly always, when someone attacks the whole scientific mainstream on a well-studied topic (evolution, climate change, vaccine safety, etc.), they are wrong. Claims that one study proves something. Single studies are usually just the start. Real confidence builds as many studies find similar things. A headline that says 'new study shows X' is almost never the whole story. Appeals to unnamed experts. 'Scientists say...', 'doctors believe...', 'researchers warn...' without naming who, where, or in what study. Real science can be traced to real researchers publishing real work. Commercial interest. If a claim benefits someone selling something, look closely. Tobacco companies denied that smoking causes cancer; fossil fuel companies denied climate change; supplement sellers promote unproven cures. Financial interest does not make a claim false, but it should raise suspicion. Cherry-picked data. Real science considers the whole picture. When someone shows you only one graph, one study, one example — they may be hiding a larger pattern. Unfalsifiable claims. Claims that cannot be tested, or that the believer insists cannot be shown wrong, are not science.

Walk through positive signs of honest science. Published in peer-reviewed journals. Acknowledges uncertainty. Named researchers at identifiable institutions. Consistent with other work in the area. Explained in careful, qualified language. Data available for others to check. Willing to say 'we do not know' where it does not know.

Discuss common disinformation targets. Climate change. Scientific consensus is strong: human activity is warming the planet. A small number of contrarian scientists, often funded by fossil fuel interests, claim otherwise. They are wrong — but they are given disproportionate attention in some media. Vaccines. Scientific consensus is strong: vaccines are safe and effective. The 1998 Wakefield paper linking MMR to autism was fraudulent and retracted. Yet anti-vaccine claims continue, amplified by social media. Evolution. Scientific consensus is total: species evolve over time. Some religious and political groups deny this; the scientific case is overwhelming. These are not cases of genuine scientific debate. They are cases of disinformation, in which scientific consensus is challenged by ideology or commercial interest.

Finish: knowing how to tell honest science from disinformation does not require being a scientist. It requires a few simple habits. Check the source. Look for peer review. Look for scientific consensus. Be wary of dramatic claims, unnamed experts, and commercial funding. Trust care over confidence. Real science is rarely about certainty; it is about reducing error over time.
💡 Low-resource tip: Discussion only. Use current examples relevant to students. No materials needed.
Discussion Questions
  • Q1: Is there a scientific question you are curious about?
  • Q2: Why do you think some people do not trust scientists?
  • Q3: Should politicians always do what scientists say? Why or why not?
  • Q4: What is the difference between science telling us what is true and telling us what is right?
  • Q5: How do you tell a trustworthy source of science information from an untrustworthy one?
  • Q6: What is one scientific question that you think your country's laws should pay more attention to?
Writing Tasks
Task 1 — Explain and give an example
Explain what 'scientific consensus' means and give ONE example of an area where there is strong consensus. Write 4 to 6 sentences.
Skills: Defining a key concept and grounding it in a real case
Task 2 — Persuasive writing
Write a short piece (4 to 6 sentences) arguing that public decisions should be informed by science but not decided by science alone — and explain why.
Skills: Persuasive writing on the proper role of evidence in democracy
Common Misconceptions
Common misconception

If scientists disagree about something, science cannot tell us the answer.

What to teach instead

Some disagreement among scientists is normal and healthy. What matters is whether the disagreement is about the edges of a question or about its core. On most major scientific questions (climate change, evolution, the age of the earth, vaccine safety, the basic workings of genetics), there is strong agreement among scientists about the core, with ongoing research on details. Pointing to the details to claim 'scientists disagree' misrepresents this. Real scientific controversy is different from manufactured controversy where a small, often commercially backed minority is portrayed as equivalent to a vast majority. Understanding the difference is a key civic skill.

Common misconception

Scientific theories are just guesses — they are not facts.

What to teach instead

The word 'theory' is often misunderstood. In everyday speech, 'theory' can mean a guess or hunch. In science, a theory is an explanation supported by a large body of evidence, consistent with many observations, and able to make reliable predictions. The theory of evolution, the theory of gravity, the germ theory of disease — these are not guesses. They are among the most well-supported understandings in human knowledge. 'It's just a theory' is usually based on confusion about the word. Well-established scientific theories are the strongest form of understanding we have about the natural world.

Common misconception

Scientists are always objective and free from personal bias.

What to teach instead

Scientists are humans. They have biases, interests, and viewpoints like everyone else. This is not a problem for science itself — science works precisely because it does not rely on individual objectivity. The method subjects claims to peer review, replication, and challenge from other scientists, which reduces the impact of individual bias over time. Individual studies can be biased; the body of scientific knowledge, built up through many studies and many researchers checking each other, is much harder to bias systematically. Pretending scientists are perfectly objective is not accurate. But dismissing science because scientists are human misses how the method is designed to work despite this.

Core Ideas
1 The philosophy and method of science
2 Science and values — where they meet and where they don't
3 Scientific consensus and how it forms
4 The history of science and society — conflicts and progress
5 Scientific misconduct and commercial corruption
6 Disinformation and science denial
7 Trust in science and how to maintain it
8 Science in policy and democracy
Background for Teachers

The relationship between science and society is one of the most important civic questions of our time. Teaching it well requires honest treatment of both science's strengths and its complications.

Philosophy and method

Science is not a single thing with one method. Physics, biology, psychology, sociology, and other sciences use different specific methods. But common features include systematic observation, testable claims, willingness to revise in light of evidence, and collective verification through publication and challenge. Karl Popper's emphasis on falsifiability — that scientific claims must be potentially disprovable — remains influential. Thomas Kuhn's 'The Structure of Scientific Revolutions' (1962) showed that science develops in part through 'paradigm shifts' where entire frameworks change. Both perspectives capture important truths about how science actually works.

Science and values

A foundational distinction in philosophy of science is between 'is' and 'ought'. Science can tell us what is the case — how the world works, what is likely to happen, how different factors interact. It cannot tell us what ought to be — what goals we should pursue, what trade-offs are worthwhile, what is fair. When science is treated as answering value questions, or when values are used to override evidence, problems result. Good public engagement with science requires honest separation.

Scientific consensus

How does consensus form? Through many studies, by many researchers, in many places, producing consistent findings that resist attempts to disconfirm them. Consensus is not majority vote; it is convergence of evidence. Strong consensus exists on the age of the earth (about 4.5 billion years), evolution, the germ theory of disease, vaccine safety and effectiveness, and human-caused climate change. These are not contested within the relevant scientific communities. What is sometimes called 'scientific controversy' is often not controversy within the field but manufactured controversy from outside it.

Conflicts and progress

The history of science and society has been complicated. Science has produced enormous benefit — medicine, agriculture, communication, basic understanding of the universe. It has also been misused. Eugenics was 'science' to many early 20th-century researchers. Medical experiments on vulnerable populations without consent — the Tuskegee syphilis study in the US (1932-1972), Nazi experiments, various colonial contexts — damaged trust that has not fully recovered, particularly in affected communities. Scientific racism attempted to justify colonialism and slavery. Acknowledging this history is not an attack on science; it is part of a mature relationship with it.

Misconduct and corruption

Scientific misconduct is rare but real. Andrew Wakefield's 1998 MMR-autism paper was fraudulent; it was retracted in 2010 but has caused ongoing harm. Corporate corruption has been more systematic. Tobacco companies spent decades denying evidence of smoking harm. Fossil fuel companies knew about climate change in the 1970s and 1980s but funded public denial. Pharmaceutical companies have hidden unfavourable data. These are not failures of science itself but failures of institutions that should have provided oversight.

Disinformation and science denial

Climate denial, anti-vaccine movements, creationism in some contexts, and newer forms of disinformation have reached substantial audiences. Naomi Oreskes and Erik Conway's 'Merchants of Doubt' (2010) documented the deliberate construction of doubt about settled science. Social media has amplified these movements. Some receive significant funding from interested parties. Some are driven by genuine distrust of institutions. Addressing them requires both countering specific claims and addressing underlying conditions that produce distrust.

Trust in science

Major surveys show that most people in most countries report significant trust in scientists, often higher than trust in many other institutions. But trust varies by topic (scientists are more trusted on, say, basic physics than on contested social issues), by country, and by political affiliation. The Pew Research Center, Wellcome Trust, and others produce regular surveys. In the US, there has been growing politicisation of trust in science in recent decades. In other countries, trust has remained relatively stable. Rebuilding trust requires transparency about uncertainty, genuine independence from political and commercial pressure, engagement with concerns rather than dismissal of concerned groups, and acknowledgement of science's own limits and past failures.

Science in policy

Good science-policy relationships involve clear communication of evidence and uncertainty, democratic deliberation about values, and meaningful expert input to decisions. They are undermined by political interference with scientific advice (removing or reframing evidence for political reasons) and by treating political questions as scientific ones. The COVID-19 pandemic tested these relationships globally. Some countries managed science-policy relationships well; others failed. Lessons are still being drawn.

Contemporary challenges

Synthetic biology, artificial intelligence, climate engineering, and other emerging technologies raise questions that societies have barely begun to address. These need careful scientific research, honest public communication, and democratic deliberation. Treating them as too technical for citizens — or too political for scientists — leads to worse outcomes in both directions.

Teaching note

Treat science with respect but without reverence. Present its successes honestly and its failures honestly. Help students develop the tools to engage with science as citizens, not as passive recipients of expert wisdom. Avoid both 'science denial' (dismissing settled findings) and 'scientism' (treating science as able to answer every question, including moral and political ones).

Key Vocabulary
Scientific method
The general approach to generating and testing knowledge about the natural world — involving observation, hypothesis, experiment, and revision. Different scientific fields apply it differently.
Falsifiability
Karl Popper's criterion that scientific claims must be potentially disprovable by evidence. Claims that cannot be disproved, no matter what, are not scientific.
Peer review
The process by which scientists evaluate each other's work before publication. Not perfect — it can miss errors and can be slow — but generally improves research quality.
Replication
Repeating a study to see if the results hold up. A single study is rarely conclusive; replicated findings are much stronger. The 'replication crisis' in some fields has highlighted problems with poor practice.
Scientific consensus
Agreement among scientists who study an area, built up through evidence over time. Not unanimity, but clear convergence. On major well-studied questions (climate, evolution, vaccines) consensus is very strong.
Paradigm
Thomas Kuhn's term for a broad framework guiding scientific work in an area. Paradigms occasionally shift — major scientific revolutions — when accumulated problems cannot be resolved within the current framework.
Scientism
The view that scientific methods can answer all meaningful questions, including moral and political ones. Widely criticised as confusing what science can and cannot do.
Manufactured doubt
The deliberate creation of public uncertainty about scientific findings, particularly when the findings threaten commercial or political interests. Documented in tobacco, fossil fuels, and other industries.
Science communication
The work of explaining science to non-specialists — journalists, educators, public information officers, and scientists themselves. Essential for a functioning democracy, though often done poorly.
Precautionary principle
The idea that, where serious harm is possible but uncertain, caution should be exercised. Applied in environmental and health policy. Contested — some argue it blocks beneficial innovation.
Classroom Activities
Activity 1 — Manufactured doubt: the tobacco playbook
Purpose: Students understand how commercial interests have deliberately created public doubt about settled science.
How to run it: Begin with the case. In the 1950s, scientific evidence that smoking caused cancer became overwhelming. Studies across many populations, different methods, different researchers — all showing the same thing. By the 1960s, the evidence was undeniable to anyone examining it honestly. The tobacco industry had a problem. Their product was shown to kill customers. What did they do? They could have acknowledged the evidence and worked to reduce harm. Instead, they launched one of the most sophisticated disinformation campaigns in history.

Walk through what they did. They funded their own research — not to find answers, but to produce doubt. If they could find a researcher willing to say 'more research is needed' or 'the evidence is not conclusive', they would publicise it. Science is always uncertain; they exploited this honest uncertainty to claim the whole picture was uncertain. They hired public relations firms. The Hill & Knowlton memo of 1953, following a meeting with tobacco executives, is famous for laying out a strategy of 'creating doubt' about the health case against tobacco. They funded front groups that appeared independent but were industry-controlled. These groups published reports, testified before legislators, and placed op-eds in major newspapers. They attacked specific scientists whose research was damaging to their case. Scientists who published strong findings against tobacco faced smear campaigns and legal pressure. They lobbied politically. They funded politicians and political campaigns, hoping to delay regulations and warnings. They targeted specific markets. When regulation increased in wealthy countries, they focused marketing on developing countries where oversight was weaker.

The result. For decades, official recognition that smoking causes cancer was delayed. Warnings on packets took decades to appear. Public understanding was muddled by apparent 'controversy' where none really existed among independent scientists. Millions died prematurely. Tobacco companies have since paid enormous legal settlements — the 1998 Master Settlement Agreement in the US for $206 billion, plus many others — based on their knowledge of harms they publicly denied.

Discuss the lessons. 'Merchants of Doubt' (Naomi Oreskes and Erik Conway, 2010) documents that the same strategies — and sometimes the same people — were used against scientific findings on acid rain, the ozone layer, climate change, and pesticides. The tobacco 'playbook' has been followed industry by industry. Understanding it helps explain what looks like 'debate' about settled science. The usual signs: well-funded organisations challenging mainstream findings; emphasis on uncertainty rather than evidence; attacks on specific scientists; political lobbying against regulation; advertising campaigns to shape public opinion. When these signs appear, it is worth asking who is funding the challenge and what they stand to gain.

Discuss how to recognise it today. Climate change denial follows the tobacco playbook closely. Fossil fuel companies knew about climate change in the 1970s and 1980s (internal documents have since been released) but funded public denial for decades. Exxon's own scientists warned about climate change internally while the company funded deniers publicly. Similar patterns have appeared around specific chemicals, drugs, and other industry-challenged findings.

Ask: what does this mean for engaged citizens? Recognising that organised 'science denial' campaigns exist is part of critical thinking about information. Where major findings are challenged, it is worth asking: are the challenges from independent researchers, or from industry-funded sources? Are they peer-reviewed, or in friendly publications? Do they follow the tobacco playbook? Treating manufactured doubt as equivalent to genuine scientific debate is itself a sign that the strategy has worked.

Finish with a point. Science is not corrupted by the existence of manufactured doubt — the scientific community has largely ignored these campaigns and continued to converge on evidence-based findings. The corruption is of public understanding. Citizens who know about these strategies are better equipped to see past them and support policies based on real evidence.
💡 Low-resource tip: Teacher presents history verbally. Students discuss. No materials needed.
Activity 2 — Science and values: what science cannot decide
Purpose: Students understand the crucial distinction between empirical and evaluative questions.
How to run it: Begin with the philosophical distinction. Science can tell us what is the case — how things are, what will happen, how things interact. Science cannot, by itself, tell us what should be — what we should do, what matters most, what is just. This distinction, sometimes called the 'is-ought gap' after David Hume, is foundational. It matters enormously for how science and public life interact.

Walk through concrete examples. Example 1: Climate change. Science can tell us that burning fossil fuels is warming the planet. It can tell us that warming will cause specific impacts — sea level rise, extreme weather, ecosystem changes. It can tell us that certain policies (carbon pricing, renewable energy, reduced consumption) would reduce emissions by measurable amounts. These are empirical questions. Science cannot tell us how much economic disruption we should accept to reduce emissions. It cannot tell us how to trade off future welfare against present welfare, rich against poor, human interests against ecosystem interests. It cannot tell us whether climate action should be through markets or regulation, whether nuclear power should be part of the solution, or who should pay for transition. These are value questions requiring democratic deliberation.

Example 2: Pandemic response. Science can tell us how COVID-19 spreads, what interventions (masks, distancing, vaccines) reduce transmission by specific amounts, and what impacts different policies have had. Science cannot tell us how much restriction of freedom is justified by given health benefits, how to balance mental health harms of lockdowns against infection reduction, whether children's education should be disrupted, or how to handle individual risk preferences. These are value questions.

Example 3: Genetic technology. Science can tell us what is possible — editing human embryos, making new organisms, extending lifespan. Science cannot tell us what is right — whether we should edit embryos for non-medical traits, whether to create novel organisms that might escape, how to handle life extension that not everyone can afford. These require moral deliberation.

Discuss the failure modes. Failure mode 1: scientism. Treating science as able to answer value questions. If scientists say we should ban a technology, doing so automatically without democratic debate. If medical experts say a public health measure is 'the science', ending discussion. This fails because scientists, however skilled, do not automatically know what is right. Expertise in facts is not expertise in values. Failure mode 2: value-based denial of evidence. Rejecting scientific evidence because its implications are inconvenient to your values. 'I don't like what climate science implies, therefore it must be wrong.' 'Vaccines conflict with my preferred view of autonomy, therefore they must be unsafe.' This fails because wishing does not change reality. Evidence is not determined by preference. Both failures are common in public debate. Good engagement avoids both.

Discuss the proper relationship. Science tells us what is. Democratic deliberation decides what we should do about it, given our values and trade-offs. Neither can replace the other. Policy without science is ignorant; science without democracy is technocratic. Both are failure modes.

Walk through a careful example. Consider climate policy. Scientists tell us: human activity is warming the planet at approximately 0.2°C per decade, with serious projected impacts (a one-line worked projection follows this activity). Democracy must decide: what reduction target? How fast? How to distribute costs? Through what mechanisms? Each is a value choice. Different countries, different communities, different voters may reach different conclusions, all consistent with the science. This is not a failure of science — it is science doing its job and democracy doing its job. Where this breaks down, problems result. Countries that deny the science cannot make informed policy. Countries that treat the policy as purely technical (handled by scientists) undermine democratic legitimacy. Countries that separate the two honestly — clear science, open democratic debate — make better decisions.

Ask students: what examples do they know where science is being confused with values, in either direction? Can they see manufactured science denial (tobacco, climate)? Can they see 'science says X must be done' framings that hide value choices? Recognising both helps.

Finish with a point. The 'is-ought gap' is not an obscure philosophical point. It is a practical civic skill. Citizens who can distinguish empirical from evaluative questions — and who insist that both be treated properly — make better democracies. Science is a powerful tool for knowing what is. It does not replace the harder democratic work of deciding what we should do together.
💡 Low-resource tip: Teacher presents examples verbally. Students discuss. No materials needed.
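For teachers who want the arithmetic behind the rate quoted in this activity, the projection is one line of multiplication. It assumes the recent rate simply continues, which is itself a modelling choice rather than a certainty:

```latex
% Illustrative extrapolation only: assumes the observed rate holds constant.
\[
  0.2\ ^{\circ}\mathrm{C}/\text{decade} \times 5\ \text{decades}
  \approx 1.0\ ^{\circ}\mathrm{C} \text{ of additional warming over 50 years.}
\]
```

Even this trivial calculation splits cleanly into an empirical input (the measured rate) and the value questions that follow from it (what target, how fast, who pays).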
Activity 3 — Trust in science: honest history, honest response
Purpose: Students engage with the real history of science-society relationships and what maintains or damages trust.
How to run it: Begin honestly. Scientists and scientific institutions have sometimes earned distrust. Acknowledging this is necessary for mature engagement.

Walk through some of the real history. Eugenics. In the early 20th century, many respected scientists advocated eugenics — selective breeding of humans. This led to forced sterilisations in the US and Scandinavia (tens of thousands of people, disproportionately minority and poor), to Nazi policies that killed hundreds of thousands through 'racial hygiene' programmes, and to other abuses. Eugenics was then mainstream scientific opinion. It was deeply wrong. Scientific racism. 19th and early 20th century 'race science' claimed to show racial hierarchies. This scientific racism was used to justify colonialism, slavery, and exclusion. The findings were based on bad methods, cultural bias, and motivated reasoning. They were not corrected until much later. Tuskegee. The US Public Health Service study from 1932-1972 deliberately left Black men with syphilis untreated to study the disease's progression — even after effective treatment became available. Some 600 men were enrolled, about 400 of them with syphilis; they and their families were affected. When the study was exposed, it produced enduring distrust of medical research in Black American communities. Colonial medical experiments. Experiments on populations in colonised countries, often without consent or with coerced consent, were widespread through the 20th century. Military experiments. Radiation experiments on citizens during the Cold War (US, UK, others). LSD experiments on subjects who did not know they had been dosed. Pharmaceutical misconduct. Repeated cases of companies hiding unfavourable data about drugs they sold — thalidomide, Vioxx, opioids, and others. People have died as a result. Scientific misconduct. The Wakefield MMR paper. Various high-profile fraud cases. These are rare in proportion to legitimate science but real.

Discuss what this history means. It means trust in science cannot be demanded. It has to be earned. Communities that have been specifically harmed — particularly Black Americans, Indigenous peoples, and others who have been experimented on without consent — have reason to be cautious. Dismissing these reasons as 'science denial' is historically illiterate. It also means that 'trust the science' as a slogan is inadequate. Science at its best earns trust through transparency, honesty about uncertainty, willingness to admit errors, and accountability when mistakes occur. Trust based on authority alone is fragile.

Discuss what supports trust. Several factors help. Transparent methods: showing how findings were reached, not just reporting conclusions. Open data: where possible, making data available for others to check. Honest uncertainty: saying what we do not know, not claiming more confidence than evidence supports. Independence: freedom from political and commercial pressure — or at least transparency about conflicts of interest. Engagement with concerns: addressing real questions from affected communities rather than dismissing them. Acknowledgement of past wrongs: taking history seriously rather than pretending past abuses did not happen. Accountability: when mistakes or misconduct occur, real consequences — not quiet corrections.

Discuss current challenges. The COVID-19 pandemic tested public trust in ways that are still being studied. Public health communications were sometimes inconsistent. Uncertainty was sometimes presented as certainty, and vice versa. Political interference occurred in several countries. Vaccine mandates raised legitimate questions that were sometimes handled badly. All of this fed existing distrust in some communities. Conversely, when scientists were honest about uncertainty, engaged with concerns, and acknowledged mistakes, trust often held up. Anti-science movements — climate denial, anti-vaccine, and others — are real but often smaller than they appear in media. Most people in most countries continue to report substantial trust in scientists. Rebuilding trust where it has been damaged is ongoing work.

Ask students: what would make them more trusting of scientific claims? What makes them less trusting? How should scientists communicate with the public?

Finish with a point. Trust in science is earned, not demanded. It is built through honesty, transparency, independence, engagement with concerns, and accountability for failures. Students who will be future scientists, journalists, policymakers, and citizens should understand that trust is a relationship requiring work from both sides. The alternative — either blind trust or reflexive distrust — serves no one well.
💡 Low-resource tip: Teacher presents history verbally. Students discuss. Handle sensitively in contexts affected by specific abuses. No materials needed.
Discussion Questions
  • Q1: Karl Popper argued that scientific claims must be falsifiable — they must be able to be disproved. Does this mean science can never give us certain knowledge, and if so, is that a problem?
  • Q2: The tobacco industry's 'manufactured doubt' strategy has been used against many other scientific findings. Why does it work so well, and how can public understanding be defended against it?
  • Q3: COVID-19 tested relationships between science, policy, and public trust. What worked well, what failed, and what should be done differently for the next public health emergency?
  • Q4: Indigenous communities and others with histories of being experimented on often express caution about participating in medical research. Is this distrust justified, and how should scientific institutions respond?
  • Q5: Some argue that 'scientism' — treating science as able to answer all questions — is as dangerous as science denial. Do you agree, and where is the line between respecting science and over-extending it?
  • Q6: New technologies (AI, genetic engineering, climate intervention) are moving faster than democratic deliberation about them. How can societies combine expert understanding with citizen input in ways that make good decisions?
  • Q7: Should scientists be more willing or less willing to speak out on policy questions in their areas? What are the risks of either choice?
Writing Tasks
Task 1 — Extended essay
'The most important skill for citizens in the 21st century is the ability to tell good scientific evidence from bad.' To what extent do you agree? Write 400 to 600 words.
Skills: Thesis-driven argument engaging with science-literacy as civic skill
Task 2 — Analytical response
Explain what is meant by the 'is-ought gap' and why it matters for how science is used in public decisions. Write 200 to 300 words.
Skills: Analytical treatment of a foundational philosophical distinction
Common Misconceptions
Common misconception

Science proceeds by pure logic and objectivity — personal biases do not affect it.

What to teach instead

Scientists are humans with biases, interests, and viewpoints. Individual studies can be shaped by these. What makes science relatively reliable over time is not individual objectivity but the collective process — peer review, replication, and challenge that reduce the impact of individual bias. Philosophers of science (Kuhn, Feyerabend, and others) have documented how science is a human activity, shaped by social and cultural contexts. Historians have documented how scientific findings have sometimes been wrong in ways that reflected the biases of their time (eugenics, scientific racism, and more). This does not make science useless or unreliable — the self-correcting features of the method have eventually overturned these errors. But it does mean the image of science as pure, objective, and individually neutral is a myth. A more honest picture is of science as a powerful but fallible human collective activity, whose strength lies in its methods of collective correction rather than in individual perfection.

Common misconception

If scientists disagree, there is no scientific consensus.

What to teach instead

Some disagreement among scientists is normal and healthy — it is how science progresses. Consensus does not require unanimity. What matters is the distribution and quality of disagreement. On most major scientific questions (climate change, evolution, vaccine safety, age of the earth), strong consensus exists among scientists working in the relevant field, with only a small number of dissenters, often with weaker credentials or conflicts of interest. Pointing to these dissenters to claim 'scientists disagree' misrepresents the real situation. This tactic, sometimes called 'false balance', has been systematically used in manufactured doubt campaigns. Recognising the difference between genuine scientific controversy (live debates among well-qualified researchers with good evidence on multiple sides) and manufactured controversy (a small industry-aligned minority amplified by media and disinformation) is a key civic skill.

Common misconception

Science can answer any question if we just do enough research.

What to teach instead

Science is powerful but limited. It is extraordinarily good at answering empirical questions — what is the case, how things work, what will happen under certain conditions. It cannot, by itself, answer questions about values, meaning, or morality. What should we do? What is right? What makes a good life? Science can inform these questions but cannot answer them. Religion, philosophy, art, and everyday moral reflection address questions science cannot reach. Treating science as able to answer every meaningful question is called scientism. Scientism tends to produce bad decisions in areas science cannot settle, and to push into areas where other forms of reflection matter. Respecting what science can do — a great deal — does not require pretending it can do everything.

Common misconception

Public trust in science is declining, so faith in expertise is collapsing.

What to teach instead

Data on public trust in science is more complex than common narratives suggest. Major surveys (Pew, Wellcome Global Monitor, Eurobarometer) show that in most countries, most people report substantial trust in scientists — often higher than trust in most other institutions. Trust has declined in some countries and on some topics, particularly where politicisation is acute (US on climate, for example). But global trust in science remains relatively high. The perception of collapsing trust is often driven by vocal minorities who are amplified in media attention out of proportion to their numbers.

That said, trust varies substantially by topic (higher on basic physics than on contested applications), by country, by political affiliation, and by community (historically harmed communities having lower trust for understandable reasons). Maintaining trust requires ongoing work — transparency, honesty about uncertainty, acknowledgement of past wrongs, engagement with concerns. Panic about collapsing trust, while based on real concerns in some contexts, can also obscure the substantial trust that continues to exist.

Further Information

Key texts for students: Karl Popper, 'Conjectures and Refutations' (1963), on falsifiability and scientific method; Thomas Kuhn, 'The Structure of Scientific Revolutions' (1962), on paradigm shifts; Naomi Oreskes and Erik M. Conway, 'Merchants of Doubt' (2010), on manufactured doubt campaigns from tobacco to climate; Ben Goldacre, 'Bad Science' (2008) and 'Bad Pharma' (2012), accessible critiques of science misuse; Naomi Oreskes, 'Why Trust Science?' (2019), a philosophical defence with honest history; David Wootton, 'The Invention of Science' (2015), a history of the scientific revolution; Adam Rutherford, 'How to Argue with a Racist' (2020), on scientific racism and its rejection.

For specific topics: James Lovelock and others on planetary science; Robert Proctor, 'Golden Holocaust' (2011), on tobacco; Rebecca Skloot, 'The Immortal Life of Henrietta Lacks' (2010), on ethics in research.

For data: the Wellcome Global Monitor on global attitudes to science; the Pew Research Center; the Eurobarometer surveys on science and technology.

For contemporary science communication: the Science Media Centre (UK); the AAAS (US); the CSES on public engagement.

For disinformation: the First Draft Coalition; the Reuters Institute Digital News Report.

For philosophy of science: the Stanford Encyclopedia of Philosophy (free online).

For science policy: Daniel Sarewitz and others at the Consortium for Science, Policy and Outcomes.