How science and public life meet — what science is and is not, how we should use it in decisions, why trust in science matters, and what happens when science is ignored or misused.
Young children are natural scientists. They ask why. They experiment with food, with toys, with their own bodies. They notice things adults have stopped noticing. At this age, the goal is to keep that curiosity alive and to build the basic attitudes that make science work — asking questions, looking carefully, being willing to change your mind when you learn something new. Do not introduce the scientific method formally, and do not teach them about policy. Focus on the joy of noticing and the habit of honest looking. Science as a civic topic at this age is simply: we find out about the world by asking, looking, and being willing to be surprised. No materials are needed, though if you have any simple objects to look at closely, children will engage.
Smart people always know the answer.
Really smart people are usually the ones who say 'I do not know' the most. They say it because they know how much there is to find out. Someone who always thinks they know the answer usually has stopped looking. The smart thing — in science and in life — is to be happy to say 'I am not sure'. That is where new learning starts. A scientist who thinks they know everything is a bad scientist. A scientist who keeps asking is a good one.
Science is only something scientists do in special places with white coats.
Science is something everyone can do. When you look at a snail and wonder how it moves, you are being a scientist. When you try a new way of kicking a ball and see if it works better, you are being a scientist. When you ask 'why?' and then try to find out, you are being a scientist. Real scientists in laboratories do careful, detailed work. But the way of thinking — being curious, looking carefully, testing, being willing to be surprised — is something every person can practise, starting from very young.
Science is the systematic study of the natural world through observation, experiment, and reasoning. It is both a body of knowledge (what we currently understand about physics, biology, chemistry, and other fields) and a way of finding things out (the scientific method, broadly understood).
Both have transformed human life.
Science works by asking testable questions, gathering evidence, being willing to be wrong, and subjecting conclusions to challenge by others (peer review). Scientific knowledge grows not because scientists are always right, but because the process of challenge and testing weeds out mistakes over time. Individual scientists can be wrong. The scientific community, working over time, is remarkably good at converging on better understanding — though always with uncertainty. Uncertainty is not a weakness of science but a feature. Good science is honest about what it knows and does not know. Probabilities, confidence intervals, and margins of error are ways of expressing honest uncertainty. When the public hears 'scientists are not sure', this is sometimes taken to mean they do not know — when in fact expressing uncertainty is often a sign of careful, honest work.
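For teachers who want a concrete illustration of how a margin of error expresses honest uncertainty, here is a minimal sketch. The poll numbers are invented for illustration, and the standard 1.96 multiplier for a roughly 95% confidence level is assumed:

```python
import math

# Illustrative (invented) example: a poll in which 520 of 1,000 people
# answer "yes". The margin of error expresses honest uncertainty about
# the true proportion in the whole population, not just the sample.
yes = 520
n = 1000

p = yes / n                                 # sample proportion: 0.52
standard_error = math.sqrt(p * (1 - p) / n)
margin = 1.96 * standard_error              # ~95% confidence level

low, high = p - margin, p + margin
print(f"Estimate: {p:.1%}, 95% interval: {low:.1%} to {high:.1%}")
# prints: Estimate: 52.0%, 95% interval: 48.9% to 55.1%
```

The interval says: if the poll were repeated many times, about 95% of such intervals would contain the true value. Reporting the interval, not just the 52%, is exactly the kind of honest uncertainty described above.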
Almost every significant public decision today involves scientific evidence — health policy, climate change, technology regulation, education, food safety, environmental protection. Science does not make these decisions alone; values, priorities, costs, and political realities also matter. But decisions made without scientific evidence are usually worse. Good policy uses science as one essential input, alongside democratic deliberation about what we want as a society.
Most people in most countries report significant trust in scientists — though with variations by country, topic, and political affiliation. Trust has been shaken by high-profile cases of scientific misconduct (rare), by commercial corruption (pharmaceutical companies hiding data, tobacco companies denying harm), by political interference in science, and by disinformation campaigns. The COVID-19 pandemic tested public trust in ways that are still being studied. Rebuilding trust requires transparency, acknowledgement of uncertainty, honest communication, and real independence from political and commercial pressure. Disinformation about science has grown substantially. Climate denial, anti-vaccine movements, and many other anti-science narratives now reach large audiences through social media. Some is driven by ideology; some is commercially funded; some is foreign state disinformation. Teaching children to think critically about scientific claims — and to distinguish good evidence from bad — is now a basic civic skill.
Scientists are people. They have biases, interests, and limitations. The strength of science is not that scientists are objective (they are not) but that the method subjects claims to challenge that reduces bias over time. The history of science shows both brilliant discoveries and shameful episodes — the Tuskegee syphilis study, eugenics, various research scandals. Acknowledging this history is part of a mature relationship with science, not an attack on it.
Treat science with respect but without reverence. Children should learn that science is a powerful and honest way of finding out about the world, while also learning that scientists are humans and the process has its failures. Beware of both 'science denial' (dismissing settled findings) and 'scientism' (treating science as able to answer every question, including moral and political ones it cannot answer). Good civic engagement with science combines respect for evidence with democratic deliberation about values.
If scientists disagree about something, science cannot tell us the answer.
Some disagreement among scientists is normal and healthy. What matters is whether the disagreement is about the edges of a question or about its core. On most major scientific questions (climate change, evolution, the age of the earth, vaccine safety, the basic workings of genetics), there is strong agreement among scientists about the core, with ongoing research on details. Pointing to disagreement over details to claim 'scientists disagree' misrepresents this. Real scientific controversy is different from manufactured controversy, where a small, often commercially backed minority is portrayed as equivalent to a vast majority. Understanding the difference is a key civic skill.
Scientific theories are just guesses — they are not facts.
The word 'theory' is often misunderstood. In everyday speech, 'theory' can mean a guess or hunch. In science, a theory is an explanation supported by a large body of evidence, consistent with many observations, and able to make reliable predictions. The theory of evolution, the theory of gravity, the germ theory of disease — these are not guesses. They are among the most well-supported understandings in human knowledge. 'It's just a theory' is usually based on confusion about the word. Well-established scientific theories are the strongest form of understanding we have about the natural world.
Scientists are always objective and free from personal bias.
Scientists are humans. They have biases, interests, and viewpoints like everyone else. This is not a problem for science itself — science works precisely because it does not rely on individual objectivity. The method subjects claims to peer review, replication, and challenge from other scientists, which reduces the impact of individual bias over time. Individual studies can be biased; the body of scientific knowledge, built up through many studies and many researchers checking each other, is much harder to bias systematically. Pretending scientists are perfectly objective is not accurate. But dismissing science because scientists are human misses how the method is designed to work despite this.
The relationship between science and society is one of the most important civic questions of our time. Teaching it well requires honest treatment of both science's strengths and its complications.
Science is not a single thing with one method. Physics, biology, psychology, sociology, and other sciences use different specific methods.
What the sciences share is systematic observation, testable claims, willingness to revise in light of evidence, and collective verification through publication and challenge. Karl Popper's emphasis on falsifiability — that scientific claims must be potentially disprovable — remains influential. Thomas Kuhn's 'The Structure of Scientific Revolutions' (1962) showed that science develops in part through 'paradigm shifts' where entire frameworks change. Both perspectives capture important truths about how science actually works.
A foundational distinction in philosophy of science is between 'is' and 'ought'. Science can tell us what is the case — how the world works, what is likely to happen, how different factors interact. It cannot tell us what ought to be — what goals we should pursue, what trade-offs are worthwhile, what is fair. When science is treated as answering value questions, or when values are used to override evidence, problems result. Good public engagement with science requires honest separation.
How does consensus form? Through many studies, by many researchers, in many places, producing consistent findings that resist attempts to disconfirm them. Consensus is not majority vote; it is convergence of evidence. Strong consensus exists on the age of the earth (about 4.5 billion years), evolution, the germ theory of disease, vaccine safety and effectiveness, and human-caused climate change. These are not contested within the relevant scientific communities. What is sometimes called 'scientific controversy' is often not controversy within the field but manufactured controversy from outside it.

The history of science and society has been complicated. Science has produced enormous benefit — medicine, agriculture, communication, basic understanding of the universe. It has also been misused. Eugenics was 'science' to many early 20th-century researchers. Medical experiments on vulnerable populations without consent — the Tuskegee syphilis study in the US (1932-1972), Nazi experiments, various colonial contexts — damaged trust that has not fully recovered, particularly in affected communities. Scientific racism attempted to justify colonialism and slavery. Acknowledging this history is not an attack on science; it is part of a mature relationship with it.

Scientific misconduct is rare but real. Andrew Wakefield's 1998 MMR-autism paper was fraudulent; it was retracted in 2010 but has caused ongoing harm. Corporate corruption has been more systematic. Tobacco companies spent decades denying evidence of smoking harm. Fossil fuel companies knew about climate change in the 1970s and 1980s but funded public denial. Pharmaceutical companies have hidden unfavourable data. These are not failures of science itself but failures of institutions that should have provided oversight.
Climate denial, anti-vaccine movements, creationism in some contexts, and newer forms of disinformation have reached substantial audiences. Naomi Oreskes and Erik Conway's 'Merchants of Doubt' (2010) documented the deliberate construction of doubt about settled science. Social media has amplified these movements. Some receive significant funding from interested parties. Some are driven by genuine distrust of institutions. Addressing them requires both countering specific claims and addressing underlying conditions that produce distrust.
Major surveys show that most people in most countries report significant trust in scientists, often higher than trust in many other institutions. But trust varies by topic (scientists are more trusted on, say, basic physics than on contested social issues), by country, and by political affiliation. The Pew Research Center, Wellcome Trust, and others produce regular surveys. In the US, there has been growing politicisation of trust in science in recent decades. In other countries, trust has remained relatively stable. Rebuilding trust requires transparency about uncertainty, genuine independence from political and commercial pressure, engagement with concerns rather than dismissal of concerned groups, and acknowledgement of science's own limits and past failures.
Good science-policy relationships involve clear communication of evidence and uncertainty, democratic deliberation about values, and meaningful expert input to decisions. They are undermined by political interference with scientific advice (removing or reframing evidence for political reasons) and by treating political questions as scientific ones. The COVID-19 pandemic tested these relationships globally. Some countries managed science-policy relationships well; others failed. Lessons are still being drawn.
Synthetic biology, artificial intelligence, climate engineering, and other emerging technologies raise questions that societies have barely begun to address. These need careful scientific research, honest public communication, and democratic deliberation. Treating them as too technical for citizens — or too political for scientists — leads to worse outcomes in both directions.
Treat science with respect but without reverence. Present its successes honestly and its failures honestly. Help students develop the tools to engage with science as citizens, not as passive recipients of expert wisdom. Avoid both 'science denial' (dismissing settled findings) and 'scientism' (treating science as able to answer every question, including moral and political ones).
Science proceeds by pure logic and objectivity — personal biases do not affect it.
Scientists are humans with biases, interests, and viewpoints. Individual studies can be shaped by these. What makes science relatively reliable over time is not individual objectivity but the collective process — peer review, replication, and challenge that reduce the impact of individual bias. Philosophers of science (Kuhn, Feyerabend, and others) have documented how science is a human activity, shaped by social and cultural contexts. Historians have documented how scientific findings have sometimes been wrong in ways that reflected the biases of their time (eugenics, scientific racism, and more). This does not make science useless or unreliable — the self-correcting features of the method have eventually overturned these errors. But it does mean the image of science as pure, objective, and individually neutral is a myth. A more honest picture is of science as a powerful but fallible human collective activity, whose strength lies in its methods of collective correction rather than in individual perfection.
If scientists disagree, there is no scientific consensus.
Some disagreement among scientists is normal and healthy — it is how science progresses. Consensus does not require unanimity. What matters is the distribution and quality of disagreement. On most major scientific questions (climate change, evolution, vaccine safety, age of the earth), strong consensus exists among scientists working in the relevant field, with only a small number of dissenters, often with weaker credentials or conflicts of interest. Pointing to these dissenters to claim 'scientists disagree' misrepresents the real situation. This tactic, sometimes called 'false balance', has been systematically used in manufactured doubt campaigns. Recognising the difference between genuine scientific controversy (live debates among well-qualified researchers with good evidence on multiple sides) and manufactured controversy (a small industry-aligned minority amplified by media and disinformation) is a key civic skill.
Science can answer any question if we just do enough research.
Science is powerful but limited. It is extraordinarily good at answering empirical questions — what is the case, how things work, what will happen under certain conditions. It cannot, by itself, answer questions about values, meaning, or morality. What should we do? What is right? What makes a good life? Science can inform these questions but cannot answer them. Religion, philosophy, art, and everyday moral reflection address questions science cannot reach. Treating science as able to answer every meaningful question is called scientism. Scientism tends to produce bad decisions in areas science cannot settle, and to crowd out the other forms of reflection those areas require. Respecting what science can do — a great deal — does not require pretending it can do everything.
Public trust in science is declining, so faith in expertise is collapsing.
Data on public trust in science are more complex than common narratives suggest. Major surveys (Pew, Wellcome Global Monitor, Eurobarometer) show that in most countries, most people report substantial trust in scientists — often higher than trust in most other institutions. Trust has declined in some countries and on some topics, particularly where politicisation is acute (the US on climate, for example). But global trust in science remains relatively high. The perception of collapsing trust is often driven by vocal minorities amplified in media attention out of proportion to their numbers. That said, trust varies substantially by topic (higher on basic physics than on contested applications), by country, by political affiliation, and by community (historically harmed communities have lower trust for understandable reasons). Maintaining trust requires ongoing work — transparency, honesty about uncertainty, acknowledgement of past wrongs, engagement with concerns. Panic about collapsing trust, while grounded in real concerns in some contexts, can also obscure the substantial trust that continues to exist.
Key texts for students: Karl Popper, 'Conjectures and Refutations' (1963) — on falsifiability and scientific method. Thomas Kuhn, 'The Structure of Scientific Revolutions' (1962) — on paradigm shifts. Naomi Oreskes and Erik M. Conway, 'Merchants of Doubt' (2010) — on manufactured doubt campaigns from tobacco to climate. Ben Goldacre, 'Bad Science' (2008) and 'Bad Pharma' (2012) — accessible critiques of science misuse. Naomi Oreskes, 'Why Trust Science?' (2019) — philosophical defence with honest history. David Wootton, 'The Invention of Science' (2015) — history of the scientific revolution. Adam Rutherford, 'How to Argue with a Racist' (2020) — on scientific racism and its rejection.

For specific topics: James Lovelock and others on planetary science; Robert Proctor, 'Golden Holocaust' (2011) on tobacco; Rebecca Skloot, 'The Immortal Life of Henrietta Lacks' (2010) on ethics in research.

For data: the Wellcome Global Monitor on global attitudes to science; the Pew Research Center; the Eurobarometer surveys on science and technology.

For contemporary science communication: the Science Media Centre (UK); the AAAS (US); the CSES on public engagement.

For disinformation: the First Draft Coalition; the Reuters Institute Digital News Report.

For philosophy of science: the Stanford Encyclopedia of Philosophy (free online).

For science policy specifically: Daniel Sarewitz and others at the Consortium for Science, Policy and Outcomes.