
Timnit Gebru

Timnit Gebru is an Ethiopian-American computer scientist and one of the most influential researchers in the field of AI ethics. Her work has shown how artificial intelligence systems can encode and amplify racial bias, gender bias, and other forms of harm.

She was born in 1982 or 1983 in Addis Ababa, Ethiopia, and came to the United States as a teenage refugee after political violence affected her brothers and forced her family to flee. She arrived in the US in 2001 and faced the challenges common to refugee life in a new country. She did her undergraduate work at Stanford University in electrical engineering and earned a PhD at the Stanford AI Lab in 2017, studying under the leading computer vision researcher Fei-Fei Li. Her doctoral work used Google Street View images to predict demographic and political patterns from the cars visible in different neighbourhoods, research that drew wide attention.

After her PhD, she did postdoctoral work at Microsoft Research, then joined Google in 2018, where she co-led the Ethical AI team alongside Margaret Mitchell. In 2020, Gebru and her colleagues drafted a research paper about the risks of large language models, then an emerging technology. The paper, often called the Stochastic Parrots paper, argued that very large AI language systems carried serious risks. Google ordered her to retract the paper or remove her name. She refused. Google fired her, though Google described the departure differently. The firing caused a major public controversy: many researchers signed petitions in her support, and several Google researchers later resigned in protest.

In 2021 she founded the Distributed AI Research Institute (DAIR), an independent research lab focused on AI ethics. She has continued speaking publicly about harms in AI systems and has become one of the most visible critics of how the technology industry develops AI.

Origin
Ethiopia (later United States)
Lifespan
1982/1983 - present
Era
Modern / 21st Century
Subjects
Artificial Intelligence, AI Ethics, Computer Science, 21st Century, Tech Industry
Why They Matter

Timnit Gebru matters for three reasons. First, her research has documented serious problems in AI systems that mainstream developers had ignored or downplayed. Her 2018 paper Gender Shades, with Joy Buolamwini, showed that commercial facial analysis systems were far less accurate on darker-skinned women's faces than on lighter-skinned men's. On the gender classification task, some systems misclassified darker-skinned women's faces almost 35 per cent of the time, while classifying lighter-skinned men's faces correctly over 99 per cent of the time. The paper changed the conversation about AI bias. Several major companies revised their systems after the research came out.

Second, the public controversy over her firing from Google in 2020 made AI ethics a major public issue. Most people had not thought much about how AI systems were developed, who controlled them, or what risks they carried. Her firing showed that researchers raising concerns at major companies could be punished. The case launched serious public debate about whether tech companies should police their own AI development. Many other researchers followed her in raising concerns publicly.

Third, she founded the Distributed AI Research Institute in 2021. The institute is an independent research lab focused on AI ethics, especially as AI affects communities outside the wealthy West. The institute employs researchers from around the world. It has produced important work on language model harms, on the labour conditions of workers who train AI systems, and on AI's effects on the global South. The institute continues to grow. It is one model for how serious AI research can happen outside the major tech companies.

Key Ideas
1. What Is AI Bias?
2. Gender Shades
3. Why She Was Fired
Key Quotations
"We can't take care of all of the problems with our technology by hoping for the best."
— Timnit Gebru, public lectures, c. 2019-2024
Gebru has often pushed back against the hope that AI will work out fine if we just trust the people building it. Hope is not a strategy, she argues. The harms in AI systems are not random accidents. They follow from how the systems are built, who builds them, what data they use, and who profits. Without serious changes, the harms will continue. The line is direct. Many people in technology argue we just need to be optimistic, to trust that progress will sort itself out. Gebru disagrees. She argues we need careful research, regulation, and accountability. Hoping for the best is not enough. For students, the line is a useful prompt for thinking about technology generally. Many problems in many fields cannot be solved by good intentions alone. They require structures, rules, and accountability. Gebru applies this insight to AI in ways the industry has often resisted.
"I'm tired of being asked to be a good Black girl in AI ethics."
— Timnit Gebru, interviews and public statements, c. 2021
Gebru said versions of this around the time of her firing from Google. The line captures her impatience with being expected to make her criticism polite, smooth, and easy for powerful companies to ignore. She is tired of being told she should raise her concerns more gently. She is tired of the unspoken expectation that a Black woman in tech should be grateful for her position and not push too hard. The line is sharp. It connects her work in AI ethics to wider questions of how Black women in professional settings are expected to perform. Modesty. Patience. Cooperation. Gebru has refused these expectations. She has spoken directly about the harms she sees, regardless of how this is received. The cost has been real. The willingness to take that cost has shaped her career. For students, the line is a useful introduction to how race and gender shape professional life, even in technical fields that are sometimes claimed to be neutral. They are not.
Using This Thinker in the Classroom
Scientific Thinking: When introducing students to AI bias
How to introduce
Tell students about the Gender Shades paper. Gebru and Buolamwini tested commercial facial analysis systems. The systems were highly accurate on lighter-skinned men's faces but misclassified darker-skinned women's faces nearly 35 per cent of the time. Discuss with students why this might happen. The systems learn from examples. If the examples are mostly light-skinned faces, the systems learn that pattern best, and they perform worse on kinds of faces they have seen few examples of. The bias is not deliberate. It is built into the choices made when the systems were trained. Discuss with students how AI is now used in many parts of daily life: phones, hiring software, school admissions, security cameras, healthcare. Bias in these systems can affect real lives in serious ways.
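For teachers comfortable with a little code, the mechanism can be demonstrated directly. The sketch below is a toy illustration only, using synthetic data rather than faces, and it is not the Gender Shades method: it shows that a classifier trained on a dataset dominated by one group tends to score well on that group and poorly on the underrepresented one.

```python
# Toy sketch of representation bias: train on imbalanced synthetic data,
# then measure accuracy separately for each group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample_group(n, w):
    """Generate n examples whose labels follow a group-specific pattern w."""
    X = rng.normal(size=(n, 4))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

w_a = np.array([1.0, 1.0, 0.0, 0.0])  # group A's labels depend on features 0-1
w_b = np.array([0.0, 0.0, 1.0, 1.0])  # group B's labels depend on features 2-3

# The training set is 95% group A and 5% group B: the imbalance.
Xa, ya = sample_group(1900, w_a)
Xb, yb = sample_group(100, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Disaggregated evaluation: report one accuracy number per group.
for name, w in [("group A", w_a), ("group B", w_b)]:
    Xt, yt = sample_group(2000, w)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

Run as written, the model scores far higher on group A than on group B, not because anyone intended that outcome, but because the training choices favoured one group.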
Ethical Thinking: When teaching students about technology and responsibility
How to introduce
Discuss with students who is responsible when an AI system harms someone. Software companies often say their systems are just tools and that users decide how to use them. But the systems are designed in particular ways: the training data is chosen, the tests are chosen, and the moment to ship is chosen. Gebru argues these choices carry real responsibility. Companies that build AI cannot simply say that any harm is the user's fault. Discuss with students how this connects to other technology debates. Cars are designed with safety features. Food is regulated for purity. Why should AI be different? The discussion can be pitched at age-appropriate levels. The basic question is real and worth taking seriously.
Cultural Heritage and Identity: When teaching students about migration and creative work
How to introduce
Tell students that Timnit Gebru came to the United States as a teenage refugee from Ethiopia. She faced the typical challenges of refugee life in a new country, and she also became one of the most important AI researchers of her generation. Discuss with students how migration shapes careers. Many leading scientists, artists, and entrepreneurs have moved from one country to another, and the combination of perspectives often produces work that neither country alone would have produced. Gebru's experience as a Black woman from Africa working in mostly white tech companies shaped what she noticed about AI bias. She saw what others missed. The example connects to many other migrants whose outsider perspective has driven important work.
Further Reading

For a first introduction, the documentary Coded Bias (2020), directed by Shalini Kantayya, follows Gebru, Joy Buolamwini, and others working on AI ethics. It is suitable for general audiences and includes accessible explanations. Gebru's TED talks and conference keynotes are widely available on YouTube. The DAIR Institute website provides accessible summaries of current AI ethics research. Karen Hao's articles in MIT Technology Review have covered Gebru's career carefully.

Key Ideas
1. The Stochastic Parrots Paper
2. Datasheets for Datasets
3. DAIR: Independent Research
Key Quotations
"Gender and racial bias in computer vision systems doesn't just happen. It is built in by the choices of the people who build the systems."
— Paraphrased from Gebru's published research and lectures
Gebru has often emphasised that AI bias is not a freak accident. It is the result of specific choices. The choice to use a training dataset that mostly contains light-skinned faces. The choice to test the system mostly on developers, who are mostly men. The choice to ship a product before testing it on diverse populations. The choice to ignore researchers who raise concerns. None of these choices are forced. People make them. Different choices would produce different systems. The view is important. It moves AI bias from a sad inevitability to a problem with people responsible for it. If choices caused the bias, different choices can fix it. Companies, regulators, and citizens can demand the different choices. For students, this is a useful framing. Technology is not separate from the people who make it. It carries their assumptions. It reflects their priorities. Gebru insists that we keep this connection in view. AI is not just maths. It is human work with human consequences.
"There is no AI ethics if the people building AI never have to listen to the people harmed by it."
— Paraphrased from Gebru's lectures and writings
This kind of statement appears in different forms in Gebru's public talks. The point is sharp. AI ethics, she argues, is meaningless if it is just internal consultation among the people who profit from AI. Real ethics requires meaningful input from the people who are harmed. Workers whose jobs are replaced. Communities whose data is taken. Defendants who are misidentified by face recognition. The pattern of recent AI development has often been to consult the powerful and ignore the rest. Gebru argues this is a betrayal of the word 'ethics'. Ethics requires listening to those affected, especially when they have less power than those making decisions. The DAIR institute she founded tries to put this principle into practice. Researchers from communities affected by AI lead the work. For students, the line is a useful test for any claim of corporate ethics. Are the affected people in the room? Are they being listened to? Or is 'ethics' just internal communication among those who benefit?
Using This Thinker in the Classroom
Critical Thinking: When teaching students about whistleblowers and corporate power
How to introduce
Discuss with students Gebru's firing from Google in 2020. Her co-authored paper raised concerns about large language models. Google asked her to retract it or remove the Google authors' names. She refused. Google fired her. The case became a major public scandal. Discuss with students what this means for how research is done at large companies. Researchers who find problems may face serious career consequences for publishing what they find. Companies have power over employee speech. Public interest sometimes conflicts with company interests. Gebru's case is a clear example. Discuss with students how to think about this. Different students will have different views on her response, and the discussion is the point. The dilemma is real and applies to many fields beyond AI.
Research Skills: When teaching students about how rigorous research changes things
How to introduce
Walk students through what made Gender Shades effective. Gebru and Buolamwini did not just complain about face recognition bias. They tested. They put together a careful sample of faces with different skin tones and genders. They ran the major commercial systems on the sample. They published the results in a peer-reviewed venue. The evidence was clear. Major companies had to respond. Several improved their systems. Some stopped selling face recognition to police. Discuss with students what made this work powerful. Specific evidence. Careful methodology. Clear writing. Public availability. Targeted at the right audience. The combination is a model for how research can produce real change. Students working on serious projects can learn from Gebru's example. Anger alone often achieves little. Anger plus careful evidence can move mountains.
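The core of the method is disaggregated evaluation: report error rates per intersectional subgroup instead of one overall number. A minimal sketch of that idea, using made-up records and field names rather than the actual Gender Shades dataset, might look like this:

```python
# Disaggregated audit sketch: error rate per (skin tone, gender) subgroup.
from collections import defaultdict

# Each record holds the system's output, the true label, and the
# subject's demographic groups. Hypothetical data for illustration.
results = [
    {"pred": "female", "true": "female", "skin": "darker",  "gender": "female"},
    {"pred": "male",   "true": "female", "skin": "darker",  "gender": "female"},
    {"pred": "male",   "true": "male",   "skin": "lighter", "gender": "male"},
    # ... a real audit uses hundreds of balanced examples per subgroup
]

totals, errors = defaultdict(int), defaultdict(int)
for r in results:
    key = (r["skin"], r["gender"])      # the intersectional subgroup
    totals[key] += 1
    errors[key] += r["pred"] != r["true"]

for key in sorted(totals):
    print(f"{key}: error rate {errors[key] / totals[key]:.1%} (n={totals[key]})")
```

An overall accuracy computed on the pooled data would hide exactly the disparity this loop exposes. That was the methodological point of Gender Shades.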
Further Reading

For deeper reading, Gebru's published papers including Gender Shades (with Joy Buolamwini, 2018), Datasheets for Datasets (2018), and the Stochastic Parrots paper (with Emily Bender, Angelina McMillan-Major, and Margaret Mitchell, 2021) are all freely available online. Joy Buolamwini's book Unmasking AI (2023) covers the Gender Shades research and the wider field. Kate Crawford's Atlas of AI (2021) gives broader context for the issues Gebru works on.

Key Ideas
1. Race and Gender in AI
2. Her Pushback Against AI Hype
3. The Costs of Speaking Out
Key Quotations
"We should be focused on the immediate harms of these systems, not hypothetical ones."
— Paraphrased from Gebru's commentary on AI safety debates
Gebru has consistently pushed back against debates that focus on hypothetical future AI risks while ignoring documented current harms. Some prominent voices in technology warn that AI could one day end humanity. Gebru argues this kind of talk often distracts from immediate harms happening now. People are being denied jobs, denied loans, wrongly arrested, harassed, and exploited by AI systems already deployed. The hypothetical future risks may or may not become real. The current harms are documented. Both could be addressed. In practice, attention often goes to the dramatic future at the expense of the unglamorous present. Gebru argues this serves the interests of companies that profit from current AI development. Talking about the future avoids accountability for the present. For advanced students, this framing is one of the most important in current AI policy. The choice of which risks to focus on shapes which policies get developed. Gebru's framing centres people being harmed now. Different framings produce different priorities.
"If you can't tell where your data came from, you don't actually know what your model does."
— Paraphrased from Gebru's research on datasets and documentation
Gebru's work on Datasheets for Datasets makes this point clearly. AI models are trained on data. The data shapes what the model can do, what biases it has, what kinds of mistakes it makes. If developers do not know where their data came from, how it was collected, or who it represents, they cannot really say what their model does. They can describe how it behaves on tests. They cannot say why. They cannot predict how it will fail in new situations. Many major AI systems have been trained on huge datasets scraped from the internet with little documentation. The problems are predictable. The systems reproduce internet biases. They make harmful claims confidently. They fail in surprising ways. Gebru's argument is that proper data documentation is not optional. It is essential to knowing what you have built. For advanced students, the line is useful for thinking about AI development carefully. Without good documentation of inputs, claims about outputs are uncertain. The current AI industry often ignores this principle. Gebru insists on it.
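The Datasheets for Datasets proposal is, at heart, structured documentation. A minimal sketch of the idea, with illustrative field names rather than the paper's actual question list, might look like this:

```python
# Sketch of a datasheet as structured documentation. The fields are
# illustrative; the actual paper poses a much longer list of questions.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    motivation: str          # why was the dataset created?
    collection_method: str   # how and when were instances gathered?
    population: str          # who or what do the instances represent?
    labelling_process: str   # who labelled the data, and how?
    known_gaps: list[str] = field(default_factory=list)  # documented blind spots

# Hypothetical example: a face dataset whose datasheet makes its
# limitations visible before anyone trains on it.
faces = Datasheet(
    name="example-faces-v1",
    motivation="Benchmark for gender classification research",
    collection_method="Scraped from public photo sites, 2016-2017",
    population="Mostly North American and European adults",
    labelling_process="Three crowdworkers per image, majority vote",
    known_gaps=["Few darker-skinned faces", "No children"],
)
```

A developer reading this datasheet knows in advance that a model trained on example-faces-v1 will likely underperform on darker-skinned faces. That is the point: documentation turns an invisible failure mode into a stated limitation.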
Using This Thinker in the Classroom
Ethical Thinking: When teaching students about whose voices count
How to introduce
Discuss with advanced students Gebru's claim that AI ethics is meaningless without input from the people most affected by AI. Most AI is currently developed by well-paid white and Asian men in a few wealthy countries. The harms fall disproportionately on women, people of colour, and communities outside the wealthy West. Gebru argues that 'ethics' which consults only the developers is not really ethics. Discuss with students how this principle might apply to other fields: medical research that ignores the patients most affected, policy made without listening to the affected communities, education designed without input from students. The pattern is widespread. Gebru's argument is one of many calls for it to change. The discussion can be applied to many situations students will encounter.
Critical Thinking: When teaching students about AI hype and reality
How to introduce
Discuss with advanced students Gebru's pushback against AI hype. Tech executives and some commentators talk about AI in dramatic terms. AI will solve all major problems. Or AI will destroy humanity. Gebru argues both extremes serve the interests of AI companies. Talking about future utopia or apocalypse distracts from current harms. The current harms are documented. Discrimination in hiring, in policing, in healthcare access, in welfare decisions. The future scenarios are speculative. The current harms are real. Discuss with students how this framing affects policy debates. If we focus on speculative futures, we may miss harms happening now. If we focus on current harms, we may miss real future risks. Both kinds of attention may be needed. The question of how to balance them is genuinely difficult. Gebru's framing is one position in this debate, worth taking seriously.
Common Misconceptions
Common misconception

Timnit Gebru is anti-AI.

What to teach instead

She is not. She is a computer scientist with a Stanford PhD in AI. She has spent her career building AI systems and analysing them. She is a critic of how AI is currently developed by a few large companies, not of AI as a technology. She believes AI can be developed responsibly with proper documentation, accountability, and inclusion of affected communities. The DAIR Institute she founded does AI research, not anti-AI activism. Treating her as opposed to AI itself misses the careful position she actually holds. She wants AI developed differently, not abandoned. The distinction matters.

Common misconception

Her firing from Google was a personal disagreement, not about research.

What to teach instead

It was about research. Google asked her to retract the Stochastic Parrots paper or remove her name and her colleagues' names. She refused. Google argued she had not followed proper review processes. She and her supporters argued the demand to retract was unprecedented and the review process was being applied selectively. The paper itself raised concerns about large language models that have been borne out by subsequent events. Several researchers at Google later resigned in solidarity with Gebru. Internal Google documents that became public suggested the dispute was indeed about the substance of her research, not just personal conduct. Treating it as just a personnel matter misses what was actually at stake.

Common misconception

AI bias is just about training data, easily fixed by adding more diverse data.

What to teach instead

Gebru's research shows the problem is deeper. Bias enters AI systems at many points: in how data is collected, in how it is labelled, in what tasks the systems are designed for, in how they are tested, in who is allowed to flag problems. Adding more diverse training data helps but does not solve the structural problem. AI systems often serve the interests of the people building them. If the same people keep building the same systems, the same biases keep appearing. Real fixes require changes to who is in the room, who has decision-making authority, and what counts as a successful AI product. Treating bias as a technical bug misses how it is built into the structure of AI development itself.

Common misconception

AI ethics researchers like Gebru are slowing down beneficial AI progress.

What to teach instead

This is contested. AI ethics researchers including Gebru argue they are pointing out real harms that responsible developers should address. Critics argue this slows beneficial development. The framing depends on what counts as beneficial. AI that works well for some communities and badly for others is not equally beneficial. AI that solves some problems while creating new harms is not just progress. Ethics research has often led to better systems, not worse ones. Companies that took Gender Shades seriously improved their products. Companies that ignored such research have shipped harmful products and faced public backlash, lawsuits, and regulatory action. The picture of ethics as opposed to progress simplifies a much more complex relationship. Better ethics often produces better technology, not worse.

Intellectual Connections
Develops
W.E.B. Du Bois
Du Bois, the great early Black American sociologist, did pioneering empirical research on race in America at the turn of the 20th century. He used careful data, surveys, and statistical analysis to document racism rather than just describe it. Gebru works in this tradition. Her Gender Shades research uses careful data analysis to document AI bias. Reading them together gives students a sense of how Black scholars across more than a century have used rigorous evidence to make racism visible to people who might otherwise deny it. Du Bois's methods set a model. Gebru's work, in a very different field, applies the same rigorous empirical spirit to new problems.
Develops
Alan Turing
Turing's work on the foundations of computing made modern AI possible. Gebru works in the field he helped create. The relationship is one of building on a foundation while also questioning some of its assumptions. Turing famously asked whether machines could think. The question has shaped AI for over 70 years. Gebru's work asks different questions: who builds these machines, who they serve, who they harm. Both questions matter. Reading them together gives students a sense of how a major intellectual field can be critically extended. Turing's questions are still important. Gebru's questions are also important. Modern AI ethics needs both.
Complements
Kimberlé Crenshaw
Crenshaw, the legal scholar who developed the framework of intersectionality, analysed how race and gender combine to produce specific kinds of discrimination Black women face. Gebru's Gender Shades research is essentially an application of intersectional analysis to AI. The systems failed not just on Black faces, not just on women's faces, but specifically on Black women's faces, where race and gender bias compounded. Reading them together gives students a sense of how Crenshaw's framework illuminates real-world problems beyond law. Intersectional analysis shows up in technology, healthcare, and many other fields. Gebru's work is one of the most influential applications.
Complements
Patricia Hill Collins
Collins, the great Black feminist sociologist, has written extensively about how Black women's standpoints can produce knowledge that other perspectives miss. Gebru's career is a clear illustration of this argument. As a Black woman in AI, she noticed problems that white and Asian male researchers had not. Her standpoint did not just give her different opinions. It gave her access to evidence and patterns that the dominant perspective had overlooked. Reading them together gives students a sense of how social position and intellectual contribution connect. Collins's theoretical framework helps explain why Gebru's work was possible and why it took someone like her to do it.
Complements
Mary Midgley
Midgley, the British philosopher, criticised what she called scientism: the idea that science can answer all serious questions including moral and social ones. Gebru's work in AI ethics raises related concerns. Technology companies often present AI as a neutral technical achievement. Gebru argues the technical and the ethical cannot be separated. Like Midgley's critique of scientism, Gebru's work insists that powerful intellectual systems carry social and ethical assumptions that need to be examined directly. Reading them together gives students a sense of how a long tradition of critical engagement with science and technology continues into the AI era.
In Dialogue With
Richard Dawkins
Dawkins has claimed strong scientific authority on moral and social questions, a stance contested by figures including Mary Midgley. Gebru's work raises related concerns about how technical authority can be overextended. AI researchers, like prominent biologists before them, sometimes claim that their technical expertise gives them authority on much broader questions. Gebru pushes back: technical expertise in AI does not include knowing whether AI should be deployed in particular ways. That question requires different kinds of input, including from communities affected by the technology. Reading them together helps students think about the proper limits of technical authority in social and political questions. The boundary is real and contested.
Further Reading

For research-level engagement, the FAccT (Fairness, Accountability, and Transparency) conference proceedings publish much of the leading work in the field. The journals Big Data and Society, Patterns, and others regularly publish AI ethics research. Recent work by Margaret Mitchell, Emily Bender, Deborah Raji, Meredith Broussard, Ruha Benjamin, and many others builds on and extends what Gebru has helped establish. The DAIR Institute publishes ongoing research that represents the leading edge of the field.