Timnit Gebru is an Ethiopian-American computer scientist and one of the most influential researchers in the field of AI ethics. Her work has shown how artificial intelligence systems can encode and amplify racial bias, gender bias, and other forms of harm.

She was born in 1982 or 1983 in Addis Ababa, Ethiopia, and came to the United States as a teenage refugee; her family fled Ethiopia after political violence affected her brothers. She arrived in the US in 2001 and faced the obstacles common to refugees. She did her undergraduate work in electrical engineering at Stanford University and earned a PhD at the Stanford AI Lab in 2017, studying under the leading computer vision researcher Fei-Fei Li. Her doctoral work used Google Street View images to predict demographic and political patterns from the cars visible in different neighbourhoods, research that drew wide attention.

After her PhD she did postdoctoral work at Microsoft Research, then joined Google in 2018, where she co-led the Ethical AI team alongside Margaret Mitchell. In 2020, Gebru and her colleagues drafted a research paper, often called the Stochastic Parrots paper, arguing that large language models, then a new technology, carried serious risks. Google ordered her to retract the paper or remove her name. She refused, and Google fired her, though Google described the departure differently. The firing caused a major public controversy: many researchers signed petitions in her support, and several Google researchers later resigned in protest.

In 2021 she founded the Distributed AI Research Institute (DAIR), an independent research lab focused on AI ethics. She has continued to speak publicly about harms in AI systems and has become one of the most visible critics of how the technology industry develops AI.
Timnit Gebru matters for three reasons. First, her research has documented serious problems in AI systems that mainstream developers had ignored or downplayed. Her 2018 paper Gender Shades, written with Joy Buolamwini, showed that commercial facial recognition systems were far less accurate on the faces of darker-skinned women than on those of lighter-skinned men: some systems misclassified Black women's faces almost 35 per cent of the time while classifying white men's faces correctly over 99 per cent of the time. The paper changed the conversation about AI bias, and several major companies revised their systems after it was published.
Second, the public controversy over her firing from Google in 2020 made AI ethics a major public issue. Most people had not thought much about how AI systems were developed, who controlled them, or what risks they carried. Her firing showed that researchers who raised concerns at major companies could be punished, and it launched a serious public debate about whether tech companies should be left to police their own AI development. Many other researchers have since followed her in raising concerns publicly.
Third, she founded the Distributed AI Research Institute in 2021, an independent research lab focused on AI ethics, especially as AI affects communities outside the wealthy West. The institute employs researchers from around the world and has produced important work on language-model harms, on the labour conditions of the workers who train AI systems, and on AI's effects on the global South. The institute continues to grow, and it offers one model for how serious AI research can happen outside the major tech companies.
For a first introduction, the documentary Coded Bias (2020), directed by Shalini Kantayya, follows Gebru, Joy Buolamwini, and others working on AI ethics. It is suitable for general audiences and includes accessible explanations. Gebru's TED talks and conference keynotes are widely available on YouTube. The DAIR Institute website provides accessible summaries of current AI ethics research. Karen Hao's articles in MIT Technology Review have covered Gebru's career carefully.
For deeper reading, Gebru's published papers including Gender Shades (with Joy Buolamwini, 2018), Datasheets for Datasets (2018), and the Stochastic Parrots paper (with Emily Bender, Angelina McMillan-Major, and Margaret Mitchell, 2021) are all freely available online. Joy Buolamwini's book Unmasking AI (2023) covers the Gender Shades research and the wider field. Kate Crawford's Atlas of AI (2021) gives broader context for the issues Gebru works on.
Timnit Gebru is anti-AI.
She is not. She is a computer scientist with a Stanford PhD in AI. She has spent her career building AI systems and analysing them. She is a critic of how AI is currently developed by a few large companies, not of AI as a technology. She believes AI can be developed responsibly with proper documentation, accountability, and inclusion of affected communities. The DAIR Institute she founded does AI research, not anti-AI activism. Treating her as opposed to AI itself misses the careful position she actually holds. She wants AI developed differently, not abandoned. The distinction matters.
Her firing from Google was a personal disagreement, not about research.
It was about research. Google asked her to retract the Stochastic Parrots paper or remove her name and her colleagues' names. She refused. Google argued she had not followed proper review processes. She and her supporters argued the demand to retract was unprecedented and the review process was being applied selectively. The paper itself raised concerns about large language models that have been borne out by subsequent events. Several researchers at Google later resigned in solidarity with Gebru. Internal Google documents that became public suggested the dispute was indeed about the substance of her research, not just personal conduct. Treating it as just a personnel matter misses what was actually at stake.
AI bias is just about training data, easily fixed by adding more diverse data.
Gebru's research shows the problem is deeper. Bias enters AI systems at many points: in how data is collected, in how it is labelled, in what tasks the systems are designed for, in how they are tested, in who is allowed to flag problems. Adding more diverse training data helps but does not solve the structural problem. AI systems often serve the interests of the people building them. If the same people keep building the same systems, the same biases keep appearing. Real fixes require changes to who is in the room, who has decision-making authority, and what counts as a successful AI product. Treating bias as a technical bug misses how it is built into the structure of AI development itself.
AI ethics researchers like Gebru are slowing down beneficial AI progress.
This is contested. AI ethics researchers including Gebru argue they are pointing out real harms that responsible developers should address; critics argue this slows beneficial development. The framing depends on what counts as beneficial. AI that works well for some communities and badly for others is not equally beneficial, and AI that solves some problems while creating new harms is not straightforwardly progress. Ethics research has often led to better systems, not worse ones. Companies that took Gender Shades seriously improved their products; companies that ignored such research have shipped harmful products and faced public backlash, lawsuits, and regulatory action. The picture of ethics as opposed to progress simplifies a much more complex relationship. Better ethics often produces better technology, not worse.
For research-level engagement, the FAccT (Fairness, Accountability, and Transparency) conference proceedings publish much of the leading work in the field. The journals Big Data & Society, Patterns, and others regularly publish AI ethics research. Recent work by Margaret Mitchell, Emily Bender, Deborah Raji, Meredith Broussard, Ruha Benjamin, and many others builds on and extends what Gebru has helped establish. The DAIR Institute publishes ongoing research that represents the leading edge of the field.