5 Women in Artificial Intelligence You Should Know

As we approach the Fifth Industrial Revolution, these female AI researchers are acting as a much-needed voice of morality.

Apr 18, 2025 | By Vanja Subotic, PhD Philosophy


 

Since the dawn of computer science, the quest for artificial general intelligence (AGI) has spurred both fear and enthusiasm. “Can machines think?” was the question that kept great minds like Alan Turing awake at night and later gave rise to the famous Turing test and other thought experiments. Passing that test was historically out of reach, until a recent version of ChatGPT reportedly passed a rigorous variant of it. Meanwhile, five remarkable women are helping to steer the ethical direction of AI research, and their principled intentions contrast starkly with the commercial exploitation of AI.

 

1. Emily Bender

A photo of Emily Bender by Ian Allen. Source: NY Magazine

 

Who would have thought that one of the biggest tech scandals of the last few years would be ignited by a linguist? Emily Bender was the lead co-author of the now-legendary paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which led to the controversial firing of Timnit Gebru and Margaret Mitchell from Google. With an MA and PhD from Stanford, Bender earned tenure at the University of Washington, where she also founded the Computational Linguistics Laboratory. She was elected a Fellow of the American Association for the Advancement of Science in 2022 and currently serves as president of the Association for Computational Linguistics.

 

Bender and her co-authors stressed the environmental, financial, and social harms of the large language models that were, in 2021, still brewing in the laboratories of leading tech companies. Their metaphor of the “stochastic parrot” conveys the key point: these models merely parrot whatever they find in enormous, unfiltered datasets, stitching words together according to learned probabilities without any grasp of linguistic meaning. In other words, ChatGPT does not understand what you ask it; it only spits out what is statistically the most likely answer to your query. That is why the chatbot makes mistakes and can easily spiral into hateful and toxic speech.
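To make the “stochastic parrot” idea concrete, here is a deliberately tiny sketch (an illustrative toy, not a real language model): a bigram model that picks each next word purely from word-pair frequencies in its training text, with no notion of meaning whatsoever.

```python
import random
from collections import Counter, defaultdict

# Toy "stochastic parrot": count which word follows which in a tiny corpus,
# then generate text by sampling the next word from those counts alone.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    # Sample a follower of `prev` weighted by how often it appeared.
    counts = bigrams[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

sentence = ["the"]
for _ in range(5):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

The output is fluent-looking word salad: every word is plausible given the one before it, yet nothing in the program represents what any word means. Scaled up by many orders of magnitude, that is the gist of Bender's critique.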

 

Bender has also recently proposed an interesting thought experiment, reminiscent of John Searle’s Chinese Room, to illustrate this shortcoming of chatbots. Imagine two native speakers of English stranded on two separate islands, able to communicate only through underwater cables. An octopus, having nothing better to do, eavesdrops on the conversation. It may detect statistical patterns in the words used and even join the conversation. Does that mean the octopus understands English, though? Bender argues it does not: if one of the speakers asked for help during a bear attack, the octopus could offer none, even though it might convincingly impersonate the other speaker based on the frequency of particular words.


 

2. Timnit Gebru

An illustrated photo of Timnit Gebru for TIME’s List of 100 Most Influential People in AI. Source: TIME

 

Timnit Gebru is one of the most fervent advocates for diversity in the male-dominated field of AI. She is a co-founder of the Black in AI workshop and the founder of the Distributed AI Research Institute (DAIR), where she practices what she preaches. She was named one of the 100 most influential people in AI by TIME in 2023 and one of Nature’s top 10 researchers who shaped science in 2021.

 

Born in Addis Ababa, Gebru fled Ethiopia with her family during the Eritrean-Ethiopian War, which followed the Eritrean War of Independence. She received political asylum in the USA and went on to earn her BSc, MSc, and PhD from Stanford. Her specialization was computer vision, and even before becoming an AI activist, she warned that the lack of diversity in datasets skews and biases software’s outputs. With Joy Buolamwini, she published a paper showing that some commercial facial recognition systems misclassify darker-skinned women at error rates of up to 35%, compared with under 1% for lighter-skinned men.

 

Gebru was fired from Google along with Margaret Mitchell after publishing the now-famous paper on the risks of large language models co-authored with Emily Bender; both were members of Google’s Ethical AI team. However, as one door closed, another opened: she started an independent research institute to fight racial prejudice in AI and the unsafe, uncritical development of AI systems. In a recent publication, she traces the intellectual roots of the quest for AGI to eugenics: essentially, the utopian dream of a particular ethnic and social group, rebranded as a promise to end humankind’s suffering. Gebru therefore argues that AGI is unethical in principle and that legal experts should scrutinize the companies purporting to build it.

 

3. Margaret Mitchell

A photo of Margaret Mitchell by Chona Kasinger. Source: Bloomberg

 

An intellectual offspring of Ada Lovelace, Margaret Mitchell has worked in deep learning, natural language generation, and clinical and assistive technology for people with mild cognitive impairments and non-verbal people. Mitchell obtained a master’s in computational linguistics from the University of Washington under the supervision of Emily Bender and earned her PhD in computer science at the University of Aberdeen. She has held important positions at the intersection of science and industry: she was a founding member of Microsoft’s Cognition research group and co-lead of Google’s Ethical AI team and Machine Learning Fairness research group.

 

After being fired from Google along with Timnit Gebru over the paper they co-authored with Emily Bender, Mitchell joined the startup Hugging Face and took part in developing BLOOM, an open-source, multilingual large language model with a remarkable level of transparency about its training data and inner workings. A top-notch AI researcher, Mitchell is now among the leading AI ethicists working on a viable ethical framework for developing large language models and on mitigating the harms of the chatbots they power. For this work, she too was named one of the 100 most influential people in AI by TIME in 2023.

 

4. Joy Buolamwini

Photo of Joy Buolamwini by Shaniqwa Jarvis. Source: Poet of Code

 

Also known as the Poet of Code, Joy Buolamwini is both an artist and a scientist, holding a PhD from MIT, two master’s degrees from Oxford and MIT, and a bachelor’s degree in computer science from the Georgia Institute of Technology. She tackles AI’s social implications and harms through scholarly publishing, creative science communication, art installations, and short films. Impressive, right? Buolamwini has been named one of the most creative people in business by Fast Company, one of the World’s 50 Greatest Leaders by Fortune, and one of the 100 most influential people in AI by TIME, and she has made both Forbes’ 30 Under 30 and MIT Technology Review’s 35 Under 35 lists. She has also advised elected officials at US congressional hearings and serves on the Global Tech Panel.

 

In 2016, she founded the Algorithmic Justice League to highlight the injustice of the “coded gaze” and give users a channel to report it. Buolamwini coined this term for the algorithmic discrimination against people of color that plagues facial recognition technology. Her groundbreaking MIT thesis research revealed deeply unsettling racial and gender biases in AI services from major companies like Microsoft, IBM, and Amazon, and it played a pivotal role in leading these companies to halt the sale of facial recognition technology to law enforcement in 2020. Remember Buolamwini’s contribution the next time you are tempted to think that all academics are disconnected from real-world issues, happily confined to their ivory towers.

 

In 2023, she published the national bestseller Unmasking AI: My Mission to Protect What Is Human in a World of Machines, which describes her personal journey, intertwined with the trajectory of reckless, profit-driven technology development.

 

5. Raluca Crisan

A photo of Raluca Crisan. Source: The European AI Conference 2022 | AI.HAMBURG

 

Coming from data science rather than computer science, Raluca Crisan is a no-nonsense practitioner who has built a testing tool for checking and monitoring data and software for issues like leakage and bias. CTO and co-founder of the London-based startup Etiq, Crisan fights algorithmic bias and helps the machine learning community make its results more robust, which in turn yields better predictive performance than would otherwise be possible. An alumna of Amherst College, Oxford, and the University of York, Crisan has an interdisciplinary background in economics, English, and data science. She received a European Women in AI award in 2020 and was recognized as one of the 100 influential Women in AI Ethics in 2021.

 

Given the rapid automation of business through generative AI such as chatbots, which are trained on vast amounts of unfiltered data, there is growing concern about how profoundly our lives will be affected. As the women in AI ethics profiled here have shown, discrimination against ethnic and vulnerable groups is no longer just a theoretical issue discussed in scholarly publications; it is a harsh reality. One way to identify and mitigate algorithmic bias is through the methods proposed by Crisan and her team at Etiq. The startup’s solution enables everyone on a team, from business stakeholders to data engineers, to validate that the algorithms in their software perform as expected and to minimize the chance of unfair decisions.
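The kind of validation such tools perform can be illustrated with a minimal sketch. The function names, data, and threshold below are illustrative assumptions, not Etiq’s actual API: the idea is simply to compare a model’s error rate across demographic groups and flag a large gap as potential bias.

```python
# Hypothetical bias audit: compare per-group error rates and flag big gaps.
# Names, data, and the max_gap threshold are illustrative, not a real API.
def error_rate(y_true, y_pred):
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

def bias_audit(y_true, y_pred, groups, max_gap=0.1):
    # Compute the error rate separately for each demographic group.
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = error_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Tiny synthetic example: the model errs far more often on group "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates, gap, ok = bias_audit(y_true, y_pred, groups)
# Here group "a" has a 25% error rate, group "b" 50%, so the audit fails.
```

A real testing suite would add statistical significance checks, leakage detection, and monitoring over time, but the core question is the same: does the model treat comparable groups comparably?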




By Vanja Subotic, PhD Philosophy. Vanja Subotić works as a research associate at the University of Belgrade, where she also earned her PhD in Philosophy in 2023. She was a research fellow at the University of Turin, Italy, and visiting teaching staff at the University of Rijeka, Croatia. Vanja specializes in philosophy of science, philosophy of mind and cognition, and philosophy of language. She is passionate about science communication and public outreach and believes that everyone in academia has a moral and epistemic responsibility to leave the ivory tower now and then.