The Last Word: How AI Must Change to Foster Diversity in Tech
Building fairer and more effective AI systems requires us to prioritize diversity and challenge bias head-on, argues Humans for AI founder Beena Ammanath.
tl;dr
As the founder of Humans for AI, Ammanath emphasizes the need for diversity in AI to combat bias and enhance system reliability.
Ammanath’s experience revealed a shortage of diverse candidates, prompting the creation of Humans for AI to integrate underrepresented groups into AI development.
Initiatives include training events for women's groups, programs in which college students teach AI fluency to high schoolers, and AI training for survivors of human trafficking. Proactively addressing biases and embedding diverse perspectives can make AI fairer and more inclusive.
As a business executive and founder of Humans for AI, I’ve seen firsthand the critical need for diversity in AI and tech. While building data science teams, I struggled to find female candidates or people of color. This shortage exposed a larger issue—one that AI can help solve, but only if we approach it thoughtfully.
AI isn’t just about coding; it’s about amplifying human intelligence. To do this effectively, we need subject matter expertise—such as understanding the mechanical properties of jet engines to predict their failures accurately. The interdisciplinary nature of AI opens doors for people from diverse backgrounds to contribute, even if they aren’t traditional technologists. Recognizing this opportunity, I founded Humans for AI to help women and underrepresented minorities find their place in AI development. Our mission isn’t just about checking diversity boxes; it’s about creating AI systems that are robust, fair, and reflective of the diverse world they are meant to serve.
As AI begins to impact every aspect of our lives, we risk embedding and amplifying existing biases if it isn’t shaped by diverse perspectives. To tackle this, we’ve launched initiatives ranging from events for women’s groups to programs where college students teach AI fluency to high schoolers in underserved communities. One of our most meaningful experiences was at UC Berkeley, where we provided AI training to survivors of human trafficking. Watching these young people, who had been deprived of so much, engage with AI and discover new possibilities for themselves was a powerful reminder of why our work matters.
The real challenge AI faces today isn’t some sci-fi scenario of machines taking over the world; it’s the very real and present danger of bias and unreliability. AI systems are trained on historical data, which is often riddled with human biases. For instance, resume screening tools might inadvertently favor certain demographics due to biased training data. Addressing these biases shouldn’t be an afterthought—it must be integral to AI development.
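To make the screening example concrete, here is a minimal, hypothetical sketch of the kind of check that can surface such skew early. It compares selection rates across demographic groups using the four-fifths rule, a common rough screen for disparate impact. The data, group labels, and threshold are invented for illustration; this is not a tool or method from Humans for AI.

```python
# Hypothetical disparate-impact audit for a resume screener.
# The 0.8 threshold reflects the "four-fifths rule" often used
# as a rough first-pass fairness screen.

from collections import defaultdict

def selection_rates(decisions):
    """Fraction of candidates advanced per demographic group.

    decisions: list of (group_label, advanced) pairs, where
    advanced is True if the screener passed the candidate on.
    """
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, advanced in decisions:
        total[group] += 1
        passed[group] += int(advanced)
    return {g: passed[g] / total[g] for g in total}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (r, r / best < threshold) for g, r in rates.items()}

# Invented screener output: (group, advanced to interview?)
decisions = ([("A", True)] * 45 + [("A", False)] * 55
             + [("B", True)] * 22 + [("B", False)] * 78)

for group, (rate, flagged) in disparate_impact(decisions).items():
    note = "below four-fifths of top group's rate" if flagged else "ok"
    print(f"group {group}: selection rate {rate:.2f} ({note})")
```

Run against a screener's historical decisions before deployment, a check like this turns "bias shouldn't be an afterthought" into a concrete gate in the development process.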
By embedding diverse perspectives from the start, we can catch and mitigate biases before they become entrenched in AI models. This proactive approach makes AI fairer and more inclusive. Companies must self-regulate, educate their teams about AI's ethical implications, and make sure they ask the right questions whether they are building AI systems or buying them.
AI can also be a tool for promoting diversity. It can create personalized learning materials that cater to different learning styles and backgrounds, helping more people understand and engage with AI, from nurses in healthcare to accountants in finance. Governments and policymakers have a crucial role as well. While technology often moves faster than regulation, we must create thoughtful guidelines for AI use. Just as it took years to establish safety standards for cars, we now have the chance to shape ethical AI regulations.
AI’s future depends on embracing diversity—not just for ethical reasons but for its own advancement. At Humans for AI, we’re committed to this mission and invite companies, policymakers, and individuals to join us. Together, we can harness AI’s potential to create a more inclusive and diverse future.