Tess Posner, CEO of AI4All
Humans are naturally prone to bias; external factors, opinions and feelings all influence the decisions we make. Machine learning forms the core of modern AI systems, with deep learning algorithms being particularly popular. These algorithms are data-hungry: what makes them effective is a large quantity of good training data relevant to the objective you’re trying to achieve. However, a machine learning system is only as good as the data you feed it. What if there are implicit or explicit biases in that training data? The AI system will inherit those same biases. Some of these can be easy to spot, but sometimes the bias is more subtle than anyone initially realizes, which can have significant unintended consequences.
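To make that inheritance concrete, here is a minimal, purely illustrative Python sketch (all group names and numbers are invented): a trivially simple "model" that predicts the majority outcome for each group will faithfully reproduce whatever historical skew its training data contains.

```python
from collections import Counter

# Hypothetical, deliberately skewed historical data: (group, outcome) pairs.
# Group "a" was approved 90% of the time; group "b" only 30% of the time.
training_data = (
    [("a", "approve")] * 90 + [("a", "deny")] * 10 +
    [("b", "approve")] * 30 + [("b", "deny")] * 70
)

def train(data):
    """A trivially simple 'model': predict the majority outcome per group."""
    counts = {}
    for group, outcome in data:
        counts.setdefault(group, Counter())[outcome] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(training_data)
print(model)  # the model reproduces the historical skew
# {'a': 'approve', 'b': 'deny'}
```

Nothing in the training step is "wrong" in a technical sense; the model simply learns the pattern it is given, which is exactly why biased data yields biased systems.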
Diverse Teams Produce More Diverse Data
The AI Today podcast interviewed Tess Posner, CEO of AI4All, a non-profit organization focused on increasing diversity in AI. Unfortunately, the tech industry today is very homogeneous; there are not many women or people of color in the field. AI4All aims to bring more diverse backgrounds to the AI world. Research has consistently shown that groups with little to no diversity are less productive and less imaginative in their thinking. AI4All’s goal is to bring people into the AI industry who might not otherwise have found their way into it.
In AI, limited diversity has an even bigger impact. Groups made up primarily of similar people naturally exhibit bias. Developers with similar backgrounds tend to think in similar ways, and this bias can be introduced into datasets. When an AI system learns from that data, the biases reappear and are sometimes amplified by the way algorithms interpret the data.
As a result, the datasets that power AI are not as diverse as we’d like them to be. One example of the impact of biased training data is the Correctional Offender Management Profiling for Alternative Sanctions software, or COMPAS for short. COMPAS is used by U.S. courts to assess the likelihood that a defendant will reoffend and to inform sentencing decisions. While this augmented intelligence tool sounds great in theory, the software has been found to be racially biased.
Another big area in which biased data impacts AI is facial recognition. On numerous platforms, AI-based facial recognition systems have trouble recognizing women and people of color as accurately as they identify white men. Much of this problem stems from a lack of diversity in the data the models were trained on before release. The problems with Apple’s face unlock and Google Photos’ face and object identification have been well documented in the media. One could argue that if AI teams had more diversity, they would produce training datasets that are more representative of society at large.
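One way teams surface this kind of disparity is to break evaluation metrics out by demographic group rather than reporting a single aggregate accuracy. A minimal Python sketch, with made-up numbers loosely echoing the gaps reported in the press:

```python
def accuracy_by_group(results):
    """Compute recognition accuracy separately for each demographic group."""
    totals, hits = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative (invented) outcomes: (group, was the face identified correctly?)
results = (
    [("lighter-skinned men", True)] * 99 + [("lighter-skinned men", False)] * 1 +
    [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)
print(accuracy_by_group(results))
# {'lighter-skinned men': 0.99, 'darker-skinned women': 0.65}
```

A single blended accuracy of 82% would hide exactly the disparity the disaggregated view exposes.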
Why Biased AI Systems Are a Big Deal
When bias becomes embedded in AI, it affects our daily lives. The bias shows up as exclusion: certain groups being denied loans, being unable to use the technology, or finding that the technology does not work the same for everyone. As AI becomes a bigger part of our everyday lives, the risks from bias only grow.
Companies, researchers and developers have a responsibility to minimize bias in AI systems. Much of this comes down to ensuring that datasets are representative and that their interpretation is correctly understood. However, ensuring the datasets aren’t biased won’t by itself remove bias. Having diverse teams of people working on AI development can be equally impactful. AI4All works to correct this, starting by understanding how the field’s lack of diversity arose.
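A simple first check on representativeness is to compare each group’s share of the dataset against its share of the population the system will serve. A hypothetical Python sketch (all figures invented):

```python
def representation_gap(dataset_counts, population_shares):
    """For each group, return (share of dataset) minus (share of population).

    Positive values mean over-representation; negative, under-representation.
    """
    total = sum(dataset_counts.values())
    return {
        group: dataset_counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Hypothetical numbers: a 1,000-image face dataset vs. a 50/50 population.
dataset_counts = {"men": 800, "women": 200}
population_shares = {"men": 0.5, "women": 0.5}
print(representation_gap(dataset_counts, population_shares))
# men are over-represented by about 0.3; women under-represented by about 0.3
```

This check is crude — representativeness alone does not guarantee fairness — but it catches the most obvious skews before a model is ever trained.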
How to Build Diversity in AI Teams
To overcome many of these issues, it is important to hire and form teams of employees from diverse backgrounds. More input and different perspectives help create more capable AI systems that avoid many of the problems attributed to biased training sets. Talent is also a major issue in the AI field: there is simply not enough skilled talent to meet the demand from industry. This talent shortage starts as early as high school, where not enough people have access to the tools they need to learn about AI and computer science.
The lack of standardized training in computer science and related fields means AI experts come from a small pool of schools and have very similar backgrounds. Because of this, programs that open up technology education to everyone are essential, and AI4All is working to expand them. Tess Posner recounted the story of a student who came up to her after a class. The student told her she had once thought a career in tech was impossible because she wasn’t smart enough. After attending the class and working alongside other girls, she knew that wasn’t true. She had been empowered to find success.
There are so many opportunities in artificial intelligence that there is no room for homogeneous groups. We need more diversity in order to develop the future, not just products to make money. AI4All is making big strides towards this, and they aren’t alone. Numerous programs have been created to increase diversity and bring technology education to everyone.
Kathleen Walch, columnist, is co-founder and senior analyst at Cognilytica, an AI research and advisory firm, and co-host of the popular AI Today podcast. Kathleen is a serial entrepreneur, savvy marketer, AI and machine learning expert, and tech industry connector.
Prior to her work at Cognilytica, Kathleen founded the tech startup HourlyBee, an online scheduling system for home services, where she quickly became an expert in grassroots marketing, networking, and employee management. Before that, Kathleen was a key part of the direct marketing operation at Harte Hanks, managing large-scale direct mail campaigns for clients including Bed Bath and Beyond and BuyBuyBaby. Managing mailings with millions of records each month, she created process efficiencies that saved thousands of dollars and days of processing time on each campaign. Kathleen then spent many years as the Content and Innovation Director for TechBreakfast, the largest monthly morning tech meetup in the nation, with over 50,000 members and 3,000+ attendees at monthly events across the US, including Baltimore, DC, NY, Boston, Austin, Silicon Valley, Philadelphia, Raleigh and more. In addition, she is a SXSW Innovation Awards judge and an AI/Hardware Meetup organizer.
As a master facilitator and connector who is well connected in the technology industry, Kathleen regularly meets with innovators in key markets and gets the opportunity to see the latest technologies from game-changing companies.
Kathleen graduated from Loyola University with a degree in Marketing. In her spare time she enjoys hanging out with her husband and two young girls and working out — you can frequently find her on jogging paths and in workout studios.