In the 1970s and 1980s, top orchestras moved to blind auditions, which conceal the identity of the performer so that judges focus on musical ability rather than gender, race, education, or other criteria that have nothing to do with talent and technique. Interestingly, blind auditions were originally adopted in response to students' complaints that auditions were biased in favor of graduates from top music programs such as Juilliard. The most notable impact of blind auditions, however, appeared in a different bias: gender bias. Over roughly 25 years, the share of women in orchestras shifted significantly: in 1970, women made up only 5% of the players in the top five U.S. orchestras; by 1997 they represented 25%. Still not adequate, but a marked improvement.
In other areas, we can collect data to determine whether biases exist that need to be reduced or eliminated. In HR, studies such as this National Bureau of Economic Research field experiment have shown that applicants with traditionally white-sounding names needed to send about 10 resumes to receive one callback, while those with typically African-American-sounding names needed to send around 15. A number of companies now leverage data science and analytics platforms like Alteryx to strip identifying elements from applicants' resumes in an effort to reduce opportunities for bias to enter the hiring process.
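As a minimal sketch of what that kind of anonymization can look like in practice (this is my own illustration in Python, not Alteryx's actual workflow; the field names and the redaction list are assumptions), the idea is simply to mask identifying fields before a screener ever sees them:

```python
# A minimal sketch of resume anonymization. The field names and the
# redaction list are illustrative assumptions, not any vendor's schema.
REDACTED_FIELDS = {"name", "email", "phone", "address"}

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of a parsed resume with identifying fields masked."""
    return {
        field: "[REDACTED]" if field in REDACTED_FIELDS else value
        for field, value in resume.items()
    }

applicant = {
    "name": "Jamal Washington",
    "email": "jamal.w@example.com",
    "education": "B.S. Computer Science",
    "experience": "5 years, data engineering",
}
print(anonymize_resume(applicant))
# {'name': '[REDACTED]', 'email': '[REDACTED]',
#  'education': 'B.S. Computer Science',
#  'experience': '5 years, data engineering'}
```

The screener still sees everything relevant to the job (education, experience) while the fields most likely to signal race or gender never reach them.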
When bias reduction methods have the opposite effect
As much as I love and live by data, we can't discount the human element in identifying and eliminating bias. In one Australian study, a team of behavioral economists worked with more than 2,000 managers in the Australian Public Service. The researchers randomly assigned resumes to a control group, which saw applicant gender, and a treatment group, which saw the same resumes with gender disguised.
The researchers expected that disguising applicant gender would reduce gender bias, much as blind auditions did for orchestras, but were shocked to find the opposite result. The managers, it appeared, were already practicing a form of affirmative action, and disguising gender on the resumes prevented them from doing so.
The role of artificial intelligence and machine learning in bias reduction
When leveraging artificial intelligence and advanced analytic methods, we must be careful that the inputs don't bias the outcomes. In many cases, models are built on historical data, and if those data encode biases, the biases can propagate into future decision-making. In one now-famous "AI fail," a tech company looking to automate initial resume screening built a model on the characteristics of its historically "high-achieving" employees. The inputs, however, were flawed: the tech industry is heavily male-dominated, so the company's high-achieving employees were disproportionately male, and a model trained to imitate past outcomes learned to favor male candidates.
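To make that failure mode concrete, here is a toy sketch (my own illustration in Python with scikit-learn, not the company's actual system). Even with gender excluded from the features, a proxy that correlates with gender, such as membership in a women's club, picks up the penalty encoded in the biased labels:

```python
# Toy illustration: a label that encodes historical bias leaks into a
# model even when the protected attribute is excluded as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)          # 1 = male, 0 = female (hidden from model)
skill = rng.normal(0, 1, n)             # true job-relevant signal
# A proxy feature that only women tend to have on their resumes.
womens_club = (gender == 0) & (rng.random(n) < 0.3)

# Historical "high achiever" labels favor men independent of skill,
# mimicking a male-dominated promotion history.
label = (skill + 1.0 * gender + rng.normal(0, 1, n)) > 1.0

# Train WITHOUT the gender column; the proxy still carries the bias.
X = np.column_stack([skill, womens_club.astype(float)])
model = LogisticRegression().fit(X, label)

print("coef on skill:        %+.2f" % model.coef_[0][0])
print("coef on women's club: %+.2f" % model.coef_[0][1])
# The proxy gets a negative weight: resumes mentioning the club are
# penalized even though it says nothing about skill.
```

The point of the sketch: dropping the protected attribute is not enough when the training labels themselves reflect a biased history, because proxy features inherit that bias.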