Our lives are shaped by algorithms to an extent that most people probably don’t even realize. We’re all familiar with dystopian movies, books and tropes that foretell what the world will look like when computers are the ones in control, but in real life, the question is not whether computers control things or people control things — it’s always people. Computers do what we tell them to do. The fact that many if not most of the biometric and artificial intelligence (AI) tools we use have racially and gender-biased outcomes isn’t the work of The Ultimate Computer; it’s our own biases, written in code — and that includes both conscious and unconscious biases.

If we are ever to put to rest our history of discrimination, slavery and oppression, we need to deal effectively with it in every facet of our lives, including our digital lives. Along with regulators and citizens, companies and investors need to understand how to recognize bias in technology and take steps to eradicate it.

Where do algorithms touch our lives?

  • Healthcare. Healthcare systems often use predictive algorithms to identify where spending should be directed, and those algorithms have routinely produced racially biased outcomes. “At a given risk score, Black patients are considerably sicker than white patients,” researchers found.1
  • Private security. Many retailers have at least tried using facial recognition software to enhance store security. The aim is often to protect both customers and employees by identifying “repeat offenders and organized crime syndicates,” but in at least one case, a retailer installed the technology in “largely lower-income, non-white neighborhoods,” a move easily interpreted as reflecting existing racial bias.2
  • Law enforcement. It is well known that many police departments already use facial recognition, and while there are legitimate applications of that technology in law enforcement, it can also be used to surveil and oppress minorities. This is particularly concerning not just because much policing already appears to be racially biased, but because the potential for false positives is so high: the only demographic group for which facial recognition technology is reasonably accurate is white males. Nearly 44% of tech employees responding to a recent survey said they believed their companies should not work with law enforcement in the wake of 2020’s Black Lives Matter protests.3
  • National, state and local security. There are many national security uses of facial recognition, and we probably don’t know what all of them are. We do know that the Department of Homeland Security and Customs and Border Protection use facial recognition systems, and most of these systems do not involve the consent of those scanned. When facial recognition is used for government no-fly lists, for example, people who are flagged are generally not told, which means they have no way to file a grievance or take legal action to redress misunderstandings or false positives.4
  • Targeted advertising or products. Artificial intelligence is used to score or rate beauty, which involves not only biometrics but also subjective judgments about what is or isn’t pretty.5 As with other biometric applications, there are useful and helpful uses, but potentially harmful ones as well. Cosmetics companies have invested in beauty AI applications, and to the extent that these encourage Black women and other women of color to invest in certain skin care and cosmetic products, they may also increase these groups’ exposure to toxic products. While some beauty products are formulated to limit exposure to hazardous chemicals, there are proportionally fewer choices in the less-hazardous group available to Black women than to white women,6 and women of color have higher levels of beauty-related environmental contaminants in their bodies than white women, regardless of socioeconomic status.
  • Credit scoring. Lenders are legally prohibited from making credit decisions based on applicants’ race, color, religion, nationality, sex, marital status or age, yet even without explicit information on those characteristics, credit-scoring algorithms often produce highly biased outcomes. Lenders can, however, use credit scores, and millions of Americans don’t have one. People who live in “credit deserts,” lower-income census tracts that are often home to people of color, are eight times more likely to lack credit scores. Moreover, being identified as medically high risk can affect credit scores, and since many healthcare algorithms have racially biased outcomes, the effect spills over into access to finance.7
  • Hiring. Employers are often deluged with applications for open positions, and many use algorithms to help decide which applicants to interview. One field experiment that randomly assigned names to fictitious résumés showed that applicant names alone produced significant racial bias in callback rates; we explore that experiment further below.8

How does bias get into biometrics?

We assume most biometrics developers do not deliberately incorporate racial bias, so how do algorithms and AI systems wind up being so routinely and reliably racially biased? Sometimes it is because biased data are used to “train” the models to make decisions, and sometimes it is because incorrect data are used. Finally, machine learning, the engine of AI, can also latch onto correlations that were never built into any data set but that emerge when vast amounts of data (big data) are harvested for training.

These systems can exhibit gender bias as well. In a well-known incident in 2019, many customers noticed that Apple’s credit card apparently offered women smaller lines of credit than men, even when their economic circumstances were similar. Goldman Sachs, which issued the card and set its credit lines, stated that gender data were not collected to make those determinations, but big data sets contain many proxies for gender that can still produce biased outcomes.9

Training data

For the most part, AI and machine learning are black boxes. We know what data are put into them, and we see what comes out; what happens inside is rarely visible. But they are all built by humans, and to teach the algorithms to make decisions, humans feed them mountains of data. This is called training, and the data used to train many systems may already be biased, incomplete or simply ill-suited to the decisions being made.10 An investigation into a machine learning system used by courts to predict who is likely to reoffend, for example, found that it rated Black people as higher risks than white people. Trained on data about who is most likely to end up in jail, the algorithm ended up with the same bias that the criminal justice system already reflects — a bias against Black people.11

One famous experiment tested responses to fictitious résumés sent in reply to a wide variety of job postings and found significant racial bias in the results. Fictitious applicants with white-sounding names like Emily and Greg were 50% more likely to receive callbacks than fictitious applicants with names that sounded African American, like Lakisha and Jamal. Even applicants with high-quality résumés were less likely to receive callbacks if their names sounded traditionally African American.8 Now suppose a hiring algorithm were trained on data from actual job applications and hires. It would be learning from data that contain significant racial bias, and it would likely reproduce that bias. Algorithms are simply our biases — conscious or unconscious — written in code.

“Algorithms are simply our biases — conscious or unconscious — written in code.”
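
To make that mechanism concrete, here is a minimal sketch in Python. The data are entirely synthetic (not the résumé-study dataset), and a simple logistic regression stands in for whatever model a vendor might actually use: if historical callback decisions penalized one name group, a model trained to imitate those decisions learns the same penalty, even though race never appears as an input.

```python
# Illustrative sketch only: synthetic data and hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
quality = rng.normal(size=n)              # résumé quality, same distribution for both groups
white_name = rng.integers(0, 2, size=n)   # 1 = white-sounding name, 0 = Black-sounding name

# Biased historical decisions: callbacks depended on quality AND on the name group
callback = quality + 1.0 * white_name + rng.normal(scale=0.5, size=n) > 0.8

model = LogisticRegression().fit(np.column_stack([quality, white_name]), callback)

# Two otherwise identical résumés that differ only in the name group
p_white = model.predict_proba([[0.0, 1]])[0, 1]
p_black = model.predict_proba([[0.0, 0]])[0, 1]
print(f"predicted callback probability, white-sounding name: {p_white:.2f}")
print(f"predicted callback probability, Black-sounding name: {p_black:.2f}")
```

The gap between the two printed probabilities is inherited entirely from the biased labels the model was trained on.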

This is by no means hypothetical. The American healthcare system readily demonstrates the impact of biased training data. It uses predictive algorithms to identify healthcare needs in the population, and those predictions affect where healthcare dollars are allocated. One widely used algorithm routinely directed those dollars toward white patients and areas because it was trained not on data identifying people’s diseases and conditions, but on healthcare spending. Since Black people are on average considerably less wealthy than white people, they tend to spend less on everything, including healthcare.1
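
A stylized illustration of that label problem, using entirely made-up numbers rather than anything from the study cited above: when the target a model is trained to predict is spending rather than illness, two equally sick patients receive different “risk” scores whenever one of them faces barriers to spending on care.

```python
# Illustrative sketch only: hypothetical patients and invented figures.
patients = [
    {"id": "patient_a", "chronic_conditions": 4, "annual_spending": 12_000},
    {"id": "patient_b", "chronic_conditions": 4, "annual_spending": 5_000},
]

for p in patients:
    # "Risk" here is defined by the proxy target (spending),
    # not by how sick the patient actually is.
    risk_score = p["annual_spending"] / 1_000
    print(f"{p['id']}: {p['chronic_conditions']} chronic conditions, "
          f"risk score {risk_score:.1f}")
```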

Facial recognition technologies are unfortunate exemplars as well. “In fact, a commonly used dataset features content with 74% male faces and 83% white faces. If the source material is predominantly white, the results will be too … Since facial recognition software is not trained on a wide range of minority faces, it misidentifies minorities based on a narrow range of features.”12 MIT research found that facial recognition accurately identified a face as male 99% of the time when the picture was of a white man, but the error rate rose to between 20% and 34% when the picture was of a darker-skinned woman.
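
These gaps only become visible when accuracy is measured separately for each group. Below is a hedged sketch of such a disaggregated audit; the group names and error rates are invented to loosely echo the figures above, and the point is simply that a single headline accuracy number can hide large differences.

```python
# Illustrative sketch only: synthetic predictions and invented error rates.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group_names = ["lighter-skinned men", "darker-skinned women"]
group = rng.integers(0, 2, size=n)
true_label = rng.integers(0, 2, size=n)        # e.g. the gender the system should report

error_rate = np.where(group == 0, 0.01, 0.30)  # hypothetical 1% vs 30% error rates
flipped = rng.random(n) < error_rate
predicted = np.where(flipped, 1 - true_label, true_label)

print(f"overall accuracy: {(predicted == true_label).mean():.1%}")
for g, name in enumerate(group_names):
    mask = group == g
    print(f"{name}: accuracy {(predicted[mask] == true_label[mask]).mean():.1%}")
```

A vendor reporting only the first line would look acceptable; the disaggregated lines tell the real story.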

In another famous example, Amazon found that its own hiring algorithm was biased against women because the model was trained on the résumés the company had received over a specified period, and most of those résumés came from men. The algorithm was essentially trained to favor men over women.

Incorrect data sources

In some cases, regulations have made the perfect the enemy of the good. The FICO credit score, which is grandfathered into our credit system and endorsed by regulators, shows manifestly disparate ratings by race (Figure 1). When FICO scores are used, Black and Hispanic Americans are disadvantaged.

Figure 1: Credit scores by race. Source: Aaron Klein, “Reducing Bias in AI-based Financial Services,” Brookings Institution, July 10, 2020.

After FICO scores were adopted, the Consumer Financial Protection Bureau attempted to address their imperfections by putting rules in place requiring that any new scoring system must not cause disparate impact: its adverse outcomes must not fall disproportionately on protected groups (for example, a group that makes up only 25% of the population receiving 50% of the negative results). But FICO itself causes disparate impact and would not satisfy this criterion if it were proposed today. In effect, an alternative to FICO has to be perfect, not just better, in order to be adopted.
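
One simple way to operationalize a check like this, sketched here with invented counts rather than anything drawn from the CFPB’s actual rules, is to compare each group’s share of adverse outcomes, such as loan denials, with its share of applicants.

```python
# Illustrative sketch only: hypothetical applicant and denial counts.
applicants = {"group_a": 7_500, "group_b": 2_500}  # group_b is 25% of applicants...
denials    = {"group_a": 1_000, "group_b": 1_000}  # ...but receives 50% of denials

total_applicants = sum(applicants.values())
total_denials = sum(denials.values())

for group in applicants:
    applicant_share = applicants[group] / total_applicants
    denial_share = denials[group] / total_denials
    ratio = denial_share / applicant_share
    print(f"{group}: {denial_share:.0%} of denials vs {applicant_share:.0%} of "
          f"applicants (disproportionality ratio {ratio:.2f})")
```

By a test like this, group_b’s ratio of 2.0 would flag disparate impact; the bind described above is that FICO itself would fail the same test.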

Developing better solutions is an iterative process. We cannot keep letting discrimination happen simply because the alternatives we have found are better rather than perfect.

Spurious correlation and proxy discrimination

One of the great advantages of AI and machine learning is the ability to find patterns in data sets too large for unaided human minds to comprehend. But not all patterns are meaningful or helpful. In the same vein as the old saw that stock market movements are correlated with skirt lengths, we have to be skeptical about the “insights” machine learning may produce, especially when there is no plausible causal link between the two variables involved.

One example is a correlation that emerged from a model constructed at Duke University to predict the probability that a loan would be repaid. One factor that emerged was the power of a name: if a person’s email address contained their name, and that name was associated with Black or white naming patterns, a racial bias could creep in. Even though race was not an explicit factor in the model, email addresses could serve as a proxy for race and produce a discriminatory outcome. According to one researcher, it is common to assume that users of Macintosh computers are slightly better credit risks than PC users, and Macintosh users are disproportionately white. A model that used computer type as an input would therefore, all else equal, produce higher credit scores for white people. While it is illegal to charge people different prices based on race, and lenders cannot use data that are “solely correlated with race and are not predictive of repayment,” there are many ways to proxy race through correlations that may or may not have anything to do with the issue at hand — in this case, loan repayment.
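
A minimal sketch of that proxy effect, using entirely hypothetical numbers: race is never an input to the scoring formula, but because the computer-type feature is correlated with race, the scores still end up differing by race on average.

```python
# Illustrative sketch only: synthetic data, invented correlations and weights.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
is_white = rng.integers(0, 2, size=n)      # demographic label, never given to the model
# Hypothetical correlation between race and computer type
uses_mac = rng.random(n) < np.where(is_white == 1, 0.7, 0.3)

repayment_history = rng.normal(size=n)     # independent of race by construction
# The "model" sees only repayment history and computer type
score = 650 + 40 * repayment_history + 15 * uses_mac

print(f"average score, white applicants: {score[is_white == 1].mean():.0f}")
print(f"average score, Black applicants: {score[is_white == 0].mean():.0f}")
```

All else equal, the computer-type term adds roughly six points to the white group’s average here, purely through the correlation.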

Getting the bias out

Some flawed algorithms are fixable,13 but it can be challenging to get the bias out of most black-box algorithms. Users often don’t know what data were used to train the model or whether those data contain racial biases, and unless outcomes are explicitly tested, it is almost impossible to tell which ones are generated by spurious correlations. There are roles for federal and other governmental policy in establishing appropriate rules for AI and biometrics, and in enforcing them.14 Regulatory approaches are already being developed: the New York City Council is considering a proposed law that would require sellers of algorithmic hiring assessment tools to audit them for built-in bias and would require users of such tools to disclose their use to candidates.15

The private sector can also do a better job. In mid-2020, after the killing of George Floyd and months of Black Lives Matter protests nationwide, several developers of these algorithms announced either that they do not sell facial recognition technologies to police (e.g., Microsoft and IBM) or that they would institute moratoria on police use of such technology (Amazon). Some said they would not sell such technology until there is federal regulation governing it. However, a moratorium is not necessarily permanent, and it may not prohibit existing customers from using flawed systems. For developers of AI and biometric technologies, it is crucial to take every possible step to keep racial (and other) biases from becoming part of their algorithms. For users, it is reasonable to ask vendors for information about bias in their systems. But while many vendors claim their results are accurate, there is little evidence that they are subjecting themselves to audits or even sharing how their systems perform across different demographic groups.10

A role for investors

There are multiple ways that biased algorithms can hurt people: being misidentified as a shoplifter in a store, getting a lower credit score, having less access to healthcare or to finance, and many others. Investors, while they may not face these exact risks in their day jobs, also have reason to see flawed and biased AI as risky. There is certainly reputational risk in investing in companies whose products or practices are shown to be racially biased, and litigation risk is likely to grow as well. And for companies using AI systems to protect employee safety, to the extent that these tools do not work as advertised and misidentify possible threats, that safety is compromised.

If the risks are well understood and priced accordingly, investors have many tools to ensure they will be compensated for taking them. But for the most part, the risk that any particular AI system or algorithm even has biased outcomes is largely unknown and currently unknowable.

It is time for investors to step up on these risks, and we at Impax will be doing so. We will reach out to any companies we own that produce biometrics or algorithms and ask them to disclose any racially biased outcomes from use of their systems, and we will ask them to conduct regular audits to check for bias. We will also reach out to companies that may be using these systems to see whether they have asked vendors for similar information and if they will disclose the results to investors. Users should always understand the outcomes of any biometric system, not only for the intended purpose, but in terms of racial and gender demographics. We will also ask companies that may be using these systems to support federal and other governmental regulations that establish boundaries for the use of biometrics and AI.

Conclusion

Our future is digital. Online work, digital networking and AI, among other developments, present tremendous opportunities for improved productivity, fewer repetitive and stultifying tasks, and human minds liberated to do what they do best. But there are also tremendous risks, and we should not fall into the trap of thinking that because something is digital it is unbiased. All of our digital tools are built by humans, and humans are biased. We need to take steps now — before even more of our lives are controlled by artificial intelligence — to ensure that we are not encoding our biases in black boxes that make them harder to root out.


1 Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan, “Dissecting Racial Bias in an Algorithm Used To Manage the Health of Populations,” Science (366:6464), Oct. 25, 2019.

2 Jeffrey Dastin, “Rite Aid Deployed Facial Recognition Systems in Hundreds of US Stores,” Reuters Investigates, July 28, 2020.

3 Emily Birnbaum and Issie Lapowsky, “How Tech Workers Feel About China, AI and Big Tech’s Tremendous Power,” Protocol, March 15, 2021. 

4 Douglas Yeung, Rebecca Balebako, Carlos Ignacio Gutierrez and Michael Chaykowsky, “Face Recognition Technologies: Designing Systems that Protect Privacy and Prevent Bias,” Rand Corporation, 2020. 

5 Tate Ryan-Mosley, “I Asked an AI To Tell Me How Beautiful I Am,” MIT Technology Review, March 5, 2021.

6 Marcia G. Yerman, “The Effects of Toxic Beauty Products on Black Women,” Huffpost, March 22, 2017; Nneka Leiba and Paul Pestano, “Study: Women of Color Exposed to More Toxic Chemicals in Personal Care Products,” Environmental Working Group, Aug. 17, 2017; and Ami R. Zota and Bhavna Shamasunder, “The Environmental Injustice of Beauty: Framing Chemical Exposures from Beauty Products as a Health Disparities Concern,” American Journal of Obstetrics and Gynecology, Oct. 2017.

7 Motley Fool, “Are Algorithms Hurting Your Finances? What You Need to Know,” Jan. 30, 2020.

8 Marianne Bertrand and Sendhil Mullainathan, “Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination,” American Economic Review (94: 4), Sept. 2004. 

9 Will Knight, “The Apple Card Didn’t ‘See’ Gender — and That’s the Problem,” Wired, Nov. 19, 2019. 

10 Rebecca Heilweil, “Why Algorithms Can Be Racist and Sexist,” Vox, Feb. 18, 2020.

11 Brian Resnick, “Yes, Artificial Intelligence Can Be Racist,” Vox, Jan. 24, 2019.

12 Amanda Fawcett, “Understanding Racial Bias in Machine Learning Algorithms,” Educative, June 7, 2020.

13 Emily Sokol, “Eliminating Racial Bias in Algorithm Development,” Health Analytics, Dec. 26, 2019.

14 See, for example, “Facial Recognition Technology: Privacy and Accuracy Issues Related to Commercial Uses,” Government Accountability Office, GAO-20-522, 2020.

15 Alexandra Reeve Givens, Hilke Schellmann and Julia Stoyanovich, “We Need Laws To Take On Racism and Sexism in Hiring Technology,” The New York Times, March 17, 2021.

Julie Gorte, Ph.D.

Senior Vice President for Sustainable Investing

Julie is a leading figure in Impax Asset Management’s sustainable investing work, coordinating systemic engagement and the financial implications of integrating sustainability into investment decision-making. Julie researches the connections between sustainability and economic performance. She also tracks and develops insights into the impact of public policy on investment and communicates with public policymakers to help make public policy more favourable to sustainability and sustainable investing. Julie is a member of our Gender Analytics team and the Impax Sustainability Centre.

Prior to joining the firm, Julie headed up the social investment strategy at Calvert. She has held senior roles at the Congressional Office of Technology Assessment, The Wilderness Society, and the Environmental Protection Agency.

Julie serves on the boards of the Endangered Species Coalition, E4theFuture, Clean Production Action, the Forum for Sustainable and Responsible Investment (US SIF) and is the board chair of the Sustainable Investments Institute. She holds a Ph.D. and a master’s degree in resource economics from Michigan State University and has a bachelor’s degree in forest management from Northern Arizona University.

David Loehwing

Head of Sustainability & Stewardship, North America

David leads the sustainability team in North America and is responsible for overseeing Impax’s sustainability and ESG research and development of its methodologies, as well as engagement and stewardship across Impax portfolios. David leads on Impax’s internal sustainability frameworks including the construction and management of Impax’s Systematic ESG Rating and the Impax Gender Score. He is a member of the Impax Sustainability Centre, Sustainability Lens Committee, Sustainability Policy Committee and ESG & Sustainability Committee which govern integration of sustainability and stewardship in Impax’s investment process.

David’s role at Impax also includes Co-Chair of the Environment Group and member of the Equity, Diversity, and Inclusion Group.

David has worked in sustainable investing since 1998. Before joining Impax in 2007, he worked in the field at Citizens Advisers and the Investor Responsibility Research Centre. David has been active within industry working groups and advisory committees related to sustainable investing.

David graduated from Bowdoin College, Maine, with a BA in sociology.
