The Greenlining Institute Releases New Report Examining Biased Algorithms that Invisibly Limit Opportunities for Marginalized Groups
02/27/2021
[ Article was originally posted on https://greenlining.org ]
Human-designed algorithms and artificial intelligence can create redlines and roadblocks to getting a job, receiving healthcare, and investing in neighborhoods.

The Greenlining Institute released a report titled “Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination.” The report examines how biased algorithms discriminate against people of color, women, and people who earn lower incomes. Often the discrimination is invisible to its victims. The findings of this research shine a light on what Greenlining calls algorithmic redlining and provide recommendations on how to update laws to address this growing problem.

Decision-making algorithms work by taking the characteristics of an individual, such as the age, income, and ZIP code of a loan applicant, and reporting back a prediction of that person’s outcome (for instance, the likelihood they will default on a loan) according to a certain set of rules. That prediction is then used to make a decision, in this case to approve or deny the loan. But if the training data is biased, the algorithm can “learn” the pattern of discrimination and replicate it in future decisions. For example, a bank’s historical lending data may show that it routinely and unfairly gives higher interest rates to residents of a majority-Black ZIP code. A banking algorithm trained on that biased data could pick up that pattern of discrimination and learn to charge residents of that ZIP code more for their loans, even though it never sees the race of any applicant.

“With this report, the Greenlining Institute elevates the harm algorithmic redlining is causing to marginalized communities, and puts forth specific recommendations to promote accountability and transparency,” said Vinhcent Le, Technology Equity Legal Counsel at the Greenlining Institute.
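The proxy effect described above can be sketched in a few lines of Python. Everything in this sketch is invented for illustration: the ZIP codes, the interest rates, and the `quote_rate` helper are all hypothetical, and the “model” is nothing more than a per-ZIP average of past rates. Even so, it reproduces the historical disparity in future quotes without ever receiving race as an input.

```python
from collections import defaultdict

# Hypothetical historical loan records: (ZIP code, interest rate %).
# The first ZIP stands in for a majority-Black neighborhood that was
# unfairly charged higher rates in the past.
historical_loans = [
    ("94601", 7.5), ("94601", 8.0), ("94601", 7.8),
    ("94602", 4.1), ("94602", 4.3), ("94602", 4.0),
]

# "Training": learn the average past rate for each ZIP code.
rates_by_zip = defaultdict(list)
for zip_code, rate in historical_loans:
    rates_by_zip[zip_code].append(rate)
learned_rate = {z: sum(r) / len(r) for z, r in rates_by_zip.items()}

def quote_rate(applicant_zip):
    """Quote a rate for a new applicant. Race is never an input,
    yet the ZIP-code pattern carries the old bias forward."""
    return round(learned_rate[applicant_zip], 2)

print(quote_rate("94601"))  # 7.77 -- the biased premium persists
print(quote_rate("94602"))  # 4.13
```

A real lending model would use many more features and a statistical learner, but the mechanism is the same: any feature correlated with race (here, ZIP code) can act as a proxy, so removing race from the inputs does not remove the discrimination from the outputs.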
“We have an opportunity to ensure the decision-making tools our society uses are building equity instead of advancing disparities.”

Despite the massive impact algorithms have on the day-to-day lives of citizens, there are currently no laws that effectively hold governments, companies, and organizations accountable for the development, implementation, and impact of their use. Algorithms are designed by people, and people may have gaps in their knowledge, carry biases, or simply want to do things the cheapest, simplest way. That has been shown to lead to flawed algorithms that make bad decisions. Algorithmic accountability laws would allow us to identify and fix algorithmic harms and to enforce our existing laws against discrimination. Algorithmic transparency and accountability measures can include algorithmic impact assessments, data audits to test for bias, and, critically, a set of laws that penalize algorithmic bias, particularly in essential areas like housing, employment, and credit. California’s legislature is now considering a bill, AB 13, which would take the first steps toward regulating algorithmic bias.

“We need to update our discrimination laws to reflect the realities of today’s technological world,” said Debra Gore-Mann, President and CEO of the Greenlining Institute. “Instead of a defensive strategy aimed at limiting discrimination and preventing disparate impacts, we promote an idea called algorithmic greenlining. This approach emphasizes using automated decision systems in ways that promote equity and help close the racial wealth gap. This means that algorithms go beyond simply not causing harm to addressing systemic barriers to economic opportunity.”

Additional Examples of Biased Algorithms at Work:
# # #

THE GREENLINING INSTITUTE is a multi-ethnic public policy, research and advocacy institute that envisions a nation where race is never a barrier to economic opportunity and communities of color thrive. www.greenlining.org @Greenlining

SOURCE: https://greenlining.org/press/2021/the-greenlining-institute-releases-new-report-examining-biased-algorithms-that-invisibly-limit-opportunities-for-marginalized-groups/