Building Transparency in the Age of Algorithms

06/24/2022

[ Article originally appeared at https://greenlining.org ]


What does AI transparency mean, and why do we need it? In this age of technology, artificial intelligence algorithms power the software behind life-changing decisions in employment, housing, and more. The tech world has a long history of leveraging software to maximize efficiency. Without oversight, these decision-making algorithms can replicate existing discriminatory practices and create new ones, deepening inequality, particularly for people of color and low-income people. This phenomenon is called algorithmic bias: it occurs when an algorithmic decision creates unfair outcomes that unjustifiably and arbitrarily privilege certain groups over others. Given the stakes involved in decisions like whether to provide a loan or offer a job, it would seem like a no-brainer to ensure those impacted understand how these decisions are made.

We need insight into and oversight of how these systems work, but AI transparency remains complicated. The few companies that do offer insight into AI-based decisions provide explanations that are neither meaningful nor helpful. For instance, Facebook (Meta)’s “Why am I seeing this?” tag on social media posts doesn’t actually provide transparency into its algorithm or how consumer data is used to determine which paid ads to promote over others. Without meaningful, specific explanations, Facebook denies its users autonomy over how their data is used.

The practice of building out documentation and risk assessments can help bridge the knowledge gap between algorithms and their outcomes. Documentation and risk assessments are tools that outline how an algorithm was designed, how it functions, and how it performs, and they provide an opportunity to investigate ethical and legal questions about the harms it may cause. In the past several years, researchers and policymakers alike have used documentation and risk assessments to increase AI accountability — from Google’s Model Cards to the European Union’s General Data Protection Regulation (GDPR).
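
To make this concrete, the sketch below shows the kind of fields model-card-style documentation typically covers, loosely following the spirit of Google’s Model Cards. The model, organization, and numbers are entirely hypothetical.

```python
# A minimal sketch of model-card-style documentation for a hypothetical
# resume-screening model. Every name and value below is illustrative.
model_card = {
    "model_details": {
        "name": "resume-screener-v2",   # hypothetical model
        "developed_by": "Example Corp",  # hypothetical organization
        "date": "2022-06",
    },
    "intended_use": "Rank job applications for human review; "
                    "not intended for fully automated rejection.",
    "training_data": "Historical hiring decisions, 2015-2020.",
    "evaluation": {
        # Performance is reported per subgroup, not just in aggregate;
        # this is what makes bias visible to readers of the card.
        "accuracy_overall": 0.91,
        "accuracy_by_gender": {"women": 0.87, "men": 0.93},
    },
    "limitations": "Trained on past decisions that may encode bias; "
                   "the performance gap across gender is under review.",
}
```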

Documentation can take many different forms — but with limited space and infinite questions, it’s important to establish transparency guidelines that capture what’s most useful.

Documentation standards should be designed with three main stakeholders in mind: those impacted by algorithmic decisions, regulators, and developers. To serve the unique goals of each group, key priorities include: 

  1. For impacted people and community members: information on the presence of bias
  2. For regulators: the results of bias testing and businesses’ explanations justifying any potential harm
  3. For industry and AI developers: clear examples of documentation & risk assessments to emulate

Impacted People & Community: Agency and Information

AI systems are built from people’s data and make decisions about them. It only makes sense that transparency efforts should center the information most useful to impacted people.

People need pathways to understand how AI systems relate to them and whether these systems lead to biased outcomes that may affect them. To that end, any impact assessment should report how the AI system performs on disaggregated subgroups by race, ethnicity, gender, and other protected characteristics. This information on bias must be public and easy to access, not limited to a select group of people or regulators. Once this information is accessible, future policies can empower people to decide how they participate and engage with these AI systems, whether by opting out of AI-made decisions, pursuing civil action, or other means.
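
As a minimal sketch of what disaggregated reporting could look like in practice, the snippet below summarizes a hypothetical lending model’s behavior separately for each subgroup. The column names and data are illustrative, not drawn from any real system.

```python
# Disaggregated bias reporting for a hypothetical lending model whose
# predictions and outcomes are logged in a pandas DataFrame.
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Summarize model behavior separately for each protected subgroup."""
    def summarize(g: pd.DataFrame) -> pd.Series:
        return pd.Series({
            "n": len(g),
            # Share of applicants the model approved (selection rate).
            "selection_rate": g["approved"].mean(),
            # Share of predictions that matched the actual outcome.
            "accuracy": (g["approved"] == g["repaid"]).mean(),
        })
    return df.groupby(group_col)[["approved", "repaid"]].apply(summarize)

# Illustrative data: each row is one loan applicant.
applicants = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   1,   0],
    "repaid":   [1,   0,   0,   1,   1,   1],
})

# A gap in selection_rate across groups is the kind of finding an
# impact assessment would need to publish, not bury.
print(subgroup_report(applicants, "race"))
```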

Regulators: The Rationale for Handling Risk

Algorithms are inevitably imperfect because they are created by biased and imperfect people. What regulators — the agencies and individuals responsible for enforcing laws — need to know is whether the developers of these systems have 1) identified potential harms, 2) understood the scale of those harms, and 3) taken steps to mitigate, or reduce, them. 

In the United States, civil rights laws prohibit discrimination on the basis of membership in a protected class — race, gender, and other protected categories — unless the practice is justified by a legitimate business purpose. To hold biased algorithmic systems accountable, regulators must be able to assess whether discrimination present in an algorithm is legal under existing U.S. law. In short, requiring that an impact assessment include a company’s business rationale will hold developers accountable for designing algorithms that abide by existing legal requirements.

Business and Industry: Make Examples and Templates  

Documentation compels companies to do their due diligence during development and, in turn, helps businesses improve the accuracy of their systems, understand their trade-offs, and identify and name the limitations of what they build. However, the United States has established no clear standards for what kind of documentation or transparency algorithms require. The question is: how will developers know what to include or how to write these impact assessments? 

The European Union’s General Data Protection Regulation, implemented in 2018, and the proposed rules that expand on it in the Digital Services Act and Digital Markets Act, require impact assessments from certain companies that use personal data. However, many countries and businesses have struggled to comply: the GDPR sets out broad requirements, but most companies had little experience conducting impact assessments, and the lack of templates and clear guidelines left them unable to complete the work. Government officials ended up taking on the lion’s share of the effort, and businesses were left frustrated by unclear expectations. In short, these impact assessments are only as useful as their given examples.
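
As one illustration of what such an example could look like, here is a minimal, hypothetical impact-assessment template. Its fields mirror the three questions regulators need answered (harms identified, their scale, mitigations taken) plus the business rationale discussed earlier; every name and value is invented for illustration.

```python
# A sketch of a reusable impact-assessment template of the kind the
# article argues should be published for developers to emulate.
from dataclasses import dataclass, field

@dataclass
class IdentifiedHarm:
    description: str            # e.g., lower approval rates for one group
    affected_groups: list[str]
    estimated_scale: str        # how many people, how often, how severe
    mitigation: str             # what the developer did to reduce the harm

@dataclass
class ImpactAssessment:
    system_name: str
    business_rationale: str     # the legitimate purpose the system serves
    harms: list[IdentifiedHarm] = field(default_factory=list)
    bias_test_results_url: str = ""  # public link to disaggregated results

# Filling in the template forces a developer to state, in writing, each
# harm, its scale, the mitigation, and the justification a regulator
# would weigh against anti-discrimination law. All values are invented.
assessment = ImpactAssessment(
    system_name="tenant-screening-v1",  # hypothetical system
    business_rationale="Predict likelihood of on-time rent payment.",
    harms=[IdentifiedHarm(
        description="Higher false-denial rate for applicants under 25",
        affected_groups=["age < 25"],
        estimated_scale="~4% of applicants per month",
        mitigation="Recalibrated score thresholds per age band",
    )],
)
```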

Conclusion

Thoughtful documentation can help meet the needs of the stakeholders affected by AI systems, whether that’s individuals seeking clarity on AI-driven decisions that impact their lives, business vendors aiming to sharpen the accuracy of the AI they deploy, or regulators tasked with holding companies accountable to anti-discrimination laws. As California looks to develop more standards on AI transparency and accountability, the question is not whether there should be documentation at all, but what kind, and what it should include.

SOURCE: https://greenlining.org/blog-category/2022/transparency-age-of-algorithms/


