Professor Chris Chambers Goodman, "Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants" -- University of Colorado Law Review
Professor Chris Chambers Goodman's article, "Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants" (SSRN), is published in the University of Colorado Law Review. The article examines how employers are using AI technologies and highlights concerns with AI-assisted employment processes.
Abstract of "Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants"
This Article proceeds as follows: Part I provides background on how employers are using AI technologies and highlights concerns with AI-assisted employment processes. Part II describes efforts by the executive branch to regulate AI systems and some of the limits in the federal arena. Part III highlights recent state and local attempts to regulate AI for the first time, focusing on a 2023 New York City ordinance, and explores ways to build upon and improve that start. Part IV concludes the Article with additional recommendations for double-blinding data to optimize the balance between privacy and fairness. For instance, organizations that use AI tools to sort and hire job seekers should consider conducting an ethical risk assessment followed by a bias risk assessment. These organizations should then make any needed adjustments before deploying AI in their hiring processes. This assessment should be an iterative process to ensure that the AI tool does not perpetuate bias or prevent the organization from hiring diverse candidates.