
New York City Bans Bias in Use of Artificial Intelligence in Employment Decisions; What Does It Mean?

By John DiGiacomo

To streamline hiring, many employers have begun using artificial intelligence (“AI”) and machine learning programs to screen job applicants. However, this has given rise to concerns about AI bias. To combat this bias, New York City has just passed an ordinance banning bias in the use of AI and machine learning tools for employment decisions. The ordinance will become effective in 2023.

But what does “AI bias” mean? To understand, it helps to look at some recent developments in the financial industry, which is facing similar concerns about the use of AI and machine learning to make lending decisions. We can get a glimpse of the “AI bias” issue by looking at what politicians and regulators are saying about bias and potential discrimination in that industry.

For example, Congresswoman Maxine Waters, as Chair of the House Financial Services Committee, recently sent a letter to five major financial regulatory agencies. Essentially, the letter identifies two problems with AI bias: biased historical inputs and the use of supposedly “neutral” inputs. The first concern is summarized by this quote from the letter: “Historical data used as inputs for AI and ML can reveal longstanding biases, potentially creating models that discriminate against protected classes, such as race or sex, or proxies of these variables.” In effect, if the historical data fed into an AI program is biased, then the AI’s outputs will be biased as well. The second issue is similar: supposedly “neutral” inputs can themselves contain hidden bias. Examples include zip codes and the websites a borrower frequently visits. In her letter, Waters suggested that financial regulators pursue several methods of combating AI bias, such as demanding transparency and resisting purely automated decision-making.
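To make the “proxy variable” problem concrete, here is a short, simplified Python sketch. Everything in it is invented for illustration (the zip codes, the groups, the hire rates); the point is only that a screening rule built solely from a “neutral” input like a zip code, one that never sees the protected attribute at all, can still reproduce the disparity baked into the historical data:

```python
# Minimal sketch (synthetic data only): a "neutral" zip-code feature acting
# as a proxy for a protected class. All zip codes, groups, and hire rates
# below are hypothetical.
import random

random.seed(0)

# Synthetic applicant pool: group membership correlates with zip code.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group A applicants mostly live in zips 100-104, group B in 105-109.
    zip_code = random.choice(range(100, 105) if group == "A" else range(105, 110))
    # Historically biased outcome: group A was hired at a much higher rate.
    hired_historically = random.random() < (0.60 if group == "A" else 0.30)
    applicants.append((group, zip_code, hired_historically))

# A naive screening rule learned only from zip codes: keep the zips whose
# historical hire rate beats the overall average. The protected attribute
# never appears in the rule, yet the bias survives.
overall = sum(h for _, _, h in applicants) / len(applicants)
by_zip = {}
for _, z, h in applicants:
    by_zip.setdefault(z, []).append(h)
favored_zips = {z for z, hs in by_zip.items() if sum(hs) / len(hs) > overall}

for g in ("A", "B"):
    pool = [a for a in applicants if a[0] == g]
    selected = sum(1 for a in pool if a[1] in favored_zips)
    print(f"Group {g}: selected {selected / len(pool):.0%} by zip-only rule")
```

Because the rule never looks at group membership, it would pass a naive “we don’t use protected characteristics” check, yet it selects the two groups at starkly different rates.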

In the same vein, the New York City Ordinance focuses on transparency by requiring companies that use AI for employment decisions to conduct a “bias audit” and make the audit results available to applicants. The Ordinance also requires that job applicants be given notice that AI programs are being used in employment decisions. Notice must also be given concerning which parts of the job application will be subject to AI processing, what data is being collected, the sources from which data is collected and used, and the employer’s data retention and destruction policies. These notices must be given 10 days before the AI is used (though, frankly, it would be better if notice were given before an applicant fills out an online application).
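The Ordinance itself does not spell out exactly how a “bias audit” must be performed, but a common benchmark in employment-discrimination practice is the “four-fifths rule”: comparing each group’s selection rate to the highest group’s rate. The Python sketch below, using hypothetical numbers, shows roughly what that kind of impact-ratio calculation looks like:

```python
# Minimal sketch of one metric a bias audit might report: selection rates by
# category and the impact ratio (each group's rate divided by the highest
# group's rate). All counts below are hypothetical.
selections = {           # category -> (applicants screened in, total applicants)
    "Category 1": (480, 1_000),
    "Category 2": (310, 1_000),
    "Category 3": (450, 1_000),
}

rates = {cat: sel / total for cat, (sel, total) in selections.items()}
best = max(rates.values())

for cat, rate in rates.items():
    ratio = rate / best
    # Under the common "four-fifths" rule of thumb, ratios below 0.8 flag
    # potential adverse impact.
    flag = "  <-- potential adverse impact" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")
```

Whether regulators ultimately adopt this particular metric or another, the Ordinance’s point is the same: the results of some such audit must be disclosed to applicants.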

The NYC Ordinance actually reaches beyond AI and machine learning tools. It applies to any use of an “automated employment decision tool” (“AEDT”), a category that also includes statistical modeling and data analytics. AEDTs have been commonly used over the last decade for remote video employment interviews, so the Ordinance would apply to such interviews. The Ordinance is particularly directed at curtailing bias hidden in “simplified outputs” such as scores or rankings.

Companies can be penalized for violating the new Ordinance, but there is no private right of action.

If you have business law questions or questions about consumer privacy, data security or other legal issues related to internet law, contact the trusted internet and business lawyers at Revision Legal at 231-714-0100.
