New York City Bans Bias in Use of Artificial Intelligence in Employment Decisions; What Does it Mean?

by John DiGiacomo

Partner

Internet Law

To streamline hiring, many employers have begun using artificial intelligence ("AI") and machine learning programs to screen job applicants. However, this practice has raised concerns about AI bias. To combat that bias, New York City has passed an ordinance banning biased use of AI and machine learning tools in employment decisions. The ordinance becomes effective in 2023.

But what does "AI bias" mean? To understand it, it is useful to look at some recent developments in the financial industry, which is facing similar concerns about the use of AI and machine learning in lending decisions. We can get a glimpse of the AI bias issue by looking at what politicians and regulators are saying about bias and potential discrimination in that industry.

For example, Congresswoman Maxine Waters, as Chair of the House Financial Services Committee, recently issued a letter to five major financial regulatory agencies. See letter here. Essentially, the problems with AI bias involve historical inputs and the use of supposedly "neutral" inputs. The first concern is summarized by this quote from the letter: "Historical data used as inputs for AI and ML can reveal longstanding biases, potentially creating models that discriminate against protected classes, such as race or sex, or proxies of these variables." In effect, if the historical data used by an AI program is biased, then the AI outputs are also biased. The second issue is similar: supposedly "neutral" inputs can contain hidden bias. Examples include zip codes and a borrower's frequently visited websites and domain destinations. In her letter, Waters suggested that financial regulators combat AI bias through several methods, such as demanding transparency and resisting purely automated decision-making.

In the same vein, the New York City Ordinance focuses on transparency by requiring companies that use AI for employment decisions to conduct a "bias audit" and make the audit results available to applicants. The Ordinance also requires that job applicants be given notice that AI programs are being used in employment decisions. Notice must also be given concerning which parts of the job application will be subject to AI processing, what data is being collected, the sources from which data is being collected and used, and the employer's data retention and destruction policies. These notices must be given 10 days before the AI is used (although, frankly, it would be better if notice were given before an applicant fills out an online application).

The NYC Ordinance actually reaches beyond AI and machine learning tools. It applies to any use of an "automated employment decision tool" ("AEDT"), which also includes statistical modeling and data analytics. AEDTs have been commonly used over the last decade for recorded remote video employment interviews, so the NYC Ordinance would apply to such interviews. The Ordinance is particularly directed at curtailing bias hidden in "simplified outputs" such as scores or rankings.

Companies can be penalized for violating the new Ordinance, but there is no private right of action.

If you have business law questions or questions about consumer privacy, data security or other legal issues related to internet law, contact the trusted internet and business lawyers at Revision Legal at 231-714-0100.
