Various levels of artificial intelligence (“AI”) have been around for a long time. AI systems are, at bottom, computer programs and algorithms used to perform tasks that might otherwise be done by human beings. As computing power and speed have increased, AI programs have become larger and are used for an increasing number of functions. AI programs are now used, for example, by many businesses as part of employee recruitment, hiring, and retention.
However, the increasing use of AI systems is creating a number of thorny legal issues that must be resolved either through the courts or through the legislative process. As one example, in late 2021, New York City enacted an ordinance regulating the use of AI and machine learning in hiring. See Forbes media report here. Along similar lines, the EEOC recently issued guidelines on the use of AI in hiring decisions, addressing whether such use might amount to disability discrimination.
Some common concerns involve these types of questions:
- Do AI systems that score or classify job applicants have inherent biases that create violations of anti-discrimination laws?
- Are inputs used by AI systems biased?
- Are the AI algorithms and inputs transparent enough for courts, lawmakers and the public to search for bias or even understand how outcomes are achieved?
- Are persons who are subject to AI use given notice?
- Should consent be given?
- Should persons be permitted to opt out of having AI used with respect to decisions being made?
- What data is being collected and used and what happens to that data?
- Is data used and stored sufficiently secured from wrongful access or exfiltration by cybercriminals?
As another example, in Washington, the House Financial Services Committee has been conducting hearings on AI and machine learning in the financial industry. The Committee is asking financial regulators — like the Office of the Comptroller of the Currency — to ensure that the use of AI by banks and lenders does not result in a rise in lending discrimination. In November 2021, the Committee sent a letter to financial regulators asking for investigations. Of notable concern were these items:
- Transparency — AI systems should not be unexplainable “black boxes”
- Explainability — what is the AI modeling? What are the data sets? What are the methodologies?
- Oversight — can regulators understand the AI systems enough to conduct proper oversight?
- Enforceability — can regulators understand the AI system to properly enforce the laws?
- Consumer privacy — do the AI systems endanger consumer privacy?
All of these concerns raise another legal issue with AI use: holding persons and organizations legally liable for wrongdoing and injuries. The relevant legal doctrines include products liability, negligence law, malpractice, toxic torts, and more. Will defendants avoid liability by claiming that “the AI did it”? Will judges and juries be able to understand how the events happened and how the results were achieved?
On a more mundane level, the use of AI is creating some interesting legal questions for intellectual property law. For example, there is now a worldwide debate about whether an AI system can own a patent. See Guardian media report here.
Contact the Trademark Lawyers at Revision Legal
For more information or if you have questions about creating and registering a trademark, contact the internet and IP lawyers at Revision Legal. You can contact us through the form on this page or call (855) 473-8474.