Artificial intelligence is increasingly becoming part of everyday business, from customer service chatbots to AI-driven sales assistants. AI tools are also becoming more human-like, leading to a very practical legal question: Do businesses have to tell users they are interacting with AI? The answer is not a simple ‘yes’ or ‘no.’ It mainly depends on where your business operates, how the AI is applied, and whether the interaction could mislead a reasonable consumer. If your business is using chatbots or automated systems, read on to learn more about AI disclosure rules.
State Laws on AI Disclosure Rules
Several states have recently passed laws that specifically address AI disclosure in consumer interactions, and the requirements vary from state to state.
For example, California’s bot disclosure law makes it unlawful to use a chatbot to communicate with someone online with the intent to mislead them about its artificial identity to influence a commercial transaction, unless the business clearly discloses that it is a bot. The New York Synthetic Performer Disclosure law requires commercial advertisers to disclose conspicuously when a synthetic performer is used in a visual or audiovisual advertisement.
Colorado’s AI Act requires disclosure of “high-risk” AI systems, such as those involved in consequential decisions about employment, housing, or healthcare, unless it is obvious that AI is being used. Maine’s Chatbot Disclosure Act requires disclosure when a chatbot could reasonably mislead a consumer into thinking they are interacting with a human. New Jersey law requires that, where a bot is used in advertising or sales, disclosure be made at the start of the interaction. Utah’s AI disclosure laws require disclosure in certain professional or consumer interactions, especially if a user asks or when the interaction concerns regulated services.
Even when state law doesn’t specifically require disclosure, businesses are not off the hook. General consumer protection laws, which prohibit unfair or deceptive acts or practices (UDAP), still apply. The Federal Trade Commission (FTC) can take action against a business that uses AI in a way that misleads consumers. The risk increases in situations where users expect human interaction, such as customer support, dating platforms, or professional services. If your business benefits from such interactions without clarifying its AI use, it could face regulatory scrutiny.
What You Should Do as a Business
Given the growing number of laws governing AI disclosure, the safest approach is transparency. Clear, upfront disclaimers, such as “You are chatting with an AI assistant,” can significantly reduce your legal risk. You should also consider context. For instance, AI used in sensitive interactions, such as healthcare, finance, or legal services, should be disclosed upfront for clarity and accountability.
Finally, stay informed. AI regulation is evolving quickly, and more states are likely to introduce AI disclosure requirements that could affect how you run your business.
So, are businesses required to tell users they are chatting with AI? Increasingly, yes, especially where there is a risk of confusion or deception.
Contact the Internet Law and Compliance Attorneys at Revision Legal
For more information, contact the experienced Internet Law and Compliance Lawyers at Revision Legal. You can contact us through the form on this page or call (855) 473-8474.