The growing use of automated systems, especially artificial intelligence (AI), has heightened concerns about the technology's risks to civil rights. Four U.S. federal agencies recently issued a joint statement addressing accountability for these innovations. Each agency pledged to monitor automated systems and strictly enforce the relevant antidiscrimination laws. Businesses have been put on notice that the federal government will actively govern AI practices.
There's still much uncertainty surrounding artificial intelligence and its impact on the workplace. Review our summary of the joint statement and our recommended policies to help your organization stay compliant and limit potential exposures.
How AI Discrimination Can Occur
Automated systems can contribute to unjustified differential treatment or impacts that disfavor individuals based on classifications protected by law (e.g., religion, age, disability). In certain circumstances, these biases can violate legal protections. Discrimination occurs because humans select the data that automated systems use and determine how the results are applied; unconscious biases filter into the algorithms and are then automated through AI systems.
Agencies That Pledged to Enforce Their Authority Over Automated Systems
The Equal Employment Opportunity Commission (EEOC)
The EEOC identified AI technology as a priority in its 2023-2027 Strategic Enforcement Plan, signaling that AI-related enforcement actions are a focus. The agency recently issued guidance on employers' use of algorithms and AI tools for recruitment and other employment decisions to prevent violations of employees' federal civil rights. In 2023, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative to ensure that workplace use of AI complies with federal civil rights laws.
The Department of Justice (DOJ)
In the joint statement, the DOJ's Civil Rights Division highlighted a statement of interest that offers guidance, including:
- Examples of innovative technologies used by employers
- An emphasis that employers must consider any technology's impact on individuals with disabilities
- An explanation of an employer's ADA obligations when using automated decision-making tools, including when reasonable accommodations must be made
The Consumer Financial Protection Bureau (CFPB)
The CFPB enforces federal laws that prohibit discrimination and unfair, deceptive or abusive practices in the financial marketplace. The Bureau responded to the use of automated systems with a circular confirming that the technology falls under existing federal consumer financial laws and their adverse action requirements. It also reiterated that companies can't use the technology's newness as a defense for violations. Creditors that cannot provide the specific reasons for an adverse action should refrain from employing automated systems.
The Federal Trade Commission (FTC)
The FTC issued a report evaluating the use and impact of AI to combat online harms identified by Congress. The report outlines significant concerns that AI tools can be inaccurate, biased and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. The FTC also warned market participants that operate automated tools with discriminatory impacts: if they make unsubstantiated claims about AI or deploy AI before assessing and mitigating its risks, they may violate the FTC Act. Finally, the Commission has required firms to destroy algorithms and other work products that were trained on data that should not have been collected.
In the joint statement, the agencies note that current laws and regulations addressing discrimination and other unlawful practices apply to automated systems and other innovative technologies just as they do to any other business practice.
What the Joint Statement Means for Employers
The joint statement from the four agencies was issued for informational purposes only and doesn't establish or create any new legal rights or obligations. Employers that incorporate automated systems into employment decisions should become familiar with the joint statement and ensure all related policies and practices comply with the applicable laws enforced by each agency.
Suggested Workplace Policies on Artificial Intelligence
Laws and regulations have lagged behind the business world's acceptance and incorporation of AI into operations. Some federal and state regulations already address AI tools in the employment context, but employers should anticipate that additional federal and state laws will emerge as automated systems become more advanced.
Data Privacy & Surveillance
Employers have implemented AI-derived insights to track worker performance. While such tracking can increase workforce and organizational productivity, it risks infringing on employees' privacy rights. Some jurisdictions (e.g., New York, Delaware, Connecticut) have imposed consent and notice requirements for using AI monitoring tools in the workplace; others impose similar requirements when AI technology is used as an interview tool. Verify that your business has policies addressing these issues so that employee AI monitoring isn't intrusive and doesn't reveal private or confidential information, and be transparent with applicants and employees about the technology's use.
Copyright & Intellectual Property Rights
Content generated through AI can potentially violate copyright laws or infringe on third-party intellectual property rights. In addition, employee conversations with AI chatbots may be reviewed by AI trainers and inadvertently disclose sensitive business information and trade secrets to third parties, exposing employers to legal risk under privacy laws. Organizations should also consider the status of any content generated by AI tools and identify who holds the rights to that content. Employers should review and update their confidentiality and trade secret policies to ensure they cover third-party AI tools, and train employees on potential copyright and intellectual property concerns so that AI-generated content excludes any protected or confidential data.
Antidiscrimination Concerns
Discrimination from AI technology, whether intentional or not, can leave your company financially liable for expensive lawsuits and investigations. For example, AI algorithms that drive employment decisions could be built on biased historical data sets if benchmarked resumes or other job requirements reflect protected characteristics (e.g., age, race, gender, national origin). Employers should exercise caution when developing, applying or modifying the data used to operate AI tools for employment decisions.
While many employers already have antidiscrimination policies, they should institute bias audits to impartially evaluate any disparate impacts their AI tools have on protected classes. Organizations should also review their AI-based compensation management tools to prevent pay equity law violations.
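To make the bias-audit idea concrete, one common screening metric is the selection-rate ratio between groups, often checked against the EEOC's "four-fifths" guideline (a ratio below 0.8 is a common flag for potential adverse impact). The sketch below is a simplified illustration, not a legally sufficient audit; the group labels and data are hypothetical.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, was_selected) pairs."""
    totals = Counter()
    selected = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the four-fifths guideline, ratios below 0.8 warrant review."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes from an AI hiring tool
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(outcomes)   # group_a: 0.75, group_b: 0.25
ratios = impact_ratios(rates)       # group_b ratio ≈ 0.33, well below 0.8
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A real audit would use much larger samples, test for statistical significance, and typically be performed or validated by an independent party, as some jurisdictions' bias-audit rules require.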
Ethical Issues
An employer's ability to control AI tools will become more limited as the technology continues to advance, so organizations shouldn't delay establishing policies that ensure the ethical use of AI. There are still many unknowns about AI tools, but employers can establish policies now to account for what is known and reevaluate them as the technology evolves.
We’re Here to Help Protect Against AI Risks
Artificial intelligence technology has revolutionized the employment landscape. As more organizations embrace this technology, proper workplace policies can help employers protect against related risks and prevent potential violations. A proactive approach to AI-related policies and procedures can help employers identify their exposures and outline strategies to address them. If you have questions about your coverage or additional risk management suggestions, connect with a member of our team.