Texas Attorney General Obtains Settlement Over Allegedly False and Misleading Statements About Healthcare Artificial Intelligence Product Accuracy
On September 18, 2024, the Attorney General (AG) of Texas announced a settlement with an artificial intelligence-focused healthcare technology company to resolve allegations of false and misleading statements about the accuracy of its product. The company provides a generative artificial intelligence-enabled service that helps health system doctors and nurses summarize, chart, and draft clinical notes in the electronic health record, among other AI-enabled functionality related to patient care. The company promotes its product’s capability to minimize AI “hallucinations” using “highly tuned adversarial AI” and “board-certified clinician oversight.” The Texas AG asserts that the company’s claims that its product was “highly accurate” and that its “critical hallucination rate” was “<.001%” were false and misleading and may have deceived hospitals about the product’s safety and accuracy, putting the public interest at risk. The AG investigated the matter as a violation of the Texas Deceptive Trade Practices – Consumer Protection Act.
The company denies all wrongdoing and contends that its statements about the hallucination rate were accurate. To resolve the matter, however, the company entered into an Assurance of Voluntary Compliance, pursuant to which it agrees to comply with specific requirements for five years, including the following:
- If the company makes direct or indirect reference to metrics about the output of its generative AI products in its marketing or advertising, it must clearly and conspicuously disclose the meaning or definition of the metric and the method used to calculate it. Alternatively, the company may retain an independent auditor to substantiate the claims;
- The company and its agents must not (a) make any misleading or unsubstantiated statements about the accuracy, testing or monitoring procedures, metrics, or data used to train any of its products, (b) mislead any customer or user about the accuracy, functionality, purpose, or any feature of any of its products or services, or (c) fail to disclose any financial arrangement with any person who participates in the company’s marketing or advertising, or who endorses or promotes any of its products or services;
- The company must provide all current and future customers of any of its products or services with documentation that clearly and conspicuously discloses any known or reasonably knowable potentially harmful uses or misuses of the products and services, which must include (a) the type of data and/or models used to train the products and services, (b) a detailed explanation of the intended purpose and use, and any necessary training or documentation for proper use, of the products and services, (c) any known or reasonably knowable limitations of the products or services, including risks to patients or healthcare providers, such as the risk of physical or financial injury in connection with inaccurate outputs, (d) any known or reasonably knowable misuses of a product or service that can increase the risk of inaccurate outputs or of harm to individuals, and (e) all other documentation reasonably necessary for a user to understand the nature and purpose of an output generated by a product or service, monitor for patterns of inaccuracy, and reasonably avoid misuse of the product or service.
Beyond Texas Consumer Protection Enforcement: Emerging AI Laws and Regulations
As commercial interest in developing and using artificial intelligence in the healthcare sector continues to grow, so does the interest of legislators and regulators. Since the White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in late 2023, the U.S. Department of Health and Human Services (HHS) has published two final rules that address AI: the Health Data, Technology, and Interoperability Final Rule (January 2024) and the final rule under Section 1557 of the Affordable Care Act (May 2024). At the state level, applicable generally and not just in the health and wellness context, Utah has enacted its Artificial Intelligence Policy Act, Colorado has its Consumer Protections in Interactions with AI Systems law, and California’s rules regarding automated decision-making technology under the California Consumer Privacy Act continue to circulate in draft form. Even without AI-specific laws, however, healthcare providers and vendors that develop or use AI are subject to regulatory oversight under existing, general-purpose consumer protection and privacy laws, as the Texas AG has demonstrated.
What Can You Do?
As healthcare entities and their service providers increasingly implement and use products and services leveraging artificial intelligence, including machine learning, predictive analytics, and generative capabilities, it is prudent to do so in reference to principles of trustworthy and responsible AI. In addition to addressing the specific requirements of emerging laws, establishing, implementing, documenting, and maintaining an AI governance program not only promises to yield better and more accepted products but also lays the foundation for compliance with laws that have yet to be written. A practical first step to understanding and adopting the principles (algorithmic validity and reliability, safety, security and resiliency, accountability and transparency, explainability and interpretability, privacy, and fairness with mitigation of harmful bias) is to review the NIST AI Risk Management Framework, the associated Generative Artificial Intelligence Profile, and the companion NIST AI RMF Playbook. This practical, flexible, and risk-balanced approach can help businesses reduce the chance of harm, meet the expectations of customers and regulators, and realize the increasing benefits that this rapidly evolving technology will make possible.
For questions about this settlement or AI generally, please contact your Quarles attorney or:
- Daniel Guggenheim: (619) 822-1474 / dan.guggenheim@quarles.com
- Meghan O’Connor: (414) 277-5423 / meghan.oconnor@quarles.com
- Simone Colgan Dunlap: (602) 229-5510 / simone.colgandunlap@quarles.com