Artificial Intelligence in Healthcare Translation and Interpretive Services


As artificial intelligence’s role in healthcare services continues to expand, lawmakers have been taking action to regulate its use at both the federal and state levels. Like the practice of medicine by AI, healthcare translation and interpretive services are an area to which the Centers for Medicare & Medicaid Services and a number of state legislatures are paying attention. The legal framework governing healthcare translation and interpretive services was not written with generative AI in mind, and as AI-powered interpretive tools become more commercially accessible, the gap between the established framework and the services current technology can deliver has created ambiguity around what is permissible.

The primary federal framework for the provision of interpretive services is Section 1557 of the Affordable Care Act, which prohibits discrimination on the basis of national origin – including discrimination against individuals with limited English proficiency – by covered healthcare entities. This framework, however, neither clearly encompasses nor excludes AI and machine translation. The implementing regulations under Section 1557 require that covered entities offer a qualified interpreter to individuals with limited English proficiency. Among other requirements, a qualified interpreter must have demonstrated proficiency in speaking and understanding both English and another language, and must adhere to “generally accepted interpreter ethics principles, including client confidentiality.”[1]

While the regulations do not prohibit the use of machine translation, they do require that a qualified human translator or interpreter review interpretive services or translation services provided via machine translation “when the underlying text is critical to the rights, benefits, or meaningful access of an individual with limited English proficiency, when accuracy is essential, or when the source documents or materials contain complex, non-literal or technical language.”[2] Practically speaking, many documents in a healthcare setting are likely to meet at least one of these criteria, meaning AI translation without human review might not be compliant under these regulations.

Adding to the ambiguity, the Department of Health and Human Services (HHS) has offered varying positions on human review of machine translation. In May 2024, the HHS Office for Civil Rights (OCR) rescinded and replaced portions of the prior 2020 Section 1557 rule. OCR noted that it had received a comment requesting that machine translation always be checked by a qualified human translator – but it chose not to implement such a requirement or to require patient notification when machine translation is used.[3] A few months later, in December 2024, OCR issued a Dear Colleague Letter stating that in exigent circumstances, a qualified translator may not feasibly be able to proofread a machine-generated translation until after the emergency has passed. This exception is a narrow one, however, and a translator still must review the translation “as soon as practicable.” The letter also provided that patients should be warned that a machine-translated document may contain errors.[4]

As federal regulators’ positions on the issue of AI in interpretive services continue to evolve, it is no surprise that some states have begun to take action. The Texas Responsible AI Governance Act (TRAIGA) requires that a disclosure be provided if AI is used in relation to healthcare services or treatment. Similarly, California’s AB 3030 requires that AI communications that are not reviewed by a human be accompanied by a disclaimer (although human review is not required). Notably, California’s AB 1242 (currently pending in the California Senate) would exclude AI without human review from the definitions of “qualified interpreter,” “qualified translator,” and “translation.”[5] California AB 1242 is among the most explicit in requiring human oversight of AI translation, and its wording reflects a belief that current AI tools do not satisfy the “qualified” requirements.

As seen above, the legal framework governing AI in healthcare translation and interpretation is taking shape in real time. The common theme across these federal and state frameworks is that AI may assist with translation and interpretation, but AI functionally cannot substitute for a qualified human translator where meaningful language access is required. Covered entities evaluating AI translation tools will need to be aware of each state’s varying disclosure and review requirements on top of the ambiguous federal framework HHS has established. As AI technology continues to advance and regulators develop a more refined sense of what it can and cannot do, these frameworks will continue to evolve.

FOOTNOTES

[1] 45 C.F.R. § 92.4.

[2] 45 C.F.R. § 92.201(c)(3).

[3] 89 Fed. Reg. 37522, 37581 (May 6, 2024).

[4] Office for Civil Rights, Department of Health & Human Services, Re: Language Access Provisions of the Final Rule Implementing Section 1557 of the Affordable Care Act (Dec. 5, 2024).

[5] AB 1242 §§ 135001(c)-(e).
