Health Providers Must Beware FCA Risks When Using AI

As the highly regulated healthcare industry grapples with how to incorporate emerging artificial intelligence (“AI”) tools into its platforms, AGG Litigation, Government Investigations, and Healthcare partner Sara Lord authored an article, published by Law360 on May 9, 2023, detailing risks under the False Claims Act (“FCA”) for providers leveraging the technology.

While ChatGPT has improved significantly with each new iteration since its initial release in November 2022, the product remains subject to error, manipulation, and misinformation, which raises concerns about fraud, particularly in healthcare, where decisions drive claims and invoice payments.

“Government contractors and service providers using AI to generate the information supporting their billing claims should be especially mindful of these considerations where the False Claims Act is concerned,” Sara said. “AI is frequently seen as a means of relieving contractors and providers of the many administrative requirements that are conditions for government reimbursement, and developers are already working on ways to make it industry specific and more accessible to users.”

As these tools improve, it is important for providers to recognize that they can still produce false, and even fabricated, information that exposes the provider to potential FCA liability. And if AI can generate information to support a claim, it can also generate claims for patients who do not exist and for services that were never provided.

Because AI responses are based on available data, providers need to ensure that those responses reflect the best and most recent data. As human involvement and oversight are reduced in favor of reliance on accumulated data, careless errors and inaccurate information in records may increase. And when entering data into AI tools, HIPAA should remain top of mind.
