Artificial intelligence (AI) applications in health care can create better, stronger, and faster solutions to complex problems for providers, such as clinical diagnosis, health care staffing, and revenue cycle management. But AI can also heighten risks to vulnerable patients and expose providers to lawsuits and other legal ramifications. As a result, implementing AI systems that touch clinical care, whether at the point of patient contact or in administration, demands close, coordinated planning and deliberate, measured decision-making.
So, how and where does the health care provider begin implementing a robust, safe, and highly effective AI program? Successful AI programs in health care begin—and arguably end—with data governance.
While data is the key to achieving the promise of AI, it is also its biggest threat. An AI model fed biased, mislabeled, or unreliable data can produce serious health consequences, such as misdiagnosis, inequitable treatment recommendations, and loss of trust in the provider-patient relationship. If AI vendors collect as much data as possible, including patient data, to train their systems (possibly for purposes outside the scope of the engagement), those systems might expose that data in their outputs in violation of privacy and security regulations. Another concern is who ultimately “owns” the inputs, outputs, and trained models when health data is used to train an AI system. The training data on a digital health platform, for instance, could infringe a third party’s intellectual property (IP), or the AI application could generate outputs that resemble another party’s IP-protected process, procedure, or work; meanwhile, each party’s contractual interests may hinge on rights held by, or notices that should have been given by, the others.
Health care entities can mitigate these data challenges by understanding and accounting for the legal and operational considerations described below.
Legal Considerations
Health care providers must give special attention to several contract provisions when negotiating with AI vendors to ensure proper governance of the data:

1. Indemnity provisions must be robust enough to cover the health care provider’s defense and legal expenses arising from an AI vendor’s violation of federal and/or state law.
2. Comprehensive privacy and security representations and warranties should establish verifiable guardrails around the AI developer/vendor’s use and security of patient data in compliance with HIPAA and other applicable federal and state laws, protecting against, for example, use of the data for the vendor’s own proprietary purposes.
3. The agreement must clearly set forth each party’s rights in and ownership of the data (including training, input, and output data) to mitigate IP disputes.
4. Noninfringement and performance warranty provisions need to hold the AI vendor accountable for illegal or unreliable data outputs.
5. Monitoring and audit provisions should empower the health care provider to verify that the AI vendor complies with all applicable regulatory and contractual requirements and to retain strategic control over the data.
Operational Considerations
Understanding and managing data flow undergirds solid AI system development. Hospital operations can be highly fragmented, divided among health care delivery departments and specialized administrative functions. As a result, a health care system must intentionally create a cohesive, centralized AI governance program to ensure reliable data and accurate algorithms at every level and in every division that uses AI. Data governance will be a key component of that overall AI governance program.
A hospital’s AI governance program should operate like a well-run health care compliance program to ensure as much control as possible over the data flow. That means involving the entire organization, top to bottom, with accountability and communication at every level. Support at the board level helps prioritize data governance, and a centralized authority at the operational level for all things AI (e.g., an AI compliance officer) helps align those activities with organizational priorities. That authority also depends on a strong communication link between executive management and the board. Multidisciplinary, cross-functional work teams can surface and address issues early in the design and delivery of AI applications. Finally, the organization should perform routine audits of AI programs and outputs, evaluate the results, and reconfigure the audits as needed.
In sum, health care organizations can capture the benefits of AI and mitigate its risks by focusing on intentional data governance throughout an AI system’s development, use, and maintenance. FBT attorneys can help health care providers create and implement an overall AI governance program, including the critical data governance component. We can also help adapt existing data governance programs into AI governance frameworks. For more information, contact the authors or any attorney on the firm’s Artificial Intelligence team.