The ethical and legal challenges of using generative Artificial Intelligence (AI) in healthcare were discussed during the latest instalment of Weill Cornell Medicine-Qatar’s (WCM-Q) Intersection of Law & Medicine series.
A panel of expert speakers hosted a webinar to explore the generative AI tools available to healthcare professionals, the legal and ethical risks involved, and the limitations of laws regulating their usage. The event, titled “Automated Healthcare: ChatGPT, Bing, Bard & the Law of Generative AI,” was coordinated and delivered by WCM-Q’s Division of Continuing Professional Development in collaboration with the College of Law at Hamad Bin Khalifa University (HBKU).
Course directors included Dr Thurayya Arayssi, professor of clinical medicine and vice dean for academic and curricular affairs at WCM-Q, and Dr Barry Solaiman, assistant professor of law at HBKU College of Law and adjunct assistant professor of medical ethics in clinical medicine at WCM-Q.
Other speakers were Dr Faisal Farooq, director of artificial intelligence at LinkedIn, California; Sara Gerke, assistant professor of law at Penn State Dickinson Law, Carlisle; Jessica Roberts, Leonard H. Childs Chair in Law, director of the Health Law & Policy Institute, and professor of law at the University of Houston; David A Simon, lecturer on law at Harvard Law School and research fellow at the Petrie-Flom Centre for Health Law Policy; Jamie Gray, director of the health sciences library at WCM-Q; and Dr Alaa Abd-alrazaq and Dr Arfan Ahmed, research associates at WCM-Q’s AI Centre for Precision Health.
Generative AI systems can be used for clinical decision support, medical record keeping, medical translation, patient triage, mental health support, and remote patient monitoring, among other applications. They can also support drug discovery, clinical documentation, treatment planning, and advanced imaging.
Although generative AI could benefit the healthcare sector, one of the key legal and ethical issues discussed during the session was that it also poses risks for end users. Among the challenges are that these systems could change the way virtual assistants interact with patients, and that no umbrella legislative framework currently governs the space.
Dr Solaiman said: “Generative AI raises questions about the guidelines that should be required during the development stage to avoid biases in the data used to train these systems. This is currently a very important topic of discussion because some hospitals are now considering using generative AI systems while others have already started developing them. Even though generative AI will provide support to the healthcare sector, the applications pose legal and ethical challenges.”
In his presentation titled “The Uses and Risks of Generative AI,” Dr Farooq examined how generative AI systems are trained on text, images, audio, and video, and how this can give rise to risks relating to privacy, trust, safety, low interpretability, bias, misuse, and over-reliance.
Dr Roberts presented “Biased ChatBots & Health Disparity Populations,” in which she discussed biased data in healthcare and the risks it poses. The presentation also assessed the legal protections against data bias in healthcare and their shortcomings.
The experts agreed that although generative AI applications could change the way medicine is practised, AI could never replace the role of a physician or a healthcare practitioner.
Dr Arayssi said: “The discussion has provided a platform for healthcare professionals to learn more about the generative AI applications that are available to them and the unintended consequences of using these tools.”
Pictured: Dr Thurayya Arayssi and Dr Barry Solaiman.