Predictions of AI-powered models are strictly trial-specific, lack generalisability: Study

AI-powered prediction models made accurate predictions within the trial they were developed in, but gave “random predictions” outside of it, according to new research. Researchers said the study showed that the generalisability of predictions from artificial intelligence-based models across different study centres cannot be ensured at the moment, and that these models are “highly context-dependent”. Results of the study have been published in the journal Science. Pooling data from across trials did not help matters either, the team found.

The team of researchers, including those from the universities of Cologne (Germany) and Yale (US), tested the accuracy of AI-driven models in predicting how patients with schizophrenia respond to antipsychotic medication across several independent clinical trials.
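As a rough illustration of this kind of cross-trial check (not the authors' actual pipeline), a leave-one-trial-out evaluation can be sketched with scikit-learn, treating each clinical trial as a held-out group. All data, feature names and model choices below are hypothetical stand-ins.

```python
# Hypothetical sketch of a leave-one-trial-out evaluation, illustrating the
# kind of cross-trial check described in the study; not the authors' code.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Toy stand-ins: clinical/demographic features, treatment-response labels,
# and an identifier marking which clinical trial each patient came from.
X = rng.normal(size=(600, 20))          # patient features (hypothetical)
y = rng.integers(0, 2, size=600)        # 1 = responded to medication, 0 = did not
trial = rng.integers(0, 5, size=600)    # five independent trials

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Train on all trials except one, then test on the held-out trial; a model that
# only works within its own trial will drop toward chance (~0.5) in this setting.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=trial):
    model.fit(X[train_idx], y[train_idx])
    acc = balanced_accuracy_score(y[test_idx], model.predict(X[test_idx]))
    print(f"Held-out trial {trial[test_idx][0]}: balanced accuracy = {acc:.2f}")
```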

The current study pertains to the field of precision psychiatry, which uses data-driven models to select targeted therapies and suitable medications for individual patients. “Our goal is to use novel models from the field of AI to treat patients with mental health problems in a more targeted manner,” said Joseph Kambeitz, Professor of Biological Psychiatry at the Faculty of Medicine of the University of Cologne and the University Hospital Cologne.

“Although numerous initial studies prove the success of such AI models, a demonstration of the robustness of these models has not yet been made,” said Kambeitz, adding that safety was of great importance for everyday clinical use.

“We have strict quality requirements for clinical models, and we also have to ensure that models in different contexts provide good predictions. The models should provide equally good predictions, whether they are used in a hospital in the USA, Germany or Chile,” said Kambeitz.

The finding that these AI models have highly limited generalisability is an important signal for clinical practice, and it shows that further research is needed before such models can genuinely improve psychiatric care, the researchers said.

The team hopes to overcome these obstacles and is currently examining larger patient groups and data sets in order to improve the accuracy of AI models, they said.
