Vision-language models (VLMs) can answer clinically relevant questions through their reasoning capabilities and user-friendly interfaces. However, their robustness to commonly occurring medical image artefacts has not been explored, raising major concerns for trustworthy clinical decision-making. In this study, we assessed the robustness of recent VLMs to medical image artefacts in disease detection across three medical fields. Specifically, we included five categories of image artefacts and evaluated the VLMs' performance on images with and without them. We built evaluation benchmarks for brain MRI, chest X-ray, and retinal images, drawing on four real-world medical datasets. Our results show that the VLMs performed poorly on the original, unaltered images and even worse when weak artefacts were introduced; strong artefacts went almost entirely undetected. Our findings indicate that VLMs are not yet capable of performing medical tasks on images with artefacts, underscoring the critical need to explicitly incorporate artefact-aware method design and robustness testing into VLM development.
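The abstract names five artefact categories without specifying them. For illustration only, the minimal sketch below shows one way weak and strong synthetic artefacts could be introduced into an image, assuming Gaussian noise and motion blur as two hypothetical categories; the severity parameters and file paths are likewise assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: introducing weak vs. strong synthetic artefacts into an image.
# Gaussian noise and motion blur are illustrative assumptions; the paper's
# actual five artefact categories and severity settings may differ.
import numpy as np
from PIL import Image
from scipy.ndimage import convolve


def add_gaussian_noise(img: np.ndarray, sigma: float) -> np.ndarray:
    """Add zero-mean Gaussian noise; larger sigma = stronger artefact."""
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)


def add_motion_blur(img: np.ndarray, kernel_size: int) -> np.ndarray:
    """Apply a horizontal motion-blur kernel; larger kernel = stronger artefact."""
    kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
    kernel[kernel_size // 2, :] = 1.0 / kernel_size
    channels = img[..., None] if img.ndim == 2 else img
    blurred = np.stack(
        [convolve(channels[..., c].astype(np.float32), kernel)
         for c in range(channels.shape[-1])],
        axis=-1,
    ).squeeze()
    return np.clip(blurred, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    # "example_scan.png" is a hypothetical input path.
    img = np.array(Image.open("example_scan.png").convert("L"))
    Image.fromarray(add_gaussian_noise(img, sigma=10.0)).save("weak_noise.png")
    Image.fromarray(add_gaussian_noise(img, sigma=40.0)).save("strong_noise.png")
    Image.fromarray(add_motion_blur(img, kernel_size=15)).save("strong_blur.png")
```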
Competing Interest Statement
P.A.K. is supported by a Moorfields Eye Charity Career Development Award (grant no. R190028A) and a UK Research & Innovation Future Leaders Fellowship (grant no. MR/T019050/1).
Funding Statement
This study did not receive any funding.
Author Declarations
I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
Yes
The details of the IRB/oversight body that provided approval or exemption for the research described are given below:
The study used (or will use) ONLY openly available human data that were originally located at:
https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset
https://www.kaggle.com/datasets/paultimothymooney/kermany2018
https://www.kaggle.com/datasets/darshan1504/covid19-detection-xray-dataset
https://github.com/nkicsl/DDR-dataset
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
Yes
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
Yes
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
Yes
Data availability
The benchmark dataset, which spans three imaging modalities with introduced artefacts, is derived from publicly available sources: MRI (https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset), OCT (https://www.kaggle.com/datasets/paultimothymooney/kermany2018), and X-ray (https://www.kaggle.com/datasets/darshan1504/covid19-detection-xray-dataset). We reorganized these sources, randomly sampled images, and added artefacts to create the benchmark, which is available at: https://github.com/ziijiecheng/VLM_robustness. Additionally, the colour fundus dataset, which contains real-world artefacts, is available at: https://github.com/nkicsl/DDR-dataset. We reorganized and sampled it according to image distortion level, and it is provided alongside the benchmark dataset.
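The linked repository contains the full benchmark-construction pipeline; for orientation only, the sketch below shows one way the reorganise-and-sample step could look. The folder layout, class names, per-class sample count, and random seed are all hypothetical.

```python
# Minimal sketch of the reorganise-and-sample step described above.
# Source layout, destination layout, and sample size are hypothetical;
# the actual construction code lives in the linked repository.
import random
import shutil
from pathlib import Path

random.seed(0)  # fixed seed so the sample is reproducible

src = Path("brain-tumor-mri-dataset/Testing")  # hypothetical source folders
dst = Path("benchmark/mri")                    # hypothetical benchmark layout
n_per_class = 50                               # hypothetical per-class count

for class_dir in src.iterdir():
    if not class_dir.is_dir():
        continue
    images = sorted(class_dir.glob("*.jpg"))
    # Guard against classes smaller than the requested sample size.
    for img in random.sample(images, min(n_per_class, len(images))):
        out = dst / class_dir.name
        out.mkdir(parents=True, exist_ok=True)
        shutil.copy(img, out / img.name)
```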