Comparative Evaluation of LASIK Surgery Information Accuracy: AI Models, Google, and Ophthalmologist Consultations

Published 2025 - 43rd Congress of the ESCRS

Reference: PP15.06 | Type: Free paper | DOI: 10.82333/4mcx-p390

Authors: Ruchi Gupta*¹, Dan Reinstein¹, Timothy Archer¹, Joseph Potter¹

¹ London Vision Clinic, London, United Kingdom

Purpose

To compare the accuracy, comprehensiveness, clarity, consistency, and credibility of LASIK surgery information provided by AI models, Google, and ophthalmologists.

Setting

This study was conducted at HIMSR & HAHC Hospital and focused on the accuracy of LASIK-related information from AI models, Google, and ophthalmologists. The study population comprised board-certified ophthalmologists, AI-generated responses from leading models, and Google search results collected using standardized search queries. The study was carried out in a digital setting, using online AI tools and search engines; the ophthalmologists' responses were collected via a Google Form.

Methods

A structured questionnaire covering key LASIK topics (procedure details, eligibility, risks, post-operative care, and long-term outcomes) was developed based on previously published literature. The same set of questions was posed to three AI models (ChatGPT, Gemini, and Claude), submitted as Google search queries, and answered by board-certified ophthalmologists. A scoring system (0–5 per parameter, for a maximum total of 25) was applied to evaluate the responses across five parameters: accuracy, comprehensiveness, clarity, consistency, and source credibility. The final scores were analyzed to determine the most reliable source of LASIK-related information.
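For readers who want to see the aggregation step concretely, the sketch below shows how per-parameter ratings could be summed into the totals reported in the Results. It is a minimal illustration, assuming each of the five parameters is rated 0–5 so that totals fall out of 25 (consistent with the /25 totals reported below); the parameter names are taken from the Methods, but every numeric rating in the example is a hypothetical placeholder, not study data.

```python
# Minimal sketch of the scoring scheme described in the Methods.
# Assumption: each of the five parameters is rated 0-5, so totals
# fall out of a maximum of 25 (matching the /25 totals in Results).
# All ratings below are illustrative placeholders, not study data.

PARAMETERS = (
    "accuracy",
    "comprehensiveness",
    "clarity",
    "consistency",
    "source_credibility",
)

def total_score(ratings: dict[str, float]) -> float:
    """Sum the five 0-5 parameter ratings into a total out of 25."""
    missing = set(PARAMETERS) - set(ratings)
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return sum(ratings[p] for p in PARAMETERS)

# Hypothetical ratings for two information sources (placeholders only).
example = {
    "Ophthalmologists": dict(accuracy=5, comprehensiveness=5, clarity=5,
                             consistency=5, source_credibility=4),
    "ChatGPT": dict(accuracy=4, comprehensiveness=4, clarity=5,
                    consistency=4, source_credibility=3),
}

for source, ratings in example.items():
    print(f"{source}: {total_score(ratings):.1f}/25")
```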

Results

Ophthalmologists provided the most accurate and comprehensive responses, achieving the highest total score (24.9/25). AI models demonstrated better clarity and consistency than Google search results but lacked medical nuance and credibility: ChatGPT scored 20.0/25, Gemini 19.3/25, and Claude 18.5/25. Google search results showed the lowest overall reliability (15.3/25), owing to inconsistencies, outdated sources, and the influence of non-medical content. The AI models provided structured, easy-to-understand responses but failed to cite authoritative medical sources, reducing their credibility.

Conclusions

While AI models offer clear and consistent LASIK-related information, their lack of depth and verified sources limits their reliability in patient education. Google search results remain inconsistent and prone to misinformation. Human ophthalmologists continue to be the most accurate and comprehensive source of LASIK information, emphasizing the need for professional consultations in medical decision-making. AI models can serve as supplementary tools but require improvements in source citation and depth to enhance their effectiveness in ophthalmic patient education.