Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14365/5880
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kolac, U.C. | -
dc.contributor.author | Karademir, O.M. | -
dc.contributor.author | Ayik, G. | -
dc.contributor.author | Kaymakoglu, M. | -
dc.contributor.author | Familiari, F. | -
dc.contributor.author | Huri, G. | -
dc.date.accessioned | 2025-01-25T17:07:25Z | -
dc.date.available | 2025-01-25T17:07:25Z | -
dc.date.issued | 2025 | -
dc.identifier.issn | 2666-6383 | -
dc.identifier.uri | https://doi.org/10.1016/j.jseint.2024.11.012 | -
dc.description.abstract | Background: Rotator cuff tears are common upper-extremity injuries that significantly impair shoulder function, leading to pain, reduced range of motion, and a decreased quality of life. With the increasing reliance on artificial intelligence large language models (AI LLMs) for health information, it is crucial to evaluate the quality and readability of the information these models provide. Methods: A pool of 50 questions related to rotator cuff tears was generated by querying popular AI LLMs (ChatGPT 3.5, ChatGPT 4, Gemini, and Microsoft CoPilot) and using Google search. The responses from the AI LLMs were then saved and evaluated. Information quality was assessed with the DISCERN tool and a Likert scale; readability was assessed with the Patient Education Materials Assessment Tool for Printable Materials (PEMAT) Understandability score and the Flesch-Kincaid Reading Ease score. Two orthopedic surgeons assessed the responses, and discrepancies were resolved by a senior author. Results: Of 198 answers, the median DISCERN score was 40, with 56.6% considered sufficient. The Likert scale showed 96% sufficiency. The median PEMAT Understandability score was 83.33, with 77.3% sufficiency, while the Flesch-Kincaid Reading Ease score had a median of 42.05 with 88.9% sufficiency. Overall, 39.8% of the answers were sufficient in both information quality and readability. Differences were found among the AI models in DISCERN, Likert, PEMAT Understandability, and Flesch-Kincaid scores. Conclusion: AI LLMs generally do not offer sufficient information quality and readability. While they are not yet ready for use in the medical field, they show a promising future. Continuous re-evaluation of these models is necessary given their rapid evolution. Developing new, comprehensive tools for evaluating medical information quality and readability is crucial to ensuring these models can effectively support patient education. Future research should focus on enhancing readability and consistent information quality to better serve patients. © 2024 The Author(s) | en_US
dc.description.sponsorship | Università degli Studi Magna Graecia di Catanzaro | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier B.V. | en_US
dc.relation.ispartof | JSES International | en_US
dc.rights | info:eu-repo/semantics/openAccess | en_US
dc.subject | AI Tools in Healthcare | en_US
dc.subject | Artificial Intelligence | en_US
dc.subject | Basic Science Study | en_US
dc.subject | ChatGPT | en_US
dc.subject | Frequently Asked Questions | en_US
dc.subject | Large Language Models | en_US
dc.subject | Patient Information | en_US
dc.subject | Rotator Cuff Tears | en_US
dc.subject | Validation of AI in Patient Information | en_US
dc.title | Can Popular AI Large Language Models Provide Reliable Answers to Frequently Asked Questions About Rotator Cuff Tears? | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1016/j.jseint.2024.11.012 | -
dc.identifier.scopus | 2-s2.0-86000433418 | -
dc.department | İzmir Ekonomi Üniversitesi | en_US
dc.authorscopusid | 58490414900 | -
dc.authorscopusid | 59517163000 | -
dc.authorscopusid | 57209241639 | -
dc.authorscopusid | 57208080357 | -
dc.authorscopusid | 56978246900 | -
dc.authorscopusid | 36005147600 | -
dc.identifier.volume | 9 | en_US
dc.identifier.issue | 2 | en_US
dc.identifier.startpage | 390 | en_US
dc.identifier.endpage | 397 | en_US
dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Faculty Member | en_US
dc.identifier.scopusquality | Q2 | -
dc.identifier.wosquality | N/A | -
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
item.cerifentitytype | Publications | -
item.openairetype | Article | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.languageiso639-1 | en | -
Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
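
For reference, the Flesch-Kincaid Reading Ease score cited in the abstract (median 42.05) is a fixed formula over sentence length and syllable density, where scores below roughly 60 indicate fairly difficult text. The sketch below is a minimal Python illustration of that formula, not the study's actual evaluation pipeline; the vowel-group syllable counter is an assumed simplification standing in for the dictionary-based counters real readability tools use.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of vowels as syllables (an assumption;
    # real readability tools use dictionary-based syllabification).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835
    #   - 1.015 * (total words / total sentences)
    #   - 84.6  * (total syllables / total words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical usage: a median of 42.05, as reported in the study,
# corresponds to college-level reading difficulty.
sample = ("Rotator cuff tears are common injuries. "
          "Surgical repair may be recommended when conservative treatment fails.")
print(round(flesch_reading_ease(sample), 2))
```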