Evaluating Retrieval-Augmented Generation-Large Language Models for Infective Endocarditis Prophylaxis: Clinical Accuracy and Efficiency
- Paak Rewthamrongsris, Vivat Thongchotchat, Jirayu Burapacheep, Vorapat Trachoo, Zohaib Khurshid, Thantrira Porntaveetus
- https://doi.org/10.1016/j.identj.2025.109344
Abstract
Introduction and Aims
The use of large language models (LLMs) in healthcare is expanding. Retrieval-augmented generation (RAG) addresses key LLM limitations by grounding responses in domain-specific, up-to-date information. This study evaluated RAG-augmented LLMs for infective endocarditis (IE) prophylaxis in dental procedures, comparing their performance with non-RAG models assessed in our previous publication using the same question set. A pilot study also explored the utility of an LLM as a clinical decision support tool.
Methods
An established IE prophylaxis question set from previous research was used to ensure comparability. Ten LLMs integrated with RAG were tested, using MiniLM-L6-v2 embeddings and FAISS to retrieve relevant content from the 2021 American Heart Association IE guideline. Models were evaluated across five independent runs, with and without a preprompt (‘You are an experienced dentist’), a prompt-engineering technique used in previous research to improve LLM accuracy. Three RAG-LLMs were compared with their native (non-RAG) counterparts benchmarked in the previous study. In the pilot study, 10 dental students (5 undergraduate, 5 postgraduate in oral and maxillofacial surgery) completed the questionnaire unaided, then again with assistance from the best-performing LLM. Accuracy and task time were measured.
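The paper does not include code, but the retrieval step described above can be illustrated with a minimal sketch. The snippet below assumes the 2021 AHA guideline has already been split into text passages (the hypothetical `guideline_chunks` list), embeds them with the all-MiniLM-L6-v2 sentence-transformer, and searches a flat FAISS index; the chunking strategy, index type, number of retrieved passages, and prompt template are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a RAG retrieval step, assuming MiniLM-L6-v2 + FAISS
# as named in Methods. Chunking, index type, and the prompt template
# below are assumptions, not the study's actual pipeline.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Hypothetical input: the 2021 AHA IE guideline pre-split into passages.
guideline_chunks = [
    "Prophylaxis is reasonable for patients with prosthetic cardiac valves...",
    "Antibiotic prophylaxis is not recommended for routine anesthetic injections...",
    # ... remaining guideline passages ...
]

# Embed passages; normalized vectors make inner product equal to cosine similarity.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
chunk_vecs = encoder.encode(guideline_chunks, normalize_embeddings=True)

# Build a flat inner-product FAISS index over the 384-dimensional embeddings.
index = faiss.IndexFlatIP(chunk_vecs.shape[1])
index.add(np.asarray(chunk_vecs, dtype="float32"))

def build_prompt(question: str, k: int = 3, preprompt: bool = True) -> str:
    """Retrieve the k most similar guideline passages and assemble a prompt."""
    k = min(k, index.ntotal)  # never request more passages than are indexed
    q_vec = encoder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q_vec, dtype="float32"), k)
    context = "\n\n".join(guideline_chunks[i] for i in ids[0])
    role = "You are an experienced dentist.\n\n" if preprompt else ""
    return f"{role}Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Does a patient with a prosthetic heart valve need "
                   "antibiotic prophylaxis before a dental extraction?"))
```

The `preprompt` flag mirrors the study's two evaluation conditions (with and without the ‘You are an experienced dentist’ preprompt); the assembled prompt would then be sent to each LLM under test.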
Results
DeepSeek Reasoner achieved the highest mean accuracy (83.6%) without preprompting, while Grok 3 beta reached 90.0% with preprompting. The lowest accuracy was observed for Claude 3.7 Sonnet: 42.1% without preprompting and 47.1% with it. Preprompting improved performance across all LLMs, whereas RAG’s impact on accuracy varied by model. Claude 3.7 Sonnet showed the highest response consistency without preprompting; with preprompting, Claude 3.5 Sonnet and DeepSeek Reasoner matched its performance. DeepSeek Reasoner also had the slowest response time. In the pilot study, LLM support slightly improved postgraduate accuracy, slightly reduced undergraduate accuracy, and significantly increased task time for both groups.
Conclusion
While RAG and prompt engineering enhance LLM performance, their real-world utility in dental education remains limited.
Clinical relevance
LLMs with RAG provide rapid and accessible support for clinical decision-making. Nonetheless, their outputs are not always accurate and may not fully reflect evolving medical and dental knowledge. It is crucial that clinicians and students approach these tools with digital literacy and caution, ensuring that professional judgment remains central.
