Evaluation of Artificial Intelligence-Based Chatbot Responses to Common Dermatological Queries
DOI: https://doi.org/10.35516/jmj.v58i2.2960
Keywords: Artificial Intelligence, Dermatologists, Search Engine, Delivery of Health Care, Intelligence
Abstract
Background and aim: Conversational artificial intelligence (AI) can streamline healthcare by offering instant, personalized patient interactions, answering queries, and providing general medical information. Its potential for early disease detection and treatment planning may improve patient outcomes. We aimed to investigate the utility of conversational AI models in addressing diagnostic challenges and providing treatment recommendations for common dermatological ailments.
Methods: A dataset of 22 case vignettes of dermatological conditions was compiled, each accompanied by three specific queries. The vignettes were presented to four conversational AI models (ChatGPT 3.5, Google Gemini, Microsoft Copilot (GPT-4), and Perplexity.ai), and their responses were recorded. To assess clinical appropriateness and accuracy, two expert dermatologists independently rated the responses on a 5-point Likert scale ranging from 5 (highly accurate) to 1 (inaccurate).
Results: The mean scores were 4.10 ± 0.61 for ChatGPT, 3.86 ± 0.88 for Gemini, 4.51 ± 0.33 for Copilot, and 4.14 ± 0.64 for Perplexity (P = 0.01). The largest effect sizes were observed for Gemini vs. Copilot (Cohen's d = 0.98), ChatGPT vs. Copilot (Cohen's d = 0.83), and Copilot vs. Perplexity (Cohen's d = 0.75). All chatbot scores were consistent with approximately 80% accuracy (one-sample t-test against a hypothesized value of 4), except Copilot, which reached nearly 90% accuracy.
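For reference, the reported effect sizes appear consistent with the pooled-standard-deviation form of Cohen's d; the following is a sketch of that calculation using the reported means and standard deviations (an assumption, since the exact formulation is not stated in the abstract):

$$d = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{(s_1^2 + s_2^2)/2}}, \qquad d_{\text{Copilot vs. Gemini}} = \frac{4.51 - 3.86}{\sqrt{(0.33^2 + 0.88^2)/2}} \approx \frac{0.65}{0.66} \approx 0.98.$$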
Conclusion: This study highlights the potential of AI chatbots in dermatological healthcare for patient education. However, the findings underscore their limitations in accurate disease diagnosis, so these programs should be used as a supplementary resource rather than as primary diagnostic tools.

