Evaluation of Artificial Intelligence-Based Chatbot Responses to Common Dermatological Queries

Authors

  • Indrasish Podder, Department of Dermatology and Venereology, College of Medicine and Sagore Dutta Hospital, West Bengal, India.
  • Neha Pipil, Department of Pharmacology, Rajshree Medical Research Institute, Bareilly, Uttar Pradesh, India.
  • Arunima Dhabal, Department of Dermatology, Jagannath Gupta Institute of Medical Sciences, West Bengal, India.
  • Shaikat Mondal, Department of Physiology, Raiganj Government Medical College and Hospital, West Bengal, India.
  • Vitsomenuo Pienyii, Department of Physiology, Nagaland Institute of Medical Science and Research, Kohima, Nagaland, India.
  • Himel Mondal, Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India.

DOI:

https://doi.org/10.35516/jmj.v58i2.2960

Keywords:

Artificial Intelligence, Dermatologists, Search Engine, Delivery of Health Care, Intelligence

Abstract

Background and aim: Conversational artificial intelligence (AI) can streamline healthcare by offering instant, personalized patient interactions, answering queries, and providing general medical information. Its potential to support early disease detection and treatment planning may improve patient outcomes. We aimed to investigate the utility of conversational AI models in addressing diagnostic challenges and treatment recommendations for common dermatological ailments.

Methods: A dataset of 22 case vignettes of dermatological conditions was compiled, each case accompanied by three specific queries. These case vignettes were presented to four conversational AI models: ChatGPT 3.5, Google Gemini, Microsoft Copilot (GPT-4), and Perplexity.ai, and the responses were saved. To assess clinical appropriateness and accuracy, two expert dermatologists independently evaluated the responses of the AI systems using a 5-point Likert scale ranging from 5 (highly accurate) to 1 (inaccurate).
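As an illustration of how such ratings might be tabulated before analysis, the sketch below averages two raters' Likert scores per vignette and summarizes them per model. All names and values in it are hypothetical placeholders, not the study's actual data-handling code.

```python
# Hypothetical sketch of tabulating the ratings; all names and values below are
# illustrative placeholders, not the study's actual data or pipeline.
from statistics import mean

MODELS = ["ChatGPT 3.5", "Google Gemini", "Microsoft Copilot", "Perplexity.ai"]

# ratings[model] holds one averaged Likert score (1-5) per case vignette.
ratings = {m: [] for m in MODELS}

def add_case(model, rater1_scores, rater2_scores):
    """Average the two dermatologists' scores over the three queries of one vignette."""
    per_query = [(a + b) / 2 for a, b in zip(rater1_scores, rater2_scores)]
    ratings[model].append(mean(per_query))

# Example entry for one vignette (placeholder scores)
add_case("ChatGPT 3.5", rater1_scores=[5, 4, 4], rater2_scores=[4, 4, 5])

summary = {m: (mean(v) if v else None) for m, v in ratings.items()}
print(summary)  # per-model mean score on the 5-point scale
```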

Results: The mean score was 4.10±0.61 for ChatGPT, 3.86±0.88 for Gemini, 4.51±0.33 for Copilot, and 4.14±0.64 for Perplexity (P = 0.01). The largest effect sizes were for Gemini vs. Copilot (Cohen's d = 0.98), ChatGPT vs. Copilot (Cohen's d = 0.83), and Copilot vs. Perplexity (Cohen's d = 0.75). All chatbots' scores were statistically similar to 80% accuracy (one-sample t-test against a hypothesized value of 4), except Copilot, which showed an accuracy of nearly 90%.
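For readers who want to verify the arithmetic, the sketch below recomputes Cohen's d from the reported means and standard deviations and shows the form of the one-sample t-test against the 80% benchmark (a score of 4 on the 5-point scale). The sample size of n = 22, one averaged rating per vignette, is an assumption made here for illustration, so the resulting p-values are indicative only and small differences from the published figures are expected from rounding.

```python
# Minimal sketch of the statistics reported above, computed from the summary
# values in this abstract. The raw per-case ratings are not reproduced here,
# and n = 22 (one averaged rating per vignette) is an assumption.
from math import sqrt
from scipy import stats

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d for two equal-sized groups using the pooled standard deviation."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

def one_sample_t(mean, sd, n, mu0=4.0):
    """One-sample t-test of the mean score against mu0 (4/5 = 80% accuracy)."""
    t = (mean - mu0) / (sd / sqrt(n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Mean and SD scores reported for the four chatbots
scores = {
    "ChatGPT":    (4.10, 0.61),
    "Gemini":     (3.86, 0.88),
    "Copilot":    (4.51, 0.33),
    "Perplexity": (4.14, 0.64),
}

print(cohens_d(*scores["Copilot"], *scores["Gemini"]))      # ~0.98
print(cohens_d(*scores["Copilot"], *scores["ChatGPT"]))     # ~0.83
print(cohens_d(*scores["Copilot"], *scores["Perplexity"]))  # ~0.73

for name, (m, sd) in scores.items():
    t, p = one_sample_t(m, sd, n=22)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```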

Conclusion: This study highlights AI chatbots' potential in dermatological healthcare for patient education. However, the findings underscore their limitations in accurate disease diagnosis. These programs may be used as a supplementary resource rather than as primary diagnostic tools.

Published

2024-03-01

How to Cite

Podder, I., Pipil, N., Dhabal, A., Mondal, S., Pienyii, V., & Mondal, H. (2024). Evaluation of Artificial Intelligence-Based Chatbot Responses to Common Dermatological Queries. Jordan Medical Journal, 58(3). https://doi.org/10.35516/jmj.v58i2.2960

Section

Special Issue