A dermatologist examines moles and skin growths on the patient's body using a dermatoscope.

Multimodal AI improves early melanoma detection.

credit: istock.com/Kalinovskiy

Multimodal AI could change how dermatologists detect melanoma

A new deep learning model combines dermoscopic images with patient data to improve accuracy and reduce false positives.
Bree Foster | 4 min read

Skin cancer is the most common malignancy in the US. Among skin cancers, melanoma is the deadliest, claiming thousands of lives annually. According to the American Cancer Society, an estimated 104,960 new cases of invasive melanoma and 107,240 cases of in situ melanoma will be diagnosed in the US in 2025, with approximately 8,430 deaths expected. Early detection dramatically improves survival rates; however, diagnosing melanoma is notoriously challenging, as it often resembles harmless moles or lesions.

Recent innovations are helping address this challenge. In 2024, the FDA cleared DermaSensor, a light spectroscopy tool that uses artificial intelligence (AI) to detect skin cancers, including melanoma, squamous cell carcinoma, and basal cell carcinoma. The device is designed to help primary care providers identify patients who need immediate biopsy or specialist care, potentially addressing dermatologist shortages and long wait times. The pivotal study, titled DERM-SUCCESS, examined 1,579 lesions from 1,005 patients across 22 primary care centers and reported a device sensitivity of 96 percent for skin cancers, with a negative result indicating a 97 percent chance of benign disease. In a companion clinical utility study with 108 physicians, DermaSensor cut the number of missed cancers in half.

However, despite these impressive results, tools like DermaSensor have low specificity and are limited by a fixed hardware platform. Researchers have now developed an AI model that combines dermoscopic images with patient data to improve accuracy and specificity and to enable flexible deployment across clinical and telemedicine settings.

Combining images and metadata

During routine screenings, dermatologists capture magnified dermoscopic images of skin lesions. Alongside these images, relevant patient metadata is collected. Even with expert training, dermatologists’ accuracy can vary due to fatigue, cognitive biases, and the subtle visual variations of melanoma. Integrating AI into this process promises to enhance diagnostic precision.

Led by Gwanggil Jeon, a bioinformatician from Incheon National University, in collaboration with researchers at the University of the West of England, Anglia Ruskin University, and the Royal Military College of Canada, the team developed a deep learning model that combines dermoscopic images with clinical patient data.

Published in Information Fusion on December 1, the study used over 33,000 dermoscopic images from the Society for Imaging Informatics in Medicine-International Skin Imaging Collaboration (SIIM-ISIC) melanoma dataset, along with associated clinical metadata such as patient age, sex, and lesion location, to train a deep learning model. The resulting multimodal AI model achieved 94.5 percent accuracy, 95.2 percent specificity, and an F1-score of 0.93, showing it can reliably detect melanoma while minimizing missed cases and false alarms, outperforming traditional image-only models like ResNet-50 and EfficientNet.

“Skin cancer, particularly melanoma, is a disease in which early detection is critically important for determining survival rates,” said Jeon in the press release. “Since melanoma is difficult to diagnose based solely on visual features, I recognized the need for AI convergence technologies that can consider both imaging data and patient information.”

Infographic: A new deep learning system detects melanoma with 94.5 percent accuracy by fusing dermoscopic images and patient metadata.

credit: Gwanggil Jeon, Incheon National University, Korea

Why multimodal AI matters

Most AI tools in dermatology rely solely on images. However, these models can miss subtle clues that patient information provides. For example, an identical-looking lesion might have different implications depending on the patient’s age, sex, or the lesion’s anatomical site. By combining image data with structured clinical information, the AI can make more informed, precise predictions.


The research team’s model integrated a multi-layer convolutional neural network (CNN) for image analysis with a fully connected neural network that processes patient metadata. The outputs of the two branches are fused at the final layer to produce a joint representation, which is then classified to determine the likelihood of melanoma.
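The late-fusion idea described above can be sketched in a few lines. This is a toy illustration, not the paper's architecture: the random projections stand in for the trained CNN and metadata branches, and all layer sizes are hypothetical. The key step is the concatenation of the two feature vectors before a single classification layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(image, n_features=8):
    # Stand-in for the CNN branch: a fixed random projection of the
    # flattened image to a small feature vector. (Hypothetical; the
    # paper's actual CNN layers are not reproduced here.)
    flat = image.reshape(-1)
    w = rng.standard_normal((n_features, flat.size)) * 0.01
    return np.tanh(w @ flat)

def metadata_branch(meta, n_features=4):
    # Stand-in for the fully connected branch over patient metadata.
    w = rng.standard_normal((n_features, meta.size)) * 0.1
    return np.tanh(w @ meta)

def fused_melanoma_score(image, meta):
    # Late fusion: concatenate both feature vectors at the final layer,
    # then apply a linear classifier with a sigmoid to get a probability.
    joint = np.concatenate([image_branch(image), metadata_branch(meta)])
    w_out = rng.standard_normal(joint.size) * 0.1
    return 1.0 / (1.0 + np.exp(-(w_out @ joint)))

# Toy inputs: a 32x32 grayscale lesion crop and [age, sex, site] metadata.
score = fused_melanoma_score(rng.standard_normal((32, 32)),
                             np.array([55.0, 1.0, 3.0]))
```

Because the fusion happens after each modality has been encoded separately, either branch can be swapped or retrained without touching the other, which is part of what makes this design flexible across deployment settings.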

This approach delivered multiple advantages over conventional image-only models, achieving 94.5 percent accuracy and 95.2 percent specificity, alongside a sensitivity of 94 percent. Its high specificity is particularly important, as it demonstrates the model’s ability to reduce false positives while maintaining high sensitivity, improving overall diagnostic reliability compared with traditional image-only models. The model also surpassed existing AI-enabled devices, such as DermaSensor, which achieved just 20.7 percent specificity.
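The metrics quoted above all derive from a confusion matrix. The counts below are illustrative, chosen only so the resulting figures land near those reported in the study; they are not the paper's actual test-set counts.

```python
# Hypothetical confusion-matrix counts for a 200-lesion test set.
tp, fn, fp, tn = 94, 6, 5, 95

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall correct calls
sensitivity = tp / (tp + fn)                 # melanomas caught (recall)
specificity = tn / (tn + fp)                 # benign lesions correctly cleared
precision = tp / (tp + fp)                   # positive calls that were right
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(accuracy, sensitivity, specificity, round(f1, 3))
# accuracy = 0.945, sensitivity = 0.94, specificity = 0.95
```

High specificity matters clinically because every false positive is a potentially unnecessary biopsy; a model can score well on accuracy alone while still flagging many benign lesions.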

However, as Jeon told DDN, “The results simply demonstrate technical potential within a limited research setting. We fully expect variability when the model is exposed to real-world settings, including different skin tones, imaging devices, lighting environments, and non-dermoscopic images. Additional clinical validation would be required before considering broader deployment.”

Practical potential in dermatology

While other CNNs like ResNet have been widely adopted in skin cancer classification, many models suffer from overfitting, limited generalization to real-world clinical datasets, and challenges in handling imbalanced datasets that can bias the algorithm. This is a common issue in medical image classification, as some tumor types, such as malignant melanoma, may have far fewer examples than benign lesions.

To address this, the research team employed data augmentation techniques, artificially increasing the variety of melanoma images through methods such as rotation, flipping, and color adjustments. This approach helped the model learn to recognize cancerous lesions more effectively without becoming biased toward the more numerous benign cases. Additionally, feature importance analysis highlighted which patient characteristics most strongly influenced diagnosis, including lesion size, patient age, and lesion location.
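A minimal augmentation pipeline in the spirit of the techniques named above might look like the following. The specific transforms and the brightness factor are assumptions for illustration; the study's exact augmentation settings are not reproduced here.

```python
import numpy as np

def augment(image):
    # Generate variants of one minority-class (melanoma) image so the
    # model sees more cancerous examples during training.
    variants = [image]
    for k in (1, 2, 3):                       # 90/180/270-degree rotations
        variants.append(np.rot90(image, k))
    variants.append(np.fliplr(image))         # horizontal flip
    variants.append(np.flipud(image))         # vertical flip
    variants.append(np.clip(image * 1.2, 0.0, 1.0))  # brightness/color shift
    return variants

# Toy 16x16 grayscale lesion crop with values in [0, 1].
lesion = np.linspace(0.0, 1.0, 16 * 16).reshape(16, 16)
augmented = augment(lesion)  # 7 training variants from a single image
```

Each transform preserves the diagnostic content of the lesion while changing its presentation, which is why augmentation reduces class-imbalance bias without fabricating new pathology.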

Beyond academic success, Jeon told DDN that if further developed, such a tool might ultimately serve consumers directly: “I envision general users — especially those seeking accessible, smartphone-based self-screening — as the primary beneficiaries.”

This model represents a significant step toward more accurate, accessible, and personalized melanoma detection, enabling earlier diagnosis and giving patients the best chance for effective treatment.

About the Author

Bree Foster is a science writer at Drug Discovery News with over two years of experience at Technology Networks, Drug Discovery News, and other scientific marketing agencies. She holds a PhD in comparative and functional genomics from the University of Liverpool and enjoys crafting compelling stories for science.
