Poster Session B   |   7:00am Expo - Hall A & C   |   Poster ID #108

Synthetic melanoma image generation and evaluation with different Generative Adversarial Networks (GANs).

Academic Research
Prevention, Early Detection, Implementation, and Dissemination
FDA Status:
Not Applicable
CPRIT Grant:
Cancer Site(s):
Melanoma of the skin
George Zouridakis
University of Houston
Renjie Hu
University of Houston
Pei Yu Lin
University of Houston


Melanoma, an aggressive skin cancer originating from pigment-producing cells, poses a significant risk of metastasis if left untreated. Early detection greatly increases the chances of successful treatment and reduces costs. Leveraging imaging technologies and machine learning, researchers strive to improve melanoma detection accuracy by analyzing dermatoscopic images. However, the limited supply of labeled data hampers algorithm training. To address this, synthetic image generation techniques, particularly Generative Adversarial Networks (GANs), offer a solution by creating artificial images that resemble real data. GANs excel at generating realistic images but often suffer from low resolution due to computational limitations.

This paper focuses on generating high-resolution synthetic melanoma images using cutting-edge GAN models. The objectives include matching the original input resolution, comparing StyleGAN setups with DCGAN, and evaluating synthetic melanoma image quality using DermoScreen, an explainable feature identifier based on the 7-point checklist diagnostic method for melanoma.
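For context, the 7-point checklist sums weighted dermatoscopic criteria: major criteria count 2 points each, minor criteria 1 point each, and a total of 3 or more suggests melanoma. A minimal scoring sketch, with criterion identifiers named here purely for illustration (they are not taken from DermoScreen's implementation):

```python
# Hedged sketch of 7-point checklist scoring; identifier names are illustrative.
MAJOR = {"atypical_network", "blue_whitish_veil",
         "atypical_vascular_pattern"}                     # 2 points each
MINOR = {"irregular_streaks", "irregular_pigmentation",
         "irregular_dots_globules", "regression_structures"}  # 1 point each

def seven_point_score(findings):
    """Return (score, suspicious) for a set of observed criteria."""
    score = 2 * len(MAJOR & findings) + len(MINOR & findings)
    return score, score >= 3   # classic decision threshold: total >= 3
```

A lesion showing a blue-whitish veil (major) plus irregular streaks (minor) scores 2 + 1 = 3 and crosses the threshold.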


We employed four models, three StyleGAN variants and one DCGAN, to generate synthetic melanoma images. The training dataset was ISIC 2018, a comprehensive compilation of dermatoscopic images provided by the International Skin Imaging Collaboration (ISIC). To assess the quality of the generated images, we employed two evaluation metrics: the Fréchet Inception Distance (FID) score, a widely adopted measure of the fidelity of synthetic images produced by GANs, and DermoScreen, a robust machine learning tool developed and patented by our lab.
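The FID compares the mean and covariance of Inception features extracted from real and generated images. A minimal NumPy sketch of the distance itself follows; the feature-extraction step (an Inception network) is omitted, and the matrix square root uses an eigendecomposition of a symmetrized product, which equals Tr((Σr Σg)^(1/2)) for PSD covariances:

```python
import numpy as np

def _sqrtm_psd(m):
    # Matrix square root of a symmetric positive semi-definite matrix.
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)          # guard tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(feats_real, feats_fake):
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2))."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    sr = _sqrtm_psd(cov_r)
    # Tr((C_r C_f)^(1/2)) via the similar symmetric matrix sr @ C_f @ sr
    covmean = _sqrtm_psd(sr @ cov_f @ sr)
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Identical feature sets yield a distance of (numerically) zero; shifting one set's mean raises the score, which is why lower FID indicates better fidelity.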


The DCGAN, StyleGAN2, StyleGAN3-T, and StyleGAN3-R models achieved their lowest Fréchet Inception Distance (FID) scores of 66.5, 27.4, 246.4, and 26.5 when trained on 6 million (6M), 4M, 6.8M, and 6.4M images, respectively. While StyleGAN2 and StyleGAN3-R exhibited similar FID scores, a visual examination of the generated images revealed a distinct contour-like pattern in those produced by StyleGAN3-R. This pattern can be attributed to the translation-and-rotation-equivariance property unique to the StyleGAN3-R model. As a result, StyleGAN3-R is not suitable for generating synthetic melanoma images. Furthermore, the DCGAN model performed poorly in terms of both FID score and image quality. In conclusion, among the models compared, StyleGAN2 emerged as the top performer.

The synthetic images generated by each model will be available and presented at the conference.


In conclusion, StyleGAN2 proved to be the best model for generating reliable and realistic high-resolution synthetic melanoma images. The model was trained on 4M images over 25.5 hours on 4 GPUs.

Future directions:

To enhance the diversity, quality, and resolution of synthetic melanoma images, our future research focuses on optimizing StyleGAN2 parameters for higher resolution and larger melanoma datasets, allowing the model to learn from a broader range of patterns and capture richer fine detail. The expanded dataset will consist of newer public melanoma datasets and data collected from our collaborating dermatologists.

To exert control over the generated melanoma images and incorporate desired melanoma features, training a Conditional GAN is a priority. Such a model would allow us to generate images exhibiting specific dermatoscopic characteristics, such as irregular borders, a blue-whitish veil, and more.
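As an illustration of the conditioning mechanism (not the planned implementation), a Conditional GAN typically appends a class or attribute code to the generator's latent input, so the network learns a mapping conditioned on that code. The attribute count and latent dimension below are hypothetical placeholders:

```python
import numpy as np

N_ATTRS = 4   # hypothetical attribute classes, e.g. "irregular border"
Z_DIM = 64    # hypothetical latent (noise) dimension

def conditional_input(z, attr_index):
    """Concatenate a latent vector with a one-hot attribute code.

    The generator consumes this combined vector, letting the attribute
    code steer which dermatoscopic feature appears in the output.
    """
    one_hot = np.eye(N_ATTRS)[attr_index]
    return np.concatenate([z, one_hot])
```

At sampling time, fixing `attr_index` while varying `z` would produce diverse lesions that all share the requested characteristic.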

To validate that the proposed models accurately mimic real melanoma characteristics, two collaborating dermatologists will blindly score mixtures of real and model-generated melanoma images in terms of image quality and malignancy features.