
Engineering Group | Research Article | Article ID: igmin307

Melanocytic Nevi Classification using Transfer Learning

Uma Mahesh RN1*, Harsha Jain HJ2, Hemanth Kumar CS2, Shreyash Umrao2 and Mohith DL2

Machine Learning | Artificial Intelligence

Received: 07 May 2025 | Accepted: 07 Jul 2025 | Published online: 08 Jul 2025

Abstract

In this paper, binary classification of skin images, i.e., skin disease recognition, is performed using a deep learning technique. Specifically, skin images of two classes, melanocytic nevi and normal skin, are classified using the ResNet50 deep residual network. Normal skin images are assigned to the TRUE class and melanocytic nevi images to the FALSE class. Traditional methods such as biopsy involve many procedural steps and are time-consuming and tedious; therefore, deep learning-based skin disease recognition is proposed here. A dataset of 9792 melanocytic nevi and normal skin images was prepared and passed through all five deep residual network models to obtain the results. Results such as error/accuracy curves, the error (confusion) matrix, and the false-positive-rate (FPR) vs. true-positive-rate (TPR) curve are shown to confirm the work. The results obtained from the ResNet50 model were compared with the other deep residual network models, i.e., the ResNet18, ResNet34, ResNet101, and ResNet152 models.

Introduction

A melanocytic nevus is a non-cancerous accumulation of melanocytes, the pigment-producing cells found in the skin, hair follicles, and the uveal tract of the eye. Normally, melanocytes are spread individually among keratinocytes, resting on the basement membrane. However, in melanocytic nevi, these cells appear in higher concentrations, either as single units or clustered together in groups of three or more [1]. These moles are generally benign; however, having a large number of them can elevate the risk of developing melanoma, a form of skin cancer that originates in melanocytes [2]. Melanocytic lesions frequently appear in routine surgical pathology. While most cases can be accurately diagnosed using established morphological criteria, a subset remains challenging to classify. These ambiguous cases can cause concern for patients, clinicians, and pathologists, as missing a melanoma diagnosis can have severe consequences [3]. A. Jibhakate, et al. [4] presented a deep learning-based system for skin lesion classification using a combination of Convolutional Neural Networks (CNNs) and transfer learning techniques. Q. Sun, et al. [5] introduced a deep learning-based approach for classifying melanocytic nevi by the depth of nevus cell nests (junctional, compound, and intradermal), incorporating physiological and channel-encoded depth information from dermoscopic images. Their model combines a CNN with transformer architectures, achieving high classification accuracy that surpasses both traditional models and dermatologists. Esteva A, et al. [6] used a deep neural network to classify skin cancer. Utilizing 129,450 clinical images of 2,032 diseases, the GoogleNet Inception v3 model was trained via transfer learning, achieving over 91% AUC in distinguishing malignant melanomas from benign nevi and outperforming dermatologists in both true-positive rate (TPR) and false-positive rate (FPR). The model, tested on biopsy-proven datasets and validated with cross-validation strategies, generalizes well across image modalities such as smartphone and dermoscopic images without extensive preprocessing or handcrafted features.

Kassem MA, et al. [7] presented a comprehensive review of machine learning and deep learning models applied to skin lesion classification, with a particular focus on melanocytic nevi. The review examines common methodologies, challenges, and potential future directions in automating skin lesion diagnosis. Combining visual examination with dermatoscopy images, the reviewed approaches achieved an accuracy of 75% - 84% for melanoma detection. The study offers an in-depth analysis of the strengths and limitations of deep learning in dermatology, providing valuable insights into the advancements required for effective clinical implementation. Ling-Fang Li, et al. [8] reviewed 45 studies conducted between 2016 and 2020 on the application of deep learning for diagnosing skin diseases [9-21].

In this paper, binary classification of skin lesion images is performed using five deep residual network models, i.e., the ResNet50, ResNet18, ResNet34, ResNet101, and ResNet152 models. The major difference between the proposed work and previous works [4-6] is that the classification is performed using all five deep residual network models, with better results in accuracy, the FPR vs. TPR curve, and other performance metrics such as positive predictive value (PPV), true-positive rate, and f1-score [22,23]. The major advantage of the proposed work is that the classification is performed with the ResNet50 model and then compared against the other deep residual network models. A dataset of 9792 images was prepared and used to train all five models to produce the results.

Methodology

Binary classification of skin images was performed using five deep residual network models, i.e., the ResNet50, ResNet18, ResNet34, ResNet101, and ResNet152 models. All five deep residual networks [9,24] take normal skin and melanocytic nevi images of size 224 × 224 as input. Each model consists of several convolutional, pooling, and batch normalization layers in the feature extraction stage, while the classification stage is made up of dense and output layers [23]. A convolutional layer convolves the skin image with a kernel to produce a feature map [23], which is passed to a pooling layer [23]; the Max-Pooling2D technique was employed in the pooling layers [23]. Batch normalization layers are used in all five models to mitigate overfitting [23]. The convolutional and dense layers use the rectified linear unit (ReLU) activation function [23], defined in (1) [23].

ReLU(x) = { x,  for x ≥ 0
            0,  for x < 0 }          (1)
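The piecewise definition in (1) can be sketched in a few lines of plain Python (an illustrative helper, not the authors' implementation):

```python
def relu(x):
    """Rectified linear unit: passes non-negative inputs through, zeroes out negatives."""
    return x if x > 0 else 0.0

# Applied elementwise, e.g. to one row of a feature map:
row = [-2.0, -0.5, 0.0, 1.5, 3.0]
activated = [relu(v) for v in row]
print(activated)  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

In the actual networks, this activation is applied elementwise to every feature map produced by the convolutional and dense layers.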

The ResNet50 model consists of four convolutional stages, each made up of a different number of residual blocks [23]: three in the first stage and four, six, and three in the second, third, and fourth stages, respectively. Each residual block uses a different number and size of kernels. A residual block includes a skip connection, which helps avoid the vanishing-gradient problem. The ResNet18 model consists of four convolutional stages of two residual blocks each [23]. The ResNet34 model also consists of four convolutional stages, with three, four, six, and three residual blocks [23]. The ResNet101 model consists of four convolutional stages with three, four, twenty-three, and three residual blocks [23]. The ResNet152 model consists of four convolutional stages with three, eight, thirty-six, and three residual blocks [23]. The output of the last (fourth) stage is passed through a dense layer that flattens the 2-D data into 1-D form [23]; this output is passed to the final output layer, which consists of a single neuron with a sigmoid activation function to produce the result [23]. All five deep residual network models share this single-neuron sigmoid output.
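The skip connection that the text credits with avoiding the vanishing-gradient problem can be illustrated with a toy scalar residual block (a minimal sketch, assuming an identity shortcut and the ReLU defined in (1); the transformation `f` stands in for the block's convolutional layers and is not part of the actual ResNet code):

```python
def relu(x):
    return x if x > 0 else 0.0

def residual_block(x, f):
    """Identity residual block: output = ReLU(f(x) + x).
    Even if f collapses toward zero, the input still flows through the skip path,
    which is what keeps gradients from vanishing in very deep stacks."""
    return relu(f(x) + x)

# Degenerate transformation f(x) = 0: the block reduces to ReLU(x).
print(residual_block(2.5, lambda x: 0.0))      # 2.5
# A learned transformation adds a refinement on top of the input.
print(residual_block(2.5, lambda x: 0.5 * x))  # 3.75
```

Real ResNet blocks apply the same pattern to tensors, with `f` being two or three convolution/batch-norm layers.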

Dataset preparation

We used publicly available dermatological datasets sourced from credible repositories, namely the dataset by Ismail Hossain (2015) from Kaggle and a dataset from Mendeley, for training and evaluation. The dataset comprises high-resolution images of skin lesions, categorized into two groups: melanocytic nevi and normal skin. Normal skin images are assigned to the TRUE class and melanocytic nevi images to the FALSE class. Each image was preprocessed to a consistent resolution of 224 × 224 pixels, matching the input size of all five deep residual network models (ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152). Four sample skin images from the Kaggle and Mendeley datasets are shown in Figure 1.

Figure 1: Preprocessed skin images: (a) melanocytic nevi, (b) melanocytic nevi, (c) normal skin, (d) normal skin.

Melanocytic nevi images were obtained from the Kaggle repository and normal skin images from the Mendeley repository. The dataset consists of 9792 images in total, covering both classes, and was separated into three subsets: a training subset of 6249 images (64%), a validation subset of 1761 images (18%), and a test subset of 1782 images (18%) [23]. All five deep residual network models (ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152) were trained using the Adam optimizer with a learning rate of 0.0001 [23]. Binary cross-entropy was used as the loss function for the binary classification task. All five models were implemented in a TensorFlow environment using the Python programming language.
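The 64/18/18 split described above can be reproduced with a simple index shuffle (a standalone sketch using the subset sizes reported in the text; the seed and index-based bookkeeping are illustrative choices, not the authors' code):

```python
import random

TOTAL = 9792
TRAIN, VAL, TEST = 6249, 1761, 1782   # subset sizes reported in the text
assert TRAIN + VAL + TEST == TOTAL

indices = list(range(TOTAL))
random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(indices)

train_idx = indices[:TRAIN]
val_idx   = indices[TRAIN:TRAIN + VAL]
test_idx  = indices[TRAIN + VAL:]

print(len(train_idx), len(val_idx), len(test_idx))  # 6249 1761 1782
```

Because the three slices are disjoint and cover every index, no image leaks between the training, validation, and test subsets.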

Result analysis and discussion

A. ResNet50

The ResNet50 model was trained with a mini-batch size of 32 images for both the training and validation sets. Each epoch iterated over 195 training steps and 55 validation steps until all images were consumed [23].

From Figure 2(a), the training and validation accuracy increase initially and then approach 1.00, i.e., 100%, by 50 epochs [23]. Therefore, the ResNet50 model converges correctly. Similarly, Figure 2(b) shows that the margin between the training and validation loss is very small [23].

Figure 2: Loss and accuracy curves from the ResNet50 model: (a) accuracy, (b) loss.

The ResNet50 model was tested on the 1782 images of the test subset, of which 1196 are melanocytic nevi and 586 are normal skin. From Figure 3, 1195 images that are actually melanocytic nevi were predicted as melanocytic nevi, 1 melanocytic nevi image was predicted as normal skin, 0 normal skin images were predicted as melanocytic nevi, and 586 normal skin images were predicted as normal skin. The performance metrics positive predictive value (PPV), true-positive rate (TPR), and f1-score were calculated for both the normal skin and melanocytic nevi classes [22,23]. Overall accuracy is defined as the ratio of correct predictions to all predictions [22]. PPV is the ratio of true positives (TP) to the sum of TP and false positives (FP) [22]. TPR is the ratio of TP to the sum of TP and false negatives (FN) [22]. The f1-score is the harmonic mean of PPV and TPR [22].
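The metric definitions above can be checked directly against the Figure 3 confusion-matrix counts (plain Python, using the numbers reported in the text with normal skin as the positive class):

```python
# Confusion-matrix counts for the ResNet50 model, normal skin as the positive class
tp, fn = 586, 0      # normal skin correctly classified / mistaken for melanocytic nevi
tn, fp = 1195, 1     # melanocytic nevi correctly classified / mistaken for normal skin

accuracy = (tp + tn) / (tp + tn + fp + fn)
ppv      = tp / (tp + fp)             # positive predictive value (precision)
tpr      = tp / (tp + fn)             # true-positive rate (recall)
f1       = 2 * ppv * tpr / (ppv + tpr)

# All four values round to 1.00 at two decimals, matching Table 1.
print(f"accuracy={accuracy:.4f} PPV={ppv:.4f} TPR={tpr:.4f} f1={f1:.4f}")
```

The single false positive is why PPV is not exactly 1 before rounding (586/587 ≈ 0.998), but it rounds to the 1.00 reported in Table 1.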

Figure 3: Confusion matrix from the ResNet50 model.

From Table 1, the ResNet50 model has an overall accuracy of 1.00, i.e., 100%. It achieves unity values (to two decimals) of PPV and TPR for both the normal skin and melanocytic nevi classes [22]. The macro average is the unweighted mean of the per-class PPV, TPR, and f1-score [22]. The weighted average is the support-weighted mean of the per-class metrics [23]. Since there are zero false negatives and only a single false positive, the ResNet50 model achieves unity values of PPV, TPR, and f1-score for both classes, and consequently unity macro and weighted averages as well [22,23].

Table 1: Classification report for the ResNet50 model.
label | Positive predictive value (PPV) | True-positive-rate (TPR) | f1-score | support
Not 1 (melanocytic nevi) | 1.00 | 1.00 | 1.00 | 1196
1 (normal skin) | 1.00 | 1.00 | 1.00 | 586
accuracy | | | 1.00 | 1782
macro avg | 1.00 | 1.00 | 1.00 | 1782
weighted avg | 1.00 | 1.00 | 1.00 | 1782

The FPR-TPR characteristic (ROC curve) indicates the ability of the binary classifier to distinguish between the normal skin and melanocytic nevi classes [22]. The area under the curve (AUC) summarizes this characteristic as a single measure of classification performance between the two classes.
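Given a set of (FPR, TPR) operating points sorted by FPR, the AUC can be computed with the trapezoidal rule (an illustrative calculation on hypothetical points, not the paper's data):

```python
def auc_trapezoid(fpr, tpr):
    """Area under an ROC curve given FPR/TPR points sorted by increasing FPR."""
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
    return area

# A perfect classifier: TPR jumps to 1 while FPR is still 0.
print(auc_trapezoid([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0
# A random classifier: TPR rises linearly with FPR.
print(auc_trapezoid([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]))  # 0.5
```

An AUC of 1.00, as reported for the models below, corresponds to the first case: some decision threshold separates the two classes with no trade-off between TPR and FPR.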

From Figure 4, the ResNet50 model has an AUC of 1.00, i.e., 100%, indicating excellent binary classification performance on the skin image classification task [23].

Figure 4: ROC curve from the ResNet50 model.

B. ResNet18

The ResNet18 model was trained with a mini-batch size of 32 images for both the training and validation sets. Each epoch iterated over 195 training steps and 55 validation steps until all images were consumed [23].

From Figure 5(b), the training and validation accuracy fluctuate throughout the 50 epochs [23]. After training, the training accuracy is 1.00 (100%) and the validation accuracy is 0.97 (97%). The margin between them is small; therefore, the ResNet18 model converges correctly. Similarly, Figure 5(a) shows that the margin between the training and validation loss is also small [23].

Figure 5: Loss and accuracy curves from the ResNet18 model: (a) loss, (b) accuracy.

The ResNet18 model was tested on the 1782 images of the test subset, of which 1196 are melanocytic nevi and 586 are normal skin. From Figure 6, 1180 melanocytic nevi images were predicted as melanocytic nevi, 16 melanocytic nevi images were predicted as normal skin, 15 normal skin images were predicted as melanocytic nevi, and 571 normal skin images were predicted as normal skin.

Figure 6: Confusion matrix from the ResNet18 model.

From Table 2, the ResNet18 model has an overall accuracy of 0.98, i.e., 98%. Within each class, PPV, TPR, and f1-score take the same value [22]: 0.99 for melanocytic nevi and 0.97 for normal skin. The macro and weighted averages coincide, each coming to 0.98 for PPV, TPR, and f1-score.

Table 2: Classification report for the ResNet18 model.
label | Positive predictive value (PPV) | True-positive-rate (TPR) | f1-score | support
Not 1 (melanocytic nevi) | 0.99 | 0.99 | 0.99 | 1196
1 (normal skin) | 0.97 | 0.97 | 0.97 | 586
accuracy | | | 0.98 | 1782
macro avg | 0.98 | 0.98 | 0.98 | 1782
weighted avg | 0.98 | 0.98 | 0.98 | 1782

From Figure 7, the ResNet18 model has an AUC of 1.00, i.e., 100%, indicating excellent binary classification performance on the skin image classification task [23].

Figure 7: ROC curve from the ResNet18 model.

C. ResNet34

The ResNet34 model was trained with a mini-batch size of 32 images for both the training and validation sets. Each epoch iterated over 195 training steps and 55 validation steps until all images were consumed [23].

From Figure 8(b), the training and validation accuracy fluctuate throughout the 50 epochs [23]. After training, the training accuracy is 1.00 (100%) and the validation accuracy is 0.95 (95%). The margin between them is small; therefore, the ResNet34 model converges correctly. Similarly, Figure 8(a) shows that the margin between the training and validation loss is also small [23].

Figure 8: Loss and accuracy curves from the ResNet34 model: (a) loss, (b) accuracy.

The ResNet34 model was tested on the 1782 images of the test subset, of which 1196 are melanocytic nevi and 586 are normal skin. From Figure 9, 1192 melanocytic nevi images were predicted as melanocytic nevi, 4 melanocytic nevi images were predicted as normal skin, 69 normal skin images were predicted as melanocytic nevi, and 517 normal skin images were predicted as normal skin.

Figure 9: Confusion matrix from the ResNet34 model.

From Table 3, the ResNet34 model has an overall accuracy of 0.96, i.e., 96%. It shows higher PPV and lower TPR for the normal skin class, and lower PPV and higher TPR for the melanocytic nevi class [22]: PPV 0.95, TPR 1.00, and f1-score 0.97 for melanocytic nevi; PPV 0.99, TPR 0.88, and f1-score 0.93 for normal skin. The macro and weighted averages are 0.97 and 0.96 for PPV, 0.94 and 0.96 for TPR, and 0.95 and 0.96 for f1-score, respectively.

Table 3: Classification report for the ResNet34 model.
label | Positive predictive value (PPV) | True-positive-rate (TPR) | f1-score | support
Not 1 (melanocytic nevi) | 0.95 | 1.00 | 0.97 | 1196
1 (normal skin) | 0.99 | 0.88 | 0.93 | 586
accuracy | | | 0.96 | 1782
macro avg | 0.97 | 0.94 | 0.95 | 1782
weighted avg | 0.96 | 0.96 | 0.96 | 1782
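The macro and weighted averages in Table 3 can be verified from the per-class values and supports: the macro average is the unweighted mean over classes, while the weighted average weights each class by its support (plain Python, using the reported numbers; shown here for PPV only):

```python
support = {"melanocytic": 1196, "normal": 586}
ppv     = {"melanocytic": 0.95, "normal": 0.99}   # per-class PPV from Table 3

total        = sum(support.values())
macro_ppv    = sum(ppv.values()) / len(ppv)
weighted_ppv = sum(ppv[c] * support[c] for c in ppv) / total

print(round(macro_ppv, 2), round(weighted_ppv, 2))  # 0.97 0.96
```

The weighted average sits closer to the melanocytic nevi value because that class carries roughly twice the support of normal skin; the same computation reproduces the TPR and f1-score averages in Tables 1-5.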

From Figure 10, the ResNet34 model has an AUC of 1.00, i.e., 100%, indicating excellent binary classification performance on the skin image classification task [23].

Figure 10: ROC curve from the ResNet34 model.

D. ResNet101

The ResNet101 model was trained with a mini-batch size of 32 images for both the training and validation sets. Each epoch iterated over 195 training steps and 55 validation steps until all images were consumed [23].

From Figure 11(b), the training and validation accuracy fluctuate throughout the 50 epochs [23]. After training, the training accuracy is 0.95 (95%) and the validation accuracy is 0.80 (80%). The margin between them is large; therefore, the ResNet101 model is overfitting. Similarly, Figure 11(a) shows a large margin between the training and validation loss [23].

Figure 11: Loss and accuracy curves from the ResNet101 model: (a) loss, (b) accuracy.

The ResNet101 model was tested on the 1782 images of the test subset, of which 1196 are melanocytic nevi and 586 are normal skin. From Figure 12, 1189 melanocytic nevi images were predicted as melanocytic nevi, 7 melanocytic nevi images were predicted as normal skin, 303 normal skin images were predicted as melanocytic nevi, and 283 normal skin images were predicted as normal skin.

Figure 12: Confusion matrix from the ResNet101 model.

From Table 4, the ResNet101 model has an overall accuracy of 0.83, i.e., 83%. It shows higher PPV and lower TPR for the normal skin class, and lower PPV and higher TPR for the melanocytic nevi class [22]: PPV 0.80, TPR 0.99, and f1-score 0.88 for melanocytic nevi; PPV 0.98, TPR 0.48, and f1-score 0.65 for normal skin. The macro and weighted averages are 0.89 and 0.86 for PPV, 0.74 and 0.83 for TPR, and 0.77 and 0.81 for f1-score, respectively.

Table 4: Classification report for the ResNet101 model.
label | Positive predictive value (PPV) | True-positive-rate (TPR) | f1-score | support
Not 1 (melanocytic nevi) | 0.80 | 0.99 | 0.88 | 1196
1 (normal skin) | 0.98 | 0.48 | 0.65 | 586
accuracy | | | 0.83 | 1782
macro avg | 0.89 | 0.74 | 0.77 | 1782
weighted avg | 0.86 | 0.83 | 0.81 | 1782

From Figure 13, the ResNet101 model has an AUC of 1.00, i.e., 100%, indicating excellent binary classification performance on the skin image classification task [23].

Figure 13: ROC curve from the ResNet101 model.

E. ResNet152

The ResNet152 model was trained with a mini-batch size of 32 images for both the training and validation sets. Each epoch iterated over 195 training steps and 55 validation steps until all images were consumed [23].

From Figure 14(b), the training and validation accuracy fluctuate throughout the 50 epochs [23]. After training, the training accuracy is 0.95 (95%) and the validation accuracy is 0.80 (80%). The margin between them is large; therefore, the ResNet152 model is overfitting. Similarly, Figure 14(a) shows a large margin between the training and validation loss [23].

Figure 14: Loss and accuracy curves from the ResNet152 model: (a) loss, (b) accuracy.

The ResNet152 model was tested on the 1782 images of the test subset, of which 1196 are melanocytic nevi and 586 are normal skin. From Figure 15, 1196 melanocytic nevi images were predicted as melanocytic nevi, 0 melanocytic nevi images were predicted as normal skin, 394 normal skin images were predicted as melanocytic nevi, and 192 normal skin images were predicted as normal skin.

Figure 15: Confusion matrix from the ResNet152 model.

From Table 5, the ResNet152 model has an overall accuracy of 0.78, i.e., 78%. It shows higher PPV and lower TPR for the normal skin class, and lower PPV and higher TPR for the melanocytic nevi class [22]: PPV 0.75, TPR 1.00, and f1-score 0.86 for melanocytic nevi; PPV 1.00, TPR 0.33, and f1-score 0.49 for normal skin. The macro and weighted averages are 0.88 and 0.83 for PPV, 0.66 and 0.78 for TPR, and 0.68 and 0.74 for f1-score, respectively.

Table 5: Classification report for the ResNet152 model.
label | Positive predictive value (PPV) | True-positive-rate (TPR) | f1-score | support
Not 1 (melanocytic nevi) | 0.75 | 1.00 | 0.86 | 1196
1 (normal skin) | 1.00 | 0.33 | 0.49 | 586
accuracy | | | 0.78 | 1782
macro avg | 0.88 | 0.66 | 0.68 | 1782
weighted avg | 0.83 | 0.78 | 0.74 | 1782

From Figure 16, the ResNet152 model has an AUC of 1.00, i.e., 100%, indicating excellent binary classification performance on the skin image classification task [23].

Figure 16: ROC curve from the ResNet152 model.

Conclusion

In this paper, binary classification of skin images into melanocytic nevi and normal skin was performed using deep learning networks. Normal skin images were assigned to the TRUE class and melanocytic nevi images to the FALSE class. The dataset of 9792 images was passed through five deep residual network models, i.e., the ResNet50, ResNet18, ResNet34, ResNet101, and ResNet152 models, to perform the classification and produce the results [23]. Results such as error/accuracy curves, the error (confusion) matrix, and the false-positive-rate (FPR) vs. true-positive-rate (TPR) characteristic are shown to confirm the work [23].

For the ResNet50 model, the training/validation error is low, so the model converges, and the training and validation accuracy are high [23]. Its error matrix shows more melanocytic nevi images than normal skin images. Its overall accuracy is 1.00 (100%) and its AUC is 1.00 [23]. For the ResNet18 model, the training/validation error is likewise low and the model converges; its overall accuracy is 0.98 (98%) and its AUC is 1.00 [23]. For the ResNet34 model, the error is low and the model converges; its overall accuracy is 0.96 (96%) and its AUC is 1.00 [23]. For the ResNet101 model, the training/validation error is higher, so the model does not converge well; its overall accuracy is 0.83 (83%) and its AUC is 1.00 [23]. For the ResNet152 model, the error is also higher and the model does not converge well; its overall accuracy is 0.78 (78%) and its AUC is 1.00 [23]. Of the five deep residual network models, ResNet152 has the lowest accuracy.

Further, all five deep residual network models achieve a very good AUC for the skin image classification task, and show good binary classification performance for normal skin images compared to melanocytic nevi images. Even though the confusion matrices contain more melanocytic nevi images than normal skin images, the AUC is high for all five models. Therefore, the deep residual network models show good binary classification performance for the skin image classification task; in this manner, binary classification of skin images, i.e., skin image recognition, has been performed using a deep learning technique.


References

  1. Huang JM, Chikeka I, Hornyak TJ. Melanocytic Nevi and the Genetic and Epigenetic Control of Oncogene-Induced Senescence. Dermatol Clin. 2017 Jan;35(1):85-93. doi: 10.1016/j.det.2016.08.001. PMID: 27890240; PMCID: PMC5391772.

  2. Gandini S, Sera F, Cattaruzza MS, Pasquini P, Abeni D, Boyle P, Melchi CF. Meta-analysis of risk factors for cutaneous melanoma: I. Common and atypical naevi. Eur J Cancer. 2005 Jan;41(1):28-44. doi: 10.1016/j.ejca.2004.10.015. PMID: 15617989.

  3. Harvey NT, Wood BA. A Practical Approach to the Diagnosis of Melanocytic Lesions. Arch Pathol Lab Med. 2019 Jul;143(7):789-810. doi: 10.5858/arpa.2017-0547-RA. Epub 2018 Jul 30. PMID: 30059258.

  4. Jibhakate A, Parnerkar P, Mondal S, Bharambe V, Mantri S. Skin lesion classification using deep learning and image processing. In: 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS); 2020 Dec; Palladam, India. Piscataway (NJ): IEEE; 2020;333-40.

  5. Sun Q, Tang Y, Wang S, Chen J, Xu H, Ling Y. A deep learning-based melanocytic nevi classification algorithm by leveraging physiologic-inspired knowledge and channel encoded information. IEEE Access. 2024;12.

  6. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb 2;542(7639):115-118. doi: 10.1038/nature21056. Epub 2017 Jan 25. Erratum in: Nature. 2017 Jun 28;546(7660):686. doi: 10.1038/nature22985. PMID: 28117445; PMCID: PMC8382232.

  7. Kassem MA, Hosny KM, Damaševičius R, Eltoukhy MM. Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review. Diagnostics (Basel). 2021 Jul 31;11(8):1390. doi: 10.3390/diagnostics11081390. PMID: 34441324; PMCID: PMC8391467.

  8. Li LF, Wang X, Hu WJ, Xiong NN, Du YX, Li BS. Deep learning in skin disease image recognition: A review. IEEE Access. 2020;8:208264–80.

  9. Theckedath D, Sedamkar RR. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Comput Sci. 2020;1(2):79.

  10. Calderón C, Sanchez K, Castillo S, Arguello H. BILSK: A bilinear convolutional neural network approach for skin lesion classification. Comput Methods Programs Biomed Update. 2021;1:100036.

  11. Gessert N, Sentker T, Madesta F, Schmitz R, Kniep H, Baltruschat I, Werner R, Schlaefer A. Skin Lesion Classification Using CNNs With Patch-Based Attention and Diagnosis-Guided Loss Weighting. IEEE Trans Biomed Eng. 2020 Feb;67(2):495-503. doi: 10.1109/TBME.2019.2915839. Epub 2019 May 9. PMID: 31071016.

  12. Abbas Q, Celebi ME. DermoDeep—A classification of melanoma-nevus skin lesions using multi-feature fusion of visual features and deep neural network. Multimed Tools Appl. 2019;78(16):23559–80.

  13. Sinha S, Gupta N. Classification of melanocytic nevus images using BigTransfer (BiT): A study on a novel transfer learning-based method to classify melanocytic nevus images. In: 2022 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS); 2022 Nov; Greater Noida, India. Piscataway (NJ): IEEE; 2022;708–12.

  14. Fraiwan M, Faouri E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. Sensors (Basel). 2022 Jun 30;22(13):4963. doi: 10.3390/s22134963. PMID: 35808463; PMCID: PMC9269808.

  15. Xin C, Liu Z, Zhao K, Miao L, Ma Y, Zhu X, Zhou Q, Wang S, Li L, Yang F, Xu S, Chen H. An improved transformer network for skin cancer classification. Comput Biol Med. 2022 Oct;149:105939. doi: 10.1016/j.compbiomed.2022.105939. Epub 2022 Aug 10. PMID: 36037629.

  16. Mahesh U, Kiran B. Three-dimensional (3-D) objects classification by means of phase-only digital holographic information using Alex Network. In: 2024 International Conference on Signal Processing, Computation, Electronics, Power and Telecommunication (IConSCEPT); 2024 Jul; [location unspecified]. Piscataway (NJ): IEEE; 2024;1–5.

  17. Argenziano G, Zalaudek I, Ferrara G, Hofmann-Wellenhof R, Soyer HP. Proposal of a new classification system for melanocytic naevi. Br J Dermatol. 2007 Aug;157(2):217-27. doi: 10.1111/j.1365-2133.2007.07972.x. Epub 2007 Jun 6. PMID: 17553053.

  18. Kumar G, Bhatia PK. A detailed review of feature extraction in image processing systems. In: 2014 Fourth International Conference on Advanced Computing & Communication Technologies (ACCT); 2014 Feb; Rohtak, India. Piscataway (NJ): IEEE; 2014;5–12.

  19. Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. J Big Data. 2016;3:9.

  20. Tschandl P, Rosendahl C, Kittler H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci Data. 2018 Aug 14;5:180161. doi: 10.1038/sdata.2018.161. PMID: 30106392; PMCID: PMC6091241.

  21. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2009 Jun; Miami, FL. Piscataway (NJ): IEEE; 2009; 248–55.

  22. Reddy BL, Uma Mahesh RN, Nelleri A. Deep convolutional neural network for three-dimensional objects classification using off-axis digital Fresnel holography. J Mod Opt. 2022;69(13):705–17.

  23. Mahesh RU, Nagaraju S. Three-dimensional (3-D) objects classification by means of phase-only digital holographic information using deep learning. In: Data Science & Exploration in Artificial Intelligence: Proceedings of the First International Conference on Data Science & Exploration in Artificial Intelligence (CODE-AI 2024); 2024 Jul 3–4; Bangalore, India. Volume 1. Boca Raton (FL): CRC Press; 2025;363.

  24. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 Jun 27–30; Las Vegas, NV. Piscataway (NJ): IEEE; 2016;770–8. doi: 10.1109/CVPR.2016.90.

About this article

Cite this article

Uma Mahesh RN, Harsha Jain HJ, Hemanth Kumar CS, Umrao S, Mohith DL. Melanocytic Nevi Classification using Transfer Learning. IgMin Res. July 08, 2025; 3(7): 258-267. IgMin ID: igmin307; DOI:10.61927/igmin307; Available at: igmin.link/p307

Received: 07 May, 2025
Accepted: 07 Jul, 2025
Published: 08 Jul, 2025

Topics
Machine Learning, Artificial Intelligence
