




Recent advancements and perspectives in the diagnosis of skin diseases using machine learning and deep learning: a review.


1. Introduction

2. Materials

2.1. Study Selection

2.2. Datasets

2.3. Selection Criteria of AI Algorithms for Different Types of Skin Images

3.1. Segmentation Methods

3.1.1. Traditional Machine Learning

3.1.2. Deep Learning

3.2. Classification Methods

3.2.1. Traditional Machine Learning

3.2.2. Deep Learning

4.1. Indicators of Evaluation

4.2. Analysis of Results

5. Discussions

5.1. Current State of Research

5.2. Challenges

5.2.1. Limitations of Datasets

5.2.2. Explainability of Deep Learning Methods

5.2.3. Homogenized Research Directions

5.2.4. More Innovative Algorithms Are Needed

5.3. Future Directions

5.3.1. Establish a Standardized Dermatological Image Dataset

5.3.2. Provide Reasonable Explanations for Predicted Results

5.3.3. Increase the Diversity of the Types of Research

5.3.4. Actively Explore Innovative Models and Methods

6. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Conflicts of Interest

  • Haenssle, H.A.; Fink, C.; Schneiderbauer, R.; Toberer, F.; Buhl, T.; Blum, A.; Kalloo, A.; Hassen, A.B.; Thomas, L.; Enk, A.; et al. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 2018 , 29 , 1836–1842. [ Google Scholar ] [ CrossRef ]
  • Maron, R.C.; Weichenthal, M.; Utikal, J.S.; Hekler, A.; Berking, C.; Hauschild, A.; Enk, A.H.; Haferkamp, S.; Klode, J.; Schadendor, D.; et al. Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks. Eur. J. Cancer 2019 , 119 , 57–65. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep learning techniques for medical image segmentation: Achievements and challenges. J. Digit. Imaging 2019 , 32 , 582–596. [ Google Scholar ] [ CrossRef ]
  • He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [ Google Scholar ]
  • Masaya, T.; Atsushi, S.; Kosuke, S.; Yasuhiro, F.; Kenshi, Y.; Manabu, F.; Kohei, M.; Youichirou, N.; Shin’ichi, S.; Akinobu, S. Classification of large-scale image database of various skin diseases using deep learning. Int. J. Comput. Assist. Radiol. Surg. 2021 , 16 , 1875–1887. [ Google Scholar ]
  • Bozorgtabar, B.; Sedai, S.; Roy, P.K.; Garnavi, R. Skin lesion segmentation using deep convolution networks guided by local unsupervised learning. IBM J. Res. Dev. 2017 , 61 , 6:1–6:8. [ Google Scholar ] [ CrossRef ]
  • Thanh, D.N.H.; Hien, N.N.; Surya Prasath, V.B.; Thanh, L.T.; Hai, N.H. Automatic initial boundary generation methods based on edge detectors for the level set function of the Chan-Vese segmentation model and applications in biomedical image processing. In Frontiers in Intelligent Computing: Theory and Applications ; Springer: Singapore, 2020; pp. 171–181. [ Google Scholar ]
  • Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [ Google Scholar ]
  • Codella, N.C.F.; Nguyen, Q.B.; Pankanti, S.; Gutman, D.A.; Helba, B.; Halpern, A.C.; Smith, J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2017 , 61 , 5:1–5:15. [ Google Scholar ] [ CrossRef ]
  • Samia, B.; Meftah, B.; Lézoray, O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2022 , 74 , 101701. [ Google Scholar ]
  • Lopez, A.R.; Giro-i-Nieto, X.; Burdick, J.; Marques, O. Skin lesion classification from dermoscopic images using deep learning techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017; IEEE: New York, NY, USA, 2017; pp. 49–54. [ Google Scholar ]
  • Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P.C. Techniques and algorithms for computer aided diagnosis of pigmented skin lesions—A review. Biomed. Signal Process. Control 2018 , 39 , 237–262. [ Google Scholar ] [ CrossRef ]
  • Codella, N.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Munich, Germany, 5 October 2015; Springer International Publishing: Cham, Switzerland, 2015; pp. 118–126. [ Google Scholar ]
  • Celebi, M.E.; Wen, Q.; Iyatomi, H.; Shimizu, K.; Zhou, H.; Schaefer, G. A state-of-the-art survey on lesion border detection in dermoscopy images. Dermoscopy Image Anal. 2015 , 10 , 97–129. [ Google Scholar ]
  • Salahuddin, Z.; Woodruff, H.C.; Chatterjee, A.; Lambin, P. Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Comput. Biol. Med. 2022 , 140 , 105111. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Naeem, A.; Farooq, M.S.; Khelifi, A.; Abid, A. Malignant melanoma classification using deep learning: Datasets, performance measurements, challenges and opportunities. IEEE Access 2020 , 8 , 110575–110597. [ Google Scholar ] [ CrossRef ]
  • Yao, K.; Su, Z.; Huang, K.; Yang, X.; Sun, J.; Hussain, A.; Coenen, F. A novel 3D unsupervised domain adaptation framework for cross-modality medical image segmentation. IEEE J. Biomed. Health Inform. 2022 , 26 , 4976–4986. [ Google Scholar ] [ CrossRef ]
  • Silberg, J.; Manyika, J. Notes from the AI frontier: Tackling bias in AI (and in humans). McKinsey Glob. Inst. 2019 , 1 . [ Google Scholar ]
  • Castelvecchi, D. Can we open the black box of AI? Nat. News 2016 , 538 , 20. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Oliveira, R.B.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R.S. Computational methods for pigmented skin lesion classification in images: Review and future trends. Neural Comput. Appl. 2018 , 29 , 613–636. [ Google Scholar ] [ CrossRef ]
  • Muhaba, K.A.; Dese, K.; Aga, T.M.; Zewdu, F.T.; Simegn, G.L. Automatic skin disease diagnosis using deep learning from clinical image and patient information. Ski. Health Dis. 2022 , 2 , e81. [ Google Scholar ] [ CrossRef ]
  • Serte, S.; Serener, A.; Al-Turjman, F. Deep learning in medical imaging: A brief review. Trans. Emerg. Telecommun. Technol. 2022 , 33 , e4080. [ Google Scholar ] [ CrossRef ]
  • Li, H.; Pan, Y.; Zhao, J.; Zhang, L. Skin disease diagnosis with deep learning: A review. Neurocomputing 2021 , 464 , 364–393. [ Google Scholar ] [ CrossRef ]
  • Marcus, G.; Davis, E. Rebooting AI: Building Artificial Intelligence We Can Trust ; Vintage: New York, NY, USA, 2019. [ Google Scholar ]
  • Goyal, M.; Knackstedt, T.; Yan, S.; Hassanpour, S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput. Biol. Med. 2020 , 127 , 104065. [ Google Scholar ] [ CrossRef ]
  • Zhang, B.; Zhou, X.; Luo, Y.; Zhang, H.; Yang, H.; Ma, J.; Ma, L. Opportunities and challenges: Classification of skin disease based on deep learning. Chin. J. Mech. Eng. 2021 , 34 , 112. [ Google Scholar ] [ CrossRef ]
  • Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. Swinir: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 1833–1844. [ Google Scholar ]
  • Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 205–218. [ Google Scholar ]
  • Peng, L.; Wang, C.; Tian, G.; Liu, G.; Li, G.; Lu, Y.; Yang, J.; Chen, M.; Li, Z. Analysis of CT scan images for COVID-19 pneumonia based on a deep ensemble framework with DenseNet, Swin transformer, and RegNet. Front. Microbiol. 2022 , 13 , 995323. [ Google Scholar ] [ CrossRef ]
  • Chi, J.; Sun, Z.; Wang, H.; Lyu, P.; Yu, X.; Wu, C. CT image super-resolution reconstruction based on global hybrid attention. Comput. Biol. Med. 2022 , 150 , 106112. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Liu, L.; Liang, C.; Xue, Y.; Chen, T.; Chen, Y.; Lan, Y.; Wen, J.; Shao, X.; Chen, J. An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input. Dermatol. Ther. 2023 , 13 , 569–579. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Shamshad, F.; Khan, S.; Zamir, S.W.; Khan, M.H.; Hayat, M.; Khan, F.S.; Fu, H. Transformers in medical imaging: A survey. Med. Image Anal. 2023 , 102802. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Jonsson, A. Deep reinforcement learning in medicine. Kidney Dis. 2019 , 5 , 18–22. [ Google Scholar ] [ CrossRef ]
  • Ghesu, F.C.; Georgescu, B.; Zheng, Y.; Grbic, S.; Maier, A.; Hornegger, J.; Comaniciu, D. Multi-scale deep reinforcement learning for real-time 3D-landmark detection in CT scans. IEEE Trans. Pattern Anal. Mach. Intell. 2017 , 41 , 176–189. [ Google Scholar ] [ CrossRef ]
  • Xu, J.; Croft, W.B. Query expansion using local and global document analysis. In ACM SIGIR Forum ; ACM: New York, NY, USA, 2017; Volume 51, pp. 168–175. [ Google Scholar ]
  • Wu, W.T.; Li, Y.J.; Feng, A.Z.; Li, L.; Huang, T.; Xu, A.D.; Lyu, J. Data mining in clinical big data: The frequently used databases, steps, and methodological models. Mil. Med. Res. 2021 , 8 , 44. [ Google Scholar ] [ CrossRef ] [ PubMed ]


| Imaging Equipment | Skin Imaging Standards | Model of Applicability |
| --- | --- | --- |
|  | 1. Overall skin lesion: natural light was used as the light source, and the mode and magnification were noted. 2. Local details: the maximum magnification and clear images of the skin lesions were taken. | AlexNet, VGG, GoogLeNet, ResNet, CNN [ , , , , ] |
|  | 1. Longitudinal scanning: we scanned from the stratum corneum to the superficial dermis; each layer’s thickness was 5 μm. 2. Horizontal scanning: pathological changes in the stratum corneum, stratum granulosum, stratum spinosum, stratum basale, dermo-epidermal junction, and superficial dermis were scanned. 3. Local details: for each layer of pathological changes, photos of local details were taken. | SVM, CNN, InceptionV3, Bayesian model, Nested U-net [ , , , , ] |
|  | 1. Longitudinal scanning: the lesion area was scanned using high-frequency or ultra-high-frequency ultrasound, and the scanning frequency (20 MHz, 50 MHz, etc.) was marked. 2. Overall and detailed imaging: it was able to clearly display the epidermis, dermis, and subcutaneous tissue, and measure the range, depth, blood flow, and nature of skin lesions and their relationship with surrounding tissues. | DenseNet-201, GoogleNet, Inception-ResNet-v2, ResNet-101, MobileNet [ ] |
| Reference | Method | AC | SE | SP | JA | DI |
| --- | --- | --- | --- | --- | --- | --- |
| [ ] | LSC (ML, 2015) | 96.2% | 92.6% | - | 0.81 | - |
| [ ] | K-means (ML, 2016) | 90% | - | - | - | - |
| [ ] | RGB threshold (ML, 2019) | - | - | - | 0.789 | 0.876 |
| [ ] | XYZ threshold (ML, 2019) | - | - | - | 0.8 | 0.884 |
| [ ] | ICA (ML, 2019) | - | 99.49% | 98.46% | 0.7087 | - |
| [ ] | FCM (ML, 2020) | 90.89% | 92.84% | 88.27% | - | - |
| [ ] | FCN (DL, 2017) | 95.3% | 93.8% | 95.2% | 0.841 | 0.907 |
| [ ] | FCDN (DL, 2017) | 99.53% | 87.9% | 97.9% | 0.783 | 0.865 |
| [ ] | DFCN (DL, 2017) | 93.4% | 82.5% | 97.5% | 0.765 | 0.849 |
| [ ] | FCRN (DL, 2017) | 85.5% | 54.7% | 93.1% | - | - |
| [ ] | SegNet (DL, 2021) | - | 95.6% | 95.42% | - | 0.749 |
| [ ] | U-Net (DL, 2021) | - | 96.4% | 94.8% | - | 0.733 |
| [ ] | FCN (DL, 2018) | - | - | - | 0.884 | - |
| [ ] | ResNet34 (DL, 2019) | - | - | - | 0.768 | 0.851 |
| [ ] | U-Net (DL, 2019) | 97% | 90% | 99% | 0.88 | 0.94 |
| [ ] | FCN U-Net (DL, 2019) | 90% | 96% | - | 0.83 | - |
| [ ] | U-Net, VGG-16 (DL, 2021) | 96.7% | 90.4% | 98% | 0.846 | 0.915 |
| [ ] | MSFCDN (DL, 2018) | 95.3% | 90.1% | 96.7% | 0.785 | 0.869 |
| [ ] | DPFCN (DL, 2019) | 98.9% | 92.4% | 99.6% | 0.852 | 0.916 |
| [ ] | ResU-NeXt++ (DL, 2021) | 96% | - | - | 0.8684 | 0.9235 |
| [ ] | U-Net (DL, 2022) | 90.74% | - | - | 0.7572 | - |
| [ ] | DeepLabv3+ (DL, 2023) | 95% | 90% | 90% | - | - |
| [ ] | W-EFO-E-CNN (DL, 2023) | 98% | 99.54% | 50% | - | 0.987 |
| [ ] | DCNN (DL, 2019) | - | - | - | 0.714 | - |
| [ ] | U-Net (DL, 2020) | - | - | - | 0.887 | - |
| [ ] | SEDSIC (DL, 2021) | 97% | 98% | 96% | 0.94 | 0.97 |
| [ ] | FCN-UTA (DL, 2021) | - | 86.36% | - | 0.7381 | 0.8493 |
| Reference | Methods | AC | SE | SP | Classes | Data Type | Data Size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [ ] | Adaboost (ML, 2015) | 89.35% | 93.5% | 85.2% | 2 | Dermoscopic images | - |
| [ ] | KNN–SVM (ML, 2015) | 85% | - | - | 5 | Clinical images | 726 |
| [ ] | SVM (ML, 2016) | 96.8% | 95.4% | 89.3% | 2 | Dermoscopic images | 320 |
| [ ] | KNN–SVM (ML, 2017) | 90% | - | - | 4 | Dermoscopic images | - |
| [ ] | SVM (ML, 2018) | 92.3% | - | - | 3 | Dermoscopic images | - |
| [ ] | SVM (ML, 2019) | 89.43% | 91.15% | 87.71% | 2 | Dermoscopic images | 1000 |
| [ ] | Naive Bayes (ML, 2020) | 72.7% | 91.7% | 70.1% | 6 | Dermoscopic images | 1646 |
| [ ] | CNN (DL, 2020) | 75% | 73% | 78% | 2 | Dermoscopic images | 1796 |
| [ ] | BLSTM (DL, 2022) | 89.47% | 88.33% | 97.17% | 7 | Dermoscopic images | 10,015 |
| [ ] | Eff2Net (DL, 2022) | 84.70% | 84.70% | - | 4 | Clinical images | 17,327 |
| [ ] | BPNN (DL, 2020) | 99.7% | 99.4% | 100% | 2 | Dermoscopic images | 400 |
| [ ] | DenseNet201 (DL, 2022) | 95.5% | 93.96% | 97.06% | 2 | Dermoscopic images | 3297 |
| [ ] | Inception-ResNet-V2 (DL, 2020) | 87.42% | 97.04% | 96.48% | 4 | Clinical images | 14,000 |
| [ ] | Inception-ResNet V2 (DL, 2019) | 89.63% | 77% | - | 6 | Clinical images | 11,445 |
| [ ] | GoogleNet (DL, 2020) | 99.29% | 99.22% | 99.38% | 2 | Dermoscopic images | 2376 |
| [ ] | AlexNet (DL, 2020) | 98.7% | 95.6% | 99.27% | 7 | Dermoscopic images | 10,015 |
| [ ] | GoogleNet (DL, 2020) | 94.92% | 79.8% | 97% | 8 | Dermoscopic images | 29,439 |
| [ ] | RDCNN (DL, 2022) | 97% | 94% | 98% | 2 | Dermoscopic images | 2206 |
| [ ] | InSiNet (DL, 2022) | 94.59% | 97.5% | 91.18% | 2 | Dermoscopic images | 1471 |
| [ ] | GoogleNet (Inception-V3) (DL, 2020) | 83.78% | 87.5% | 79.41% | 2 | Dermoscopic images | 1471 |
| [ ] | DenseNet-201 (DL, 2020) | 87.84% | 95% | 79.41% | 2 | Dermoscopic images | 1471 |
| [ ] | ResNet152V2 (DL, 2020) | 86.49% | 92.5% | 79.41% | 2 | Dermoscopic images | 1471 |
| [ ] | DenseNet, ResNet (DL, 2023) | 95.1% | 92% | 98.8% | 7 | Dermoscopic images | 10,015 |
| [ ] | U-Net, CNN (DL, 2023) | 97.96% | 84.86% | 97.93% | 7 | Dermoscopic images | 10,015 |
| [ ] | DCNN (DL, 2023) | 97.204% | 97% | - | 7 | Dermoscopic images | 10,015 |
| [ ] | Visual Transformer (DL, 2023) | 93.81% | 90.14% | 98.36% | 7 | Dermoscopic images | 10,015 |
| [ ] | Resnet50, VGG16, Inception v2 (DL, 2019) | 87.8% | 90.9% | 91.9% | 2 | Clinical images | 38,677 |
| [ ] | Cycle GAN, ADRD, Resnet50 (DL, 2020) | 85.69% | - | 90.92% | 2 | Wood lamp images | 10,000 |
| [ ] | YOLO v3, PSPNet, UNet++ (DL, 2022) | 85.02% | 92.91% | - | 3 | Clinical images | 2720 |
| [ ] | LVQ Neural Network (DL, 2017) | 92.22% | - | - | 3 | Clinical images | 1002 |
| Year | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 |
| --- | --- | --- | --- | --- | --- | --- |
| Publication No. | 104 | 127 | 161 | 190 | 245 | 233 |

Share and Cite

Zhang, J.; Zhong, F.; He, K.; Ji, M.; Li, S.; Li, C. Recent Advancements and Perspectives in the Diagnosis of Skin Diseases Using Machine Learning and Deep Learning: A Review. Diagnostics 2023, 13, 3506. https://doi.org/10.3390/diagnostics13233506

Int J Environ Res Public Health

Skin Cancer Detection: A Review Using Deep Learning Techniques

Mehwish Dildar

1 Government Associate College for Women Mari Sargodha, Sargodha 40100, Pakistan; mehwishdildar94@gmail.com

Shumaila Akram

2 Department of Computer Science and Information Technology, University of Sargodha, Sargodha 40100, Pakistan; shumailarana_19@yahoo.com

Muhammad Irfan

3 Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia; miditta@nu.edu.sa

Hikmat Ullah Khan

4 Department of Computer Science, Wah Campus, Comsats University, Wah Cantt 47040, Pakistan; [email protected]

Muhammad Ramzan

5 Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54782, Pakistan

Abdur Rehman Mahmood

6 Department of Computer Science, COMSATS University Islamabad, Islamabad 440000, Pakistan; [email protected]

Soliman Ayed Alsaiari

7 Department of Internal Medicine, Faculty of Medicine, Najran University, Najran 61441, Saudi Arabia; s-alsaiary2@hotmail.com

Abdul Hakeem M Saeed

8 Department of Dermatology, Najran University Hospital, Najran 61441, Saudi Arabia; hakeeemsaeeed@gmail.com

Mohammed Olaythah Alraddadi

9 Department of Internal Medicine, Faculty of Medicine, University of Tabuk, Tabuk 71491, Saudi Arabia; [email protected]

Mater Hussen Mahnashi

10 Department of Medicinal Chemistry, Pharmacy School, Najran University, Najran 61441, Saudi Arabia; matermaha@gmail.com

Associated Data

Not applicable; as this is a review article, no experiments were performed using any data.

Skin cancer is one of the most dangerous forms of cancer. It is caused by unrepaired deoxyribonucleic acid (DNA) in skin cells, which generates genetic defects or mutations in the skin. Skin cancer gradually spreads to other parts of the body, so it is most curable in its initial stages, which makes early detection critical. The increasing rate of skin cancer cases, the high mortality rate, and expensive medical treatment all require that its symptoms be diagnosed early. Considering the seriousness of these issues, researchers have developed various early-detection techniques for skin cancer. Lesion parameters such as symmetry, color, size, and shape are used to detect skin cancer and to distinguish benign skin lesions from melanoma. This paper presents a detailed systematic review of deep learning techniques for the early detection of skin cancer. Research papers published in well-reputed journals and relevant to the topic of skin cancer diagnosis were analyzed. Research findings are presented as tools, graphs, tables, techniques, and frameworks for better understanding.

1. Introduction

Skin cancer is one of the most active types of cancer in the present decade [ 1 ]. As the skin is the body’s largest organ, it is understandable that skin cancer is the most common type of cancer among humans [ 2 ]. It is generally classified into two major categories: melanoma and nonmelanoma skin cancer [ 3 ]. Melanoma is a hazardous, rare, and deadly type of skin cancer. According to statistics from the American Cancer Society, melanoma accounts for only 1% of skin cancer cases but results in a higher death rate [ 4 ]. Melanoma develops in cells called melanocytes. It starts when healthy melanocytes begin to grow out of control, creating a cancerous tumor. It can affect any area of the human body, but it usually appears on areas exposed to sun rays, such as the hands, face, neck, and lips. Melanoma can be cured only if diagnosed early; otherwise, it spreads to other body parts and leads to the victim’s painful death [ 5 ]. There are various types of melanoma, such as nodular melanoma, superficial spreading melanoma, acral lentiginous melanoma, and lentigo maligna [ 3 ]. The majority of skin cancer cases fall under the umbrella of the nonmelanoma category, which includes basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and sebaceous gland carcinoma (SGC). These carcinomas form in the middle and upper layers of the epidermis and have a low tendency to spread to other body parts. Nonmelanoma cancers are more easily treated than melanoma.

Therefore, the critical factor in skin cancer treatment is early diagnosis [ 6 ]. Doctors ordinarily use the biopsy method for skin cancer detection, which removes a sample from a suspected skin lesion for medical examination to determine whether it is cancerous. This process is painful, slow, and time-consuming. Computer-based technology provides a more comfortable, less expensive, and speedier diagnosis of skin cancer symptoms. Multiple noninvasive techniques have been proposed to examine whether skin cancer symptoms represent melanoma or nonmelanoma. The general procedure followed in skin cancer detection is image acquisition, preprocessing, segmentation, feature extraction, and classification, as represented in Figure 1 .


The process of skin cancer detection. ANN = Artificial neural network; CNN = Convolutional neural network; KNN = Kohonen self-organizing neural network; GAN = Generative adversarial neural network.
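The pipeline in Figure 1 can be sketched in a few lines of code. The helpers below (`preprocess`, `segment`, `extract_features`, `classify`) are toy placeholders of our own devising, not code from any reviewed system; in a real detector each stage would be replaced by a trained model, and the images here are simply grayscale intensities in [0, 255] stored as lists of lists.

```python
def preprocess(image):
    """Normalize pixel intensities to the range [0, 1]."""
    return [[p / 255.0 for p in row] for row in image]

def segment(image, threshold=0.5):
    """Binary mask: 1 where the (darker) lesion lies, 0 for background."""
    return [[1 if p < threshold else 0 for p in row] for row in image]

def extract_features(mask):
    """Two toy lesion descriptors: area fraction and a crude asymmetry score."""
    flat = [p for row in mask for p in row]
    area = sum(flat) / len(flat)
    # Asymmetry: imbalance of lesion pixels between the left and right halves.
    half = len(mask[0]) // 2
    left = sum(p for row in mask for p in row[:half])
    right = sum(p for row in mask for p in row[half:])
    asymmetry = abs(left - right) / max(1, left + right)
    return {"area": area, "asymmetry": asymmetry}

def classify(features):
    """Toy rule standing in for the trained classifier (ANN/CNN/KNN/GAN)."""
    if features["area"] > 0.25 and features["asymmetry"] > 0.3:
        return "suspicious"
    return "benign"
```

For example, `classify(extract_features(segment(preprocess(image))))` runs the full chain on one image; the thresholds (0.5, 0.25, 0.3) are arbitrary illustration values.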

Deep learning has revolutionized the entire landscape of machine learning during recent decades. It is considered the most sophisticated machine learning subfield concerned with artificial neural network algorithms. These algorithms are inspired by the function and structure of the human brain. Deep learning techniques are implemented in a broad range of areas such as speech recognition [ 7 ], pattern recognition [ 8 ], and bioinformatics [ 9 ]. As compared with other classical approaches of machine learning, deep learning systems have achieved impressive results in these applications. Various deep learning approaches have been used for computer-based skin cancer detection in recent years. In this paper, we thoroughly discuss and analyze skin cancer detection techniques based on deep learning. This paper focuses on the presentation of a comprehensive, systematic literature review of classical approaches of deep learning, such as artificial neural networks (ANN), convolutional neural networks (CNN), Kohonen self-organizing neural networks (KNN), and generative adversarial neural networks (GAN) for skin cancer detection.

A significant amount of research has been performed on this topic. Thus, it is vital to accumulate and analyze the relevant studies, classify them, and summarize the available research findings. To conduct a valuable systematic review of skin cancer detection techniques using deep neural network-based classification, we built search strings to gather relevant information. We kept our search focused on publications in well-reputed journals and conferences. We established multi-stage selection criteria and an assessment procedure, and on the basis of the devised search, 51 relevant research papers were selected. These papers were thoroughly evaluated and analyzed from different aspects. We are greatly encouraged by the trends in skin cancer detection systems, but there is still room for further improvement in present diagnostic techniques.

This paper is subdivided into four main sections. Section 2 describes the research methodology for performing the effective analysis of deep learning techniques for skin cancer (SC) detection. It contains a description of the review domain, search strings, search criteria, the sources of information, the information extraction framework, and selection criteria. Selected research papers are evaluated, and a detailed survey of SC detection techniques is presented in Section 3 . Section 4 summarizes the whole study and presents a brief conclusion.

2. Research Methodology

The purpose of performing this systematic literature review was to select and categorize the best available approaches to skin cancer detection using neural networks (NNs). Systematic literature reviews collect and analyze existing studies according to predefined evaluation criteria. Such reviews help to determine what is already known in the concerned domain of study [ 10 ].

All data collected from primary sources are organized and analyzed. Once a systematic literature review is completed, it provides a more sensible, logical, and robust answer to the underlying research question [ 11 ].

The population of studies considered in the current systematic literature review consisted of research papers relevant to SC detection based on deep neural network (DNN) techniques.

2.1. Research Framework

Defining the review framework was the first step in this systematic review. It consisted of an overall plan being followed in the systematic literature review. The plan consisted of three layers: a planning layer, a data selection and evaluation layer, and a results-generation and conclusion layer.

2.1.1. Research Questions

For conducting an effective systematic literature review on a topic, it is necessary to formulate research questions. The research questions formulated for the current systematic research were as follows:

Question No. 1: What are the major deep learning techniques for skin cancer detection?

Question No. 2: What are the main characteristics of datasets available for skin cancer?

2.1.2. Search Strategy

A systematic and well-planned search is very important for collecting useful material from the searched data of the desired domain. In this step, a thorough search was conducted to extract meaningful and relevant information from the mass of data. We created an automated search mechanism for filtering out the desired domain’s data from all sources. Research papers, case studies, American Cancer Society reports, and reference lists of related publications were examined in detail. Websites containing information regarding skin cancer, the dangers of skin cancer, the reasons for skin cancer, and NN techniques of skin cancer detection were all carefully searched. For extraction of the desired and relevant data, we conducted our search according to the following parameters.

  • Search keywords/search term identification based on research questions
  • Words related to the search keywords
  • Search string formulation using logical operators between search words

The keywords related to deep learning techniques for skin cancer detection were selected. Subsequently, the search was extended to synonyms for these keywords.

Furthermore, the search was carried out using the logical operators ‘AND’ and ‘OR’ between keywords. The keywords used to search information relevant to skin cancer are listed in Table 1 .

Search terms.

| Search Term | Set of Keywords |
| --- | --- |
| Skin * | Skin cancer, skin diseases, skin treatment |
| Cancer * | Cancer disease, cancer types, cancer diagnosis, cancer treatment |
| Deep * | Deep learning, deep neural networks |
| Neural * | Neural network, neural networking |
| Network * | Networking, network types |
| Melano * | Melanoma skin cancer, melanoma death rate, melanoma treatment, melanoma diagnosis, melanoma causes, melanoma symptoms |
| NonMelano * | — |
| Basal * | Basal cell carcinoma, basal cell carcinoma skin cancer, basal cell carcinoma diagnosis, basal cell carcinoma causes, basal cell carcinoma symptoms |
| Squamous * | Squamous cell carcinoma, squamous cell carcinoma skin cancer, squamous cell carcinoma diagnosis, squamous cell carcinoma causes, squamous cell carcinoma symptoms |
| Artificial * | Artificial neural network, artificial neural networking |
| Back * | Backpropagation neural network |
| Conv * | Convolutional neural network |

* = all words that start with the string preceding the asterisk.
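The string-formulation step described above can be illustrated with a small helper of our own (hypothetical, not part of any search engine’s API): synonyms within one keyword set are combined with OR, and the sets themselves are combined with AND.

```python
def build_query(keyword_sets):
    """Combine keyword sets into one boolean search string.

    Synonyms within a set are OR'ed together; the sets are AND'ed.
    Multi-word phrases are quoted so they are matched as phrases.
    """
    def group(words):
        quoted = ['"%s"' % w if " " in w else w for w in words]
        return "(" + " OR ".join(quoted) + ")"
    return " AND ".join(group(s) for s in keyword_sets)
```

For example, `build_query([["skin cancer", "melanoma"], ["deep learning", "neural network"]])` yields `("skin cancer" OR melanoma) AND ("deep learning" OR "neural network")`.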

2.1.3. Resources of Search

We conducted our initial search on well-reputed search engines such as IEEE Xplore, ACM, Springer as well as Google Scholar to extract information relevant to NN techniques for skin cancer detection. Basic research material related to the underlying topic was filtered out in the primary search. The selected research papers and conference proceedings were further analyzed according to evaluation criteria.

2.1.4. Initial Selection Criteria

The initial selection of research papers/conference papers was based on certain specified parameters such as the language of the paper, the year of the paper, and the relevance of the topic within the desired domain. Only research papers written in the English language were included in this research. Our review paper focused on research published between 2011 and 2021. Selected papers had to be relevant to the search terms described in the search strategy.

2.2. Selection and Evaluation Procedure

Using the initial search criteria, the search extracted 1483 research papers and conference reports. From the papers identified, we selected 95 whose titles were considered relevant to our study. Subsequently, the abstracts of those papers were examined more closely for relevance, reducing their number to 64. The research papers passing abstract-based selection were studied in detail, their quality was fully examined, and 51 research papers were selected for final review. In this finalized selection, 25% of the papers came from IEEE Xplore, 16% from Google Scholar, 10% from ACM DL, 29% from Springer, and 20% from Science Direct. The search results are represented in Table 2 .

Search results.

| Sr. No | Resource | Initial Search | Title-Based Selection | Abstract-Based Selection | Full Paper-Based Selection |
| --- | --- | --- | --- | --- | --- |
| 1 | IEEE Xplore | 123 | 21 | 15 | 13 |
| 2 | Google Scholar | 451 | 29 | 11 | 8 |
| 3 | ACM DL | 327 | 19 | 9 | 5 |
| 4 | Springer | 235 | 11 | 17 | 15 |
| 5 | Science Direct | 347 | 15 | 12 | 10 |
| Total |  | 1483 | 95 | 64 | 51 |

A thorough study of the full text of the selected research papers sought answers to certain quality control questions. The current systematic research asked the following quality assessment questions.

  • Did the selected study cover all aspects of this review’s topic?
  • Was the quality of the selected paper verified?
  • Did the selected study adequately answer the research questions?

The first quality assessment question focused on the thorough coverage of deep learning techniques for skin cancer detection. The quality of a selected paper was verified by the reputation of the journal in which it was published and by its citations. The third question ensured that the research answered the research questions mentioned in Section 2 . Only the most relevant research papers to our domain of study were extracted. These papers had to satisfy those above research questions to qualify for selection. Research papers that failed to adequately answer the research or quality control questions and papers with text that was not related to our study topic were excluded.

Each question had a Boolean ‘yes/no’ response: each ‘yes’ was assigned a value of Y = 1 and each ‘no’ a value of N = 0. The first quality control question, which evaluated the topic coverage of the 51 selected research papers, yielded 77%, which was quite satisfactory. The second question verified the quality of the selected papers and yielded 82%, which was satisfactory. The third question was especially important for answering the review’s main research questions; it yielded 79%, indicating that the studies adequately answered the research questions posed by the review. The overall results for these quality questions were satisfactory.
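The scoring scheme above reduces to the percentage of ‘yes’ answers across the reviewed papers. A minimal sketch (the function name and the sample answer vector are our own, for illustration only):

```python
def quality_score(answers):
    """Percentage of 'yes' answers, where each answer is Y = 1 or N = 0."""
    return round(100 * sum(answers) / len(answers))
```

For instance, 41 ‘yes’ answers out of 51 papers gives `quality_score([1] * 41 + [0] * 10)` → `80`.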

3. Deep Learning Techniques for Skin Cancer Detection

Deep neural networks play a significant role in skin cancer detection. They consist of a set of interconnected nodes. Their structure is similar to the human brain in terms of neuronal interconnectedness. Their nodes work cooperatively to solve particular problems. Neural networks are trained for certain tasks; subsequently, the networks work as experts in the domains in which they were trained. In our study, neural networks were trained to classify images and to distinguish between various types of skin cancer. Different types of skin lesion from International Skin Imaging Collaboration (ISIC) dataset are presented in Figure 2 . We searched for different techniques of learning, such as ANN, CNN, KNN, and GAN for skin cancer detection systems. Research related to each of these deep neural networks is discussed in detail in this section.

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g002.jpg

Skin disease categories from International Skin Imaging Collaboration (ISIC) dataset [ 12 ].

3.1. Artificial Neural Network (ANN)-Based Skin Cancer Detection Techniques

An artificial neural network is a nonlinear and statistical prediction method. Its structure is borrowed from the biological structure of the human brain. An ANN consists of three layers of neurons. The first layer is known as the input layer; these input neurons transfer data to the second/intermediate layer of neurons. The intermediate layers are referred to as hidden layers. In a typical ANN, there can be several hidden layers. Intermediate neurons send data to the third layer of output neurons. Computations are learned at each layer using backpropagation, which is used for learning the complex associations/relationships between input and output layers. It is similar to a neural network. Currently, in computer science, the term neural network and artificial neural network are used interchangeably. The basic structure of an ANN network is presented in Figure 3 .

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g003.jpg

Basic ANN structure [ 13 ].

ANN is used for the classification of extracted features in skin cancer detection systems. Input images are classified as melanoma or nonmelanoma after successful training/classification of the training set. The number of hidden layers in an ANN depends on the number of input images. The input/first layer of the ANN process connects with the hidden layer by the input dataset. The dataset can be labeled or unlabeled, which can be processed accordingly using a supervised or unsupervised learning mechanism. A neural network uses backpropagation or feed-forward architecture to learn weights present at each network connection/link. Both architectures use a different pattern for the underlying dataset. Feed-forward-architecture-based neural networks transfer data only in one direction. Data flows only from the input to the output layer.

Xie et al. [ 14 ] proposed a skin lesion classification system that classified lesions into two main classes: benign and malignant. The proposed system worked in three phases. In the initial phase, a self-generating NN was used to extract lesions from images. In the second phase, features such as tumor border, texture, and color details were extracted. The system extracted a total of 57 features, including 7 novel features related to lesion borders descriptions. Principal component analysis (PCA) was used to reduce the dimensionality of the features, which led to the selection of the optimal set of features. Finally, in the last phase, lesions were classified using a NN ensemble model. Ensemble NN improves classification performance by combining backpropagation (BP) NN and fuzzy neural networks. Furthermore, the proposed system classification results were compared with other classifiers, such as SVM, KNN, random forest, Adaboot, etc. With a 91.11% accuracy, the proposed model achieved at least 7.5% higher performance in terms of sensitivity than the other classifiers.

Masood et al. [ 15 ] proposed an ANN-based automated skin cancer diagnostic system. The performance of three ANN’s learning algorithms such as Levenberg–Marquardt (LM) [ 16 ], resilient backpropagation (RP) [ 17 ], scaled conjugate gradient (SCG) [ 18 ], was also investigated by this paper. Comparison of performance showed that the LM algorithm achieved the highest specificity score (95.1%) and remained efficient at the classification of benign lesions, while the SCG learning algorithm produced better results if the number of epochs was increased, scoring a 92.6% sensitivity value. A mole classification system for the early diagnosis of melanoma skin cancer was proposed [ 19 ]. The proposed system extracted features according to the ABCD rule of lesions. ABCD refers to asymmetry of a mole’s form, borders of mole, color, and diameter of mole. Assessment of a mole’s asymmetry and borders were extracted using the Mumford–Shah algorithm and Harris Stephen algorithm, respectively. Normal moles are composed of black, cinnamon, or brown color, so moles with colors other than those three were considered melanoma in the proposed system. Melanoma moles commonly have a diameter value greater than 6 mm, so that value was used as the threshold value of diameter for melanoma detection. The proposed system used a backpropagation feed-forward ANN to classify moles into three classes, such as common mole, uncommon mole, or melanoma mole, with 97.51% accuracy.

An automated skin cancer diagnostic system based on backpropagation ANN was proposed [ 20 ], represented in Figure 4 . This system employed a 2D-wavelet transform technique for feature extraction. The proposed ANN model classified the input images into two classes, such as cancerous or noncancerous. Another ANN-based skin cancer diagnostic system was proposed by Choudhari and Biday [ 21 ]. Images were segmented with a maximum entropy thresholding measure. A gray-level co-occurrence matrix (GLCM) was used to extract unique features of skin lesions. Finally, a feed-forward ANN classified the input images into either a malignant or benign stage of skin cancer, achieving an accuracy level of 86.66%.

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g004.jpg

Skin cancer detection using ANN [ 19 ].

Aswin et al. [ 22 ] described a new method for skin cancer detection based on a genetic algorithm (GA) and ANN algorithms. Images were preprocessed for hair removal with medical imaging software named Dull-Rozar and region of interest (ROI) and were extracted with the Otsu thresholding method. Furthermore, the GLCM technique was employed to extract unique features of the segmented images. Subsequently, a hybrid ANN and GA classifier was used for the classification of lesion images into cancerous and noncancerous classes. The proposed system achieved an overall accuracy score of 88%. Comprehensive details of the various skin cancer detection systems based on ANN are listed in Table 3 below.

A comparative analysis of skin cancer detection using ANN-based approaches.

RefSkin Cancer
Diagnoses
Classifier and Training
Algorithm
DatasetDescriptionResults (%)
[ ]MelanomaANN with backpropagation algorithm31 dermoscopic imagesABCD parameters for feature extraction,Accuracy (96.9)
[ ]Melanoma/Non- melanomaANN with backpropagation algorithm90 dermoscopic imagesmaximum entropy for thresholding, and gray- level co-occurrence matrix for features extractionAccuracy (86.66)
[ ]Cancerous/non- cancerousANN with backpropagation algorithm31 dermoscopic images 2D-wavelet transform for feature extraction and thresholding for segmentationNil
[ ]Malignant
/benign
Feed-forward ANN with the backpropagation training algorithm326 lesion
images
Color and shape characteristics of the tumor were used as discriminant features for classificationAccuracy (80)
[ ]Malignant/non-MalignantBackpropagation neural network as NN classifier448 mixed-type imagesROI and SRM for segmentationAccuracy (70.4)
[ ]Cancerous/noncancerousANN with backpropagation algorithm30 cancerous/noncancerous imagesRGB color features and GLCM techniques for feature extractionAccuracy (86.66)
[ ]Common mole/non-common mole/melanomaFeed-forward BPNN200 dermoscopic imagesFeatures extracted according to ABCD ruleAccuracy (97.51)
[ ]Cancerous/noncancerousArtificial neural network with backpropagation algorithm50 dermoscopic imagesGLCM technique for feature extractionAccuracy (88)
[ ]BCC/non-BCCANN180 skin lesion images Histogram equalization for contrast enhancementReliability (93.33)
[ ]Melanoma/Non-melanomaANN with Levenberg–Marquardt (LM), resilient backpropagation (RBP), and scaled conjugate gradient (GCG) learning algorithms135 lesion
images
Combination of multiple classifiers to avoid the misclassificationAccuracy (SCG:91.9, LM: 95.1, RBP:88.1)
[ ]Malignant/benignANN meta-ensemble model consisting of BPN and fuzzy neural networkCaucasian race and xanthous-race datasetsSelf-generating neural network was used for
lesion extraction
Accuracy (94.17)
Sensitivity (95), specificity (93.75)

ANN = Artificial neural network, NN = Neural network. ROI = Region of interest, SRM = Statistical region merging, GLCM = Gray level co-occurrence matrix, BPNN = Backpropagation neural network.

3.2. Convolutional Neural Network (CNN)-Based Skin Cancer Detection Techniques

A convolution neural network is an essential type of deep neural network, which is effectively being used in computer vision. It is used for classifying images, assembling a group of input images, and performing image recognition. CNN is a fantastic tool for collecting and learning global data as well as local data by gathering more straightforward features such as curves and edges to produce complex features such as shapes and corners [ 28 ]. CNN’s hidden layers consist of convolution layers, nonlinear pooling layers, and fully connected layers [ 29 ]. CNN can contain multiple convolution layers that are followed by several fully connected layers. Three major types of layers involved in making CNN are convolution layers, pooling layers, and full-connected layers [ 30 ]. The basic architecture of a CNN is presented in Figure 5 .

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g005.jpg

Basic CNN Architecture [ 9 ].

CNN-based automated deep learning algorithms have achieved remarkable performance in the detection, segmentation, and classification operations of medical imaging [ 31 ]. Lequan et al. [ 32 ] proposed a very deep CNN for melanoma detection. A fully convolutional residual network (FCRN) having 16 residual blocks was used in the segmentation process to improve performance. The proposed technique used an average of both SVM and softmax classifier for classification. It showed 85.5% accuracy in melanoma classification with segmentation and 82.8% without segmentation. DeVries and Ramachandram [ 33 ] proposed a multi-scale CNN using an inception v3 deep neural network that was trained on an ImageNet dataset. For skin cancer classification, the pre-trained inception v3 was further fined-tuned on two resolution scales of input lesion images: coarse-scale and finer scale. The coarse-scale was used to capture shape characteristics as well as overall contextual information of lesions. In contrast, the finer scale gathered textual detail of lesion for differentiation between various types of skin lesions.

Mahbod et al. [ 34 ] proposed a technique to extract deep features from various well-established and pre-trained deep CNNs for skin lesions classification. Pretrained AlexNet, ResNet-18 and VGG16 were used as deep-feature generators, then a multi-class SVM classifier was trained on these generated features. Finally, the classifier results were fused to perform classification. The proposed system was evaluated on the ISIC 2017 dataset and showed 97.55% and 83.83% area under the curve (AUC) performance for seborrheic keratosis (SK) and melanoma classification. A deep CNN architecture based on pre-trained ResNet-152 was proposed to classify 12 different kinds of skin lesions [ 35 ]. Initially, it was trained on 3797 lesion images; however, later, 29-times augmentation was applied based on lighting positions and scale transformations. The proposed technique provided an AUC value of 0.99 for the classification of hemangioma lesion, pyogenic granuloma (PG) lesion, and intraepithelial carcinoma (IC) skin lesions.

A technique for the classification of four different types of skin lesion images was proposed by Dorj et al. [ 36 ]. A pre-trained deep CNN named AlexNet was used for feature extraction, after which error-correcting output coding SVM worked as a classifier. The proposed system produced the highest scores of the average sensitivity, specificity, and accuracy for SCC, actinic keratosis (AK), and BCC: 95.1%, 98.9%, and 94.17%, respectively. Kalouche [ 37 ] proposed a pre-trained deep CNN architecture VGG-16 with a final three fine-tuned layers and five convolutional blocks. The proposed VCG-16 model is represented in Figure 6 . VCG-16 models showed 78% accuracy for the classification of lesion images as melanoma skin cancer. A deep CNN-based system was proposed to detect the borders of skin lesions in images. The deep learning model was trained on 1200 normal skin images and 400 images of skin lesions. The proposed system classified the input images into two main classes, normal skin image and lesion image, with 86.67% accuracy. A comprehensive list of skin cancer detection systems using CNN classifiers is presented in Table 4 .

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g006.jpg

Skin cancer diagnosis using CNN [ 37 ].

A comparative analysis of skin cancer detection using CNN-based approaches.

RefSkin Cancer DiagnosesClassifier and Training
Algorithm
DatasetDescriptionResults (%)
[ ]Benign/malignantLightNet (deep learning framework), used for classificationISIC 2016 datasetFewer parameters and well suited for mobile applicationsAccuracy (81.6), sensitivity (14.9), specificity (98)
[ ]Melanoma/benignCNN classifier170 skin lesion imagesTwo convolving layers in CNNAccuracy (81), sensitivity (81), specificity (80)
[ ]BCC/SCC/melanoma/AKSVM with deep CNN3753 dermoscopic images Pertained to deep CNN and AlexNet for features extractionAccuracy (SCC: 95.1, AK: 98.9, BCC: 94.17)
[ ]Melanoma /benign
Keratinocyte carcinomas/benign SK
Deep CNNISIC-Dermoscopic ArchiveExpert-level performance against 21 certified dermatologistsAccuracy (72.1)
[ ]Malignant melanoma and BC carcinomaCNN with Res-Net 152 architectureThe first dataset has 170 images the second dataset contains 1300 images Augmentor Python library for augmentation.AUC (melanoma: 96, BCC: 91)
[ ]Melanoma/nonmelanomaSVM-trained, with CNN, extracted featuresDermIS dataset and DermQuest dataA median filter for noise removal and CNN for feature extractionAccuracy (93.75)
[ ]Malignant melanoma/nevus/SKCNN as single neural-net architectureISIC 2017 datasetCNN ensemble of AlexNet, VGGNet, and GoogleNetfor classificationAverage AUC:9 84.8), average accuracy (83.8)
[ ]BCC/nonBCCCNN40 FF-OCT imagesTrained CNN, consisted of 10 layers for features extractionAccuracy (95.93), sensitivity (95.2), specificity (96.54)
[ ]Cancerous/noncancerousCNN1730 skin lesion and background imagesFocused on edge detectionAccuracy (86.67)
[ ]Benign/melanomaVGG-16 and CNNISIC datasetDataset was trained on three separate learning modelsAccuracy (78)
[ ]Benign/malignantCNNISIC databaseABCD symptomatic checklist for feature extractionAccuracy (89.5)
[ ]Melanoma/benign keratosis/ melanocytic nevi/BCC/AK/IC/atypical nevi/dermatofibroma/vascular lesionsDeep CNN architecture (DenseNet 201, Inception v3, ResNet 152 and
InceptionResNet v2)
HAM10000 and PH2 datasetDeep learning models outperformed highly trained dermatologists in overall mean results by at least 11%ROC AUC
(DenseNet 201: 98.79–98.16, Inception v3:
98.60–97.80,
ResNet 152: 98.61–98.04,
InceptionResNet v2: 98.20–96.10)
[ ]Lipoma/fibroma/sclerosis/melanomaDeep region-based CNN
and fuzzy C means clustering
ISIC datasetCombination of the region-based CNN and fuzzy C-means ensured more accuracy in disease detectionAccuracy (94.8) sensitivity (97.81) specificity (94.17) F1_score (95.89)
[ ]Malignant/benign6-layers deep CNNMED-NODE and ISIC datasetsIllumination factor in images affected performance of the systemAccuracy (77.50)
[ ]Melanoma/non melanomaHybrid of fully CNN with autoencoder and decoder and RNNISIC datasetProposed model outperformed state-of-art SegNet, FCN, and ExB architectureAccuracy (98) Jaccard index (93), sensitivity (95), specificity (94)
[ ]Benign/malignant2-layer CNN with a novel regularizerISIC datasetProposed regularization technique controlled complexity by adding a penalty on the dispersion value of classifier’s weight matrix Accuracy (97.49) AUC (98), sensitivity (94.3), specificity (93.6)
[ ]Malignant melanoma/SKSVM classification with features extracted with pretrained deep models named AlexNet, ResNet-18, and VGG16ISIC datasetSVM scores were mapped to
probabilities with logistic regression function for evaluation
Average AUC (90.69)
[ ]Melanoma/BCC/melanocytic nevus/Bowen’s disease/AK/benign keratosis/vascular lesion/dermatofibromaInceptionResNetV2, PNASNet-5-Large,
InceptionV4, and
SENet154
ISIC datasetA trained image-net model was used to initialize network parameters and fine-tuning Validation Score (76)
[ ]melanoma/BCC/melanocytic nevus/AK/benign keratosis/vascular lesion/dermatofibromaCNN model with
LeNet approach
ISIC datasetThe adaptive piecewise linear activation function was used to increase system performanceAccuracy (95.86)
[ ]Benign/malignantDeep CNNISIC datasetData augmentation was performed for data balancingAccuracy (80.3), precision (81), AUC (69)
[ ]Compound nevus/malignant melanomaCNNAtlasDerm, Derma, Dermnet, Danderm, DermIS and DermQuest datasetsBVLC-AlexNet model, pretrained from ImageNet dataset was used for fine-tuningMean average precision (70)
[ ]Melanoma/SKDeep multi-scale CNNISIC datasetThe proposed model used Inception-v3 model, which was trained on the ImageNet.Accuracy (90.3), AUC (94.3)
[ ]Benign/malignantCNN with 5-fold cross-validation1760 dermoscopic images Images were preprocessed on the basis of melanoma cytological findings Accuracy (84.7), sensitivity (80.9),
specificity (88.1)
[ ]Benign/malignantA very deep residual CNN and FCRNISIC 2016 databaseFCRN incorporated with a multi-scale contextual information integration technique was proposed for accurate lesions segmentationAccuracy (94.9), sensitivity (91.1), specificity (95.7), Jaccard index (82.9), dice coefficient (89.7)
[ ]AK/melanocytic nevus/BCC/SK/SCCCNN1300 skin lesion imagesMean subtraction for each image, pooled multi-scale feature extraction process and pooling in augmented-feature spaceAccuracy (81.8)
[ ]BCC/non-BCCPruned ResNet18297 FF-OCT imagesK-fold cross-validation was applied to measure the performance of the proposed systemAccuracy (80)
[ ]Melanoma/non melanomaResNet-50 with deep transfer learning3600 lesion images from the ISIC datasetThe proposed model showed better performance than o InceptionV3, Densenet169, Inception ResNetV2, and MobilenetAccuracy (93.5), precision (94)
recall (77), F1_ score (85)
[ ]Benign/malignantRegion-based CNN with ResNet1522742 dermoscopic images from ISIC datasetRegion of interest was extracted by mask and region-based CNN, then ResNet152 is used for classification.Accuracy (90.4), sensitivity (82),
specificity (92.5)

CNN = Convolutional neural network; ISIC = International skin imaging collaboration; SVM = Support vector machine; BCC = Basal cell carcinoma; SCC = Squamous cell carcinoma; AK = Actinic keratosis; IC = Intraepithelial carcinoma; HAM10000 = Human-against-machine dataset with 10,000 images; BVLC = Berkeley Vision and Learning Center; SK= Seborrheic keratosis; FCRN = Fully convolutional residual network; FF-OCT = Full field optical coherence tomography; FCN = Fully convolutional network.

3.3. Kohonen Self-Organizing Neural Network (KNN)-Based Skin Cancer Detection Techniques

The Kohonen self-organizing map is a very famous type of deep neural network. CNNs are trained on the basis of unsupervised learning, which means that a KNN does not require any developer’s intervention in the learning process as well as requiring little information about the attributes of the input data. A KNN generally consists of two layers. In the 2-D plane, the first layer is called an input layer, while another is named a competitive layer. Both of these layers are fully connected, and every connection is from the first to second layer dimension. A KNN can be used for data clustering without knowing the relationships between input data members. It is also known as a self-organizing map. KNNs do not contain an output layer; every node in the competitive layer also acts as the output node itself.

A KNN basically works as a dimensionality reducer. It can reduce the high dimensional data into a low dimension, such as a two-dimensional plane. Thus, it provides discrete types of representation of the input dataset. KNNs are different from other types of NN in terms of learning strategy because it uses competitive learning rather than the learning based on error correction found in BPN or feed-forward learning. A KNN preserves the topological structure of the input data space during mapping dimensionality from high to low. Preservation refers to the preservation of relative distance between data points in space. Data points that are closer in input data space are mapped closer to each other in this scheme; far points are mapped far from each other as well as, according to the relative distance present among them. Consequently, a KNN is the best tool for high dimensional data. Another important feature provided by a KNN is its generalization ability. The network has the ability to recognize and organize unknown input data. The architecture of a KNN is shown in Figure 7 . A KKN’s main quality is its ability to map complex relationships of data points in which even nonlinear relations exist between data points. Due to these benefits, nowadays, KNNs are being used in skin cancer detection systems.

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g007.jpg

Basic KNN structure [ 58 ], BMU= Best matching unit.

Lenhardt et al. [ 59 ] proposed a KNN-based skin cancer detection system. The proposed system processed synchronous fluorescence spectra of melanoma, nevus, and normal skin samples for neural network training. A fluorescence spectrophotometer was used to measure the fluorescence spectra of the samples, whereas samples were collected from human patients immediately after surgical resection. The dimensionality of measured spectra was reduced with the PCA technique. Both KNN and ANN were trained, and their performance for melanoma detection was compared. On the test dataset, the classification error of KNN was 2–3%, while the classification error for ANN lay in the range of 3% to 4%.

A combination of self-organizing NN and radial basis function (RBF) neural network was proposed to diagnose three different types of skin cancer, such as BCC, melanoma, and SCC [ 60 ]. The proposed system extracted color, GLCM, and morphological features of lesion images, after which the classification model used those features as input. Furthermore, the classification performance of the proposed system was compared with k -nearest neighbor, ANN, and naïve-Bayes classifiers. The proposed system achieved 93.150685% accuracy while k -nearest neighbor showed 71.232877%, ANN showed 63.013699%, and naïve Bayes showed 56.164384% accuracy scores.

Another KNN-based automated skin cancer diagnostic system was proposed by Sajid et al. [ 61 ]. The proposed system employed a median filter as a noise removal technique. Then filtered images were segmented with a statistical region growing and merging technique. In this system, a collection of textual and statistical features was used. Statistical features were extracted from lesion images, whereas textual features were extracted from a curvelet domain. Finally, the proposed system classified the input images into cancerous or noncancerous with 98.3% accuracy. In this work, other classifiers such as SVM, BPN, and 3-layer NN were also implemented, and their performance was compared with the proposed system’s classification performance. SVM produced 91.1% accuracy, BPN showed 90.4% accuracy, 3-layer NN showed 90.5%, whereas the proposed system achieved the highest accuracy of 98.3% for skin cancer diagnosis. Details on the KNN-based skin cancer diagnostic systems is presented in Table 5 .

A comparative analysis of skin cancer detection using KNN-based approaches.

RefSkin Cancer
Diagnoses
Classifier and Training AlgorithmDatasetDescriptionResults (%)
[ ]Melanoma/nevus/normal skinSOM and feed-forward NN50 skin lesion imagesPCA for decreasing spectra’s dimensionalityAccuracy (96–98)
[ ]BCC, SCC, and melanomaSOM and RBFDermQuest and Dermnet datasets15 features consisting of GCM morphological and color features were extractedAccuracy (93.15)
[ ]Cancerous/noncancerousModified KNN500 lesion imagesAutomated Otsu method of thresholding for segmentationAccuracy (98.3)

SOM = Self organizing map; PCA = Principal component analysis; GCM = Generalized co-occurrence matrices; RBF = Radial Basis Function; KNN = Kohonen self-organizing neural network.

3.4. Generative Adversarial Network (GAN)-Based Skin Cancer Detection Techniques

A generative adversarial neural network is a powerful class of DNN that is inspired by zero-sum game theory [ 62 ]. GANs are based on the idea that two neural networks, such as a generator and a discriminator, compete with each other to analyze and capture the variance in a database. The generator module uses the data distribution to produce fake data samples and tries to misguide the discriminator module. On the other hand, the discriminator module aims to distinguish between real and fake data samples [ 63 ]. In the training phase, both of these neural networks repeat these steps, and their performance improves after each competition. The ability to generate fake samples that are similar to a real sample using the same data distribution, such as photorealistic images, is the major power of a GAN network. It can also solve a major problem in deep learning: the insufficient training examples problem. Research scholars have been implementing various types of GANs, such as Vanilla GAN, condition GAN (CGAN), deep convolutional GAN (DCGAN), super-resolution GAN (SRGAN), and Laplacian Pyramid GAN (LPGAN). Nowadays, GANs are successfully being used in skin cancer diagnostic systems. The architecture of a GAN is shown in Figure 8 .

An external file that holds a picture, illustration, etc.
Object name is ijerph-18-05479-g008.jpg

GAN architecture [ 64 ].

Rashid et al. [ 7 ] proposed a GAN-based skin lesion classification system. The proposed system performed augmentation on a training set of images with realistic-looking skin lesion images generated via GAN. A deconvolutional network was used as the generator module, while the discriminator module used CNN as a classifier. The CNN learned to classify seven different categories of skin lesions. Results of the proposed system were compared with ResNet-50 and DenseNet. ResNet-50 produced 79.2% accuracy, DenseNet showed 81.5% accuracy, whereas the proposed approach achieved the highest accuracy of 86.1% for skin lesion classification. Deep learning methods provide sufficient accuracy but require pure, unbalanced, and large training datasets. To overcome these limitations, Bisla et al. [ 8 ] proposed a deep learning approach for data purification and GAN for data augmentation. The proposed system used decoupled deep convolutional GANs for data generation. A pre-trained ResNet-50 model was further refined with a purified and augmented dataset and was used to classify dermoscopic images into three categories: melanoma, SK, and nevus. The proposed system outperformed the baseline ResNet-50 model for skin lesion classification and achieved 86.1% accuracy.

A novel data augmentation method for a skin lesion on the basis of self-attention progressive GAN (PGAN) was proposed. Moreover, the generative model was enhanced with the stabilization technique. The proposed system achieved 70.1% accuracy as compared with 67.3% accuracy produced by a non-augmented system. A list of GAN-based skin cancer detection systems with their diagnosed skin cancer type, classifier, dataset, and the obtained result is presented in Table 6 .

A comparative analysis of skin cancer detection using GAN-based approaches.

RefSkin Cancer DiagnosesClassifier and Training AlgorithmDatasetDescriptionResults (%)
[ ]AK/BCC/benign keratosis/dermatofibroma/melanoma/melanocytic nevus/vascular lesionGANISIC 2018The proposed system used deconvolutional network and CNN as generator and discriminator moduleAccuracy (86.1)
[ ]Melanoma/nevus/SKDeep convolutional GANISIC 2017, ISIC 2018, PH Decoupled deep convolutional GANs for data augmentationROC AUC (91.5), accuracy (86.1)
[ ]BCC/vascular/pigmented benign keratosis/pigmented Bowen’s/nevus/dermatofibromaSelf-attention-based PGANISIC 2018A generative model was enhanced with a stabilization techniqueAccuracy (70.1)

GAN = Generative adversarial neural network, PGAN = Progressive generative adversarial network, ROC AUC= Area under the receiver operating characteristic curve.

4. Datasets

Several computer-based systems for skin cancer diagnosis have been proposed. Evaluating their diagnostic performance and validating predicted results requires a solid and reliable collection of dermoscopic images. Various skin cancer datasets have lacked size and diversity other than for images of nevi or melanoma lesions. Training of artificial neural networks for skin lesion classification is hampered by the small size of the datasets and a lack of diverse data. Although patients commonly suffer from a variety of non-melanocytic lesions, past research for automated skin cancer diagnosis primarily focused on diagnosing melanocytic lesions, resulting in a limited number of diagnoses in the available datasets [ 66 ]. Therefore, the availability of a standard, reliable dataset of dermoscopic images is very crucial. Real-world datasets for the evaluation of proposed skin cancer detection techniques are discussed in this section. Table 7 summarizes the important details of these datasets.

Skin Cancer Datasets.

Sr. NoName of DatasetYear of ReleaseNo. of ImagesReference Used
1HAM10000201810,015[ ]
2PH 2013200[ ]
3ISIC archive201625,331[ , , , , , , , , , , ]
4DermQuest199922,082[ , , ]
5DermIS 6588[ , ]
6AtlasDerm20001024[ ]
7Dermnet199823,000[ , ]

4.1. HAM10000

There is a human-against-machine dataset with 10,000 training images that is referred to as HAM10000 [ 66 ]. It is the latest publicly available skin lesions dataset, and it overcomes the problem of the lack of diversity. The final dataset of HAM10000 contains 10,015 dermoscopic images, collected from two sources: Cliff Rosendahl’s skin cancer practice in Queensland, Australia, and the Dermatology Department of the Medical University of Vienna, Austria. This collection has taken twenty years to compile. Before widespread use of digital cameras, photographic prints of lesions were deposited and stored at the Dermatology Department of the Medical University of Vienna, Austria. These photographic prints were digitalized with the help of Nikon-Coolscan-5000-ED scanner, manufactured by Nikon corporation Japan and converted into 8-bit color JPEG images having 300 DPI quality. The images were then manually cropped and saved at 800 × 600 pixels resolution at 72 DPI.

Several acquisition functions and cleaning methods were applied to the images and a semi-automatic workflow was developed using a neural network to attain diversity. The resulting dataset contains 327 images of AK, 514 images of basal cell carcinomas, 1099 images of benign keratoses, 115 images of dermatofibromas, 1113 images of melanocytic nevi, 6705 images of melanomas, and 142 images of vascular skin lesions.

The dermoscopic images in the PH² dataset were collected at the Dermatology Center of Pedro Hispano Hospital, Portugal [ 68 ]. These images were obtained using a Tuebinger-Mole-Analyzer system under the same conditions and magnification rate of 20×. PH2 dataset contains 8-bit RGB color images having 768 × 560 pixels resolution. The dataset contains 200 dermoscopic images, divided into 80 images of common nevi, 80 images of atypical nevi, and 40 images of melanoma skin cancers. This dataset contains medical annotation of the lesion images, such as medical segmentation of pigmented skin lesions, histological and clinical diagnosis, and evaluation of various dermoscopic criteria. The assessment was performed according to dermoscopic criteria of streaks, colors, regression areas, pigment network, and blue-whitish veil globules.

4.3. ISIC Archive

The ISIC archive [ 69 ] is a collection of various skin lesions datasets. The ISIC dataset [ 70 ] was originally released by the International Skin Imaging Collaboration at the International Symposium on Biomedical Imaging (ISBI) challenge 2016, named as ISIC2016. The ISIC2016 archive is divided into two parts: training and testing. The training subset of ISIC contains 900 images, while the testing subset contains 379 dermoscopic images. It includes images of two classes: malignant melanomas and benign nevi. Approximately 30.3% of the dataset’s images are of melanoma lesions and the remaining images belong to the benign nevi class. ISIC increases the number of images in its archive every year and has established a design challenge for the development of a system for skin cancer automated diagnosis.

In the ISIC2017 dataset, there were three categories of images: melanomas, seborrheic-keratoses (SK), and benign nevi. The dataset contains 2000 training images, 150 validation images, and 600 images for testing. The training dataset contains 374 images of melanomas, 254 SK images, and 1372 images of benign nevi. The validation dataset contains 30 melanoma images, 42 SK images, and 78 benign nevus images. The test dataset includes 117 melanoma images, 90 SK images, and 393 benign nevus images. ISIC2018 contains 12,594 training images, 100 validation images, and 1000 test images. The ISIC2019 dataset includes 25,331 images of eight different categories of skin lesions, such as melanoma, melanocytic-nevus, BCC, AK, benign keratosis, dermatofibroma, vascular lesion, and SCC. It contains 8239 images in the test dataset and an additional outlier class that was not included in the training dataset. The new proposed skin cancer diagnostic systems must be able to identify these images. The ISIC2019 dataset also includes metadata for images, such as sex, age, and area of the patient.

4.4. Derm Quest

The publicly available DermQuest dataset [ 71 ] contained 22,082 dermoscopic images. Among all dermoscopic datasets, only the DermQuest dataset contained lesion tags for skin lesions. There were 134 lesion tags for all images in the dataset. The DermQuest dataset redirected to Derm101 in 2018. However, this dataset was deactivated recently on 31 December 2019.

4.5. DermIS

The Dermoscopic dataset Dermatology Information System is commonly known as DermIS [ 72 ]. This dataset was built through cooperation between the Department of Dermatology of the University of Erlangen and the Department of Clinical Social Medicine of the University of Heidelberg. It contains 6588 images. This dataset has recently been divided into two parts: a dermatology online image atlas (DOIA) and a pediatric dermatology online image atlas (PeDOIA). The DOIA includes 3000 lesion images covering approximately 600 dermatological diagnoses. It provides dermoscopic images complete with differential and provisional diagnoses, case reports, and other information on nearly all types of skin diseases.

4.6. AtlasDerm

The Atlas of Dermoscopy dataset is commonly referred to as AtlasDerm [ 73 ]. It is a unique and well-organized combination of a book and images on CD-ROM with sample examples for training. It was originally designed as a tool to help physicians in the diagnosis of skin lesions and the recognition of dermoscopic criteria related to melanoma. The AtlasDerm dataset considers various cases of skin lesions, with corresponding dermoscopic images for every case. It contains 5 images of AK, 42 images of BCC, 70 images of benign keratosis, 20 images of dermatofibroma, 275 images of melanocytic nevus, 582 images of melanoma, and 30 images of vascular skin lesions.

4.7. Dermnet

The Dermnet Skin Disease Atlas dataset is commonly referred to as Dermnet [ 74 ]. It was built in 1998 by Dr. Thomas Habif in Portsmouth, New Hampshire. It consists of more than 23,000 dermoscopic images. This database contains images of 643 different types of skin diseases. These diseases are biologically organized into a two-level taxonomy. The bottom level contains more than 600 skin diseases in fine granularity. The top-level taxonomy contains 23 different classes of skin diseases, such as connective tissue disease, benign tumors, eczema, melanomas, moles, nevi, etc.

5. Open Research Challenges

5.1. extensive training.

One of the major challenges in neural network-based skin cancer detection techniques is the extensive training that is required. In other words, to successfully analyze and interpret the features from dermoscopic images, the system must undergo detailed training, which is a time-consuming process and demands extremely powerful hardware.

5.2. Variation in Lesion Sizes

Another challenge is the variation in the sizes of lesions. A group of Italian and Austrian researchers collected many benign and cancerous melanoma lesion images in the 1990s [ 73 ]. The diagnostic accuracy of the identification of the lesions was as high as 95% to 96% [ 75 ]. However, the diagnostic process, with earlier stage and smaller lesions of 1mm or 2mm in size, was much more difficult and error-prone.

5.3. Images of Light Skinned People in Standard Datasets

Existing standard dermoscopic datasets contain images of light-skinned people, mostly from Europe, Australia, and the United States. For accurate skin cancer detection in dark-skinned people, a neural network must learn to account for skin color [ 76 ]. However, doing so is possible only if the neural network observes enough images of dark-skinned people during the process of training. Therefore, datasets having sufficient lesion images of dark-skinned and light-skinned people is necessary for increasing the accuracy of skin cancer detection systems.

5.4. Small Interclass Variation in Skin Cancer Images

Unlike the other types of images, medical images have very small interclass variation; that is, the difference between melanoma and nonmelanoma skin cancer lesion images has much less variation than, say, the variation between images of cats and dogs. It is also very difficult to differentiate between a birthmark and a melanoma. The lesions of some disease are so similar that it is extremely hard to distinguish them. This limited variation makes the task of image analysis and classification very complex [ 32 ].

5.5. Unbalanced Skin Cancer Datasets

Real-world datasets used for skin cancer diagnosis are highly unbalanced. Unbalanced datasets contain a very different number of images for each type of skin cancer. For example, they contain hundreds of images of common skin cancer types but only a few images for the uncommon types, making it difficult to draw generalizations from the visual features of the dermoscopic images [ 12 ].

5.6. Lack of Availability of Powerful Hardware

Powerful hardware resources with high graphical processing unit (GPU) power are required for the NN software to be able to extract the unique features of a lesion’s image, which is critical for achieving better skin cancer detection. The lack of availability of high computing power is a major challenge in deep learning-based skin cancer detection training.

5.7. Lack of Availability of Age-Wise Division of Images In Standard Datasets

Various types of skin cancers such as Merkel cell cancer, BCC, and SCC typically appear after the age of 65 years [ 77 ]. Existing standard dermoscopic datasets contain images of young people. However, for an accurate diagnosis of skin cancer in elderly patients, it is necessary that neural networks observe enough images of people aged more than 50 years.

5.8. Use of Various Optimization Techniques

Preprocessing and detection of lesion edges are very crucial steps in the automated detection of skin cancer. Various optimization algorithms such as artificial the bee colony algorithm [ 78 ], ant colony optimization [ 79 ], social spider optimization [ 80 ], and particle swarm optimization [ 81 ] can be explored to increase the performance of automated skin cancer diagnostic systems.

5.9. Analysis of Genetic and Environmental Factors

Researchers have identified various genetic risk factors for melanoma, such as fair skin, light colored eyes, red hair, a large number of moles on the body, and a family history of skin cancer. When these genetic risk factors are combined with environmental risks such as high ultraviolet light exposure, the chances of developing skin cancer become very high [ 82 ]. These factors can be combined with existing deep learning approaches for better performance.

6. Conclusion and Future Work

This systematic review paper has discussed various neural network techniques for skin cancer detection and classification. All of these techniques are noninvasive. Skin cancer detection requires multiple stages, such as preprocessing and image segmentation, followed by feature extraction and classification. This review focused on ANNs, CNNs, KNNs, and RBFNs for classification of lesion images. Each algorithm has its advantages and disadvantages. Proper selection of the classification technique is the core point for best results. However, CNN gives better results than other types of a neural networks when classifying image data because it is more closely related to computer vision than others.

Most of the research related to skin cancer detection focuses on whether a given lesion image is cancerous. However, when a patient asks if a particular skin cancer symptom appears on any part of their body, the current research cannot provide an answer. Thus far, the research has focused on the narrow problem of classification of the signal image. Future research can include full-body photography to seek the answer to the question that typically arises. Autonomous full-body photography will automate and speed up the image acquisition phase.

The idea of auto-organization has recently emerged within the area of deep learning. Auto-organization refers to the process of unsupervised learning, which aims to identify features and to discover relations or patterns in the image samples of the dataset. Under the umbrella of convolutional neural networks, auto-organization techniques increase the level of features representation that is retrieved by expert systems [ 47 ]. Currently, auto-organization is a model that is still in research and development. However, its study can improve the accuracy of image processing systems in the future, particularly in the area of medical imaging, where the smallest details of features are extremely crucial for the correct diagnosis of disease.

Acknowledgments

The authors acknowledge support from the Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia.

Author Contributions

M.D. and S.A., M.I. and H.U.K. developed the idea and collected the data. M.R. and A.R.M. analyzed the data and wrote the manuscript. S.A.A. prepared the figures. A.H.M.S., M.O.A. and M.H.M. reviewed the data and manuscript as well. M.O.A. and M.H.M. were involved in the analysis of the images, datasets, labelling, and data. All authors have read and agreed to the published version of the manuscript.

This research study has not received any research funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Data availability statement, conflicts of interest.

Authors have no conflicts of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Captcha Page

We apologize for the inconvenience...

To ensure we keep this website safe, please can you confirm you are a human by ticking the box below.

If you are unable to complete the above request please contact us using the below link, providing a screenshot of your experience.

https://ioppublishing.org/contacts/

IEEE Account

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

Advertisement

Advertisement

Skin cancer detection with MobileNet-based transfer learning and MixNets for enhanced diagnosis

  • Original Article
  • Published: 28 August 2024

Cite this article

skin disease detection using machine learning research paper

  • Mohammed Zakariah   ORCID: orcid.org/0000-0002-2488-2605 1 ,
  • Muna Al-Razgan 2 &
  • Taha Alfakih 3  

Skin cancer poses a significant health hazard, necessitating the utilization of advanced diagnostic methodologies to facilitate timely detection, owing to its escalating prevalence in recent years. This paper proposes a novel approach to tackle the issue by introducing a method for detecting skin cancer that uses MixNets to enhance diagnosis and leverages mobile network-based transfer learning. Skin cancer has diverse forms, each distinguishable by its structural attributes, morphological characteristics, texture, and coloration. The pressing demand for accurate and efficient diagnostic instruments has spurred the investigation of novel techniques. The present study utilizes the ISIC dataset, comprising a validation set of 660 images and a training set of 2637 images. Moreover, the research employs a combination of MixNets and mobile network-based transfer learning as its chosen approach. Transfer learning is a technique that leverages preexisting models to enhance the diagnostic capabilities of the proposed system. Integrating MobileNet and MixNets allows for utilizing their respective functionalities, resulting in a dual-model methodology that enhances the comprehensiveness of skin cancer diagnosis. The results demonstrate impressive performance metrics, with MobileNet and MixNets models, and the proposed approach achieves an outstanding accuracy rate of 99.58%. The above findings underscore the efficacy of the dual-model method in effectively discerning between benign and malignant skin lesions. Moreover, the present study aims to examine the potential integration of emerging technologies to enhance the accuracy and practicality of diagnostics within real-world healthcare settings.

This is a preview of subscription content, log in via an institution to check access.

Access this article

Subscribe and save.

  • Get 10 units per month
  • Download Article/Chapter or eBook
  • 1 Unit = 1 Article or 1 Chapter
  • Cancel anytime

Price includes VAT (Russian Federation)

Instant access to the full article PDF.

Rent this article via DeepDyve

Institutional subscriptions

skin disease detection using machine learning research paper

Similar content being viewed by others

skin disease detection using machine learning research paper

Skin Lesion Analyser: An Efficient Seven-Way Multi-class Skin Cancer Classification Using MobileNet

skin disease detection using machine learning research paper

Transfer Learning for Automated Melanoma Classification System: Data Augmentation

skin disease detection using machine learning research paper

Skin lesion classification using transfer learning

Explore related subjects.

Artificial Intelligence

  • Medical Imaging

Data availability

The dataset is available on reasonable request.

Abbreviations

International Skin Imaging Collaboration Laboratory database

Machine Learning

Magnetic Resonance Imaging

Visual Geometry Group

Human Against Machine

Residual Network

Principal Component Analysis

Receiver Operating Characteristic

True Negative/False Negative

Ultraviolet

Support Vector Machine

Convolutional Neural Network

Internet of Things

Geometric Active Contour

Exploratory Data Analysis

Rectified Linear Unit

False Positive/True Positive

Area Under the ROC Curve

Anand V, Gupta S, Altameem A, Nayak SR, Poonia RC, Saudagar AKJ (2022) An enhanced transfer learning based classification for diagnosis of skin cancer. Diagnostics 12(7):1628. https://doi.org/10.3390/diagnostics12071628

Article   Google Scholar  

Al-Rasheed A, Ksibi A, Ayadi M, Alzahrani AIA, Zakariah M, Ali Hakami N (2022) “An ensemble of transfer learning models for the prediction of skin cancers with conditional generative adversarial networks.” Diagnostics 12(12):3145. https://doi.org/10.3390/diagnostics12123145

Khandizod S, Patil T, Dode A, Banale V, Prof CD, Bawankar (2022) “Deep Learning based skin cancer classifier using MobileNet.” Int J Res Appl Sci Eng Technol 10(5):629–633. https://doi.org/10.22214/ijraset.2022.42260

Ghazal TM, Hussain S, Khan MF, Khan MA, Said RAT, Ahmad M (2022) Detection of benign and malignant tumours in skin empowered with transfer learning. Comput Intell Neurosci 2022:1–9. https://doi.org/10.1155/2022/4826892

K. V. Reddy and L. R. Parvathy, “An Innovative Analysis of predicting Melanoma Skin Cancer using MobileNet and Convolutional Neural Network Algorithm,” in 2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS), IEEE, Oct. 2022, pp. 91–95. https://doi.org/10.1109/ICTACS56270.2022.9988569 .

M. Castro-Fernandez, A. Hernandez, H. Fabelo, F. J. Balea-Fernandez, S. Ortega, and G. M. Callico, “Towards Skin Cancer Self-Monitoring through an Optimized MobileNet with Coordinate Attention,” in 2022 25th Euromicro Conference on Digital System Design (DSD), IEEE, Aug. 2022, pp. 607–614. https://doi.org/10.1109/DSD57027.2022.00087 .

Fraiwan M, Faouri E (2022) “On the automatic detection and classification of skin cancer using deep transfer learning.” Sensors 22(13):4963. https://doi.org/10.3390/s22134963

Bassel A, Abdulkareem AB, Alyasseri ZAA, Sani NS, Mohammed HJ (2022) “Automatic malignant and benign skin cancer classification using a hybrid deep learning approach.” Diagnostics 12(10):2472. https://doi.org/10.3390/diagnostics12102472

Alam TM et al (2022) “An efficient deep learning-based skin cancer classifier for an imbalanced dataset.” Diagnostics 12(9):2115. https://doi.org/10.3390/diagnostics12092115

Lu X, Firoozeh Abolhasani Zadeh YA (2022) “Deep learning-based classification for melanoma detection using xceptionnet.” J Healthc Eng. 2022:1–10

Google Scholar  

Kousis I, Perikos I, Hatzilygeroudis I, Virvou M (2022) “Deep learning methods for accurate skin cancer recognition and mobile application.” Electronics (Basel) 11(9):1294. https://doi.org/10.3390/electronics11091294

Viknesh CK, Kumar PN, Seetharaman R, Anitha D (2023) “Detection and classification of melanoma skin cancer using image processing technique.” Diagnostics 13(21):3313. https://doi.org/10.3390/diagnostics13213313

Mazhar T et al (2023) “The role of machine learning and deep learning approaches for the detection of skin cancer.” Healthcare 11(3):415. https://doi.org/10.3390/healthcare11030415

Yaqoob MM, Alsulami M, Khan MA, Alsadie D, Saudagar AKJ, AlKhathami M (2023) “Federated machine learning for skin lesion diagnosis: an asynchronous and weighted approach.” Diagnostics 13(11):1964. https://doi.org/10.3390/diagnostics13111964

Bakheet S, Alsubai S, El-Nagar A, Alqahtani A (2023) “A multi-feature fusion framework for automatic skin cancer diagnostics.” Diagnostics 13(8):1474. https://doi.org/10.3390/diagnostics13081474

Z. E. Diame, M. N. Al-Berry, M. A.-M. Salem, and M. Roushdy, “Deep Learning Architectures For Aided Melanoma Skin Disease Recognition: A Review,” in 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), IEEE, May 2021, pp. 324–329. https://doi.org/10.1109/MIUCC52538.2021.9447615 .

Imran A, Nasir A, Bilal M, Sun G, Alzahrani A, Almuhaimeed A (2022) “Skin cancer detection using combined decision of deep learners.” IEEE Access 10:118198–118212. https://doi.org/10.1109/ACCESS.2022.3220329

Melarkode N, Srinivasan K, Qaisar SM, Plawiak P (2023) “AI-powered diagnosis of skin cancer: a contemporary review, open challenges and future research directions.” Cancers (Basel) 15(4):1183. https://doi.org/10.3390/cancers15041183

N. Kumar and T. Sandhan, “Alternating Sequential and Residual Networks for Skin Cancer Detection from Biomedical Images,” in 2023 National Conference on Communications (NCC), IEEE, Feb. 2023, pp. 1–5. https://doi.org/10.1109/NCC56989.2023.10068074 .

Salih O, Duffy KJ (2023) “Optimization convolutional neural network for automatic skin lesion diagnosis using a genetic algorithm.” Appl Sci 13(5):3248. https://doi.org/10.3390/app13053248

Cai H, Brinti Hussin N, Lan H, Li H (2023) “A skin cancer detector based on transfer learning and feature fusion.” Curr Bioinform 18(6):517–526. https://doi.org/10.2174/1574893618666230403115540

Rashid J et al (2022) “Skin cancer disease detection using transfer learning technique.” Appl Sci 12(11):5714. https://doi.org/10.3390/app12115714

Hussien MA, Alasadi AHH (2023) “A review of skin cancer detection: traditional and deep learning-based techniques.” J Univ Babylon Pure Appl Sci 31(2):253–262. https://doi.org/10.29196/jubpas.v31i2.4682

Ali MS, Miah MS, Haque J, Rahman MM, Islam MK (2021) “An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models.” Mach Learn Appl 5:100036. https://doi.org/10.1016/j.mlwa.2021.100036

Toğaçar M, Cömert Z, Ergen B (2021) “Intelligent skin cancer detection applying autoencoder, MobileNetV2 and spiking neural networks.” Chaos Solitons Fractals 144:110714. https://doi.org/10.1016/j.chaos.2021.110714

Article   MathSciNet   Google Scholar  

Shinde RK et al (2022) “Squeeze-mnet: precise skin cancer detection model for low computing IoT devices using transfer learning.” Cancers (Basel) 15(1):12. https://doi.org/10.3390/cancers15010012

Jain S, Singhania U, Tripathy B, Nasr EA, Aboudaif MK, Kamrani AK (2021) “Deep learning-based transfer learning for classification of skin cancer.” Sensors 21(23):8142. https://doi.org/10.3390/s21238142

Khan IU et al (2021) “Remote diagnosis and triaging model for skin cancer using efficientnet and extreme gradient boosting.” Complexity 2021:1–13. https://doi.org/10.1155/2021/5591614

Shehzad K et al (2023) “A deep-ensemble-learning-based approach for skin cancer diagnosis.” Electronics (Basel) 12(6):1342. https://doi.org/10.3390/electronics12061342

Srinivasu PN, SivaSai JG, Ijaz MF, Bhoi AK, Kim W, Kang JJ (2021) “Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM.” Sensors 21(8):2852. https://doi.org/10.3390/s21082852

Dildar M et al (2021) “Skin cancer detection: a review using deep learning techniques.” Int J Environ Res Public Health 18(10):5479. https://doi.org/10.3390/ijerph18105479

Mridha K, Uddin MdM, Shin J, Khadka S, Mridha MF (2023) “An interpretable skin cancer classification using optimized convolutional neural network for a smart healthcare system.” IEEE Access 11:41003–41018. https://doi.org/10.1109/ACCESS.2023.3269694

A. H. Jui, S. Sharnami, and A. Islam, “A CNN Based Approach to Classify Skin Cancers using Transfer Learning,” in 2022 25th International Conference on Computer and Information Technology (ICCIT), IEEE, Dec. 2022, pp. 1063–1068. https://doi.org/10.1109/ICCIT57492.2022.10055838 .

T. Guergueb and M. A. Akhloufi, “Skin Cancer Detection using Ensemble Learning and Grouping of Deep Models,” in International Conference on Content-based Multimedia Indexing, New York, NY, USA: ACM, Sep. 2022, pp. 121–125. https://doi.org/10.1145/3549555.3549584 .

Gong X, Xiao Y (2021) “A skin cancer detection interactive application based on CNN and NLP.” J Phys Conf Ser 2078(1):012036. https://doi.org/10.1088/1742-6596/2078/1/012036

Olayah F, Senan EM, Ahmed IA, Awaji B (2023) “AI techniques of dermoscopy image analysis for the early detection of skin lesions based on combined cnn features.” Diagnostics 13(7):1314. https://doi.org/10.3390/diagnostics13071314

Download references

Acknowledgements

The authors would like to acknowledge the Researchers Supporting Project number (RSP2024R206), King Saud University, Riyadh, Saudi Arabia.

Funding for this study was received from the Researchers Supporting Project number (RSP2024R206), King Saud University, Riyadh, Saudi Arabia.

Author information

Authors and affiliations.

Department of Computer Sciences and Engineering, College of Applied Science, King Saud University, 11543, Riyadh, Saudi Arabia

Mohammed Zakariah

Department of Software Engineering, College of Computer and Information Sciences, King Saud University, 11345, Riyadh, Saudi Arabia

Muna Al-Razgan

Department of Information Systems, College of Computer and Information Sciences, King Saud University, 11543, Riyadh, Saudi Arabia

Taha Alfakih

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Mohammed Zakariah .

Ethics declarations

Conflict of interest.

The authors declare there is no conflict of interest.

Ethical approval

Not applicable.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Zakariah, M., Al-Razgan, M. & Alfakih, T. Skin cancer detection with MobileNet-based transfer learning and MixNets for enhanced diagnosis. Neural Comput & Applic (2024). https://doi.org/10.1007/s00521-024-10227-w

Download citation

Received : 22 January 2024

Accepted : 12 July 2024

Published : 28 August 2024

DOI : https://doi.org/10.1007/s00521-024-10227-w

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Skin cancer detection
  • ISIC Archive
  • MobileNet-based transfer learning
  • Deep learning
  • Find a journal
  • Publish with us
  • Track your research

IMAGES

  1. (PDF) IJERT-Skin Disease Detection using Machine Learning

    skin disease detection using machine learning research paper

  2. Flow chart of skin disease image recognition based on machine learning

    skin disease detection using machine learning research paper

  3. Processes

    skin disease detection using machine learning research paper

  4. Applied Sciences

    skin disease detection using machine learning research paper

  5. A Mobile-Based Skin Disease Identification System Using Convolutional

    skin disease detection using machine learning research paper

  6. Figure 1 from SKIN DISEASE DETECTION USING COMPUTER VISION AND MACHINE

    skin disease detection using machine learning research paper

VIDEO

  1. Skin disease detection using ML

  2. Skin-O-Care , Skin disease detection website using machine learning

  3. Skin Disease Detection using Machine Learning

  4. skin disease detection using machine learning and flutter

  5. Eye Disease Detection Using Machine Learning

  6. Brain Tumor Disease Detection using Machine Learning and Deep Learning || VENKAT PROJECTS || HYD

COMMENTS

  1. Skin Lesion Classification and Detection Using Machine Learning Techniques: A Systematic Review

    In order to analyze current research findings that have been suggested for skin disease detection and classification using conventional machine learning methods, deep learning methods, and hybrid methodologies, this systematic literature review work set three main objectives: (1) to identify the commonly available datasets that could be ...

  2. A machine learning approach for skin disease detection and

    A machine learning approach for skin disease detection ...

  3. Machine Learning Methods in Skin Disease Recognition: A ...

    Skin lesions affect millions of people worldwide. They can be easily recognized based on their typically abnormal texture and color but are difficult to diagnose due to similar symptoms among certain types of lesions. The motivation for this study is to collate and analyze machine learning (ML) applications in skin lesion research, with the goal of encouraging the development of automated ...

  4. Machine Learning Algorithms based Skin Disease Detection

    This paper presents a comparative analysis of 5 different. machine learning algorithms random forest, naive Bayes, logistic regression, kernel SVM and CNN. All these. algorithms are implemented on ...

  5. Skin Disease Detection based on Machine Learning Techniques

    Skin is the human body's exterior integument. Human skin pigmentation varies from person to person, and skin types include dry, oily, and mixed. The human skin's diversity offers bacteria and other microbes with a diverse home. Melanocytes in the human skin create melanin, which can absorb harmful UV radiation from the sun, causing skin damage and cancer. In most third-world societies, the ...

  6. Skin disease detection using deep learning

    Skin disease detection using deep learning

  7. Skin Disease Detection Using Machine Learning Techniques

    3 Proposed Work. The proposed technique is a valuable tool for studying people's skin diseases and predicting skin disease. A hybrid architecture of image processing and machine learning techniques is used in this proposed framework to predict disease types with promising accuracy in a short period of time.

  8. Recent Advancements and Perspectives in the Diagnosis of Skin Diseases

    Objective: Skin diseases constitute a widespread health concern, and the application of machine learning and deep learning algorithms has been instrumental in improving diagnostic accuracy and treatment effectiveness. This paper aims to provide a comprehensive review of the existing research on the utilization of machine learning and deep learning in the field of skin disease diagnosis, with a ...

  9. Melanoma Detection Using Deep Learning-Based Classifications

    In skin cancer research, image processing, machine learning, CNN, and DL have all ... Koc K.O. Detection of skin diseases from dermoscopy image using the combination of convolutional neural network and one-versus-all. ... Ijaz M.F., Bhoi A.K., Kim W., Kang J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 ...

  10. Automated Skin Disease Detection Using Machine Learning Techniques

    An automatic detection method for skin disorders identifies skin diseases with high efficiency within a short period of time. If skin disorders are found early, lives can be spared from serious conditions such as skin cancer. This chapter contains the design criteria of a protection system for hybridized skin diseases.

  11. Skin Disease Classification and Detection by Deep Learning and Machine

    Kotian AL, Deepa K (2017) Detection and classification of skin diseases by image analysis using MATLAB. Int J Emerg Res Manag Technol 6(5):779-784. Google Scholar Sundaramurthy S, Saravanabhavan C, Kshirsagar P (2020) Prediction and Classification of rheumatoid arthritis using ensemble machine learning approaches.

  12. Automatic skin disease diagnosis using deep learning from clinical

    Skin is the largest organ of the body; it provides protection, regulates body fluids and temperature, and enables sensing of the external environment. 1 Skin diseases are the most common cause of human illness, affecting almost 900 million people in the world at any time. 2 According to the Global Burden of Disease project, skin ...

  13. A Method Of Skin Disease Detection Using Image Processing And Machine


  14. Classification of Skin Disease Using Deep Learning Neural Networks with


  15. Skin Disease Detection using Deep Learning

    Epidermal conditions are among the most avoidable diseases on the planet. Despite being common, their study is quite difficult owing to intricate variations in color, the presence of hair, and occlusion. Early diagnosis of skin diseases is critical to successful treatment. The skills and experience of expert specialists are used to determine the procedure for identifying and treating skin damage. The ...

  16. Skin Disease Detection and Recommendation System using Deep Learning

    The main objective of this research is to develop an application based on deep learning, computer vision, and cloud computing that detects different kinds of skin diseases caused by viruses, bacteria, fungi, and environmental factors. This study has also developed and integrated a recommendation system, which recommends medicines and a care process for a particular disease ...

  17. A Method Of Skin Disease Detection Using Image Processing And Machine

    The paper proposed a skin disease detection tool based on image processing, machine learning, and deep learning techniques. The proposed tools are non-invasive, easy to use, and accurate in identifying ...
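    The image-processing-plus-machine-learning style of pipeline referenced above might look like the following minimal sketch. The feature extractor, synthetic images, and class labels here are hypothetical stand-ins, not the paper's method.

    ```python
    # Hypothetical sketch of an image-based pipeline: extract simple per-channel
    # color statistics from images and feed them to a classifier. The synthetic
    # images below merely stand in for real clinical photographs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def extract_features(img):
        """Per-channel mean and standard deviation of an HxWx3 image."""
        return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

    # Two synthetic "classes" with different color profiles (illustrative only).
    healthy = [rng.normal(0.6, 0.1, (32, 32, 3)) for _ in range(50)]
    lesion = [rng.normal(0.4, 0.2, (32, 32, 3)) for _ in range(50)]

    X = np.array([extract_features(im) for im in healthy + lesion])
    y = np.array([0] * 50 + [1] * 50)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```

    A real system of this kind would replace the hand-crafted statistics with richer texture and shape descriptors, or with features learned by a deep network, and would evaluate on held-out patient data.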

  18. Skin Cancer Detection: A Review Using Deep Learning Techniques


  19. Skin Disease Classification System Based on Machine Learning Technique

    Skin Disease Classification System Based on Machine Learning Technique: A Survey. Saja Salim Mohammed1 and Jamal Mustafa Al-Tuwaijari1. IOP Conference Series: Materials Science and Engineering, Volume 1076, 2nd International Scientific Conference of Engineering Sciences (ISCES 2020) 16th-17th December 2020, Diyala, Iraq Citation Saja Salim ...

  20. A systematic literature survey on skin disease detection and

    The world population is growing very fast, and human lifestyles change with time and place. So there is a need for disease management, which includes disease diagnosis, detection and classification, cure, and, lastly, future disease prevention. The outermost protective layer of the human body is the skin. Skin not only impacts a person's health but also psychologically ...

  21. Skin cancer identification utilizing deep learning: A survey

    These datasets play crucial roles in advancing research and development across various fields, from dermatology to machine learning and pharmaceuticals. In summary, the availability of public datasets specifically curated for melanoma identification has played a pivotal role in facilitating the development and evaluation of DL models aimed at ...

  22. Deep Learning in Skin Disease Image Recognition: A Review

    The application of deep learning methods to diagnose diseases has become a new research topic in the medical field. In the field of medicine, skin disease is one of the most common diseases, and its visual representation is more prominent compared with the other types of diseases. Accordingly, the use of deep learning methods for skin disease image recognition is of great significance and has ...

  23. Skin Disease Detection using Machine Learning

    Skin diseases are the fourth most common cause of skin burden worldwide. Robust, automated systems have been developed to lessen this burden and to help patients conduct early assessment of skin lesions. Most such systems available in the literature only provide skin cancer classification. Treatments for skin are more effective and ...

  24. Skin Disease Classification Using Machine Learning and Data Mining

    Skin is an extraordinary human structure. As a result of inherited traits and environmental variables, skin conditions are among the most prevalent worldwide. People frequently neglect the effects of skin diseases in their initial stages. Both well-known and rare diseases are commonly experienced. Identifying skin diseases and their kinds in the medical field is a very difficult process. It can be ...

  25. Skin cancer detection with MobileNet-based transfer learning and

    Skin cancer poses a significant health hazard, necessitating the utilization of advanced diagnostic methodologies to facilitate timely detection, owing to its escalating prevalence in recent years. This paper proposes a novel approach to tackle the issue by introducing a method for detecting skin cancer that uses MixNets to enhance diagnosis and leverages mobile network-based transfer learning ...