Classification of physiological disorders in apples using deep convolutional neural network under different lighting conditions


BÜYÜKARIKAN B., Ulker E.

Multimedia Tools and Applications, vol.82, no.21, pp.32463-32483, 2023 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 82 Issue: 21
  • Publication Date: 2023
  • DOI: 10.1007/s11042-023-14766-7
  • Journal Name: Multimedia Tools and Applications
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, FRANCIS, ABI/INFORM, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Page Numbers: pp.32463-32483
  • Keywords: Classification, Computer vision, Convolutional neural network, Friedman/Nemenyi, Lighting, Physiological disorders in apples
  • Affiliated with Isparta University of Applied Sciences: Yes

Abstract

Non-destructive testing of apple fruit, an important product in the world fresh fruit trade, for physiological disorders can be performed with a computer vision system. However, the images captured by such a system may be affected by the brightness variations created by different lighting conditions. It is therefore necessary to use algorithms that detect physiological disorders accurately and quickly. A convolutional neural network (CNN), which enables features to be extracted from images easily, makes determining physiological disorders more straightforward. This study aims to classify images of apples with physiological disorders, obtained under different lighting conditions, using CNN models. A dataset of physiological disorder images was created under varying light colors, angles, and distances. Five-fold cross-validation was applied to improve the generalization ability of the models, and the CNN models were trained end-to-end. In addition, the Friedman hypothesis test and the post-hoc Nemenyi test were performed to compare the evaluation indicators of the different CNN models. The average accuracy, precision, recall, and F1-score of the Xception model were 0.996, 0.994, 0.996, and 0.998, respectively. In classification accuracy, this model was followed by ResNet101, MobileNet, ResNet152, ResNet18, ResNet34, ResNet50, EfficientNetB0, AlexNet, VGG16, and VGG19. Finally, Xception performed well according to the Friedman/Nemenyi test results.
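The model-comparison procedure described above (per-fold metrics compared with the Friedman test, then a Nemenyi critical-difference check) can be sketched as follows. This is a minimal illustration, not the paper's code: the fold accuracies are hypothetical placeholder values, and only three of the eleven models are shown.

```python
# Sketch of the Friedman + Nemenyi comparison over k-fold results.
# Fold accuracies below are hypothetical, not the study's reported values.
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical 5-fold accuracies for three of the compared models
scores = {
    "Xception":  [0.995, 0.996, 0.997, 0.996, 0.996],
    "ResNet101": [0.990, 0.991, 0.989, 0.992, 0.990],
    "VGG16":     [0.970, 0.972, 0.969, 0.971, 0.973],
}

# Friedman test: do the models' fold-wise rankings differ significantly?
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

# Average rank of each model across folds (rank 1 = best accuracy in a fold)
data = np.array(list(scores.values())).T            # shape (folds, models)
ranks = (-data).argsort(axis=1).argsort(axis=1) + 1  # no ties in this toy data
avg_ranks = ranks.mean(axis=0)

# Nemenyi critical difference: CD = q_alpha * sqrt(k*(k+1) / (6*N));
# two models differ significantly if their average ranks differ by more than CD
k, N = data.shape[1], data.shape[0]
q_alpha = 2.343                                      # q_0.05 for k = 3 models
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
print("average ranks:", dict(zip(scores, avg_ranks)))
print(f"critical difference = {cd:.3f}")
```

With consistent fold-wise rankings like these, the Friedman test rejects the null hypothesis of equal performance, and models whose average ranks differ by more than the critical difference are declared significantly different, which is how a best-performing model such as Xception would be singled out.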