Robustness evaluation on different training state of a CNN model

Abstract: Convolutional neural networks (CNNs) have proved successful in many applications, such as image processing. However, even imperceptible perturbations applied to the input images can make a neural network's performance unreliable. To guarantee accurate performance in safety-critical fields, the robustness of a CNN solution must be assessed before deployment. Adversarial attacks are a machine learning approach that generates perturbations of real samples to expose the vulnerabilities of a CNN. In this paper, we use an adversarial attack technique to evaluate a CNN at different training states. The model was trained for a surgical tool classification task, recognizing surgical tools during cholecystectomy to support further analysis of the surgical process. The experiments demonstrate the relation between training state and robustness: robustness improved at higher training states, especially for some particular classes. In future work, additional training with the generated adversarial images may further improve the robustness of the model.
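The abstract does not name the specific attack technique, so the following is only a minimal sketch of the general evaluation approach, assuming the fast gradient sign method (FGSM) in PyTorch. The function names (fgsm_attack, robust_accuracy), the perturbation budget epsilon, and the assumption of pixel values in [0, 1] are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """Perturb a batch of images with the fast gradient sign method.

    Each image is shifted by epsilon in the direction of the sign of the
    loss gradient, then clamped back to the assumed [0, 1] pixel range.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, epsilon, device="cpu"):
    """Accuracy of `model` on adversarially perturbed batches from `loader`."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = fgsm_attack(model, images, labels, epsilon)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Hypothetical usage: compare checkpoints saved at different training states.
# for path in ["epoch_05.pt", "epoch_20.pt", "epoch_50.pt"]:
#     model.load_state_dict(torch.load(path))
#     print(path, robust_accuracy(model, val_loader, epsilon=0.03))
```

Running such an evaluation on checkpoints saved at different training states would yield the kind of robustness-versus-training-state comparison the abstract describes.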

Location
Deutsche Nationalbibliothek Frankfurt am Main
Extent
Online resource
Language
English

Bibliographic citation
Robustness evaluation on different training state of a CNN model. Current Directions in Biomedical Engineering, vol. 8, no. 2 (2022), pp. 497-500 (4 pages total).

Creator
Ding, Ning
Möller, Knut

DOI
10.1515/cdbme-2022-1127
URN
urn:nbn:de:101:1-2022090315340993091515
Rights
Open Access; access to the object is unrestricted.
Last update
15.08.2025, 7:35 AM CEST

Data provider
Deutsche Nationalbibliothek
