The SARS-CoV-2 (COVID-19) disease spread rapidly worldwide, increasing the need for new strategies to fight it. Researchers in several fields have attempted to develop methods to identify it early and mitigate its effects. Deep Learning (DL) approaches, such as Convolutional Neural Networks (CNNs), have been increasingly used in COVID-19 diagnosis. These models are intended to support decision-making and perform well in detecting patient status early. Although DL models reach good accuracy in supporting diagnosis, they are vulnerable to Adversarial Attacks: methods that mislead DL models by adding small perturbations to the original image. This paper investigates the impact of Adversarial Attacks on DL models for classifying X-ray images of COVID-19 cases. We focused on the Fast Gradient Sign Method (FGSM) attack, which adds a perturbation matrix to the test images, producing crafted images. We conducted experiments analyzing the models' performance both attack-free and under attack. The following CNN models were selected: DenseNet201, ResNet-50V2, MobileNetV2, NASNet, and VGG16. In the attack-free environment, we reached precision of around 99%. Under attack, all models suffered a performance reduction; the most affected was MobileNetV2, which dropped from 98.61% to 67.73%, whereas the VGG16 network proved to be the least affected. Our findings show that DL models for COVID-19 are vulnerable to adversarial examples: FGSM was capable of fooling the models, resulting in a significant reduction in DL performance.
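As a rough illustration of how FGSM combines a perturbation matrix with the original image, the sketch below crafts an adversarial example for a Keras classifier; the model choice, epsilon value, and preprocessing are assumptions for illustration, not the exact experimental setup used in the paper.

```python
# Minimal FGSM sketch (illustrative assumptions: model choice, epsilon, preprocessing).
import tensorflow as tf

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return image + epsilon * sign(gradient of the loss w.r.t. the pixels)."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image, training=False)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)        # sensitivity of the loss to each pixel
    perturbation = epsilon * tf.sign(gradient)   # the FGSM "perturbation matrix"
    return tf.clip_by_value(image + perturbation, 0.0, 1.0)

# Hypothetical usage with one of the evaluated architectures (weights not loaded here):
# model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), weights=None, classes=2)
# x_adv = fgsm_attack(model, x_test[:1], y_test[:1], epsilon=0.01)
```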
This paper presents a Computer-Aided Diagnosis (CAD) system for mammograms based on complex networks for shape-boundary characterization of masses, suggesting a "second opinion" to the health specialist. The region of interest (the mass) is automatically segmented using an improved algorithm based on EM/MPM, and its shape is modeled as a scale-free complex network. Topological measurements of the resulting network are used to compose the shape descriptors. Experiments comparing the complex-network approach with other traditional descriptors in detecting breast cancer in mammograms show that the proposed approach achieves the best accuracy values. Hence, the results indicate that complex networks are well-suited to characterizing mammograms.
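The construction below is only a minimal sketch of the general idea of turning a mass boundary into a complex network and reading off topological measurements; the distance-threshold rule, the networkx library, and the chosen measurements are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch: boundary points -> graph -> simple topological descriptors.
import numpy as np
import networkx as nx

def boundary_to_network(points, threshold):
    """Connect boundary points whose Euclidean distance is below `threshold` (assumed rule)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < threshold:
                g.add_edge(i, j)
    return g

def shape_descriptors(points, thresholds=(5.0, 10.0, 20.0)):
    """Degree statistics over several thresholds compose the feature vector."""
    features = []
    for t in thresholds:
        degrees = np.array([d for _, d in boundary_to_network(points, t).degree()])
        features += [degrees.mean(), degrees.max(), degrees.std()]
    return np.array(features)
```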
Techniques for Content-Based Image Retrieval (CBIR) have been intensively explored due to the increase in the number of captured images and the need to retrieve them quickly. The medical field is a specific example that generates a large flow of information, especially digital images employed for diagnosis. One issue that remains unsolved is how to capture perceptual similarity: to achieve effective retrieval, one must characterize and quantify the perceptual similarity as judged by the specialist in the field. The present paper was therefore conceived to fill this gap, creating consistent support for performing similarity queries over medical images while maintaining the semantics of a given query as desired by the user. CBIR systems relying on relevance feedback techniques usually request users to label relevant images. In this paper, we present a simple but highly effective strategy to build user profiles, taking advantage of such labeling to implicitly gather the user's perceptual similarity. The user profiles maintain the settings desired by each user, allowing the similarity assessment to be tuned, which includes dynamically changing the distance function employed through an interactive process. Experiments using computed tomography lung images show that the proposed approach is effective in capturing the users' perception.
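The snippet below is a minimal sketch of interactive distance tuning driven by relevance feedback: feature weights stored in a per-user profile are re-estimated from the images the user labels as relevant. The weighted Euclidean distance and the variance-based update rule are assumptions for illustration, not the strategy proposed in the paper.

```python
# Illustrative sketch of a user profile that tunes the distance function
# from relevance feedback (the update rule is an assumption).
import numpy as np

class UserProfile:
    def __init__(self, n_features):
        self.weights = np.ones(n_features)   # start from an unweighted distance

    def distance(self, x, y):
        """Weighted Euclidean distance shaped by this user's profile."""
        return float(np.sqrt(np.sum(self.weights * (x - y) ** 2)))

    def update(self, relevant_features):
        """Emphasize features that vary little among the images labeled relevant."""
        variance = np.var(relevant_features, axis=0) + 1e-9
        self.weights = 1.0 / variance
        self.weights /= self.weights.sum()    # keep the profile normalized
```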
A challenge in Computer-Aided Diagnosis based on image exams is to provide a timely answer that complies with the specialist's expectations. In many situations, when a specialist gets a new image to analyze, information and knowledge from similar cases can be very helpful. For example, when a radiologist evaluates a new image, it is common to recall similar cases from the past. When performing similarity queries to retrieve such cases, the approach frequently adopted is to extract meaningful features from the images and search the database based on those features. One of the most popular image features is the gray-level histogram, because it is simple and fast to obtain and provides the global gray-level distribution of the image. Moreover, normalized histograms are invariant to affine transformations of the image. Although widely used, gray-level histograms generate a large number of features, increasing the complexity of indexing and searching operations; this high dimensionality degrades the efficiency of processing similarity queries. In this paper we propose a new and efficient method that combines Shannon entropy with the gray-level histogram to considerably reduce the dimensionality of the feature vectors generated by histograms. The proposed method was evaluated using a real dataset, and the results showed impressive reductions of up to 99% in the feature-vector size while providing a gain in precision of up to 125% compared with the traditional gray-level histogram.
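For concreteness, the sketch below computes the traditional normalized gray-level histogram and its Shannon entropy, H(p) = -Σ p_i log2 p_i; the final step, which summarizes the histogram by the entropies of a few coarse gray-level ranges, is only a hypothetical way to shrink the descriptor and is not the reduction method proposed in the paper.

```python
# Illustrative sketch: gray-level histogram, Shannon entropy, and a hypothetical
# entropy-based reduction of the 256-bin descriptor.
import numpy as np

def gray_level_histogram(image, bins=256):
    """Normalized gray-level histogram (the traditional 256-value descriptor)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def shannon_entropy(p):
    """H(p) = -sum(p_i * log2(p_i)), ignoring empty bins."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_features(image, parts=4):
    """Hypothetical reduction: one entropy value per coarse gray-level range."""
    chunks = np.array_split(gray_level_histogram(image), parts)
    return np.array([shannon_entropy(c) for c in chunks])
```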