Melanoma is a major health problem since it is the deadliest form of skin cancer. Early diagnosis through periodic screening with dermoscopic images can significantly improve the survival rate and reduce both the treatment cost and the consequent suffering of patients. Dermoscopy, or skin surface microscopy, provides in vivo inspection of the color and morphologic structures of pigmented skin lesions (PSLs), yielding higher accuracy in detecting suspicious cases than is possible with the naked eye. However, interpretation of dermoscopic images is time consuming and subjective, even for trained dermatologists. There is therefore great interest in the development of computer-aided diagnosis (CAD) systems for automated melanoma recognition. The majority of such CAD systems, however, are still in an early stage of development, lacking descriptive feature generation and benchmark evaluation on ground-truth datasets. This work addresses these issues by combining effective feature extraction, based on the Non-Subsampled Contourlet Transform (NSCT) and the Eig(Hess) histogram of oriented gradients (HOG), with lesion classification by an Extreme Learning Machine (ELM), chosen for its good generalization ability and high learning efficiency. The approach is evaluated on a benchmark dataset of dermoscopic images toward the goals of realistic comparison and real clinical integration. The proposed research on melanoma recognition has substantial potential to offer powerful services that would benefit present biomedical information systems.
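The ELM classifier mentioned above trains only the output layer: hidden-layer weights are random and fixed, and output weights come from a least-squares (pseudo-inverse) solution, which is what gives the method its high learning efficiency. A minimal numpy sketch, using toy feature vectors in place of real NSCT/Eig(Hess)-HOG descriptors (all sizes and the tanh activation are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=64, rng=rng):
    """Fit an ELM: random input weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy data: two separable feature clusters standing in for lesion descriptors
# (benign vs. melanoma); real inputs would be the extracted feature vectors.
X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
y = np.array([0] * 50 + [1] * 50, dtype=float)

W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
```

Because training reduces to one pseudo-inverse, fitting is essentially a single linear-algebra call, in contrast to iterative backpropagation.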
This article presents an approach to biomedical image retrieval by mapping image regions to local concepts, where images are represented in a weighted entropy-based concept feature space. The term “concept” refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (i.e., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.
This paper presents a novel approach to biomedical image retrieval by mapping image regions to local concepts and representing images in a weighted entropy-based concept feature space. The term “concept” refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (i.e., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
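The spatial verification step described above re-scores retrieved matches by how well the match location agrees with the query ROI's position. A minimal sketch of one way to do this, re-weighting a similarity score by a Gaussian falloff over the distance between ROI centers in normalized image coordinates (the Gaussian form and the sigma value are illustrative assumptions):

```python
import numpy as np

def spatial_rescore(score, q_center, d_center, sigma=0.2):
    """Downweight a match whose ROI center disagrees spatially with the query.

    Centers are (x, y) in normalized [0, 1] image coordinates.
    """
    dist = np.linalg.norm(np.asarray(q_center) - np.asarray(d_center))
    return score * float(np.exp(-dist**2 / (2 * sigma**2)))

# A match near the query ROI keeps most of its score; a distant one is
# heavily penalized even if its visual similarity was identical.
near = spatial_rescore(0.9, (0.5, 0.5), (0.52, 0.48))
far = spatial_rescore(0.9, (0.5, 0.5), (0.9, 0.1))
```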
This paper presents a novel approach to biomedical image representation for classification by mapping image regions to local concepts and representing images in a weighted entropy-based probabilistic feature space. In a heterogeneous collection of medical images, it is possible to identify specific local patches that are perceptually and/or semantically distinguishable. The variation of these patches is effectively modeled as local concepts based on their low-level features, which serve as inputs to a multi-class SVM classifier. The probability of occurrence of each concept in an image is measured by spreading and normalizing each region’s class confidence score based on the probabilistic output of the classifier. Furthermore, the importance of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector, overcoming the limitation of TF-IDF-based weighting. In addition, to take the localization information of concepts into consideration, each image is segmented into five overlapping regions, and local concept feature vectors are generated from those regions to obtain a combined semi-global feature vector. A systematic evaluation of image classification on two biomedical image data sets demonstrates an improvement of more than 10% for the proposed feature representation approach compared to the commonly used low-level and visual word-based approaches.
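The entropy-based "visualness" weighting described above can be sketched concretely: the Shannon entropy of a patch's pixel-value histogram scales that patch's concept weight, so uniform, uninformative patches contribute little. A minimal numpy sketch under the assumptions of 8-bit grayscale patches and a final L1 normalization (both illustrative choices, not necessarily the paper's exact setup):

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (bits) of the pixel-value histogram of a patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_features(concept_probs, patches):
    """Scale each concept probability by its patch's entropy, then normalize."""
    w = np.array([patch_entropy(p) for p in patches])
    v = np.asarray(concept_probs, dtype=float) * w
    return v / v.sum() if v.sum() > 0 else v

flat = np.full((8, 8), 128)              # uniform patch: zero entropy
varied = np.arange(64).reshape(8, 8) * 4 # spread values: high entropy
vec = entropy_weighted_features([0.5, 0.5], [flat, varied])
```

Even though both concepts start with equal probability 0.5, the flat patch's weight collapses to zero, illustrating how visualness refines a frequency-style (TF-IDF-like) vector.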
Image modality classification is an important task toward achieving high performance in biomedical image and article retrieval. An image's modality captures information about its appearance and use; examples include X-ray, MRI, histopathology, and ultrasound. Modality classification reduces the search space in image retrieval. We have developed and evaluated several modality classification methods using visual and textual features extracted from images and text data, such as figure captions, article citations, and MeSH®. Our hierarchical classification method using multimodal (mixed textual and visual) and several class-specific features achieved the highest classification accuracy of 63.2%. The performance was among the best in the ImageCLEF2012 evaluation.
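One common way to combine textual and visual evidence for modality classification, as in the multimodal method above, is late fusion: each modality's classifier produces per-class scores, which are merged by a weighted sum before taking the argmax. A minimal sketch (the 0.6/0.4 weights and the hand-written score vectors are illustrative assumptions, not the evaluated system's values):

```python
import numpy as np

MODALITIES = ["X-ray", "MRI", "Histopathology", "Ultrasound"]

def fuse(text_scores, visual_scores, w_text=0.6):
    """Late fusion: weighted sum of per-class scores, then argmax."""
    s = w_text * np.asarray(text_scores) + (1 - w_text) * np.asarray(visual_scores)
    return MODALITIES[int(np.argmax(s))]

# Caption text weakly suggests MRI, but the visual classifier strongly
# favors X-ray; the fused decision follows the stronger evidence.
pred = fuse([0.3, 0.4, 0.2, 0.1], [0.9, 0.05, 0.03, 0.02])
```

Setting `w_text=1.0` recovers the text-only decision, which makes the contribution of the visual channel easy to inspect per class.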
Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. They appear in specialized databases or in biomedical publications and are not meaningfully retrievable using primarily text-based retrieval systems. The task of automatically finding the images in an article that are most useful for determining relevance to a clinical situation is quite challenging. One approach is to automatically annotate images extracted from scientific publications with respect to their usefulness for CDS. As an important step toward this goal, we proposed figure image analysis for localizing pointers (arrows, symbols) to extract regions of interest (ROIs) that can then be used to obtain meaningful local image content. Content-based image retrieval (CBIR) techniques can then associate local image ROIs with biomedical concepts identified in figure captions for improved hybrid (text and image) retrieval of biomedical articles.
In this work we present methods that add robustness to our previous Markov random field (MRF)-based approach to pointer recognition and ROI extraction. These include the use of Active Shape Models (ASMs) to overcome problems in recognizing distorted pointer shapes, and a region segmentation method for ROI extraction.
We measure the performance of our methods on two criteria: (i) effectiveness in recognizing pointers in images, and (ii) improved document retrieval through the use of extracted ROIs. Evaluation on three test sets shows 87% accuracy on the first criterion. Further, the quality of document retrieval using local visual features and text is shown to be better than using visual features alone.
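The hybrid retrieval result above, where local visual features plus text beat visual features alone, can be sketched as a simple combined similarity score: cosine similarity is computed separately for the ROI visual descriptor and the text descriptor, then averaged. The toy vectors and the equal 0.5/0.5 weighting are illustrative assumptions:

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_score(q_vis, q_txt, d_vis, d_txt):
    """Equal-weight fusion of ROI-visual and text similarities."""
    return 0.5 * cosine(q_vis, d_vis) + 0.5 * cosine(q_txt, d_txt)

# Each document: (ROI visual descriptor, text descriptor); toy 3-d vectors.
docs = {"doc1": ([1, 0, 0], [1, 1, 0]),
        "doc2": ([0, 1, 0], [0, 0, 1])}
query = ([1, 0, 0], [1, 0, 0])

best = max(docs, key=lambda d: hybrid_score(*query, *docs[d]))
```

In a real system the text vector would come from caption/article terms and the visual vector from features computed over the extracted ROI rather than the whole image.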
Biomedical images are invaluable in establishing diagnosis, acquiring technical skills, and implementing best practices in many areas of medicine. At present, images needed for instructional purposes or in support of clinical decisions appear in specialized databases and in biomedical articles, and are often not easily accessible to retrieval tools. Our goal is to automatically annotate images extracted from scientific publications with respect to their usefulness for clinical decision support and instructional purposes, and to project the annotations onto images stored in databases by linking images through content-based image similarity.

Authors often use text labels and pointers overlaid on figures and illustrations in articles to highlight regions of interest (ROIs). These annotations are then referenced in the caption text or in figure citations in the article text. In previous research we developed two methods (heuristic and dynamic time warping-based) for localizing and recognizing such pointers on biomedical images. In this work, we add robustness to our previous efforts by using a machine-learning-based approach to localizing and recognizing the pointers. Identifying these pointers can assist in extracting relevant image content at regions within the image that are likely to be highly relevant to the discussion in the article text. Image regions can then be annotated using biomedical concepts from extracted snippets of text pertaining to images in scientific biomedical articles, identified using the National Library of Medicine's Unified Medical Language System® (UMLS) Metathesaurus. The resulting regional annotations and extracted image content are then used as indices for biomedical article retrieval using multimodal features and region-based content-based image retrieval (CBIR) techniques. The hypothesis that such an approach would improve biomedical document retrieval is validated through experiments on an expert-marked biomedical article dataset.