Abstract
The rapid increase in the number of CT scans and the limited number of radiologists present a unique opportunity for computer-based radiological Content-Based Image Retrieval (CBIR) systems. However, the current structure of clinical diagnosis reports exhibits substantial variability, which significantly hampers the creation of effective CBIR systems. Researchers are currently looking for ways to standardize the report structure, e.g., by introducing uniform User Express (UsE) annotations and by automating the extraction of UsE annotations from Computer Generated (CoG) features. This paper presents an experimental evaluation of the derivation of UsE annotations from CoG features with a classifier that estimates each UsE annotation from the input CoG features. We used the datasets of the ImageCLEF Liver CT Annotation challenge: 50 training and 10 testing CT scans with liver and liver lesion annotations. Our experimental results on the ImageCLEF Liver CT Annotation challenge exhibit a completeness level of 95% and an accuracy of 91% for 10 unseen cases. This is the second-best result obtained in the Liver CT Annotation challenge and only 1% away from the first place.
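The abstract describes training one classifier per UsE annotation field, each taking the CoG feature vector of a scan as input. The paper does not specify the classifier family, so the following is only a minimal illustrative sketch using a 1-nearest-neighbour rule over toy data; all feature values, field names, and labels here are hypothetical, not taken from the challenge datasets.

```python
import math

def nearest_neighbour_predict(train_X, train_y, x):
    """Predict one UsE annotation label for feature vector x via 1-NN."""
    best = min(range(len(train_X)),
               key=lambda i: math.dist(train_X[i], x))
    return train_y[best]

# Toy CoG feature vectors (e.g. normalized lesion density, relative size)
# for four hypothetical training scans.
train_X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
# One UsE annotation field per scan, e.g. a hypothetical "lesion type".
train_y = ["cyst", "metastasis", "cyst", "metastasis"]

print(nearest_neighbour_predict(train_X, train_y, [0.15, 0.15]))  # → cyst
```

In the setting described above, one such predictor would be trained for every UsE annotation field, and completeness/accuracy would be measured over the fields of the 10 unseen test cases.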
| Original language | English |
|---|---|
| Pages (from-to) | 438-447 |
| Number of pages | 10 |
| Journal | CEUR Workshop Proceedings |
| Volume | 1180 |
| State | Published - 2014 |
| Event | 2014 Cross Language Evaluation Forum Conference, CLEF 2014 - Sheffield, United Kingdom. Duration: 15 Sep 2014 → 18 Sep 2014 |