This article presents a solution for object volume measurement and packing using 3D cameras (such as the Microsoft Kinect™). We target application scenarios, such as warehouses or distribution and logistics companies, where it is important to promptly compute package volumes, yet high accuracy is not pivotal. Our application automatically detects cuboid objects in the depth camera data, computes their volumes, and sorts them, allowing space optimization. The proposed methodology applies simple computer vision and image processing methods to a point cloud, such as connected components, morphological operations and the Harris corner detector, producing encouraging results, namely an accuracy of 8 mm in the volume measurements. Aspects that can be further improved are identified; nevertheless, the current solution is already promising, turning out to be cost-effective for the envisaged scenarios.
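As an illustration of the kind of pipeline this abstract outlines, the sketch below applies morphological opening, connected components, and the Harris corner detector to a depth image using OpenCV. All names and thresholds (floor_depth, min_area, the corner threshold) are illustrative assumptions, not the authors' actual parameters.

```python
import cv2
import numpy as np

def detect_cuboid_faces(depth, floor_depth=1.5, min_area=2000):
    # depth: HxW array of depth values (metres) from the 3D camera.
    # Anything closer than the assumed floor plane is a candidate object.
    mask = ((depth > 0) & (depth < floor_depth)).astype(np.uint8)
    # Morphological opening removes the speckle noise typical of depth sensors.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Connected components isolate individual packages.
    n, labels = cv2.connectedComponents(mask)
    boxes = []
    for i in range(1, n):
        comp = (labels == i).astype(np.uint8)
        if comp.sum() < min_area:
            continue
        # Harris corners on the component locate the visible face's vertices,
        # from which edge lengths (and hence volume) can be estimated.
        corners = cv2.cornerHarris(comp.astype(np.float32), 2, 3, 0.04)
        ys, xs = np.where(corners > 0.1 * corners.max())
        boxes.append((comp, list(zip(xs, ys))))
    return boxes
```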
This paper introduces pSIVE, a platform that allows the easy setting up of Virtual Environments with interactive information (for instance, a video or a document about a machine that is present in the virtual world) that can be attached to different 3D elements. The main goal is to support evaluation and training in a virtual factory, while remaining generic enough to be applied in different contexts (academic and touristic, for instance) by non-expert users. We show some preliminary results obtained from two different scenarios: first, a production line of a factory with contextualized information associated with different elements, aimed at training employees; second, a testing environment used to compare and assess two different selection styles integrated in pSIVE, and to let different users interact with an environment created with pSIVE and collect their opinions about the system. The conclusions show that overall satisfaction was high, and the comments will be considered in further platform development.
Selecting input and output devices for virtual walkthroughs is an important issue, as it may have a significant impact on usability and comfort. This paper presents a user study meant to compare the usability of two input devices used for walkthroughs in a virtual environment with a Head-Mounted Display. User performance, satisfaction, ease of use and comfort were compared for two different input devices: a two-button mouse and a joystick from a gamepad. Participants also used a desktop to perform the same tasks, in order to assess whether the participant groups had similar profiles. The results obtained with 45 participants suggest that both input devices have comparable usability in the conditions used, and show that participants generally performed better with the desktop; a discussion of possible causes is presented.
KEYWORDS: Visualization, Taxonomy, Visual analytics, Information visualization, Testing and analysis, Data analysis, Data modeling, Data visualization, Statistical analysis, Optimization (mathematics)
The first data and information visualization techniques and systems were developed and presented without a systematic evaluation; however, researchers have become increasingly aware of the importance of evaluation (Plaisant, 2004). Evaluation is not only a means of improving techniques and applications, but it can also produce evidence of measurable benefits that will encourage adoption. Yet, evaluating visualization applications or techniques is not simple. We deem that visualization applications should be developed using a user-centered design approach and that evaluation should take place in several phases along the process, with different purposes. An account of the issues we consider relevant while planning an evaluation in Medical Data Visualization can be found in (Sousa Santos and Dillenseger, 2005). In that work the question "how well does a visualization represent the underlying phenomenon and help the user understand it?" is identified as fundamental, and is decomposed into two aspects: A) the evaluation of the representation of the phenomenon (first part of the question); B) the evaluation of the users' performance in their tasks when using the visualization, which implies understanding of the phenomenon (second part of the question). We contend that these questions transcend Medical Data Visualization and can be considered central to evaluating Data and Information Visualization applications and techniques in general. In fact, the latter part of the question is related to the question Freitas et al. (2009) deem crucial to user-centered visualization evaluation: "How do we know if information visualization tools are useful and usable for real users performing real visualization tasks?" In what follows, issues and methods that we have been using to tackle this latter question are briefly addressed. This excludes equally relevant topics, such as algorithm optimization and accuracy, that can be dealt with using concepts and methods well known in other disciplines and are mainly related to how well the phenomenon is represented. A list of guidelines considered our best practices for performing evaluations is presented and some conclusions are drawn.
In medical image processing and analysis it is often required to perform segmentation for quantitative measures of extent, volume and shape. The validation of new segmentation methods and tools usually implies comparing their outputs among themselves (or with a ground truth) using similarity metrics. Several such metrics are proposed in the literature, but it is important to select those which are relevant for a particular task, as opposed to using all metrics, thereby avoiding additional computational cost and redundancy. A methodology is proposed which enables the assessment of how different similarity and discrepancy metrics behave for a particular comparison, and the selection of those which provide relevant data.
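As a concrete illustration of such metrics, the sketch below implements two common similarity measures (Dice and Jaccard) and one discrepancy measure (the symmetric Hausdorff distance) for binary segmentation masks; the function names are ours, not the paper's.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    # a, b: boolean arrays of equal shape (binary segmentation masks).
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def hausdorff(a_pts, b_pts):
    # Symmetric Hausdorff distance between two contour point sets of shape (N, 2).
    return max(directed_hausdorff(a_pts, b_pts)[0],
               directed_hausdorff(b_pts, a_pts)[0])
```

Note that Dice and Jaccard are monotonic transforms of one another (J = D / (2 - D)), so reporting both adds computational cost without new information: exactly the kind of redundancy the proposed methodology is meant to detect.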
Detailed morphological analysis of pulmonary structures and tissue, provided by modern CT scanners, is of utmost importance in oncological applications, for diagnosis, treatment, and follow-up. In this case, a patient may go through several tomographic studies over a period of time, originating volumetric sets of image data that must be appropriately registered in order to track suspicious radiological findings. The structures or regions of interest may change their position or shape between CT exams acquired at different moments, due to postural, physiologic or pathologic changes, so the exams must be registered before any follow-up information can be extracted. Postural mismatching over time is practically impossible to avoid, and is particularly evident when imaging is performed at the limiting spatial resolution. In this paper, we propose a method for intra-patient registration of pulmonary CT studies, to assist in the management of oncological pathology. Our method takes advantage of prior segmentation work. In the first step, pulmonary segmentation is performed and the trachea and main bronchi are identified. Then, the registration method proceeds with a longitudinal alignment based on morphological features of the lungs, such as the position of the carina, the pulmonary areas, the centers of mass and the pulmonary trans-axial principal axes. The final step corresponds to the trans-axial registration of the corresponding pulmonary masked regions. This is accomplished by a pairwise sectional registration process driven by an iterative search of the affine transformation parameters leading to optimal similarity metrics. Results with several cases of intra-patient, intra-modality registration, with up to 7 time points, show that this method provides the accurate registration needed for quantitative tracking of lesions and the development of image fusion strategies that may effectively assist the follow-up process.
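A minimal sketch of the feature-based alignment ingredients mentioned above (centers of mass and principal axes of the lung masks), assuming binary masks and ignoring the sign and ordering ambiguities of eigenvectors that a real implementation must resolve; this is not the authors' code.

```python
import numpy as np

def mask_pose(mask, spacing):
    # mask: 3D boolean lung mask; spacing: voxel size (z, y, x) in mm.
    pts = np.argwhere(mask) * np.asarray(spacing)
    centre = pts.mean(axis=0)
    # Principal axes are the eigenvectors of the point covariance matrix.
    cov = np.cov((pts - centre).T)
    _, axes = np.linalg.eigh(cov)
    return centre, axes

def align(mask_a, mask_b, spacing):
    # Rigid part of the transform taking exam A onto exam B.
    c_a, ax_a = mask_pose(mask_a, spacing)
    c_b, ax_b = mask_pose(mask_b, spacing)
    rotation = ax_b @ ax_a.T           # maps A's principal frame onto B's
    translation = c_b - rotation @ c_a
    return rotation, translation
```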
This paper presents preliminary results on the development of a 3D audiovisual model of the Anta Pintada (painted dolmen) of Antelas, a Neolithic chamber tomb located in Oliveira de Frades and listed as a Portuguese national monument. The final aim of the project is to create a highly accurate Virtual Reality (VR) model of this unique archaeological site, capable of providing not only visual but also acoustic immersion based on its actual geometry and physical properties.
The project started in May 2006 with in situ data acquisition. The 3D geometry of the chamber was captured using a Laser Range Finder. In order to combine the different scans into a complete 3D visual model, reconstruction software based on the Iterative Closest Point (ICP) algorithm was developed using the Visualization Toolkit (VTK). This software computes the boundaries of the room on a 3D uniform grid and populates its interior with "free-space nodes", through an iterative algorithm operating like a torchlight illuminating a dark room. The envelope of the resulting set of "free-space nodes" is used to generate a 3D iso-surface approximating the interior shape of the chamber. Each polygon of this surface is then assigned the acoustic absorption coefficient of the corresponding boundary material.
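One possible reading of the "torchlight" step, sketched below as an iterative flood fill on a 3D occupancy grid: free space grows from a seed inside the chamber until stopped by scanned boundary voxels. The grid representation and the seed are assumptions for illustration; the envelope of the result would then be polygonised, e.g. with a marching-cubes iso-surface.

```python
import numpy as np
from scipy import ndimage

def free_space(occupied, seed):
    # occupied: 3D boolean grid marking voxels hit by laser-scan points.
    # seed: (z, y, x) index of a voxel known to lie inside the chamber.
    free = np.zeros_like(occupied, dtype=bool)
    free[seed] = True
    struct = ndimage.generate_binary_structure(3, 1)   # 6-connectivity
    while True:
        # Grow the "light" by one voxel in every direction, stopping at walls.
        grown = ndimage.binary_dilation(free, struct) & ~occupied
        if (grown == free).all():
            return free   # the envelope of this set approximates the interior
        free = grown
```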
A 3D audiovisual model operating in real time was developed for a VR environment comprising an I-glasses SVGA Pro head-mounted display (HMD), an InterTrax 2 orientation sensor (tracker) with 3 Degrees of Freedom (3DOF) and stereo headphones. The auralisation software is based on a geometric model. This constitutes a first approach, since geometric acoustics has well-known limitations in rooms with irregular surfaces. The immediate advantage lies in its inherent computational efficiency, which allows real-time operation. The program computes the early reflections forming the initial part of the chamber's impulse response (IR), which carry the most significant cues for source localisation. These early reflections are processed through Head-Related Transfer Functions (HRTF) updated in real time according to the orientation of the user's head, so that sound waves appear to come from the correct location in space, in agreement with the visual scene. The late-reverberation tail of the IR is generated by an algorithm designed to match the reverberation time of the chamber, calculated from the actual acoustic absorption coefficients of its surfaces. The sound output to the headphones is obtained by convolving the IR with anechoic recordings of the virtual audio source.
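The final convolution step can be sketched in a few lines; a binaural version, under the assumption that a left and a right IR (early reflections already HRTF-processed, plus the late tail) are available as arrays:

```python
import numpy as np
from scipy.signal import fftconvolve

def auralise(anechoic, ir_left, ir_right):
    # anechoic: 1D array with the dry source signal; ir_*: per-ear impulse
    # responses. FFT convolution keeps this fast enough for block processing.
    left = fftconvolve(anechoic, ir_left)
    right = fftconvolve(anechoic, ir_right)
    out = np.stack([left, right], axis=1)
    return out / np.max(np.abs(out))   # normalise to avoid clipping
```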
The complexity of a polygonal mesh is usually reduced by applying a simplification method, resulting in a similar mesh having fewer vertices and faces. Although several such methods have been developed, only a few observer studies are reported comparing the perceived quality of the simplified meshes, and it is not yet clear how the choice of a given method, and the level of simplification achieved, influence the quality of the resulting mesh as perceived by the final users. Similar issues occur regarding other mesh processing methods, such as smoothing. Mesh quality indices are the obvious, less costly alternative to user studies, but it is also not clear how they relate to perceived quality, and which indices best describe the users' behavior. This paper describes ongoing work concerning the evaluation of the perceived quality of polygonal meshes using observer studies, while looking for a quality index which estimates user performance. In particular, given results obtained in previous studies, a new experimental protocol was designed and a study involving 55 users was carried out, which allowed the validation of those results, as well as further insight into mesh quality as perceived by human observers.
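One quality index of the kind sought here, sketched for illustration: the mean geometric distance from the simplified mesh's vertices to the nearest original vertices, a cheap vertex-sampled approximation of surface-to-surface distance (our formulation, not necessarily the paper's).

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_geometric_distance(original_vertices, simplified_vertices):
    # Both arguments are (N, 3) float arrays of mesh vertex positions.
    tree = cKDTree(original_vertices)
    d, _ = tree.query(simplified_vertices)   # nearest-neighbour distances
    return d.mean()
```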
Virtual and Augmented Reality are developing rapidly: there is a multitude of environments and experiments in several laboratories, ranging from simple HMD (Head-Mounted Display) visualization to more complex and expensive 6-wall projection CAVEs and other systems. Still, there is not yet a clear emerging technology in this area, nor are commercial applications based on such technology used at large scale. In addition to the fact that this is a relatively recent technology, there is little work validating the utility and usability of Virtual and Augmented Reality environments compared with the traditional desktop set-up. However, usability evaluation is crucial in order to design better systems that respond to the users' needs, as well as to identify applications that might really gain from the use of such technologies. This paper presents a preliminary usability evaluation of a low-cost Virtual and Augmented Reality environment under development at the University of Aveiro, Portugal. The objective is to assess the difference between a traditional desktop set-up and a Virtual/Augmented Reality system based on a stereo HMD. Two different studies were performed: the first was qualitative, and feedback was obtained from domain experts who used an Augmented Reality set-up as well as a desktop in different data visualization scenarios. The second study consisted of a controlled experiment meant to compare users' performance in a gaming scenario in a Virtual Reality environment and on a desktop. The overall conclusion is that these technologies still have to overcome some hardware problems. However, for short periods of time and specific applications, Virtual and Augmented Reality seems to be a valid alternative, since HMD interaction is intuitive and natural.
KEYWORDS: Lung, Data modeling, Neodymium, Visual process modeling, Visualization, Data analysis, 3D modeling, Statistical modeling, Statistical analysis, Error analysis
The complexity of a polygonal mesh model is usually reduced by applying a simplification method, resulting in a similar mesh having fewer vertices and faces. Although several such methods have been developed, only a few observer studies are reported comparing them regarding the perceived quality of the obtained simplified meshes, and it is not yet clear how the choice of a given method, and the level of simplification achieved, influence the quality of the resulting model as perceived by the final users. Mesh quality indices are the obvious, less costly alternative to user studies, but it is also not clear how they relate to perceived quality, and which indices best describe the users' behavior. Following on earlier work carried out by the authors, which covered only mesh models of the lungs, a comparison among the results of three simplification methods was performed through (1) quality indices and (2) a controlled experiment involving 65 observers, for a set of five reference mesh models of different kinds. These were simplified using two methods provided by the OpenMesh library - one using error quadrics, the other additionally using a normal flipping criterion - and also by the widely used QSlim method, for two simplification levels: 50% and 20% of the original number of faces. The main goal was to ascertain whether the findings previously obtained for lung models, through quality indices and a study with 32 observers, could be generalized to other types of models and confirmed for a larger number of observers. Data obtained using the quality indices and the results of the controlled experiment were compared and do confirm that some quality indices (e.g., geometric distance and normal deviation, as well as a newly proposed weighted index) can be used, in specific circumstances, as reasonable estimators of the user-perceived quality of mesh models.
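The normal-deviation index mentioned above can be sketched as the mean angle between the normals of corresponding faces. The pairing by nearest face centroid is our simplifying assumption; the actual correspondence scheme used in the study may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_normal_deviation(centroids_a, normals_a, centroids_b, normals_b):
    # centroids_*: (N, 3) face centroids; normals_*: (N, 3) unit face normals.
    tree = cKDTree(centroids_a)
    _, idx = tree.query(centroids_b)        # pair each face of B with nearest face of A
    cosines = np.clip((normals_a[idx] * normals_b).sum(axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosines)).mean()
```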
KEYWORDS: Visualization, Visual process modeling, Data modeling, 3D modeling, Visual analytics, Networks, Quality measurement, Solid modeling, Process modeling, Light sources
Polygonal meshes are used in many application scenarios. Often the generated meshes are too complex, not allowing proper interaction, visualization or transmission through a network. To tackle this problem, simplification methods can be used to generate less complex versions of those meshes. For this purpose many methods have been proposed in the literature, and it is of paramount importance that each new method be compared with its predecessors, thus allowing quality assessment of the solution it provides. This systematic evaluation of each new method requires tools which provide all the necessary features (ranging from quality measures to visualization methods) to help users gain greater insight into the data. This article presents a comparison of two simplification algorithms, NSA and QSlim, using PolyMeCo, a tool which enhances the way users perform mesh analysis and comparison by providing an environment where several visualization options are available and can be used in a coordinated way.
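Neither NSA nor QSlim is shown here; as an illustrative stand-in from the same error-quadric family as QSlim, Open3D's quadric decimation can produce the reduced versions that a tool like PolyMeCo would then compare (the file name and reduction ratios below are hypothetical).

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.ply")     # hypothetical input file
for ratio in (0.5, 0.2):                          # e.g. 50% and 20% of the faces
    target = int(len(mesh.triangles) * ratio)
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    o3d.io.write_triangle_mesh(f"model_{int(ratio * 100)}pc.ply", simplified)
```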
A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks, such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea and primary bronchi; then the pulmonary region is identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high-resolution CT exams, due to the presence of airway and vascular structures. Nevertheless, the average error is smaller than the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method for pulmonary emphysema that also classifies emphysema according to its severity degree. Two clinically proven thresholds are applied, identifying regions with severe emphysema and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
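A condensed sketch of an intensity-plus-morphology pipeline of the kind described: threshold at a low HU value, discard air connected to the volume border, close small vessel holes, and keep the two largest components. The -400 HU threshold and structuring parameters are illustrative guesses, not the paper's tuned values, and the trachea/bronchi identification step is omitted.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(ct_hu):
    # ct_hu: 3D array of Hounsfield units (z, y, x).
    air = ct_hu < -400                    # low-density (aerated) voxels
    labels, _ = ndimage.label(air)
    # Labels touching any face of the volume are background air, not lung.
    border = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    body_air = air & ~np.isin(labels, border[border != 0])
    # Closing fills small vascular holes inside the lung fields.
    body_air = ndimage.binary_closing(
        body_air, ndimage.generate_binary_structure(3, 1), iterations=3)
    # Keep the two largest components: the left and right lungs.
    labels, n = ndimage.label(body_air)
    sizes = ndimage.sum(body_air, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1
    return np.isin(labels, keep)
```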
KEYWORDS: Visualization, Image quality, Medical imaging, Data modeling, Taxonomy, Image visualization, Visual process modeling, Data visualization, Visual system, Medicine
Among the several medical imaging stages (acquisition, reconstruction, etc.), visualization is the last stage, the one on which decisions are generally based. Scientific visualization tools allow complex data to be processed into a visible and understandable graphical form, the goal being to provide new insight. While the evaluation of procedures is a crucial issue and a main concern in medicine, paradoxically, visualization techniques, particularly in three-dimensional imaging, have not been the subject of many evaluation studies. This is perhaps due to the fact that the visualization process involves the Human Visual and Cognitive Systems, which makes evaluation especially difficult. However, as in medical imaging, the quality evaluation of a specific visualization remains a main challenge. While a few studies concerning specific cases have already been published, there is still a great need for the definition and systematization of evaluation methodologies. The goal of our study is to propose such a framework, which makes it possible to take into account all the parameters involved in the evaluation of a visualization technique. Concerning the problem of quality evaluation in data visualization in general, and in medical data visualization in particular, three different concepts appear to be fundamental: the type and level of the components used to convey to the user the information contained in the data, the type and level at which evaluation can be performed, and the methodologies used to perform such an evaluation. We propose a taxonomy involving the types of methods that can be used to perform evaluation at different levels.
Meshes are currently used to model objects, namely human organs and other structures. However, if they have a large number of triangles, their rendering times may not be adequate for interactive visualization, a highly desirable feature in some diagnosis (or, more generally, decision) scenarios, where the choice of adequate views is important. In this case, a possible solution consists of showing a simplified version while the user interactively chooses the viewpoint and, then, a fully detailed version of the model to support its analysis. To tackle this problem, simplification methods can be used to generate less complex versions of meshes. While several simplification methods have been developed and reported in the literature, only a few studies compare them concerning the perceived quality of the obtained simplified meshes.
This work describes an experiment conducted with human observers in order to compare three different simplification methods used to simplify mesh models of the lungs. We intended to study whether any of these methods allows a better perceived quality for the same simplification rate.
A protocol was developed in order to measure these aspects. The results presented were obtained from 32 human observers. The comparison between the three mesh simplification methods was first performed through an Exploratory Data Analysis, and the significance of this comparison was then established using other statistical methods. Moreover, the influence of some other factors on the observers' performance was also investigated.
Quantitative evaluation of the performance of segmentation algorithms on medical images is crucial before their clinical use can be considered. We have quantitatively compared the contours obtained by a pulmonary segmentation algorithm to contours manually drawn by six expert imagiologists on the same set of images, since the ground truth is unknown. Two types of variability (inter-observer and intra-observer) should be taken into account in the performance evaluation of segmentation algorithms, and several methods to do so have been proposed. This paper describes the quantitative evaluation of the performance of our segmentation algorithm using several figures of merit, exploratory and multivariate data analysis and non-parametric tests, based on the assessment of the inter-observer variability of six expert imagiologists from three different hospitals and the intra-observer variability of two expert imagiologists from the same hospital. As an overall result of this comparison we were able to claim that the consistency and accuracy of our pulmonary segmentation algorithm are adequate for most of the quantitative requirements mentioned by the imagiologists. We also believe that the methodology used to evaluate the performance of our algorithm is general enough to be applicable to many other segmentation problems on medical images.
Bubble emphysema is a disease characterized by the presence of air bubbles within the lungs. With the purpose of identifying pulmonary air bubbles, two alternative detection methods were developed, using High Resolution Computed Tomography (HRCT) exams. The search volume is confined to the pulmonary volume by a previously developed pulmonary contour detection algorithm. The first detection method follows a slice-by-slice approach and uses selection criteria based on the Hounsfield levels, dimensions, shape and localization of the bubbles. Candidate regions that do not exhibit axial coherence along at least two sections are excluded. Intermediate sections are interpolated for a more realistic representation of lungs and bubbles. The second detection method, after the pulmonary volume delimitation, follows a fully 3D approach. A global threshold is applied to the entire lung volume, returning candidate regions. 3D morphologic operators are used to remove spurious structures and to circumscribe the bubbles.
Bubble representation is accomplished by two alternative methods. The first generates bubble surfaces based on the voxel volumes previously detected; the second assumes that bubbles are approximately spherical and, in order to obtain better 3D representations, fits super-quadrics to the bubble volumes. The fitting process is based on a non-linear least-squares optimization method, where a super-quadric is adapted to a regular grid of points defined on each bubble.
All methods were applied to real and semi-synthetic data, where artificial, randomly deformed bubbles were embedded in the interior of healthy lungs. Quantitative results regarding bubble geometric features are either similar to the a priori known values used in the simulation tests, or indicate clinically acceptable dimensions and locations when dealing with real data.
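The super-quadric fitting step can be sketched with a standard inside-outside residual minimised by non-linear least squares. This assumes an axis-aligned superellipsoid fitted to surface samples of one bubble; the paper's exact formulation (regular grid of points, possible rotation parameters) may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residuals(params, pts):
    # Inside-outside function of an axis-aligned superellipsoid:
    # F = ((|x/a|^(2/e2) + |y/b|^(2/e2))^(e2/e1) + |z/c|^(2/e1))^(e1/2),
    # which equals 1 exactly on the surface; the residual is F - 1.
    a, b, c, e1, e2 = params
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    f = (np.abs(x / a) ** (2 / e2) + np.abs(y / b) ** (2 / e2)) ** (e2 / e1) \
        + np.abs(z / c) ** (2 / e1)
    return f ** (e1 / 2) - 1.0

def fit_superquadric(pts):
    # pts: (N, 3) surface samples of one bubble; centre them first.
    pts = pts - pts.mean(axis=0)
    half_extents = 0.5 * (pts.max(axis=0) - pts.min(axis=0))
    x0 = [*half_extents, 1.0, 1.0]        # start from an ellipsoid (e1 = e2 = 1)
    res = least_squares(superquadric_residuals, x0, args=(pts,),
                        bounds=([1e-3] * 3 + [0.1, 0.1],
                                [np.inf] * 3 + [2.0, 2.0]))
    return res.x                          # (a, b, c, e1, e2)
```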
The visual analysis of Stereoelectroencephalographic (SEEG) signals in their anatomical context is aimed at understanding the spatio-temporal dynamics of epileptic processes. The magnitude of these signals may be encoded by graphical glyphs, having a direct impact on the perception of the values. Our study is devoted to the evaluation of the quantitative visualization of these signals, specifically to the influence of the coding scheme of the glyphs on the understanding and analysis of the signals. This work describes an experiment conducted with human observers in order to evaluate three different coding schemes used to visualize the magnitude of SEEG signals in their 3D anatomical context. We intended to study whether any of these coding schemes allows better performance by the human observers in two respects: accuracy and speed. A protocol was developed in order to measure these aspects. The results presented in this work were obtained from 40 human observers. The comparison between the three coding schemes was first performed through an Exploratory Data Analysis (EDA). The statistical significance of this comparison was then established using nonparametric methods. The influence of some other factors on the observers' performance was also investigated.
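The nonparametric analysis described can be sketched with SciPy: a Friedman test across the three related samples, followed by pairwise Wilcoxon signed-rank tests as post-hoc comparisons. The data layout (one row per observer, one column per coding scheme) is a hypothetical illustration, not the study's actual analysis script.

```python
from scipy import stats

def compare_schemes(times):
    # times: (n_observers, 3) array of response times per coding scheme.
    chi2, p = stats.friedmanchisquare(times[:, 0], times[:, 1], times[:, 2])
    print(f"Friedman chi-square = {chi2:.2f}, p = {p:.4f}")
    if p < 0.05:
        # Pairwise post-hoc comparisons; a Bonferroni correction would
        # divide the significance level by the number of pairs (3).
        for i, j in [(0, 1), (0, 2), (1, 2)]:
            w, pw = stats.wilcoxon(times[:, i], times[:, j])
            print(f"scheme {i} vs {j}: W = {w:.1f}, p = {pw:.4f}")
```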
Segmentation of thoracic X-Ray Computed Tomography images is a mandatory pre-processing step in many automated or semi-automated analysis tasks, such as region identification and densitometric analysis, or even for 3D visualization purposes when a stack of slices has to be prepared for surface or volume rendering. In this work, we present a fully automated and fast method for pulmonary contour extraction and region identification. Our method combines adaptive intensity discrimination, geometrical feature estimation and morphological processing, resulting in a fast and flexible algorithm. A complementary, but no less important, objective of this work was a quality assessment study of the developed contour detection technique. The automatically extracted contours were statistically compared to manually drawn pulmonary outlines provided by two radiologists. Exploratory data analysis and non-parametric statistical tests were performed on the results obtained using several figures of merit. Results indicate that, besides a strong consistency among all the quality indices, the inter-observer variability between the two radiologists is wider than the variability of our algorithm with respect to each of them. As an overall conclusion, we claim that the consistency and accuracy of our detection method are more than acceptable for most of the quantitative requirements mentioned by the radiologists.