Accurate pressure ulcer measurement is critical for assessing the effectiveness of treatment. However, the traditional measurement process is subjective: each health care provider may measure the same wound differently, especially with respect to its depth, and even the same provider may obtain inconsistent measurements of the same wound at different times. The process also requires frequent contact with the wound, which increases the risk of contamination or infection and can be uncomfortable for the patient. This manuscript describes a new automatic pressure ulcer monitoring system (PrUMS), which uses a tablet connected to a 3D scanner to provide an objective, consistent, noncontact measurement method. We combine color segmentation on 2D images with 3D surface gradients to automatically segment the wound region for advanced wound measurements. To demonstrate the system, two pressure ulcers on a mannequin are measured with PrUMS; ground truth is provided by a clinically trained wound care nurse. The 2D measurements (length and width) from PrUMS are within 1 mm average error with 2 mm standard deviation; for the depth measurement, both the average error and the standard deviation are 2 mm. On a small pilot dataset of 8 patients, the average errors of PrUMS are 3 mm, 3 mm, and 4 mm in length, width, and depth, respectively.
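As a rough illustration of the segmentation idea, the sketch below fuses a 2D color cue with a 3D surface-gradient cue. It is a minimal sketch only: the HSV thresholds, the gradient threshold, and the `segment_wound` helper are hypothetical choices for illustration, not the paper's actual pipeline.

```python
import numpy as np
import cv2

def segment_wound(color_bgr, depth_mm, grad_thresh=1.5):
    """Fuse a 2D color cue with a 3D surface-gradient cue (illustrative thresholds)."""
    # Color cue: reddish wound tissue isolated in HSV space (hypothetical ranges).
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    red_lo = cv2.inRange(hsv, (0, 60, 40), (10, 255, 255))
    red_hi = cv2.inRange(hsv, (170, 60, 40), (180, 255, 255))
    color_mask = cv2.bitwise_or(red_lo, red_hi)

    # Depth cue: the wound cavity boundary shows strong surface gradients.
    gy, gx = np.gradient(depth_mm.astype(np.float32))
    grad_mag = np.hypot(gx, gy)
    depth_mask = (grad_mag > grad_thresh).astype(np.uint8) * 255

    # Combine cues and keep the largest connected component as the wound region.
    fused = cv2.bitwise_and(color_mask, cv2.dilate(depth_mask, np.ones((5, 5), np.uint8)))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fused)
    if n <= 1:
        return np.zeros_like(fused)
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == biggest).astype(np.uint8) * 255
```

Length and width can then be read from the bounding box of the returned mask, and depth from the difference between the cavity floor and a surface fit over the surrounding skin.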
This paper presents an automated system for classifying images by their visual complexity. Image complexity is approximated with a clutter measure, and the parameters for processing each image are chosen dynamically. The classification method is part of a vision-based collision avoidance system for low-altitude aerial vehicles, intended for use during search and rescue operations in urban settings. The collision avoidance system focuses on detecting thin obstacles such as wires and power lines. Automatic parameter selection for edge detection yields 5% and 12% performance improvements for medium and heavily cluttered images, respectively. The automatic classification enabled the algorithm to identify nearly invisible power lines, without any manual intervention, in 60 frames of video from an SUAV helicopter crashing during a search and rescue mission after Hurricane Katrina.
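As a hedged sketch of how a clutter measure can drive parameter selection, the snippet below switches Canny edge-detection thresholds by clutter class. The gradient-density clutter measure and all threshold values are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
import cv2

def clutter_measure(gray):
    """Approximate visual clutter as the density of strong gradients (an assumption)."""
    grad = cv2.Laplacian(gray, cv2.CV_32F)
    return float(np.mean(np.abs(grad) > 20))

def adaptive_canny(gray):
    """Pick Canny thresholds per clutter class (illustrative values)."""
    c = clutter_measure(gray)
    if c < 0.05:    # low clutter: sensitive thresholds to keep thin wires
        lo, hi = 20, 60
    elif c < 0.15:  # medium clutter
        lo, hi = 50, 120
    else:           # heavy clutter: suppress texture edges
        lo, hi = 80, 200
    return cv2.Canny(gray, lo, hi)
```

The point of the classification step is that a single fixed threshold pair either drowns thin wires in texture edges (heavy clutter) or misses them entirely (low contrast), so the thresholds must follow the measured clutter.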
The use of biometric systems in physical access scenarios is gaining popularity. In such scenarios, users are enrolled under well-controlled conditions, usually indoors. To gain access to the building, the user then provides biometric samples in an outdoor environment over which there is little control. This adversely affects the quality of the samples, and as a result system performance is suboptimal. This study evaluates the performance of a multimodal biometric system in a physical access scenario. We evaluate leading commercial algorithms on an indoor-outdoor multimodal database comprising face and voice samples. The indoor-outdoor nature of the database and the choice of modalities cause the individual systems to perform poorly. Popular normalization and fusion techniques are used to improve the performance of the overall system. Multimodal fusion yields an average improvement of approximately 20% at a 1% false acceptance rate over the individual modalities.
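As a minimal sketch of the normalization-plus-fusion stage, assuming z-score normalization and a weighted sum rule (both are common choices; the study's exact techniques and weights are not reproduced here):

```python
import numpy as np

def znorm(scores):
    """Z-score normalization of match scores (one common choice)."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def sum_rule_fusion(face_scores, voice_scores, w_face=0.5):
    """Weighted sum-rule fusion of normalized face and voice scores.

    The equal default weight is an illustrative assumption; in practice the
    weights would be tuned on a validation set.
    """
    return w_face * znorm(face_scores) + (1 - w_face) * znorm(voice_scores)
```

Normalization is essential here because face and voice matchers output scores on incommensurate scales; fusing raw scores would let one modality dominate.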
Most state-of-the-art video-based gait recognition algorithms start from binary silhouettes. These silhouettes, defined as foreground regions, are usually detected by background subtraction methods, which produce holes or missed parts where foreground and background colors are similar, and boundary errors due to video compression artifacts. Errors in this low-level representation make it hard to understand the effect of certain conditions, such as surface and time, on gait recognition. In this paper, we present a part-level, manual silhouette database consisting of 71 subjects, over one gait cycle each, with differences in surface, shoe type, carrying condition, and time, for a total of about 11,000 manual silhouette frames. The purpose of this database is twofold. First, it is a resource, available at http://www.GaitChallenge.org, for the gait community to use in testing and designing better silhouette detection algorithms; the silhouettes can also be used to learn gait dynamics. Second, using the baseline gait recognition algorithm specified along with the HumanID Gait Challenge problem, we show that performance from manual silhouettes is similar to, and only sometimes better than, that from automated silhouettes detected by statistical background subtraction. The low performance observed when comparing sequences that differ in walking surface or in time is therefore not fully explained by silhouette quality. We also study the recognition power of each body part and show that recognition based on the legs alone equals that from the whole silhouette; there is also significant recognition power in the shape of the head and torso.
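For context, the baseline algorithm's frame-level comparison is an intersection-over-union (Tanimoto) similarity between binary silhouettes; a minimal sketch of that measure, assuming NumPy boolean arrays:

```python
import numpy as np

def silhouette_similarity(a, b):
    """Tanimoto similarity |A intersect B| / |A union B| of two binary silhouettes."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Part-level experiments such as legs-only recognition amount to masking both silhouettes to the body part of interest before computing this measure.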
KEYWORDS: 3D modeling, Breast, Finite element methods, Mammography, Magnetic resonance imaging, 3D image processing, X-rays, Mathematical modeling, Tissues, X-ray imaging
Predicting breast tissue deformation is of great significance in several medical applications, such as biopsy, diagnosis, and surgery. In breast surgery, surgeons are often concerned with a specific portion of the breast, e.g., a tumor, which must be located accurately beforehand. Clinically, it is also important to combine information provided by images from several modalities, or taken at different times, for detection/diagnosis, treatment planning, and guidance of interventions. Multi-modality imaging of the breast, obtained by X-ray mammography and MRI, is thought to be best achieved through some form of data fusion technique. However, images from these techniques are often obtained under entirely different tissue configurations, compression, orientation, or body position. In these cases, some form of spatial transformation of image data from one geometry to another is required so that the tissues are represented in an equivalent configuration.
We propose to use a 3D finite element model for lesion correspondence in breast imaging. The novelty of the approach lies in the following facts: (1) The finite element method is among the most accurate techniques for modeling deformable objects such as the breast; its physical soundness and mathematical rigor ensure the accuracy and reliability of breast modeling, which is essential for lesion correspondence. (2) When both MR and mammographic images are available, a subject-specific 3D breast model is built from the MRIs; if only mammography is available, a generic breast model is used for two-view mammography reading. (3) Incremental contact simulation of breast compression allows accurate capture of breast deformation and ensures the quality of lesion correspondence. (4) A balance between efficiency and accuracy is achieved through adaptive meshing. We have carried out extensive experiments based on phantom and patient data.
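The full model is a 3D finite element simulation with contact; as a vastly simplified, hypothetical stand-in, the sketch below solves the linear system K u = f for a 1D chain of elements under an incrementally prescribed plate displacement, mirroring only the incremental loading scheme, not the actual 3D contact mechanics.

```python
import numpy as np

def assemble_stiffness(n_elems, k=1.0):
    """Global stiffness matrix for a 1D chain of linear elements
    (a toy stand-in for the 3D breast mesh)."""
    K = np.zeros((n_elems + 1, n_elems + 1))
    ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke
    return K

def incremental_compression(n_elems=10, total_disp=1.0, steps=5):
    """Apply the compressing plate's motion in small increments; each step
    solves K u = f on the free nodes. In the real nonlinear contact problem
    the increments matter; in this linear toy they simply track the load path."""
    K = assemble_stiffness(n_elems)
    u = np.zeros(n_elems + 1)
    free = np.arange(1, n_elems)  # node 0 fixed, last node driven by the plate
    for s in range(1, steps + 1):
        u[-1] = total_disp * s / steps  # prescribed plate displacement
        f = -K[np.ix_(free, [0, n_elems])] @ u[[0, n_elems]]
        u[free] = np.linalg.solve(K[np.ix_(free, free)], f)
    return u
```

A lesion's position after compression is then read off by interpolating the displacement field at the lesion's mesh location, which is what makes cross-view or cross-modality correspondence possible.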
Conference Committee Involvement (7)
Biometric Technology for Human Identification VII
5 April 2010 | Orlando, Florida, United States
Biometric Technology for Human Identification VI
13 April 2009 | Orlando, Florida, United States
Biometric Technology for Human Identification V
18 March 2008 | Orlando, Florida, United States
Biometric Technology for Human Identification IV
9 April 2007 | Orlando, Florida, United States
Biometric Technology for Human Identification III
17 April 2006 | Orlando (Kissimmee), Florida, United States