In this research study, a system to support medical analysis of intestinal contractions by processing WCE images is presented. Small intestine contractions are among the motility patterns that reveal many gastrointestinal disorders, such as functional dyspepsia, paralytic ileus, irritable bowel syndrome, and bacterial overgrowth. The images have been obtained using the Wireless Capsule Endoscopy (WCE) technique, a patented, disposable video color-imaging capsule. Manual annotation of contractions is a laborious task, since the recording device of the capsule stores about 50,000 images and contractions may represent only about 1% of the whole video. In this paper we propose the use of Local Binary Patterns (LBP) combined with powerful texton statistics to find the frames of the video related to contractions. We achieve a sensitivity of about 80% and a specificity of about 99%. The high detection accuracy of the proposed system thus indicates that such intelligent schemes could be used as a supplementary diagnostic tool in endoscopy.
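As an illustration of the first stage of such a scheme, the sketch below (Python with scikit-image; the radius and number of sampling points are illustrative assumptions, not the paper's settings) computes a uniform-LBP histogram for a grayscale WCE frame; texton-style statistics and a classifier would then operate on such descriptors.

```python
# Sketch of a per-frame LBP descriptor; radius and number of sampling points
# are illustrative choices, not those of the paper.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(frame, n_points=8, radius=1):
    codes = local_binary_pattern(frame, n_points, radius, method="uniform")
    n_bins = n_points + 2                  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Texton-style statistics could then be obtained by clustering such descriptors
# (e.g. with k-means) and feeding the resulting frequency vectors to a
# contraction / non-contraction classifier.
```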
Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine non-invasively. Medical specialists look for significant events in the WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions that can last up to one hour. This limits the usage of WCE. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine and colon would therefore be of great advantage. In this paper we propose to use textons for the automatic discrimination of abrupt changes within a video. In particular, for each frame we consider as features the hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The experiments have been conducted on ten video segments extracted from WCE videos, in which the significant events have been previously labelled by experts. Results have shown that the proposed method may eliminate up to 70% of the frames from further investigation, so that the direct analysis by the doctors can be concentrated on eventful frames only. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid to find clinically relevant segments of the video.
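A minimal sketch of the per-frame features described above is given below (Python with scikit-image): mean hue, saturation and value are combined with the mean magnitude of the responses to a small Gabor filter bank; the bank frequencies and orientations are illustrative assumptions.

```python
# Sketch of the per-frame feature vector: mean hue, saturation and value plus
# the mean magnitude of the responses to a small Gabor bank (frequencies and
# orientations are illustrative assumptions).
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import gabor

def frame_features(rgb_frame,
                   frequencies=(0.1, 0.25),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    hsv = rgb2hsv(rgb_frame)
    feats = [hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()]
    gray = hsv[..., 2]                                   # value channel as intensity
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(gray, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.asarray(feats)
```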
In this paper we present a technique which infers interframe motion by tracking SIFT features through consecutive frames: feature points are detected and their stability is evaluated through a combination of geometric error measures and fuzzy logic modelling. Our algorithm does not depend on the point detector adopted prior to SIFT descriptor creation: performance has therefore been evaluated against a wide set of point detection algorithms, in order to investigate how to increase stabilization quality with an appropriate detector.
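The following sketch (Python with OpenCV >= 4.4) illustrates basic SIFT-based interframe motion estimation; the fuzzy-logic stability evaluation of the paper is not reproduced, and RANSAC inlier filtering stands in for it.

```python
# Sketch of SIFT-based interframe motion estimation (OpenCV >= 4.4). The
# fuzzy-logic stability evaluation is not reproduced; RANSAC stands in for it.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def interframe_motion(prev_gray, curr_gray, ratio=0.75):
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    # Lowe's ratio test to keep distinctive matches
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # Similarity transform (rotation + translation + scale) between the frames
    motion, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return motion
```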
Holistic representations of natural scenes are an effective and powerful source of information for semantic classification and analysis of arbitrary images. Recently, the frequency domain has been successfully exploited to holistically encode the content of natural scenes in order to obtain a robust representation for scene classification. Despite hardware and software advances, consumer single-sensor imaging devices are still quite far from being able to recognize scenes and/or exploit the visual content during (or after) acquisition. In this paper we consider the properties of a scene with respect to its naturalness. The proposed method exploits a holistic representation of the scene obtained directly in the DCT domain and fully compatible with the JPEG format. Experimental results confirm the effectiveness of the proposed method.
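A minimal sketch of a block-DCT holistic descriptor compatible with the 8x8 JPEG grid is shown below (Python with SciPy); it averages the absolute DCT coefficients over all blocks into a 64-dimensional spectral signature, while the naturalness classifier itself is not reproduced.

```python
# Sketch of a block-DCT scene signature on the 8x8 JPEG grid: the absolute
# DCT coefficients are averaged over all blocks into a 64-dimensional vector.
import numpy as np
from scipy.fft import dctn

def block_dct_signature(gray, block=8):
    h, w = gray.shape
    h, w = h - h % block, w - w % block                  # crop to a multiple of the block size
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block).swapaxes(1, 2)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
    return np.abs(coeffs).mean(axis=(0, 1)).ravel()      # 64-dimensional signature
```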
We propose novel techniques for microarray image analysis. In particular, we describe an overall pipeline able to solve the most common problems of microarray image analysis. We propose the microarray image rotation algorithm (MIRA) and the statistical gridding pipeline (SGRIP) as two advanced modules devoted, respectively, to restoring the original microarray grid orientation and to detecting the correct geometrical information about each spot of the input microarray. Both solutions work by making use of statistical observations, obtaining adaptive and reliable information about each spot property. They improve the performance of the microarray image segmentation pipeline (MISP) we recently developed. The MIRA, MISP, and SGRIP modules have been developed as plug-ins for an advanced framework for microarray image analysis. A new quality measure able to effectively evaluate the adaptive segmentation with respect to the fixed (i.e., non-adaptive) circle segmentation of each spot is proposed. Experiments confirm the effectiveness of the proposed techniques in terms of visual and numerical data.
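The snippet below is an illustrative sketch only, not the MIRA algorithm itself: it estimates a small grid rotation by testing candidate angles and keeping the one that maximizes the variance of the row and column intensity projections, which peak when the spot grid is axis-aligned.

```python
# Illustrative rotation estimate (not the MIRA algorithm): the variance of the
# row/column projections is maximal when the spot grid is axis-aligned.
import numpy as np
from scipy.ndimage import rotate

def estimate_rotation(img, angles=np.arange(-5.0, 5.1, 0.1)):
    def alignment_score(angle):
        r = rotate(img, angle, reshape=False, order=1)
        return r.sum(axis=0).var() + r.sum(axis=1).var()
    return max(angles, key=alignment_score)
```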
Microarrays are a new class of biotechnologies able to help biologists extract new knowledge from biological experiments. Image analysis is devoted to extracting, processing and visualizing image information; for this reason it has also found application in microarrays, where it is a crucial step of the technology (e.g. segmentation). In this paper we describe MISP (Microarray Image Segmentation Pipeline), a new segmentation pipeline for microarray image analysis. The pipeline uses a recent segmentation algorithm based on statistical analysis coupled with the K-means algorithm. The spot masks produced by MISP are used to determine spot information and quality measures. A software prototype system has been developed; it includes visualization, segmentation, and information and quality measure extraction. Experiments show the effectiveness of the proposed pipeline both in terms of visual accuracy and of measured quality values. Comparisons with existing solutions (e.g. Scanalyze) confirm the improvement with respect to previously published works.
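A minimal sketch of a K-means based spot segmentation in the spirit of MISP is given below (Python with scikit-learn); the statistical pre-analysis of the actual pipeline is omitted, and the brighter cluster is simply assumed to be the spot foreground.

```python
# Sketch of a K-means spot segmentation: pixels of a spot window are clustered
# into two intensity groups; the brighter cluster is assumed to be the spot.
import numpy as np
from sklearn.cluster import KMeans

def spot_mask(spot_window):
    pixels = spot_window.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    fg_label = np.argmax([pixels[labels == k].mean() for k in (0, 1)])
    return (labels == fg_label).reshape(spot_window.shape)
```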
The SVG (Scalable Vector Graphics) standard makes it possible to represent complex graphical scenes by a collection of vector-based primitives. In this work we are interested in heuristic techniques that bridge the gap between the vector graphics world and the raster world typical of digital photography. The SVG format could find useful application in mobile imaging devices, where typical camera capabilities must be matched with displays of limited color depth and resolution.
Two different techniques have been applied: Data Dependent Triangulation (DDT) and Wavelet Based Triangulation (WBT). The DDT replaces the input image with a set of triangles according to a specific cost function; the overall perceptual error is then minimized by choosing a suitable triangulation. The WBT uses the multilevel wavelet transform to extract the details from the input image. A triangulation is first built at the lowest level, introducing large triangles; the process is then iteratively refined according to the wavelet transform, increasing the number of small triangles in textured areas while keeping large triangles in smooth areas.
The output of both DDT and WBT is then processed by a polygonalization step, whose purpose is to merge triangles together, reducing the amount of redundancy present in the SVG files.
The proposed technique has been compared with other raster-to-vector methods, showing good performance. Experiments can be found on the SVG UniCT Group page http://svg.dmi.unict.it/.
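The sketch below is a simplified illustration of the wavelet-driven idea behind WBT, under the assumption that a standard Delaunay triangulation may stand in for the cost-driven construction and iterative refinement of the actual DDT/WBT methods: first-level wavelet detail magnitudes form a detail map, vertices are sampled more densely where detail is high, and the samples are triangulated.

```python
# Simplified detail-driven triangulation: a Haar detail map guides vertex
# sampling, and a standard Delaunay triangulation replaces the cost-driven
# construction and iterative refinement of the actual DDT/WBT methods.
import numpy as np
import pywt
from scipy.spatial import Delaunay

def detail_driven_triangulation(gray, n_points=2000, rng=np.random.default_rng(0)):
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)        # half-resolution detail map
    prob = (detail + 1e-6).ravel()
    prob /= prob.sum()
    idx = rng.choice(detail.size, size=n_points, replace=False, p=prob)
    ys, xs = np.unravel_index(idx, detail.shape)
    points = np.column_stack([xs, ys]) * 2               # back to full-resolution coordinates
    return Delaunay(points)
```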
A new image restoration technique to enhance details in enlarged images is presented. The proposed algorithm obtains a new enlarged image from a single traditionally zoomed frame, which is post-processed to enhance details. Enhancement is obtained through a weighted blending, based on a local measure of contrast, between two zoomed versions of the original picture: one optimized for low contrast and one optimized for details.
The weights for blending are chosen according to the local contrast value: the bicubic image is weighted more heavily where contrast is low, whereas the detail-optimized frame becomes more relevant where contrast is high.
Since the blending process is straightforward, one has to avoid any artefact in the source images, as it can irremediably affect the final result. It is hence crucial, especially in the frame optimized for details, to enhance edges and corners with a minimum localization error. To this aim, we propose to use in the blending an image enhanced by adaptive filtering such as anisotropic diffusion. In particular, we have obtained good results, both in terms of efficiency and of quality of the final image, by using in the blending a new offset-smoothing technique based on the USAN principle.
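A minimal sketch of the contrast-driven blending is shown below, assuming two pre-computed enlargements of the same size: a smooth one (e.g. bicubic) and a detail-optimized one; the local contrast measure (a local standard deviation) is an illustrative choice.

```python
# Sketch of contrast-driven blending between a smooth enlargement and a
# detail-optimized enlargement (hypothetical inputs of equal size).
import numpy as np
from scipy.ndimage import uniform_filter

def blend_by_contrast(smooth, detailed, win=7):
    smooth = smooth.astype(float)
    detailed = detailed.astype(float)
    # Local contrast estimated as a local standard deviation of the smooth image
    mean = uniform_filter(smooth, win)
    var = uniform_filter(smooth ** 2, win) - mean ** 2
    contrast = np.sqrt(np.clip(var, 0, None))
    w = contrast / (contrast.max() + 1e-8)   # ~0 in flat areas, ~1 on strong edges
    # Smooth frame dominates where contrast is low, detailed frame where it is high
    return (1 - w) * smooth + w * detailed
```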
This paper presents a novel raster-to-vector technique for digital images based on advanced watershed decomposition coupled with ad-hoc heuristics devoted to obtaining a high-quality rendering of digital photography. The system is composed of two main steps: first, the image is partitioned into homogeneous and contiguous regions using watershed decomposition; then, a Scalable Vector Graphics (SVG) representation of such areas is obtained by ad-hoc chain-code building. The final result is an SVG file of the image that can be used for the transmission of pictures through the Internet to different display systems (PC, PDA, cellular phones). Experimental results and comparisons demonstrate the effectiveness of the proposed method.
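The first step can be sketched as follows (Python with scikit-image): the gradient magnitude is flooded from local-minima markers to obtain the region partition; the SVG chain-code encoding of the region boundaries is not shown, and the parameter choices are illustrative.

```python
# Sketch of the watershed partition step: gradient magnitude is flooded from
# markers placed at its local minima; parameter choices are illustrative.
import numpy as np
from skimage.filters import sobel
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_regions(gray):
    gradient = sobel(gray)
    minima = peak_local_max(-gradient, min_distance=10)    # seed coordinates
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    return watershed(gradient, markers)                    # label image of regions
```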
This paper introduces a method for the automatic discrimination of digital images based on their semantic content. The proposed system detects whether or not a digital image contains text. This is realized by a multi-step procedure based on a properly derived set of low-level features. Our experiments show that the proposed algorithm is competitive in efficiency with classical techniques, while having a lower complexity.
Reconstruction techniques first build a "draft" high-resolution (HR) image from a set of low-resolution (LR) images and then update the HR estimate by back-projection error reduction. This paper presents different HR draft construction techniques and identifies the methods providing the best solution in terms of final perceived/measured quality. The following algorithms have been analysed: a proprietary Resolution Enhancement method (RE-ST); a Locally Adaptive Zooming Algorithm (LAZA); a Smart Interpolation by Anisotropic Diffusion (SIAD); a Directional Adaptive Edge-Interpolation (DAEI); a classical bicubic interpolation; and a nearest-neighbour algorithm. The resulting HR images are obtained by merging the zoomed LR pictures using two different strategies: average or median. To improve the corresponding HR images, two adaptive error reduction techniques are applied in the last step: auto-iterative and uncertainty-reduction.
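A minimal sketch of the back-projection refinement step shared by these reconstruction techniques is given below; it assumes the LR frames are already registered and that the HR size is an integer multiple of the LR size, and the step size and iteration count are illustrative.

```python
# Sketch of iterative back-projection: the HR estimate is corrected by the
# up-sampled error between each observed LR frame and the LR frame simulated
# from the current estimate. Assumes registered LR frames and an integer scale.
import numpy as np
from scipy.ndimage import zoom

def back_project(hr, lr_frames, scale, n_iter=10, step=0.5):
    hr = hr.astype(float)
    for _ in range(n_iter):
        simulated = zoom(hr, 1.0 / scale, order=1)               # simulate LR acquisition
        correction = np.zeros_like(hr)
        for lr in lr_frames:
            correction += zoom(lr - simulated, scale, order=1)   # back-project the error
        hr += step * correction / len(lr_frames)
    return hr
```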
A new technique to detect, localize and classify corners in digital closed curves is proposed. The technique is based on correct estimation of the support region of each point. We compute multiscale curvature to detect and localize corners. As a further step, with the aid of some local features, it is possible to classify corners into seven distinct types. Classification is performed using a set of rules which describe corners according to preset semantic patterns. Compared with existing techniques, the proposed approach belongs to the family of algorithms that try to explain the curve rather than simply labeling it. Moreover, our technique works in a manner similar to what are believed to be typical mechanisms of human perception.
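The multiscale curvature computation can be sketched as follows on a closed digital curve given as an N x 2 array of boundary points; the support-region estimation and the rule-based classification into seven types are not reproduced, and the scales and threshold are illustrative.

```python
# Sketch of multiscale curvature on a closed curve (N x 2 array of points);
# the scales and threshold are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature(curve, sigma):
    x = gaussian_filter1d(curve[:, 0].astype(float), sigma, mode="wrap")
    y = gaussian_filter1d(curve[:, 1].astype(float), sigma, mode="wrap")
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2 + 1e-12) ** 1.5

def corner_candidates(curve, sigmas=(2, 4, 8), thresh=0.1):
    k = np.mean([np.abs(curvature(curve, s)) for s in sigmas], axis=0)
    return np.flatnonzero(k > thresh)        # indices of likely corner points
```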
In this paper we present a novel watermarking scheme that generalizes a previous proposal by the same authors. In that work, watermarking the image amounts to processing the colors in the picture as points in the Color Opponency space and offsetting each of them by a random vector; as an extra constraint, in order to avoid degrading picture quality, the applied offset must not move a color outside of a small sphere. The present watermarking algorithm likewise processes the colors in the picture as points in the Color Opponency space and offsets each of them, inside a suitably imperceptible region, by a random vector. To improve robustness we suggest partitioning the whole pixel population into 'color sets': each set gathers pixels with the same color. Each color set exceeding a given cardinality is in turn randomly partitioned into three subsets, and each of these subsets is manipulated in a different suitable way. The watermark is obtained by maintaining a record of the statistical distribution of the three subsets. Rigorous theoretical statistical analysis shows that this approach is robust with respect to the most common image processing operations.
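The embedding idea can be sketched as follows; the specific opponent-color transform used here is an assumption, not necessarily the one of the paper, and the color-set and subset bookkeeping needed for detection is not reproduced.

```python
# Sketch of the embedding step: colors are mapped to an opponent-color space,
# offset by a random vector of bounded norm, and mapped back. The transform
# matrix below is a common opponent-color choice and is assumed here.
import numpy as np

M = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
              [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)]])

def embed(rgb, radius=2.0, rng=np.random.default_rng(42)):
    opp = rgb.reshape(-1, 3).astype(float) @ M.T
    offsets = rng.normal(size=opp.shape)
    # Scale each offset so the color stays within a small sphere of the given radius
    offsets *= radius / np.linalg.norm(offsets, axis=1, keepdims=True)
    marked = (opp + offsets) @ np.linalg.inv(M).T
    return np.clip(marked, 0, 255).reshape(rgb.shape).astype(np.uint8)
```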
This paper presents a new approach to content-based retrieval in image databases. The basic new idea in the proposed technique is to organize the quantized and truncated wavelet coefficients of an image into a suitable tree structure. The tree structure respects the natural hierarchy imposed on the coefficients by the successive resolution levels. All the trees corresponding to the images in a database are organized into a trie; this structure helps in the error-tolerant retrieval of queries. The results obtained show that this approach is promising, provided that a suitable distance function between trees is adopted.
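The indexing side can be sketched as follows (Python with PyWavelets): a multi-level wavelet decomposition is quantized and truncated to its largest coefficients, whose (level, band, position) keys could then be arranged into the hierarchical tree described above; the trie organization and the tree distance are omitted, and the wavelet, level, quantization step and number of retained coefficients are illustrative choices.

```python
# Sketch of building a quantized, truncated wavelet signature whose entries
# (detail level index, band, position) could be arranged into the tree/trie
# structure described above.
import numpy as np
import pywt

def truncated_signature(gray, levels=3, keep=64, q=16):
    coeffs = pywt.wavedec2(gray.astype(float), "haar", level=levels)
    entries = []
    # coeffs[1:] holds the detail bands, coarsest level first
    for lvl, bands in enumerate(coeffs[1:], start=1):
        for name, band in zip("HVD", bands):
            quant = np.round(band / q).astype(int)         # quantization
            for (i, j), v in np.ndenumerate(quant):
                if v != 0:
                    entries.append((abs(v), (lvl, name, i, j), v))
    entries.sort(key=lambda e: e[0], reverse=True)         # keep the largest coefficients
    return [(key, v) for _, key, v in entries[:keep]]
```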