In this paper we present a technique to classify five common classes of shapes acquired with a capacitive touch display: finger, ear, cheek, hand hold, and half ear-half cheek. The need for algorithms able to discriminate among the aforementioned shapes comes from the growing diffusion of touch-screen-based consumer devices (e.g., smartphones, tablets, etc.). In this context, the detection and recognition of fingers are fundamental tasks in many touch-based user applications (e.g., mobile games). Shape recognition algorithms are also extremely useful for identifying accidental touches in order to avoid involuntary activation of device functionalities (e.g., accidental calls). Our solution makes use of simple descriptors designed to capture discriminative information about the considered classes of shapes. The recognition is performed through a decision-tree-based approach whose parameters are learned on a set of labeled samples. Experimental results demonstrate that the proposed solution achieves good recognition accuracy.
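A minimal sketch of the decision-tree idea described above, assuming a few illustrative shape descriptors (area, bounding-box fill ratio, aspect ratio); these are stand-ins, not the exact features used in the paper.

# Decision-tree classification of touch shapes from simple descriptors.
# The descriptors below are illustrative assumptions, not the paper's features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

CLASSES = ["finger", "ear", "cheek", "hand_hold", "half_ear_half_cheek"]

def describe(touch_map):
    """Simple shape descriptors from a binarized capacitance map."""
    ys, xs = np.nonzero(touch_map)
    if xs.size == 0:
        return np.zeros(3)
    area = float(xs.size)
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    extent = area / (width * height)   # fill ratio of the bounding box
    aspect = width / height            # elongation of the touched blob
    return np.array([area, extent, aspect])

# X_train, y_train: descriptors and class indices of labeled touches (not shown).
# clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
# predicted = CLASSES[clf.predict(describe(new_touch_map)[None, :])[0]]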
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard camera sensors. Nowadays, there is growing interest in Computer Vision algorithms able to run on mobile platforms (e.g., camera phones, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been carried out to compare the performance of the mobile platforms involved: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
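As a rough illustration of the kinds of tasks benchmarked (not the paper's actual test code), the sketch below times ORB keypoint extraction and Haar-cascade face detection with OpenCV; the test image file name is a placeholder.

# Illustrative timing of two classic CV tasks mentioned above with OpenCV.
import time
import cv2

img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical test frame

# Keypoint extraction with ORB (a common mobile-friendly detector).
t0 = time.perf_counter()
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)
t_kp = time.perf_counter() - t0

# Face detection with a Haar cascade classifier.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
t0 = time.perf_counter()
faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
t_face = time.perf_counter() - t0

print(f"{len(keypoints)} keypoints in {t_kp*1e3:.1f} ms, "
      f"{len(faces)} faces in {t_face*1e3:.1f} ms")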
A method for contrast enhancement is proposed. The algorithm is based on a local and image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, the bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (driven by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation-preserving treatments. Comparisons with other contrast enhancement techniques are presented. A Mean Opinion Score (MOS) experiment on grayscale images assigns the highest preference score to our algorithm.
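A minimal sketch of the core idea, assuming a bilateral-filtered luminance as the mask of a pixel-wise exponential (gamma-like) correction; the filter and strength parameters below are illustrative assumptions, not the tuned values of the paper.

# Local exponential correction driven by a bilateral-filtered mask to limit halos.
import cv2
import numpy as np

def local_exponential_correction(gray, alpha=0.75):
    """gray: uint8 image; alpha controls the strength of the correction."""
    y = gray.astype(np.float32) / 255.0
    # Edge-preserving mask: bilateral filter of the luminance.
    mask = cv2.bilateralFilter(y, d=9, sigmaColor=0.1, sigmaSpace=16)
    # Pixel-wise exponent: dark regions (mask < 0.5) get gamma < 1 (brightening),
    # bright regions (mask > 0.5) get gamma > 1 (darkening).
    gamma = np.power(2.0, alpha * (2.0 * mask - 1.0))
    out = np.power(y, gamma)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)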
Despite the great advances that have been made in the field of digital photography and CMOS/CCD sensors, several sources of distortion continue to be responsible for image quality degradation. Among them, a great role is played by sensor noise and motion blur. Of course, longer exposure times usually lead to better image quality, but the change in the photocurrent over time, due to motion, can lead to motion blur effects. The proposed low-cost technique deals with the aforementioned problem using a multi-capture denoising algorithm, obtaining good quality together with a significant reduction of motion blur effects.
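A simple sketch of the multi-capture idea, assuming several short-exposure frames are aligned and averaged so that single-frame noise is reduced without the blur of one long exposure; the ECC-based alignment step is an assumption for illustration, not the paper's method.

# Burst fusion: align short-exposure frames to the first one and average them.
import cv2
import numpy as np

def fuse_burst(frames):
    """frames: list of uint8 grayscale captures of the same scene."""
    ref = frames[0].astype(np.float32)
    acc, count = ref.copy(), 1
    for f in frames[1:]:
        f = f.astype(np.float32)
        # Estimate a translation aligning f to the reference frame.
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(ref, f, warp, cv2.MOTION_TRANSLATION)
        aligned = cv2.warpAffine(f, warp, (ref.shape[1], ref.shape[0]))
        acc += aligned
        count += 1
    return np.clip(acc / count, 0, 255).astype(np.uint8)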
DCT-based compression engines [1, 2] are well known to introduce color artifacts on the processed input frames, in particular at low bit rates. In video standards, like MPEG-2 [3], MPEG-4 [4], and H.263 [5], and in still picture standards, like JPEG [6, 7], blocking and ringing distortions are understood and considered, so different approaches have been developed to reduce these effects [8, 9, 10, 11]. On the other hand, other kinds of phenomena have not been deeply investigated. Among them, the chromatic color bleeding effect has only recently received proper attention [12, 13]. The scope of this paper is to propose and describe an innovative and powerful algorithm to overcome this kind of color artifact.
This paper describes an automatic technique able to fuse different images of the same scene, acquired with different camera settings, in order to obtain an enhanced single representation of the scene of interest. This makes it possible to extend the functionalities (depth of field, dynamic range) of medium- and low-cost digital cameras. When Multi-Scale Decomposition (MSD) is used on differently focused images, magnification and blurring effects of lens focusing systems often compromise the final image with unpleasant artifacts. In our approach, new techniques able to reduce these artifacts are introduced. Although the algorithm has been essentially designed to extend depth of field, it can also be used on multi-exposed input images, thus extending dynamic range. The algorithm can be applied to full-color and Color Filter Array (CFA) images.
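A generic MSD fusion baseline for two differently focused captures, sketched below with a Laplacian pyramid and a max-absolute-coefficient selection rule; this illustrates the multi-scale decomposition setting but not the paper's artifact-reduction scheme.

# Laplacian-pyramid focus fusion: sharper (larger-magnitude) detail coefficients win.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels)]
    lp.append(gp[-1])
    return lp

def fuse_focus(img_a, img_b, levels=4):
    lp_a = laplacian_pyramid(img_a, levels)
    lp_b = laplacian_pyramid(img_b, levels)
    # Detail levels: keep the coefficient with larger magnitude (sharper source);
    # base level: plain average.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(lp_a[:-1], lp_b[:-1])]
    fused.append(0.5 * (lp_a[-1] + lp_b[-1]))
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)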
An automatic natural scene classifier and enhancer is presented. It works mainly by combining chromatic and positional criteria in order to classify and enhance portrait and landscape natural scene images. Various image processing applications can easily take advantage of the proposed solution, e.g., automatically driving camera settings for the optimization of exposure, focus, or shutter speed parameters, or post-processing applications for color rendition optimization. A large database of high-quality images has been used to design and tune the algorithm, according to the widely accepted assumption that a few chromatic classes in natural images have the greatest perceptual impact on the human visual system. These are essentially skin, vegetation, and sky/sea. The adaptive color rendition technique, which has been derived from the results produced by the image classifier, is based on a simple yet effective principle: it shifts the chromaticity of the regions of interest towards the statistically expected ones. The introduction of disturbing color artifacts is avoided by proper modulation and by preservation of the original image luminance values. Quantitative results, obtained over an extended data set not belonging to the training database, show the effectiveness of the proposed solution for both natural image classification and color enhancement.
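A minimal sketch of the rendition principle described above, assuming pixels of a detected class (e.g., sky) have their chromaticity moved toward an expected target while luminance is preserved; the target values and blending weight are illustrative assumptions, not the paper's statistics.

# Shift a/b chromaticity of a region toward an expected value, keeping L untouched.
import cv2
import numpy as np

def shift_chromaticity(bgr, mask, target_ab, strength=0.4):
    """bgr: uint8 image; mask: boolean region of interest (e.g., sky pixels);
    target_ab: expected (a, b) chromaticity in OpenCV's 8-bit CIELab; strength in [0, 1]."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    a, b = lab[..., 1], lab[..., 2]
    # Move a/b toward the expected chromaticity only inside the region;
    # the L channel is left untouched, preserving original luminance.
    a[mask] += strength * (target_ab[0] - a[mask])
    b[mask] += strength * (target_ab[1] - b[mask])
    lab = np.clip(lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)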
A new technique able to improve the performance of the standard DCT compression algorithm in terms of compressed size, while keeping the perceived quality almost constant, is presented. The measured improvement is obtained by profiling the relative DCT error inside a typical image pipeline applied to data acquired by digital sensors. Experimental results show the effectiveness of the proposed methodology, also validated by using two perceptual quality metrics.
Usually, in an image, no real information about the scene's depth (in terms of absolute distance) is available. In this paper, a method that extracts real depth measures is developed. This approach starts by considering a region located in the center of the depth map. This region can be positioned, interactively, in any part of the depth map in order to measure the real distance of every object inside the scene. The histogram local maxima of this region are determined. Among these values, the largest, which represents the gray level of the most significant object, is chosen. This gray level is used in an exponential mapping function that converts, using the input camera settings, the depth map gray levels into real measures. Experiments over a large dataset of images show good performance in terms of accuracy and reliability.
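A minimal sketch of the exponential gray-level-to-distance mapping, assuming two calibration distances d_near/d_far attached to the extreme gray levels; these constants stand in for the camera-setting-dependent parameters used by the paper.

# Convert the dominant gray level of a depth-map region into an absolute distance.
import numpy as np

def gray_to_distance(gray_level, d_near=0.3, d_far=10.0, g_max=255):
    """gray_level: dominant gray level of the region (0 = far, g_max = near)."""
    # Exponential interpolation between d_far (gray 0) and d_near (gray g_max).
    t = gray_level / g_max
    return d_far * (d_near / d_far) ** t

def region_distance(depth_map, cx, cy, half=16):
    """Pick the dominant gray level (histogram mode) of a window centered at
    (cx, cy) and convert it to a distance in meters."""
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    win = depth_map[y0:cy + half, x0:cx + half]
    hist, _ = np.histogram(win, bins=256, range=(0, 256))
    dominant = int(np.argmax(hist))
    return gray_to_distance(dominant)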
In this paper we propose a new algorithm for Compression Factor Control when the JPEG standard is used. It can be used, for example, when the memory size available to store the image is fixed, as in a Digital Still Camera, or when a limited-bandwidth channel is used to transmit the image. The JPEG standard is the image compression algorithm used 'de facto' by all devices, due to its good trade-off between compression ratio and quality, but it does not ensure a fixed stream size because of the run-length/variable-length encoding, so a compression factor control algorithm is required. The proposed algorithm achieves very good rate control faster than known algorithms and with lower power consumption, so it can be used in portable devices.
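For context, a generic compression-factor-control baseline (not the paper's algorithm) is sketched below: the JPEG quality factor is binary-searched until the encoded stream fits a target size. It uses Pillow; the file name and target size are illustrative.

# Re-encode at different quality factors until the stream fits the target size.
from io import BytesIO
from PIL import Image

def encode_to_target(img, target_bytes):
    """Returns the largest-quality JPEG stream not exceeding target_bytes,
    or None if even the lowest quality is too large."""
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= target_bytes:
            best, lo = buf.getvalue(), q + 1   # fits: try higher quality
        else:
            hi = q - 1                         # too large: lower quality
    return best

# stream = encode_to_target(Image.open("photo.jpg"), target_bytes=64_000)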