Significance: Hyperspectral imaging sensors have rapidly advanced, aiding in the diagnosis of in vivo brain tumors. Linescan cameras effectively distinguish between pathological and healthy tissue, whereas snapshot cameras offer a potential alternative to reduce acquisition time.
Aim: Our research compares linescan and snapshot hyperspectral cameras for in vivo brain tissue and chromophore identification.
Approach: We compared a linescan pushbroom camera and a snapshot camera using images from 10 patients with various pathologies. Objective comparisons were made using unnormalized and normalized data for healthy and pathological tissues. We utilized the interquartile range (IQR) of the spectral angle mapping (SAM), the goodness-of-fit coefficient (GFC), and the root mean square error (RMSE) within the 659.95 to 951.42 nm range. In addition, we assessed the ability of both cameras to capture tissue chromophores by analyzing absorbance derived from reflectance information.
Results: The SAM metric indicates reduced dispersion and high similarity between cameras for pathological samples, with a 9.68% IQR for normalized data compared with 2.38% for unnormalized data. This pattern is consistent across the GFC and RMSE metrics, regardless of tissue type. Moreover, both cameras could identify absorption peaks of certain chromophores. For instance, using the absorbance measurements of the linescan camera, we obtained SAM values below 0.235 for four peaks, regardless of the tissue and type of data under inspection. These peaks are one for cytochrome b in its oxidized form at λ=422 nm, two for HbO2 at λ=542 nm and λ=576 nm, and one for water at λ=976 nm.
Conclusion: The spectral signatures of the cameras show more similarity with unnormalized data, likely due to snapshot sensor noise, which results in noisier signatures after normalization. The comparisons in this study suggest that snapshot cameras might be viable alternatives to linescan cameras for real-time brain tissue identification.
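For reference, the sketch below computes the three similarity metrics named in the Approach (SAM, GFC, and RMSE) between two reflectance spectra using NumPy; the array names and the example spectra are illustrative placeholders, not data from the study.

    import numpy as np

    def sam(a, b):
        # Spectral Angle Mapper: angle (radians) between the two spectra.
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def gfc(a, b):
        # Goodness-of-Fit Coefficient: 1.0 means identical spectral shape.
        return np.abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

    def rmse(a, b):
        # Root mean square error between the two spectra.
        return np.sqrt(np.mean((a - b) ** 2))

    # Illustrative signatures (e.g., linescan vs. snapshot measurements of the same tissue).
    ref = np.random.rand(100)                    # hypothetical linescan signature
    test = ref + 0.01 * np.random.randn(100)     # hypothetical snapshot signature
    print(sam(ref, test), gfc(ref, test), rmse(ref, test))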
Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers its full reflectance spectrum. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
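A minimal sketch of the three-stage chain described above, using scikit-learn in place of the RVC-CAL actors. The joint feature used for the KNN filtering (pixel coordinates plus the one-band PCA value) and the majority vote are assumptions about how the spatial and spectral information are combined; class labels are assumed to be non-negative integers.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.neighbors import NearestNeighbors

    def spatial_spectral_classify(cube, labels_train, mask_train, k=40):
        # cube: (rows, cols, bands) hyperspectral image.
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands)

        # Stage 1: one-band representation via the first principal component.
        one_band = PCA(n_components=1).fit_transform(pixels).ravel()

        # Stage 2: pixel-wise SVM classification trained on the labeled pixels.
        svm = SVC(kernel='linear').fit(pixels[mask_train], labels_train)
        raw_labels = svm.predict(pixels).astype(int)

        # Stage 3: KNN filtering over spatial coordinates and the one-band value
        # (a scaling factor between the two kinds of features may be needed).
        ys, xs = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
        feats = np.column_stack([ys.ravel(), xs.ravel(), one_band])
        _, idx = NearestNeighbors(n_neighbors=k).fit(feats).kneighbors(feats)
        # Majority vote among each pixel's neighbors refines the raw labels.
        filtered = np.array([np.bincount(raw_labels[nb]).argmax() for nb in idx])
        return filtered.reshape(rows, cols)

    # Hypothetical usage: a small random cube with 200 labeled pixels of two classes.
    cube = np.random.rand(50, 50, 64)
    mask = np.zeros(50 * 50, dtype=bool); mask[:200] = True
    y = np.random.randint(0, 2, size=200)
    result = spatial_spectral_classify(cube, y, mask)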
Hyperspectral Imaging (HI) collects high-resolution spectral information consisting of hundreds of bands across the electromagnetic spectrum (from the ultraviolet to the infrared range). Thanks to this huge amount of information, identification of the different elements that compose a hyperspectral image is feasible. Initially, HI was developed for remote sensing applications and, nowadays, its use has spread to research fields such as security and medicine. In all of them, new applications that demand the specific requirement of real-time processing have appeared. In order to fulfill this requirement, the intrinsic parallelism of the algorithms needs to be explicitly exploited.
In this paper, a Support Vector Machine (SVM) classifier with a linear kernel has been implemented using a dataflow language called RVC-CAL. Specifically, RVC-CAL allows the scheduling of functional actors onto the target platform cores. Once the parallelism of the classifier has been extracted, a comparison of the SVM classifier implementation using LibSVM (a specific library for SVM applications) and RVC-CAL has been performed.
The speedup results obtained for the image classifier depend on the number of blocks into which the image is divided; concretely, when 3 image blocks are processed in parallel, an average speedup above 2.50x, with regard to the RVC-CAL sequential version, is achieved.
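As an illustration of the block-level parallelism described above, the sketch below evaluates a linear SVM decision function over an image split into three blocks with a process pool; the weight vector, bias, and image sizes are placeholders rather than values from the paper, and the fixed seed only keeps the placeholder weights identical across worker processes.

    import numpy as np
    from multiprocessing import Pool

    # Placeholder linear-SVM parameters (w, b) for a binary pixel classifier.
    N_BANDS = 128
    w = np.random.default_rng(0).standard_normal(N_BANDS)
    b = 0.0

    def classify_block(block):
        # block: (pixels, bands); sign of the linear decision function per pixel.
        return np.sign(block @ w + b)

    if __name__ == '__main__':
        image = np.random.rand(90_000, N_BANDS)   # flattened hyperspectral image
        blocks = np.array_split(image, 3)         # three blocks processed in parallel
        with Pool(processes=3) as pool:
            labels = np.concatenate(pool.map(classify_block, blocks))
        print(labels.shape)                       # (90000,)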
KEYWORDS: Principal component analysis, Hyperspectral imaging, Sensors, Data processing, Image processing, Electromagnetism, Remote sensing, Data modeling, Chemical elements, Spatial resolution
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. The tremendous development of this technology within the field of remote sensing has led to new research fields, such as automatic cancer detection or precision agriculture, but has also increased the performance requirements of the applications. For instance, strong time constraints need to be respected, since many applications imply real-time responses. Achieving real-time performance is a challenge, as hyperspectral sensors generate high volumes of data to process. Thus, to achieve this requirement, the initial image data first needs to be reduced by discarding redundancies and keeping only useful information. Then, the intrinsic parallelism in a system specification must be explicitly highlighted.
In this paper, the PCA (Principal Component Analysis) algorithm is implemented using the RVC-CAL dataflow language, which specifies a system as a set of blocks or actors and allows its parallelization by scheduling the blocks over different processing units. Two implementations of PCA for hyperspectral images have been compared for obtaining the first few principal components: first, the algorithm has been implemented using the Jacobi approach for obtaining the eigenvectors; thereafter, the NIPALS-PCA algorithm, which approximates the principal components iteratively, has also been studied. Both implementations have been compared in terms of accuracy and computation time; then, the parallelization of both models has also been analyzed.
These comparisons show promising results in terms of computation time and parallelization: the performance of the NIPALS-PCA algorithm is clearly better when only the first principal component is required, while the partitioning of the algorithm execution over several cores shows an important speedup for the PCA-Jacobi. Thus, experimental results show the potential of RVC-CAL to automatically generate implementations that process the large volumes of information from hyperspectral sensors in real time, as it provides advanced semantics for exploiting system parallelization.
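For reference, a minimal NumPy sketch of the NIPALS iteration for the first principal component, which is the case where the abstract reports NIPALS-PCA outperforming the Jacobi-based version; the tolerance, iteration cap, and initialization are illustrative choices, not those of the implementation described above.

    import numpy as np

    def nipals_first_pc(X, tol=1e-8, max_iter=500):
        # X: (samples, bands) data matrix; returns scores t and loadings p
        # for the first principal component only.
        Xc = X - X.mean(axis=0)            # mean-center each band
        t = Xc[:, 0].copy()                # initial score vector
        for _ in range(max_iter):
            p = Xc.T @ t / (t @ t)         # project the data onto the scores
            p /= np.linalg.norm(p)         # normalize the loading vector
            t_new = Xc @ p                 # updated scores
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        return t, p

    # Usage on a hypothetical flattened hyperspectral image (1000 pixels, 128 bands).
    scores, loadings = nipals_first_pc(np.random.rand(1000, 128))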
Hyperspectral Imaging (HI) collects high-resolution spectral information consisting of hundreds of bands ranging from the infrared to the ultraviolet wavelengths. In the medical field, specifically in cancer tissue identification in the operating room, the potential of HI is huge. However, given the data volume of HI and the computational complexity and cost of identification algorithms, real-time processing is the key differential feature that brings value to surgeons. In order to achieve real-time implementations, the parallelism available in a specification needs to be explicitly highlighted. Dataflow programming languages, like RVC-CAL, are able to accomplish this goal.
In this paper, an RVC-CAL library to implement dimensionality reduction and endmember extraction is presented. The results obtained show significant improvements with regard to a state-of-the-art analysis tool. A speedup of 30% is achieved for the complete processing chain and, in particular, a speedup of 5% has been achieved in the dimensionality reduction step. This dimensionality reduction takes ten of the thirteen seconds that the whole system needs to analyze one of the images. In addition, the RVC-CAL library is an excellent tool to simplify the implementation process of HI algorithms. Indeed, during the experimental tests, the library has shown its potential to reveal possible bottlenecks in the HI processing chain and, therefore, to improve the system performance towards real-time constraints. Furthermore, the RVC-CAL library provides the possibility of testing system performance.
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages, such as high spectral resolution, led to its application in other fields, such as cancer detection. However, this new field has specific requirements; for instance, it needs to meet strict time specifications, since all the potential applications, like surgical guidance or in vivo tumor detection, imply real-time requisites. Achieving these time requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, some new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization.
Along that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreading compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared with existing hyperspectral image analysis software; concretely, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also show the existence of some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study the system performance.
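The abstract does not name the specific estimators used in the abundance stage, so purely as an illustrative assumption the sketch below computes per-pixel abundances by non-negative least squares against a given endmember matrix with SciPy; a fully constrained (sum-to-one) solver would be a drop-in refinement.

    import numpy as np
    from scipy.optimize import nnls

    def estimate_abundances(pixels, endmembers):
        # pixels: (n_pixels, bands); endmembers: (n_endmembers, bands).
        # Solves min ||E a - x||_2 subject to a >= 0 for every pixel x.
        E = endmembers.T                          # (bands, n_endmembers)
        return np.array([nnls(E, x)[0] for x in pixels])

    # Hypothetical data: 4 endmembers, 128 bands, 500 pixels.
    E = np.random.rand(4, 128)
    X = np.random.rand(500, 128)
    A = estimate_abundances(X, E)                 # (500, 4) abundance matrix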
KEYWORDS: Digital signal processing, Signal processing, Multimedia, Operating systems, Video, Embedded systems, Video processing, Clocks, Process modeling, Manufacturing
System-level energy optimization of battery-powered multimedia embedded systems has recently become a design goal. The poor operational time of multimedia terminals makes computationally demanding applications impractical in real scenarios. For instance, the so-called smartphones are currently unable to remain in operation longer than several hours.
The OMAP3530 processor basically consists of two processing cores, a General Purpose Processor (GPP) and a Digital Signal Processor (DSP). The former, an ARM Cortex-A8 processor, is intended to run a generic Operating System (OS), while the latter, a DSP core based on the C64x+, has an architecture optimized for video processing.
The BeagleBoard, a commercial prototyping board based on the OMAP processor, has been used to test the Android Operating System and measure its performance. The board has 128 MB of SDRAM external memory, 256 MB of Flash external memory and several interfaces. Note that the clock frequencies of the ARM and DSP OMAP cores are 600 MHz and 430 MHz, respectively.
This paper describes the energy consumption estimation of the processes and multimedia applications of an Android v1.6 (Donut) OS on the OMAP3530-based BeagleBoard. In addition, tools to communicate the two processing cores have been employed, and a test-bench to profile the OS resource usage has been developed.
As far as the energy estimates are concerned, the OMAP processor energy consumption model provided by the manufacturer has been used. The model is basically divided into two energy components. The former, the baseline core energy, describes the energy consumption that is independent of any chip activity. The latter, the module active energy, describes the energy consumed by the active modules depending on resource usage.
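A minimal sketch of the two-component energy model described above: a baseline term independent of chip activity plus per-module active energy scaled by resource usage. The module names and power coefficients are placeholders, not the manufacturer's figures.

    # Hypothetical per-module active-power coefficients (watts at full utilization).
    ACTIVE_POWER = {'arm_core': 0.35, 'dsp_core': 0.28, 'sdram': 0.12}
    BASELINE_POWER = 0.10            # watts consumed regardless of activity

    def estimate_energy(duration_s, usage):
        # usage: dict mapping module name to utilization in [0, 1] over the interval.
        baseline = BASELINE_POWER * duration_s
        active = sum(ACTIVE_POWER[m] * u * duration_s for m, u in usage.items())
        return baseline + active     # total energy in joules

    # Example: 10 s of video decoding with the DSP heavily loaded.
    print(estimate_energy(10.0, {'arm_core': 0.30, 'dsp_core': 0.85, 'sdram': 0.50}))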
In this paper, a system that emulates the whole DAB transmission chain, allowing the development and test of external decoders for DAB data services, is described. DAB receivers offer the possibility of connecting an external data decoder that handles additional data services, using a data interface called RDI. The system described in this paper replaces the complete DAB transmission chain from the transmitter to the RDI interface of the receiver. The system generates a DAB ensemble that can carry several data services and transmits the RDI frames corresponding to this ensemble through an RDI output. Any type of data service can be carried by the ensemble. The purpose of the system is to be used as a debugging and verification tool for external decoder equipment that can be connected to a DAB receiver via an RDI interface. The system has been tested with two kinds of data services, data carousels and video streaming, with very satisfactory results in both cases. We are currently working on adding DMB support to our system.
KEYWORDS: Video, Digital signal processing, Clocks, Photoemission spectroscopy, Computer programming, Computer architecture, Receivers, Multimedia, Digital video discs, On-screen displays
Media synchronization in a network context minimizes the effects of network jitter and the skew between the emitter and receiver clocks. Theoretical algorithms cannot always be implemented on real systems because of the architectural differences between a real and a theoretical system. In this paper, an implementation of an intra-medium and an inter-media synchronization algorithm for a real multi-standard IP set-top box is presented. For intra-medium synchronization, the proposed technique is based on controlling the receiver buffer, whereas for inter-media synchronization, the proposed technique is based on controlling the video playback according to the Presentation Time Stamp (PTS) of the media units (audio and video). The proposed synchronization algorithms have been integrated in an IP-STB and tested in a real environment using DVD movies and TV channels with excellent results. These results show that the proposed algorithms can achieve media synchronization and meet the requirements of perceived quality of service (P-QoS).
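As a rough illustration of the PTS-driven inter-media control described above, the sketch below decides whether to hold, present, or drop a video frame based on the difference between its PTS and the audio clock; the threshold value and the three-way policy are illustrative assumptions, not the logic used in the IP-STB.

    # Illustrative lip-sync tolerance (milliseconds).
    SKEW_TOLERANCE_MS = 40

    def video_action(video_pts_ms, audio_clock_ms):
        # Positive skew: video is ahead of audio; negative skew: video lags behind.
        skew = video_pts_ms - audio_clock_ms
        if skew > SKEW_TOLERANCE_MS:
            return 'wait'       # hold (repeat) the frame until the audio catches up
        if skew < -SKEW_TOLERANCE_MS:
            return 'drop'       # discard the late frame to resynchronize
        return 'present'        # within tolerance: play the frame normally

    # Example: the next video frame is 60 ms ahead of the audio clock.
    print(video_action(video_pts_ms=1060, audio_clock_ms=1000))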
KEYWORDS: Digital signal processing, Video, Surface plasmons, Digital video discs, Clocks, Internet, Computer programming, Video coding, Computer architecture, Data storage
Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have recently been introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for the PC platform was fully tested and then ported to the DSP. Starting from this code, an optimization process was carried out, achieving a 90% speedup. This process allows real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated in an IP STB and tested in a real environment using DVD movies and TV channels with excellent results.