In this paper, we present a novel approach for depth estimation and background subtraction in light field images. Our
approach exploits the regularity and internal structure of the light field signal to extract an initial depth
map of the captured scene, and then uses this depth map as the input to a final segmentation algorithm that
finely isolates the background in the image.
Background subtraction is a natural application of light field information, since it is closely tied to depth
and segmentation. However, many of the approaches proposed so far are not optimized specifically
for background subtraction and are computationally expensive. Here we propose an approach based on a
modified version of the well-known Radon transform that avoids massive matrix calculations. It is therefore
computationally efficient and suitable for real-time use.
Our approach exploits the structured nature of the light field signal and the information inherent in the plenoptic
space to extract an initial depth map and background model of the captured scene. We apply a modified
Radon transform and the gradient operator to horizontal slices of the light field signal to infer the initial depth
map. The initial depth estimates are then refined into a precise background model through a series of depth-thresholding
and segmentation steps in ambiguous areas.
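To make the idea concrete, the following sketch illustrates the core principle behind Radon-style depth estimation on a horizontal light field slice (an epipolar plane image, or EPI): a scene point traces a line in the EPI whose slope encodes its depth, so integrating along candidate slopes and scoring their consistency recovers a per-column disparity. This is a minimal illustration of the general technique, not the authors' specific modified transform; the function name, the variance-based consistency score, and the shear-and-score formulation are all assumptions made for the example.

```python
import numpy as np

def epi_depth_slopes(epi, slopes):
    """Estimate a per-column slope (disparity) in an EPI.

    epi    : 2D array of shape (num_views, width) -- one horizontal
             slice of the light field.
    slopes : candidate slopes, in pixels of shift per view step.
    Returns an array of the best-scoring slope for each column.

    Illustrative sketch only: for each candidate slope, the EPI is
    sheared so that lines of that slope become vertical; where the
    views then agree (low variance down a column), the slope -- and
    hence the depth -- is correct.
    """
    n_views, width = epi.shape
    center = n_views // 2
    cols = np.arange(width)
    best = np.zeros(width)
    best_score = np.full(width, np.inf)
    for s in slopes:
        # Shear each view so a line of slope s becomes vertical.
        sheared = np.empty((n_views, width))
        for v in range(n_views):
            xs = np.clip(np.round(cols + (v - center) * s).astype(int),
                         0, width - 1)
            sheared[v] = epi[v, xs]
        # Low variance across views => they see the same point => good slope.
        score = sheared.var(axis=0)
        better = score < best_score
        best[better] = s
        best_score[better] = score[better]
    return best
```

Because each candidate slope needs only an index shift and a variance reduction, the cost stays linear in the number of pixels and candidate slopes, with no large matrix inversions, which is the kind of saving the approach above targets.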
We test our method on various types of real and synthetic light field images. The experiments cover scenes with
different levels of clutter and foreground objects at various depths. The results
show a substantial reduction in computational cost while retaining performance comparable to that of similar, more
complex methods.
Real-time application development for multi-camera systems is a great challenge. Camera synchronization and large
data rates add to the complexity of these systems, and the complexity grows further as the
number of incorporated cameras increases. The customary implementation of such systems is
centralized: all raw camera streams are first stored and then processed for the target application.
An alternative approach is to build these systems from smart cameras instead of ordinary cameras with limited or
no processing capability. Smart cameras with intra- and inter-camera processing capability, programmable
at both the software and hardware level, offer the right platform for distributed and parallel processing in
real-time multi-camera applications. Inter-camera processing requires interconnecting the smart
cameras in a network arrangement. We introduce a novel hardware emulation platform that demonstrates the
concept of an interconnected network of cameras, present a methodology for constructing and analyzing such an
interconnection network, and develop and demonstrate a sample application.