Analysis of quantum-based methods for improved gravimetric sensing has demonstrated that photon entanglement can provide an additional source of target-state information beyond what is possible using purely classical sensing techniques. In this paper we propose a quantum-based system for large-scale space-based detection of small near-Earth objects (NEOs). The objective of the system is to measure extremely small deviations in the background gravitational field within a defined surveillance region to identify potentially dangerous NEO intrusions as early as possible. The system is composed of a set of widely-separated line-of-sight emitter-receiver pairs that exchange entangled photons so that the signature of a moving object can be discerned from subtle gravitation-induced spin effects. The key advantage of the system is that detection does not require direct illumination of the target. A potentially more important practical advantage is that the system can be implemented using relatively simple interferometric measurements.
Distributed sensor networks are often implemented to overcome some of the challenges presented by a single monostatic sensor system. In this paper we consider the possibility of geographically distributed quantum radar nodes for improved target detection. Our theoretical design assumes N quantum sensor nodes with transmit-detect capabilities. One of these nodes is chosen to be an "entangler node", which generates entangled photon pairs and uses quantum channels to transmit, swap, or teleport the photons to nodes A and B. One of the nodes retains its photon as an idler while the other transmits its photon as the signal. When this process is repeated across several nodes, there is a clear advantage in having different bearing angles to the target among the N nodes. Furthermore, the position and orientation of these sensor nodes could be actively optimized to maximize information about the state of the target. In addition, because the photon frequencies can be chosen to be independent of N, the system generates virtual modes that increase the performance of a single quantum sensor. The proposed design could be generalized to maintain the equivalent of a distributed quantum register among the N nodes so that the sharing of classical information from the detections among the nodes permits state estimation to be performed in a completely uniform and consistent way. More specifically, each node receives a signal photon which is compared to its idler photon to produce a record that relates to the state of the target but is completely non-informative to the individual node. It is only when the quantum information from the nodes is received at a secure central node that a full estimate comprising all information about the target from all nodes can be constructed. As we will discuss, the system is not only more information-efficient but also provides a certain level of security, because any classical information leakage at a node (e.g., one compromised by an adversary) will not actually reveal anything about the state of the target. Therefore, a set of geographically distributed quantum sensors can be treated as a single logical quantum radar device.
In this paper we discuss two potential areas of intersection between Quantum Information Technologies and Information Fusion. The first, which we call Quantum (Data Fusion), refers to the use of quantum computers to perform data fusion algorithms on classical data generated by quantum and classical sensors. As we discuss, we expect these quantum fusion algorithms to have lower computational complexity than traditional fusion algorithms, which means that quantum computers could allow the efficient fusion of large data sets for complex multi-target tracking. The second, (Quantum Data) Fusion, refers to the fusion of quantum data generated by quantum sensors. Here the output of the quantum sensors takes the form of qubits, and a quantum computer performs the data fusion algorithms. Our theoretical models suggest that these algorithms can increase the sensitivity of the quantum sensor network.
In this paper we examine approaches for detecting large targets in low-light maritime environments, with the goal of improving detection within a region of interest. More specifically, we propose a passive ghost imaging system that uses caustic illumination patterns to reconstruct a target image from correlations with the intensities captured by a bucket detector.
Synthetic aperture radar (SAR) uses sensor motion to generate finer spatial resolution of a given target area. In this paper we explore the theoretical potential of quantum synthetic aperture radar (QSAR). We provide theoretical analysis and simulation results which suggest that QSAR can provide improved detection performance over classical SAR in the high-noise, low-brightness regime.
The study of plate tectonic motion is important for generating theoretical models of the structure and dynamics of the Earth. In turn, understanding tectonic motion provides insight for developing sophisticated models that can be used for earthquake early warning systems and for nuclear forensics. Tectonic geodesy uses the positions of a network of points on the surface of the Earth to determine the motion of tectonic plates and the deformation of the Earth's crust. GPS and interferometric synthetic aperture radar are techniques commonly used in tectonic geodesy. In this paper we describe the feasibility of interferometric synthetic aperture quantum radar and its theoretical performance for tectonic geodesy.
In previous research we designed an interferometric quantum seismograph that uses entangled photon states to enhance sensitivity in an optomechanical device. However, a spatially distributed array of such sensors, with each sensor measuring only nm-scale vibrations, may not provide sufficient sensitivity for the prediction of major earthquakes because it fails to exploit potentially critical phase information. We conjecture that relative phase information can explain the anecdotal observations that animals such as lemurs exhibit sensitivity to impending earthquakes earlier than traditional seismic technology can confidently detect them. More specifically, we propose that lemurs use their limbs as ground motion sensors and that relative phase differences are fused in the brain in a manner similar to a phased-array or synthetic aperture radar. In this paper we describe a lemur-inspired quantum sensor network for early warning of earthquakes. The system uses four interferometric quantum seismographs (analogous to a lemur's limbs) and then conducts phase and data fusion of the seismic information. Although we discuss a quantum-based technology, the principles described can also be applied to classical sensor arrays.
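To make the phase-fusion idea concrete, the following is a minimal classical sketch of delay-and-sum fusion across four ground-motion sensors, the analogue of the limb-fusion scheme described above. The array geometry, wave speed, and signal parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal delay-and-sum sketch of the phase-fusion idea: four ground-motion
# sensors receive the same seismic wavefront with different delays, and
# steering the array over candidate bearings aligns the signals coherently
# only at the true direction of arrival. All parameters are assumptions.

fs = 1000.0                                   # sample rate (Hz), assumed
t = np.arange(0.0, 1.0, 1.0 / fs)
c = 3000.0                                    # seismic wave speed (m/s), assumed
pos = np.array([[0.0, 0.0], [50.0, 0.0],      # four sensors on a 50 m square
                [0.0, 50.0], [50.0, 50.0]])

def arrival_delays(theta):
    """Relative arrival delays for a plane wave from bearing theta."""
    k = np.array([np.cos(theta), np.sin(theta)])
    return pos @ k / c

true_theta = np.deg2rad(30.0)
pulse = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 40.0 * t)
rng = np.random.default_rng(0)
signals = [np.interp(t - d, t, pulse) + 0.3 * rng.standard_normal(t.size)
           for d in arrival_delays(true_theta)]

# Delay-and-sum: undo each candidate delay set and measure coherent power.
thetas = np.deg2rad(np.arange(0.0, 180.0, 1.0))
power = [np.sum(sum(np.interp(t + d, t, s)
                    for s, d in zip(signals, arrival_delays(th))) ** 2)
         for th in thetas]
print("estimated bearing (deg):", np.rad2deg(thetas[int(np.argmax(power))]))
```

The coherent sum peaks only where the steering delays match the true arrival delays, which is exactly the relative-phase information a single amplitude-only sensor discards.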
The Radar Cross Section (RCS) is a crucial element for assessing target visibility and target characterization, and it depends not only on the target’s geometry but also on its composition. However, the calculation of the RCS is a challenging task due to the mathematical description of electromagnetic phenomena as well as the computational resources needed. In this paper, we will introduce two ideas for the use of quantum information processing techniques to calculate the RCS of dielectric targets. The first is to use toolboxes of quantum functions to determine the geometric component of the RCS. The second idea is to use quantum walks, expressed in terms of scattering processes, to model radar absorbing materials.
A major scientific thrust in recent years has been to harness quantum phenomena to increase the performance of a wide variety of information processing devices. In particular, quantum radar has emerged as an intriguing theoretical concept that could revolutionize electromagnetic standoff sensing. In this paper we discuss how the techniques developed for quantum radar could also be used in the design of novel seismographs able to detect small ground vibrations. We use a hypothetical earthquake warning system to compare quantum seismography with traditional seismographic techniques.
In the context of traditional radar systems, the Doppler effect is crucial for detecting and tracking moving targets in the presence of clutter. In the quantum radar context, however, most theoretical performance analyses to date have assumed static targets. In this paper we consider the Doppler effect at the single-photon level. In particular, we describe how the Doppler effect produced by clutter and moving targets modifies the quantum distinguishability and the quantum radar detection error probability equations. Furthermore, we show that Doppler-based delay-line cancelers can reduce the effects of clutter in the context of quantum radar, but only in the low-brightness regime. Thus, quantum radar may prove to be an important technology if the electronic battlefield requires stealthy tracking and detection of moving targets in the presence of clutter.
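For reference, here is a minimal sketch of the classical two-pulse delay-line canceler that the abstract carries into the quantum regime: returns from stationary clutter are identical from pulse to pulse and cancel in the pulse-to-pulse difference, while a moving target's Doppler phase rotation survives. The pulse count, PRF, Doppler shift, and amplitudes are assumed for illustration.

```python
import numpy as np

# Two-pulse delay-line canceler sketch: y[m] = x[m] - x[m-1].
# Stationary clutter has constant pulse-to-pulse phase and cancels;
# a moving target's return rotates in phase and passes through.

n_pulses, prf, fd = 64, 1000.0, 120.0   # pulses, PRF (Hz), target Doppler (Hz)
m = np.arange(n_pulses)

clutter = 5.0 * np.ones(n_pulses)                  # static: constant phase
target = 1.0 * np.exp(2j * np.pi * fd * m / prf)   # moving: rotating phase
returns = clutter + target

canceled = np.diff(returns)             # the delay-line cancellation step

print("clutter power in:    ", np.mean(np.abs(clutter) ** 2))
print("clutter residual out:", np.mean(np.abs(np.diff(clutter)) ** 2))
print("target power out:    ", np.mean(np.abs(np.diff(target)) ** 2))
```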
Recent research suggests that quantum radar offers several potential advantages over classical sensing technologies. At present, the primary practical challenge is the fast and efficient generation of entangled microwave photons. To mitigate this limitation we propose and briefly examine a distributed architecture to synthetically increase the number of effectively-distinguishable modes.
Correlations between entangled quantum states can be exploited to dramatically improve detection sensitivity under certain conditions. In this paper we argue that space-based surveillance ideally satisfies these conditions and represents a practical application of quantum sensing for the detection of near-Earth objects which threaten spacecraft or terrestrial life.
In this paper we raise questions about the reality of computational quantum parallelism. Such questions are important because while quantum theory is rigorously established, the hypothesis that it supports a more powerful model of computation remains speculative. More specifically, we suggest the possibility that the seeming computational parallelism offered by quantum superpositions is actually effected by gate-level parallelism in the reversible implementation of the quantum operator. In other words, when the total number of logic operations is analyzed, quantum computing may not be more powerful than classical computing. This possibility has significant public policy implications with regard to the relative levels of effort that are appropriate for the development of quantum-parallel algorithms and associated hardware (i.e., qubit-based) versus quantum-scale classical hardware.
It is often believed that quantum entanglement plays an important role in the speed-up of quantum algorithms. In addition, a few research groups have found that majorization behavior may also play an important role in some quantum algorithms. In some of our previous work we showed that for a simple spin-1/2 system consisting of two or three qubits, the value of the Groverian entanglement (a rather useful measure of entanglement) varies inversely with the temperature. In practical terms this means that more iterations of Grover's algorithm may be needed when a quantum computer is working at finite temperature. That is, the performance of a quantum algorithm suffers due to temperature-dependent changes in the density matrix of the system. Most recently, we have been interested in the behavior of majorization for the same types of quantum systems, and we are trying to determine the relationship between Groverian entanglement and majorization at finite temperature. Because majorization tracks the probability distribution over final outcomes induced by the evolving quantum state, our study will reveal how majorization affects the evolution of Grover's algorithm at finite temperature.
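For reference, the zero-temperature baseline that this finite-temperature analysis perturbs can be simulated directly. The sketch below runs ideal Grover iterations on a small statevector and reports the success probability after the usual ~(pi/4)sqrt(N) iterations; the database size and marked item are assumptions for illustration.

```python
import numpy as np

# Ideal (T = 0) Grover search on a small statevector: the baseline against
# which "more iterations are needed at finite temperature" is measured.

n_qubits = 3                        # the two-to-three qubit regime above
N = 2 ** n_qubits
marked = 5                          # index of the marked item, assumed

state = np.full(N, 1.0 / np.sqrt(N))    # uniform superposition
oracle = np.ones(N)
oracle[marked] = -1.0                   # phase flip on the marked item

k_opt = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) iterations
for _ in range(k_opt):
    state = oracle * state                   # oracle call
    state = 2 * state.mean() - state         # inversion about the mean

print(f"{k_opt} iterations, success probability = {abs(state[marked]) ** 2:.3f}")
```

At T = 0 this reaches a success probability near one in two iterations for N = 8; the abstract's thermal density-matrix effects degrade exactly this figure.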
Our previous research showed interesting results regarding the effect of non-zero temperature on a specified quantum computation. For example, our analysis revealed that more Grover iterations are required to amplify the amplitude of the solution in a quantum search problem when the system is at finite temperature. We want to further study the effects of temperature on quantum entanglement using a finite-temperature field-theoretical description. Such a framework could prove useful for understanding the computational dynamics inside a quantum computer. Other issues we will address include analytical descriptions of the effects of temperature on the von Neumann entropy and other measures of entanglement.
The engineering of practical quantum computers requires dealing with the so-called "temperature mismatch problem". More specifically, analysis of quantum logic using ensembles of quantum systems typically assumes very low temperatures, kT << E, where T is the temperature, k is Boltzmann's constant, and E is the energy separation used to represent the two different states of the qubits. On the other hand, in practice the electronics necessary to control these quantum gates will almost certainly have to operate at much higher temperatures. One solution to this problem is to construct electronic components that are able to work at very low temperatures, but the practical engineering of these devices continues to face many difficult challenges. Another proposed solution is to study the behavior of quantum gates at finite temperature, which may differ substantially from the T=0 case, where collective interactions and stochastic phenomena are not taken into consideration. In this paper we discuss several aspects of quantum logic at finite temperature. In particular, we present analysis of the behavior of quantum systems undergoing a specified computation performed by quantum gates at nonzero temperature. Our main interest is the effect of temperature on the practical implementation of quantum computers to solve potentially large and time-consuming computations.
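The kT << E condition can be made concrete with the Boltzmann factor exp(-E/kT), which gives the thermal occupation probability of a qubit's excited state. The sketch below uses an assumed 5 GHz energy splitting, a typical superconducting-qubit value not taken from the paper, to show how sharply the mismatch grows between cryogenic qubit temperatures and room-temperature control electronics.

```python
import numpy as np

# The kT << E condition in numbers: thermal occupation of a qubit's excited
# state is exp(-E/kT). The 5 GHz splitting is an assumed, typical value.

k_B = 1.380649e-23        # Boltzmann constant (J/K)
h = 6.62607015e-34        # Planck constant (J s)
E = h * 5e9               # energy separation for a 5 GHz qubit transition

for T in (0.02, 0.1, 4.0, 300.0):     # dilution fridge up to room temperature
    print(f"T = {T:7.2f} K  ->  exp(-E/kT) = {np.exp(-E / (k_B * T)):.3e}")
```

At 20 mK the excited-state occupation is negligible (~1e-6), while at room temperature the two levels are almost equally populated, which is precisely the mismatch between the qubits and their control electronics.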
A key enabler for Network Centric Warfare (NCW) is a sensor network that can collect and fuse vast amounts of disparate and complementary information from sensors that are geographically dispersed throughout the battlespace. This information will lead to better situation awareness so that commanders will be able to act faster and more effectively. However, these benefits are possible only if the sensor data can be fused and synthesized for distribution to the right user in the right form at the right time within the constraints of available bandwidth. In this paper we consider the problem of developing Level 1 data fusion algorithms for disparate fusion in NCW. These algorithms must be capable of operating in a fully distributed (or decentralized) manner; must be able to scale to extremely large numbers of entities; and must be able to combine many disparate types of data. To meet these needs we propose a framework that consists of three main components: an attribute-based state representation that treats an entity state as a collection of attributes, new methods or interpretations of uncertainty, and robust algorithms for distributed data fusion. We illustrate the discussion in the context of maritime domain awareness, mobile ad hoc networks, and multispectral image fusion.
Quantum computing (QC) has become an important area of research in computer science because of its potential to provide more efficient algorithmic solutions to certain problems than are possible with classical computing (CC). In particular, QC is able to exploit the special properties of quantum superposition to achieve computational parallelism beyond what can be achieved with parallel CC computers. However, these special properties are not applicable for general computation. Therefore, we propose the use of "hybrid quantum computers" (HQCs) that combine both classical and quantum computing architectures in order to leverage the benefits of both. We demonstrate how an HQC can exploit quantum search to support general database operations more efficiently than is possible with CC. Our solution is based on new quantum results that are of independent significance to the field of quantum computing. More specifically, we demonstrate that the most restrictive implications of the quantum No-Cloning Theorem can be avoided through the use of semiclones.
The prospects for practical quantum computing have improved significantly over the past few years, and there is an increasing motivation for developing quantum algorithms to address problems that are presently impractical to solve using classical computing. In previous work we have identified such problems in the areas of computer graphics applications, and we have derived quantum-based solutions. In this paper we examine quantum-based solutions to problems arising in the area of computational geometry. These types of problems are important in a variety of scientific, industrial and military applications such as large-scale multi-object simulation, virtual reality systems, and multi-target tracking. In particular, we present quantum algorithms for multidimensional searches, convex hull construction, and collision detection.
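To illustrate how one of these problems maps onto quantum search, the sketch below frames collision detection as a Grover oracle over object pairs: the oracle marks pairs whose bounding spheres overlap, so the expected oracle-query count scales as the square root of the number of pairs rather than the number of pairs itself. The scene, sphere radii, and counts are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

# Collision detection framed as quantum search: a Grover oracle marks object
# pairs whose bounding spheres overlap, giving O(sqrt(#pairs)) expected
# oracle queries versus the classical O(#pairs) scan. Scene is assumed.

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 10.0, size=(64, 3))    # 64 unit-diameter spheres
pairs = list(combinations(range(len(centers)), 2))

def oracle(pair):
    """Collision predicate that the Grover oracle would evaluate."""
    i, j = pair
    return np.linalg.norm(centers[i] - centers[j]) < 1.0

hits = [p for p in pairs if oracle(p)]
N = len(pairs)
print(f"{len(hits)} colliding pairs among N = {N} candidate pairs")
print(f"classical oracle queries: O(N) = {N}")
print(f"Grover oracle queries:    O(sqrt(N)) ~ {int(np.sqrt(N))}")
```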
In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics is used extensively for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high-quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum computing's unique algorithmic features offer the possibility of speeding up some of the rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray tracing, radiosity, and scene management techniques. We also compare the theoretical performance of the classical and quantum versions of the algorithms.
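As an indication of how Grover's algorithm applies to Z-buffering specifically: resolving one pixel is a minimum search over the depths of the N fragments covering it, and Durr-Hoyer quantum minimum finding repeats Grover searches below a falling threshold using only O(sqrt(N)) oracle queries. The sketch below emulates that threshold loop classically; the quantum search subroutine is only mimicked, and the depth data are assumed.

```python
import numpy as np

# Z-buffering as minimum finding: Durr-Hoyer repeats a Grover search for any
# fragment closer than the current threshold until none remains. The Grover
# step is emulated here by a random pick from the qualifying indices.

rng = np.random.default_rng(3)
depths = rng.uniform(0.0, 1.0, 1024)      # fragment depths for one pixel

threshold = rng.integers(depths.size)     # random starting index
while True:
    below = np.flatnonzero(depths < depths[threshold])
    if below.size == 0:                   # nothing closer: threshold is the min
        break
    threshold = rng.choice(below)         # a Grover search would return one such
print("nearest fragment:", threshold, "depth:", depths[threshold])
print("classical queries O(N) =", depths.size,
      "| quantum ~O(sqrt(N)) =", int(np.sqrt(depths.size)))
```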
Many future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. A critical capability is that each robot must be able to localize itself. However, GPS cannot be used in many environments (such as within city streets, under water, indoors, beneath foliage, or on extraterrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. In this paper we consider the problem of building and maintaining an extremely large map (of one million beacons). We describe a fully distributed, highly scalable SLAM algorithm which is based on distributed data fusion systems. A central map is maintained in global coordinates using the Split Covariance Intersection (SCI) algorithm. Relative and local maps are run independently of the central map and their estimates are periodically fused with the central map.
KEYWORDS: Robots, Mobile robots, Sensors, Algorithm development, Process modeling, Robotics, Data fusion, Filtering (signal processing), Computing systems, Robotic systems
Many of the future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. One of the most critical capabilities is the ability to localize: a mobile robot must be able to estimate its own position and to consistently transmit this information to other robots and control sites. Although state-of-the-art GPS is capable of yielding unmatched performance over large areas, it is not applicable in many environments (such as within city streets, under water, indoors, beneath foliage, or on extraterrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. However, most approaches are non-scalable (the storage and computational costs vary quadratically and cubically with the number of beacons in the map) and can only be used with multiple robotic vehicles with a great degree of difficulty. In this paper, we describe the development of a scalable, multiple-vehicle SLAM system. This system, based on the Covariance Intersection algorithm, is scalable: its storage and computational costs are linearly proportional to the number of beacons in the map. Furthermore, it is scalable to multiple robots: each has complete freedom to exchange partial or full map information with any other robot at any other time step. We demonstrate the real-time performance of this system in a scenario of 15,000 beacons.
The Naval Research Laboratory (NRL) has spearheaded the development and application of Covariance Intersection (CI) for a variety of decentralized data fusion problems. Such problems include distributed control, onboard sensor fusion, and dynamic map building and localization. In this paper we describe NRL's development of a CI-based navigation system for the NASA Mars rover that stresses almost all aspects of decentralized data fusion. We also describe how this project relates to NRL's augmented reality, advanced visualization, and REBOT projects.
Battlefield situation awareness is the most fundamental prerequisite for effective command and control. Information about the state of the battlefield must be both timely and accurate. Imagery data is of particular importance because it can be directly used to monitor the deployment of enemy forces in a given area of interest, the traversability of the terrain in that area, as well as many other variables that are critical for tactical and force level planning. In this paper we describe prototype REmote Battlefield Observer Technology (REBOT) that can be deployed at specified locations and subsequently tasked to transmit high resolution panoramic imagery of its surrounding area. Although first generation REBOTs will be stationary platforms, the next generation will be autonomous ground vehicles capable of transporting themselves to specified locations. We argue that REBOT fills a critical gap in present situation awareness technologies. We expect to provide results of REBOT tests to be conducted at the 1999 Marines Advanced Warfighting Demonstration.
KEYWORDS: Visualization, Data modeling, 3D modeling, Systems modeling, Computer simulations, Optimization (mathematics), Visual process modeling, Coastal modeling, Computing systems, Analytical research
Computational steering is a newly evolving paradigm for working with simulation models. It entails integration of model execution, observation and input data manipulation carried out concurrently in pursuit of rapid insight and goal achievement. Keys to effective computational steering include advanced visualization, high performance processing and intuitive user control. The Naval Research Laboratory (NRL) has been integrating facilities in its Virtual Reality Lab and High Performance Computing Center for application of computational steering to study effects of electromagnetic wave interactions using the HASP (High Accuracy Scattering and Propagation) modeling technique developed at NRL. We are also investigating automated inverse steering which involves incorporation of global optimization techniques to assist the user with tuning of parameter values to produce desired behaviors in complex models.
KEYWORDS: Electroluminescence, Algorithm development, Analytical research, Robotics, Gold, Information technology, Space robots, Binary data, Detection and tracking algorithms, Chemical elements
In this paper we describe a recently developed algorithm for computing least cost paths under turn angle constraints. If a graph representation of a two or three dimensional routing problem contains |V| vertices and |E| edges, then the new algorithm scales as O(|E| log |V|). This result is substantially better than O(|E||V|) algorithms for the more general problem of routing with turn penalties, which cannot be applied to large scale graphs. We also describe an enhancement to the new algorithm that dramatically improves the performance in practice. We provide empirical results showing that the new algorithm can substantially reduce the computation time required for constrained vehicle routing. This performance is sufficient to allow for the dynamic re-routing of vehicles in uncertain or changing environments. Keywords: Dijkstra's algorithm, least cost paths, range searching, routing, turn constraints.
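The following is a generic sketch of the standard device underlying turn-constrained routing: running Dijkstra's algorithm over directed-edge states rather than vertices, so each expansion knows its incoming heading and can reject disallowed turns. It illustrates the idea only and is not the paper's O(|E| log |V|) algorithm; the toy graph and turn predicate are assumptions.

```python
import heapq

# Dijkstra over directed-edge states: a state (u, v) records the edge just
# traversed, so the turn predicate turn_ok(u, v, w) can veto the move v->w.

def turn_constrained_dijkstra(edges, start, goal, turn_ok):
    """edges: u -> [(v, cost)]; turn_ok(u, v, w): may we go u->v->w?"""
    pq = [(cost, start, v) for v, cost in edges.get(start, [])]
    heapq.heapify(pq)
    settled = {}                             # settled directed-edge states
    while pq:
        cost, u, v = heapq.heappop(pq)
        if (u, v) in settled:
            continue
        settled[(u, v)] = cost
        if v == goal:
            return cost
        for w, c in edges.get(v, []):
            if (v, w) not in settled and turn_ok(u, v, w):
                heapq.heappush(pq, (cost + c, v, w))
    return None                              # goal unreachable under constraints

# Tiny assumed example: forbid immediate U-turns.
edges = {"A": [("B", 1)], "B": [("A", 1), ("C", 1)], "C": [("D", 1)], "D": []}
no_uturn = lambda u, v, w: w != u
print(turn_constrained_dijkstra(edges, "A", "D", no_uturn))   # -> 3
```

Because states are edges rather than vertices, the state space grows from |V| to |E|, which is why naive turn-penalty formulations scale so much worse on large graphs.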
This paper will discuss research conducted at the Naval Research Laboratory in the area of automated routing, advanced 3D displays, and novel interface techniques for interacting with those displays. This research has culminated in the development of the strike optimized mission planning module (STOMPM). The STOMPM testbed incorporates new technologies/results in the aforementioned areas to address the deficiencies in current systems and advance the state of the art in military planning systems.
In this paper we describe the GROTTO visualization projects being carried out at the Naval Research Laboratory. GROTTO is a CAVE-like system, that is, a surround-screen, surround-sound, immersive virtual reality device. We have explored GROTTO visualization in a variety of scientific areas including oceanography, meteorology, chemistry, biochemistry, computational fluid dynamics, and space sciences. Research has emphasized the applications of GROTTO visualization for military, land- and sea-based command and control. Examples include the visualization of ocean current models for the simulation and study of mine drifting and, within our computational steering project, the effects of electromagnetic radiation on missile defense satellites. We discuss plans to apply this technology to decision support applications involving the deployment of autonomous vehicles into contaminated battlefield environments, fire fighter control, and hostage rescue operations.
The Kalman Filter (KF) is one of the most widely used methods for tracking and estimation due to its simplicity, optimality, tractability and robustness. However, the application of the KF to nonlinear systems can be difficult. The most common approach is to use the Extended Kalman Filter (EKF), which simply linearizes all nonlinear models so that the traditional linear Kalman filter can be applied. Although the EKF (in its many forms) is a widely used filtering strategy, over thirty years of experience with it has led to a general consensus within the tracking and control community that it is difficult to implement, difficult to tune, and only reliable for systems which are almost linear on the time scale of the update intervals. In this paper a new linear estimator is developed and demonstrated. Using the principle that a set of discretely sampled points can be used to parameterize the mean and covariance, the estimator yields performance equivalent to the KF for linear systems yet generalizes elegantly to nonlinear systems without the linearization steps required by the EKF. We show analytically that the expected performance of the new approach is superior to that of the EKF and, in fact, is directly comparable to that of the second order Gauss filter. The method is not restricted to assuming that the distributions of noise sources are Gaussian. We argue that the ease of implementation and more accurate estimation features of the new filter recommend its use over the EKF in virtually all applications.
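A minimal sketch of the sampling idea described above, assuming the simplest symmetric sigma-point set with equal weights: the points reproduce the prior mean and covariance exactly, are propagated through the nonlinearity, and the posterior statistics are read back off the transformed points with no Jacobians. This illustrates the principle, not the paper's exact formulation.

```python
import numpy as np

# Symmetric sigma-point propagation: 2n points at mean +/- columns of
# sqrt(n * P) carry the mean and covariance through a nonlinearity f.

def sigma_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(n * cov)             # scaled matrix square root
    pts = np.vstack([mean + S.T, mean - S.T])   # 2n symmetric points
    return pts, np.full(2 * n, 1.0 / (2 * n))   # equal weights

def unscented_transform(f, mean, cov):
    pts, w = sigma_points(mean, cov)
    ys = np.array([f(p) for p in pts])
    y_mean = w @ ys
    y_cov = (w[:, None] * (ys - y_mean)).T @ (ys - y_mean)
    return y_mean, y_cov

# Example: propagate a state through mildly nonlinear dynamics (assumed).
f = lambda x: np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])
m, P = np.array([1.0, 0.5]), np.diag([0.2, 0.3])
y_mean, y_cov = unscented_transform(f, m, P)
print("propagated mean:", y_mean)
print("propagated covariance:\n", y_cov)
```

Replacing f with a linear map makes the result identical to the Kalman prediction, which is the sense in which the estimator subsumes the KF for linear systems.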
The covariance intersection (CI) framework represents a generalization of the Kalman filter that permits filtering and estimation to be performed in the presence of unmodeled correlations. As described in previous papers, unmodeled correlations arise in virtually all real-world problems; but in many applications the correlations are so significant that they cannot be 'swept under the rug' simply by injecting extra stabilizing noise within a traditional Kalman filter. In this paper we briefly describe some of the properties of the CI algorithm and demonstrate their relevance to the notoriously difficult problem of simultaneous map building and localization for autonomous vehicles.
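For concreteness, a standard formulation of the CI update is sketched below: the fused information matrix is the convex combination C^-1 = w A^-1 + (1-w) B^-1, with the weight w chosen here by a simple grid search to minimize the trace of the fused covariance. The example estimates are assumed values for illustration.

```python
import numpy as np

# Covariance intersection: fuse two estimates (a, A) and (b, B) whose
# cross-correlation is unknown, via a convex combination of their
# information matrices. The result is consistent for any true correlation.

def covariance_intersection(a, A, b, B):
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)

    def fuse(w):
        C = np.linalg.inv(w * Ai + (1 - w) * Bi)
        return C @ (w * Ai @ a + (1 - w) * Bi @ b), C

    ws = np.linspace(1e-3, 1 - 1e-3, 999)        # grid search for the weight
    w = min(ws, key=lambda w: np.trace(fuse(w)[1]))
    return fuse(w)

a, A = np.array([1.0, 0.0]), np.array([[2.0, 0.0], [0.0, 1.0]])
b, B = np.array([1.5, 0.2]), np.array([[1.0, 0.5], [0.5, 2.0]])
c, C = covariance_intersection(a, A, b, B)
print("fused mean:", c)
print("fused covariance:\n", C)
```

It is this correlation-agnostic consistency that makes CI usable for map building and localization, where vehicle and beacon estimates become correlated in ways no filter can practically model.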
A significant problem in tracking and estimation is the consistent transformation of uncertain state estimates between Cartesian and spherical coordinate systems. For example, a radar system generates measurements in its own local spherical coordinate system. In order to combine those measurements with those from other radars, however, a tracking system typically transforms all measurements to a common Cartesian coordinate system. The most common approach is to approximate the transformation through linearization. However, this approximation can lead to biases and inconsistencies, especially when the uncertainties on the measurements are large. A number of approaches have been proposed for using higher-order transformation models, but these approaches have found only limited use due to the often enormous implementation burdens incurred by the need to derive Jacobians and Hessians. This paper expands on a method for nonlinear propagation which is described in a companion paper. A discrete set of samples are used to capture the first four moments of the untransformed measurement. The transformation is then applied to each of the samples, and the mean and covariance are calculated from the result. It is shown that the performance of the algorithm is comparable to that of fourth order filters, thus ensuring consistency even when the uncertainty is large. It is not necessary to calculate any derivatives, and the algorithm can be extended to incorporate higher order information. The benefits of this algorithm are illustrated in the contexts of autonomous vehicle navigation and missile tracking.
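The size of the bias is easy to reproduce: for a range-bearing measurement with large bearing uncertainty, the true Cartesian mean is pulled inside the measurement arc, while linearization leaves it on the arc. The Monte Carlo check below (measurement values assumed) exposes the bias that sample-based propagation corrects; Monte Carlo is used here only as ground truth, not as the paper's method.

```python
import numpy as np

# Range-bearing to Cartesian with large bearing error: the transformed
# density is a "banana", so E[x] = E[r] * E[cos(theta)] < r. Linearization
# reports the arc point (r, 0) and misses this inward shift entirely.

r, theta = 1000.0, 0.0                    # range (m) and bearing (rad)
sigma_r, sigma_t = 10.0, np.deg2rad(15)   # bearing error deliberately large

rng = np.random.default_rng(2)
rs = r + sigma_r * rng.standard_normal(100000)
ts = theta + sigma_t * rng.standard_normal(100000)
xy = np.column_stack([rs * np.cos(ts), rs * np.sin(ts)])

print("linearized mean:", np.array([r * np.cos(theta), r * np.sin(theta)]))
print("true (MC) mean: ", xy.mean(axis=0))   # x component pulled inward
```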
In this paper we present a new theoretical framework for combining sensor measurements, state estimates, or any similar type of quantity given only their means and covariances. The key feature of the new framework is that it permits the optimal fusion of estimates that are correlated to an unknown degree. This framework yields a new filtering paradigm that avoids all of the restrictive independence assumptions required by the standard Kalman filter, though at the cost of reduced rates of convergence for cases in which independence can be established.
A new method for simultaneous vehicle localization and dynamic map building is described. The method models the noise sources as bounded distributions and can therefore produce bounded estimates for the vehicle and all the target positions. Correlations that arise between vehicle and target estimates when using other techniques, such as the Kalman filter, do not arise, and hence a significant saving in memory and computation is achieved.