KEYWORDS: Signal attenuation, Neural networks, Fourier transforms, Data processing, Data conversion, Cell phones, Neurons, Mobile devices, Received signal strength, Computer engineering
Mobile devices have distinct RF fingerprints, which are reflected by changes in the frequency of transmitted signals. The Short-Time Fourier Transform (STFT) is a suitable technique for evaluating this frequency content and thus identifying them. In this paper, we take advantage of STFT processing to perform room-level location classification. Raw in-phase and quadrature (IQ) signals and channel state information (CSI) frames were collected using seven different cell phones. Data collection was performed at eight different locations on the same floor of our engineering building, which contains indoor hallways and rooms of different sizes. Three software-defined radios (SDRs) placed at three different locations received signals simultaneously but separately. The IQ and CSI frames were concatenated for training a neural network: a Multi-Layer Perceptron (MLP) network was trained with the concatenated signals as input and their corresponding locations as labels. A challenging aspect is that our dataset does not contain the same number of samples per location; moreover, several locations have insufficient training data due to signal attenuation. An imbalanced learning method was applied to this dataset to overcome this limitation and improve the classification accuracy. The classification strategy uses one-vs-rest binary classification, i.e., each individual location vs. all others. Using this approach, we obtain a mean accuracy of around 95%.
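The one-vs-rest strategy and the imbalance handling described above can be sketched as follows. This is a minimal illustration, assuming a simple random-oversampling scheme; the abstract does not specify which imbalanced learning method was used, and the toy labels are invented for demonstration.

```python
# Hedged sketch: one-vs-rest relabeling plus naive random oversampling
# to balance an imbalanced per-location dataset. The location names and
# samples are illustrative, not from the paper's actual data pipeline.
import random

def one_vs_rest_labels(labels, target):
    """Map multi-class location labels to binary: target location vs. other."""
    return [1 if lab == target else 0 for lab in labels]

def oversample(X, y, seed=0):
    """Duplicate minority-class samples until both classes are balanced."""
    rng = random.Random(seed)
    pos = [i for i, v in enumerate(y) if v == 1]
    neg = [i for i, v in enumerate(y) if v == 0]
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(small) for _ in range(len(large) - len(small))]
    idx = pos + neg + extra
    return [X[i] for i in idx], [y[i] for i in idx]

labels = ["room1", "room2", "room1", "hall", "room1"]
y = one_vs_rest_labels(labels, "room1")            # [1, 0, 1, 0, 1]
X, yb = oversample(list(range(len(y))), y)         # balanced binary set
```

One such binary classifier would be trained per location, and the reported mean accuracy averaged across them.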
Many correctional facilities suffer from the smuggling of cell phones and other wireless devices past prison walls. In order to locate these devices for confiscation, we must be able to map intercepted signals to indoor locations to within a few meters. We chose to use cell phones of varying models and multiple low-cost software-defined radios for this task. The different types of cell phones provide us with a more robust dataset for location fingerprinting due to the different transmitter hardware in each. Furthermore, the SDRs allow us to easily receive the raw IQ data from WiFi signals while being more cost-efficient for smaller facilities. These raw data are collected from a harsh prison-like environment in a grid pattern and associated with the locations at which they were captured. An advanced machine learning network uses the raw signals as input and the locations as labels in order to map the signals to their respective locations. The accuracy of our system is then compared and discussed against prior works in this field. These studies often use values other than the raw IQ data, such as channel state information and received signal strength indicator. Therefore, we augment our original input with each of these values and measure their effect on the system’s overall performance. The end result provides prisons with a tool capable of locating devices used in unauthorized zones for confiscation.
Wireless devices identify themselves using media access control (MAC) addresses, which can be easily intercepted and mimicked by an adversary. Mobile devices also have a unique physical fingerprint, represented by perturbations in the frequency of broadcast signals caused by differences in the manufacturing of their hardware components. This unique fingerprint is much more difficult to mimic. The short-time Fourier transform (STFT) is used to analyze how the frequency content of a signal changes over time, and may provide a better representation of mobile signals for detecting their unique fingerprint. In this paper, we collect wireless signals using the 802.11a/g protocol and show the effect on classification performance of applying the STFT while varying the window length, augmenting the data with complex Gaussian noise, and concatenating STFTs of different frequency resolutions, achieving state-of-the-art performance of 99.94% accuracy in the process.
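The window-length trade-off and the concatenation of STFTs at different frequency resolutions can be sketched as follows. This is a minimal illustration, assuming a Hann window and a synthetic single-tone signal; the paper's exact preprocessing, hop sizes, and window choices are not reproduced here.

```python
# Hedged sketch: a minimal magnitude STFT and the effect of window length
# on time/frequency resolution. All parameters are illustrative.
import numpy as np

def stft(x, win_len, hop):
    """Magnitude STFT with a Hann window: rows = time frames, cols = freq bins."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1024
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)            # a 100 Hz tone

short = stft(x, 64, 32)     # better time resolution, coarser frequency bins
long_ = stft(x, 256, 128)   # finer frequency bins, fewer time frames

# Concatenating STFTs of different resolutions could be done by flattening
# and joining the two magnitude arrays per signal.
feat = np.concatenate([short.ravel(), long_.ravel()])
```

With a 256-sample window at 1024 Hz sampling, the bin spacing is 4 Hz, so the 100 Hz tone lands exactly in bin 25.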
We consider the problem of accurately detecting signals from contraband WiFi devices. Source locations may be selected in a worst-case fashion from within an indoor structure, such as a correctional facility. The structure's layout is known but inaccessible prior to deployment, and only a small number of detectors are available for sensing these signals. Our approach treats this setting as a covering problem, where the aim is to achieve a high probability of detection at each of the grid points of the terrain. Unlike prior approaches, we employ (1) a variant of the maximum coverage problem, which allows us to account for aggregate coverage by several detectors, and (2) a state-of-the-art commercial wireless simulator to provide SINR measurements that inform our problem instances. This approach is formulated as a mathematical program to which additional constraints are added to limit the number of detectors. Solving the program produces a placement of detectors whose performance is then evaluated for classifier accuracy. We present preliminary results, combining both simulation data and real-world data to evaluate the performance of our approach against two competitors inspired by the literature.
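The maximum coverage variant described above is often approximated greedily; the following is a minimal sketch of that idea, assuming toy coverage sets (which grid points each candidate detector site covers, e.g. above an SINR threshold). The paper's actual formulation is a mathematical program with additional constraints, which this does not reproduce.

```python
# Hedged sketch: greedy approximation to maximum coverage for detector
# placement. Candidate sites and covered grid points are illustrative.
def greedy_max_coverage(cover, k):
    """Pick up to k detector sites maximizing the number of covered grid points."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(cover, key=lambda s: len(cover[s] - covered), default=None)
        if best is None or not (cover[best] - covered):
            break
        chosen.append(best)
        covered |= cover[best]
        cover = {s: c for s, c in cover.items() if s != best}
    return chosen, covered

# Candidate sites -> grid points they cover above an SINR threshold (toy data)
cover = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}, "D": {1, 7}}
sites, pts = greedy_max_coverage(cover, 2)
```

The greedy heuristic carries a classical (1 - 1/e) approximation guarantee for maximum coverage, which is one reason it is a common baseline against exact mathematical-programming solutions.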
In wireless networks, MAC-address spoofing is a common attack that allows an adversary to gain access to the system. To counter this threat, previous work has focused on classifying wireless signals using a “physical fingerprint”, i.e., changes to the signal caused by physical differences in the individual wireless chips. Instead of relying on MAC addresses for admission control, fingerprinting allows devices to be classified and then granted access. In many network settings, the activity of legitimate devices (those that should be granted access) may be dynamic over time. Consequently, when faced with a device that comes online, a robust fingerprinting scheme must quickly identify the device as legitimate using the pre-existing classification, and meanwhile identify and group unauthorized devices based on their signals. This paper presents a two-stage Zero-Shot Learning (ZSL) approach to classify a received signal as originating from either a legitimate or an unauthorized device. In particular, during the training stage, a classifier is trained to recognize legitimate devices; the classifier learns discriminative features, and an outlier detector uses these features to decide whether a new signature is an outlier. Then, during the testing stage, an online clustering method is applied to group the identified unauthorized devices. Our approach allows 42% of unauthorized devices to be identified as unauthorized and correctly clustered.
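The two-stage flow above (outlier detection against known devices, then online clustering of the outliers) can be sketched in miniature. This is a hedged illustration assuming a simple distance threshold and leader-style clustering; the paper's actual outlier detector and clustering method are not specified here, and the 2-D feature vectors are invented placeholders.

```python
# Hedged sketch: flag feature vectors far from all known-device centroids
# as outliers, then group outliers with a simple online (leader) clustering.
# Thresholds and features are illustrative, not the paper's method.
import math

def is_outlier(feat, centroids, thresh):
    """True if feat is farther than thresh from every known-device centroid."""
    return all(math.dist(feat, c) > thresh for c in centroids)

def online_cluster(feats, thresh):
    """Assign each feature to the first cluster within thresh, else start a new one."""
    clusters = []  # list of (leader, members)
    for f in feats:
        for leader, members in clusters:
            if math.dist(f, leader) <= thresh:
                members.append(f)
                break
        else:
            clusters.append((f, [f]))
    return clusters

centroids = [(0.0, 0.0), (10.0, 10.0)]     # known legitimate devices
flagged = is_outlier((5.0, 5.0), centroids, thresh=3.0)
groups = online_cluster([(5.0, 5.0), (5.2, 5.1), (-4.0, -4.0)], thresh=1.0)
```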
Wireless communication is susceptible to security breaches by adversarial actors mimicking the Media Access Control (MAC) addresses of currently connected devices. Classifying devices by their “physical fingerprint” can help to prevent this problem, since the fingerprint is unique to each device and independent of its MAC address. Previous techniques have mapped the WiFi signal to real values and used classification methods that support only real-valued inputs. In this paper, we put forth four new deep neural networks (NNs) for classifying WiFi physical fingerprints: a real-valued deep NN, a corresponding complex-valued deep NN, a real-valued deep convolutional NN (CNN), and a corresponding complex-valued deep CNN. Results show state-of-the-art performance against a dataset of nine WiFi devices.
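A complex-valued layer is commonly realized with two real weight matrices, applying the rule (a+jb)(c+jd) = (ac-bd) + j(ad+bc). The following is a minimal sketch of one such dense layer, assuming illustrative shapes; it is not the paper's exact architecture.

```python
# Hedged sketch: a complex-valued dense layer built from two real weight
# matrices, one common way to realize complex NNs on real-valued frameworks.
import numpy as np

def complex_dense(x, Wr, Wi):
    """Complex matmul via real parts: (xr + j*xi)(Wr + j*Wi)."""
    xr, xi = x.real, x.imag
    yr = xr @ Wr - xi @ Wi
    yi = xr @ Wi + xi @ Wr
    return yr + 1j * yi

rng = np.random.default_rng(0)
Wr = rng.standard_normal((4, 2))
Wi = rng.standard_normal((4, 2))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

y = complex_dense(x, Wr, Wi)
# Sanity check: agrees with native complex matrix multiplication
assert np.allclose(y, x @ (Wr + 1j * Wi))
```

Structuring the layer this way lets complex weights be trained with standard real-valued autodiff machinery.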
KEYWORDS: Sensors, Ray tracing, Transceivers, Computer simulations, Detection and tracking algorithms, Signal detection, 3D modeling, Signal attenuation, Neural networks, Receivers
Signal attributes such as angle of arrival (AoA), time of arrival (ToA), signal amplitude, and phase can be used by a set of receivers (detectors) to perform location fingerprinting (LF), whereby the location of a wireless source is determined. In validating new approaches for location fingerprinting, it is useful to simulate these attributes for the subset of signals that intersect detectors. However, given indoor settings with a complex architecture, it is computationally expensive to simulate multipath propagation while preserving detailed signal information. Moreover, this cost can be unnecessary since determining whether an LF approach is promising may not require tracing all rays that impact the detector. Here, we report on our preliminary efforts to design and test a MATLAB-based simulation tool for wireless propagation that addresses this issue. Our approach builds upon well-known ray-tracing techniques, but innovates via an algorithm designed to obtain a sizable subset of rays that intersect a detector, along with the AoA, ToA, signal amplitude, and phase for each such ray. Finally, we employ our tool in conjunction with a neural network-based method for location fingerprinting, demonstrating the intended use case for our simulation tool.
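The core geometric test behind finding "rays that intersect a detector" can be sketched as a ray-sphere intersection, assuming a spherical detector volume; the geometry, detector shape, and radius here are illustrative, not the tool's actual implementation.

```python
# Hedged sketch: ray-sphere intersection to decide whether a traced ray
# hits a (spherical) detector volume, returning the hit distance (which,
# divided by propagation speed, would give a ToA contribution).
import math

def ray_hits_detector(origin, direction, center, radius):
    """Return the distance along the ray to the detector surface, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the detector
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None         # ignore hits behind the origin
```

A ray launched from the origin along +x toward a unit-radius detector at (5, 0, 0) hits its near surface at distance 4.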
Semantic segmentation using convolutional neural networks is a trending technique in scene understanding. As these techniques are data-intensive, many devices struggle to store and process even a small batch of images at a time. Moreover, because the volume of training data required by the training algorithms is very high, it can be wise to store these datasets in compressed form; likewise, to accommodate the limited bandwidth of the transmission network, images may be compressed before being sent to their destination. Joint Photographic Experts Group (JPEG) compression is a popular image compression technique; however, JPEG introduces several unwanted artifacts into images after compression. In this paper, we explore the effect of JPEG compression on the performance of several deep-learning-based semantic segmentation techniques, for both synthetic and real-world datasets, at various compression levels. For several established architectures trained on compressed synthetic and real-world datasets, we observed performance equivalent to (and sometimes better than) training on the uncompressed datasets, with a substantial reduction in storage space. We also analyzed the effect of combining the original dataset with compressed datasets at different JPEG quality levels and observed a performance improvement over the baseline. Our evaluation and analysis indicate that a segmentation network trained on compressed data can be a better option in terms of performance. We also illustrate that JPEG compression can act as a data augmentation technique, improving the performance of semantic segmentation algorithms.
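The artifacts JPEG introduces stem from quantizing DCT coefficients: coarser quantization (lower quality) discards more high-frequency detail. The following is a minimal sketch of that core step on a single 8x8 block, assuming a uniform quantization scale rather than JPEG's standard quantization tables.

```python
# Hedged sketch: the DCT-and-quantize step at the heart of JPEG, applied
# to one 8x8 block, showing that coarser quantization (akin to a lower
# JPEG quality level) increases reconstruction error (more artifacts).
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def jpeg_block(block, scale):
    """Forward DCT -> uniform quantization at `scale` -> inverse DCT."""
    D = dct_matrix()
    coeffs = D @ block @ D.T
    quant = np.round(coeffs / scale) * scale
    return D.T @ quant @ D

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
mild = jpeg_block(block, 2.0)       # light compression
harsh = jpeg_block(block, 50.0)     # heavy compression
err_mild = np.abs(block - mild).mean()
err_harsh = np.abs(block - harsh).mean()
```

Training on images reconstructed at varying `scale` values is, in effect, the kind of perturbation that can behave like data augmentation.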
For autonomous vehicles, 3D rotating LiDAR sensors are often critical to the vehicle’s ability to sense its environment. Generally, these sensors scan their environment using multiple laser beams to gather information about the range and the intensity of reflections from objects. LiDAR capabilities have evolved such that some autonomous systems employ multiple rotating LiDARs to gather greater amounts of data about the vehicle’s surroundings. For these multi-LiDAR systems, the placement of the sensors determines the density of the combined point cloud. We perform preliminary research on the optimal LiDAR placement strategy for an off-road autonomous vehicle known as the Halo project. We use the Mississippi State University Autonomous Vehicle Simulator (MAVS) to generate large amounts of labeled LiDAR data that can be used to train and evaluate a neural network for processing LiDAR data on the vehicle. The trained networks are evaluated, and their performance metrics are then used to characterize the performance of each sensor pose. Data generation, training, and evaluation were performed iteratively to carry out a parametric analysis of the effectiveness of various LiDAR poses in the multi-LiDAR system. We also describe and evaluate the intrinsic and extrinsic calibration methods applied in the multi-LiDAR system. In conclusion, we found that our simulations are an effective way to evaluate the efficacy of various LiDAR placements, based on the performance of the neural network used to process the data and the density of the point cloud in areas of interest.
Temperature monitoring and regulation is a critical aspect of data center administration. Currently, conventional discrete transistor-based thermal sensing systems are widely used for this purpose, requiring a separate device for each temperature measurement point in the spatial domain. This leads to an increase in both complexity and cost as the data center grows in scale. This manuscript describes a real-time multiplexed optical fiber thermal sensing system for data center applications that simultaneously measures thousands of discrete points along the length of the fiber under test. This system allows for real-time thermal monitoring of several hundred servers with a spatial resolution of 1 cm, a temperature resolution of <1 °C, and a system update rate of 1 Hz. Temperatures inside individual servers and the ambient room temperature outside the racks can be monitored simultaneously in real time using a single optical fiber probe. To investigate this concept, a pilot experiment is presented in which the dynamic server temperature distribution was monitored using the proposed fiber sensing system. Temperature data recorded by built-in thermal sensors within the CPU of the server under test were simultaneously recorded and compared to the fiber-based measurements. To induce a temperature change within the server, a computationally intensive task was run during temperature testing. Both methods of temperature measurement showed similar trends, indicating that the proposed multiplexed optical fiber-based system has substantial potential as a scalable method of distributed data center temperature monitoring.
In this paper, a modified particle swarm optimization (PSO) approach, particle swarm optimization with ε-greedy exploration (εPSO), is used to tackle the object tracking problem. The modified εPSO algorithm introduces a cooperative learning mechanism among individuals: particles not only adjust their own flying speed according to their own experience and the best individual of the swarm, but also learn from other good individuals with a certain probability. This kind of biologically inspired mutual-learning behavior helps to find the global optimum with better convergence speed and accuracy. The εPSO algorithm has been tested on benchmark functions and demonstrated its effectiveness in high-dimensional multi-modal optimization. In addition to the standard benchmark study, we also combine our new εPSO approach with the traditional particle filter (PF) algorithm on object tracking tasks, such as car tracking in complex environments. Comparative studies between our εPSO-combined PF algorithm and existing techniques, such as the particle filter (PF) and the classic PSO-combined PF, are used to verify and validate the performance of our approach.
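The ε-greedy learning mechanism can be sketched as follows: with probability ε a particle's social attractor is a randomly chosen peer's personal best rather than the global best. This is a hedged illustration on a toy sphere objective, with illustrative hyperparameters; it is not the paper's exact update rule.

```python
# Hedged sketch: PSO where, with probability eps, a particle learns from a
# random peer's personal best instead of the global best. Hyperparameters
# and the sphere benchmark are illustrative choices.
import numpy as np

def eps_pso(f, dim=2, n=20, iters=200, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        for i in range(n):
            # epsilon-greedy choice of social attractor
            guide = pbest[rng.integers(n)] if rng.random() < eps else g
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = 0.7 * v[i] + 1.5 * r1 * (pbest[i] - x[i]) \
                              + 1.5 * r2 * (guide - x[i])
            x[i] += v[i]
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i].copy(), fx
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

sphere = lambda p: float(np.sum(p ** 2))
best, val = eps_pso(sphere)
```

In a PF tracking combination, each particle's position would instead encode a candidate object state, and `f` would score it against the observed frame.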