Open Access Paper
25 April 2017

On the application of neural networks to the classification of phase modulated waveforms
Anthony Buchenroth, Joong Gon Yim, Michael Nowak, Vasu Chakravarthy
Abstract
Accurate classification of phase modulated radar waveforms is a well-known problem in spectrum sensing. Identification of such waveforms aids situational awareness, enabling radar and communications spectrum sharing. While various feature extraction and engineering approaches have sought to address this problem, selecting a machine learning algorithm that best utilizes these features becomes foremost. In this effort, a standard shallow learning approach and a deep learning approach are compared. Experiments provide insights into classifier architecture, training procedure, and performance.

1. INTRODUCTION

Dynamic spectrum access (DSA), spectrum sharing, and radio frequency (RF) convergence are concepts that have been receiving ever-increasing attention. As technology has advanced, more and more devices require access to the RF spectrum. The proliferation of spectrum users and their increasing bandwidth requirements have driven many to consider the RF spectrum a commodity. As this commodity grows scarcer in some frequency bands, the need to use the RF spectrum more effectively and efficiently has impacted both policy and technology.

Recent research in waveform design has focused on facilitating methods for radar and communications spectrum sharing and on optimizing performance in multifunction radar. Specifically, radar waveform design has focused on optimizing phase coded waveforms to attain a tactical goal. Furthermore, radar systems are increasingly equipped with waveform management algorithms that select among many dissimilar waveforms for emission.

This broad diversity of waveforms in the RF environment poses a challenge for spectrum sensing devices that provide situational awareness to RF spectrum users. Previous works have focused on feature engineering and machine learning for waveform recognition [1-4]. While some works [1-3] focus mainly on feature engineering, other work [4] places more emphasis on developing a machine learning approach to both classify within-library modulations and detect out-of-library modulations. Similarly, many communities, including speech recognition, image classification, and computer vision, have begun to place more emphasis on the application of more sophisticated machine learning. To that end, many classification applications have focused on applying neural network algorithms [5, 6, 3].

This paper leverages existing feature engineering work [1] to investigate the applicability of sophisticated neural network approaches to the recognition of phase modulated waveforms. We employ features extracted from the ambiguity function (AF) of an intercepted waveform, which have been shown to aid in accurate classification. This effort provides a performance comparison between the classifiers in the literature [1] and a single hidden layer neural network (SHLNN) classifier, identifies fundamental differences in training procedures, and leads to the robust application of a convolutional neural network (CNN) classifier.

The remainder of this paper is organized as follows. Section 2 will briefly discuss the waveform feature extraction process. Section 3 describes the application and results of SHLNN classification methods. Section 4 presents the application and results of CNN classification methods. Finally, Section 5 concludes with a summary of findings and future efforts.

2. FEATURE EXTRACTION

Suppose that a radar emits a continuous-time prototype pulse

$$ s(t) = e^{j\phi_c(t)}, \qquad 0 \le t \le \tau, \tag{1} $$

where ϕ_c(t) is a phase modulation belonging to one of C classes in Table 1. As described in [1], the intercept receiver observes the signal

$$ y_p(t) = A_p\, s\!\left(\frac{t - t_p}{\alpha}\right) e^{j\omega_0 t} + v_p(t), \tag{2} $$

corrupted by white Gaussian noise v_p(t) with variance σ_v^2 and potentially degraded by model mismatch in the receiver. Further, the observed pulses are altered by jitter in complex amplitude A_p, time scaled by α to achieve the designed pulse width, and modulated to carrier frequency ω_0, all of which are unknown to the intercept receiver. Additionally, each pulse from the radar is contained within a window slightly larger than the pulse width to allow for errors in edge detection. The offset within the window, t_p, is assumed to be unknown. Radar pulse observations are then input into the phase modulated waveform classifier in digitized form, y_p[n] = y_p(nT_s), n = 0, ..., N-1, where T_s is the sampling interval, assumed to satisfy the Nyquist requirement.
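As a concrete, purely illustrative rendering of this observation model, the sketch below synthesizes one digitized pulse per Equations (1) and (2). The function name, jitter distributions, residual-carrier range, and window padding are assumptions, not values taken from the paper.

```python
import numpy as np

def intercepted_pulse(phi, pw, fs, snr_db, rng):
    """Illustrative sketch of the observation model in Eq. (2).
    All jitter distributions and the window padding are assumptions."""
    t = np.arange(int(round(pw * fs))) / fs
    s = np.exp(1j * phi(t))                          # prototype pulse, Eq. (1)
    A = (1 + 0.1 * rng.standard_normal()) * np.exp(2j * np.pi * rng.random())
    w0 = 2 * np.pi * rng.uniform(-0.05, 0.05) * fs   # residual carrier (assumed range)
    y = A * s * np.exp(1j * w0 * t)
    # place the pulse at a random offset t_p in a slightly larger window
    win = np.zeros(int(1.2 * len(y)), dtype=complex)
    tp = rng.integers(0, len(win) - len(y) + 1)
    win[tp:tp + len(y)] = y
    # additive white Gaussian noise at the requested SNR
    sigma2 = np.mean(np.abs(y) ** 2) / 10 ** (snr_db / 10)
    noise = rng.standard_normal(len(win)) + 1j * rng.standard_normal(len(win))
    return win + np.sqrt(sigma2 / 2) * noise
```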

Table 1. Phase modulation types [1].

c  | Modulation Type              | Code Length | Training τ (µsec) | Testing τ (µsec)
---|------------------------------|-------------|-------------------|-----------------
1  | Barker                       | 7           | 1.75              | 7.0
2  | Barker                       | 11          | 2.75              | 11.0
3  | Barker                       | 13          | 3.25              | 13.0
4  | Combined Barker              | 16          | 2.0               | 8.0
5  | Combined Barker              | 49          | 6.13              | 2.11
6  | Combined Barker              | 169         | 22.1              | 84.6
7  | Maximum Length Pseudo Random | 16          | 1.5               | 4.5
8  | Maximum Length Pseudo Random | 64          | 3.5               | 10.5
9  | Maximum Length Pseudo Random | 256         | 6.3               | 18.9
10 | Minimum Peak Sidelobe        | 10          | 1.4               | 4.2
11 | Minimum Peak Sidelobe        | 25          | 2.5               | 10.0
12 | Minimum Peak Sidelobe        | 48          | 4.8               | 19.2
13 | T1                           | NA          | 4.0               | 16.0
14 | T2                           | NA          | 3.0               | 12.0
15 | T3                           | NA          | 8.0               | 2.0
16 | Polyphase Barker             | 7           | 1.75              | 7.0
17 | Polyphase Barker             | 20          | 2.0               | 8.0
18 | Polyphase Barker             | 40          | 4.0               | 16.0
19 | P1                           | NA          | 10.0              | 20.0
20 | P2                           | NA          | 6.4               | 25.6
21 | P3                           | NA          | 6.4               | 25.6
22 | P4                           | NA          | 10.0              | 29.0
23 | Minimum Shift Key            | 64          | 18.9              | 8.0

Because the focus of this paper is on the application and analysis of neural network classifiers to the waveform classification problem, for brevity, we will not overly emphasize the feature extraction process. As such, we compute the two-dimensional discrete Fourier transform of the log-scaled ambiguity function given by

$$ X[k, l] = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} \log_{10}\!\big(\left|A[n, m]\right|\big)\, e^{-j 2\pi \left(kn/N + lm/M\right)}, \tag{3} $$

where A[n, m] denotes the discrete ambiguity function of y_p[n].

For a more detailed and complete description of the feature extraction process, refer to the ambiguity-based classification algorithm [1].
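A minimal sketch of this feature computation is given below, assuming an FFT-based evaluation of one half of the discrete AF (non-negative delays); the image dimensions and the small log offset are assumptions, not the paper's parameters.

```python
import numpy as np

def ambiguity_features(y, n_lags=101, n_dopplers=101):
    """Sketch of Eq. (3): log-scale one half of the discrete ambiguity
    function, then take its 2-D DFT. Sizes are assumptions."""
    N = len(y)
    A = np.empty((n_lags, n_dopplers), dtype=complex)
    for lag in range(n_lags):
        # delay-lag product y[n] * conj(y[n + lag]), zero-padded at the edge
        prod = np.zeros(N, dtype=complex)
        prod[:N - lag] = y[:N - lag] * np.conj(y[lag:])
        # DFT over time gives the Doppler axis; keep the central bins
        spec = np.fft.fftshift(np.fft.fft(prod))
        start = (N - n_dopplers) // 2
        A[lag] = spec[start:start + n_dopplers]
    # log scaling followed by the 2-D DFT of Eq. (3)
    return np.fft.fft2(np.log10(np.abs(A) + 1e-12))
```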

3. SINGLE HIDDEN LAYER NEURAL NETWORK CLASSIFICATION

In keeping with the classification strategy of the ambiguity-based classification algorithm [1], we leverage its classifier training methodology and apply a single hidden layer neural network. Specifically, we train our classifier with the 23 phase modulations displayed in Table 1. For each class, we perform a 1000 iteration Monte Carlo simulation at 10 dB SNR with the pulse widths denoted in Table 1. In each iteration, pseudorandom noise and phase realizations are imparted on the waveform. Additionally, the time offset within the pulse capture window and the sampling rate are randomized. This training procedure allows an intact, yet well-generalized waveform data set to be input into our feature extractor, safeguarding against biases and overfitting in our classification system.
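To make this procedure concrete, the loop below draws 1000 randomized realizations of a single class at 10 dB SNR using the hypothetical intercepted_pulse() generator sketched in Section 2; the Barker-7 phase code and the 200 MHz sampling rate stand in for the full Table 1 setup and are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
barker7 = np.array([0, 0, 0, np.pi, np.pi, 0, np.pi])  # Barker-7 chip phases

def phi_barker7(t, pw=1.75e-6):
    # piecewise-constant phase over seven equal-width chips
    chip = np.minimum((t / pw * 7).astype(int), 6)
    return barker7[chip]

# 1000 Monte Carlo realizations at 10 dB SNR with the training-phase
# pulse width from Table 1 (class c = 1 shown for brevity)
train_c1 = [intercepted_pulse(phi_barker7, pw=1.75e-6, fs=2e8,
                              snr_db=10.0, rng=rng)
            for _ in range(1000)]
```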

This training procedure is especially well-suited to training Fisher's linear discriminant (FLD) and support vector machine (SVM) classifiers because of how each forms its separating hyperplane. Because both seek to maximize the separation between two classes, training these classifiers with feature sets that best maintain the structure of the waveform allows for maximum separation between classes in feature space. For this reason, we train at high SNR. Neural networks behave differently, however. At a fundamental level, a neural network is linear, performing a weighted sum of inputs to form its net activation. The network then emits an output that is a nonlinear function of this activation, given by

$$ h_j = \sigma\!\left( \sum_{i} \omega_{ij} x_i + b_j \right), \tag{4} $$

where i and j index the input and hidden layers, respectively, ω and b denote the weights and biases, and σ denotes the nonlinear activation function. Because Equation 4 is a nonlinear function, data input into the SHLNN are mapped nonlinearly in feature space, altering class separability. To properly utilize this nonlinearity, it is best to train using a more diverse data set in which the structure of each class is widely varied. These widely varied classes allow the SHLNN to account for the structure of the features in diverse backgrounds. To create a diverse training set, we generate data at multiple SNRs, as noted in Section 3.2.
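In NumPy terms, Equation (4) is just a matrix-vector product followed by an elementwise nonlinearity; a minimal sketch follows, in which the sigmoid activation, 64-unit hidden layer, and feature dimension are illustrative assumptions.

```python
import numpy as np

def shlnn_hidden(x, W, b):
    """Hidden-layer outputs of Eq. (4): net activation (weighted sum
    of inputs plus bias) passed through a nonlinearity sigma."""
    z = W @ x + b                       # net activation, sum_i w_ij x_i + b_j
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid activation (assumed)

rng = np.random.default_rng(0)
x = rng.standard_normal(101 * 101)           # flattened AF-derived features
W = 0.01 * rng.standard_normal((64, x.size))
b = np.zeros(64)
h = shlnn_hidden(x, W, b)                    # 64 hidden activations
```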

3.1 Single Hidden Layer Neural Network Performance

To test our classification algorithm, we apply a SHLNN implemented in TensorFlow [7], using different pulse widths than in our training phase, as shown in Table 1. We postulate that a SHLNN should trend similarly to the FLD classifier [8] and the SVM classifier [9] given that, at the most fundamental level, a SHLNN behaves linearly.
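A single hidden layer classifier of this kind can be expressed with TensorFlow's modern Keras API; the sketch below is not the authors' original code, and the layer width, activation, optimizer, and feature dimension are assumptions.

```python
import tensorflow as tf

# Single hidden layer network over the AF-derived feature vector;
# 23 output classes per Table 1. Hyperparameters are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(101 * 101,)),
    tf.keras.layers.Dense(64, activation="sigmoid"),
    tf.keras.layers.Dense(23, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_features, train_labels, epochs=20)  # features from Section 2
```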

Figure 1 displays a comparison of the overall accuracy of the FLD, SVM, and SHLNN classifiers over a range of SNRs. Note that the accuracy curves in both plots compare SHLNN performance under two different training methodologies; this is explained in further detail in Section 3.2. The left-hand plot in Figure 1 shows that at high SNR, the SHLNN performs similarly to both the FLD and SVM classifiers. As SNR declines below 2 dB, performance degrades quickly. Defining good performance as accuracy greater than 90%, the SHLNN reaches this threshold at an SNR 2 dB higher than the FLD classifier and 4 dB higher than the SVM classifier.

Figure 1. (left) Overall classifier accuracy using the training method of the ambiguity-based classification algorithm [1]. While performance is comparable at high SNR, a sharp decline occurs at 0 dB. (right) Overall classifier accuracy using the modified training method. Overall performance increases but reaches an asymptotic limit at about 90% accuracy.

As noted in the ambiguity-based classification algorithm [1], accuracy alone does not provide a sufficient indication of classifier performance. Instead, we consider the combination of accuracy and the F1-score, given by

$$ F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{sensitivity}}{\mathrm{precision} + \mathrm{sensitivity}}, \tag{5} $$

where precision is a measure of a classifier’s exactness and sensitivity is a measure of a classifier’s completeness. Figure 2 displays the F1-score of each class for the FLD, SVM, and SHLNN classifiers. Interestingly, while both the FLD and SVM classifiers outperform the SHLNN, the F1-scores of the FLD and SHLNN trend similarly for the majority of classes.
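The per-class score can be computed from one-vs-rest counts; a minimal sketch follows, with illustrative variable names.

```python
import numpy as np

def f1_per_class(y_true, y_pred, n_classes=23):
    """Per-class F1 from precision (exactness) and sensitivity
    (completeness), as in Eq. (5)."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + sensitivity
        scores.append(2 * precision * sensitivity / denom if denom else 0.0)
    return np.array(scores)
```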

Figure 2. (left) Comparison of classifier F1-scores when the classifiers are trained identically. Interestingly, the FLD and SHLNN classifiers show similar trends in their abilities to classify each modulation. (right) Comparison of classifier F1-scores when the SHLNN incorporates the modified training method. Notice that per-class performance is widely varied.

At this point, it is appropriate to address the right-hand plots in Figures 1 and 2. Because neural networks, in general, differ greatly from other linear classifiers in the way features are processed, we institute a modified training methodology, which drastically improves classifier accuracy as shown in the right-hand plots of Figures 1 and 2. Notice, however, that overall accuracy reaches an asymptotic limit at around 90%. While finding the optimal training set is a difficult problem in neural network research, we do not dwell on this for SHLNNs.

3.2 Modified Training Methodology

Based on other works [5, 6, 3], neural networks seem to work well with highly varied training data. As such, we remove the restraint of training our classifiers strictly at 10 dB SNR. Based on the performance displayed in Figures 1 and 2, we see that for all of our classifiers, accuracy declines sharply below 0 dB SNR and asymptotically approaches a minimum accuracy of approximately 5%. Thus, we retrain our SHLNN with 1000 Monte Carlo iterations at each of 0 dB and -10 dB SNR, as sketched below. Note that all other random variables in our original training methodology remain.
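Reusing the hypothetical generator sketched in Section 2, the modified draw simply replaces the single 10 dB set with 1000 realizations at each of the two SNR levels; the class and parameters shown remain assumptions.

```python
# Modified training set: 1000 Monte Carlo realizations per SNR level,
# all other randomizations unchanged (class c = 1 shown for brevity).
train_c1_modified = [
    intercepted_pulse(phi_barker7, pw=1.75e-6, fs=2e8, snr_db=snr, rng=rng)
    for snr in (0.0, -10.0)
    for _ in range(1000)
]
```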

As stated in the previous section, optimizing this methodology for the SHLNN was not the focus of this work. As such, the asymptotic performance limit of approximately 90% could be improved by incorporating additional training data at a higher SNR.

4. CONVOLUTIONAL NEURAL NETWORK CLASSIFICATION

Recently, convolutional neural networks have received increasing attention. In applications spanning facial, handwriting, and image recognition, CNNs have been shown to provide outstanding performance [5, 6]. Much of this performance gain is due to their ability to extract meaningful features from large and complex sets of data. Because CNN classifiers have worked well in classifying images, we alter our feature set to maximally utilize the CNN algorithm.

Specifically, we choose to train and test the CNN classifier with features computed directly from the ambiguity function of an intercepted pulse, as shown in Figure 3. For each waveform class, one half of the discrete AF is computed and the resulting 101-by-101 image is input into the CNN; a sketch of one plausible network follows Figure 3. Note that for training the CNN, we follow both our original [1] and modified training methodologies, as discussed in Section 3.2.

Figure 3. Discrete ambiguity function of a Barker 7 waveform.
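Since the paper does not specify the CNN architecture, the sketch below is one plausible small network over the 101-by-101 AF image; the layer counts, filter sizes, and optimizer are assumptions, not the authors' network.

```python
import tensorflow as tf

# Small CNN over the 101x101 one-half ambiguity-function image;
# architecture details are assumptions, not the authors' network.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(101, 101, 1)),
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(23, activation="softmax"),  # 23 classes, Table 1
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```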

4.1 Convolutional Neural Network Performance

Figure 4 displays the classification accuracy and F1-score of the CNN classifier under both training methodologies. Using the original training methodology [1], the accuracy of the CNN trends similarly to the FLD, SVM, and SHLNN. Interestingly, good performance (≥ 90%) is achieved at 0 dB, but both the FLD and SVM classifiers have higher accuracy. Furthermore, the F1-score of the CNN classifier appears to closely match that of the FLD classifier.

Figure 4. (left) Comparison of overall classifier accuracy. When all classifiers are trained using the methodology of the ambiguity-based classification algorithm [1], accuracy trends are similar; using the modified training approach, performance at low SNR is greatly increased. (right) Comparison of classifier F1-scores. While the overall accuracy of the CNN classifier using the modified training is superior to all others tested, the one-vs-all SVM maintains the highest F1-score.

As shown, performance increases drastically under the modified training methodology. With modified training, the overall accuracy of the CNN far exceeds that of any other classifier. The CNN achieves near-perfect accuracy for all SNRs above 0 dB before gracefully declining at -2 dB. Furthermore, the CNN achieves approximately 60% accuracy at -10 dB, nearly 32 percentage points above the next-best classifier.

However, as previously stated, overall accuracy is not a complete indicator of classifier performance. The right-hand plot of Figure 4 displays the F1-score of all classifiers. As shown, the CNN classifier using the modified training methodology performs well on all waveform classes. Note that while this CNN has superior overall accuracy, the SVM classifier has a superior F1-score on nearly all waveform classes. This is due to the one-vs-all structure of the SVM classifier: because each class is compared against all remaining classes, false positives are minimized by allowing a decision in favor of the "all" (rest) category, giving the SVM classifier superior precision. Conversely, the CNN classifier forces a classification into one of the 23 trained classes, resulting in an increased number of false positives.

Because we seek a classification system that achieves the highest overall performance without respect to any one particular class, assessing our classifiers in terms of an average, or aggregate, F1-score is appropriate. This measure, like our overall accuracy measure, serves as a composite metric to easily compare all classifiers. Table 2 displays the aggregate F1-scores for each classifier; for each, the F1-scores of the waveform classes are averaged, providing a measure of the entire classification system (a short sketch of this computation follows the table). As previously mentioned, the one-vs-all SVM attains the highest aggregate F1-score, followed closely by the CNN using the modified training. The SHLNN using the training methodology of the ambiguity-based classification algorithm [1] yields the worst F1-score. As previously noted, the relatively low performance of the SHLNN is due to the training methodology; with the modified methodology, performance increases, resulting in an aggregate F1-score slightly higher than those of the FLD and CNN classifiers under the original training methodology [1]. The modified training was especially beneficial to the CNN, whose F1-score experienced a 17% increase.

Table 2. Aggregate F1-score for each classifier.

Classifier                 | Aggregate F1-score
---------------------------|-------------------
Sequential FLD             | 0.7597
One-vs-All SVM             | 0.9818
SHLNN                      | 0.6804
SHLNN (modified training)  | 0.7648
CNN                        | 0.7491
CNN (modified training)    | 0.9223
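Given the per-class scores from the f1_per_class() sketch in Section 3.1, the aggregate value is simply their macro average; the labels below are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 23, size=1000)   # placeholder ground-truth labels
y_pred = rng.integers(0, 23, size=1000)   # placeholder classifier decisions
aggregate_f1 = f1_per_class(y_true, y_pred).mean()  # macro-averaged F1
```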

5. CONCLUSION

To enable spectrum sharing, modern RF systems must employ situational awareness algorithms to detect, characterize, and identify other spectrum users. Given the advances in communications and radar waveform design, an algorithm that recognizes a wide range of modulations provides crucial information for spectrum awareness in RF devices. In this paper, we present a performance comparison of a single hidden layer neural network and a convolutional neural network applied to the phase modulated waveform recognition problem. We assess our performance against sequential FLD and one-vs-all SVM classifiers and document key findings. Future work will explore incorporating a broader range of waveforms and resolving waveform ambiguities resulting from complex environments.

REFERENCES

[1] Buchenroth, A., Rigling, B., and Chakravarthy, V., "Ambiguity-based classification of phase modulated radar waveforms," in Radar Conference (RadarConf), 2016 IEEE, 1-6 (2016).

[2] Rigling, B. D. and Roush, C., "ACF-based classification of phase modulated waveforms," in Radar Conference, 2010 IEEE, 287-291 (2010).

[3] Lundén, J. and Koivunen, V., "Automatic radar waveform recognition," IEEE Journal of Selected Topics in Signal Processing, 1(1), 124-136 (2007). https://doi.org/10.1109/JSTSP.2007.897055

[4] Pavy, A. M. and Rigling, B. D., "Phase modulated radar waveform classification using quantile one-class SVMs," in Radar Conference (RadarCon), 2015 IEEE, 0745-0750 (2015).

[5] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P., "Gradient-based learning applied to document recognition," Proceedings of the IEEE, 86(11), 2278-2324 (1998).

[6] Krizhevsky, A., Sutskever, I., and Hinton, G. E., "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, 1097-1105 (2012).

[7] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X., "TensorFlow: Large-scale machine learning on heterogeneous systems," (2015).

[8] Duda, R. O., Hart, P. E., and Stork, D. G., Pattern Classification, John Wiley & Sons (2012).

[9] Chang, C.-C. and Lin, C.-J., "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), 2(3), 27 (2011).
© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
Anthony Buchenroth, Joong Gon Yim, Michael Nowak, and Vasu Chakravarthy "On the application of neural networks to the classification of phase modulated waveforms", Proc. SPIE 10205, Open Architecture/Open Business Model Net-Centric Systems and Defense Transformation 2017, 102050I (25 April 2017); https://doi.org/10.1117/12.2264459