
Automatic Radar Signal Detection and FFT Estimation using Deep Learning

Electrical Engineering and Systems Science > Signal Processing

Abstract

This paper addresses a critical preliminary step in radar signal processing: detecting the presence of a radar signal and robustly estimating its bandwidth. Existing methods, which are largely statistical feature-based approaches, face challenges in electronic warfare (EW) settings where prior information about signals is lacking.

While alternative deep learning-based methods focus on more challenging environments, they primarily formulate this as a binary classification problem. In this research, we propose a novel methodology that not only detects the presence of a signal, but also localises it in the time domain and estimates its operating frequency band at that point in time.

To achieve robust estimation, we introduce a compound loss function that leverages complementary information from both time-domain and frequency-domain representations. By integrating these approaches, we aim to improve the efficiency and accuracy of radar signal detection and parameter estimation, reducing both unnecessary resource consumption and human effort in downstream tasks.
Subjects: Signal Processing (eess.SP)
Cite as: arXiv:2402.19073 [eess.SP]
  (or arXiv:2402.19073v1 [eess.SP] for this version)
  https://doi.org/10.48550/arXiv.2402.19073

Submission history

From: Akila Sachinthani Pemasiri Hewa Thondilege
[v1] Thu, 29 Feb 2024 11:54:48 UTC (2,885 KB)

 

  • License: CC BY-NC-SA 4.0

    Automatic Radar Signal Detection and FFT Estimation using Deep Learning

    • Akila Pemasiri Queensland University of Technology, Australia a.thondilege@qut.edu.au 
    • Zi Huang Queensland University of Technology, Australia z36.huang@hdr.qut.edu.au
    • Fraser Williams Queensland University of Technology, Australia fraser.williams@hdr.qut.edu.au 
    • Ethan Goan Queensland University of Technology, Australia ej.goan@qut.edu.au 
    • Simon Denman Queensland University of Technology, Australia s.denman@qut.edu.au 
    • Terrence Martin Revolution Aerospace, Australia terry@revn.aero 
    • Clinton Fookes Queensland University of Technology, Australia c.fookes@qut.edu.au

    Index Terms:

    Signal Recognition, Signal Detection, Low Probability of Intercept, FFT, Deep Learning

    I Introduction

    Detecting the presence of a radar signal and robustly estimating its bandwidth is a crucial preliminary step in radar signal processing. This reduces both the consumption of computing resources and human efforts in downstream tasks such as radar parameter estimation and derivation of specific signal characteristics for electronic support [1].

    In low probability of intercept (LPI) signal detection, the primary objective remains the minimization of the detectable signal-to-noise ratio (SNR) [2]. In an electronic warfare (EW) scenario, the receiver does not possess any prior information about the signals to be detected. Initial detection methods such as matched filters are not well-suited to scenarios where neither the received signal nor its modulation type is known [3]. Furthermore, generalized likelihood ratio tests, which typically require prior knowledge of the modulation type of the received signal, are also not applicable in such cases [4].

    Other approaches have primarily relied on statistical feature-based methodologies, including energy detection [5] and time-frequency domain detection [6, 7]. However, these techniques face inherent limitations, with degraded performance in challenging environments [8]. Recently, deep learning (DL)-based methods have received much attention for characterizing and classifying radar signals [9, 10, 11, 12]. DL-based signal detection methods have primarily been explored using two main approaches: time-frequency images (TFI) [13, 14], and IQ sequences [15, 16, 17] as model inputs.

    However, while DL-based methods have demonstrated superior results compared with other methods, most existing DL-based methods are formulated as a binary classification task where the goal is simply detecting the presence of a signal [15, 16, 17]. While some methods have sought to detect the signal and its location in the time domain [8], in this paper we present a novel methodology to detect the signal, localise it in the time domain, and estimate the frequency content occupied by the signal at that point in time. To enable robust estimation of the signal duration and operating frequency band, we utilise a compound loss function which is capable of learning complementary information shared between the time-domain and frequency-domain representations.

    II Proposed Method

    The flow of data through our model during the training phase is depicted in Fig. 1, and the information flow during the inference phase is depicted in Fig. 2. During the training phase, the IQ signal and the corresponding groundtruth segmentation label are used as the input. In the groundtruth segmentation label, 1 denotes that a signal of interest is present at a point in time, and 0 denotes the absence of such a signal. Using the input signal and the groundtruth segmentation label, the groundtruth fast Fourier transform (FFT) representation is obtained [18]. The "Detection and FFT Estimator Model" in Fig. 1 predicts the segmentation mask, which is then used with the input IQ signal to generate the FFT representation for the signal of interest.

    In the inference phase (Fig. 2), the input is the IQ sequence, and the trained model outputs the starting and end points of each pulse together with the FFT representation of the incoming signal.
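As a concrete sketch of the masking step described above, the snippet below zeroes out the IQ samples where the segmentation mask is 0 and then computes the magnitude spectrum of what remains. A naive DFT stands in for an optimised FFT, and the function names are illustrative, not taken from the paper's codebase:

```python
import cmath

def dft(x):
    # Naive O(N^2) DFT for illustration; a radix-2 FFT is used in practice.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def masked_spectrum(iq, mask):
    # Zero out time samples where the segmentation mask is 0,
    # then take the magnitude spectrum of the remaining signal.
    masked = [s if m else 0.0 for s, m in zip(iq, mask)]
    return [abs(v) for v in dft(masked)]
```

For example, a complex tone at DFT bin 2 over 8 samples with an all-ones mask yields a spectrum peaked at bin 2, while an all-zeros mask suppresses the signal entirely.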


    Figure 1: The training phase of the proposed method. The network receives as input the IQ signal and the groundtruth segmentation labels. Segmentation loss (ℒ_seg) and FFT loss (ℒ_fft) are calculated and used for the training of the model.


    Figure 2: The inference phase of the proposed method. The network receives as input the IQ signal and estimates the pulse locations and the FFT representation. For the pulse locations, ts_n denotes the starting time stamp of the n-th pulse and te_n denotes the ending time stamp of the same pulse.


    Figure 3: The UNet architecture we use in this work. N is the sequence length under consideration. The details of the layers are discussed in Section II-A.

    II-A Network Architecture

    We use a UNet [19] architecture for predicting the segmentation labels. This is because the standard UNet architecture, modified to use 1D convolution operations, has demonstrated superior performance compared to other sequence segmentation methodologies on long and interleaved radar signals. We follow a similar architecture to [10]. Fig. 3 depicts the network architecture we use in our work.

    Both the contracting path and the expansive path of our network use Double Convolution Layers consisting of 2 blocks, where each block contains a 1D Convolution layer followed by a 1D Batch Normalization layer and a Rectified Linear Unit (ReLU) layer. To alleviate information loss between the downsampling and upsampling paths, skip connections are used and features are concatenated across them.
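A minimal pure-Python sketch of one such Double Convolution Layer, restricted to a single channel for readability (the actual model uses multi-channel 1D convolutions in a deep learning framework; all names here are ours):

```python
import math

def conv1d_same(x, kernel):
    # 'same'-size 1D convolution for a single channel, with zero padding.
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k))
            for i in range(len(x))]

def batch_norm(x, eps=1e-5):
    # Normalize to zero mean and unit variance (no learned scale/shift).
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [(v - mu) / math.sqrt(var + eps) for v in x]

def relu(x):
    return [max(0.0, v) for v in x]

def double_conv(x, kernel1, kernel2):
    # Conv1d -> BatchNorm1d -> ReLU, applied twice, as in each UNet block.
    h = relu(batch_norm(conv1d_same(x, kernel1)))
    return relu(batch_norm(conv1d_same(h, kernel2)))
```

The output has the same length as the input, which is what allows features to be concatenated across the skip connections.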

    We employ a compound loss function (ℒ) which seeks to minimise both the segmentation loss and the FFT estimation loss (1), where w_seg and w_fft are the coefficients associated with the segmentation loss and the FFT loss respectively. We use Binary Cross Entropy (BCE) loss (2) for the segmentation loss, and Mean Squared Error (MSE) loss (3) for the FFT estimation loss.


    $$\mathcal{L} = w_{seg}\,\mathcal{L}_{seg} + w_{fft}\,\mathcal{L}_{fft} \tag{1}$$

    $$\mathcal{L}_{seg} = -\frac{1}{M}\sum_{i=1}^{M}\left[ seg_{gt}^{i}\log(p_i) + \left(1 - seg_{gt}^{i}\right)\log(1 - p_i) \right] \tag{2}$$

    $$\mathcal{L}_{fft} = \frac{1}{M}\sum_{i=1}^{M}\left( gt_{fft}^{i} - pred_{fft}^{i} \right)^{2} \tag{3}$$

    In (2) and (3), M is the number of samples under consideration. In (2), seg_gt^i refers to the i-th groundtruth segmentation label and p_i is the estimated probability of class 1. In (3), gt_fft^i refers to the i-th normalized groundtruth FFT value and pred_fft^i refers to the i-th normalized predicted FFT value.

    We train our models with a constant learning rate of $10^{-4}$ using the Adam optimizer [20]. To reduce the number of experimental permutations, we set w_seg and w_fft in (1) to 1.
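The compound loss of (1)-(3) can be sketched in plain Python as follows (function and parameter names are ours, not the paper's; a small epsilon guards the logarithms):

```python
import math

def bce_loss(seg_gt, p, eps=1e-12):
    # Binary cross-entropy over M samples, as in Eq. (2).
    return -sum(g * math.log(q + eps) + (1 - g) * math.log(1 - q + eps)
                for g, q in zip(seg_gt, p)) / len(p)

def mse_loss(gt_fft, pred_fft):
    # Mean squared error over normalized FFT values, as in Eq. (3).
    return sum((g - q) ** 2 for g, q in zip(gt_fft, pred_fft)) / len(gt_fft)

def compound_loss(seg_gt, p, gt_fft, pred_fft, w_seg=1.0, w_fft=1.0):
    # Weighted sum of the two losses, as in Eq. (1); the paper sets both weights to 1.
    return w_seg * bce_loss(seg_gt, p) + w_fft * mse_loss(gt_fft, pred_fft)
```

A perfect prediction drives both terms, and hence the compound loss, to (numerically) zero.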

    III Experiments

    III-A Dataset

    We use a new radar signal detection dataset which builds upon the datasets proposed in [9, 10]. We consider 5 radar signal classes: 1) coherent unmodulated pulses (CPT), 2) Barker codes, 3) polyphase Barker codes, 4) Frank codes, and 5) linear frequency-modulated (LFM) pulses. We consider the same ranges for the pulse width (PW) and pulse repetition interval (PRI) parameters. In addition to additive white Gaussian noise (AWGN), in this dataset we introduce channel impairments by incorporating frequency offsets and phase offsets. Frequency offsets of 20,000, 40,000, 60,000, 80,000 and 100,000 Hz, and phase offsets from 0 to π/2 radians at π/16 intervals, are used. We allow radar signals to freely alternate, capturing the dynamic nature of an electronic warfare setting [22]. A sample data sequence from our dataset is depicted in Fig. 4.
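A toy generator in the spirit of this dataset, producing a single pulse with a frequency offset, a phase offset and AWGN, together with its segmentation mask. The single-pulse simplification and all parameter names are ours; the real dataset interleaves multiple signal classes and parameter ranges:

```python
import cmath
import math
import random

def pulsed_iq(n_samples, fs, pulse_start, pulse_len,
              f_offset=20_000.0, phase_offset=0.0, snr_db=0.0):
    """Illustrative single-pulse IQ sequence with channel impairments.

    Returns the complex IQ samples and the 0/1 groundtruth segmentation
    mask described in Section II (1 = signal present, 0 = absent).
    """
    noise_power = 1.0 / (10 ** (snr_db / 10))   # unit-power signal assumed
    sigma = math.sqrt(noise_power / 2)          # per-component noise std
    iq, mask = [], []
    for n in range(n_samples):
        active = pulse_start <= n < pulse_start + pulse_len
        # Complex exponential with frequency and phase offsets while active.
        s = cmath.exp(1j * (2 * math.pi * f_offset * n / fs + phase_offset)) \
            if active else 0.0
        noise = complex(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
        iq.append(s + noise)
        mask.append(1 if active else 0)
    return iq, mask
```

The mask marks exactly the pulse interval, matching the annotation scheme in Fig. 4(c).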

    (a) Raw IQ signal.
    (b) Presence of each signal. In the legend, "Barker - 10 - 100" refers to a signal with modulation type "Barker", a PW of 10µs, and a PRI of 100µs.
    (c) The annotation labels used in training the models.
    Figure 4: A sample data sequence, with -17 dB SNR.

    IV Evaluation Protocol

    We use the train split of our dataset to train the models and the independent test split to test them. The training set contains 120,000 signals, while the validation and test sets each contain 20,000 signals.

    We use the average F1 score ((4)-(6)) over all the datapoints to evaluate the accuracy of the segmentation predictions. In (5) and (6), TP, FP and FN refer to True Positives, False Positives and False Negatives respectively.


    $$F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} \tag{4}$$

    $$Precision = \frac{TP}{TP + FP} \tag{5}$$

    $$Recall = \frac{TP}{TP + FN} \tag{6}$$
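The metrics in (4)-(6) follow directly from the confusion counts; a small helper (names of our choosing) with zero-division guards:

```python
def precision_recall_f1(tp, fp, fn):
    # Precision (5), Recall (6) and F1 (4) from True/False Positive/Negative counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, 50 true positives with 50 false positives and 50 false negatives gives precision, recall and F1 all equal to 0.5.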

    To calculate the similarity between the groundtruth FFT representation and the predicted FFT representation, we use cosine similarity between the two representations (7).


    $$similarity = \frac{gt_{fft} \cdot pred_{fft}}{\lVert gt_{fft} \rVert \, \lVert pred_{fft} \rVert} \tag{7}$$
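Cosine similarity as in (7), sketched for real-valued magnitude vectors (a minimal helper of our own, not the paper's implementation):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two FFT representations, normalized by their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical (or proportional) spectra score 1.0; spectra with disjoint support score 0.0.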

    To compare our method with the energy detection approach of [2], we use the probability of correct detection (Pd) and the probability of false alarm (Pfa).

    V Performance Evaluation

    We conduct our evaluations with the aims of: 1) evaluating the effect of sequence length on performance; 2) evaluating the contribution of the components in Equation (1) to the overall performance; and 3) comparing our method with the energy detection method of [2].

    Table I provides a summary of the results of experiments conducted on datasets with different sequence lengths and with different settings of the loss function shown in Equation (1). It can be observed that the proposed approach performs consistently across different sequence lengths. Performance at low SNRs remains below that at high SNRs; this is an expected trend and is consistent with similar radio recognition tasks [9, 10, 21]. From Table I, it can also be observed that a significant performance gain is obtained on the segmentation task by incorporating ℒ_fft within the loss function.

    (a) F1 measure for segmentation prediction across all the considered SNRs.
    (b) Cosine similarity for FFT prediction across all the considered SNRs.
    Figure 5: Performance evaluations for segmentation prediction and FFT prediction.
    TABLE I: Comparison of segmentation models on different sequence lengths and different loss components based on F1 score.
    Loss Components   Sequence Length   -20dB   -10dB   0dB      10dB    20dB
    ℒ_seg             4096              54.63   77.62   96.83    98.35   98.53
    ℒ_seg             8192              55.31   78.05   97.513   98.46   98.54
    ℒ_seg             16384             55.54   78.56   97.61    98.74   98.75
    ℒ_seg + ℒ_fft     4096              57.59   79.70   98.33    99.07   98.87
    ℒ_seg + ℒ_fft     8192              59.13   80.60   98.72    98.88   98.92
    ℒ_seg + ℒ_fft     16384             57.40   79.98   98.60    99.27   99.31

    Similarly, Table II provides a summary of the FFT estimation accuracy results. As with the segmentation task, it can be observed that the models perform consistently across different sequence lengths. When compared with the models that only use ℒ_seg in (1), the models that use the compound loss function obtain significantly higher FFT estimation accuracy, even at the lower SNRs. Fig. 5 depicts the performance on segmentation and FFT estimation, which highlights the impact of the compound loss function; Fig. 5(b) in particular highlights the significance of enabling ℒ_fft in the loss function. From Table I and Table II, it can be seen that employing the compound loss function has a positive effect on both segmentation and FFT estimation, as the model effectively learns the complementary information.

    TABLE II: Comparison of FFT estimation on different sequence lengths and different loss components. This records cosine similarity computed at -20, -10, 0, 10 and 20 dB SNR.
    Loss Components   Sequence Length   -20dB   -10dB   0dB     10dB    20dB
    ℒ_seg             4096              0.299   0.447   0.582   0.740   0.878
    ℒ_seg             8192              0.316   0.482   0.649   0.772   0.866
    ℒ_seg             16384             0.321   0.504   0.657   0.786   0.869
    ℒ_seg + ℒ_fft     4096              0.543   0.766   0.953   0.982   0.999
    ℒ_seg + ℒ_fft     8192              0.551   0.788   0.956   0.984   0.999
    ℒ_seg + ℒ_fft     16384             0.597   0.828   0.947   0.978   0.999

    To calculate the probability of correct detection (Pd) and the probability of false alarm (Pfa) using an energy detector, we use the radar parameters specified in [2], obtaining a Pd of 52.86% and a Pfa of 8.47%. Our model records a Pd of 98.36% and a Pfa of 1.34%.
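For reference, a bare-bones energy detector of the kind used as the baseline can be sketched as below. This fixed-threshold version is a simplification of the detector in [2], whose threshold is derived from the radar and noise parameters; the function name and signature are ours:

```python
def energy_detect(iq, threshold):
    # Declare a detection when the average sample power exceeds the threshold.
    avg_power = sum(abs(s) ** 2 for s in iq) / len(iq)
    return avg_power > threshold
```

With a threshold of 0.5, a unit-power tone is detected while a weak 0.1-amplitude signal is not, illustrating why fixed-threshold energy detection struggles at low SNRs.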

    VI Conclusion and Future Directions

    This paper presents a novel segmentation model for detecting a signal and localising it in the time domain. The model we propose can also be used to estimate the FFT representation of the signal, which helps limit the search scope for downstream signal processing tasks for electronic support. We evaluate our models on a synthetic dataset which represents an EW environment. In future work, the dataset can be extended to incorporate more challenging and realistic channel effects, and other segmentation models, including multi-stage models, can be explored.

    VII Acknowledgement

    The research for this paper received funding support from the Queensland Government through Trusted Autonomous Systems (TAS), a Defence Cooperative Research Centre funded through the Commonwealth Next Generation Technologies Fund and the Queensland Government.

    References

  • [1] Wiley RG. ELINT: The interception and analysis of radar signals. Artech House; 2006.
  • [2] O'Donoughue N. Emitter detection and geolocation for electronic warfare. Artech House; 2019.
  • [3] Roman JR, Rangaswamy M, Davis DW, Zhang Q, Himed B, Michels JH. Parametric adaptive matched filter for airborne radar applications. IEEE Transactions on Aerospace and Electronic Systems. 2000;36(2):677-92.
  • [4] Ly PQ, Sirianunpiboon S, Elton SD. Passive detection of BPSK radar signals with unknown parameters using multi-sensor arrays. In 2017 11th International Conference on Signal Processing and Communication Systems (ICSPCS) 2017 (pp. 1-5). IEEE.
  • [5] Liang YC, Zeng Y, Peh EC, Hoang AT. Sensing-throughput tradeoff for cognitive radio networks. IEEE Transactions on Wireless Communications. 2008;7(4):1326-37.
  • [6] Geroleo FG, Brandt-Pearce M. Detection and estimation of LFMCW radar signals. IEEE Transactions on Aerospace and Electronic Systems. 2012;48(1):405-18.
  • [7] Liu Y, Xiao P, Wu H, Xiao W. LPI radar signal detection based on radial integration of Choi-Williams time-frequency image. Journal of Systems Engineering and Electronics. 2015;26(5):973-81.
  • [8] Zhang Z, Li Y, Zhu M, Wang S. JDMR-Net: Joint detection and modulation recognition networks for LPI radar signals. IEEE Transactions on Aerospace and Electronic Systems. 2023.
  • [9] Huang Z, Pemasiri A, Denman S, Fookes C, Martin T. Multi-task learning for radar signal characterisation. In Proceedings of the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW) 2023 (pp. 1-5). IEEE.
  • [10] Huang Z, Pemasiri A, Denman S, Fookes C, Martin T. Multi-stage learning for radar pulse activity segmentation. arXiv preprint arXiv:2312.09489. 2023.
  • [11] O'Shea TJ, Roy T, Clancy TC. Over-the-air deep learning based radio signal classification. IEEE Journal of Selected Topics in Signal Processing. 2018;12(1):168-79.
  • [12] Huynh-The T, Pham QV, Nguyen TV, Nguyen TT, Ruby R, Zeng M, Kim DS. Automatic modulation classification: A deep architecture survey. IEEE Access. 2021;9:142950-71.
  • [13] Chen X, Jiang Q, Su N, Chen B, Guan J. LFM signal detection and estimation based on deep convolutional neural network. In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) 2019 (pp. 753-758). IEEE.
  • [14] Liu Z, Shi Y, Zeng Y, Gong Y. Radar emitter signal detection with convolutional neural network. In 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT) 2019 (pp. 48-51). IEEE.
  • [15] Nuhoglu MA, Alp YK, Akyon FC. Deep learning for radar signal detection in electronic warfare systems. In 2020 IEEE Radar Conference (RadarConf20) 2020 (pp. 1-6). IEEE.
  • [16] Su Z, Teh KC, Razul SG, Kot AC. Deep non-cooperative spectrum sensing over Rayleigh fading channel. IEEE Transactions on Vehicular Technology. 2021;71(4):4460-4.
  • [17] Gao J, Yi X, Zhong C, Chen X, Zhang Z. Deep learning for spectrum sensing. IEEE Wireless Communications Letters. 2019;8(6):1727-30.
  • [18] Heckbert P. Fourier transforms and the fast Fourier transform (FFT) algorithm. Computer Graphics. 1995;2:15-463.
  • [19] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Part III (pp. 234-241). Springer; 2015.
  • [20] Kingma DP, Ba J. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR); 2015.
  • [21] Jagannath A, Jagannath J. Multi-task learning approach for automatic modulation and wireless signal classification. In ICC 2021 - IEEE International Conference on Communications 2021 (pp. 1-7). IEEE.
  • [22] Ge Z, Sun X, Ren W, Chen W, Xu G. Improved algorithm of radar pulse repetition interval deinterleaving based on pulse correlation. IEEE Access. 2019;7:30126-34.
