Flying Objects Classification Based on Micro–Doppler Signature Data From UAV Borne Radar | IEEE Journals & Magazine | IEEE Xplore


Publisher: IEEE


Abstract:

Unmanned aerial vehicles (UAVs) have been widely used in many facets of contemporary society over the past ten years due to their accessibility and affordability. The rise in drone usage brings up privacy and security issues, and it is essential to be vigilant for unauthorized UAVs in restricted areas. In this work, a hybrid Convolutional Neural Network-Shuffled Frog Leap (CNN-SFL) algorithm is proposed for classifying various flying objects, such as drones, helicopters, and artificial birds, based on micro-Doppler signatures (MDSs) collected from an HB100 radar mounted on a UAV.

Various array positions and configurations, such as the uniform linear array (ULA), uniform rectangular array (URA), and uniform circular array (UCA), are taken into account when analyzing the accuracy, in order to avoid performance loss due to a significant angle of arrival (AoA) of the received signal. Further, the activities of the drones are also classified, and the accuracy is assessed in comparison to existing algorithms. The results demonstrate that the proposed technique outperforms the existing ones in all cases: in the endfire direction, the URA performs better than the other configurations, while in the other directions, the ULA performs better.

Published in: IEEE Geoscience and Remote Sensing Letters ( Volume: 21)
Article Sequence Number: 3501605
Date of Publication: 16 January 2024

SECTION I. Introduction

Unmanned aerial vehicles (UAVs) have seen fast technological advancements in recent years, enabling extensive usage in various applications [1]. Despite receiving more attention from different public and private sectors, UAVs unquestionably constitute a severe threat to the safety of the airspace, which may jeopardize civilian privacy and security. The use of civilian drones in the national airspace has raised concerns about unlicensed and inexperienced pilots invading restricted areas and interfering with flight systems, and the use of UAVs in illegal surveillance and terrorist strikes is the most problematic situation [2]. Therefore, the deployment of anti-drone equipment is urgently required. Such a drone defense system must find, recognize, and follow a drone’s movements. Various video, audio, radar, and radio frequency (RF) surveillance technologies are used for micro-UAV detection and categorization [3]; radar is preferred as it works in all weather conditions. Deep learning (DL) has seen tremendous growth in recent years. With the ability to select, extract, and analyze features from raw datasets without relying on manual feature selection and extraction, DL approaches are recognized as among the most influential and successful techniques. If the initial hyperparameters are adequately calibrated, DL approaches can adapt to the diversity of the presented dataset without degrading classification performance, making these algorithms efficient and time-effective [4]. DIAT-μSAT is explicitly designed for small UAV (SUAV) target detection and classification using micro-Doppler signatures (MDSs) [5], but different flight-pattern recognition is not taken into consideration during classification. In [6], MDSs extracted from multipolarization radar data are used to categorize drone elevation angles; however, the analysis is done only with a single antenna configuration, and it does not provide a solution in case of poor system performance due to the AoA in the endfire direction. Based on the limitations of these recent works, this letter takes up the task of classifying flying objects from radar data in the form of MDSs and further identifying the operation mode of a drone.

The work in this letter has the following contributions:

  1. Data acquired by an array of HB100 radar mounted on a UAV are used to classify flying objects (drones, helicopters, and artificial birds). The implications of various HB100 array orientations or placements, such as uniform linear array (ULA), uniform rectangular array (URA), and uniform circular array (UCA), are explored. Furthermore, depending on the experimental data, the activities or modes are classified.

  2. A hybrid algorithm is proposed and utilized to categorize the flying objects and identify the activities of drones with various configurations of the HB100 radar array.

  3. Classification accuracy of the proposed algorithm is analyzed by considering different parameters, such as batch size, population size (PS), and array configuration.

The remaining sections of the letter are organized as follows. Section II describes the experiment and dataset, Section III describes the proposed technique for categorizing flying objects along with its mode of operation, and Section IV presents the outcomes of the proposed algorithm. Finally, Section V concludes the work.

Note: Matrices and vectors are represented in boldface letters. In this work, drones and UAVs are used interchangeably.

SECTION II.  Experimental Set-Up and Dataset Description

The experimental setup’s framework for categorizing the flying object and identifying its activities is shown in Fig. 1. The HB100 radar mounted on a UAV captures the micro-Doppler effect caused by the flying objects depicted in Fig. 2. The transmitted signal rebounds to the radar, and the signal carrying the micro-Doppler effect generated by the targeted objects can be obtained at the IF terminal [7]. Because the radar output is in microvolts, the radar is connected to an amplifier circuit (AC) to strengthen the signal [8]. The Zigbee module is linked to the AC’s output to transfer the data to a desktop or laptop for further MATLAB processing, such as data preprocessing, formatting, dataset generation, and subsequent analysis. A more detailed composition of the dataset is given in Table I and Fig. 3(a). In Table I, the term “Ratio” denotes the number of samples of a particular class divided by the total number of samples across all classes. The dataset is split in a 70:20:10 ratio for training, testing, and validation, respectively.
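The 70:20:10 split described above can be sketched as follows. This is a minimal illustration using hypothetical sample IDs in place of the MDS recordings; the letter's actual MATLAB preprocessing is not shown, and the function name is mine:

```python
import random

def split_dataset(samples, train=0.7, test=0.2, val=0.1, seed=0):
    """Shuffle and split samples into train/test/validation subsets."""
    assert abs(train + test + val - 1.0) < 1e-9
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_test = int(n * test)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

# Hypothetical sample IDs standing in for MDS recordings.
train_set, test_set, val_set = split_dataset(list(range(1000)))
print(len(train_set), len(test_set), len(val_set))  # 700 200 100
```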

TABLE I Dataset Composition
Fig. 1. Experimental setup.

Fig. 2. Flying object data sample. (a) ULA, (b) URA, and (c) UCA.
Fig. 3. (a) Flowchart for dataset generation. (b) Algorithm flowchart.

Algorithm 1 Proposed Algorithm

1: Initialize the parameters J and K for the number of iterations and generations, respectively.
2: Randomly initialize the population of weights W.
3: for i = 1 to I
4:   for k = 1 to K
5:     for j = 1 to J
6:       Evaluation:
         1. CNN: for l = 1 to 4
              Convolution: Z = W_j Y + B
              ReLU: Z = {0 if Z ≤ 0; Z if Z > 0}
              Pooling: X = max(Z)
            end for
         2. Output prediction: true positive, true negative, false positive, false negative
         3. FF: calculate the accuracy [5]. Q(k) = [FF_1(k), FF_2(k), ..., FF_j(k)]
7:     end for
8:     Sorting: Q_sort = [FF_best(k), ..., FF_worst(k)]
9:     if FF'_worst(k) > FF_worst(k)
10:    then W_worst = W'_worst
11:    Subgroups: Q_meme1 = [FF_best(j), FF_thirdbest(j), ..., FF_worst(j)]; Q_meme2 = [FF_secondbest(j), FF_fourthbest(j), ...]
12:    D = rand · (W_best − W_worst); W'_worst = W_worst + D, D_min ≤ D ≤ D_max
13:  end for
14: end for
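The fitness function (FF) in step 6 of the pseudocode is the classification accuracy computed from the confusion counts of the output-prediction step. A minimal sketch, assuming the standard definition of accuracy (the function name is mine):

```python
def fitness(tp, tn, fp, fn):
    """Fitness function (FF): classification accuracy from confusion counts."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g., 45 true positives, 48 true negatives, 3 false positives, 4 false negatives
print(fitness(45, 48, 3, 4))  # 0.93
```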

SECTION III. Proposed Algorithm

The proposed algorithm combines a Convolutional Neural Network (CNN) and a Shuffled Frog Leap Algorithm (SFLA).

CNNs are commonly employed in pattern recognition because of their capacity to extract and categorize features effectively [9]. SFLA is a metaheuristic optimization technique inspired by the foraging behavior of frogs [10]. It combines a global search, in which successful solutions are exchanged among different groups of frogs, with a local search, in which solutions are improved by altering their components. Together, CNNs and SFLA are effective methods for feature extraction, classification, and optimization.

To train on the MDS data with the CNN, the population is initialized by randomly assigning weights to a known number of filters. The evaluation entails feeding the experimental MDS Y from the moving HB100 radar into a convolutional layer. The algorithm employs four convolutional layers, with 32 filters in the first two layers and 64 filters in the last two. Initially, the filter weights W are random, with a bias term B. The convolutional layers have a filter size of 3×3, zero padding, and a stride of (1, 2). To reduce the size of the feature map, max pooling with a stride of (1, 2) is employed, with ReLU activation between the convolutional and pooling layers. In the final stage, after the convolution and pooling layers are applied, the feature map is flattened and vectorized in a fully connected layer. Sorting is performed on the fitness function (FF), i.e., the accuracy [11], in decreasing order from the best value to the worst. Subgrouping then divides the sorted population into memes, Q_meme. From these, the best and worst values are used to compute a new candidate W'_worst. The accuracy obtained with W'_worst is compared with that of the worst value W_worst; if W'_worst is better, it replaces W_worst, or else a new candidate is obtained using step 12. The process continues in this way until the stopping criterion is fulfilled for the given number of generations.
The flowchart of the algorithm is depicted in Fig. 3(b).
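The SFLA leap in step 12 of Algorithm 1 moves the worst set of weights toward the best one by a random, clamped step. A minimal scalar sketch, assuming each weight is updated independently (the function name and the scalar simplification are mine):

```python
import random

def sfla_leap(w_best, w_worst, d_min, d_max, rng=None):
    """Compute W'_worst = W_worst + D, where D = rand * (W_best - W_worst)
    is clamped to [d_min, d_max] (step 12 of Algorithm 1)."""
    rng = rng or random.Random(0)
    d = rng.random() * (w_best - w_worst)
    d = max(d_min, min(d_max, d))
    return w_worst + d

# If the new weights give a better fitness, they replace W_worst;
# otherwise another leap is drawn (steps 9-12 of Algorithm 1).
new_w = sfla_leap(w_best=0.9, w_worst=0.1, d_min=-0.5, d_max=0.5)
assert 0.1 <= new_w <= 0.6  # here D lies in [0, 0.5]
```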

SECTION IV. Result and Discussion

The proposed algorithm is assessed on MDS data gathered from the HB100 radar mounted on a UAV. The data are collected with different arrangements of the HB100 radar antenna array (ULA, URA, and UCA). Tables II–IV report the classification accuracy for the flying objects with four classes—artificial bird, helicopter, drone 1, and drone 2—obtained by varying the batch size (32, 64, and 128) and iterations at different angles (0°, 30°, and 45°), along with the activities of the UAVs, i.e., eight classes—artificial bird, helicopter, drone 1 (On, Off, Connected), drone 1 (Flying), drone 1 (Hovering), drone 2 (On, Off, Connected), drone 2 (Flying), and drone 2 (Hovering)—as listed in Table I, for ULA, URA, and UCA, respectively. The ULA configuration’s accuracy is lower near 0° than in the other directions, i.e., 30° and 45°, whereas the URA works well near 0°. This is due to the nonuniform operation of each antenna element in the ULA [12], which constrains the ULA’s field of view so that it cannot evaluate targets accurately in all directions. The URA, on the other hand, has a symmetric array structure that enables a nearly 360° field of view with little variation in beamwidth or sidelobe level. The reference-point distribution of URAs also allows higher resolution than that of UCAs: the circular layout of UCAs limits their ability to differentiate widely separated targets, whereas URAs can do so more effectively.

TABLE II Classification of Flying Object and Its Operation Mode for Experimental Dataset (for ULA)
TABLE III Classification of Flying Object and Its Operation Mode for Experimental Dataset (for URA)
TABLE IV Classification of Flying Object and Its Operation Mode for Experimental Dataset (for UCA)

Another finding is that accuracy grows with larger batch sizes, up to a batch size of 128 and 500 iterations, after which accuracy declines in every case. This experimental study yields the greatest accuracy at a batch size of 128 (500 iterations). This is because the proposed algorithm’s weights are updated both locally and globally; consequently, accuracy improves even at a batch size of 128. However, accuracy declines after 500 iterations, since the proposed algorithm tends to overfit after many iterations with large batch sizes.

Three population sizes—PS = 25, 50, and 60—are taken into account. Tables II–IV show that, in all cases, PS = 50 provides the best accuracy. At PS = 25, there is insufficient diversity in the search space, while as PS increases, classification complexity increases as well, leading to lower accuracy at PS = 60 than at PS = 50. A suitable value must be chosen, because both a low and a high population negatively affect accuracy: smaller populations limit the weights’ search space, and larger populations add complexity.

The proposed algorithm is a promising approach for classification in real-world scenarios. Figs. 4 and 5 compare the classification accuracy of the proposed algorithm, with four and eight classes, respectively, against the existing algorithms [5], [13], [14], [15]. The proposed algorithm performs better in each of the circumstances considered. As the convolutional layers expose more information about the distinctive characteristics of the different flying objects, their accurate classification becomes possible. The sorting, subgrouping, and replacement building blocks help avoid early convergence and explore additional diversity without running into overfitting issues. However, the proposed algorithm is computationally expensive, which may pose a drawback for real-time applications. Moreover, it is sensitive to its hyperparameters, which must be properly tuned to achieve accurate outcomes. From an application perspective, achieving a high level of accuracy is crucial, as misclassification of flying objects can lead to false alarms or missed detections. The proposed method is observed to outperform the existing methods, and this high accuracy translates directly into the reliability of the system in real-world applications, such as airspace surveillance and flying-object classification.

Fig. 4. Flying object classification accuracy.
Fig. 5. Flying object with activities classification accuracy.

Fig. 6 depicts the sensitivity of the ULA, URA, and UCA. The sensitivity of an array configuration is determined by the placement of the antennas as well as the AoA. The mathematical relation between sensitivity and array configuration is given in [12].

Fig. 6. Sensitivity of various array configurations.

SECTION V. Conclusion

For an intruder monitoring system to function properly in a restricted area, flying objects must be classified. For this application, the proposed strategy performs better than the prevailing techniques in categorizing flying objects along with their activities. Here, the CNN model’s main parameters are its weights, which are updated via SFLA; this also keeps the process from converging prematurely. ULA, URA, and UCA have been explored in various orientations, with the URA excelling in the endfire direction and the ULA performing better at 30° and 45°. The proposed technique attains accuracies of 98.4% (URA, endfire direction), 99.7% (ULA, 30°), and 99.9% (ULA, 45°) for the four classes, and 98.1% (URA, endfire direction), 99.4% (ULA, 30°), and 99.6% (ULA, 45°) for the eight classes.

 
