Tuesday, June 7, 2022

Fusion for SAR Target Recognition


Figure: Overall framework of the multiregion, multiscale scattering feature and deep feature fusion learning (SDF-Net).

 Z. Liu, L. Wang, Z. Wen, K. Li and Q. Pan, "Multilevel Scattering Center and Deep Feature Fusion Learning Framework for SAR Target Recognition," in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-14, 2022, Art no. 5227914, doi: 10.1109/TGRS.2022.3174703.

Abstract: In synthetic aperture radar (SAR) automatic target recognition (ATR), there are mainly two types of methods: physics-driven models and data-driven networks. Physics-driven models exploit electromagnetic theory to obtain physical properties, while data-driven networks extract deep discriminant features of targets. These two types of features represent the target characteristics in the scattering domain and the image domain, respectively. However, the representation discrepancy caused by their different modalities hinders the further comprehensive utilization and fusion of both features.

To take full advantage of physical knowledge and deep discriminant features for SAR ATR, we propose a new feature fusion learning framework, SDF-Net, that combines scattering and deep image features. In this work, we treat the attributed scattering centers (ASCs) as set data rather than as multiple individual points, which allows the topological interactions among scatterers to be mined. Multiregion, multiscale subsets are then constructed at both the component and target levels. Specifically, the most significant scattering intensity and the overall representation in these subsets are successively exploited to learn permutation-invariant scattering features with a set-oriented deep network.
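
The authors' code is not reproduced here, but the permutation-invariant set idea can be illustrated with a minimal PyTorch-style sketch: a shared per-scatterer MLP followed by max pooling over the set dimension, so the output is unchanged under any reordering of the scatterers. The class name, layer sizes, and the seven-parameter ASC attribute vector are illustrative assumptions, not the SDF-Net implementation.

import torch
import torch.nn as nn

class SetScatteringEncoder(nn.Module):
    """Toy permutation-invariant encoder for a set of attributed scattering
    centers (ASCs). Hypothetical sizes; not the paper's implementation."""

    def __init__(self, asc_dim=7, feat_dim=128):
        super().__init__()
        # Shared MLP applied identically to every scattering center.
        self.phi = nn.Sequential(
            nn.Linear(asc_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, asc_set):
        # asc_set: (batch, num_scatterers, asc_dim), e.g. location,
        # amplitude, frequency dependence, length, orientation attributes.
        per_point = self.phi(asc_set)          # (B, N, feat_dim)
        # Max pooling over the set dimension keeps the strongest response
        # per channel and is invariant to the order of the scatterers.
        set_feature, _ = per_point.max(dim=1)  # (B, feat_dim)
        return set_feature

encoder = SetScatteringEncoder()
asc_subset = torch.randn(2, 20, 7)             # two targets, 20 ASCs each
print(encoder(asc_subset).shape)               # torch.Size([2, 128])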

The scattering representations provide mid-level semantic and structural features that are subsequently fused with the complementary deep image features to yield an end-to-end high-level feature learning framework, which helps enhance the generalization ability of the network, especially under complex observation conditions.
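
Continuing the same hedged sketch, a two-branch fusion model could concatenate the set feature above with a CNN embedding of the SAR image chip and classify the joint vector. The backbone, the 64 x 64 single-channel input, and the ten-class head are assumptions for illustration, not the architecture reported in the paper.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Illustrative two-branch fusion: a small CNN for the SAR image chip and
    a scattering-feature branch, joined by concatenation before the classifier."""

    def __init__(self, scatter_dim=128, num_classes=10):
        super().__init__()
        # Assumed single-channel 64 x 64 SAR chip.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        self.head = nn.Sequential(
            nn.Linear(32 + scatter_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image, scatter_feature):
        img_feat = self.image_branch(image)          # (B, 32)
        fused = torch.cat([img_feat, scatter_feature], dim=1)
        return self.head(fused)                      # class logits

model = FusionClassifier()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 128))
print(logits.shape)                                  # torch.Size([2, 10])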

Extensive experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database verify the effectiveness and robustness of SDF-Net compared with both typical SAR ATR networks and ASC-based models.

URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9773339&isnumber=9633014

Attributed Scattering Centers for SAR ATR

L. C. Potter and R. L. Moses, IEEE Transactions on Image Processing, 1997.

Abstract

High-frequency radar measurements of man-made targets are dominated by returns from isolated scattering centers, such as corners and flat plates. Characterizing the features of these scattering centers provides a parsimonious, physically relevant signal representation for use in automatic target recognition (ATR). In this paper, we present a framework for feature extraction predicated on parametric models for the radar returns. The models are motivated by the scattering behaviour predicted by the geometrical theory of diffraction. For each scattering center, statistically robust estimation of model parameters provides high-resolution attributes including location, geometry, and polarization response. We present statistical analysis of the scattering model to describe feature uncertainty, and we provide a least-squares algorithm for feature estimation. We survey existing algorithms for simplified models, and derive bounds for the error incurred in adopting the simplified models. A model order selection algorithm is given, and an M-ary generalized likelihood ratio test is given for classifying polarimetric responses in spherically invariant random clutter.
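
For context, the GTD-motivated model family describes each scattering center's complex response as a function of radar frequency and aspect. The snippet below synthesizes a simplified point-scatterer version, with amplitude A, frequency-dependence exponent alpha, and location (x, y) as the attributes one would estimate (e.g., by least squares). It is a reduced illustrative form, not the paper's full parametric model, and the band and aperture values are made up.

import numpy as np

C = 299792458.0  # speed of light, m/s

def gtd_point_response(freq_hz, aspect_rad, A, alpha, x, y, fc_hz):
    """Simplified GTD-style scattering-center response:
    E(f, phi) = A * (j f / fc)^alpha * exp(-j 4 pi f / c * (x cos(phi) + y sin(phi)))
    Reduced illustrative form only; signs and normalization are convention-dependent."""
    f = np.asarray(freq_hz, dtype=float)[:, None]
    phi = np.asarray(aspect_rad, dtype=float)[None, :]
    geometry = np.exp(-1j * 4 * np.pi * f / C * (x * np.cos(phi) + y * np.sin(phi)))
    return A * (1j * f / fc_hz) ** alpha * geometry

# Example: one scatterer observed over an assumed X-band aperture.
freqs = np.linspace(9.0e9, 10.0e9, 64)          # Hz
aspects = np.deg2rad(np.linspace(-3.0, 3.0, 64))
E = gtd_point_response(freqs, aspects, A=1.0, alpha=0.5, x=1.2, y=-0.4, fc_hz=9.5e9)
print(E.shape)                                   # (64, 64) frequency-by-aspect samples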


URL: https://ieeexplore.ieee.org/document/552098

jpier.org

Target Recognition for Multi-Aspect SAR Images with Fusion Strategies


Two fusion strategies for target recognition using multi-aspect synthetic aperture radar (SAR) images are presented for recognizing ground vehicles in the MSTAR database. Because of radar cross-section variability, the ability to discriminate between targets varies greatly with target aspect, so multi-aspect images of a given target are used to support recognition. The two proposed strategies are a data fusion strategy and a decision fusion strategy. The sensitivity of recognition performance to the number of images and to the aspect separations is analyzed for both strategies, and the two are compared in terms of probability of correct classification and operating efficiency. The experimental results indicate that, given a small number of multi-aspect images of a target with suitable aspect separations, both strategies improve the probability of correct classification significantly over a single-image method.
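
As a rough illustration of the decision-fusion side of the comparison (not the exact rule used in the paper), the sketch below combines per-aspect classifier posteriors with either an average or a product rule; the three views and three classes are invented numbers.

import numpy as np

def decision_fusion(posteriors, rule="mean"):
    """Fuse per-aspect class posteriors into a single decision.
    posteriors: (num_views, num_classes) array, each row summing to 1.
    rule: 'mean' averages the rows; 'product' multiplies them and renormalizes.
    Both are standard textbook combination rules, used here only as an example."""
    p = np.asarray(posteriors, dtype=float)
    if rule == "mean":
        fused = p.mean(axis=0)
    elif rule == "product":
        fused = p.prod(axis=0)
        fused /= fused.sum()
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(fused.argmax()), fused

# Three aspect views of the same target, three candidate classes.
views = [[0.6, 0.3, 0.1],
         [0.5, 0.4, 0.1],
         [0.2, 0.7, 0.1]]
label, fused = decision_fusion(views, rule="mean")
print(label, fused)   # 1 [0.43333333 0.46666667 0.1]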



 
