Friday, December 5, 2025

Novel Phase Error Recovery Method Advances Bistatic Forward-Looking SAR Imaging


Flowchart of the proposed phase error reverse recovery method. 

Figure 5: Functional Flow Analysis

Overview

Figure 5 above presents the complete processing pipeline for the phase error reverse recovery method in bistatic forward-looking SAR (BFSAR). The flowchart illustrates how range-compressed raw data is transformed into a focused BFSAR image through four major processing stages, with an iterative refinement loop.

Detailed Functional Flow

Input: Range Compression Raw Data

The process begins with range-compressed raw data—echo signals that have been demodulated and compressed in the range (fast-time) dimension but still contain motion-induced phase errors and require azimuth (slow-time) processing.

Stage 1: Two-Step Preprocessing (Blue Box - Left Side)

1. GCBC System Establishment

  • Analyzes the Doppler characteristics of the bistatic geometry
  • Calculates the Doppler gradient to determine spectrum tilt direction
  • Establishes rotation angle θ between GCC and GCBC coordinate systems
  • Creates the coordinate transformation matrix to align with azimuth Doppler variation

2. Overlapping Sub-Aperture Division

  • Divides the full synthetic aperture into multiple overlapping segments
  • Typical overlap is 50% between adjacent subapertures
  • Reduces the impact of severe nonsystematic range cell migration (NsRCM)
  • Enables phase continuity during later time-domain phase error (TDPE) concatenation

3. Imaging Grid Setting

  • Calculates theoretical wavenumber support region boundaries (K'_{x max}, K'_{x min}, K'_{y max}, K'_{y min})
  • Determines optimal 2-D grid resolution (Δx', Δy') based on wavenumber bandwidth
  • Constructs Cartesian imaging grid in GCBC coordinate system
  • Grid spacing is finer than actual resolution to avoid aliasing
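
As a concrete illustration of the grid-setting step, the sketch below builds a GCBC imaging grid from precomputed wavenumber support bounds. This is a minimal Python sketch, not the authors' code; the oversample factor implements the "finer than actual resolution" guideline above, and all names are illustrative.

```python
import numpy as np

def build_imaging_grid(kx_min, kx_max, ky_min, ky_max,
                       scene_x, scene_y, oversample=1.2):
    """Construct a GCBC imaging grid from theoretical wavenumber bounds.

    Spacing follows dx' = 0.886 * 2*pi / (K'_x_max - K'_x_min); the
    oversample factor (> 1, assumed value) makes the grid finer than
    the actual resolution to avoid aliasing.
    """
    dx = 0.886 * 2.0 * np.pi / (kx_max - kx_min) / oversample
    dy = 0.886 * 2.0 * np.pi / (ky_max - ky_min) / oversample
    x = np.arange(-scene_x / 2.0, scene_x / 2.0, dx)   # azimuth axis x'
    y = np.arange(-scene_y / 2.0, scene_y / 2.0, dy)   # range axis y'
    return np.meshgrid(x, y, indexing="ij")            # Cartesian GCBC grid
```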

Stage 2: Spectrum De-Aliasing (Gray Box - Left Side)

4. Fast BPA Imaging

  • Applies fast backprojection algorithm to create coarse subaperture images
  • Back-projects range-compressed data onto the GCBC imaging grid
  • Results in focused but spectrally aliased images due to position-dependent spectrum centers

5. Spectrum Center Correction

  • Applies 2-D phase function f(x', y') to shift all target spectra to a common center
  • Correction function derived from instantaneous slant range geometry
  • Eliminates position-dependent spectral centroid variations
  • Output: Image with aligned spectrum centers but residual spectrum tilt

6. Spectrum Tilt Correction

  • Applies range-frequency domain correction g(x', ΔK'_y)
  • Uses Taylor series expansion to third order for computational efficiency
  • Corrects for x'-variant spectrum tilt (space-variant along azimuth)
  • Output: Fully de-aliased spectrum with uniform phase errors at aligned positions

Stage 3: Phase Error Reverse Recovery (Green Box - Right Side)

7. Phase Errors Estimation

  • Applies an autofocus algorithm, typically Phase Gradient Autofocus (PGA); a minimal PGA sketch follows this stage
  • Estimates image domain phase error (IDPE) from the de-aliased spectrum
  • IDPE represents phase errors at image frequency sampling points
  • These are not directly usable because they lie in the resampled frequency domain

8. Reverse Recovery Relationship Establishment

  • Calculates image frequency sampling points (F_x, F_y) based on grid parameters
  • Determines effective range transmitting frequencies K̂_y from valid spectrum support
  • Computes effective azimuth transmitting frequencies K̂_x accounting for tilt correction
  • Maps effective frequencies to azimuth time points t̂_a (original pulse timing)
  • Maps image sampling frequencies to time points t̃_a (resampled timing)

9. TDPE Obtained by IDPE

  • Interpolates platform positions to t̃_a time grid
  • Calculates transmission frequency K̃_r for each image sampling point
  • Removes frequency variation from IDPE: Δφ(t̃_a) = (K_{rc}/K̃_r)Δφ(F_x, F_{yc})
  • Interpolates Δφ(t̃_a) to original data time grid to obtain TDPE Δφ(t_a)
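
Step 7 above names PGA as the typical estimator. The following is a minimal, generic PGA sketch (not the authors' implementation) showing how an azimuth phase error can be pulled out of a de-aliased subaperture image; adaptive window shrinking and scatterer screening, which operational PGA needs, are omitted.

```python
import numpy as np

def pga_idpe(img0, n_iter=5, win_frac=0.5):
    """Minimal, generic PGA sketch for the IDPE estimation step.

    img0 : complex image, shape (n_range, n_azimuth); the phase error
    lives along the azimuth axis. Returns the estimated phase error
    (radians) over the azimuth phase-history samples.
    """
    n_rng, n_az = img0.shape
    t = np.arange(n_az)
    xc = t - n_az // 2                         # centered image coordinate
    phi = np.zeros(n_az)
    for _ in range(n_iter):
        # Re-focus the original image with the accumulated estimate.
        img = np.fft.fft(np.fft.ifft(img0, axis=1) * np.exp(-1j * phi), axis=1)
        # Circularly shift the brightest sample of each range line to center.
        peaks = np.argmax(np.abs(img), axis=1)
        shifted = np.array([np.roll(img[r], n_az // 2 - peaks[r])
                            for r in range(n_rng)])
        # Rectangular window isolating the dominant scatterers.
        w = np.zeros(n_az)
        half = max(1, int(win_frac * n_az / 2))
        w[n_az // 2 - half:n_az // 2 + half] = 1.0
        g = np.fft.ifft(shifted * w, axis=1)               # phase-history domain
        dg = np.fft.ifft(shifted * w * (2j * np.pi * xc / n_az), axis=1)
        # Weighted phase-gradient estimate, integrated over slow time.
        num = np.sum((np.conj(g) * dg).imag, axis=0)
        den = np.maximum(np.sum(np.abs(g) ** 2, axis=0), 1e-12)
        dphi = np.cumsum(num / den)
        dphi -= np.polyval(np.polyfit(t, dphi, 1), t)      # drop constant/linear
        phi += dphi
    return phi
```

The constant and linear terms are detrended away, consistent with the article's note that PGA cannot estimate them.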

Stage 4: Phase Error Combination and Compensation (Orange Boxes - Right Side)

10. Full Aperture Phase Error Combination

  • Concatenates TDPEs from all overlapping subapertures
  • Uses overlap regions to ensure phase continuity
  • Estimates and removes linear phase differences between adjacent subapertures
  • Produces the full-aperture TDPE Δφ_{all}(t_a) spanning the entire synthetic aperture (a stitching sketch follows this stage)

11. IDPE Completely Estimated Decision Point

  • Checks convergence: compares image entropy before and after compensation
  • If entropy decreases → Phase error estimation successful → Continue to imaging
  • If entropy increases → Phase error estimation failed → Re-divide subapertures
  • Typical implementation requires 1-2 iterations for convergence
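
Steps 10 and 11 reduce to two small routines: stitching subaperture TDPEs with linear alignment in the overlaps, and an entropy metric for the convergence decision. A hedged sketch, with overlap-handling conventions assumed rather than taken from the paper:

```python
import numpy as np

def stitch_tdpe(tdpe_subs, overlap):
    """Concatenate subaperture TDPEs into one full-aperture estimate.

    tdpe_subs : list of 1-D phase-error arrays (radians); adjacent
    entries share `overlap` samples. The linear mismatch measured in
    each overlap is removed from the incoming segment so the stitched
    phase stays continuous.
    """
    full = tdpe_subs[0].copy()
    for seg in tdpe_subs[1:]:
        t = np.arange(overlap)
        slope, offset = np.polyfit(t, full[-overlap:] - seg[:overlap], 1)
        seg = seg + offset + slope * np.arange(len(seg))   # align to previous
        blend = np.linspace(1.0, 0.0, overlap)             # cross-fade overlap
        full[-overlap:] = blend * full[-overlap:] + (1 - blend) * seg[:overlap]
        full = np.concatenate([full, seg[overlap:]])
    return full

def image_entropy(img):
    """Shannon entropy of the normalized intensity; lower = better focus."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

The decision point then compares image_entropy before and after compensation: a decrease accepts the estimate, an increase triggers re-division of the subapertures.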

Final Processing (Top Right)

12. Fast BPA Imaging (with compensated data)

  • Applies compensation function h(t_a) in range-frequency domain
  • Compensates both phase errors and envelope migration simultaneously
  • Performs final fast backprojection with full aperture corrected data
  • Uses full-resolution imaging grid (finer than preprocessing grid)

13. Focused BFSAR Image

  • Final output: well-focused bistatic forward-looking SAR image
  • Motion errors compensated
  • Targets properly focused with minimal sidelobe artifacts
  • Ready for interpretation and exploitation

Key Workflow Characteristics

Iterative Refinement Loop

The flowchart shows a feedback loop from the decision point back to "Overlapping sub-aperture division." This enables:

  • Adaptive subaperture size adjustment if the initial division proves insufficient
  • Recovery from poor initial phase error estimates
  • Robustness to varying motion error severity

Color-Coded Functional Grouping

  • Blue box (Two-step preprocessing): Data preparation and coordinate system setup
  • Gray box (Spectrum de-aliasing): Image spectrum alignment enabling accurate phase estimation
  • Green box (Phase error reverse recovery): Core innovation - mapping IDPE to TDPE
  • Orange boxes: Standard processing and output generation

Data Flow Directionality

  • Main forward path: Left side down, then right side up
  • Reverse recovery: Bottom-to-top on right side (hence the name "reverse recovery")
  • Feedback loop: Enables quality control and adaptive processing

Critical Innovation Highlight

The reverse recovery relationship establishment block represents the paper's key contribution. Traditional methods estimate phase errors in the image domain but cannot directly convert them to the time domain for data-level compensation because:

  1. BP imaging resamples the transmit spectrum based on imaging grid parameters
  2. Spectrum corrections further alter the frequency-to-time mapping
  3. No direct correspondence exists between image frequencies and data time samples

The reverse recovery method solves this by:

  1. Computing effective transmitting frequencies from image grid and spectrum support
  2. Establishing analytical mapping between effective frequencies and time samples
  3. Interpolating through intermediate time grids to bridge image and data domains
  4. Enabling accurate TDPE estimation despite BP spectral resampling

This analytical approach avoids iterative search methods and provides deterministic, computationally efficient phase error recovery suitable for operational implementation.


BLUF: Chinese researchers have developed a groundbreaking motion compensation technique for bistatic forward-looking synthetic aperture radar (BFSAR) that addresses critical imaging challenges through a ground combined beam coordinate (GCBC) system, enabling high-precision airborne surveillance and terminal guidance applications despite complex platform motion errors.

Breakthrough Algorithm Tackles Complex Motion Compensation Challenge

Researchers at Xidian University have introduced a phase error reverse recovery method that significantly improves image quality for bistatic forward-looking SAR systems operating under challenging motion conditions. The technique, published in IEEE Transactions on Geoscience and Remote Sensing, addresses fundamental limitations in existing motion compensation approaches for airborne radar platforms.

The development comes as defense and commercial operators increasingly deploy bistatic SAR configurations for applications requiring forward-looking capabilities, including aircraft navigation, autonomous landing systems, and precision strike missions. Traditional monostatic SAR systems cannot image directly ahead of the aircraft, creating a critical operational gap that bistatic configurations fill—but at the cost of substantially more complex signal processing requirements.

Technical Innovation Centers on Coordinate System Redesign

The core innovation involves establishing a GCBC system aligned with the radar signal's azimuth Doppler variation, rather than relying on conventional ground Cartesian coordinate (GCC) systems. This seemingly simple change addresses two critical problems that have plagued BFSAR imaging: severe image spectrum tilt and nonsystematic range cell migration (NsRCM).

"The forward-looking configuration induces severe image spectrum tilt and nonsystematic range cell migration, significantly degrading phase error estimation accuracy," lead author Yishan Lou and colleagues write in their paper. The research team, led by Professor Mengdao Xing at Xidian's National Laboratory of Radar Signal Processing, developed the method under support from China's National Natural Science Foundation.

Mathematical Foundation: Signal Model

The bistatic forward-looking geometry introduces unique signal characteristics. For a target P at coordinates (x₀, y₀, 0), the instantaneous slant ranges from transmitter and receiver are:

R_T(t_a) = √[(X_T(t_a) - x₀)² + (Y_T(t_a) - y₀)² + Z_T(t_a)²]

R_R(t_a) = √[(X_R(t_a) - x₀)² + (Y_R(t_a) - y₀)² + Z_R(t_a)²]

where t_a represents azimuth slow time, and (X_T, Y_T, Z_T) and (X_R, Y_R, Z_R) are the transmitter and receiver positions.

The range-compressed echo signal after demodulation becomes:

S(f_r, t_a) = A(t_a) rect(f_r/B) exp(-jK_r(R_T(t_a) + R_R(t_a) + ΔR(t_a)))

where K_r = 2π(f_c + f_r)/c is the wavenumber magnitude, f_r is range frequency, f_c is center frequency, c is speed of light, B is bandwidth, and ΔR(t_a) represents motion-induced range error.

Doppler Characteristics and Coordinate Transformation

Through Taylor series expansion about the aperture center time, the researchers derived the Doppler frequency:

f_d(x₀, y₀) = (χ_r1)/(2λR_rcen) + (χ_t1)/(2λR_tcen)

where:

  • χ_r1 = 2v_ry(Y_R(t_ac) - y₀) + 2v_rzZ_R(t_ac)
  • χ_t1 = 2v_ty(Y_T(t_ac) - y₀) + 2v_tzZ_T(t_ac)
  • v_ry, v_rz are receiver velocities
  • v_ty, v_tz are transmitter velocities

The Doppler gradient determines the spectrum tilt direction:

Δf_dx = [f_d(x₀ + Δ, y₀) - f_d(x₀ - Δ, y₀)]/(2Δ)

Δf_dy = [f_d(x₀, y₀ + Δ) - f_d(x₀, y₀ - Δ)]/(2Δ)

The GCBC system rotation angle θ relative to the GCC X-axis is:

θ = arccos(dir · n_x / |dir||n_x|)

where dir = Δf_d/|Δf_d| is the normalized Doppler variation direction.

The coordinate transformation becomes:

[x', y', z']ᵀ = [cos θ -sin θ 0; sin θ cos θ 0; 0 0 1][x, y, z]ᵀ
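
Numerically, the rotation setup is compact. The sketch below assumes f_d is available as a callable implementing the Doppler expression above; everything else follows the gradient and arccos formulas directly (names are illustrative):

```python
import numpy as np

def gcbc_rotation(f_d, x0, y0, delta=1.0):
    """Rotation angle theta of the GCBC system from the Doppler gradient.

    f_d : callable f_d(x, y) giving Doppler frequency at a ground point.
    Central differences approximate the gradient; theta is the angle
    between its direction and the GCC x-axis (unsigned here -- a real
    implementation would also track the rotation sense).
    """
    dfdx = (f_d(x0 + delta, y0) - f_d(x0 - delta, y0)) / (2 * delta)
    dfdy = (f_d(x0, y0 + delta) - f_d(x0, y0 - delta)) / (2 * delta)
    d = np.array([dfdx, dfdy])
    d = d / np.linalg.norm(d)                       # normalized Doppler direction
    theta = np.arccos(np.clip(d[0], -1.0, 1.0))     # angle to n_x = (1, 0)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],                     # GCC -> GCBC rotation
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return theta, R
```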

In conventional GCC-based processing, simulation results showed target envelopes spanning 62 range units due to spectrum tilt. The GCBC system reduced this to 38 range units—a 39% improvement that directly translates to more accurate phase error estimation and superior image quality.

Wavenumber Domain Analysis

Backprojection Image Formation

The ground Cartesian backprojection algorithm constructs the SAR image as:

I(x, y) = Σ_{t_a=-T_a/2}^{T_a/2} Σ_{K_r=K_{r min}}^{K_{r max}} S(K_r, t_a) exp(jK_rR_Σ(t_a))

where T_a is synthetic aperture time, R_Σ(t_a) = R̂_T(t_a) + R̂_R(t_a) represents the instantaneous slant range sum from grid point (x,y) to transmitter and receiver, and:

K_{r min} = (2π/c)(f_c - B/2)

K_{r max} = (2π/c)(f_c + B/2)
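
For reference, the double sum above maps to a brute-force backprojection loop like the following. The paper uses a fast, factorized BPA; this O(nK·nT·grid) version only illustrates the underlying operation, with pos_T and pos_R assumed to return interpolated platform positions:

```python
import numpy as np

def bp_image(S, Kr, t_a, pos_T, pos_R, X, Y):
    """Brute-force ground-plane backprojection of the double sum above.

    S          : range-compressed data in the (K_r, t_a) domain, (nK, nT)
    Kr, t_a    : wavenumber and slow-time sample vectors
    pos_T/pos_R: callables returning 3-D platform positions at time t
    X, Y       : 2-D imaging grids (z = 0)
    """
    img = np.zeros(X.shape, dtype=complex)
    for it, t in enumerate(t_a):
        xt, yt, zt = pos_T(t)
        xr, yr, zr = pos_R(t)
        # Instantaneous bistatic slant-range sum to every grid point.
        R_sum = (np.sqrt((xt - X) ** 2 + (yt - Y) ** 2 + zt ** 2) +
                 np.sqrt((xr - X) ** 2 + (yr - Y) ** 2 + zr ** 2))
        for ik, kr in enumerate(Kr):
            img += S[ik, it] * np.exp(1j * kr * R_sum)
    return img
```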

Spectrum Analysis in GCBC

Applying the 2-D Fourier transform and principle of stationary phase (POSP), the image spectrum phase becomes:

φ(K'_x, K'_y) = K_rR_Σ(t_a) - K'_x x' - K'_y y' - K_rΔR(t_a)

The azimuth wavenumber is:

K'_x = ∂φ/∂x' = K_r[(X'_T(t_a) - x')/R̂_T(t_a) + (X'_R(t_a) - x')/R̂_R(t_a)]

The range wavenumber is:

K'_y = ∂φ/∂y' = K_r[(Y'_T(t_a) - y')/R̂_T(t_a) + (Y'_R(t_a) - y')/R̂_R(t_a)]

These expressions enable precise determination of the imaging grid resolution:

Δx' = 0.886 × 2π/(K'_{x max} - K'_{x min})

Δy' = 0.886 × 2π/(K'_{y max} - K'_{y min})

Four-Stage Processing Framework Delivers Results

The proposed method operates through four distinct stages:

Stage 1: Two-Step Preprocessing

The preprocessing divides data into overlapping subapertures while establishing imaging grids based on theoretical wavenumber analysis. The maximum and minimum wavenumber boundaries are:

K'_{x max} = K_r(sin θ_{T max} + sin θ_{R max})

K'_{x min} = K_r(sin θ_{T min} + sin θ_{R min})

K'_{y max} = K_{r max}(cos θ_T + cos θ_R)

K'_{y min} = K_{r min}(cos θ_T + cos θ_R)

where θ_T and θ_R are transmitter and receiver squint angles:

θ_T = arcsin[(X'_T(t_a) - x')/R̂_T(t_a)]

θ_R = arcsin[(X'_R(t_a) - x')/R̂_R(t_a)]

This addresses the challenge that motion errors introduce significant NsRCM that worsens with synthetic aperture integration time.

Stage 2: Spectrum De-Aliasing

Spectrum Center Correction: The 2-D spectrum centers are:

K'_{xc} = K_{rc}[(X'_T(t_ac) - x')/R̂_T(t_ac) + (X'_R(t_ac) - x')/R̂_R(t_ac)]

K'_{yc} = K_{rc}[(Y'_T(t_ac) - y')/R̂_T(t_ac) + (Y'_R(t_ac) - y')/R̂_R(t_ac)]

where K_{rc} = 2π/λ. The correction function satisfies:

∂f(x', y')/∂x' = K'_{xc}

∂f(x', y')/∂y' = K'_{yc}

Integrating these partial derivatives yields the spectrum center correction function:

f(x', y') = K_{rc}[√((X'_T(t_ac) - x')² + (Y'_T(t_ac) - y')² + Z'_T(t_ac)²) + √((X'_R(t_ac) - x')² + (Y'_R(t_ac) - y')² + Z'_R(t_ac)²)]

After correction, the image spectrum becomes:

I₁(ΔK'_x, ΔK'_y) = exp(-jx'₀ΔK'_x - jy'₀ΔK'_y) exp(-jΔφ(ΔK'_x, ΔK'_y))

where ΔK'_x = K'_x - K'_{xc} and ΔK'_y = K'_y - K'_{yc}.
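
Since f(x', y') is just the bistatic range sum at aperture-center time scaled by K_{rc}, the correction reduces to a few lines. A sketch with an assumed sign convention; pT and pR are the platform positions at t_ac:

```python
import numpy as np

def center_correction_phase(Xg, Yg, pT, pR, k_rc):
    """Spectrum-center correction phase f(x', y') on the GCBC grid.

    pT, pR : transmitter/receiver positions at aperture-center time t_ac
    k_rc   : carrier wavenumber 2*pi/lambda
    Multiplying the coarse image by exp(-1j * f) shifts every target
    spectrum to a common center.
    """
    rT = np.sqrt((pT[0] - Xg) ** 2 + (pT[1] - Yg) ** 2 + pT[2] ** 2)
    rR = np.sqrt((pR[0] - Xg) ** 2 + (pR[1] - Yg) ** 2 + pR[2] ** 2)
    return k_rc * (rT + rR)
```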

Spectrum Tilt Correction: The spectrum centerline of any point target is:

ΔK'_y = ΔK'_x × [R̂_R(t_ac)(X'_T(t_ac) - x') + R̂_T(t_ac)(X'_R(t_ac) - x')]/[R̂_R(t_ac)(Y'_T(t_ac) - y') + R̂_T(t_ac)(Y'_R(t_ac) - y')]

The correction function uses Taylor series expansion:

m(x', y') = [R̂_R(t_ac)(X'_T(t_ac) - x') + R̂_T(t_ac)(X'_R(t_ac) - x')]/[R̂_R(t_ac)(Y'_T(t_ac) - y'_c) + R̂_T(t_ac)(Y'_R(t_ac) - y'_c)]

= Σ_{i=0}^∞ (x'^i/i!) × ∂^i m(x', y')/∂x'^i

The tilt correction function becomes:

g(x', ΔK'_y) = -ΔK'_y Σ_{i=0}^∞ (x'^{i+1}/(i+1)!) × ∂^i m(x', y')/∂x'^i

After both corrections:

I₂(ΔK'_x, ΔK'_y) = exp(-jx'₀[ΔK'_x - ΔK'_y Σ_{i=0}^∞ (x'^i/i!) × ∂^i m/∂x'^i] - jy'₀ΔK'_y) × exp(-jΔφ(...))
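
A sketch of the tilt correction with the third-order Taylor truncation mentioned earlier, computing the derivatives of m numerically. The paper derives them analytically, and the expansion point x' = 0 is an assumption here:

```python
import math
import numpy as np

def tilt_correction_phase(xg, dKy, m, order=3, h=1e-3):
    """Spectrum tilt correction g(x', dK'_y) with third-order truncation.

    xg  : azimuth coordinates x' (1-D); dKy : range-frequency offsets (1-D)
    m   : callable m(x') giving the tilt slope (y' fixed), per the
    centerline expression above. Derivatives are central differences
    at x' = 0 (assumed expansion point).
    """
    xg = np.asarray(xg, dtype=float)
    derivs = [m(0.0),
              (m(h) - m(-h)) / (2 * h),
              (m(h) - 2 * m(0.0) + m(-h)) / h ** 2,
              (m(2 * h) - 2 * m(h) + 2 * m(-h) - m(-2 * h)) / (2 * h ** 3)]
    series = np.zeros_like(xg)
    for i, d in enumerate(derivs[:order + 1]):
        series += d * xg ** (i + 1) / math.factorial(i + 1)
    return -np.outer(series, dKy)          # phase in the (x', dK'_y) domain
```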

Stage 3: Phase Error Reverse Recovery

Effective Frequency Calculation: The image frequency sampling points are:

F_y = (2π/(Δy·L_y))L'_y + F_{yc}

F_x = (2π/(Δx·L_x))L'_x + F_{xc}

where L'_y = (-L_y/2 : L_y/2 - 1), L'_x = (-L_x/2 : L_x/2 - 1).

The effective range transmitting frequency is determined by:

K̂_y = {x | x ∈ K_{y0} ∩ F_y}

where K_{y0} is the theoretical range wavenumber support.

The effective transmitted wavenumber is:

K̂_r = [F̂_yR̂_R(t_a, x'_c, y'_c)R̂_T(t_a, x'_c, y'_c)]/[R̂_R(t_a, x'_c, y'_c)(Y'_T(t_a) - y'_c) + R̂_T(t_a, x'_c, y'_c)(Y'_R(t_a) - y'_c)]

The effective azimuth transmitting frequency after tilt correction:

K̂_x = K̂_r[(X'_T(t_a) - x'_c)/R̂_T(t_a, x'_c, y'_c) + (X'_R(t_a) - x'_c)/R̂_R(t_a, x'_c, y'_c)] - K̂_y · Σ_{i=0}^∞ (x'^i_c/i!) · ∂^i m(x'_c, y'_c)/∂x'^i_c

Time Domain Recovery: The azimuth time points for effective frequencies are:

t̂_a = ξ(K̂_y, K̂_x)

For image sampling points:

t̃_a = ξ(K̂_y, F_x(K̂_x))

where F_x(K̂_x) = {x | x ∈ K̂_x ∩ F_x}.

Through interpolation of platform positions, the transmission frequency for each image sampling point is:

K̃_r = [K̂_yR̂_R(t̃_a, x'_c, y'_c)R̂_T(t̃_a, x'_c, y'_c)]/[R̂_R(t̃_a, x'_c, y'_c)(Ỹ_T(t̃_a) - y'_c) + R̂_T(t̃_a, x'_c, y'_c)(Ỹ_R(t̃_a) - y'_c)]

The phase error for reverse recovery:

Δφ(t̃_a) = (K_{rc}/K̃_r)Δφ(F_x, F_{yc})

Finally, TDPE is obtained by interpolating Δφ(t̃_a) to the original time grid t_a.
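
The final scaling-and-interpolation step is simple once the time mappings exist. A sketch, assuming t̃_a and K̃_r have been precomputed from the geometry expressions above:

```python
import numpy as np

def idpe_to_tdpe(idpe, t_tilde, Kr_tilde, k_rc, t_a):
    """Map the estimated IDPE back to a time-domain phase error (TDPE).

    idpe     : phase error at the image azimuth sampling points (radians)
    t_tilde  : azimuth times t~_a associated with those points via the
               reverse-recovery mapping (assumed precomputed)
    Kr_tilde : effective transmitted wavenumber K~_r at each point
    k_rc     : carrier wavenumber 2*pi/lambda
    t_a      : original slow-time grid of the raw data
    """
    dphi = (k_rc / Kr_tilde) * idpe            # remove frequency variation
    order = np.argsort(t_tilde)                # np.interp wants sorted abscissae
    return np.interp(t_a, t_tilde[order], dphi[order])
```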

Stage 4: Compensation and Imaging

The compensation function applied in range-frequency domain:

h(t_a) = exp(-j[Δφ_{all}(t_a)/f_c](f_c + f_r))

where Δφ_{all}(t_a) is the concatenated full-aperture TDPE from all subapertures.
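
Applying h(t_a) across the range-frequency axis is a single outer product; a minimal sketch under the article's formula:

```python
import numpy as np

def compensate(S, f_r, f_c, dphi_all):
    """Apply h(t_a) = exp(-j * [dphi_all(t_a)/f_c] * (f_c + f_r)).

    S : data in the range-frequency domain, shape (len(f_r), len(t_a)).
    Scaling the phase by (f_c + f_r)/f_c corrects envelope migration
    along with the phase error, as the article notes.
    """
    h = np.exp(-1j * np.outer((f_c + f_r) / f_c, dphi_all))
    return S * h
```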

Validation Through Simulation and Flight Test

The research team validated their approach using both simulated data and real airborne BFSAR data collected during flight experiments. The simulation used a 15 GHz carrier frequency with 100 MHz bandwidth, processing five point targets with significant motion errors introduced.

Real-world validation came from an airborne bistatic configuration with a 16 GHz carrier and 150 MHz bandwidth. The receiver adopted a forward-looking configuration while the transmitter provided side-looking illumination, creating the overlapping coverage area typical of bistatic operations. Processing a scene measuring 2,048 × 1,843 meters at 0.4 × 0.3 meter resolution, the proposed method achieved an image entropy of 16.14, compared to 16.20 for the reference algorithm and 16.32 for uncompensated fast BP processing—lower entropy indicating better focus quality.

Detailed analysis of two corner reflector targets in the real data demonstrated substantial improvements in standard SAR quality metrics. For Target A, the proposed method achieved a peak sidelobe ratio (PSLR) of -18.2342 dB in range and -12.9430 dB in azimuth, compared to -15.7063 dB and -10.7064 dB for the reference algorithm. Integrated sidelobe ratio (ISLR) showed similar improvements: -9.8663 dB (range) and -9.8937 dB (azimuth) versus -3.8731 dB and -0.5250 dB.

Computational Efficiency Analysis

The researchers provide a detailed computational complexity analysis using floating-point operations (FLOPs) as the metric. For a scene of N×N grid points with n subapertures of length L, and original data range dimension M, the individual stage complexities are:

Coarse imaging: ρ₁ = (8LN²/m) + N²(1 + m)log₂(4N/√n) - (N²/2)log₂m

Spectrum de-aliasing: ρ₂ = N²log₂N + 2N²

Phase error reverse recovery: ρ₃ = 2N²log₂(N/n) + 4N² + 24N

Full-aperture compensation: ρ₄ = ((n + 1)ML/2)log₂M + (n + 1)ML

Total computational load: ρ_{all} = (8L/m + 6)N² + N²(5 + 2m)log₂N - N²(5 + m)log₂√n - N²log₂m + (n + 1)ML log₂√M + (n + 1)ML + 24N

where m represents the number of subaperture divisions used for fast imaging.
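
The total-load expression is straightforward to evaluate when trading the subaperture count n against the fast-imaging factor m; a direct transcription (parameter names follow the text):

```python
import numpy as np

def rho_all(N, n, L, M, m):
    """Total computational load (FLOPs) from the stage formulas above."""
    return ((8 * L / m + 6) * N ** 2
            + N ** 2 * (5 + 2 * m) * np.log2(N)
            - N ** 2 * (5 + m) * np.log2(np.sqrt(n))
            - N ** 2 * np.log2(m)
            + (n + 1) * M * L * np.log2(np.sqrt(M))
            + (n + 1) * M * L
            + 24 * N)
```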

Processing times for the real data experiment were 85.8 seconds for direct fast BP, 97.56 seconds for the reference algorithm, and 134.8 seconds for the proposed method—a 38% increase over the reference method but delivering substantially superior image quality. The additional processing burden stems from IDPE estimation in subaperture images and TDPE compensation in the data domain prior to final imaging.

Implications for Operational Systems

The advancement holds particular significance for cost-constrained platforms including small unmanned aerial vehicles (UAVs) and miniature autonomous aerial vehicles that cannot accommodate high-precision inertial measurement units. While GPS and inertial navigation systems can provide motion compensation, their accuracy often proves insufficient for high-resolution imaging requirements.

The method's ability to accurately estimate motion errors from the radar data itself—a data-driven approach—eliminates dependence on expensive external navigation systems. This could enable BFSAR capabilities on platforms previously unable to achieve the required image quality.

However, the researchers acknowledge limitations. "The proposed method can accurately estimate nonspace-variant phase errors, but its accuracy is limited for space-variant phase errors," the paper notes. Additionally, because PGA cannot accurately estimate constant and linear phase errors, compensated images exhibit positional offsets.

Future Research Directions

The Xidian University team identifies accurately estimating space-variant, constant, and linear phase errors as key focus areas for future work. These enhancements would further improve the method's robustness across diverse operational scenarios and platform configurations.

The research also opens possibilities for extending the GCBC framework to other bistatic and multistatic SAR configurations beyond forward-looking geometries. The fundamental approach of aligning coordinate systems with signal characteristics rather than arbitrarily chosen reference frames may have broader applicability in radar signal processing.

With bistatic SAR systems gaining traction for both military and civilian applications—from border surveillance to autonomous vehicle navigation—techniques that enable reliable operation despite platform motion will prove increasingly valuable. The phase error reverse recovery method represents a significant step toward making high-quality bistatic forward-looking SAR practical for operational deployment.


Sources

  1. Lou, Y., Xing, M., Zhang, M., Ma, P., & Yu, H. (2025). A Phase Error Reverse Recovery Method for Bistatic Forward-Looking SAR Based on Ground Combined Beam Coordinate. IEEE Transactions on Geoscience and Remote Sensing, 63, 5224315. https://doi.org/10.1109/TGRS.2025.3637863

  2. Li, Y., Xu, G., Zhou, S., Xing, M., & Song, X. (2022). A novel CFFBP algorithm with noninterpolation image merging for bistatic forward-looking SAR focusing. IEEE Transactions on Geoscience and Remote Sensing, 60, 5225916. https://doi.org/10.1109/TGRS.2022.3188654

  3. Espeter, T., Walterscheid, I., Klare, J., Brenner, A. R., & Ender, J. H. G. (2011). Bistatic forward-looking SAR: Results of a spaceborne–airborne experiment. IEEE Geoscience and Remote Sensing Letters, 8(4), 765-768. https://doi.org/10.1109/LGRS.2010.2102337

  4. Walterscheid, I., Ender, J. H. G., Brenner, A. R., & Loffeld, O. (2010). Bistatic SAR processing and experiments. IEEE Transactions on Geoscience and Remote Sensing, 48(8), 3177-3189. https://doi.org/10.1109/TGRS.2010.2045502

  5. Bao, M., Zhou, S., Yang, L., Xing, M., & Zhao, L. (2021). Data-driven motion compensation for airborne bistatic SAR imagery under fast factorized back projection framework. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 1728-1740. https://doi.org/10.1109/JSTARS.2020.3048743

  6. Pu, W., et al. (2016). Motion errors and compensation for bistatic forward-looking SAR with cubic-order processing. IEEE Transactions on Geoscience and Remote Sensing, 54(12), 6940-6957. https://doi.org/10.1109/TGRS.2016.2594118

  7. Ulander, L. M. H., Hellsten, H., & Stenstrom, G. (2003). Synthetic-aperture radar processing using fast factorized back-projection. IEEE Transactions on Aerospace and Electronic Systems, 39(3), 760-776. https://doi.org/10.1109/TAES.2003.1238734

  8. National Natural Science Foundation of China. (2023). Key Program Grant 62331020. http://www.nsfc.gov.cn/

  9. National Natural Science Foundation of China. (2022). Grant U22B2015. http://www.nsfc.gov.cn/

  10. National Science Fund for Excellent Young Scholars. (2022). Grant 62222113. http://www.nsfc.gov.cn/
