Friday, June 28, 2024

High Resolution Millimeter Wave Imaging Based on FMCW Radar Systems at W-Band

Figure 1: Geometry of the model.

Electrical Engineering and Systems Science > Signal Processing

In this paper, we present a unique 2D high resolution, compact, low-cost, low-weight, and highly accurate millimeter wave (mm-Wave) imagery system capable of operating in all weather conditions. We describe the mm-Wave imaging process in detail and present several novel signal processing methods with their applications.

To create the array, we utilize the Synthetic Aperture Radar (SAR) concept. The imagery system presented in this work can strongly compete with Lidar systems, as its resolution limit is at the same level. Furthermore, in contrast to Lidar systems, our imagery system can operate in heavy rain and dense fog and produce high quality images.

We use our custom-made Frequency Modulated Continuous Wave (FMCW) radar operating at W-band with 33 GHz bandwidth for data collection and present the results.
Subjects: Signal Processing (eess.SP)
Cite as: arXiv:2406.18244 [eess.SP]
  (or arXiv:2406.18244v1 [eess.SP] for this version)
  https://doi.org/10.48550/arXiv.2406.18244

Submission history

From: Shahrokh Hamidi
[v1] Wed, 26 Jun 2024 10:47:40 UTC (32,234 KB)

Summary

 This paper presents a high-resolution millimeter wave imaging system based on a custom-made FMCW radar operating in the W-band (75-108 GHz). The key points are:

1. System Design:

- Custom FMCW radar operating from 75-108 GHz with 33 GHz bandwidth
- Uses 24 GHz bandwidth (78-102 GHz) for imaging, providing 6.3 mm range resolution
- Synthetic aperture radar (SAR) approach with 40 cm array length


2. Imaging Techniques:

- Back-projection algorithm for image reconstruction
- Apodization methods (dual and complex dual) for sidelobe reduction while preserving resolution
- Multi-spectral imaging by dividing bandwidth into 3 sub-bands
- Image denoising using modified Perona-Malik diffusion technique
- Sub-array based processing for improved SNR (approx. 3 dB gain)


3. Experimental Results:

- High-resolution images of various targets (bikes, cars) obtained
- System demonstrated to work in different conditions (day, night, rain, fog)
- Images show fine details of targets (e.g. bike spokes, car headlights)


4. Key Advantages:

- Compact, lightweight, low-cost system
- High resolution comparable to LiDAR
- All-weather capability (works in rain and fog)

5. Applications:

- Material characterization
- High-resolution localization
- Non-destructive testing
- Near-field imaging


The paper demonstrates that this mm-wave imaging system can produce high-quality, high-resolution images in various conditions, competing with LiDAR systems while offering better all-weather performance.

Apodization

Apodization is a signal processing technique used in radar imaging to reduce sidelobe levels while preserving the resolution of the system. The paper discusses two types of apodization: dual apodization and complex dual apodization. Let's break this down:

1. Purpose of Apodization:
   - Reduce sidelobe levels in the reconstructed image
   - Maintain the resolution limit of the system
   - Improve overall image quality

2. Traditional Windowing vs. Apodization:
   - Traditional windowing reduces sidelobes but also degrades resolution
   - Apodization aims to reduce sidelobes without sacrificing resolution

3. Dual Apodization:
   - Uses two different window functions (e.g., rectangular and Hamming)
   - Applied to the absolute value of the reconstructed image
   - Provides reasonable sidelobe reduction while maintaining resolution

4. Complex Dual Apodization:
   - Works on both real and imaginary parts of the reconstructed image separately
   - Provides better results compared to regular dual apodization
   - Slightly higher computational complexity

5. Implementation:
   - The paper uses rectangular (no windowing) and Hamming windows for apodization
   - Multiple windows can be used, creating a multi-layer algorithm for sidelobe reduction

6. Results:
   - Figure 13 in the paper demonstrates the effects of apodization:
     a) Image without apodization
     b) Image with dual apodization
     c) Image with complex dual apodization
   - Complex dual apodization shows the best results in terms of sidelobe reduction

7. Benefits:
   - Improved image quality
   - Better target discrimination
   - Reduced false detections caused by high sidelobes

8. Trade-offs:
   - Increased computational complexity, especially for complex dual apodization
   - Potential for slight loss in sensitivity (though less than traditional windowing)

9. Application in this system:
   - Applied post-processing to the reconstructed images
   - Helps in producing clearer, more accurate representations of the targets

Apodization, especially complex dual apodization, allows this millimeter-wave imaging system to produce high-quality images with reduced sidelobe artifacts while maintaining the high resolution enabled by the system's wide bandwidth. This technique contributes significantly to the system's ability to produce detailed, accurate images of various targets in different environmental conditions.
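To make the decision logic concrete, here is a minimal NumPy sketch of dual and complex dual apodization, assuming `img_rect` and `img_hamming` are the same complex reconstructed image computed once with rectangular and once with Hamming weighting (both names are placeholders). The sign/minimum rule follows the common formulation in the apodization literature; the paper's exact variant may differ.

```python
import numpy as np

def dual_apodization(img_rect, img_hamming):
    """Pixel-wise minimum of the two magnitude images: keeps the narrow
    rectangular-window main lobe but the low Hamming-window sidelobes."""
    return np.minimum(np.abs(img_rect), np.abs(img_hamming))

def _cda_component(a, b):
    """Complex dual apodization rule for one real-valued component:
    zero where the two windowed results disagree in sign, otherwise
    keep the smaller-magnitude value."""
    return np.where(np.sign(a) != np.sign(b), 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

def complex_dual_apodization(img_rect, img_hamming):
    """Apply the component rule to real and imaginary parts separately."""
    return (_cda_component(img_rect.real, img_hamming.real)
            + 1j * _cda_component(img_rect.imag, img_hamming.imag))
```

Operating on the real and imaginary parts separately is what distinguishes the complex variant and is why it suppresses sidelobes more aggressively at slightly higher cost.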

Theoretical Performance Predictions

Resolution in Range

Let's calculate the theoretical range resolution based on the information provided in the paper.

The theoretical range resolution for a radar system is indeed given by the formula:

Range Resolution = c / (2 * B)

Where:
c = speed of light (3 x 10^8 m/s)
B = bandwidth in Hz

From the paper, we know:
Bandwidth used for imaging = 24 GHz (78 GHz to 102 GHz)

Let's calculate:

Range Resolution = (3 x 10^8) / (2 * 24 x 10^9)
                 = 3 x 10^8 / 48 x 10^9
                 = 0.00625 m
                 = 6.25 mm

This theoretical range resolution of 6.25 mm is very close to the 6.3 mm resolution stated in the paper. The slight difference (0.05 mm) could be due to rounding or perhaps accounting for some system non-idealities.

This calculation confirms that the stated resolution in the paper closely matches the theoretical limit for the given bandwidth, indicating that the system is operating near its optimal performance in terms of range resolution.
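The arithmetic is simple enough to script; a quick check using the paper's values:

```python
# Reproducing the range-resolution arithmetic above (values from the paper).
c = 3e8                       # speed of light, m/s
B = 24e9                      # imaging bandwidth (78-102 GHz), Hz
print(f"range resolution = {c / (2 * B) * 1e3:.2f} mm")   # -> 6.25 mm
```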

Cross Range Resolution

The theoretical cross-range resolution in Synthetic Aperture Radar (SAR) is more complex than range resolution and depends on several factors. The basic formula for cross-range resolution in SAR is:

Cross-Range Resolution = λ * R / (2 * L)

Where:
λ = wavelength
R = range to the target
L = synthetic aperture length

From the paper, we know:
- Synthetic aperture length (L) = 40 cm
- The system operates in the W-band, using frequencies from 78 GHz to 102 GHz

Let's calculate for the center frequency (90 GHz):

λ = c / f = (3 x 10^8) / (90 x 10^9) ≈ 3.33 mm

The range (R) to the target isn't explicitly stated for all scenarios, but let's consider the example where the bike was placed at 2.05 m (from the impulse response measurement):

Cross-Range Resolution = (3.33 x 10^-3 * 2.05) / (2 * 0.4)
                       ≈ 8.5 mm

This is a theoretical best-case resolution at 2.05 m range. A few important points:

1. The cross-range resolution varies with range, getting coarser as range increases.

2. This calculation assumes perfect focusing and doesn't account for non-ideal factors in real systems.

3. The actual resolution might be slightly different due to the use of a range of frequencies rather than a single frequency.

4. The paper doesn't explicitly state the cross-range resolution achieved, so we can't directly compare this theoretical value to their results.

5. In practice, the achieved resolution might be slightly worse than this theoretical value due to various system and processing limitations.

This calculation shows that for targets close to the radar (around 2 m), the cross-range resolution is theoretically in the same order of magnitude as the range resolution, though slightly coarser. As the range increases, the cross-range resolution would degrade proportionally.
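A small script makes the range dependence explicit. It uses the far-field approximation discussed above, with the paper's aperture and center frequency; the 8.5 m entry corresponds to the farthest scene shown, while 5.0 m is just an illustrative intermediate value.

```python
# Far-field cross-range resolution vs. range for the stated aperture.
c, f_c, L = 3e8, 90e9, 0.4    # center frequency and 40 cm aperture (paper values)
lam = c / f_c                 # ~3.33 mm
for R in (2.05, 5.0, 8.5):    # example ranges in meters
    print(f"R = {R:4.2f} m -> cross-range resolution ~ {lam * R / (2 * L) * 1e3:.1f} mm")
```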

Near Field Effects

To determine if the target is in the far-field region, we need to calculate the far-field distance and compare it to the target range. The far-field region for an antenna or synthetic aperture typically begins at a distance of:

R_ff = 2D^2 / λ

Where:
D = the largest dimension of the antenna or synthetic aperture
λ = wavelength

From the previous information:
- Synthetic aperture length (D) = 40 cm = 0.4 m
- Center frequency ≈ 90 GHz
- λ ≈ 3.33 mm = 0.00333 m

Let's calculate:

R_ff = 2 * (0.4)^2 / 0.00333
     ≈ 96 meters

The paper mentions an example where the bike was placed at 2.05 m, and other experiments seem to be conducted at similar or shorter ranges (a few meters).

Given that the far-field distance is about 96 meters, and the targets are being imaged at distances of just a few meters, we can conclude that:

The targets are not in the far-field region. They are in the near-field or Fresnel region.

This near-field imaging has several implications:

1. The wavefront curvature is significant and must be accounted for in the imaging algorithms.
2. The cross-range resolution formula we used earlier is an approximation and may not be entirely accurate in this near-field scenario.
3. Near-field imaging can potentially provide better resolution than far-field imaging, but it requires more complex processing.
4. The back-projection algorithm mentioned in the paper is well-suited for near-field imaging as it naturally accounts for wavefront curvature.

The near-field nature of this imaging scenario is likely one of the reasons why the system can achieve such high-resolution images of the targets.
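For reference, the Fraunhofer boundary check is a one-liner with the values above:

```python
# Fraunhofer far-field boundary for the synthetic aperture.
c, f_c, D = 3e8, 90e9, 0.4
lam = c / f_c
R_ff = 2 * D**2 / lam                        # ~96 m
for R in (2.05, 8.5):
    zone = "far field" if R >= R_ff else "near field (Fresnel region)"
    print(f"target at {R} m: {zone}")
```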
 
Upon carefully reviewing the paper, I don't find any explicit mention of near-field effects or discussions about near-field versus far-field imaging.

The paper does not:
1. Discuss the distinction between near-field and far-field imaging
2. Mention Fresnel zone considerations
3. Explicitly address wavefront curvature issues
4. Discuss any special processing techniques specific to near-field imaging

The authors use the back-projection algorithm for image reconstruction, which is capable of handling both near-field and far-field scenarios. However, they don't specifically highlight its applicability to near-field imaging.

This omission is noteworthy, given that the imaging scenarios described (targets at distances of a few meters) are clearly in the near-field region for the given synthetic aperture size and operating frequencies.

The lack of discussion about near-field effects is a significant point that could have been addressed in the paper, as it has important implications for the imaging process and the interpretation of results. Near-field effects can influence the achievable resolution and the complexity of the required processing algorithms.

Uncompensated near-field effects in SAR imaging can lead to several observable artifacts and distortions in the reconstructed image. Let's consider what these effects might be and whether we can observe them in the images presented in the paper:

Potential near-field effects:

  1. Defocusing:
    • Objects might appear blurred or smeared, especially at the edges of the image.
    • Not clearly observed in the presented images, which appear well-focused.
  2. Phase errors:
    • Could lead to geometric distortions or displacement of targets.
    • Not obviously present in the images; targets appear to be in expected positions.
  3. Range curvature:
    • Straight lines might appear curved, especially at the edges of the image.
    • Not clearly visible in the presented images.
  4. Amplitude variations:
    • Uneven brightness across the image, particularly from center to edge.
    • Not readily apparent in the images shown.
  5. Resolution variation:
    • Resolution might vary across the image, typically degrading towards the edges.
    • Difficult to assess from the presented images without reference measurements.
  6. Sidelobe distortion:
    • Asymmetric or increased sidelobes, especially for off-center targets.
    • The images after processing (e.g., Figure 10) show well-controlled sidelobes.
  7. Depth of focus issues:
    • Objects at different ranges might show different levels of focus.
    • Not clearly observed; all parts of the targets appear similarly focused.

Observations from the paper:

  1. The images (e.g., Figures 9, 10, 17-23) generally appear well-focused and free from obvious distortions.
  2. Fine details (like bike spokes in Figure 10) are visible across the image, suggesting good and consistent resolution.
  3. There's no obvious curvature or geometric distortion in straight elements like the bike frame.
  4. The back-projection algorithm used by the authors is known to handle near-field effects well, which might explain the lack of obvious artifacts.

Conclusion:

The absence of clear near-field artifacts in the images suggests that:

  1. The back-projection algorithm is effectively compensating for near-field effects, even if not explicitly mentioned.
  2. The system's design and processing chain are handling near-field imaging well, possibly without the need for additional specific compensation.
  3. The high quality of the images across the field of view indicates that near-field effects are not significantly impacting the imaging performance.

However, without access to the raw data or more detailed analysis, it's difficult to completely rule out all near-field effects. Some subtle impacts might be present but not easily discernible in the published images. A more rigorous analysis comparing the achieved results with theoretical predictions for both near-field and far-field scenarios would be needed to fully assess any residual near-field effects.

Missing System Parameters

The phase noise of the PLL FMCW generator after frequency multiplication is a crucial parameter that would significantly affect the imaging performance, and it's a notable omission in the paper. Let's discuss this:

1. Importance of Phase Noise in FMCW Radar:
   - Phase noise directly impacts the radar's ability to detect weak targets in the presence of strong ones.
   - It affects the dynamic range of the system.
   - Can lead to degradation in range resolution and accuracy.

2. Impact on Imaging Performance:
   - High phase noise can result in increased background noise in the image.
   - It can cause spreading of strong target responses, potentially masking nearby weaker targets.
   - May lead to false targets or ghosting effects in the image.

3. Relevance to This System:
   - Given the high resolution claimed by the system, low phase noise would be crucial.
   - The wide bandwidth (33 GHz) and high frequency multiplication (to reach W-band) make phase noise particularly critical.

4. What's Missing in the Paper:
   - No specification of the phase noise of the initial PLL FMCW generator.
   - No discussion of how phase noise might be affecting the final image quality.
   - No mention of any techniques used to mitigate phase noise effects.

5. Potential Implications:
   - Without knowing the phase noise characteristics, it's difficult to fully assess the quality and reliability of the imaging results.
   - The actual range resolution and dynamic range of the system might be affected by phase noise, potentially being worse than the theoretical values.

6. Related Considerations:
   - The paper also doesn't discuss frequency stability or linearity of the chirp, which are related to phase noise and equally important for FMCW performance.

This omission highlights a gap in the system characterization presented in the paper. For a comprehensive understanding of the system's capabilities and limitations, information about the phase noise characteristics would be essential. It would allow readers to better interpret the image quality and understand any limitations in the system's performance, especially in scenarios with targets of varying reflectivity or in cluttered environments.

For future work or for anyone attempting to replicate or build upon this research, characterizing and specifying the phase noise of the FMCW generator would be a crucial step in fully understanding and optimizing the imaging performance. Other missing parameters include the transmitter power and antenna gains, which are needed to determine the noise-equivalent reflectivity.

Unambiguous Range

Let's review the paper for information about the Pulse Repetition Frequency (PRF) and maximum unambiguous range:

1. Pulse Repetition Frequency (PRF):
   - The paper does not explicitly mention a PRF.
   - However, it's important to note that this is an FMCW (Frequency Modulated Continuous Wave) system, not a pulsed radar.
   - For FMCW systems, the equivalent parameter would be the chirp repetition frequency.

2. Chirp Parameters:
   - The paper does mention that the chirp time is set to 5 msec.
   - This implies a maximum chirp repetition frequency of 200 Hz (1/5 msec), assuming no delay between chirps.

3. Maximum Unambiguous Range:
   - There is no direct mention of the maximum unambiguous range in the paper.
   - In FMCW systems, the maximum unambiguous range is typically determined by the chirp duration and the ability to sample the beat frequency.

4. Related Information:
   - The paper mentions using an ADC with a sampling frequency of 5 MHz.
   - Experiments are described with targets at various ranges, with the farthest explicitly mentioned being 8.5 m (Figure 23). A rough maximum-range estimate implied by these numbers is sketched after this list.

5. What's Missing:
   - No discussion of range ambiguities or how they are handled.
   - No mention of the maximum designed range of the system.
   - No information on how the chirp parameters were chosen in relation to the expected target ranges.
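Under loudly stated assumptions (the full 33 GHz is swept within the 5 ms chirp, and the usable beat frequency is Nyquist-limited to fs/2; neither is confirmed in the paper), the stated numbers imply a beat-limited maximum range:

```python
# Rough maximum-range estimate from the stated chirp and ADC numbers.
# Assumes the full 33 GHz is swept in the 5 ms chirp and the usable beat
# frequency is Nyquist-limited to fs/2; neither is confirmed in the paper.
c = 3e8
B, T = 33e9, 5e-3                   # chirp bandwidth and duration
fs = 5e6                            # ADC sampling rate
slope = B / T                       # 6.6e12 Hz/s
f_beat_max = fs / 2                 # 2.5 MHz
R_max = c * f_beat_max / (2 * slope)
print(f"beat-limited maximum range ~ {R_max:.0f} m")   # ~57 m
```

At roughly 57 m under these assumptions, this comfortably covers the few-meter scenes shown, which may be why ambiguity handling is never discussed.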

The omission of these details is significant because:

- In FMCW systems, the chirp duration and repetition rate affect the maximum unambiguous range and the ability to detect moving targets.
- Understanding these parameters is crucial for assessing the system's capabilities in different scenarios.
- It's important for determining the system's ability to handle multiple targets at different ranges.

Without this information, it's challenging to fully evaluate the system's range capabilities and limitations. For a complete system description, it would be beneficial to have:

- The chirp repetition rate (or inter-chirp delay, if any)
- The maximum designed unambiguous range
- Any techniques used to mitigate range ambiguities, if applicable

This information would provide a more comprehensive understanding of the system's operational parameters and its suitability for different imaging scenarios.

Rain Propagation Effects

Atmospheric loss, particularly due to rain, can significantly affect RF propagation in the W-band (75-110 GHz). Let's break this down:

1. W-band Characteristics:
   - W-band frequencies are highly susceptible to atmospheric attenuation, especially from water vapor and rain.


2. Rain Attenuation:
   - Rain attenuation increases with frequency, making W-band particularly vulnerable.
   - The attenuation is dependent on rain rate (mm/hour) and increases dramatically with heavier rainfall.

3. Specific Attenuation:
   - For W-band frequencies, the specific attenuation due to rain can range from about 1 dB/km for light rain (5 mm/hour) to over 10 dB/km for heavy rain (50 mm/hour); a toy scaling model is sketched after this list.
   - In extreme rainfall, attenuation can exceed 20 dB/km.

4. Impact on System Performance:
   - Reduced maximum range
   - Decreased signal-to-noise ratio
   - Potential degradation of image quality

5. Variability:
   - Rain attenuation can vary significantly over short distances and time periods, potentially causing fluctuations in system performance.

6. Relation to This Paper:
   - The paper mentions collecting data in rainy conditions (Figure 19), but doesn't provide quantitative analysis of rain effects.
   - No specific discussion on how rain affects the system's performance or range.

7. Mitigation Strategies:
   - Increasing transmit power (not discussed in the paper)
   - Using larger antennas to increase gain (mentioned using horn antennas for longer range)
   - Adaptive signal processing techniques (not explicitly discussed)

8. Other Considerations:
   - Water films on the antenna or radome can cause additional losses and should be considered in wet conditions.

9. Importance in System Design:
   - Understanding rain attenuation is crucial for determining the operational limits of the system in various weather conditions.
   - It affects the reliability and consistency of the imaging system in outdoor environments.
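As a rough illustration of how steeply rain attenuation scales, here is a toy power-law model in the spirit of ITU-R P.838. The coefficients k and alpha below are ballpark values for frequencies near 94 GHz, assumed for illustration rather than taken from the paper or the Recommendation; note that a monostatic radar pays the loss twice.

```python
def rain_attenuation_db_per_km(rain_rate_mm_per_hr, k=1.0, alpha=0.75):
    """Power-law specific attenuation gamma = k * R**alpha (dB/km, one-way).
    k and alpha are illustrative placeholders for ~94 GHz."""
    return k * rain_rate_mm_per_hr ** alpha

for rr in (5, 25, 50):                           # light, moderate, heavy rain
    g = rain_attenuation_db_per_km(rr)
    print(f"{rr:2d} mm/h -> ~{g:4.1f} dB/km one-way, ~{2 * g:4.1f} dB/km two-way")
```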

While the paper demonstrates the system's ability to function in rainy conditions, a more detailed analysis of how rain affects the system's performance would be valuable. This could include:

- Quantitative measurements of system performance under different rain intensities
- Discussion of any adaptive techniques used to maintain performance in wet conditions
- Analysis of how rain attenuation affects the maximum operational range and image quality

Understanding these effects is crucial for assessing the system's practical utility in various weather conditions, especially for outdoor applications where consistent performance is required regardless of weather.

Let's review the power-related information the paper does provide:

1. Transmit Power:
   - The paper does not specify the transmit power of the radar system.
   - The only power-related information given is that the initial FMCW generator outputs a signal with 10 dBm power, but this is before frequency multiplication and amplification stages.

2. Antenna Gain:
   - There is no explicit mention of antenna gain for either the transmit or receive antennas.
   - The paper mentions using either an open waveguide or a horn antenna, depending on the scenario, but doesn't provide gain figures for these.

3. Related Information:
   - For longer-range imaging (8.5 m example in Figure 23), horn antennas were used on both transmit and receive sides to increase range, but their specifications are not provided.

Given these omissions:

1. Range Calculation in Rain:
   - It's not possible to accurately calculate the system's range in rainy conditions without knowing the transmit power and antenna gains.
   - These parameters are crucial for determining the link budget and, consequently, the maximum operational range under various atmospheric conditions.

2. Impact on Assessment:
   - Without this information, it's challenging to quantitatively assess the system's performance in different weather conditions.
   - We can't determine how much the range might be reduced in rainy conditions compared to clear weather.

3. System Characterization Gap:
   - This lack of information represents a significant gap in the system's characterization.
   - It limits the ability to fully understand the system's capabilities and limitations, especially in adverse weather conditions.

4. Replication and Comparison:
   - The absence of these key parameters makes it difficult for other researchers to replicate the results or compare this system with other millimeter-wave imaging systems.

For a comprehensive understanding of the system's performance in rain, we would need:

- Transmit power at the antenna
- Antenna gain for both transmit and receive antennas
- Receiver sensitivity or noise figure
- Details about any weather-adaptive features of the system


Without these specifications, while we can qualitatively say that the system operates in rainy conditions (as demonstrated in Figure 19), we cannot quantitatively determine how rain affects the maximum range or image quality at different distances. This is a notable limitation in the paper's presentation of the system's capabilities.

Figures

The paper includes several figures that illustrate the system design, experimental setup, and imaging results. I looked at the figures to get a feel for system performance. Here's a summary of the key figures:

1. Figure 1: Shows the geometry of the imaging model.

2. Figure 2: Illustrates the process of dividing the bandwidth into 3 sub-bands for multi-spectral imaging. Here's a breakdown of the figure and the rationale behind this approach:

1. Figure Description:
   The figure shows the frequency spectrum of the radar system, which is divided into three equal parts:
   - Lower band
   - Middle band
   - Upper band

2. Rationale for Using Multiple Bands:

   a) Material Characterization: Different materials may respond differently to various frequencies within the radar's bandwidth. By splitting the bandwidth into sub-bands, the system can potentially distinguish between materials based on their frequency-dependent responses.

   b) Enhanced Information: Each sub-band provides a slightly different perspective of the target. When combined, these multiple perspectives can reveal more information about the target than a single, full-bandwidth image.

   c) Color Representation: The paper mentions assigning different colors (red, green, and blue) to the images created from each sub-band. This allows for a color composite image that can visually highlight frequency-dependent characteristics of targets.

   d) Frequency-Dependent Features: Some features of targets might be more prominent in certain frequency ranges. Using multiple bands allows for capturing these frequency-dependent features.

   e) Potential for Reducing Noise or Clutter: In some cases, certain types of noise or clutter might be more prominent in one sub-band than others. Multi-spectral imaging could potentially help in identifying and mitigating such issues.

3. Implementation:
   - The system creates three separate images, one for each sub-band.
   - These images are then assigned red, green, and blue colors respectively.
   - The final image is a composite of these three color-coded images.

4. Interpretation:
   - Targets that have the same response across all sub-bands appear white in the final image.
   - Targets with frequency-dependent responses appear in non-white colors, potentially allowing for material differentiation or feature highlighting.

The paper notes that this multi-spectral imaging approach paves the way for applications such as material characterization. It's a technique borrowed from other fields like satellite imaging, where multi-spectral data is often used to gain more information about observed areas or objects.


3. Figure 3: Depicts the concept of sub-array based processing, which is a technique used to improve the Signal-to-Noise Ratio (SNR) of the imaging system. Here's a detailed explanation of Figure 3 and how sub-array processing works:

1. Figure Description: The figure shows the entire synthetic aperture array being divided into smaller overlapping sub-arrays. Each sub-array is then used to form a beam directed towards a specific spot in the field of view.

2. Sub-Array Processing Concept:
   - The full synthetic aperture is divided into smaller, overlapping sub-arrays.
   - Each sub-array performs its own beamforming, creating a "sub-beam".
   - All these sub-beams are then combined in a final beamforming stage to create the final image.

3. How It Works:
   a) Array Partitioning: The full array is divided into smaller sub-arrays. In the example given, they use sub-arrays of length 10 with 9 elements overlapping between adjacent sub-arrays.
   
   b) Sub-Array Beamforming: Each sub-array performs beamforming, directing its beam towards a specific point in the field of view.
   
   c) Final Beamforming: The outputs from all sub-arrays are then combined in a final beamforming stage to generate the final image.

4. Benefits:
   - SNR Improvement: The paper reports achieving approximately 3 dB gain in SNR using this method.
   - This gain is achieved purely through signal processing, without changing the hardware.

5. Trade-offs:
   - Increased Computational Complexity: Processing multiple sub-arrays requires more computational resources.
   - Narrower Field of View: As the number of elements in each sub-array increases, the overall field of view of the system becomes narrower.

6. Implementation Details:
   - The paper mentions using sub-arrays of length 10 with 9 elements overlap.
   - They demonstrate steering the combined beam left by 15 degrees, center (boresight), and right by 15 degrees.

7. Limitations:
   - The gain is not directly proportional to the number of elements in the sub-array due to noise correlation introduced by the overlap.
   - The narrowing of the field of view means that to image a wide area, the beam needs to be steered multiple times.

The sub-array processing technique is a clever way to improve the SNR of the system through signal processing alone, without requiring hardware changes. This can be particularly valuable in millimeter-wave systems where achieving high SNR can be challenging due to higher noise levels and path losses at these frequencies.
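A minimal narrowband sketch of the two-stage structure follows, assuming `element_signals` holds one complex sample per synthetic-array position after range compression (a placeholder, not the paper's data format). The actual system is wideband and processed via back-projection, so this only illustrates the overlapping-sub-array bookkeeping.

```python
import numpy as np

def subarray_beamform(element_signals, positions, focus_xy, f_c,
                      sub_len=10, overlap=9, c=3e8):
    """Two-stage delay-and-sum over overlapping sub-arrays (narrowband sketch).

    element_signals: (N,) complex samples, one per synthetic-array position
    positions:       (N, 2) element coordinates in meters
    focus_xy:        (x, y) focus point in meters
    """
    r = np.linalg.norm(positions - np.asarray(focus_xy), axis=1)
    steered = element_signals * np.exp(1j * 4 * np.pi * f_c * r / c)  # co-phase
    step = sub_len - overlap                    # step of 1 for a 9-element overlap
    starts = range(0, len(steered) - sub_len + 1, step)
    sub_beams = np.array([steered[s:s + sub_len].sum() for s in starts])  # stage 1
    return sub_beams.sum()                      # stage 2: final beamforming
```

Because adjacent sub-arrays share 9 of their 10 elements, their noise is correlated, which is why the SNR gain saturates near 3 dB rather than growing with sub-array count.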


4. Figure 4: Block diagram of the FMCW radar system.
Figure 4 shows the block diagram of the FMCW radar system used in this paper. Let's break down the components and explain how the system works:

1. FMCW Generator:
   - Uses a Phase-Locked Loop (PLL) to generate the initial frequency spectrum from 12.5 GHz to 18 GHz.
   - Output power is 10 dBm.

2. Frequency Multiplication:
   - The signal goes through a doubler and tripler to extend the frequency range.
   - This process increases the frequency to 75-108 GHz (W-band).

3. Amplification Stages:
   - After each multiplication step, there are amplification stages to maintain proper signal power.

4. Power Splitter:
   - Divides the signal for transmission and as a local oscillator for the receiver.

5. Transmission Arm:
   - Uses either an open waveguide or a horn antenna to transmit the signal.

6. Receiving Arm:
   - Low Noise Amplifier (LNA): 25 dB gain, 4 dB noise figure.
   - Mixer: Combines received signal with local oscillator to produce IF (Intermediate Frequency) signal.

7. IF Processing:
   - High-pass filter: Removes undesirable low-frequency components and TX-RX leakage.
   - Amplification and filtering stages to enhance signal quality.

8. Analog-to-Digital Converter (ADC):
   - Converts the IF signal to digital format for processing.
   - Includes an Anti-Aliasing Filter (AAF).
   - Sampling frequency: 5 MHz

9. Digital Processing:
   - Performs complex digital processing algorithms on the digitized signal.

How it works:

1. The FMCW generator produces a frequency-modulated signal (chirp) in the 12.5-18 GHz range.

2. This signal is frequency-multiplied to reach the 75-108 GHz range, effectively increasing the bandwidth.

3. The signal is split: one part is transmitted, the other used as a local oscillator.

4. The transmitted signal reflects off targets in the environment.

5. The reflected signal is received, amplified by the LNA, and mixed with the local oscillator signal.

6. This mixing produces an IF signal containing beat frequencies corresponding to target distances.

7. The IF signal is filtered, amplified, and digitized.

8. Digital processing algorithms (like those described elsewhere in the paper) are applied to the digitized signal to form images.

Key features:
- Wide bandwidth (33 GHz total, 24 GHz used for imaging) enables high range resolution.
- High operating frequency (W-band) allows for a compact system.
- FMCW approach provides power efficiency and lower complexity compared to pulsed radar.
This system design allows for high-resolution imaging in a compact, efficient package, suitable for various applications in challenging environments.
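A small simulation ties the block diagram to numbers: a point target at range R produces a beat tone at f_b = 2 * slope * R / c, and an FFT of the IF signal recovers the range. The chirp and ADC values follow the paper; the target range and noiseless signal are illustrative.

```python
import numpy as np

c, B, T, fs = 3e8, 33e9, 5e-3, 5e6             # chirp/ADC values from the paper
slope = B / T                                  # chirp slope, Hz/s
R_true = 2.05                                  # illustrative target range, m
t = np.arange(int(T * fs)) / fs                # fast-time sample instants
if_signal = np.cos(2 * np.pi * (2 * slope * R_true / c) * t)  # ideal beat tone
spec = np.abs(np.fft.rfft(if_signal))
f_axis = np.fft.rfftfreq(len(t), 1 / fs)
R_est = c * f_axis[np.argmax(spec)] / (2 * slope)
print(f"estimated range: {R_est:.3f} m")       # ~2.05 m
```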


5. Figure 5: Shows the impulse response of the system using a corner reflector.
The impulse response itself doesn't directly show the SNR:

1. What the impulse response actually shows:
   - The response of the system to a point target (a corner reflector in this case)
   - The range resolution capability of the system
   - The signal-to-clutter ratio
   - Sidelobe levels relative to the main lobe

2. SNR determination: The SNR is determined by factors such as:
   - Transmit power
   - Receiver noise figure
   - Antenna gain
   - Path loss
   - Target radar cross-section
   - Phase noise in the oscillator and FMCW generator

3. Relationship to clutter:
   - The impulse response can give an indication of the system's ability to distinguish targets from background clutter
   - This is more accurately described as the Signal-to-Clutter Ratio (SCR), not SNR

4. System performance: While not directly showing SNR, the impulse response does demonstrate the overall system performance in terms of resolution and clutter suppression

It's an important distinction that the impulse response primarily shows the system's resolution capability and response to a point target, rather than directly indicating the SNR. The SNR would need to be determined through separate measurements or calculations based on the system parameters.
 
Let's examine Figure 5 more closely to extract information:

1. 3 dB Range Resolution: From Figure 5(b), we can see the 3 dB width of the main lobe. The paper states that with 24 GHz bandwidth, they achieve a 6.3 mm range resolution. This appears consistent with what's shown in the figure, though it's difficult to read the exact value from the image provided.

2. Close-in Sidelobe Level: The closest sidelobes appear to be approximately 13-15 dB below the main lobe peak. This is a reasonable level for an unwindowed or lightly windowed response.

3. Far Sidelobe Level: The general far sidelobe level seems to be around 20-25 dB below the main lobe peak.

4. Significant Lobe at 2.3 m: There is a significant lobe at about 2.3 m, approximately 0.25 m beyond the main reflection at 2.05 m. This lobe appears to be only about 6-8 dB below the main peak, which is much higher than typical sidelobes.

This higher lobe at 2.3 m could be due to several factors:
  • A secondary reflection from the experimental setup, possibly from another part of the corner reflector or the mounting structure.
  • A multipath reflection involving the ground or nearby objects.
  • An artifact of the radar system itself, possibly due to internal reflections or non-linearities in the transmit/receive chain.
The presence of this significant secondary lobe is important to note as it could affect the system's ability to accurately resolve closely spaced targets. In a real-world imaging scenario, this could lead to ghost targets or ambiguities in target location.

This feature of the impulse response warrants further investigation. In a thorough characterization of the system, the researchers would typically try to identify the source of this secondary lobe and mitigate it if possible, as it could impact the overall imaging performance of the radar.


6. Figure 6: Displays the power spectrum of the system over the entire bandwidth.
Figure 6 shows the power spectrum of the radar system over its entire bandwidth, ranging from 75 GHz to 108 GHz. This is a crucial characterization of the system's frequency response. Let's analyze it in detail:

1. Overall Spectrum:
   The figure shows the normalized power (in dB) across the frequency range of 75-108 GHz.

2. Spectrum Shape:
   - The spectrum is not flat across the entire bandwidth.
   - There are significant power fluctuations at the lower and upper ends of the frequency range.
   - The central part of the spectrum appears to be relatively flat.

3. Usable Bandwidth:
   The paper mentions that for imaging, they use the frequency range from 78 GHz to 102 GHz, which corresponds to 24 GHz of bandwidth. This selection is likely based on the flatness of the spectrum in this region.

4. 3dB Band Limits:
   While the paper doesn't explicitly state the 3dB band limits, we can estimate them from the figure:
   - The lower 3dB point appears to be around 77-78 GHz.
   - The upper 3dB point seems to be around 104-105 GHz.
   - This gives an approximate 3dB bandwidth of 26-28 GHz.

5. Power Fluctuations:
   - There are significant power drops below 77 GHz and above 105 GHz.
   - These regions are likely excluded from the imaging bandwidth to maintain signal quality.

6. Significance:
   - The flatness of the spectrum in the used bandwidth (78-102 GHz) is crucial for good image quality.
   - A flat spectrum ensures that all frequencies contribute equally to the imaging process, maintaining resolution and reducing artifacts.

7. System Design Implications:
   - The shape of this spectrum likely influenced the choice of the 24 GHz bandwidth used for imaging.
   - The researchers have chosen to use the most stable and flat portion of the available spectrum.

This power spectrum is a key characterization of the radar system. It shows that while the system can theoretically operate from 75-108 GHz, practical considerations (likely related to component performance and system design) limit the usable bandwidth for high-quality imaging to 78-102 GHz. This careful selection of the operating bandwidth helps ensure the high-quality, high-resolution images demonstrated later in the paper.
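Estimating 3 dB band edges like this is mechanical once the spectrum exists as data; a sketch follows. The synthetic stand-in spectrum is made up, since the paper only publishes the figure.

```python
import numpy as np

def band_edges_3db(freqs_ghz, power_db):
    """First/last frequencies within 3 dB of the peak (assumes a roughly
    unimodal spectrum). Illustrative only; the paper's spectrum is a figure."""
    mask = power_db >= power_db.max() - 3.0
    return freqs_ghz[mask][0], freqs_ghz[mask][-1]

f = np.linspace(75, 108, 661)
p = -3.0 * ((f - 91) / 12) ** 2          # synthetic stand-in spectrum, dB
print(band_edges_3db(f, p))              # ~ (79.0, 103.0)
```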


7. Figure 7: Image of the radar system with a two-axis motorized scanner.


8. Figure 8: Experimental setup with a bike in front of the radar system.


9. Figure 9: The result of applying the back-projection (BP) algorithm to the data from the bike.

10. Figure 10: The reconstructed image with a Hamming window applied in both range and cross-range directions. What are the differences between Figures 9 and 10?
Figures 9 and 10 show reconstructed images of the bike, but with a key difference in processing. Let's compare them:

Figure 9:
- Result of applying the back-projection algorithm without additional processing
- Shows the raw reconstructed image

Figure 10:
- Reconstructed image after applying Hamming window in both range and cross-range directions
- Demonstrates the effect of windowing on the image

Key differences:

1. Sidelobe Levels:
   - Figure 9 shows higher sidelobe levels, visible as a "haze" or "speckle" around the main features.
   - Figure 10 has reduced sidelobe levels, resulting in a cleaner image with less background noise.

2. Feature Sharpness:
   - Figure 9 may have slightly sharper edges on the bike's features.
   - Figure 10 might show slightly softer edges due to the windowing process.

3. Dynamic Range:
   - Figure 10 likely has an improved dynamic range, making weaker reflections more visible relative to stronger ones.

4. Background Clutter:
   - Figure 9 shows more background clutter or artifacts.
   - Figure 10 has a cleaner background due to sidelobe suppression.

5. Contrast:
   - Figure 10 may show improved contrast between the bike's features and the background.

6. Resolution Trade-off:
   - While not explicitly stated, Figure 10 might have a slightly reduced resolution compared to Figure 9, as windowing often comes with a small resolution penalty.

7. Feature Visibility:
   - Some weaker features of the bike might be more easily discernible in Figure 10 due to reduced interference from sidelobes.

8. Overall Image Quality:
   - Figure 10 generally appears "cleaner" and potentially easier to interpret, especially for features with lower reflectivity.

These differences illustrate the trade-offs involved in radar image processing. The windowing technique reduces sidelobes and improves overall image quality, but potentially at the cost of a slight reduction in resolution. The choice between these approaches would depend on the specific requirements of the imaging application.
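For reference, "Hamming window in both directions" is typically realized as a separable weighting of the raw data matrix before reconstruction; a sketch under that assumption, with `raw` being a hypothetical (fast-time x aperture) data matrix:

```python
import numpy as np

def apply_2d_hamming(raw):
    """Separable Hamming weighting of the (range samples x aperture
    positions) data matrix before image reconstruction."""
    w = np.outer(np.hamming(raw.shape[0]), np.hamming(raw.shape[1]))
    return raw * w
```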

Range versus Cross-Range Effects

Let's examine the potential differences between range and cross-range features in Figures 9 and 10:

1. Range Features:
   - In both figures, the range direction corresponds to the vertical axis.
   - Range resolution is primarily determined by the system's bandwidth (24 GHz in this case).
   - The paper states a range resolution of 6.3 mm.

2. Cross-Range Features:
   - The cross-range (horizontal axis) resolution is determined by the synthetic aperture length (40 cm) and the center frequency.
   - Cross-range resolution typically varies with range and is generally coarser than range resolution.

Comparing the figures:

1. Resolution Differences:
   - Range features (vertical) appear to be more finely resolved than cross-range features (horizontal) in both figures.
   - This is expected due to the typically higher range resolution in SAR systems.

2. Windowing Effects:
   - The Hamming window applied in Figure 10 might have a slightly different effect on range vs. cross-range.
   - Range sidelobes might be more suppressed than cross-range sidelobes, or vice versa, depending on how the windowing was applied.

3. Feature Elongation:
   - Some features, particularly in Figure 9, might appear slightly elongated in the cross-range direction.
   - This could be due to the inherently lower cross-range resolution or motion during the synthetic aperture formation.

4. Sidelobe Patterns:
   - In Figure 9, sidelobe patterns might differ between range and cross-range.
   - Range sidelobes often appear as horizontal streaks, while cross-range sidelobes appear as vertical streaks.

5. Edge Sharpness:
   - Edges perpendicular to the range direction might appear sharper than edges along it in both figures, but this difference could be more pronounced in Figure 9.

6. Detail Preservation:
   - Fine details in the range direction (like the spokes of the wheels) might be better preserved compared to similar-sized details in the cross-range direction.

7. Windowing Impact:
   - The application of the Hamming window in Figure 10 might have smoothed out some of these differences, potentially making the image appear more isotropic.

It's important to note that without direct access to higher resolution versions of these images or the raw data, some of these differences might be subtle and hard to discern definitively. The differences between range and cross-range features in SAR images can provide valuable information about the system's performance and the imaged target's characteristics.

11. Figure 11: Point cloud representation of the reconstructed bike image, including intensity information.

12. Figure 12: The same point cloud with all points at the same intensity level (see the Point Cloud section below).

13. Figure 13: Compares images without apodization, with dual apodization, and with complex dual apodization.

14. Figure 14: Demonstrates the effect of image denoising (a standard Perona-Malik sketch appears after this list).

15. Figure 15: Shows results of sub-array based processing with different beam steering angles.

16. Figure 16: Presents the result of multi-spectral imaging.

17. Figures 17-23: Additional experimental results showing:
    - A car and two bikes
    - Two cars at night
    - A bike in front of a car in rainy conditions
    - A bike in clear and foggy conditions
    - Multiple targets (cars, bikes, ladder, scooter) in one scene
    - A bike at a far distance (8.5m)

These figures provide visual evidence of the system's capabilities in various scenarios and demonstrate the effectiveness of the different processing techniques described in the paper.
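Since the denoising in Figure 14 is attributed to a modified Perona-Malik diffusion, here is the textbook (unmodified) form for orientation. The authors' modification is not specified in the paper; for radar images one would typically run this on the normalized magnitude image, and kappa, dt, and the iteration count below are placeholders.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion (edge-preserving
    smoothing). Borders wrap via np.roll, which is fine for a sketch."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # small conduction across edges
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u        # north neighbour difference
        ds = np.roll(u, -1, axis=0) - u       # south
        de = np.roll(u, -1, axis=1) - u       # east
        dw = np.roll(u, 1, axis=1) - u        # west
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The edge-stopping function g suppresses diffusion across strong gradients, which is why the method smooths speckle-like noise while keeping target outlines sharp.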

Synthetic Array Scanning

How the synthetic aperture is formed based on the information provided in the paper:

1. Synthetic Aperture Formation:
   The paper mentions using a two-axis motorized scanner to create the synthetic aperture. This is shown in Figure 7.

2. Movement Method:
   The radar is moved along a horizontal track to create the synthetic aperture.

3. Array Length:
   The paper states that for all experiments presented, the length of the array is 40 cm.

4. Movement Speed:
   Unfortunately, the paper does not explicitly state the speed at which the radar is moved along the track. This is an important detail that is missing from the information provided.

5. Data Collection:
   - The radar collects data at multiple positions along the 40 cm track.
   - Each position acts as an element of the synthetic array.

6. FMCW Operation:
   - The paper mentions that the chirp time is set to 5 msec.
   - This suggests that at each position, the radar transmits and receives for at least 5 msec to complete one chirp.

7. Sampling and Processing:
   - The ADC sampling frequency is 5 MHz.
   - This sampling is done for the received signal at each position along the track.

Without knowing the speed of movement or the number of positions sampled along the 40 cm track, it's difficult to determine the total time taken for data collection or the density of the synthetic array.

For future reference, it would be valuable to know:
- The speed of the radar movement along the track
- The number of sampling positions within the 40 cm aperture
- The total time taken for a complete scan

These details would provide a more complete understanding of the system's operation and its potential limitations in terms of imaging speed and real-time capabilities.
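One can still bound the sampling density from below: stripmap SAR typically requires aperture samples no coarser than about a quarter wavelength (two-way Nyquist) at the highest frequency for a wide-beam element. Under that assumption, which the paper neither states nor confirms:

```python
# Lower bound on the number of aperture positions over the 40 cm track,
# assuming lambda/4 two-way Nyquist spacing at the highest imaging
# frequency (102 GHz); the actual spacing used is not stated in the paper.
c, f_max, L = 3e8, 102e9, 0.4
lam_min = c / f_max                  # ~2.94 mm
spacing = lam_min / 4                # ~0.74 mm
print(f"~{int(L / spacing)} positions over the 40 cm aperture")   # ~544
```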

Point Cloud

A point cloud representation is a way of visualizing 3D data using a collection of points in space. In the context of radar imaging, as described in this paper, a point cloud representation provides a way to visualize the reconstructed image data in a three-dimensional format. Here are the key aspects of point cloud representations:

1. Data Structure:
   - Each point in the cloud is defined by its 3D coordinates (x, y, z).
   - In radar imaging, these typically correspond to range, cross-range, and intensity or elevation.

2. Visualization:
   - Points are plotted in 3D space without connecting lines or surfaces.
   - This creates a "cloud" of points that represents the shape and structure of the imaged object or scene.

3. Information Content:
   - Each point can carry additional information beyond just position, such as intensity or color.
   - In this paper, Figure 11 shows a point cloud where the intensity of each point is represented, likely corresponding to the strength of the radar return.

4. Advantages:
   - Provides a direct representation of the raw data without interpolation.
   - Can represent complex 3D structures efficiently.
   - Allows for easy visualization of the spatial distribution of reflectors.

5. In This Paper:
   - Figure 11 shows a point cloud representation of the bike image, including intensity information.
   - Figure 12 shows a similar representation but with all points at the same intensity level, focusing on spatial distribution.

6. Applications:
   - Useful for analyzing the spatial distribution of reflectors in the scene.
   - Can help in identifying specific features or structures in the imaged object.
   - Facilitates 3D visualization of the radar data.

7. Processing:
   - Point clouds can be further processed for tasks like surface reconstruction, object recognition, or feature extraction.

Point cloud representations are particularly useful in radar imaging as they provide a direct visualization of the reflectors in the scene, allowing researchers to analyze the spatial distribution and intensity of reflections from the imaged objects. This can be especially helpful in understanding complex 3D structures or in identifying specific features of interest in the imaged scene.
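A sketch of how such a representation might be extracted from a 2D reconstructed image; the grid spacings and dB threshold are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def image_to_points(img, dx=0.005, dy=0.005, threshold_db=-30.0):
    """Keep pixels above a dB threshold and carry intensity along,
    roughly the kind of representation shown in Figures 11-12."""
    mag_db = 20 * np.log10(np.abs(img) / np.abs(img).max() + 1e-12)
    rows, cols = np.nonzero(mag_db >= threshold_db)
    return np.column_stack([cols * dx,            # cross-range, m
                            rows * dy,            # range, m
                            mag_db[rows, cols]])  # intensity, dB
```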

Artifacts

Based on the information provided in the paper, there are several artifacts and outputs from this research:

1. Custom FMCW Radar System:
   The authors developed a custom-made wideband FMCW radar system operating in the W-band (75-108 GHz).

2. Experimental Setup:
   Figure 7 shows the radar system along with a two-axis motorized scanner used to create the synthetic aperture.

3. Reconstructed Images:
   The paper presents several high-resolution reconstructed images of various targets, including:
   - A bicycle (Figures 9-14)
   - Cars and bikes in different scenarios (Figures 17-23)
   - Images in various weather conditions (clear, rainy, foggy)

4. Signal Processing Algorithms:
   While not physical artifacts, the authors developed and implemented several signal processing techniques:
   - Back-projection algorithm for image reconstruction
   - Apodization methods for sidelobe reduction
   - Multi-spectral imaging technique
   - Image denoising algorithm
   - Sub-array based processing method

5. System Characterization Data:
   - Impulse response of the system (Figure 5)
   - Power spectrum of the radar system (Figure 6)

6. Point Cloud Representations:
   Figures 11 and 12 show point cloud representations of the reconstructed images.

7. Multi-Spectral Images:
   Figure 16 shows the result of multi-spectral imaging, where the bandwidth is divided into three sub-bands.

These artifacts demonstrate the capabilities of the imaging system and the effectiveness of the signal processing techniques developed by the authors. However, it's important to note that the paper doesn't mention any publicly available datasets or open-source code related to this research. The primary artifacts are the images and data presented within the paper itself.
 

  • Multi Target Tracking using AWR1642 TI FMCW radar chip with 2 TX and 4 RX operating at 77 GHz.

  • The signal processing block includes: Range estimation --> CFAR --> Doppler estimation --> CFAR --> Doppler compensation --> Angle of arrival estimation

  • Tracking: Kalman Filter - Constant Velocity motion model (a minimal sketch follows after this list)

  • Data Association is based on Joint Probabilistic Data Association filter (JPDAF)

  • Three stationary cars are identified with labels ranging from 0 to 2 and a person is walking in front of the radar.

  • The project has been coded in C and Python3.
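For orientation, a minimal constant-velocity Kalman predict/update step of the kind named above. This is an illustrative sketch, not the project's actual C/Python code, and the noise levels q and r are placeholders.

```python
import numpy as np

def make_cv_model(dt, q=0.5, r=0.2):
    """Constant-velocity model, state [x, y, vx, vy], position-only measurements."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    return F, H, q * np.eye(4), r * np.eye(2)

def kf_step(x, P, z, F, H, Q, R):
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update with measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```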

 

High Resolution Millimeter Wave Imaging Based on FMCW Radar Systems at W-Band


Shahrokh Hamidi is with the Faculty of Electrical and Computer Engineering, University of Waterloo, 200 University Ave W, Waterloo, ON, Canada, N2L 3G1 (e-mail: Shahrokh.Hamidi@uwaterloo.ca).

M.R. Nezhad-Ahmadi is with the Faculty of Electrical and Computer Engineering, University of Waterloo, 200 University Ave W, Waterloo, ON, Canada, N2L 3G1 (e-mail: mrnezhad@uwaterloo.ca).

Abstract

In this paper, we present a unique 2D high resolution, compact, low-cost, low-weight, and highly accurate millimeter wave (mm-Wave) imagery system capable of operating in all weather conditions. We describe the mm-Wave imaging process in detail and present several novel signal processing methods with their applications. To create the array, we utilize the Synthetic Aperture Radar (SAR) concept. The imagery system presented in this work can strongly compete with Lidar systems, as its resolution limit is at the same level. Furthermore, in contrast to Lidar systems, our imagery system can operate in heavy rain and dense fog and produce high quality images.

We use our custom-made Frequency Modulated Continuous Wave (FMCW) radar operating at W-band with 33 GHz bandwidth for data collection and present the results.

Index Terms:

FMCW radar, high resolution imaging, mm-Wave imaging, SAR.

I Introduction

Radar imaging is an important topic with many applications and has been used for decades in different areas [1, 2, 3, 4]. The goal in radar imaging is to create 2D or 3D images of the targets located in the radar's field of view. The image is a representation of the complex reflective-coefficient field of the targets. Achieving higher resolution is of great importance in the field of radar imaging. The range resolution is directly related to the bandwidth of the system, and increasing the bandwidth at low frequencies such as the L, C, or X bands, which have been used extensively for radar imaging, is hard to achieve. Moreover, increasing the number of elements to obtain higher resolution in the azimuth direction makes the system complex and increases its physical size.

Recently, millimeter, sub-terahertz, and terahertz waves have been used to perform radar imaging [11, 12, 13, 14, 15, 16, 17, 18, 5, 6, 7, 8, 9, 10]. Compared to systems operating at lower frequencies, systems operating at mm-Wave, sub-terahertz, and terahertz frequencies are light-weight and compact and offer higher bandwidth. Operating with higher bandwidth allows the system to obtain higher resolution in the range direction. Also, operating at higher frequencies makes it possible to create apertures with a large number of elements that occupy a small physical space. This, in turn, yields high resolution in the azimuth direction.

The imagery systems operating at millimeter wave (mm-Wave), sub-terahertz, and terahertz frequencies have numerous applications including material characterization, high resolution localization, non-destructive testing (NDT), and near-field imaging, to mention a few.

I-A Related Work

The works presented in [11, 12, 13, 14, 15, 16, 17, 18] are based on commercially available FMCW radar chips [1, 2, 19, 20] operating at 77 GHz with 4 GHz bandwidth. High resolution and high quality images have been created. However, because the bandwidth of these systems is limited to 4 GHz, the best range resolution that can be achieved is approximately 5 cm.

In [5], the authors have developed a SiGe-Chip-based 80 GHz Frequency Modulated Continuous Wave (FMCW) radar system with 25.6 GHz bandwidth for high resolution imaging. The work is based on a low power SiGe-MMIC system operating as a FMCW radar. A large aperture has been created synthetically and high resolution images have been generated.

Another interesting work reported in the literature [6, 7] is an imagery system operating at 300 GHz with more than 40 GHz bandwidth. The system has been used in both Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) modes to create high resolution images. In [8], an FMCW radar operating at terahertz frequencies has been designed to create high resolution images based on the Circular SAR (CSAR) technique. In [9], an FMCW radar has been designed at 220 GHz with 8 GHz bandwidth and used to generate high resolution radar images.

The major issue with imagery systems operating at sub-terahertz and terahertz frequencies is low output power and high free-space path-loss, which makes them suitable only for near-field and very close range applications.

I-B Motivation

In this paper, we present a high resolution imagery system built around our custom-made wide-band radar. We have chosen the W-band, with frequencies ranging from 75 GHz to 108 GHz and 33 GHz of bandwidth. Compared to systems operating at lower frequencies, the W-band allows us to design a compact and light-weight system. In addition, we can reach higher bandwidth, which in turn enhances the range resolution limit of the system.

However, increasing the frequency range beyond the W-band will cause issues such as lower output power and higher free-space path-loss which systems presented in [6, 7, 8, 9, 10] suffer from.

Moreover, the system presented in this paper is a custom-made FMCW radar. FMCW radars are well-known for power efficiency and lower complexity compared to the pulsed-radars.

In addition to the system design and image formation, we present several signal processing techniques such as multi-spectral-based imaging, sub-array-based processing, image de-noising, and apodization, which distinguishes our work from [11, 12, 13, 14, 16, 17, 18, 5, 6, 7, 8, 9, 10]. The multi-spectral-based imaging paves the way to applications such as material characterization. The sub-array-based processing approach provides approximately 3 dB of gain, which is a remarkable achievement since the origin of the gain is the signal processing unit. We further describe the system and the imaging process in detail. Moreover, we present image de-noising: we take state-of-the-art de-noising techniques from the field of image processing and apply them to the reconstructed images. Since the resolution of the reconstructed images is extremely high, it is plausible to bring sophisticated methods from the field of image processing into the field of radar imaging. We further utilize the apodization technique, which is performed to reduce the side-lobe levels while preserving the resolution limit of the imagery system [21].

Another major difference between the work we present in this paper and the material reported in [11, 12, 13, 14, 15, 16, 17, 18, 5, 6, 7, 8, 9, 10] is related to the fact that we have collected data from multiple realistic scenarios in different weather conditions. The main purpose is to show the capability of the imagery system presented in this work in producing high resolution images from complex realistic scenarios in different weather conditions such as day, night, rain, and fog.

I-C Organization

The paper has been organized as follows. In Section II, we present the system model along with the imaging algorithm. Section III has been dedicated to side-lobe reduction using apodization-based methods. In Section IV, we describe the multi-spectral imaging technique. Section V discusses the image de-noising process. In Section VI, we present the sub-array-based processing technique. Finally, Section VII has been dedicated to the experimental results, followed by the concluding remarks.

II System Model

Fig. 1 illustrates the geometry of the model we are considering in this paper. The array is created synthetically: the imagery system is a single-element TX-RX FMCW radar which is moved along the horizontal direction to create a large array based on the SAR concept.

Figure 1: The geometry of the model.

In FMCW radar systems, the signal transmitted by the transmitter, reflected from the lth point target, and received at the receiver is described, at the output of the mixer, as


s^{(l)}(t,\eta) = \sigma_l \, u\!\left(\frac{t-\tau_l(\eta)}{T}\right) e^{\,j4\pi\left(f_c + 0.5\,\beta t\right) R_l(\eta)/c},
(1)

where fc is the carrier frequency and σl is the radar cross section of the lth point target. The parameter β is given as b/T, where b and T stand for the bandwidth and the chirp time, respectively, and τl(η) = 2Rl(η)/c is the round-trip delay. Also, u represents the envelope of the signal and is considered to be a rectangular window. Furthermore, t and η represent the fast- and slow-time parameters, respectively. Finally, Rl(η) is the radial distance between the target and the radar.

Upon taking the Fourier transform of (1) with respect to the fast-time parameter t, we obtain


s^{(l)}(f,\eta) = \sigma_l \,\operatorname{sinc}\!\left(T\!\left(f - \frac{2R_l(\eta)}{c}\,\beta\right)\right) e^{\,j4\pi f_c R_l(\eta)/c}.
(2)

This step localizes the energy of the signal in the range direction: the sinc in (2) peaks at the beat frequency fb = 2Rl(η)β/c, from which the range follows as Rl = c fb/(2β). The next step is to localize the energy of the signal in the azimuth direction. To accomplish this goal, we implement the back-projection algorithm [4] as follows


\hat{\sigma}_l = \int s^{(l)}(f,\eta)\, e^{-j4\pi f_c R_i(\eta)/c}\, d\eta,
(3)

where \hat{\sigma}_l is the estimated reflective coefficient of the lth point reflector located at the ith pixel.
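For concreteness, a minimal NumPy sketch of the back-projection step in (3) is given below. It assumes the range-compressed data of (2) is available as an array S indexed by slow time and range bin; the geometry, grid, and constants are illustrative placeholders rather than the authors' implementation.

import numpy as np

# Illustrative back-projection of (3); the constants, geometry, and array
# names below are placeholder assumptions, not the authors' code.
c = 3e8              # speed of light [m/s]
fc = 90e9            # assumed center (carrier) frequency [Hz]

def backproject(S, ant_x, r_axis, x_grid, z_grid):
    """S: range-compressed data, shape (num_eta, num_range_bins);
       ant_x: antenna x-position for every slow-time sample eta;
       r_axis: range value of each range bin;
       x_grid, z_grid: pixel coordinates of the image plane."""
    img = np.zeros((len(z_grid), len(x_grid)), dtype=complex)
    dr = r_axis[1] - r_axis[0]
    for k, xa in enumerate(ant_x):                       # integral over eta
        # radial distance from this aperture position to every pixel
        R = np.sqrt((x_grid[None, :] - xa) ** 2 + z_grid[:, None] ** 2)
        bins = np.clip(np.round((R - r_axis[0]) / dr).astype(int),
                       0, len(r_axis) - 1)
        # carrier-phase compensation of (3), then coherent accumulation
        img += S[k, bins] * np.exp(-1j * 4 * np.pi * fc * R / c)
    return img

For each aperture position, the sketch reads the range bin nearest to each pixel and compensates the carrier phase before the coherent accumulation over slow time.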

III Apodization

The common technique for side-lobe reduction is to apply a window to the data in both the range and azimuth directions. However, side-lobe reduction based on windowing also reduces the resolution of the system. Apodization is a method to decrease the side-lobe levels without changing the resolution limit of the system: the resolution remains intact while the side-lobes are reduced to the level of the applied window [21].

Complex dual apodization is a more advanced apodization technique which operates on the real and imaginary parts of the signal separately and generates better results than dual apodization [21]. In this paper, we use both the dual apodization and the complex dual apodization techniques to reduce the side-lobe levels while preserving the resolution limit of the imaging system.
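As an illustration, the following sketch implements one common formulation of the two methods in the spirit of [21]. It assumes two complex images of the same scene have already been reconstructed, one with a rectangular window (img_rect) and one with a Hamming window (img_hamm); the sign-test rule used for the complex variant is an assumption of this sketch, not necessarily the exact rule used in this paper.

import numpy as np

def dual_apodization(img_rect, img_hamm):
    """Pointwise minimum of the two magnitude images: main lobes keep the
    rectangular-window resolution, side-lobes drop to the Hamming level."""
    return np.minimum(np.abs(img_rect), np.abs(img_hamm))

def _cda_component(a, b):
    """Complex-dual-apodization rule for one component (real or imaginary):
    zero wherever the two reconstructions disagree in sign, otherwise keep
    the smaller-magnitude value (an assumed, common formulation)."""
    out = np.where(np.abs(a) < np.abs(b), a, b)
    return np.where(a * b < 0, 0.0, out)

def complex_dual_apodization(img_rect, img_hamm):
    re = _cda_component(img_rect.real, img_hamm.real)
    im = _cda_component(img_rect.imag, img_hamm.imag)
    return np.abs(re + 1j * im)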

IV Multi-Spectral-Based Imaging

In this section, we present multi-spectral-based imaging. The idea is based on partitioning the whole bandwidth into 3 sub-bands. We then create 3 separate images and assign red, green, and blue to them, respectively. At the end, we display the image in color.

In fact, the sub-bands are created from the IF signal (also known as the beat signal). We divide the beat signal, in the fast-time domain, into three equal sections, each representing one-third of the initial signal. Next, each part is used to create an independent image of the scene. Finally, the three images are assigned red, green, and blue, and the result is plotted.

Figure 2: Dividing the whole bandwidth of the system into 3 equal sub-bands for multi-spectral-based imaging.

Fig. 2 shows the process of dividing the entire bandwidth of the system into 3 sub-bands. One application of multi-spectral-based imaging is material characterization, in which targets that display the same response over the whole frequency band appear white, and targets that have a frequency-dependent response appear non-white in the processed image (a sketch of the pipeline follows below).
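A hedged sketch of this pipeline, reusing the back-projection routine sketched in Section II, might look as follows; beat holds the sampled IF data, and the names and normalization are placeholders.

import numpy as np

def multispectral(beat, ant_x, r_axis, x_grid, z_grid, backproject):
    """beat: sampled IF data, shape (num_eta, num_fast_time);
       backproject: image-formation routine (e.g. the Section II sketch)."""
    n = beat.shape[1] // 3
    chans = []
    for k in range(3):                        # one sub-band per color
        sub = beat[:, k * n:(k + 1) * n]      # one third of the fast time
        S = np.fft.fft(sub, axis=1)           # range compression per sub-band
        # note: strictly, each sub-band's own center frequency should be
        # used inside the back-projection phase term
        img = np.abs(backproject(S, ant_x, r_axis, x_grid, z_grid))
        chans.append(img / img.max())         # normalize each channel
    rgb = np.stack(chans, axis=-1)            # assign R, G, B
    return 1.0 - rgb                          # display 1 - |img| as in Sec. VII-E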

V Image De-noising

The image created by the system is corrupted with noise. In this section, we address the image de-noising process. The goal of image de-noising is to remove the noise while preventing image smearing and preserving the fine structures in the image. Image de-noising helps with observing the fine details of the image which have been covered by the noise of the system. Since the imagery system presented in this paper creates incredibly high resolution images, comparable to optical images in resolution and quality, we will be using image de-noising techniques from the field of image processing.

For the image de-noising, we implement the l1-TV method [22], which is based on the following convex optimization problem,


\min_u \int_\Omega \left\| \nabla u \right\|_1 dx + \frac{\lambda}{2}\int_\Omega (f-u)^2\, dx.
(4)

In (4), f represents the noisy image and λ is the hyper-parameter of the optimization problem. Also, Ω represents the image domain.

The optimization problem given in (4) is convex [23] and can be solved using existing packages such as cvx in MATLAB or cvxpy in Python. However, for images with a large number of pixels, solving (4) this way is computationally prohibitive.
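For modest image sizes, (4) can be posed almost verbatim in cvxpy, as the short sketch below shows; f_noisy and lam are placeholders, and cp.tv is cvxpy's built-in total-variation atom, a close (isotropic) variant of the first term in (4).

import cvxpy as cp
import numpy as np

f_noisy = np.random.rand(64, 64)   # stand-in for the noisy image f
lam = 10.0                         # data-fidelity weight (hyper-parameter)

U = cp.Variable(f_noisy.shape)
objective = cp.Minimize(cp.tv(U) + (lam / 2) * cp.sum_squares(U - f_noisy))
cp.Problem(objective).solve()
u_denoised = U.value               # the de-noised image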

To mitigate this issue, we solve the Euler-Lagrange equation [24] of the optimization problem given in (4). Moreover, to make the algorithm resilient to edge smearing, we use an anisotropic version of (4). Consequently, upon applying the Euler-Lagrange formalism, we obtain the following nonlinear diffusion-type partial differential equation,


\begin{cases}
u_t = \nabla\cdot\left( c\!\left( \left|\nabla (G_\sigma \ast u)\right|^2 \right) \nabla u \right), \\
u(x,0) = f(x), \\
c(I) = e^{-(I/K)^2}, \\
G_\sigma(x) = \sigma^{-1/2}\, e^{-|x|^2/(4\sigma)}, \\
(t,x) \in (0,T)\times\Omega.
\end{cases}
(5)

In (5), the parameter K is a hyper-parameter that controls the sensitivity to the edges, and ∗ represents the convolution operator. The algorithm presented in (5) is in fact the nonlinear diffusion method, also known as the modified Perona-Malik diffusion technique [25], which is used for image de-noising. We implement (5) directly in Python to obtain the de-noised image. One of the main features of the de-noising technique given in (5) is the ability of the algorithm to preserve the edges and fine details of the image while removing the noise.
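A minimal explicit finite-difference realization of (5) might look as follows; the iteration count, time step, edge threshold K, and smoothing width sigma are illustrative values, not the settings used for the figures in this paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def perona_malik(f, n_iter=50, dt=0.15, K=0.05, sigma=1.0):
    """Explicit forward-Euler iteration of the diffusion in (5)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # edge indicator computed on the Gaussian-smoothed image (G_sigma * u)
        gx, gy = np.gradient(gaussian_filter(u, sigma))
        c = np.exp(-((gx ** 2 + gy ** 2) / K) ** 2)   # c(I)=e^{-(I/K)^2}, I=|grad|^2
        # divergence of c * grad(u)
        ux, uy = np.gradient(u)
        div = np.gradient(c * ux, axis=0) + np.gradient(c * uy, axis=1)
        u += dt * div                                  # one diffusion step
    return u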

VI Sub-Array-Based Processing

In this section, we discuss sub-array-based processing. The goal of the sub-array-based method is to increase the Signal to Noise Ratio (SNR), which is a critical factor for image reconstruction at millimeter wave frequencies. We partition the array into smaller sub-arrays. Then, we perform beam-forming for each sub-array and direct the sub-beams toward a specific spot in the field of view of the imagery system. Next, we perform the final beam-forming stage over all the sub-beams we have created and generate the final image. Fig. 3 illustrates the idea behind sub-array-based imaging.

Creating sub-arrays this way increases the SNR, which in turn lowers the noise floor; that is the main goal of the sub-array-based approach. The sub-arrays are created with non-zero overlap, which forces the noise of the system to become correlated. Therefore, the gain we achieve is not exactly equal to the number of elements in the sub-arrays. However, we can obtain approximately 3 dB of SNR gain, which is a remarkable achievement. It should be noted that the gain is obtained purely from the signal processing unit, and that is what makes this approach valuable.

It is worth mentioning that increasing the number of elements in the sub-array comes at the cost of narrowing the field of view of the imagery system (a sketch of the two-stage beam-forming follows below).
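The two-stage structure can be sketched for a single focal point as follows; the element positions, steering convention, and sub-array parameters are placeholders chosen to mirror the description above (length-10, heavily overlapped sub-arrays).

import numpy as np

def subarray_pixel(data, x_elems, focus, lam_c, L=10, step=1):
    """data: one complex sample per array element (a single range bin);
       x_elems: element x-positions; focus: (x, z) steering point;
       lam_c: carrier wavelength; L: sub-array length; step: element shift."""
    R = np.sqrt((x_elems - focus[0]) ** 2 + focus[1] ** 2)
    focused = data * np.exp(1j * 4 * np.pi * R / lam_c)   # two-way phase steer
    # stage 1: coherent sum inside each overlapping sub-array (the sub-beams)
    starts = range(0, len(x_elems) - L + 1, step)
    sub_beams = np.array([focused[s:s + L].sum() for s in starts])
    # stage 2: coherent sum over all sub-beams gives the final pixel value
    return sub_beams.sum()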

Figure 3: Partitioning the whole array into sub-arrays and directing the sub-beams toward a specific spot in the field of view of the imagery system; the sub-beams are then used to perform the final stage of the beam-forming process.

VII Experimental Results

In this section, we present our experimental results. We have designed an FMCW radar at W-band operating at frequencies ranging from 75 GHz to 108 GHz with 33 GHz bandwidth.

Figure 4: The block diagram of the FMCW radar system.

Fig. 4 shows the block diagram of the developed broadband imaging radar system incorporating FMCW technology, based on a Phase-Locked Loop (PLL) FMCW generator. This generator outputs a signal spanning a broad initial frequency range of 12.5 GHz to 18 GHz, with a power specification of 10 dBm. Subsequent frequency multiplication through a doubler and a tripler (an overall ×6 multiplication: 12.5 GHz × 6 = 75 GHz and 18 GHz × 6 = 108 GHz) extends the signal's frequency reach, achieving an expansive band from 75 GHz to 108 GHz.

This frequency multiplication process enhances the bandwidth of the radar system, which is critical for achieving high-resolution imaging. Amplification stages after each multiplication step ensure proper signal power at the output and maintain the broadband characteristic of the output. A power splitter serves both the transmission arm and the local-oscillator input of the mixer in the receive arm. The transmission arm leverages the broadband capabilities to radiate the signal through either an open wave-guide or a horn antenna, depending on the scenario under test, to support high-resolution imaging applications.

The receiving arm utilizes a Low Noise Amplifier (LNA) with 25 dB gain and 4 dB noise figure to amplify the reflected broadband signals. These signals, upon mixing with the local oscillator signal, yield an Intermediate Frequency (IF) that contains the beat frequencies from the different targets.

To assure the integrity of the IF signal, a high-pass filter is employed to discard undesirable low-frequency components, with subsequent stages of amplification and filtering enhancing the signal quality. The IF signal is then converted to digital format via an Analog-to-Digital Converter (ADC), making it amenable to perform complex digital processing algorithms.

The purpose of the high-pass filter is to remove or attenuate the energy of close-by targets that are within the minimum range of the radar, as well as to block the leakage from TX into RX, protecting the receiver from saturation and increasing the dynamic range of the system.

The ADC block contains the ADC as well as the low pass Anti Aliasing Filter (AAF).

The sampling frequency for the ADC is 5 MHz. The chirp time has been set to 5 msec.
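As a back-of-envelope check (assuming real-valued sampling, so the usable beat band is fs/2 = 2.5 MHz), these settings imply the following chirp slope and maximum observable range:

\beta = \frac{b}{T} = \frac{33\,\mathrm{GHz}}{5\,\mathrm{ms}} = 6.6\times10^{12}\,\mathrm{Hz/s},
\qquad
R_{\max} = \frac{c\, f_{b,\max}}{2\beta} = \frac{(3\times10^{8})(2.5\times10^{6})}{2\times 6.6\times10^{12}} \approx 57\,\mathrm{m}.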

Figure 5: a) the impulse response of the system using a corner reflector at 2.05 m radial distance, b) the 3 dB beam-width limit.

In Fig. 5, we have shown the impulse response of the system using a corner reflector located at 2.05 m radial distance.

To perform the image reconstruction process, we have chosen frequencies ranging from 78 GHz to 102 GHz which is the flat part of the spectrum that the system provides and is equivalent to 24 GHz bandwidth. The 24 GHz bandwidth will provide us with 6.3 mm resolution in the range direction. The length of the array, for all the experiments that we present in this section, is 40 cm.
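The quoted resolution follows directly from the processed bandwidth:

\Delta R = \frac{c}{2b} = \frac{3\times10^{8}\,\mathrm{m/s}}{2\times 24\times10^{9}\,\mathrm{Hz}} = 6.25\,\mathrm{mm} \approx 6.3\,\mathrm{mm}.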

Fig. 6 illustrates the power spectrum of the system over the entire bandwidth ranging from 75 GHz to 108 GHz. It is clear from Fig. 6 that there are large power fluctuations at the beginning and at the end of the spectrum. For the imaging process, we have chosen the part corresponding to the frequency range from 78 GHz to 102 GHz, which is equivalent to 24 GHz of bandwidth.

Figure 6: The power spectrum for the system over the entire bandwidth ranging from 75 GHz to 108 GHz.

Figure 7: The radar system along with a two-axis motorized scanner to create the synthetic aperture.

Fig. 7 illustrates the radar system along with a two-axis motorized scanner to create the synthetic aperture.


Figure 8: The experimental set-up.

Fig. 8 depicts the experimental set-up in which a bike has been located in front of the radar system.

VII-A Image Formation

In this subsection, we present the result of applying the back-projection algorithm to the data-set collected from a bike in front of the imaging system.

Figure 9: The result of applying the back-projection algorithm to the experimental data gathered from the bike.

Fig. 9 shows the result of applying the back-projection algorithm, given in (3), to the experimental data gathered from the bike.

Figure 10: The reconstructed image after applying Hamming window in both the range and cross-range directions.

Fig. 10 illustrates the reconstructed image after applying a Hamming window in both the range and cross-range directions to reduce the side-lobe levels. From Fig. 9 and Fig. 10, we can see all the details of the bike, such as the saddle, tires, pedals, chain, spokes, gears, handlebars, and crossbar.

Figure 11: The point-cloud for the image shown in Fig. 9, the information about the intensity is included.

In Fig. 11, we have shown the point-cloud for the reconstructed image shown in Fig. 9. Fig. 11 includes the information about the intensity of the point reflectors.

Figure 12: The point-cloud for the image shown in Fig. 9 with all points represented at the same intensity level.

In Fig. 12, we have shown all the point reflectors at the same intensity level.

VII-B Apodization

In this sub-section, the results of the dual apodization and complex dual apodization methods are presented. We have used rectangular and Hamming windows to perform the apodization; the rectangular window represents the image without windowing. It is important to mention that multiple windows can be used for apodization, creating a multi-layer algorithm for side-lobe reduction while preserving the resolution limit of the system.

The dual apodization method is based on the absolute value of the reconstructed image and provides reasonable results. The complex dual apodization, however, works on the real and imaginary parts of the reconstructed image and yields considerably better results.

Fig. 13 shows the results of utilizing the apodization methods. Fig. 13-(a) illustrates the image without apodization. Fig. 13-(b) shows the result of applying dual apodization to the image. As can be seen from Fig. 13-(b), the side-lobe levels are lower while the resolution limit is intact. The side-lobe levels will be at the same level as in the case in which the Hamming window is used.

Finally, Fig. 13-(c) depicts the result of applying the complex dual apodization technique. As is clear from Fig. 13-(c), the result of the complex dual apodization is by far better than that of the dual apodization, at a slightly higher computational cost.

Figure 13: a) the resulting image without apodization, b) the reconstructed image after applying the dual apodization method, c) the reconstructed image after applying the complex dual apodization method.

VII-C Image De-noising

In this sub-section, we implement the modified Perona-Malik method, given in (5), directly in Python to remove the noise from the reconstructed images.

The de-noised image is shown in Fig. 14. In order for the effect of the de-noising process to be seen clearly, we have shown all the images with a 50 dB dynamic range.

Figure 14: a) the noisy image, b) the de-noised image, c) the de-noised image obtained by applying the de-noising technique given in (5) to the result of the complex dual apodization method shown in Fig. 13-(c).

As can be seen from Fig. 14-(b), the de-noising algorithm provides a remarkable result. The fine structures of the target are uncovered from the noise while the edges are preserved. The result shown in Fig. 14-(b) has been obtained by applying (5) to the windowed image shown in Fig. 14-(a). In Fig. 14-(a), we have used a Hamming window in both the range and azimuth directions.

In Fig. 14-(c), we have applied the de-noising algorithm presented in (5) to the result of the complex dual apodization shown in Fig. 13-(c). Since the complex dual apodization reduces the side-lobe levels while preserving the resolution limit of the system, Fig. 14-(c) provides better results than Fig. 14-(b). In other words, more fine structures of the bike can be seen in Fig. 14-(c) than in Fig. 14-(b).

VII-D Sub-Array-Based Processing

This subsection discusses the sub-array-based processing results. Fig. 15 shows the remarkable results obtained by implementing the sub-array-based processing.

We have partitioned the whole array into sub-arrays of length 10 with a 9-element overlap between them. Subsequently, we use all these sub-arrays to perform the final stage of the beam-forming process and obtain the image.

In Fig. 15-(a), we have steered the beam toward the left by 15 degrees. In Fig. 15-(b), we are illuminating the bore-sight, and in Fig. 15-(c), we have directed the beam toward the right by 15 degrees.

By performing sub-array-based processing with sub-arrays of length 10, we achieve approximately 3 dB of gain. The reason we cannot obtain the ideal 10 dB gain (10 log10(10) for sub-arrays of length 10) is that the overlap between the sub-arrays forces the noise of the system to become correlated. As a result, when we perform the final stage of the beam-forming process to generate the final image, the correlation between the noise samples deprives us of the full array gain.

The results presented in Fig. 15 are based on a 50 dB dynamic range and are in gray scale; thus, they can be compared with the reconstructed image shown in Fig. 14-(a).

A complete analysis of the gain of the sub-array-based processing and its dependency on the length of the sub-arrays is beyond the scope of this paper and will be the subject of our future work.

It should be noted that the gain obtained using the sub-array-based processing technique presented in this section comes exclusively from the signal processing unit, which is why the method is considered highly valuable.

However, as we mentioned before, and as is clear from the results presented in Fig. 15, when we increase the number of elements per sub-array, the overall field of view of the system becomes narrower. In other words, achieving higher gain using the sub-array-based approach comes at the cost of narrowing the field of view of the imagery system. As a result, to image the entire field of view, we need to steer the beam as we have done in Fig. 15.

Figure 15: The result of sub-array-based processing. Sub-arrays of length 10 have been created and all the sub-beams have been directed toward the location of the bike. Finally, all the elements of the array have been used to perform the beam-forming. a) steering the beam toward the left by 15 deg, b) steering the beam toward the center, c) steering the beam toward the right by 15 deg.

VII-E Multi-Spectral-Based Imaging

The result of multi-spectral-based imaging based on 3 sub-bands is shown in Fig. 16. The result is based on 3 separate images, one per sub-band, which are assigned to red, green, and blue, and the final image is created. For better visualization, we have presented 1 − |img|, which implies that a target with the same response over the entire bandwidth is shown in black.

In order to increase the visibility of the image, we have performed contrast enhancement as well [26]. We applied the de-noising algorithm given in (5) to the image prior to partitioning the data into 3 sub-bands.

In Fig. 16, targets with darker colors show the same response over the entire bandwidth of the system. In other words, their reflective coefficients have little dependency on the frequency range over which the system operates.

Figure 16: The result of the multi-spectral-based imaging. The 24 GHz bandwidth has been partitioned into 3 sub-bands.

The multi-spectral-based imaging can be used for material characterization and this will be the subject for our future work.

Figure 17: a) the experimental set-up for a car and two bikes in front of the radar system, b) the reconstructed image.

VII-F More Results

In this subsection, we present more experimental results for targets such as bikes and cars at near and far distances from the imagery system. It should be noted that for all the experimental results presented in this subsection, we have used the back-projection algorithm, given in (3), to reconstruct the images. Also, we have used the de-noising algorithm given in (5) to de-noise the images.

We have performed a test in which a car and two bikes have been located in front of the radar system. Fig. 17-(a) illustrates the experimental set-up. In Fig. 17-(b), we have presented the reconstructed image. From Fig. 17-(b), we can see all the details of all targets present in the radar’s field of view.

Figure 18: a) the experimental set-up for 2 cars, the data has been collected during night time, b) the reconstructed image.

We have conducted a different experiment which includes 2 cars in front of the imagery system. Fig. 18-(a) illustrates the experimental set-up. As can be seen from Fig. 18-(a), the data has been collected during night time. In Fig. 18-(b), we have shown the reconstructed image. From Fig. 18-(b), we can see the details of both cars, such as the bumper, headlights, license plate, and side mirror.

We have conducted an additional test, for which the data has been collected in rainy weather. Fig. 19-(a) illustrates the experimental set-up for the bike situated on the ground in front of the car. As mentioned, the data has been collected on a rainy morning in order to test the ability of the imagery system to create a high resolution image of the targets even in rainy conditions. As can be seen from Fig. 19-(a), the ground, the bike, and the car are all wet, and rain was falling while the data was being collected. Fig. 19-(b) shows the reconstructed image.

Figure 19: a) the experimental set-up for the bike in front of the car, the data has been collected in a rainy morning, b) the reconstructed image.

To further demonstrate the all-weather capability of the imagery system, we have conducted a test in a foggy environment; the data for this scenario has been collected in dense fog. Fig. 20-(a) shows the experimental set-up for the bike situated on the ground in front of the imagery system in clear weather, and Fig. 20-(b) presents the reconstructed image of the scene. The set-up is inside the garage, in which we then created a thick fog. In Fig. 21-(a), the test set-up in the fog is shown. The thick layer of fog makes it impossible to see the bike in the optical image presented in Fig. 21-(a). In Fig. 21-(b), we have presented the reconstructed image of the foggy test set-up. It is clear from Fig. 21-(b) that even in thick fog the imagery system is capable of producing a high resolution image of the target. In fact, the reconstructed images under no-fog and foggy conditions, presented in Fig. 20-(b) and Fig. 21-(b), respectively, are identical.

Figure 20: a) the experimental set-up for the bike in front of the imagery system, b) the reconstructed image.
Figure 21: a) the experimental set-up presented in Fig. 20-(a), the data has been collected in a foggy condition, b) the reconstructed image.

The next test experiment, shown in Fig. 22-(a), includes 2 cars, 2 bikes, a ladder, and a scooter. Fig. 22-(b) illustrates the reconstructed image, which clearly shows all the targets. We conducted this experiment because the number of targets is larger and the targets are located in close proximity to one another. Consequently, we can see some undesired effects created by multi-path.

Figure 22: a) the experimental set-up for 2 cars, 2 bikes, a ladder, and a scooter, b) the reconstructed image.

Finally, we conclude this section with the experimental set-up for the bike located at an 8.5 m distance in front of the imagery system. Fig. 23-(a) depicts the experimental set-up, and in Fig. 23-(b) we have shown the reconstructed image. When we place targets at far distances from the imagery system, the SNR as well as the resolution limit of the system in the cross-range direction decrease.

It should be noted that for this experiment we have used horn antennas on both the transmit and receive sides in order to increase the maximum range of the imagery system. Consequently, the field of view of the imagery system is reduced, and in the reconstructed image shown in Fig. 23-(b) we can only see the bike.

Figure 23: a) the bike in front of the imagery system at far distance, b) the reconstructed image.

VIII Conclusion

In this paper, we presented our custom-made FMCW radar system, which operates at mm-Wave frequencies, and described its functionality in detail. We further presented the imaging algorithm and several signal processing techniques for image formation and image post-processing. Finally, we presented results based on experimental data gathered with the imagery system. The paper demonstrates that images with high resolution and high quality can be generated using compact, low-weight, and low-cost mm-Wave systems such as the one presented in this work.

References

  • [1] B. R. Mahafza, "Radar Signal Analysis and Processing Using MATLAB," Chapman and Hall/CRC, 2008.
  • [2] M. I. Skolnik, "Introduction to Radar Systems," McGraw-Hill, New York, 2002.
  • [3] Cumming, I.G. and Wong, F.H., "Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation," Artech House, 2005.
  • [4] Soumekh, M., "Synthetic aperture radar signal processing with MATLAB algorithms," John Wiley, 1999.
  • [5] N. Pohl, T. Jaeschke and M. Vogt, "An SiGe-chip-based 80 GHz FMCW-radar system with 25 GHz bandwidth for high resolution imaging," 14th International Radar Symposium (IRS), Dresden, Germany, pp. 239-244, 2013.
  • [6] M. Caris et al., "Very high resolution radar at 300 GHz," 44th European Microwave Conference, Rome, Italy, pp. 1797-1799, 2014.
  • [7] M. Caris, S. Stanko, S. Palm, R. Sommer, A. Wahlen and N. Pohl, "300 GHz radar for high resolution SAR and ISAR applications," 16th International Radar Symposium (IRS), Dresden, Germany, pp. 577-580, 2015.
  • [8] M. Gezimati and G. Singh, "Curved Synthetic Aperture Radar for Near-Field Terahertz Imaging," in IEEE Photonics Journal, vol. 15, no. 3, pp. 1-13, June 2023.
  • [9] H. Essen et al., "High resolution millimetre wave measurement radars for ground based SAR and ISAR imaging," IEEE Radar Conference, Rome, Italy, pp. 1-5, 2008.
  • [10] Q. Yang, H. Wang, B. Deng and Y. Qin, "High Resolution ISAR Imaging of Targets with Complex Motions in the terahertz region," 12th UK-Europe-China Workshop on Millimeter Waves and Terahertz Technologies (UCMMT), London, UK, pp. 1-4, 2019.
  • [11] M. E. Yanik and M. Torlak, "Near-Field 2-D SAR Imaging by Millimeter-Wave Radar for Concealed Item Detection," IEEE Radio and Wireless Symposium (RWS), pp. 1-4, 2019.
  • [12] M. E. Yanik and M. Torlak, "Near-Field MIMO-SAR Millimeter-Wave Imaging With Sparsely Sampled Aperture Data," IEEE Access, vol. 7, 2019.
  • [13] M. E. Yanik and D. Wang and M. Torlak, "3-D MIMO-SAR Imaging Using Multi-Chip Cascaded Millimeter-Wave Sensors," 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1-5, 2019.
  • [14] M. E. Yanik and D. Wang and M. Torlak, "Development and Demonstration of MIMO-SAR mmWave Imaging Testbeds," IEEE Access, vol. 8, pp. 126019-126038, 2020.
  • [15] S. Hamidi, "3D Near-Field Virtual MIMO-SAR Imaging using FMCW Radar Systems at 77 GHz," accepted for publication in IEEE CCECE, 2024.
  • [16] S. Hamidi, and S. Safavi-Naeini, "3D Near-Field Millimeter-Wave Synthetic Aperture Radar Imaging," IEEE 19th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM), pp. 1-2, 2021.
  • [17] S. Hamidi, M. -R. Nezhad-Ahmadi and S. Safavi-Naeini, "SAR Imaging for a 79GHz FMCW Radar Based on MUSIC Method," 18th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM), Waterloo, ON, Canada, pp. 1-2, 2018.
  • [18] S. Hamidi and S. S. Naeini, "Millimeter-Wave Circular Synthetic Aperture Radar Imaging," IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), ON, Canada, 2021, pp. 1-6, 2021.
  • [19] S. Hamidi, M. Nezhad-Ahmadi, S. Safavi-Naeini, "TDM based Virtual FMCW MIMO Radar Imaging at 79 GHz," 18th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM), pp. 1-2, 2018.
  • [20] S. Hamidi, S. Safavi-Naeini, "CDM Based Virtual FMCW MIMO Radar Imaging at 79 GHz," IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-4, 2021.
  • [21] H. C. Stankwitz, R. J. Dallaire and J. R. Fienup, "Nonlinear apodization for sidelobe control in SAR imagery," in IEEE Transactions on Aerospace and Electronic Systems, vol. 31, no. 1, pp. 267-279, Jan. 1995.
  • [22] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1-4, pp. 259-268, 1992.
  • [23] S. Boyd and L. Vandenberghe, "Convex Optimization," Cambridge University Press, 2004.
  • [24] K. F. Riley, M. P. Hobson, S. J. Bence, "Mathematical Methods for Physics and Engineering: A Comprehensive Guide," Cambridge University Press; 3rd edition, 2006.
  • [25] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, July 1990.
  • [26] R. Gonzalez and R. Woods, "Digital Image Processing," 4th ed., Pearson, 2017.

 
