Thursday, February 1, 2024

Image Quality Assessment of UAV Hyperspectral Images Using Radiant, Spatial, and Spectral Features Based on Fuzzy Comprehensive Evaluation Method





Abstract: Currently, unmanned aerial vehicle hyperspectral images (UAV-HSIs) lack quick, objective, and comprehensive image quality assessment (IQA) methods. Therefore, a multifeature-based fuzzy comprehensive evaluation (FCE) method is proposed in this letter to comprehensively evaluate UAV-HSI quality. To characterize hyperspectral quality comprehensively, we selected four radiometric features, three spatial features, and two spectral features to construct an indicator set. After analyzing the statistical distribution of these features over 23 UAV-HSIs, a fuzzy evaluation threshold table and membership functions were established. To determine the optimal feature weights, the weights obtained using three weighting methods were tested on a set of UAV-HSIs degraded with different degrees of noise and blur. The test results showed that the combined weight, which merges subjective and objective weights, is the most robust. Comprehensive quality assessment experiments were then conducted on UAV-HSIs with different distortion types and flight heights. The results indicated that the comprehensive quality score agreed well with both subjective assessment and objective fact. The proposed method can be effectively used for blurred, noisy, overexposed, and different-height UAV-HSIs.
 
Published in: IEEE Geoscience and Remote Sensing Letters ( Volume: 21)
Article Sequence Number: 5501805
Date of Publication: 12 January 2024
Publisher: IEEE

 SECTION I. Introduction

Image quality assessment (IQA) of unmanned aerial vehicle hyperspectral images (UAV-HSIs) is often the first step in screening qualified hyperspectral images (HSIs). It is also an important basis for judging whether the performance of unmanned aerial vehicle (UAV)-borne hyperspectral imaging is up to standard and for optimizing image processing algorithms. Furthermore, the quality of HSIs is directly related to the accuracy of remote sensing (RS) information extraction [1]. Thus, as a prerequisite to image application, evaluating the quality of UAV-HSIs with a quick and effective method has become an urgent problem.

IQA comprises subjective and objective approaches [2]. Subjective IQA is time-consuming and labor-intensive, so objective IQA has become increasingly popular with the development of computing. Objective IQA includes full-, semi-, and no-reference IQA (NR-IQA). Owing to the lack of standard reference images, NR-IQA is preferred for UAV-HSIs. Currently, NR-IQA methods developed for UAV-HSIs are mainly based on machine learning or deep learning techniques [3], [4]; however, they are complex and time-consuming. Simple and efficient NR-IQA methods have been developed for satellite RS images, for example, NR-IQA using a single feature or a single type of feature [5], [6] and, more recently, NR-IQA using multiple features [7], [8]. However, compared with satellite HSIs, UAV-HSIs have higher resolution and richer texture; compared with satellite visible and multispectral images, UAV-HSIs contain not only spatial features but also spectral features that are particularly important. It is therefore difficult to assess the image quality of UAV-HSIs comprehensively using a single feature or a single type of feature. Moreover, in a multifeature quality assessment, the extracted feature values are not exactly the same for images with different distortion types but the same quality grade, making it impossible to judge a specific quality grade from any one feature alone. Algorithms are therefore needed to coordinate multiple features into a comprehensive evaluation of image quality. For example, Wu et al. [9] used a fuzzy comprehensive evaluation (FCE) method with nine features to comprehensively assess the image quality of GF2 and SPOT7. However, few studies have investigated comprehensive, multifeature quality assessment methods for UAV-HSIs.

Therefore, in this letter, we propose a quick, no-reference, and comprehensive IQA method for UAV-HSIs based on the FCE method. First, nine feature factors are constructed from radiant, spatial, and spectral features in Section II. Second, a fuzzy evaluation threshold table for UAV-HSIs is established after statistical analysis of the nine features of 23 UAV-HSIs in Section IV. Next, four types of feature weights are screened. Finally, we demonstrate the robustness and accuracy of our method on UAV-HSIs with different distortion types and flight altitudes.

SECTION II. Methodology

A. Feature Extraction of UAV-HSIs

1) Radiant Feature:

In this study, four radiometric features were used to evaluate the quality of UAV-HSIs: the signal-to-noise ratio (SNR), entropy, average gradient (AG), and contrast. The SNR evaluates the image noise level from the perspective of noise, entropy evaluates the amount of image information from the perspective of information theory, and AG and contrast evaluate the image blurring level from the perspective of sharpness. It is worth mentioning that the latter three features (entropy, AG, and contrast) were calculated from each band of the HSI and then averaged.

a) Signal-to-Noise Ratio:

The SNR of RS images is a key index for evaluating the quality of the data obtained by RS sensors. There are various methods for calculating the SNR, and the PPESDC method proposed by Tian et al. [10] was used to calculate the SNR in this study.

b) Information Entropy (Entropy):

Entropy is a statistical measure of the amount of information in an image, which reflects the degree of nonuniformity and complexity of the texture in the image. It is calculated as follows:

Entropy = -\sum_{i=1}^{N} p_i \log_2 p_i    (1)

where p_i is the probability of gray level i appearing in the image and N is the number of gray levels.
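For concreteness, a minimal sketch of how (1) could be computed per band and then averaged across bands (as described above) is given below. It assumes an 8-bit cube rescaled to 0–255 as in Section III; the function names are illustrative, not the authors'.

```python
import numpy as np

def band_entropy(band, levels=256):
    """Shannon entropy of one band, Eq. (1): -sum(p_i * log2(p_i))."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty gray levels
    return -np.sum(p * np.log2(p))

def hsi_entropy(cube):
    """Average entropy over all bands of an HSI cube shaped (rows, cols, bands)."""
    return np.mean([band_entropy(cube[:, :, b]) for b in range(cube.shape[2])])
```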

c) Average Gradient:

The average gradient (AG) measures the degree of gray-level change along a fixed direction of an image, which reflects the ability of the image to express detail. The higher the gradient in the fixed direction, the more pronounced the gray-level change, meaning that the image has better hierarchy and clarity. For a discrete image x(i, j), the first-order derivative can be approximated by a first-order difference. Thus, the horizontal and vertical gradients at (i, j) are expressed as

g_i = x(i, j+1) - x(i, j), \quad g_j = x(i+1, j) - x(i, j).    (2)

The AG is then obtained by averaging the gradient magnitudes over the image:

AG = \frac{1}{(w-1)(h-1)} \sum_{i=1}^{w-1} \sum_{j=1}^{h-1} \sqrt{\frac{g_i^2 + g_j^2}{2}}.    (3)
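A compact NumPy sketch of (2) and (3) for a single band might look as follows (a hypothetical helper, applied per band and averaged in the same way as entropy):

```python
import numpy as np

def average_gradient(band):
    """Average gradient of one band, Eqs. (2)-(3)."""
    x = band.astype(float)
    g_i = x[:-1, 1:] - x[:-1, :-1]     # horizontal difference, Eq. (2)
    g_j = x[1:, :-1] - x[:-1, :-1]     # vertical difference, Eq. (2)
    return np.mean(np.sqrt((g_i ** 2 + g_j ** 2) / 2.0))   # Eq. (3)
```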

d) Contrast:

The contrast reflects the clarity of the image and the depth of its texture. The clearer the texture, the richer the light-dark transition layers of the RS image and the greater the contrast. There are various methods for calculating the contrast; in this study, it was computed from a gray-level co-occurrence matrix (GLCM) using the following formula:

Contrast = \sum_{i} \sum_{j} (i - j)^2 P(i, j)    (4)

where P(i, j) is the co-occurrence probability at position (i, j) in the GLCM. In this study, the number of gray levels was 16 and the step size was 1. The contrast was calculated in four directions (0°, 45°, 90°, and 135°) and then averaged.
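As an illustration, the GLCM contrast could be computed with scikit-image roughly as follows; the 16-level quantization step and the symmetric/normalized GLCM options are our assumptions about details the letter does not spell out.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(band, levels=16):
    """GLCM contrast of one 8-bit band, Eq. (4), averaged over 0/45/90/135 degrees."""
    # Quantize the 0-255 band to 16 gray levels, as described in the letter.
    q = np.clip((band.astype(float) / 256.0 * levels).astype(np.uint8), 0, levels - 1)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, 'contrast').mean()
```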

2) Spatial Features:

The modulation transfer function (MTF), MTF 0.5, and ground sampling distance (GSD) are the three spatial features used in this study to evaluate the quality of UAV-HSIs. The MTF is one of the most practical and commonly used metrics for RS IQA. Because UAV-HSIs have a high spatial resolution and knife-edge regions are easy to extract from them, the MTF is a good choice for evaluating UAV-HSI quality. In addition, considering the different image resolutions obtained by the UAV at different heights, we added the GSD feature.

a) Modulation Transfer Function:

We adopted the knife-edge method to calculate the MTF of the UAV-HSI. The procedure is shown in Fig. 1. First, a knife-edge region was manually selected from the UAV-HSI. Given that the contrast and inclination angle of the chosen knife edge affect the MTF, and that diffuse reflectance plates are typically needed for reflectance correction of UAV-HSIs, we chose a knife edge formed by the 3% and 48% diffuse reflectance plates and maintained an inclination angle of 5° for the calculation. The edge spread function (ESF) was then obtained by least-squares fitting of all edge points. Next, differentiating the ESF yields the line spread function (LSF). Finally, the LSF was truncated to a certain length, Fourier transformed, and normalized to obtain the MTF curve. In this study, the MTF feature is the MTF value at a normalized frequency of 0.5. In addition, we extracted the frequency at which the MTF equals 0.5, that is, the MTF 0.5 feature, which reflects the contrast of the target image and is associated with the visually dominant low-frequency content.

Fig. 1.  MTF calculation process.
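The sketch below outlines the slanted-edge chain of Fig. 1 in NumPy (edge location, ESF binning, LSF, FFT, normalization). It is a simplified stand-in, not the authors' implementation: the edge is located by the maximum horizontal gradient, a 4x oversampling factor is assumed, and no windowing or noise suppression is applied.

```python
import numpy as np

def knife_edge_mtf(roi, oversample=4):
    """Simplified slanted-edge MTF sketch for a single-band knife-edge ROI."""
    roi = roi.astype(float)
    rows, cols = np.indices(roi.shape)

    # 1) Coarse edge position per row (column of maximum horizontal gradient),
    #    then a least-squares line fit to the slightly slanted edge.
    edge_col = np.abs(np.diff(roi, axis=1)).argmax(axis=1) + 0.5
    slope, intercept = np.polyfit(np.arange(roi.shape[0]), edge_col, 1)

    # 2) ESF: bin every pixel by its signed distance from the fitted edge.
    dist = cols - (slope * rows + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=roi.ravel())
    esf = sums[counts > 0] / counts[counts > 0]

    # 3) LSF: derivative of the ESF.
    lsf = np.diff(esf)

    # 4) MTF: normalized magnitude of the Fourier transform of the LSF.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)   # cycles per original pixel
    return freq, mtf
```

The two spatial features of this study would then be read off the returned curve, e.g., the MTF feature as np.interp(0.5, freq, mtf), and the MTF 0.5 feature as the first frequency at which the curve drops to 0.5.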

b) Ground Sampling Distance:

The GSD of an RS image is also called the image resolution or ground resolution. It is the distance between the centers of two consecutive pixels measured on the ground. When evaluating the quality of RS images with different resolutions, the GSD is a critical factor.
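For a nadir-looking frame sensor, the GSD follows from the standard pinhole relation; the sketch below uses that relation with illustrative (not Rikola-specific) sensor parameters.

```python
def ground_sampling_distance(height_m, pixel_pitch_um, focal_length_mm):
    """Approximate nadir GSD in metres per pixel: GSD = H * pixel_pitch / focal_length."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Example with hypothetical sensor values: 80 m flight height, 5.5 um pixels, 9 mm lens.
print(ground_sampling_distance(80, 5.5, 9))   # ~0.049 m per pixel
```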

3) Spectral Features:

The spectral angle mapper (SAM) and spectral information divergence (SID) are the two spectral features used in this study to evaluate the quality of UAV-HSIs. These two features evaluate the differences between the image and reference spectra in terms of cumulative difference and overall similarity, respectively. Reference spectra for the various ground objects imaged by the UAV hyperspectral sensor would otherwise have to be looked up in a spectral library, and for some ground objects it is challenging to find an accurately matching spectrum. Given that UAV-HSIs generally require diffuse reflectance plates for reflectance correction, the standard diffuse reflectance plate spectra were selected as reference spectra. In this study, we used the spectra of the 3%, 22%, 48%, and 64% diffuse reflectance plates as reference spectra.

a) Spectral Angle Distance:

The spectral angle distance (the SAM feature) treats two spectral curves as vectors in an N-dimensional space and characterizes their difference by the generalized angle between them. The calculation formula is as follows:

SA(x, y) = \arccos \frac{\sum_{i=1}^{N} x_i y_i}{\sqrt{\sum_{i=1}^{N} x_i^2} \sqrt{\sum_{i=1}^{N} y_i^2}}    (5)

where x and y are two spectral curves, x = (x_1, x_2, x_3, ..., x_N), y = (y_1, y_2, y_3, ..., y_N), and N is the total number of bands.
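A direct NumPy translation of (5), as an illustrative helper returning the angle in radians:

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle between two spectra, Eq. (5), in radians."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cos_a = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))   # clip guards against rounding
```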

b) Spectral Information Divergence:

The probability vectors of the two spectra x and y are a = (a_1, a_2, ..., a_N) and b = (b_1, b_2, ..., b_N), respectively, where a_i = x_i / \sum_{i=1}^{N} x_i and b_i = y_i / \sum_{i=1}^{N} y_i. Based on information theory, the self-information of x and y is I_i(x) = -\lg a_i and I_i(y) = -\lg b_i, respectively. From this, the relative entropy of y and x can be obtained as

D(x \| y) = \sum_{i=1}^{N} a_i D_i(x \| y) = \sum_{i=1}^{N} a_i \left[ I_i(y) - I_i(x) \right] = \sum_{i=1}^{N} a_i \lg(a_i / b_i).    (6)

Similarly, the relative entropy with respect to x and y can be obtained as

D(y \| x) = \sum_{i=1}^{N} b_i \lg(b_i / a_i).    (7)

Thus, the SID of x and y is

SID(x, y) = D(x \| y) + D(y \| x).    (8)
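Putting (6)–(8) together, a minimal sketch follows; the base-10 logarithm matches the letter's lg, and the small epsilon guarding against zero reflectance values is our addition.

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """SID between two spectra, Eqs. (6)-(8)."""
    a = np.asarray(x, dtype=float) + eps
    b = np.asarray(y, dtype=float) + eps
    a /= a.sum()                        # probability vector a_i
    b /= b.sum()                        # probability vector b_i
    d_xy = np.sum(a * np.log10(a / b))  # D(x||y), Eq. (6)
    d_yx = np.sum(b * np.log10(b / a))  # D(y||x), Eq. (7)
    return d_xy + d_yx                  # Eq. (8)
```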

B. FCE Method

In the FCE, it is necessary to first establish the indicator set U = \{u_1, u_2, u_3, \ldots, u_m\} and the comment set V = \{v_1, v_2, v_3, \ldots, v_n\}. In this study, U was constructed using the nine features described in Section II, and V was constructed using five assessment grades, namely, very bad (v_1), poor (v_2), fair (v_3), good (v_4), and excellent (v_5). Then, the fuzzy matrix R is constructed as

R = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mn} \end{bmatrix}    (9)

where r_{ij} denotes the membership degree of the image, evaluated on the ith of the m features, to the jth of the n assessment grades.

The shapes of the membership functions used to calculate the membership degree have various forms, and the common shapes include triangle, trapezium, and Gaussian distributions. Because the comment set had more assessment grades and the feature value resembled a fuzzy range in this study, we chose a trapezium distribution.
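The sketch below shows one way the trapezoidal membership functions and the matrix R of (9) could be assembled. The exact trapezoid breakpoints would be derived from the thresholds in Table I, which are not reproduced here, so the helper names and the breakpoint layout are assumptions.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Generic trapezoidal membership function with breakpoints a <= b <= c <= d."""
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

def fuzzy_matrix(feature_values, breakpoints):
    """Assemble the m x n fuzzy matrix R of Eq. (9).

    feature_values: the nine extracted feature values (length m).
    breakpoints[i][j]: the (a, b, c, d) breakpoints of the trapezoid for
    feature i and grade j, derived from the Table I thresholds.
    """
    m, n = len(feature_values), len(breakpoints[0])
    R = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            R[i, j] = trapezoid(feature_values[i], *breakpoints[i][j])
    return R
```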

The comprehensive evaluation formula is as follows:

B = W \circ R    (10)

where B is the membership degree of the image to each quality grade, W is the fuzzy weight vector of the evaluation factors, and \circ is a fuzzy synthesis operation.

The synthetic operation typically includes the main-factor-determination type, the main-factor-highlighting type, and the weighted-average type, among others. The weighted-average type considers all factors and can fully reflect the role of each feature; therefore, a weighted-average synthetic operator was used in this study. Finally, the five grades are assigned scores of 1, 2, 3, 4, and 5, and the weighted sum of the membership degrees gives the comprehensive quality score.
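In code, the weighted-average synthesis of (10) followed by the 1–5 score assignment reduces to a few lines; the normalization of B below is our assumption.

```python
import numpy as np

def fce_score(weights, R, grade_values=(1, 2, 3, 4, 5)):
    """Weighted-average fuzzy synthesis, Eq. (10), plus the 1-5 score assignment."""
    W = np.asarray(weights, dtype=float)
    B = W @ np.asarray(R, dtype=float)       # membership of the image to each grade
    B = B / B.sum()                          # normalize memberships (assumption)
    score = float(B @ np.asarray(grade_values, dtype=float))
    return B, score
```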

In addition, it is worth noting that weight is an important parameter in the FCE. To determine the appropriate weights, objective, subjective, and combined weights were used. The objective weight was calculated using the entropy method (entropy weight) [11], the subjective weight was calculated using the analytic hierarchy process (AHP weight) [12], and the combined weight was constructed based on the objective and subjective weights as follows:

w_i^{c} = \alpha w_i + (1 - \alpha) v_i    (11)

where w_i^{c} is the combined weight of the ith feature, w_i is the subjective weight, v_i is the objective weight, and \alpha is the proportion coefficient balancing the subjective and objective weights.
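Equation (11) is a one-liner in practice; the renormalization below is a safety step we add (it is a no-op when both weight vectors already sum to 1).

```python
import numpy as np

def combined_weights(subjective_w, objective_w, alpha=0.5):
    """Combined weights of Eq. (11): alpha * subjective + (1 - alpha) * objective."""
    w = alpha * np.asarray(subjective_w, float) + (1 - alpha) * np.asarray(objective_w, float)
    return w / w.sum()   # no-op if both inputs already sum to 1
```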

SECTION III. Data

A total of 23 UAV-HSIs were used in this study, acquired with a Rikola hyperspectral imager carried by a DJI M600 Pro UAV at different flight heights. The ground object types in these images included road, water, bare soil, jujube trees, cotton, wheat, and zucchini at different growth stages. All images contained four diffuse reflectance plates with standard reflectances. Preprocessing of these images included dark current correction, format conversion, band registration, image mosaicking, and reflectance correction [13]. In addition, the pixel values of the reflectance images were rescaled from 0–100 to 0–255 before the feature values were extracted. The final images were cropped to 1800 × 1100 pixels.

SECTION IV. Experiments and Discussion

A. Establishing Fuzzy Evaluation Threshold Table

The fuzzy evaluation threshold table must be determined in advance and is a key component in constructing the fuzzy relationship matrix. Through statistical analysis of the feature values extracted from the 23 UAV-HSIs, the fuzzy evaluation threshold table was obtained (Table I). Compared with the fuzzy evaluation threshold table for satellite images [9], the feature thresholds of UAV-HSIs are significantly different, which indicates the necessity of establishing a dedicated fuzzy threshold table for UAV-HSIs.

TABLE I Fuzzy Evaluation Threshold Table (the values in the table are the intermediate values of each trapezoidal membership function)

B. Feature Weights Test Experiment

First, based on the fuzzy evaluation threshold table, the entropy weight was calculated. Second, based on the subjectively designed fuzzy judgment matrix

Z = \begin{bmatrix}
1 & 1/3 & 1/3 & 1/3 & 1 & 1/2 & 1/5 & 1/2 & 1/2 \\
3 & 1 & 1 & 1 & 1 & 1 & 1/2 & 1 & 1 \\
3 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
3 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1/2 & 1/2 & 1/2 & 1/2 \\
2 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 \\
5 & 2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1
\end{bmatrix}

the AHP weight was calculated. Finally, according to (11), the combined weight was calculated using α = 0.5. The four types of feature weights thus constructed are listed in Table II.

TABLE II Feature Weight Set
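The letter does not state which AHP prioritization scheme was used; a common choice is the principal-eigenvector method, sketched below for a pairwise comparison matrix such as Z.

```python
import numpy as np

def ahp_weights(Z):
    """AHP weights as the normalized principal eigenvector of the judgment matrix."""
    Z = np.asarray(Z, dtype=float)
    eigvals, eigvecs = np.linalg.eig(Z)
    principal = np.argmax(eigvals.real)        # index of the largest eigenvalue
    w = np.abs(eigvecs[:, principal].real)
    return w / w.sum()
```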

To determine the optimal feature weights, we selected nine images from the 23 UAV-HSIs. Because the main distortion types in UAV-HSIs are noise and blur [13], and five assessment grades were designed in Section II, the distortion processing comprised four degrees of Gaussian noise and four degrees of blur. Gaussian noise with a mean of 0 and variances of 0.05, 0.5, 2.5, and 5 was added to obtain noisy images, and Gaussian filters with blur levels of 0.5, 2, 3.5, and 5 pixels were used to create blurred images. The nine original HSIs and the 72 HSIs with different degrees of noise and blurring together constitute a test set spanning five quality grades. Example images of the test set are shown in Fig. 2.
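A sketch of how such a test set could be generated is shown below. The letter does not state the value range on which the noise variances are defined or whether the blur values are kernel standard deviations, so both interpretations here (variances on the 0–255 scale, blur as a per-band Gaussian sigma in pixels) are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.util import random_noise

def make_test_set(cube):
    """Distorted copies of one HSI cube whose values are scaled to [0, 1]."""
    # Gaussian noise: variances 0.05, 0.5, 2.5, 5 interpreted on the 0-255 scale.
    noisy = [random_noise(cube, mode='gaussian', mean=0, var=v / 255.0 ** 2)
             for v in (0.05, 0.5, 2.5, 5)]
    # Gaussian blur: sigmas 0.5, 2, 3.5, 5 pixels, no smoothing across the band axis.
    blurred = [gaussian_filter(cube, sigma=(s, s, 0))
               for s in (0.5, 2, 3.5, 5)]
    return noisy, blurred
```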

Fig. 2. Example images of the five-grade test set of UAV-HSIs.

Fig. 3 shows the results of the FCE of the test set using the four types of feature weights. Overall, the evaluation results obtained with the average weight differed significantly from those of the other three weights, and the quality scores assigned to grade-1 and grade-5 images under the average weight were pulled toward the middle grades. Additionally, comparing the test images in Fig. 3(a), the subjective scores of sample 2 were almost the same as those of samples 5, 7, and 8 and were higher than those of samples 1, 2, 4, and 5; however, the quality scores of sample 2 obtained using the average weight were significantly lower than those of samples 7 and 8. Consequently, the average weight performed worse than the other three weights in distinguishing quality grades and in evaluation accuracy. As shown in [9], feature weights affect the results of the IQA; therefore, it is necessary to determine the feature weights rather than simply using the average weight.

Fig. 3. FCE results of the UAV-HSI datasets with different quality grades. (a) Evaluation results for the nine original HSIs belonging to the excellent quality grade. (b)–(e) Evaluation results for the HSIs with added noise and blurring belonging to the four grades of good, medium, poor, and very bad, respectively; odd- and even-numbered test samples represent noisy and blurred images, respectively. (f) Overall comparison of the FCE results across the different grades of HSIs.

The scores using entropy, AHP, and combined weights showed the same trend, and the scores of the combined weight were between the entropy and AHP weights. As suggested in [11] and [12], combining subjective and objective weights to obtain combined weights solves the problem of weight calculation errors caused by the interaction between features, and the constructed feature weights are more robust in FCE.

C. UAV-HSIs Quality Comprehensive Evaluation Experiment

To further test the robustness and accuracy of the proposed comprehensive evaluation algorithm, we used six UAV-HSIs of the same scene (shown in Fig. 4): original, noisy, blurred, and overexposed HSIs at an 80-m flight height and two original HSIs at 150- and 200-m flight heights. Among these, Fig. 4(a) was acquired at noon (first flight, 13:45), Fig. 4(d) on the second flight (14:33), and Fig. 4(e) and (f) on the third and fourth flights (16:17 and 16:34, respectively). The feature values extracted from these six images are listed in Table III. Examining each individual evaluation criterion gives an understanding of one particular aspect of image quality, but it is difficult to assess the overall image quality in this way, even for the same scene. Therefore, it is necessary to synthesize these evaluation features using the FCE method.

TABLE III Feature Values of the Six HSIs in Fig. 4
Fig. 4. UAV-HSIs of different situations in the same scene. (a) Original image (80-m height). (b) Noisy image (Gaussian noise, variance 2.5). (c) Blurred image (blur level 2). (d) Overexposed image. (e) 150-m-height image. (f) 200-m-height image.

Based on the weight test results in Section IV-B, we selected the combined weight. The quality scores of the six HSIs obtained with the FCE method are shown in the bar chart of Fig. 5. Fig. 5(a) has the highest quality score and the best image quality. Fig. 5(e) and (f) have lower quality scores than Fig. 5(a), and visually Fig. 4(a) also looks better than Fig. 4(e) and (f); the quality score is thus consistent with the subjective assessment. In addition, Fig. 4(a) was acquired at noon, with the best light intensity and imaging conditions, whereas Fig. 4(e) and (f) were acquired when the light intensity and solar altitude angle had decreased and imaging conditions had worsened; the quality score is therefore also consistent with objective facts.

Fig. 5(e) and (f) have essentially the same quality score, and visual assessment cannot reliably determine which is better. Comparing the feature values of images a, e, and f in Table III, the main difference lies in the GSD values caused by the different flight heights; therefore, the GSD feature used in this study is necessary to balance the radiometric and spectral features. Images b and c have significantly lower quality scores than a, with Q_a > Q_c > Q_b, which is consistent with the designed distortion types and levels. Furthermore, image d shows significant spectral distortion: its quality score is significantly lower than that of a, and its spectral and radiometric features also differ markedly. Therefore, the FCE based on the above nine features can also be used for the quality assessment of overexposed HSIs.


Fig. 5. Quality score (bar chart) and membership degree (pie chart) of the FCE of the six HSIs. (a)–(f) The six HSIs described in Fig. 4, respectively.

The membership degrees of the six HSIs obtained using the FCE method are shown in the pie chart of Fig. 5. Fig. 5(a) and (c)–(f) obtained quality grades based on the maximum membership principle (quality grades 5, 4, 3, 4, and 4, respectively), which are consistent with the assigned quality scores. However, the quality grade assessed by the maximum membership principle for Fig. 5(b) (grade 2) differs from the assigned quality score (2.833). Therefore, the maximum membership principle cannot be used to obtain the quality grade of UAV-HSIs in this study, as was done in [12] and [14]. In contrast, the quality score obtained using the weighted-average synthesis operation [15] together with the score assignment considers all factors, reflects the role of each index more fully, and yields a more accurate quality assessment.

SECTION V. Conclusion

In this letter, a no-reference and comprehensive IQA method is proposed for UAV-HSIs using radiant, spatial, and spectral features based on the FCE method. The evaluation results demonstrated that the method can distinguish between different quality grades of UAV-HSIs with noise, blur, overexposure, and different heights. In the fuzzy comprehensive assessment, the quality score obtained using the weighted average synthesis operation and the assigned value operation more accurately represents the quality of the UAV-HSIs. Additionally, the results of the weight tests showed that objective and subjective combined weights can help produce a more reliable fuzzy comprehensive quality assessment. Therefore, we recommend using the combined weight.
