To address the insufficient real-time performance and suboptimal matching accuracy of traditional feature matching algorithms in automotive panoramic surround view systems, this paper proposes a high-performance dimensionality-reduction parallel matching algorithm that integrates Principal Component Analysis (PCA) and Dual-Heap Filtering (DHF). The algorithm uses PCA to map feature points into a lower-dimensional space and performs feature matching with the squared Euclidean distance, significantly reducing computational complexity. To ensure matching accuracy, it applies Dual-Heap Filtering to filter and refine the matched point pairs. To further increase matching speed and make full use of computational resources, it introduces a multi-core parallel matching strategy, greatly improving the efficiency of feature matching. Compared with Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the proposed algorithm reduces matching time by 77% to 80% while improving matching accuracy by 5% to 15%. Experimental results demonstrate that the proposed algorithm delivers excellent real-time matching performance and accuracy, effectively meeting the feature-matching requirements of automotive panoramic surround view systems.

The automotive panoramic surround view system combines partial road images from multiple vehicle-mounted cameras to generate real-time 360° panoramic views centered on the current vehicle. This significantly enhances the vehicle’s perception of the surrounding road environment and improves driving safety [

Currently, researchers have embarked on investigations into feature matching for high-dimensional images, broadly falling into four categories:

It can be seen that the current algorithms for matching high-dimensional features suffer from low matching efficiency and poor real-time performance. There is still considerable room for improvement before these algorithms can be practically applied in automotive panoramic surround view systems. To address these issues and meet the real-time and matching accuracy requirements for local road image stitching in automotive panoramic surround view systems, this paper proposes a high-performance dimensionality reduction parallel algorithm, named SUPD (SURF combining PCA and DHF), based on the SURF algorithm.

Addressing the computational complexity caused by high-dimensional descriptors in traditional SURF algorithm [

The contributions of this algorithm are as follows:

Dimensionality-reduction projection is applied to high-dimensional image feature points, replacing the high-dimensional Euclidean distance with the squared Euclidean distance in a lower-dimensional space. This cuts computational overhead both by lowering the dimensionality of the feature points and by avoiding square-root calculations.

To eliminate erroneous matches, the Dual-Heap Filtering (DHF) algorithm is proposed to filter and refine the matched point pairs, enhancing the matching accuracy of SUPD.

To enhance the concurrency performance of the SUPD algorithm, a multi-core parallel matching strategy is proposed. This strategy involves parallel matching of reference feature points in a grouped manner, thereby improving the efficiency of feature point matching.

Consider the matching process of the partial road images

Define

Define the matching accuracy

Define the matching efficiency

In order to meet the requirements of panoramic surround view systems in terms of matching accuracy and real-time performance, this paper proposes a novel panoramic image feature matching algorithm named SUPD. The algorithm integrates the advantages of the PCA and DHF algorithms to perform dimensionality-reduction matching and purification filtering on high-dimensional feature points. At the same time, a parallel matching strategy fully utilizes computational resources, further improving matching efficiency.

Principal Component Analysis (PCA) is a commonly used dimensionality reduction technique. Its fundamental concept involves projecting high-dimensional data onto a lower-dimensional space, extracting essential features, and reducing redundant information, thereby offering a more concise and efficient data representation for subsequent data analysis and modeling processes [

In this paper, singular value decomposition is used to obtain the projection matrix

From

Once the number of principal components

After obtaining the projection matrix

Upon completing the projection, the distance between each reference feature point

The nearest reference feature point
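The projection-and-ranking steps described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the function and variable names are my own, and the SVD-based fit assumes the reference descriptors are centered before projection.

```python
import numpy as np

def pca_project(reference, n_components):
    """Fit a PCA projection on the reference descriptors via SVD and
    return (projection_matrix, mean).  Rows of Vt are the principal
    directions of the centered data."""
    mean = reference.mean(axis=0)
    centered = reference - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components].T, mean        # shape (d, n_components)

def nearest_by_sed(query, reference, proj, mean):
    """Rank reference points by squared Euclidean distance (SED) in the
    PCA space -- no square roots are taken."""
    q = (query - mean) @ proj
    r = (reference - mean) @ proj
    sed = ((r - q) ** 2).sum(axis=1)
    return int(np.argmin(sed)), sed
```

When `n_components` equals the original dimensionality, the SED ranking coincides with the exact Euclidean ranking; with fewer components it becomes the fast approximate predictor used by SUPD.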

As shown in

| 3D | ED | Rank | PCA | SED | Rank |
| --- | --- | --- | --- | --- | --- |
|  | 2.7 | 1 |  | 3.1 | 1 |
|  | 4.1 | 2 |  | 16.9 | 2 |
|  | 6.3 | 3 |  | 35.5 | 3 |
|  | 7.8 | 4 |  | 61.2 | 4 |

Although using the squared Euclidean distance ranking in the low-dimensional PCA space can predict the similarity between query points and reference feature points, occasional incorrect matches may occur, as depicted in

| 3D | ED | Rank | PCA | SED | Rank |
| --- | --- | --- | --- | --- | --- |
|  | 3.5 | 2 |  | 1.7 | 1 |
|  | 2.3 | 1 |  | 5.0 | 2 |
|  | 5.3 | 3 |  | 22.8 | 3 |
|  | 6.6 | 4 |  | 42.6 | 4 |

To address the aforementioned issue, this paper proposes a Dual-Heap Filtering (DHF) algorithm based on PCA ranking prediction. DHF maintains two heaps: a filtering heap in the PCA space and a validation heap in the original space. Each heap retains the top

When using the DHF algorithm to filter feature points, the first step is to compute the squared Euclidean distance (SED) between the projected reference feature point and the query point in the PCA space. If this SED is greater than the maximum value in the filtering heap, the reference feature point is discarded. Otherwise, the Euclidean distance (ED) between the reference feature point and the query point in the original space is computed. If this ED is greater than the maximum value in the validation heap, the reference feature point is discarded; if it is smaller, the point is accepted. Its ED and SED values are then inserted into the validation heap and the filtering heap, respectively, replacing the current maxima, and both heaps are reordered accordingly. The specific code implementation is as follows:
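A runnable Python sketch of the procedure just described, using `heapq` max-heaps via negated values. This is an illustration, not the authors' original listing: the function names, the contiguous iteration order, and the tie handling are my own choices.

```python
import heapq
import numpy as np

def dhf_match(query, query_low, reference, reference_low, k):
    """Dual-Heap Filtering: the filtering heap tracks the k smallest
    SEDs seen in the PCA space; the validation heap tracks the k
    smallest EDs in the original space.  heapq is a min-heap, so
    values are stored negated to simulate max-heaps."""
    filt, valid = [], []          # negated SEDs / (negated ED, index)
    for i, (r, r_low) in enumerate(zip(reference, reference_low)):
        sed = float(((r_low - query_low) ** 2).sum())
        if len(filt) == k and sed >= -filt[0]:
            continue              # rejected by the filtering heap
        ed = float(np.linalg.norm(r - query))
        if len(valid) == k and ed >= -valid[0][0]:
            continue              # rejected by the validation heap
        if len(filt) == k:
            heapq.heapreplace(filt, -sed)       # evict current max SED
            heapq.heapreplace(valid, (-ed, i))  # evict current max ED
        else:
            heapq.heappush(filt, -sed)
            heapq.heappush(valid, (-ed, i))
    # candidate reference indices, nearest (smallest ED) first
    return [i for _, i in sorted(valid, key=lambda t: -t[0])]
```

Because the filtering heap is consulted first, the expensive original-space ED is computed only for reference points that survive the cheap SED test.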

The relationship between the adjustment factor

Traditional matching algorithms typically involve matching according to the index order of reference feature points, with each matching task performed on a per-reference-point basis. For instance, matching algorithms utilizing kd-trees necessitate continuous searching of reference feature points based on tree-like indexing [

Due to the relatively independent nature of the matching tasks for each reference feature point in the proposed algorithm, it is particularly suitable for parallel processing when dealing with a large number of reference feature points [

During feature matching, the first step involves partitioning the reference feature point set
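The grouped parallel strategy can be sketched as a map-reduce over contiguous subsets of the projected reference set. The sketch below is an assumption-laden illustration (function names are mine): it uses a thread pool for portability, whereas a process pool, or the multi-core C++ implementation the paper targets, would give true multi-core parallelism for CPU-bound matching.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def match_subset(query_low, subset_low, offset):
    """Match one query point against a single subset of projected
    reference points; returns (best SED, global index)."""
    sed = ((subset_low - query_low) ** 2).sum(axis=1)
    j = int(np.argmin(sed))
    return float(sed[j]), offset + j

def parallel_match(query_low, reference_low, n_groups):
    """Partition the projected reference set into n_groups contiguous
    subsets, match each subset concurrently, then merge the partial
    results to obtain the global nearest neighbor."""
    chunks = np.array_split(reference_low, n_groups)
    offsets = np.cumsum([0] + [len(c) for c in chunks[:-1]])
    with ThreadPoolExecutor(max_workers=n_groups) as pool:
        futures = [pool.submit(match_subset, query_low, c, int(o))
                   for c, o in zip(chunks, offsets)]
        partial = [f.result() for f in futures]
    return min(partial)[1]   # global index of the nearest reference point
```

Each subset's task touches only its own slice of the reference set, which is why the per-group matching tasks are independent and merge with a single reduction.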

In this section, brute-force search (BF) will be used as a comparative algorithm to analyze the time and space complexity of the SUPD. For the matching of a single query feature point, the BF algorithm calculates and matches the distances of all reference feature points in the original space through exhaustive search [
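For reference, a brute-force matcher of the kind described can be written in a few lines; this is an illustrative baseline, not OpenCV's `BFMatcher`.

```python
import numpy as np

def bf_match(query, reference):
    """Brute-force baseline: exhaustively compute the Euclidean
    distance from the query to every reference descriptor in the
    original high-dimensional space and return the nearest one."""
    dists = np.linalg.norm(reference - query, axis=1)
    j = int(np.argmin(dists))
    return j, float(dists[j])
```

Every query point incurs a full pass over the reference set in the original dimensionality, which is the cost SUPD's projection and filtering steps are designed to avoid.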

However, in the actual matching process, performance can be significantly improved if most unnecessary distance calculations are avoided, and the SUPD algorithm is designed with exactly this characteristic. It first projects feature points into a low-dimensional PCA space and then applies the K-nearest neighbors (KNN)-based DHF algorithm for filtering. Although these two steps introduce some computational overhead, they substantially reduce the computational load of the subsequent matching process.

In the dimensionality reduction process, Principal Component Analysis projects feature points from the high-dimensional space to a low-dimensional space, eliminating redundant information among features. Although PCA involves computing the covariance matrix and decomposing eigenvectors, which incurs some overhead, it significantly reduces the amount of data processed in the subsequent matching steps, so from an overall performance perspective this overhead is worthwhile. Next, the DHF applies KNN-based filtering. The space complexity of KNN mainly stems from the construction of the tree, while the tree search process requires no additional storage space [

Considering these computational overheads, the space complexity of the SUPD after dimensionality reduction and filtering can be approximated as

Suppose the total time required for the BF algorithm to compute all distances for a single query feature point in the original space is denoted as

The principal parameters of the SUPD encompass the number of principal components (

Firstly, when employing Principal Component Analysis for dimensionality reduction, it is essential to determine the number of retained principal components, namely the reduced dimensionality. Having too many principal components (large

Subsequently, the DHF is employed to refine and extract matching point pairs. The size of the filtering heap is determined by

Finally, the projected reference feature point set is divided into

The hardware platform for this experiment is an AMD Opteron Processor 6376 CPU @ 2.30 GHz with 16 MB L3 cache, 16 cores, 32 logical processors, and 128 GB of memory. The software platform consists of a 64-bit Windows 10 operating system, Visual Studio Code, and OpenCV 4.

To meet the research requirements, this paper utilized a self-constructed dataset comprising 100 sets of local road images with a certain degree of overlap. The images are sized at 1000 pixels × 700 pixels and are formatted as JPG. The dataset covers various road scenarios, including urban roads, rural lanes, highways, etc. It exhibits significant diversity in perspectives, lighting conditions, and weather conditions to better reflect the complexity of real road environments.

Total Matching Time (

Matching Accuracy (

Root Mean Square Error of Matches

This section establishes the precise values of each parameter when the algorithm achieved optimal performance through experimentation, based on the parameter ranges analyzed in

Speedup Ratio: Matching time of brute-force matching divided by the matching time of the algorithm proposed in this paper.

Acceleration Ratio: Serial matching time of the algorithm divided by parallel matching time.

Comprehensive Efficiency Index: This index is obtained by multiplying the speedup ratio with the matching accuracy rate, thereby comprehensively considering the algorithm’s runtime and matching precision.
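The three indices above reduce to simple ratios; as a minimal sketch (the function names are mine, not the paper's):

```python
def speedup_ratio(t_brute_force, t_supd):
    """Brute-force matching time divided by the proposed algorithm's time."""
    return t_brute_force / t_supd

def acceleration_ratio(t_serial, t_parallel):
    """Serial matching time divided by parallel matching time."""
    return t_serial / t_parallel

def efficiency_index(speedup, accuracy):
    """Comprehensive efficiency index: speedup times matching accuracy."""
    return speedup * accuracy
```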

According to the analysis in

The experimental results are shown in

Considering the relatively high independence of the number of subset divisions (

The experimental results are depicted in

To verify the performance of SUPD, this paper divides the dataset into five groups according to color and texture complexity, with complexity increasing from group 1 to group 5. For each experimental group, the average total matching time, matching accuracy, and MERD were calculated. Additionally, comparative experiments were conducted with traditional SIFT and SURF to obtain more comprehensive performance comparisons; both SIFT and SURF were paired with OpenCV's built-in brute-force matcher for feature matching.

Total matching time is a key metric that effectively measures the real-time performance of the algorithms.

Matching accuracy effectively reflects the precision and robustness of feature matching algorithms. Its calculation method is shown in

To provide a more intuitive comparison of matching accuracy, this paper takes the matching results of the first experiment in the third group as an example and analyzes the performance of SUPD relative to traditional SIFT and SURF, as shown in

From

To represent the overall matching quality between feature point pairs more intuitively, this paper introduces the Matching Error Root Mean Square (MERD) as an evaluation metric. A smaller MERD value indicates higher overall matching quality between feature point pairs, whereas a larger value indicates poorer quality. The calculation formula for MERD is given in
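Since the paper's MERD formula is not reproduced here, the sketch below assumes the common definition of a root-mean-square reprojection error of matched pairs under an estimated homography; the homography-based error model and all names are my assumptions, not the authors' exact definition.

```python
import numpy as np

def merd(src_pts, dst_pts, H):
    """Root-mean-square reprojection error of matched point pairs:
    project src_pts through the homography H and measure the RMS
    distance to the corresponding dst_pts.  (Assumed definition.)"""
    ones = np.ones((len(src_pts), 1))
    proj = np.hstack([src_pts, ones]) @ H.T   # to homogeneous coords
    proj = proj[:, :2] / proj[:, 2:3]         # back to Cartesian
    errors = np.linalg.norm(proj - dst_pts, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```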

Addressing the inefficiencies of traditional image feature matching algorithms in handling high-dimensional feature points within partial road images, this paper proposes a high-performance dimensionality reduction and parallel matching algorithm that combines Principal Component Analysis (PCA) and Dual-Heap Filtering. Through PCA, the algorithm projects the feature point set into a lower-dimensional space and employs squared Euclidean distance for rank estimation, effectively reducing the computational complexity during the matching process. Furthermore, to ensure the accuracy of feature matching, the proposed algorithm employs the Dual-Heap Filtering for refining matched point pairs. In addition, the algorithm utilizes a parallel structure to fully leverage computational resources, enhancing the overall matching speed. Experimental results demonstrate that the proposed algorithm holds distinct advantages over traditional image feature matching algorithms in terms of total matching time, matching accuracy, and MERD. Thus, the algorithm strikes a balance between matching accuracy and real-time performance, making it suitable for efficient matching of high-dimensional feature points in partial road images.

Despite the advantages presented by this algorithm, there are challenges that might arise when it is applied to extremely large datasets, especially within environments constrained by limited computational resources. In response to these challenges, future research will be directed towards the development of more effective strategies and optimization techniques. This includes plans to update the experimental equipment and to conduct experiments with a broader range of datasets to enhance the scalability and robustness of the algorithm in complex environments. Furthermore, the integration of machine learning methods for the automatic adjustment and optimization of the algorithm’s parameters is under consideration. Such steps are anticipated to improve the adaptability and performance of the algorithm in various application scenarios.

The authors thank their institutions for infrastructure support.

The authors would like to thank the National Natural Science Foundation of China (61803206), the Key R&D Program of Jiangsu Province (BE2022053-2), and the Nanjing Forestry University Youth Science and Technology Innovation Fund (CX2018004) for partly funding this project.

The authors confirm contribution to the paper as follows: study conception and design: Guangbing Xiao, Ruijie Gu; data collection: Ning Sun; analysis and interpretation of results: Ruijie Gu, Ning Sun; draft manuscript preparation: Ruijie Gu, Yong Zhang. All authors reviewed the results and approved the final version of the manuscript.

The authors confirm that the data supporting the findings of this study are available within the article.

The authors declare that they have no conflicts of interest to report regarding the present study.