Introduction
With rapid urbanization, the number of cars worldwide keeps increasing. This has caused many traffic problems, in particular more traffic accidents, which inflict serious losses of life and property. A practical problem in accident handling is how to calculate the driving speed of the accident vehicle more accurately and rapidly. The identified vehicle speed can not only be used to analyze the nature of a traffic accident and determine its cause but is also an important basis for dividing responsibility for the accident. Traditional speed identification methods, such as radar detectors and buried inductive-loop detectors, can neither measure accurately at arbitrary locations nor record the whole scene of a complex traffic accident. Video-based vehicle speed identification, however, can settle these problems. In recent years, many researchers have studied video-based speed measurement. In 2012, Guoxiang 1 calculated the average speed of the vehicle between a starting point and an end point from the frame rate of the video, the number of frames between the two points, and the measured distance at the traffic accident site. The method can only calculate the vehicle speed "point to point" between fixed feature points and cannot give the vehicle speed at an arbitrary position between them. To this end, in 2015, Zhihai et al. 2 proposed mathematical models for speed calculation based on time interpolation and distance interpolation.
To improve the accuracy of speed identification, in 2017, Lieyun 3 proposed using the direct linear transformation method to map between the image coordinate system and the object coordinate system, obtaining the object coordinates of the target vehicle's feature points from their image coordinates, and then applying the inter-frame difference method to calculate the driving speed. The same year, Luvizon et al. 4 selected distinctive features in the license plate region, tracked them across multiple frames, and measured the vehicle speed by comparing the feature tracks with known real-world measurements. In 2020, Wu et al. 5 measured vehicle speed by averaging the instantaneous speeds between multiple frames, which ignores the effect of acceleration. The same year, Vakili et al. 6 proposed a new approach based on the geometry of the imaging system and the definition of the solid angle. Using video taken by a single camera, the method extracts frames, determines the position of the license plate in the image plane in two frames, and counts the number of pixels in the license plate image; the solid-angle relationship is then used to calculate the displacement and speed of the vehicle between the two frames. Although this method is not affected by license plate height, camera installation height, or lane, its computational cost is high and its accuracy is low. In 2010, Rahim et al. 7 proposed a vehicle speed estimation method based on video analysis that converts 2D image points into a 3D virtual world to obtain the actual position of the vehicle in 3D space; these three-dimensional points are used to measure displacement over time and thus the vehicle speed. Czajewski and Iwanowski 8 proposed using the license plate as a feature block to locate and track the target vehicle, and used the tracking results to calculate the vehicle speed. To improve the accuracy of the speed measurement, an adaptive thresholding algorithm is adopted when segmenting the license plate; however, the algorithm ignores the height difference of the reference plane, resulting in a large calculation error. Tourani et al. 9 used the T-HOG algorithm to extract target features and pyramidal KLT to track them, yielding motion vectors of moving vehicles, where each vector represents the instantaneous speed of the vehicle at a given time. Nguyen et al. 10 found that in vehicle speed detection, even with a fixed camera, disturbances such as vibrations caused by vehicles passing over the road surface and windy weather may skew the results; they therefore proposed compensating the background image with vertical and horizontal histograms to eliminate vibration noise. Maduro et al. 11 proposed a speed measurement method based on lane lines, which calibrates the camera by computing the ratio between pixel distance and actual distance from real road information in the video (such as zebra crossings and lane markings); the actual distance the vehicle moves within a certain number of frames is then obtained from this ratio, giving the average speed over that distance. Sri Harsha et al.
12 proposed a vehicle detection, tracking, and speed measurement model based on robust image processing. The model applies enhanced preprocessing, background subtraction, morphological operations, and feature mapping for the detection, tracking, and speed estimation of moving vehicles. To achieve optimal performance, a multidirectional moving-vehicle detection and filtering scheme is developed, which jointly considers factors such as intensity and the direction of moving pixels to detect candidate vehicles effectively in traffic videos. To enhance the background difference, a new multidirectional intensity-stroke estimation method is proposed, which plays an important role in distinguishing vehicle regions from other background content. Chintalacheruvu and Muthukumar 13 proposed an efficient video-based vehicle detection algorithm based on Harris-Stephens corners, and used it to build a stand-alone vehicle detection and tracking system that determines the number and speed of vehicles on arterial roads and highways. Wu et al. 14 proposed a method for estimating the speed of a single vehicle with a monocular camera: pixel displacement is converted into distance traveled along the road, the distance and height of the tracked point above the road plane are estimated, and the previously estimated speed is then adjusted according to the estimated heights of the object and the camera. This method measures vehicle speed with high precision, but its computational cost is high. Worrall et al. 15 proposed using camera calibration to compute the vehicle speed: after obtaining the corner information of a grid placed on the road surface, the camera parameters were obtained by fitting the calibration model, completing the camera calibration, and the vehicle speed was then calculated.
Dailey and Li 16 proposed a new method for estimating vehicle speed from a sequence of images taken by an uncalibrated camera. Exploiting the inherent geometric relationships in the image and some conventional assumptions, the algorithm reduces the problem to a one-dimensional geometric one and uses the inter-frame difference method to calculate the average vehicle speed from the motion trajectory and geometry in the image sequence. Giovanni and Castello 17 calculated the instantaneous speed of vehicles with monocular and binocular configurations using license-plate-reading speed vision. The vehicle is tracked across two images, yielding a high-precision estimate of the vehicle speed, but the method requires a demanding system configuration and a complex algorithm. To estimate vehicle speed accurately, Cathey and Dailey 18 proposed a new method that automatically computes camera calibration information: a rectification technique removes the perspective effect, and correlation techniques establish the necessary scale factors. Using the temporal correlation between video frames, a robust speed estimate is obtained. Although the method is not affected by license plate height, camera installation height, or lane, it is highly complex, has poor robustness, and has low computational accuracy. Man-Woo et al. 19 developed a vision-based vehicle information detection system that integrates vehicle detection, vehicle tracking, and camera calibration algorithms to obtain vehicle counts and estimate the speed of individual vehicles. The system works with a fixed single-camera view and is suitable for video streams captured from highway CCTV.
Finally, manual calibration of feature points suffers from large errors and low efficiency, and the traditional time interpolation method ignores the acceleration between frames, resulting in low accuracy of the computed vehicle speed. Therefore, this article proposes using MATLAB image processing functions to automatically identify the feature points of the target vehicle, improving recognition accuracy and efficiency, together with a time interpolation method that accounts for the acceleration between frames, to further improve the accuracy of the speed calculation. The experimental results show that the method obtains more accurate speed estimates than the traditional method.
Target vehicle detection and tracking
The feature selected in reference 20 is the vehicle tail line, that in reference 21 is the connection between the front and rear of the carriage, and that in reference 22 is the point at which a tire contacts the ground. In this article, the tire center of the target vehicle is selected as the feature point for the speed calculation. Therefore, the wheel center is identified to obtain its coordinate position; the identification process is shown in Figure 1.

The wheel center identification flow chart.
Image preprocessing
During image acquisition, the collected images suffer from interference and noise due to stains on the wheel surface, weather conditions, lighting conditions, and other factors. Image preprocessing can effectively highlight the useful information in the vehicle image and eliminate or reduce the useless information, thus improving the accuracy and recognition rate of wheel-center recognition. 23 Therefore, the acquired images need to be preprocessed before wheel-center recognition. The preprocessing flow, designed around the characteristics of the vehicle and of the captured image, is shown in Figure 2.

Image preprocessing flow chart.
The ROI is the region of interest containing the target vehicle. Selecting the vehicle tire as the region of interest via ROI positioning greatly reduces the computation required in subsequent steps. 24 Therefore, this article obtains the image shown in Figure 3(a) through ROI positioning and appropriate cropping. For convenience, the RGB true-color image is converted into a grayscale image and then into a binary image with an appropriate threshold, as shown in Figure 3(b). 25 Image segmentation divides the image into several specific regions with distinctive properties; since the part we want is the tire, the tire region is extracted as a second ROI. Noise is removed by segmenting the binary image and applying morphological filtering (dilation, opening, etc.), after which the vehicle tire is identified, as shown in Figure 3(c). 26
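As a rough illustration of the thresholding and opening steps above, the following sketch implements them with plain NumPy on a tiny synthetic mask; the 3×3 structuring element and the threshold value are illustrative assumptions, not the paper's settings, and a real pipeline would run on the cropped ROI image.

```python
import numpy as np

def binarize(gray, thresh):
    """Threshold a grayscale image into a binary (0/1) mask."""
    return (gray > thresh).astype(np.uint8)

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    padded = np.pad(mask, 1, constant_values=0)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    padded = np.pad(mask, 1, constant_values=0)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def open_op(mask):
    """Morphological opening (erosion then dilation) removes speckle noise."""
    return dilate(erode(mask))
```

Opening a mask containing a solid 3×3 block plus one isolated noise pixel keeps the block and removes the noise pixel, which is exactly the effect the preprocessing relies on.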

Feature point identification of car images. (a) ROI image, (b) binarization image, (c) tire area image, and (d) image of the tire center.
Wheel center feature point identification
For a circular target object, the radius is a crucial parameter, since it determines the area and circumference of the object as well as the coordinates of its center; this article therefore also needs to determine the radius of the vehicle tire. 27 Calculating the tire radius requires the tire boundary. Therefore, the function "bwtraceboundary" is first used to obtain the tire boundary, the corresponding parameters are then obtained from the standard formulas of a circle, and finally the tire radius is computed and the tire center is marked (Figure 3(d)).
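The paper recovers the center and radius from the traced boundary using the formulas of a circle. One common way to do this, sketched here as an assumption rather than the paper's exact procedure, is an algebraic least-squares circle fit to the boundary points:

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit.

    Fits x^2 + y^2 = 2*a*x + 2*b*y + c, which is linear in (a, b, c);
    (a, b) is the center and r = sqrt(c + a^2 + b^2) the radius.
    """
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r
```

Fed with boundary pixels of the tire, the fit returns the wheel-center coordinates directly, even when the boundary is noisy or incomplete.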
Improved camera calibration method
Identification of the detection areas
In the field of intelligent transportation, video speed measurement is a technology combining camera monitoring equipment with digital image processing. It uses no special speed-measuring equipment (loops, radar, ultrasonics, etc.) and obtains the travel speed of the target vehicle solely by analyzing the vehicle monitoring signal. In this process, an appropriate detection area is selected to improve detection efficiency. The detection area should be designated at the accident site, and the following points should be observed when defining it: select the detection area reasonably according to where the target vehicle's accident occurred; according to the road environment, set the detection area as far as possible where nothing obstructs observation of the video image; and, where possible, use the locations of road markings (such as sidewalks and lane lines) to delimit the boundary lines.
The setting principle of the detection area proposed in this article is shown in Figure 4. The camera is located above the detection area.

Schematic diagram of the detection locale. (a) Image pavement detection area and (b) actual pavement detection area.
However, because foreshortening makes the image displacement differ from the actual displacement, the detection area appears as a trapezoid in the actual image in Figure 5. Therefore, to measure the vehicle speed accurately, we need a transformation relating the image plane to 3D space.

Actual road image.
Principle of photogrammetry
The photogrammetry principle usually involves the world coordinate system, the camera coordinate system, the image physical coordinate system, and the image pixel coordinate system, together with the transformations among them.
According to the photogrammetry principle (Figure 6), the direct transformation relationship between the image pixel coordinate system and the world coordinate system is shown in formula (1).

Principle of photogrammetry.
where (
Formula (1) shows that there are eight unknown parameters in the equation, and two of them can be solved by substituting the image coordinates and corresponding world coordinates of one regional feature point at a time. Therefore, once the world coordinates and image coordinates of the four vertices of the detection region are known, the conversion between the image coordinate system and the world coordinate system can be solved; the area enclosed by the four vertices is the calibration region, and the specific perspective matrix parameter equation is shown in equation (2). 27
The image coordinates of the four vertices can be obtained directly through image processing and recognition, and the corresponding three-dimensional spatial coordinates can be measured directly at the accident site. After solving for the parameters in equation (2), the conversion between image coordinates and real-world coordinates is obtained.
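The eight-parameter solve described above is the standard four-point perspective (homography) estimation. A minimal NumPy sketch follows; the point values in the usage are made up for illustration and are not the paper's calibration data.

```python
import numpy as np

def solve_perspective(img_pts, world_pts):
    """Solve the 8 unknown perspective parameters from 4 point pairs.

    Each correspondence (x, y) -> (X, Y) contributes two linear
    equations in the entries of the 3x3 matrix H (with h33 fixed to 1).
    """
    A, b = [], []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(H, x, y):
    """Map an image point into world coordinates via the homography."""
    X, Y, W = H @ np.array([x, y, 1.0])
    return X / W, Y / W
```

Once `H` is known from the four vertices of the detection region, `map_point` converts any wheel-center image coordinate into road-plane coordinates, which is what the speed calculation consumes.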
Speed estimation model
Improved thought
Videos used to calculate vehicle speed usually comprise fixed-camera video and onboard video; this article discusses only fixed-camera video. When calculating vehicle speed from accident video, the usual assumption is that the vehicle speed is constant between two adjacent frames. The time interpolation method in reference 28 is likewise derived under the assumption of uniform motion.
However, when the driver senses that an accident is about to happen, emergency braking is applied in most cases. Therefore, to improve the accuracy of the speed calculation, acceleration is incorporated into the traditional interpolation method. The centers of the front and rear tires of the vehicle are selected as the feature points. The vehicle speed calculation model is shown in Figure 7.

Schematic diagram of the vehicle speed calculation model by the time interpolation method.
where
Traditional temporal interpolation method
First, a series of consecutive vehicle speeds is calculated using the traditional time interpolation method; the average speed
where
As the target vehicle moves forward, the second frame of the video is taken as the new starting point and the above process is repeated, yielding a series of consecutive per-frame vehicle speeds
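The formulas are truncated in this extract, so the following is only a plausible sketch of the traditional method as described: the average speed over one feature length L (here the wheelbase between the front and rear tire centers) traversed in n frames at frame rate f is v = L·f/n. The wheelbase value and frame counts in the usage are illustrative assumptions.

```python
def interval_speeds(wheelbase_m, frame_counts, fps):
    """Average speed each time the wheelbase-length is traversed.

    v = L / (n / fps), converted from m/s to km/h. `frame_counts`
    holds the number of frames each traversal took, one per
    sliding starting frame.
    """
    return [wheelbase_m * fps / n * 3.6 for n in frame_counts]
```

For example, a 2.7 m wheelbase covered in 5 frames of 25 fps video gives 2.7 × 25 / 5 × 3.6 = 48.6 km/h.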
Improved temporal interpolation method
To improve the traditional time interpolation method and raise the accuracy of speed identification, accelerated vehicle motion is considered between two adjacent frames. The improved time interpolation method corrects the speeds obtained above; the improved calculation proceeds as follows:
The first step is to solve the
where
The second step is to solve the average acceleration according to formula (6)
where
The third step is to correct the time over one feature length, using formula (7)
where
The fourth step is to solve the vehicle speed at the integer frame, using formula (8)
where the
The fifth step is to correct the vehicle speed, using formula (9)
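The exact formulas (5)–(9) are not reproduced in this extract, so the sketch below implements only the generic constant-acceleration correction that the steps describe: under uniform acceleration, an average speed over an interval equals the instantaneous speed at the interval's midpoint, which yields both the acceleration and a corrected instantaneous speed.

```python
def corrected_speed(v_avg1, v_avg2, t1, t2):
    """Correct two consecutive average speeds for acceleration.

    v_avg1, v_avg2: average speeds over consecutive intervals of
    duration t1 and t2. With constant acceleration, each average
    equals the instantaneous speed at its interval midpoint, so the
    midpoints are (t1 + t2) / 2 apart:
        a = (v_avg2 - v_avg1) / ((t1 + t2) / 2)
    and the instantaneous speed at the boundary between the two
    intervals is
        v = v_avg1 + a * t1 / 2
    """
    a = (v_avg2 - v_avg1) / ((t1 + t2) / 2.0)
    v_boundary = v_avg1 + a * t1 / 2.0
    return a, v_boundary
```

For a vehicle with true speed v(t) = 10 + 2t m/s, the averages over [0, 1] s and [1, 2] s are 11 and 13 m/s; the sketch recovers a = 2 m/s² and the exact boundary speed v(1) = 12 m/s, whereas the traditional uniform-speed assumption would report 11 or 13.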
Example verification
Accident background
Basic case: At about 13:50 one day in October 2020, at a city intersection, a small off-road vehicle (hereinafter, vehicle A) traveling from east to west collided with a small car (hereinafter, vehicle B) traveling from south to north.
Experimental result
Identification of the detection areas
According to the collision locations and characteristics of the two vehicles and the road environment shown in the video image, a rectangular detection area with vertices A, B, C, and D is selected in Figure 8. Due to the camera's foreshortening, the rectangular detection area appears as a trapezoid in the image.

Diagram of the detection area in the image.
Calibration of camera parameters
The coordinates of the four vertices of the detection region in the video image are obtained using MATLAB image processing and recognition, and the actual coordinates of the four vertices are obtained by on-site measurement (Table 1).
Coordinates of the four vertices in the detection region.
With the data in Table 1, the parameters of the camera can be determined from equation (1) and obtained by solving:
Determination of the characteristic point coordinates
This article takes 6 s frame 15 as an example. Key frames were extracted by observing the location of the target vehicle in the video. First, the RGB image is converted to grayscale, giving Figure 9(b); then, according to the grayscale histogram in Figure 9(c), an appropriate threshold is selected for binarization. Image segmentation and morphological processing are used to detect the target vehicle, shown in Figure 9(f); the vehicle tire is then extracted by further segmentation and morphological processing, with the result shown in Figure 9(g); finally, the wheel center is identified by Hough circle detection and its image coordinates are marked, with the result shown in Figure 9(h). The actual coordinates of the feature points are obtained by converting the image coordinates (Table 2).
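The Hough circle detection used for the wheel center can be sketched as a simple voting procedure; this illustration fixes a single known radius and runs on synthetic edge points rather than a real tire image, whereas a full detector would also sweep over candidate radii.

```python
import math
from collections import Counter

def hough_circle_center(edge_pts, radius, n_angles=90):
    """Vote for circle centers.

    Every edge point votes for all integer centers lying `radius`
    away from it; the center of the true circle accumulates votes
    from every edge point, so the top-voted cell is the detection.
    """
    votes = Counter()
    for (x, y) in edge_pts:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]
```

Because every edge pixel of the tire rim votes for the same center, the method tolerates partial occlusion and boundary noise better than a direct fit.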

44s frame 20 feature point identification diagram. (a) Original drawing, (b) grey-scale map, (c) refigure, (d) ROI binarization, (e) morphological processing of the images, (f) target vehicle, (g) tire area image, and (h) wheel center mark.
The coordinates of the different frame feature points used for the vehicle speed calculation.
Speed calculation
Using the data solved above, the vehicle speed in the detection area is calculated with the traditional time interpolation method and with the time interpolation method that assumes uniformly accelerated motion between frames (Table 3). According to the data in Table 3, the variance of the vehicle speeds obtained by the method that assumes uniformly accelerated motion between two adjacent frames is smaller than that of the traditional time interpolation method, indicating that the former is closer to the actual vehicle speed.
Relevant parameters for the different algorithms.
The analysis of Figure 10 shows that the vehicle speeds obtained by the optimized time interpolation method fit the data better. The average acceleration between the speeds of two adjacent feature lengths is 0.17 m/s²; the average vehicle speed calculated by the traditional time interpolation method is 69.465 km/h, while that calculated by the time interpolation method incorporating inter-frame acceleration is 69.128 km/h. The actual average vehicle speed is 69 km/h. By comparison, the optimized algorithm reduces the error by 0.7%; the error analysis is shown in Figure 11.

Vehicle speed diagram of the different algorithms.

Error analysis.
Conclusion
Selecting the detection area manually not only eliminates interference from road conditions (such as rough surfaces and occlusions) but also allows the optimal detection area to be chosen according to the movement characteristics of the vehicle. Secondly, using image preprocessing, morphological methods, and feature-point extraction in MATLAB, the wheel center can be identified automatically, which avoids the error caused by manual identification of the wheel center and improves the coordinate calibration accuracy of the feature points. Finally, the improved time interpolation method proposed in this article is used to calculate the vehicle speed, effectively avoiding the error caused by changes in vehicle acceleration between two adjacent frames and thus improving the accuracy of the speed estimate.
