Introduction
The use of unmanned aerial vehicles (UAVs), also called drones, has increased dramatically in recent years because of tremendous advancements in UAV technology and a drastic decrease in the price of UAV products. This has brought new challenges to the safety, security, and privacy of the public. Current UAVs are capable of carrying weapons, including weapons of mass destruction. They can be used for smuggling, terrorist attacks, espionage, and stealing wireless data. They can also pose a threat to commercial air flights. It is essential to protect many high-risk areas, such as nuclear power plants, jails, industrial facilities, parliaments, government facilities, embassies, stadiums and concert halls, international borders, no-fly zones, and demilitarized zones, among others.
Many incidents of civil and criminal offenses related to UAVs have been reported. 1 Most of them involve the illicit use of small UAVs: creating a nuisance to the general public, conducting surveillance and reconnaissance (collecting intelligence on a known “enemy” target), creating airspace interference, causing damage to private or public property (with or without armaments), engaging in cross-border or restricted-area smuggling, using a small UAV in a public demonstration against state agencies, hijacking wireless signals, and so on. Even though small UAV operators may not have bad intentions, UAVs pose threats to airplanes, helicopters, and even to other UAVs, and they are becoming a great challenge for law enforcement agencies. These threats are increasing day by day because diverse types of UAVs from many companies are available off the shelf to anyone at very affordable prices. Thus, it is very important to develop drone detection techniques to protect a given building or geographic location from possible threats by UAVs.
Based on the type of sensor used, there are three main UAV detection techniques. The first is UAV detection by radio frequency (RF). This technique works fine if a UAV is controlled over an RF channel. However, if a drone is preprogrammed to follow a fixed route using global positioning system (GPS) information without RF communications, 2 RF detection simply does not work.
A second approach is detection by radar, the traditional technology for detecting flying objects. Evidence shows that in some cases, radar fails to detect UAVs. In particular, radar cannot detect an object within a radar shadow behind a building, mountain, any large obstacle, or masking terrain, and it cannot detect objects flying at low altitude, where the electromagnetic waves from radar cannot reach. Radar also has a hard time detecting small, plastic, electric-powered UAVs, because that is not what it was designed to detect. 3 It might be possible to design new radar to detect such small drones. However, if radar is modified to also detect small objects, the amount of noise will increase, as birds and all other small flying objects will surface above the detection threshold. This drawback limits the use of radar for identifying small UAVs. 4
A third approach is detection by vision sensors and image processing. In this approach, a camera usually plays the role of vision sensor to detect UAVs. An experiment by Nistér et al. 5 proved that vision can function as a promising navigation sensor that provides accurate localization. Modern cameras can see long distances at a usable resolution. An advantage of vision sensors and image processing–based approaches is that motion detection combined with a speeded-up robust features (SURF) algorithm can properly detect small UAVs while successfully ignoring other flying objects, such as birds. 4 Our proposed schemes come under this category.
In this article, we propose new methods to localize UAVs, overcoming the limitations of radar detection. In the proposed schemes, the radar shadow problem is resolved by deploying a sufficiently large number of image sensors, such as cameras connected through sensor networks, around the region needing protection. Each image sensor measures the azimuth angle and angle of elevation of the target UAV and sends that angle information to the data collector (sink) node through sensor networks. 6 Then, the data collector node estimates the location of the target UAV based on the collected angle samples. It is necessary to detect a UAV in the image in order to measure the angles.
The remainder of this article is organized as follows. Section “Literature review” reviews the related literature. In section “UAV localization scheme,” we describe our proposed aerial vehicle localization scheme. In section “Analysis results,” we compare the estimation accuracy of different schemes for low-altitude target UAVs and high-altitude target UAVs, and we present analysis results. In section “Conclusion and future work,” we summarize the work in this article and describe possible future research that builds upon this study.
Literature review
Localization is the process of finding the physical location of a UAV in accordance with some real or virtual coordinate system. Localization is an important task when direct measurement of the UAV location is not available. Generally, system performance with localization is evaluated based on the accuracy of the estimated location information at a given time.
In the literature, vision-based UAV detection has been investigated intensively in order to avoid collisions between UAVs.7–9 Most of those vision-based approaches focused only on the detection of other UAVs, but not on localization.
Although distance estimation has been considered, 7 the research used a training data set for a specific UAV, and thus, the estimation is not likely to work reliably for any arbitrary UAV.
Boddhu et al. 2 considered detection and tracking of hostile drones. Although their scheme is similar to ours in that it uses multiple sensors, the detailed localization method is very different, since localization is performed based on users’ manual feedback.
In contrast to our purpose, there is a great deal of work in the literature on localizing ground-based objects imaged from UAVs.10–15 In these methods, the target is basically localized using its pixel location in an image, together with measurements of UAV position and attitude and camera pose angle. These vision-based localization techniques for unknown environments were classified and reviewed by Ben-Afia et al. 16
To the best of our knowledge, the issue of localizing a UAV with multiple distributed sensors has not yet been discussed intensively, because most UAVs designed for civil and commercial applications have been used to localize targets on the ground. In this work, we investigate in detail the issue of localizing a UAV with multiple ground sensors. Unlike most UAV localization schemes in the literature, which focus on localizing objects on the ground with image sensors mounted on UAVs,10–15 the proposed schemes deploy image sensors on the ground, that is, in the region needing protection. Figure 1(a) shows the deployment of image sensors for UAV localization.
Figure 1. (a) Deployment of image sensors for UAV localization and (b) measurement of azimuth angle.
When we have two sensors monitoring a selected target UAV, if we know the angle of the target UAV measured by each sensor and the distance between the two sensors, then the location of the target UAV can be determined by triangulation. 2 However, if there is error in the measured angles, the accuracy of the triangulation method degrades significantly. This issue is discussed in section “Analysis results.”
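The two-sensor triangulation step can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, sensor positions, and angle values are chosen purely for demonstration. Each azimuth bearing from a ground sensor defines a line on the horizontal plane, and the two lines are intersected by solving a 2 × 2 linear system.

```python
import math

def triangulate(s1, s2, theta1, theta2):
    """Intersect the two horizontal bearing lines from sensors s1 and s2.

    s1, s2: (x, y) sensor positions; theta1, theta2: azimuth angles (rad).
    Each bearing line satisfies x*sin(t) - y*cos(t) = xi*sin(t) - yi*cos(t).
    Returns the intersection point (x, y), or None if the bearings are parallel.
    """
    a11, a12 = math.sin(theta1), -math.cos(theta1)
    a21, a22 = math.sin(theta2), -math.cos(theta2)
    b1 = s1[0] * a11 + s1[1] * a12
    b2 = s2[0] * a21 + s2[1] * a22
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None  # parallel bearings: no unique intersection
    # Cramer's rule for the 2x2 system A @ (x, y) = (b1, b2)
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return (x, y)

# Example: target projection at (100, 50), sensors at (0, 0) and (200, 0)
t1 = math.atan2(50 - 0, 100 - 0)      # azimuth seen from (0, 0)
t2 = math.atan2(50 - 0, 100 - 200)    # azimuth seen from (200, 0)
print(triangulate((0, 0), (200, 0), t1, t2))  # ≈ (100.0, 50.0)
```

With exact angles the intersection is recovered exactly; the sensitivity to angle noise mentioned above arises because a small angular error rotates each line about its sensor, shifting the intersection by a distance that grows with the sensor-to-target range.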
In this work, we investigate two new methods that improve localization accuracy by incorporating multiple sensors and solving an optimization problem based on the measurement results from those sensors, thereby overcoming the limitations of simple triangulation. We analyze the proposed schemes by numerical analysis and leave a comparison with experimental results for future work.
UAV localization scheme
In this section, we discuss detailed methods to estimate the location of a UAV using the angle measurement values provided by image sensors deployed over a selected range of a geographic region.
To simplify the problem, we make the following assumptions. (a) Each sensor node has the ability to detect an aerial moving object and extract the azimuth angle and angle of elevation of the target UAV by applying image-processing techniques to a picture it takes.7–9 (b) The clocks of the sensor nodes are synchronized, and thus, all of the working sensor nodes can take a picture at the same time. (c) The entire set of sensor nodes shares a common coordinate system, as shown in Figure 1(b), and they face in the same direction, that is, the direction of the positive $x$-axis.
The location of the UAV needs to be determined in three-dimensional (3D) space, that is, in the reference coordinate system. We attempt to solve this problem in two different ways. In the first approach, we localize the projection of the target UAV on the horizontal plane and estimate the location of the UAV in 3D space by calculating its altitude afterwards. In the second approach, the estimation is done directly in 3D space.
Localization of the target UAV through the projection image on the horizontal plane
In this subsection, we attempt to find the location of the target UAV in two steps. In the first step, we estimate the location of the projection of the target UAV on the horizontal two-dimensional (2D) space using azimuth angles measured by sensors. In the second step, we estimate the altitude of the target UAV using the measured angles of elevation.
Figure 1(b) shows two sensors, $S_1$ and $S_2$, each measuring the azimuth angle of the target UAV $T$. Let $s_i = (x_i, y_i)$ denote the position of sensor $S_i$ on the ground plane, let $P = (x_P, y_P)$ denote the projection of $T$ on the horizontal plane, and let $\theta_i$ denote the azimuth angle measured by $S_i$. The half-line from $S_i$ toward $P$, which we call the target projection line (TPL) of $S_i$, satisfies

$$\tan\theta_i = \frac{y_P - y_i}{x_P - x_i},$$

where $(x_i, y_i)$ is the known position of $S_i$. From this relation, each TPL can be written in linear form as

$$x_P \sin\theta_i - y_P \cos\theta_i = x_i \sin\theta_i - y_i \cos\theta_i.$$

By solving the above equation in terms of $x_P$ and $y_P$ for $i = 1, 2$, we obtain the intersection point of the two TPLs. Thus, we find that the location of target UAV $T$ can be determined with two sensors by triangulation, with the altitude recovered in the second step from the angles of elevation.

As we have seen in Figure 1(b), when the location of $P$ is obtained as the intersection of only two TPLs, even a small error in the measured azimuth angles can move the intersection point far from the true location. When the number of sensor nodes is three or more, and the azimuth angle measurement is not accurate due to noise, there is not likely to be a single point where all the TPLs from the sensor nodes meet. In such a case, we estimate the location of $P$ as the point $\hat{p}$ minimizing the sum of squared distances to the $N$ TPLs:

$$\hat{p} = \arg\min_{p} \sum_{i=1}^{N} d_i(p)^2, \qquad d_i(p) = \left\| (p - s_i) - \big((p - s_i) \cdot u_i\big) u_i \right\| = \left| (p - s_i) \cdot n_i \right|,$$

where $u_i = (\cos\theta_i, \sin\theta_i)$ is the unit vector along the TPL of $S_i$ and $n_i = (\sin\theta_i, -\cos\theta_i)$ is the unit normal to it; the second equality is valid because $\{u_i, n_i\}$ is an orthonormal basis of the plane.

If we define

$$A = \begin{bmatrix} \sin\theta_1 & -\cos\theta_1 \\ \vdots & \vdots \\ \sin\theta_N & -\cos\theta_N \end{bmatrix}, \qquad b = \begin{bmatrix} x_1 \sin\theta_1 - y_1 \cos\theta_1 \\ \vdots \\ x_N \sin\theta_N - y_N \cos\theta_N \end{bmatrix},$$

then the objective can be written as

$$\sum_{i=1}^{N} d_i(p)^2 = \left\| A p - b \right\|^2.$$

This is a well-known linear least square optimization problem, and the optimal solution for this problem can be obtained as follows 17:

$$\hat{p} = (A^T A)^{-1} A^T b.$$

From the definition of the angle of elevation $\phi_i$ measured by sensor $S_i$, the altitude of the target UAV satisfies $z_T = \| p - s_i \| \tan\phi_i$, so we estimate it by averaging over the sensors:

$$\hat{z} = \frac{1}{N} \sum_{i=1}^{N} \left\| \hat{p} - s_i \right\| \tan\phi_i.$$
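The least-squares step described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code: the sensor layout and target position are made up for the example, and the altitude step here simply averages the per-sensor elevation-based estimates, which is one plausible choice.

```python
import numpy as np

def estimate_2d_tpl(sensors, azimuths, elevations):
    """2D-TPL estimate: least-squares intersection of the horizontal
    bearing lines, followed by altitude estimation from elevation angles.

    sensors: (N, 2) array of ground sensor positions.
    azimuths, elevations: length-N arrays of measured angles (rad).
    Returns the estimated target position (x, y, z).
    """
    sensors = np.asarray(sensors, dtype=float)
    az = np.asarray(azimuths, dtype=float)
    el = np.asarray(elevations, dtype=float)
    # Row i of A p = b encodes x*sin(az_i) - y*cos(az_i) = x_i*sin(az_i) - y_i*cos(az_i)
    A = np.column_stack([np.sin(az), -np.cos(az)])
    b = sensors[:, 0] * np.sin(az) - sensors[:, 1] * np.cos(az)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Altitude: average horizontal-distance * tan(elevation) over all sensors
    dist = np.linalg.norm(sensors - p, axis=1)
    z = float(np.mean(dist * np.tan(el)))
    return float(p[0]), float(p[1]), z

# Illustrative noise-free check with a target at (500, 200, 50)
target = np.array([500.0, 200.0, 50.0])
S = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 400.0], [1000.0, 400.0]])
az = np.arctan2(target[1] - S[:, 1], target[0] - S[:, 0])
el = np.arctan2(target[2], np.linalg.norm(S - target[:2], axis=1))
print(estimate_2d_tpl(S, az, el))  # ≈ (500.0, 200.0, 50.0)
```

With noise-free angles all TPLs intersect in one point and the least-squares solution recovers it exactly; with noisy angles the same call returns the minimizer of the summed squared distances to the TPLs.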
Localization of the target UAV in 3D space
As a second approach to UAV localization, we solve an optimization problem directly in 3D space. A line that passes sensor $S_i$ in the direction of the measured angles, which we call the 3D TPL of $S_i$, can be written as

$$L_i = \{\, s_i + t u_i : t \ge 0 \,\},$$

where $s_i = (x_i, y_i, 0)$ is the position of $S_i$ and

$$u_i = (\cos\phi_i \cos\theta_i, \; \cos\phi_i \sin\theta_i, \; \sin\phi_i)$$

is the unit direction vector obtained from the measured azimuth angle $\theta_i$ and angle of elevation $\phi_i$. Since $u_i$ is a unit vector, the squared distance from a point $p$ to $L_i$ is

$$d_i(p)^2 = \| p - s_i \|^2 - \big((p - s_i) \cdot u_i\big)^2 = (p - s_i)^T \big(I - u_i u_i^T\big)(p - s_i),$$

where $I$ is the $3 \times 3$ identity matrix. We estimate the location of the target UAV as the point minimizing $f(p) = \sum_{i=1}^{N} d_i(p)^2$.

The eigenvalues of the Hessian matrix of $f$, $H = 2 \sum_{i=1}^{N} (I - u_i u_i^T)$, are nonnegative, since each $I - u_i u_i^T$ is positive semidefinite; hence, $f$ is convex, and any stationary point is a global minimum. Setting the gradient of $f$ to zero yields three simultaneous linear equations in the components of $p$:

$$\sum_{i=1}^{N} \big(I - u_i u_i^T\big)(p - s_i) = 0,$$

where $0$ denotes the zero vector. The above set of simultaneous equations can be summarized as

$$M p = c, \qquad (8)$$

where $M = \sum_{i=1}^{N} (I - u_i u_i^T)$ and $c = \sum_{i=1}^{N} (I - u_i u_i^T) s_i$. Since $M$ is invertible unless all the direction vectors $u_i$ are parallel, the solution of equation (8) can be obtained by multiplying both sides by $M^{-1}$:

$$\hat{p} = M^{-1} c.$$
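The 3D least-squares step can be sketched as follows, assuming the standard point-to-ray formulation in which each measurement defines a ray and the estimate is the point closest, in the summed squared-distance sense, to all rays. The sensor layout and target position below are illustrative, not taken from the article's experiments.

```python
import numpy as np

def estimate_3d_tpl(sensors, azimuths, elevations):
    """3D-TPL estimate: the point p minimizing the sum of squared distances
    to the measurement rays, i.e. the solution of sum_i (I - u_i u_i^T)(p - s_i) = 0.

    sensors: (N, 3) array of sensor positions; azimuths/elevations in rad.
    """
    S = np.asarray(sensors, dtype=float)
    az = np.asarray(azimuths, dtype=float)
    el = np.asarray(elevations, dtype=float)
    # Unit direction of each measurement ray in 3D
    U = np.column_stack([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])
    M = np.zeros((3, 3))
    c = np.zeros(3)
    for s, u in zip(S, U):
        P = np.eye(3) - np.outer(u, u)  # projector orthogonal to the ray
        M += P
        c += P @ s
    return np.linalg.solve(M, c)        # solve M p = c

# Illustrative noise-free check: target at (500, 200, 550)
target = np.array([500.0, 200.0, 550.0])
S = np.array([[0.0, 0.0, 0.0], [1000.0, 0.0, 0.0], [500.0, 800.0, 0.0]])
d = target - S
az = np.arctan2(d[:, 1], d[:, 0])
el = np.arctan2(d[:, 2], np.linalg.norm(d[:, :2], axis=1))
print(estimate_3d_tpl(S, az, el))  # ≈ [500. 200. 550.]
```

The matrix `M` is singular only when every ray points in the same direction, which cannot happen for spatially separated sensors observing one target, so `np.linalg.solve` is safe here.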
Analysis results
In this section, we evaluate the accuracy of our proposed UAV localization schemes through simulation. The scheme discussed in subsection “Localization of the target UAV through the projection image on the horizontal plane” will be referred to as the 2D-TPL scheme, since its least square estimation is based on TPLs on the horizontal plane. The second scheme, discussed in subsection “Localization of the target UAV in 3D space,” will be referred to as the 3D-TPL scheme, since its estimation is based on 3D TPLs. For comparison purposes, we consider one more scheme, referred to as the centroid scheme.
Simulation environment and parameters
We evaluated the proposed schemes in the following simulation environments. In our simulation, we classify the target UAV by altitude, since the sensors are assumed to be deployed over a sufficiently wide geographic region. We consider two types of UAV: a high-altitude UAV, where the altitude is 550 m, and a low-altitude UAV, where the altitude is 50 m. The number of sensors, $N$, is varied across simulation runs.
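One way to synthesize the noisy angle measurements used in such simulations is sketched below. The article does not state its noise model, so this sketch assumes, purely for illustration, zero-mean Gaussian noise on both angles; the sensor position, target position, and noise level are likewise made up.

```python
import math
import random

def noisy_angles(sensor, target, sigma_deg):
    """Synthesize one azimuth/elevation measurement with Gaussian angle noise.

    sensor: (x, y) ground position; target: (x, y, z) true UAV position;
    sigma_deg: standard deviation of the angle noise in degrees.
    Returns (azimuth, elevation) in radians.
    """
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    horiz = math.hypot(dx, dy)                     # horizontal range to target
    sigma = math.radians(sigma_deg)
    az = math.atan2(dy, dx) + random.gauss(0.0, sigma)
    el = math.atan2(target[2], horiz) + random.gauss(0.0, sigma)
    return az, el

random.seed(1)  # reproducible runs
print(noisy_angles((0.0, 0.0), (500.0, 200.0, 50.0), sigma_deg=1.0))
```

Averaging the localization error over many such draws, per sensor count and noise level, reproduces the kind of Monte Carlo evaluation reported below.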
Evaluation results
Figure 2(a) compares the accuracies of the 2D-TPL, 3D-TPL, and centroid schemes for a low-altitude target UAV and various numbers of sensors. We ran 20 simulations for each number of sensors, and the average error is shown in the figure. The centroid scheme exhibits significantly worse performance than the proposed schemes, and we found the same trend in other scenarios. Thus, we concentrate on a comparison of 2D-TPL and 3D-TPL hereafter. Figure 2 shows that the average error of 2D-TPL and 3D-TPL tends to decrease as the number of sensors increases: the larger the number of sensors, the more angle samples are available for the least square estimation, and thus the smaller the average error.
Figure 2. Comparison of estimation accuracies for the 2D-TPL, 3D-TPL, and centroid schemes for a low-altitude target UAV at target location (500, 200, 50).
Figure 3 compares the accuracies of 2D-TPL and 3D-TPL for a high-altitude target UAV. The 2D-TPL and 3D-TPL schemes show similar performance when the measurement error in the angle of elevation is small; however, 3D-TPL outperforms 2D-TPL as that error grows.
Figure 3. Comparison of estimation accuracies of the 2D-TPL and 3D-TPL schemes for a high-altitude target UAV at target location (500, 200, 550).
Conclusion and future work
In this work, we proposed two schemes for localizing UAVs based on measurement of azimuth angle and angle of elevation by many image sensors distributed over a wide geographic region. The first scheme (2D-TPL) attempts to localize the projected image of the target UAV on the horizontal plane and finds the altitude of the target UAV as the next step. The second scheme (3D-TPL) attempts to directly localize the target UAV in 3D space. The simulation results show that both schemes exhibit very similar performance in terms of localization error when the altitude of the target UAV is low. However, 3D-TPL outperforms 2D-TPL, especially when the altitude of the target UAV is high and the measurement error in the angle of elevation is large. Comparison of the simulation results with experimental results is left for future work.
