Abstract
Keywords
1. Introduction
Simultaneous Localization and Mapping (SLAM) is an indispensable capability of any autonomous robot: building a map of the surroundings while simultaneously localizing the robot within that map. The robot perceives the external environment through sensors such as a laser range sensor (LRS), an RGB camera, a stereo camera, inertial sensors, GPS, etc. Wheel encoders provide estimates of how far the robot has moved in the environment. The sensor data and encoder data are used together to build a map and estimate the robot's position in it. SLAM is a challenging problem, mainly because sensors and encoders are prone to errors, and it is further complicated by the dynamics of the environment. These errors accumulate as the robot moves through the environment. Hence, a robot must build the map incrementally and keep correcting both the map and its position at each step. Various approaches have been proposed to solve this problem; the earliest can be traced to the work of Smith et al. [1]. Probabilistic approaches have been proposed, including Extended Kalman Filter (EKF) SLAM and the particle-filter-based method of [2]. Similar approaches using other sensors, such as cameras [3] and depth sensors [4], have also been proposed.
In most approaches to SLAM, the robot must match its current sensor data with the previously saved map data to estimate its position and then apply a correction. Matching can be either point-based or feature-based. In the former, points obtained from sensors such as an LRS are matched directly, as in [5, 6, 7]. One of the most common point-matching algorithms is the Iterative Closest Point (ICP) algorithm [8], a classical rigid point-set registration method, and many variants of ICP have been proposed in [9, 10, 11, 12]. Given two point clouds, ICP iteratively minimizes the least-squares difference between them to determine the rotation and translation. A major drawback of ICP is that it is computationally expensive, and its performance deteriorates for a large number of points. ICP has a computational complexity of
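As a point of comparison for the feature-based approach developed later, the core of point-to-point ICP can be sketched as follows. This is a minimal illustration in Python/NumPy, not any of the cited variants; the brute-force nearest-neighbour search is precisely the step that makes ICP expensive for large point sets.

```python
import numpy as np

def icp_step(P, Q):
    """One alignment step: find R, t minimizing ||R @ P + t - Q||^2
    for already-corresponded 2-D point sets (columns are points)."""
    p_bar = P.mean(axis=1, keepdims=True)
    q_bar = Q.mean(axis=1, keepdims=True)
    H = (P - p_bar) @ (Q - q_bar).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

def icp(P, Q, iters=20):
    """Alternate nearest-neighbour correspondence and alignment."""
    R_tot, t_tot = np.eye(2), np.zeros((2, 1))
    for _ in range(iters):
        # brute-force nearest neighbours: the cost-dominating step
        d = ((P[:, :, None] - Q[:, None, :]) ** 2).sum(axis=0)
        Qc = Q[:, d.argmin(axis=1)]
        R, t = icp_step(P, Qc)
        P = R @ P + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

For two clouds related by a small rigid motion, the recovered transform aligns them closely after a few iterations.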
On the other hand, feature-based matching utilizes map features such as line segments, corners, colour-histogram information, scale-invariant features (SIFT) [16] and others to match the sensor data. Compared to point-based matching, feature-based mapping systems generally require less memory, are relatively computationally efficient, and have been used extensively in SLAM. For distance sensors such as the LRS, line-segment-based SLAM has been studied extensively, employing line extraction algorithms such as Split-and-Merge [17] and incremental [18] algorithms. Among the most prominent algorithms for line extraction are the Hough transform [19, 20, 21] and RANSAC [22]. Other techniques include the Expectation-Maximization algorithm [23, 24], line regression [25], and clustering-based line-map generation [26, 27, 28]. Mapping based on line segments has also been reported in [29, 30, 31]. Line-segment extraction and its integration into particle-filter SLAM has been proposed in [32, 33]. A good summary of various line-extraction techniques can be found in [34].
This paper proposes a novel line detection algorithm using SVD and the Hough transform with good computational efficiency. To speed up the line extraction process, we propose a hopping mechanism in which SVD is applied to intermittent points. A reverse-hop mechanism ensures that the end points of each line segment are extracted accurately. Moreover, the line extraction algorithm is integrated into a SLAM framework, and the map comprising the extracted lines is used to match the current LRS data for efficient localization.
This paper is structured as follows: Section 2 explains the problem of straight-line detection from LRS data using the Hough transform. Section 3 explains how SVD is incorporated into the equations of the Hough transform to detect lines accurately. The forward and reverse hopping scheme that accelerates the algorithm is explained in Section 3.3. Section 4 explains the integration of the proposed line extraction algorithm into SLAM: mapping in Section 4.1 and localization in Section 4.2. Results are discussed in Section 5; Section 5.1 discusses the simulation results, and Section 5.2 the experiments in a real environment. Finally, Section 6 concludes the paper.
2. Straight-Line Detection from LRS data with Hough Transform
A typical indoor environment of a building is shown in Fig.1. It consists of straight passages that branch and open to both the left and the right. A robot with an LRS mounted on it moves through the passage, and the sensor records points on the edges of the walls of the environment, indicated by heavy black lines. How far an LRS can detect these points depends on the specifications of the sensor. In general, every LRS is characterized by a maximum detection distance

Environment for simulation

Local coordinate for laser-range sensor
As shown in Fig.2, LRS outputs a point
The position of the detected wall is indicated by the heavy line in Fig.2 and is expressed as,
on the coordinate system o-
The vector
The Hough transform of the

Example of laser-range sensor data

Detection of a line from points with SVD
Hough transform changes the points from Cartesian (
If the sensor were perfectly accurate and there were no noise, then all the points on the line would intersect at exactly the same point (with the maximum votes) in the (
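The voting scheme described above can be sketched as follows. This is a generic Hough accumulator in Python/NumPy; the grid resolutions `n_theta` and `rho_res` are illustrative choices, not the paper's parameters.

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=0.05):
    """Vote each point into a (theta, rho) accumulator using
    rho = x cos(theta) + y sin(theta); the peak cell approximates
    the dominant line's normal angle and distance."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(points).max() * np.sqrt(2) + rho_res
    n_rho = int(np.ceil(2 * rho_max / rho_res)) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1   # one vote per theta cell
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], ri * rho_res - rho_max
```

With noise, the votes of collinear points spread over neighbouring cells instead of coinciding exactly, which is the quantization issue discussed above.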
3. SVD-Based Line Detection Algorithm
This section describes the proposed algorithm for detecting straight line segments using SVD and Hough transform.
3.1. Straight line detection with SVD
We can re-write the equation of Hough transform (2) as,
This equation can be further expressed as,
A regression line which starts from the
where
We employ Singular Value Decomposition (SVD) to solve this equation. SVD of the matrix
where

Flow chart of line-segment detection
Here, the matrices
When a sufficiently small singular value cannot be obtained, it means that it is difficult to determine a regression line of
Figure 4 gives a graphical intuition of the proposed method. Points indicated by • are the points measured by LRS. The detected straight line is shown by a bold line. The
A straight line is detected as a line of intersection of the plane which consists of the vectors
A threshold
where α is an undetermined factor. Expressing the singular vector
we get,
from Eq.(6) and α is determined as,
from the relationship
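Since several of the intermediate equations above involve the paper's own symbols, the following is only a generic sketch of the underlying idea: a total-least-squares line fit in which the right singular vector belonging to the smallest singular value of the centred data matrix gives the line normal (cos θ, sin θ), and the smallest singular value itself measures how well the points fit a line.

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares line fit via SVD. The right singular vector
    of the SMALLEST singular value of the centred data matrix is the
    line normal n = (cos th, sin th); rho = n . centroid. The smallest
    singular value measures the collinearity of the points."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, s, Vt = np.linalg.svd(P - c)
    n = Vt[-1]                      # unit normal of the fitted line
    rho = float(n @ c)
    if rho < 0:                     # keep rho >= 0, as in the Hough form
        n, rho = -n, -rho
    theta = float(np.arctan2(n[1], n[0]))
    return theta, rho, s[-1]        # s[-1] ~ residual; compare to threshold
```

For exactly collinear points, the returned residual is numerically zero; thresholding it decides whether the points form a line.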
3.2. Line-segment detection algorithm
In this subsection, we describe the entire process of line segment detection. A flow-chart of the algorithm is shown in Fig.5. The parameter ψ in Fig.5 deals with the ‘hopping’ feature of the algorithm and is explained in detail in Section 3.3. First, the algorithm is explained without the hopping feature, in which the value of ψ is set to 0 (i.e.,

Explanation of hopping point SVD-based line segment detection. (a) No hopping with
The line detection process comprises the following steps:
The accuracy of the line segments is determined by the threshold
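A hedged sketch of the non-hopping loop (ψ = 0) might look like the following; the residual is the smallest singular value of the centred window, and `eps` and `min_pts` are illustrative thresholds, not the paper's values.

```python
import numpy as np

def _line_residual(window):
    """Smallest singular value of the centred window: ~0 when collinear."""
    P = np.asarray(window, dtype=float)
    return np.linalg.svd(P - P.mean(axis=0), compute_uv=False)[-1]

def split_segments(points, eps=0.05, min_pts=4):
    """Grow a segment point by point; when the residual exceeds eps,
    close the segment at the last point that still fit and restart
    there. (No hopping: the SVD is recomputed at every new point.)"""
    segs, start = [], 0
    i = start + min_pts
    while i <= len(points):
        if _line_residual(points[start:i]) > eps:
            segs.append((start, i - 2))   # close at the last good point
            start = i - 2                 # the new segment begins there
            i = start + min_pts
        else:
            i += 1
    if len(points) - start >= min_pts:
        segs.append((start, len(points) - 1))
    return segs
```

On an L-shaped scan, the loop yields two segments sharing the corner point, matching the behaviour described for the flow chart in Fig.5.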
3.3. Accelerating the line detection algorithm using hopping-points SVD
In the proposed line-detection method, SVD is applied at every point, and the smaller singular value (
This process is explained graphically in Fig.6. The normal SVD on every point is shown in Fig.6(a). Fig.6(b) shows SVD applied to intermittent points with
A ‘reverse-hop’ mechanism ensures that the end points of line segments are accurately extracted. In both Fig.6(b) and Fig.6(c), the hop accurately terminates at the end-points of the line segments

(a) Overhopping with a large value of
If the time taken to compute SVD over
where
Note that when
Setting ψ to a large value might seem advantageous, but it can lead to ‘overhopping’, in which the detection of some shorter line segments is skipped. This can reduce the accuracy of the line detector, as the end points of line segments might not be detected correctly. As shown in Fig.7(a), segment
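A minimal, self-contained sketch of the forward hop with the reverse-hop correction follows. The window-residual test (smallest singular value of the centred window against a threshold `eps`) and the values of `eps` and `psi` are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def _residual(window):
    """Smallest singular value of the centred window: ~0 when collinear."""
    P = np.asarray(window, dtype=float)
    return np.linalg.svd(P - P.mean(axis=0), compute_uv=False)[-1]

def hop_segment_end(points, start, psi=4, eps=0.05):
    """Find the end of the segment beginning at `start`.
    Forward hop: test the window only every psi+1 points.
    Reverse hop: once a test fails (or the data runs out), step back
    one point at a time so the true end point is recovered exactly."""
    n = len(points)
    i = start + psi + 1
    while i <= n and _residual(points[start:i]) <= eps:
        i += psi + 1                                  # forward hop
    while i > start + 2 and (i > n or _residual(points[start:i]) > eps):
        i -= 1                                        # reverse hop
    return i - 1                                      # segment end index
```

The forward hop evaluates roughly one SVD per ψ+1 points, which is the source of the near-linear speed-up reported later; the reverse hop restores exact end points at a cost of at most ψ extra evaluations per segment.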

Comparison of the proposed method with Hough transform-based line detection. (a) A typical example of LRS scan data with three line segments. (b) Hough space of the data with multiple points, with maximum votes shown in green boxes. (c) Multiple lines extracted by Hough transform. (d) Line segment detected by the proposed method. The start point and end point of each segment are shown by blue boxes and circles, respectively.
3.4. Comparison with Hough transform
Figure 8(a) shows a typical example of LRS data in which points are not necessarily linear due to sensor and environment noise. Figure 8(b) shows the corresponding Hough (
Like the Hough transform, the proposed method is sensitive to outliers. RANSAC [22] performs better than the proposed method on data with many outliers. However, RANSAC requires a large number of iterations and the tuning of several parameters, such as the minimum number of points, the threshold distance between a point and the fitted line, and the inlier ratio, which are often problem-specific [36, 22]. In the presence of many outliers, the proposed method can easily be combined with noise-removal techniques [37, 26] to detect line segments from LRS data.
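For comparison, a minimal two-point RANSAC line fit is sketched below; `iters`, `tol` and `min_inliers` are exactly the kind of problem-specific settings mentioned above, chosen here purely for illustration.

```python
import random

def ransac_line(points, iters=200, tol=0.1, min_inliers=5, seed=0):
    """Minimal RANSAC line fit: repeatedly pick two points, count the
    points within perpendicular distance tol of the implied line, and
    keep the best consensus set. Tolerates many outliers, but needs
    several tuned parameters."""
    rng = random.Random(seed)
    best, best_count = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2              # line normal (a, b)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue                         # degenerate sample
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > best_count:
            best, best_count = inliers, len(inliers)
    return best if best_count >= min_inliers else None
```

On data dominated by one line plus a few gross outliers, the consensus set recovers the line's points while discarding the outliers.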
4. Integration of the Line-extraction Algorithm into the SLAM Framework
This section explains the integration of the proposed line-extraction algorithm into the framework of simultaneous localization and mapping. As shown in Fig.2, at the

Matching algorithm
4.1. Mapping from LRS information
The output data of the LRS at the
Let
where
Hence, we obtain
We can represent the estimated global map (
where
At the initial condition, the estimated global map is given by
The robot updates the estimated global map at each scan. In order to do so, estimated values of absolute position
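The transformation of LRS points from the sensor's local frame into the global map frame is the standard rigid-body transform (rotate by the attitude angle, then translate by the position). A sketch, using generic pose symbols rather than the paper's exact notation:

```python
import numpy as np

def to_global(local_pts, pose):
    """Map LRS points from the sensor frame to the global frame given
    the robot pose (x, y, phi): rotate by phi, then translate."""
    x, y, phi = pose
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return (R @ np.asarray(local_pts, dtype=float).T).T + np.array([x, y])
```

Applying this to every scanned point with the current pose estimate produces the scan's contribution to the estimated global map.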
4.2. Self-localization by matching LRS data with the estimated global map
For very short distances, the estimated absolute position and absolute attitude angle of the LRS can be obtained from the wheel encoders. However, over long distances the odometry errors accumulate, and we cannot rely on encoder data alone. Therefore, the estimated values must be corrected using landmarks.
An outline of the correction of the self-localization estimate is shown in Fig.9.
After executing one LRS scan at the
by using the transform matrix
Here,
In order to estimate
to all points
Points
In addition, the matrix, which is constructed by arranging the vector
Here, the centre of gravity

Example of
In this study, the amount of correction of translational displacement generated by a line segment
as shown in Fig.9, where the vector
and
As shown in Fig.9,
and the distance
Here, the vector
Similarly, it follows from Eq.(28) that the distance
Obviously, the distance
Therefore, the amount of correction of translational displacement
Equation (24) is introduced to simulate a force, such as a gravitational or magnetic force, which decreases rapidly as the distance between two objects increases, where
The amount of correction of rotational displacement is defined as,
which is the moment around the centre of gravity
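A loose sketch of how such distance-decaying translational corrections and their moments about the centre of gravity could be combined is given below. The weight function, the gains `xi` and `zeta`, and the constant `c` are assumptions for illustration only, not the paper's Eq.(24) and Eq.(25).

```python
import numpy as np

def pose_correction(errs, offsets, xi=0.1, zeta=0.1, c=1.0):
    """Hedged sketch of a correction step. errs[k] is the displacement
    vector from a scanned point to its matched map line; offsets[k] is
    the point's position relative to the centre of gravity G. The
    translational correction is a weighted sum whose weight decays with
    distance (a 'gravity-like' force); the rotational correction is the
    total moment of those forces about G."""
    dt = np.zeros(2)
    dphi = 0.0
    for e, r in zip(np.asarray(errs, float), np.asarray(offsets, float)):
        w = 1.0 / (c + np.linalg.norm(e) ** 2)   # decays with distance
        f = w * e                                # attraction toward the line
        dt += f
        dphi += r[0] * f[1] - r[1] * f[0]        # 2-D moment (cross product)
    return xi * dt, zeta * dphi
```

The gains play the role of the convergence parameters discussed in this subsection: small values converge slowly but stably, large values converge faster but can oscillate.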
The amount of correction for the map
Initially, at the start of the robot operation, we set
where
Finally, the estimated values of
The calculations are executed recursively, returning to Eq.(21). Self-localization is then achieved by Eq.(19), and simultaneous localization and mapping as a whole is achieved by alternately executing the map update of Subsection 4.1 and the self-localization of Subsection 4.2.

Laser range sensor URG-04LX (HOKUYO AUTOMATIC CO., LTD.) and measurement range
5. Results and Discussion
This section discusses the results in both simulated and real environments. For the real environment, we used the Pioneer P3-DX [38] robot shown in Fig.12. It is a two-wheeled, differential-drive robot. We first describe its motion model. The distance between the left and the right wheel is
and the radius of turn
The coordinates of the centre of rotation (
The new heading
from which the coordinates of the new position
If
and,
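The motion model above is the standard differential-drive kinematics; a sketch with generic symbols (`W` is the wheel separation, `dl` and `dr` the left and right wheel travel distances), including the straight-line special case:

```python
import math

def diff_drive_step(x, y, th, dl, dr, W):
    """Pose update for a two-wheel differential drive. When dl == dr
    the robot moves straight; otherwise it turns about the centre of
    rotation with signed radius R = (W/2)*(dr+dl)/(dr-dl)."""
    if abs(dr - dl) < 1e-12:                  # straight-line case
        return x + dl * math.cos(th), y + dl * math.sin(th), th
    dth = (dr - dl) / W                       # change in heading
    R = (W / 2.0) * (dr + dl) / (dr - dl)     # signed turn radius
    cx = x - R * math.sin(th)                 # centre of rotation
    cy = y + R * math.cos(th)
    th_new = th + dth
    return cx + R * math.sin(th_new), cy - R * math.cos(th_new), th_new
```

For example, with `W = 1`, `dl = 0` and `dr = π/2`, the robot pivots about its stationary left wheel through a quarter turn.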
Laser-range sensor for simulation
The flowchart of the mapping and localization process integrated with the proposed line-extraction method is shown in Fig.14. For each LRS scan at the
5.1. Simulation results
We performed a simulation based on the URG-04LX (HOKUYO AUTOMATIC) laser-range sensor (Fig.11), whose specifications are summarized in Table 1. Figure 11(b) shows the measurement range of the LRS. Gaussian noise of ±25 mm and ±2.5% was added for the distance ranges of 1000 mm and 4000 mm, respectively. Errors caused by the shape of objects, reflection, etc. were not taken into consideration. Compared to the real LRS, the angular resolution was reduced to 1/8 for faster computation. The simulation was carried out in MATLAB 7.12 (R2011a) running on a Linux machine with an Intel Core i7-4500U CPU at 1.80 GHz and 16 GB of RAM.

Pioneer P3-DX

Motion model of two wheel, differential drive robots

Flow chart of the SLAM process integrated with the proposed line-segment detection. The marked portion shows the proposed line extraction accelerated by hopping.
Taking the distance resolution of the sensor into consideration (Table 1), the value of threshold
An example of a map-matching result in the sensor's o
The starting point of each extracted line is shown by □ and the end point by ○ in Fig.15 and Fig.16. The final constructed map is shown by small pink ‘+’ marks. The sensor location is shown by a larger pink ‘+’ sign, and the sensor's orientation by an arrow. Both accurate mapping and accurate robot localization were achieved with the proposed method. The total execution time was 1015 seconds, which included the calculations and the display of graphs. It can be seen that setting the threshold
Figure 17, Fig.18 and Fig.19 represent the estimation errors in the
We tested the proposed hopping-point SVD-based line extraction algorithm on a large set of 2280 almost-linear points generated by the URG-04LX over three consecutive scans (760 × 3 = 2280; 760 points per scan) for different values of the hopping factor ψ. As shown in Fig.5, the proposed algorithm applies SVD to the data from the starting point of each detected segment to its end point, after which the process repeats from the beginning for the next line. The worst case for the algorithm occurs when all 2280 points are almost linear, and this case was used to calculate the speed-up. The execution times for different values of ψ are summarized in Table 2. It can be seen that the speed-up is nearly linear.

Simulation results of the estimation of sensor position with map matching
Execution time for line detection using hopping point SVD on 2280 (760 × 3) points and the speed-up for various values of

Results of the simulation of simultaneous localization and mapping

Estimation error in the

Estimation error in the

Estimation error of the sensor angle

Results of the simulation of the iteration number for estimation of the sensor's position
5.2. Experiments in a real environment
The robot P3-DX (Fig.12), whose motion model was explained earlier, was used in the experiment. The robot was equipped with the same sensor used in the simulation, moved through the environment shown in Fig.21(a), and data were collected. The environment is one of the corridors of our university, cluttered with real objects such as boxes and tables. The corridor shown in Fig.21(c) is approximately 4 m wide and 35 m long.
Figure 21(b) shows the dead-reckoning result for the collected sensor data. Figure 21(c) shows the final map constructed by the proposed method, with the line segments drawn as bold black lines. Many line segments are detected; the starting point of each is indicated by a blue □ and the end point by a red ○ in Fig.21(c). Note that when the end point of one line segment is the starting point of the next, the blue □ and the ○ overlap. Some minor segments were also detected because the environment was cluttered. When objects are too far from the LRS and not enough data are available, clusters of very small, disconnected line segments can also appear. This can be corrected by moving the robot sufficiently close to the distant objects and collecting enough data. A line map was obtained by the proposed method with a runtime of 1453.2 seconds. The bulk of the computation (1302 s ≈ 89%) was consumed by map updating and localization, owing to the large number of iterations needed for the convergence of Eq.(34) and Eq.(35). As explained in Section 4.2, this convergence can be tuned by the parameters ξ, η and ζ. Although not realized in the current work, the proposed line-extraction algorithm could be parallelized, with different CPU threads working on different sections of line segments that are later joined.

(a) Corridor environment for experiment. (b) Dead reckoning map with trajectory in red. (c) Corrected line map with robot trajectory in green.
6. Conclusion
While a large body of literature exists on building maps with traditional techniques such as the Extended Kalman Filter and other probabilistic methods, which have been successfully demonstrated in various scenarios, this paper took a fresh approach, using SVD and the Hough transform to detect lines from LRS data and build maps. We accelerated the algorithm using a novel hopping-points method in which SVD is applied to intermittent points governed by a hopping parameter
