Introduction
Simultaneous localization and mapping (SLAM) is a well-known problem for autonomous robots that still presents numerous difficulties for researchers, even though a variety of efficient solutions have been proposed, mainly after the dawn of the 21st century. SLAM applications are widespread, and its benefits have been clearly demonstrated for the mapping and navigation of autonomous robots. Although a great deal of effort has focused in recent years on fully resolving the problem of driverless cars, other categories of autonomous vehicles, and their specific requirements for a smooth and efficient application of SLAM, have not been sufficiently examined. One such vehicle is the autonomous bicycle or motorcycle. Compared to unmanned cars, bicycles are more flexible and can freely pass through narrow spaces such as alleys and country roads, while they constitute a widely used means of transportation, mainly due to their low cost and ease of use. In contrast to four-wheeled vehicles, the virtues of autonomous bicycles, and of similar two-wheeled vehicles in general, have yet to be thoroughly presented and understood. Lightweight, easily controlled, quick to change direction, and cost-, space-, and energy-efficient, bicycles have a plethora of advantages and possible applications once they achieve autonomous behavior. To reach that stage, however, reliable SLAM is a necessity: constructing a model of their surroundings and navigating with stability, speed, and safety within the map based on that model.
That necessity becomes clearer if we consider the following. Firstly, SLAM can handle difficult and highly dynamic environment conditions and sensor fusion techniques under more challenging setups (for example, no GPS or low-quality sensors) than those examined by previous localization and mapping methods; such setups are typical of the simpler vehicle configurations that befit bicycles. Secondly, by finding loop closures with SLAM, autonomous bicycles can understand the real environment topology and find shortcuts between different map locations, while the metric map reconstruction can prevent erroneous loop closures from happening, enhancing robustness. Finally, because numerous autonomous bicycle applications, such as goods delivery or intra-city navigation, require a globally consistent map, the implementation of bicycle SLAM is deemed crucial. More specifically, a successful SLAM solution should account for the two-wheeled vehicles' complex kinematics and dynamics, include a sensor system that can use the bicycle's single-wheel odometry, account for the constant vibrations of the bicycle frame, especially on uneven terrain, and ensure that real-time performance can be achieved for navigation and obstacle avoidance purposes.
Regardless of the SLAM methodology applied, two components are vital for its application based on the Bayesian recursion model.1 First, the reliable prediction or registration of the robot's motion, given an accurate motion or odometry model, respectively, in order to execute the "sample" phase. Second, the collection of measurements of the surrounding environment, using a variety of sensors with known measurement models, in order to execute the "update" phase. The theoretical basis of autonomous vehicles and obstacle avoidance has been firmly set over the past decades, mainly following the principles of automatic control.2 Publications of the last decade have introduced several solutions to the autonomous cars' SLAM problem.3,4 Various teams proposed techniques for environment perception of autonomous vehicles using range-bearing sensors,5,6 while others analyzed the solution to the visual SLAM problem.7,8 Sensor fusion techniques have also been implemented by numerous researchers for outdoor environments,9 whereas RGB-D sensors have not yet been fully utilized due to their still poor results in outdoor environments.
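The "sample" and "update" phases of the Bayesian recursion can be sketched as a minimal particle filter loop. The following is an illustrative sketch only; the motion model, noise values, and function names are our placeholders, not the article's equations.

```python
import math
import random

def sample_motion(pose, control, dt, noise=0.05):
    """'Sample' phase: propagate one pose hypothesis through a noisy
    unicycle-style motion model (illustrative placeholder)."""
    x, y, yaw = pose
    v, w = control  # commanded linear and angular velocity
    v += random.gauss(0.0, noise)
    w += random.gauss(0.0, noise)
    return (x + v * dt * math.cos(yaw),
            y + v * dt * math.sin(yaw),
            yaw + w * dt)

def measurement_likelihood(pose, z, landmark, sigma=0.2):
    """'Update' phase: weight a hypothesis by how well the predicted
    range to a known landmark matches the measured range z."""
    expected = math.hypot(landmark[0] - pose[0], landmark[1] - pose[1])
    err = z - expected
    return math.exp(-0.5 * (err / sigma) ** 2)

def bayes_step(particles, control, dt, z, landmark):
    """One full sample-update cycle with weight-proportional resampling."""
    particles = [sample_motion(p, control, dt) for p in particles]
    weights = [measurement_likelihood(p, z, landmark) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))
```

A single `bayes_step` call thus performs both phases described above for one control input and one range measurement.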
One of the first efforts pertaining specifically to autonomous bicycles10 examined the stabilization and control methodologies of such two-wheeled vehicles. Other teams11,12 have studied bicycles as well, but mainly from the control aspect. Yi et al. provided a thorough analysis of the bicycle's kinematics and dynamics models,13,14 while an applicable version of these models for a real-world implementation of a self-stabilizing bicycle was presented in the work by He et al.,15 and obstacle avoidance methodologies were examined by Stasinopoulos et al.16 Further real-world applications of autonomous bicycle perception are relatively scarce, and no team has analyzed the two-wheeled SLAM problem before, mainly due to the lack of a sound theoretical framework describing the perception and SLAM aspects of such vehicles. A simplistic approach could suggest that the same principles that apply to four-wheeled vehicles, as seen in research efforts for both two-dimensional17,18 and three-dimensional19,20 environment representations, could be used to analyze the SLAM problem for autonomous bicycles. In reality, however, a significant number of fundamental differences between these two types of vehicles exist that demand the redefinition and adaptation of the problem to conform to the special characteristics of these autonomous vehicles.
This article provides a thorough theoretical analysis of the unique characteristics of autonomous bicycles that differentiate them from other mobile robots with regards to the application of SLAM. Our main contributions comprise the modification of the differential motion model for mobile robots based on the complex bicycle kinematics and dynamics, two types of odometry models along with the uncertainty factors they introduce, the introduction of an applicable sensor design and rotation pattern for onboard perception, and the specially modified measurement model for landmark detection. Our SLAM application solution can be extended for use by all similar two-wheeled vehicles. In addition, we present for the first time our laboratory’s autonomous platform prototype and its fully functional control, odometry registration and perception systems, capable of SLAM application. Furthermore, we analyze how the rate of change of the bicycle frame roll angle and the overall SLAM performance are connected and verify this hypothesis through our experiments.
The remainder of the article is organized as follows: “Bicycle model” section introduces the kinematics and dynamics models for the bicycle, while “Motion and odometry models” and “Environment sensing system and measurement model” sections present our proposed motion/odometry model and sensory system/measurement model, respectively. “Implications on SLAM application” section provides discussion on the main difficulties of the two-wheeled vehicle SLAM problem and establishes a connection between the bicycle roll change rate and the SLAM algorithm performance. “Autonomous bicycle prototype and experimental setting” and “Experimental results” sections describe our autonomous bicycle prototype and the experimental system and results that verify our bicycle SLAM problem solution’s potency and our hypothesis regarding performance. Finally, “Conclusion” section presents the conclusions of our effort and possible further extensions.
Bicycle model
As mentioned above, one of the most accurate models was presented in the work of Yi et al.,13,14 where the bicycle is considered as a two-part platform: a rear frame and a steering mechanism. Figure 1 shows a schematic of the vehicle. They assume that (1) the wheel-ground contact is a single point, and the thickness and geometry of the tire are neglected; (2) the bicycle body frame is considered a point mass; and (3) the bicycle moves on a flat plane with no vertical motion.

A kinematic diagram of the bicycle dynamics. (a) Schematic view of a bicycle on a plane and (b) top-view of the bicycle, as presented in the work of Yi et al. 14
In this model,
The kinematic steering variable
The nonholonomic constraint of the rear wheel implies that
In such a way, the mass center velocity is derived as
In a similar fashion, by introducing the Lagrangian
where
It is clear that the control inputs in equation (4) are the virtual steering velocity
where
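The bicycle-model equations (1) to (5) are not reproduced in this excerpt. For orientation, a widely used planar kinematic bicycle model, written in our own notation, which may differ from that of Yi et al.,13,14 takes the form:

```latex
% Planar kinematic bicycle model (standard form; notation ours, not the article's).
% (x, y): rear-wheel contact point, \psi: yaw angle, \phi: steering angle,
% v_r: rear-wheel longitudinal velocity, l: wheelbase.
\begin{align}
\dot{x}    &= v_r \cos\psi, \\
\dot{y}    &= v_r \sin\psi, \\
\dot{\psi} &= \frac{v_r}{l}\,\tan\phi,
\end{align}
% together with the rear wheel's nonholonomic (no side-slip) constraint
\begin{equation}
\dot{x}\sin\psi - \dot{y}\cos\psi = 0.
\end{equation}
```

The full model of Yi et al. additionally captures the frame roll dynamics, which the planar sketch above omits.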
Motion and odometry models
From the above description of the bicycle model and dynamics, it is obvious that the motion of a bicycle is fundamentally different from a four-wheeled vehicle’s. Considering the effect this model has on the registration of the bicycle’s displacement and change in orientation at every instant, that is, the registration of its odometry, we can observe from equation (3) that in order to calculate the Cartesian displacement
Motion model
As can be seen from the bicycle model above, the motion model from pose
where
As can be observed, the main difference is that our motion also depends greatly on the roll angular velocity
where
Under our error model, specified in equation (9), these motion errors have the probabilities shown in the set below
Since we assume independence between the different sources of error, the desired probability
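A sample-based version of such a motion model, with the roll angular velocity entering as an extra noisy input, might be sketched as follows. The noise magnitudes and the roll-to-yaw coupling gain below are hypothetical placeholders, not the article's equations (6) to (9).

```python
import math
import random

def sample_bicycle_motion(pose, control, dt):
    """Draw one successor pose from a noisy bicycle motion model.

    pose    = (x, y, yaw)
    control = (v, steer_rate, roll_rate); the roll angular velocity is
    treated as an extra control input, since bicycle yaw motion is
    coupled to frame roll. All sigmas and the coupling gain are
    illustrative placeholders."""
    x, y, yaw = pose
    v, steer_rate, roll_rate = control
    # perturb each input with zero-mean Gaussian noise (placeholder sigmas)
    v_hat = v + random.gauss(0.0, 0.05 + 0.05 * abs(v))
    s_hat = steer_rate + random.gauss(0.0, 0.02)
    r_hat = roll_rate + random.gauss(0.0, 0.02)
    # yaw change driven by steering, corrected by a roll-coupling term
    k_roll = 0.3  # hypothetical roll-to-yaw coupling gain
    yaw_new = yaw + (s_hat + k_roll * r_hat) * dt
    return (x + v_hat * dt * math.cos(yaw_new),
            y + v_hat * dt * math.sin(yaw_new),
            yaw_new)
```

Sampling many such poses from the same control yields the spread that the error probabilities above describe.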
Odometry model
Regarding the model where only odometry is registered instead of using the motion model to predict the behavior of the bicycle, we can obtain the main two components of our motion, namely the rear wheel longitudinal velocity
Velocity-orientation registration
Firstly, we can measure the rear wheel longitudinal velocity
where
Additionally, most four-wheeled, three-wheeled, or even two-wheeled—with the two wheels on the same axis—vehicles obtain their odometry mainly by means of attaching encoders to the differential drive system, that is, comparing the different rotation of the two wheels at either side of a main axis and calculating the instantaneous orientation of the robot. However, this technique cannot be implemented for bicycles, for which the rate of change of the yaw angle of the bicycle frame
Therefore, since the caster angle
where
while the errors due to equation (12) become
where
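A dead-reckoning step built on this velocity-orientation registration could be sketched as follows. The caster-angle projection used here, via cos of the caster angle, is a common approximation and is not necessarily identical to the article's equations (11) and (12).

```python
import math

def odometry_step(pose, v_rear, steer_angle, dt,
                  wheelbase=1.0, caster=math.radians(20.0)):
    """Integrate one odometry sample from rear-wheel speed and steering.

    The handlebar steering angle is projected onto the ground plane
    through the caster (head-tube) angle; projecting with cos(caster)
    is a common approximation, used here as a placeholder for the
    article's exact relation. wheelbase and caster are example values."""
    x, y, yaw = pose
    eff_steer = math.atan(math.tan(steer_angle) * math.cos(caster))
    yaw_rate = v_rear * math.tan(eff_steer) / wheelbase
    yaw_new = yaw + yaw_rate * dt
    return (x + v_rear * dt * math.cos(yaw_new),
            y + v_rear * dt * math.sin(yaw_new),
            yaw_new)
```

Accumulating these steps between measurement updates gives the odometry-based pose prediction used in place of the motion model.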
Gyroscopic odometry
Of course, the above data can also be obtained using a gyroscopic sensor at the rear part of the bicycle that can measure its speed and orientation at each instant, thus providing a method of measuring the odometry that is uncorrelated with the bicycle and tire dynamics. As observed in real-world applications, however, the accuracy provided for the velocity and the yaw angle of the bicycle, that is, its orientation, is not as reliable as the measurements provided by the motor encoders described above. Therefore, the velocity and orientation measured by a gyroscope can serve as an extra input to the system of an autonomous bicycle to validate the existing measurements. Otherwise, if we depend solely on the gyroscope, factors of gyroscope covariance for the velocity
and in a similar fashion
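One simple way to use the gyroscope as a cross-check, rather than as the sole source, is a gated complementary blend of the encoder-derived and gyro-derived yaw rates. The blend weight and disagreement gate below are hypothetical tuning values, not parameters from the article.

```python
def fuse_yaw_rate(encoder_rate, gyro_rate, alpha=0.8, gate=0.5):
    """Blend encoder- and gyro-derived yaw rates [rad/s].

    The encoder estimate is trusted more (weight alpha); the gyro is
    used mainly to validate it. If the two disagree beyond `gate`,
    fall back to the gyro alone, on the assumption that the encoder
    reading is corrupted (e.g. wheel slip). alpha and gate are
    illustrative placeholders."""
    if abs(encoder_rate - gyro_rate) > gate:
        return gyro_rate
    return alpha * encoder_rate + (1.0 - alpha) * gyro_rate
```

More principled alternatives, such as a Kalman filter over both sources, follow the same validation idea.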
Environment sensing system and measurement model
The effects of the bicycle’s motion model and road behavior on the perception of the environment and the detection of landmarks that can be used for the mapping procedure and the map updating are multiple. Although they inherently depend on the sensor system deployed to measure its surroundings, the main reasoning behind them remains the same. Thus, we present two cases of sensor designs that can provide environment scans on a two-dimensional level, such as with the use of laser range sensors (LRS), radars, sonars, and so on. Even in the case of three-dimensional sensors, such as multi-level LRS, cameras, or RGB-D cameras, a methodology to extract the distance to the environment objects is used and the same landmark detection limitations that we demonstrate exist, so the main principles of this analysis still apply.
Fixed range finding sensor
At first, we examine the case of a simple range finding sensor attached to the bicycle's main frame, which coincides with the rear wheel frame for standard bicycles, with the sensor fixed at a position vertical to the ground and its scanning plane horizontal to the ground. As described by Thrun et al.,1 the probability of a range measurement is typically modeled as a mixture distribution (Figure 2).

“Pseudo-density” of a typical mixture distribution
If the likelihood field model is used instead of the plain beam model, then we have a two-dimensional representation and the effect of
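The plain beam model mentioned above is the classic four-component mixture of Thrun et al.; it can be evaluated as follows. The mixing weights and parameters here are illustrative defaults, not tuned values from the article.

```python
import math

def beam_model_density(z, z_expected, z_max,
                       w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1,
                       sigma=0.2, lam=0.5):
    """Four-component beam-model mixture (after Thrun et al.);
    weights and parameters are illustrative, not tuned."""
    # 1) Gaussian around the expected range (correct detection)
    p_hit = (math.exp(-0.5 * ((z - z_expected) / sigma) ** 2)
             / (sigma * math.sqrt(2.0 * math.pi)))
    # 2) exponential for unexpected near objects (short readings)
    p_short = lam * math.exp(-lam * z) if z <= z_expected else 0.0
    # 3) point mass at the sensor's maximum range (missed detection)
    p_max = 1.0 if abs(z - z_max) < 1e-6 else 0.0
    # 4) uniform random clutter over the measurement range
    p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```

The roll-induced effects discussed next reshape exactly these components, for example by inflating the max-range mass when the beam misses a landmark.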
As can be seen from Figure 3, given the roll angle

Range finder scan plane’s rotational effect on object detection measurements.
If

Adjustment of range finding sensors’ measurement probability distribution when (a) “looking up” and (b) “looking down.”
For 3D SLAM, where measurements are used to construct a three-dimensional representation of the environment, we need to correct the measured position of each object according to the roll angle
Rotating range finding sensor
Since most modern applications require the creation of three-dimensional maps, we need to consider the case of a rotating sensor to cover the maximum field of view (FoV) possible. Most vehicles can incorporate sensors executing complex rotational patterns, covering the entire 360° FoV. For bicycles, however, given their inherent instability, the sensor rotation mechanism and pattern should be designed carefully so that they do not interfere with the balancing motion control of a self-stabilizing bicycle. The most appropriate mounting position is deemed to be the frontal part of the bicycle frame, in the position of the headlight. Other favorable positions include the back of the bicycle for monitoring the rear area, or the top of the bicycle, in which case, however, the bicycle cannot be ridden by a person, a trait we wish to maintain. Sensor rotations on the sagittal or transverse planes, as well as more complex rotation patterns, are deemed highly disruptive to the bicycle's stability. Therefore, we choose a rotation on the coronal plane, that is, having the range finding sensor tilting up and down. In that way, we can cover the entire frontal and lateral environment of the bicycle, with the only drawback of not being able to sense objects in the bicycle's rear; for SLAM purposes, however, such a FoV is deemed sufficient.
For two-dimensional measurements, if the sensor’s scanning plane at time

Ground distance for (a) sensor coronal rotation (tilt) and (b) bicycle roll and sensor tilt integration.
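The ground-intersection geometry sketched in Figure 4 can be illustrated with a simplified flat-ground model: the range at which a beam meets the ground depends on the sensor height and the beam's effective downward angle. The way roll couples into the lateral beams below is our simplification, not the article's full derivation.

```python
import math

def ground_hit_range(sensor_height, tilt, roll=0.0, bearing=0.0,
                     max_range=30.0):
    """Range [m] at which a beam intersects flat ground, if at all.

    sensor_height: sensor height above the ground plane [m]
    tilt:    downward pitch of the scanning plane [rad], positive = down
    roll:    bicycle frame roll [rad]
    bearing: beam direction within the scanning plane [rad], 0 = ahead
    A beam pointing sideways picks up part of the roll as extra
    downward pitch; this sin(bearing) projection is a simplified
    flat-ground assumption, not the article's exact geometry."""
    eff_down = tilt + roll * math.sin(bearing)
    if eff_down <= 0.0:
        return max_range  # beam never meets the ground within range
    return min(max_range, sensor_height / math.tan(eff_down))
```

Ranges near this ground-hit distance are the candidates for the erroneous "ground as landmark" detections discussed below.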
If we now integrate the bicycle roll angle
we can define the two situations as above. If
Adjusted measurement probability distribution cases.
Thus, we can see that even if we can reliably control the sensor pitch angle
In this analysis, it is important to include the specific errors that determine the accuracy of our measurements. Apart from the inherent error inserted by the range finding sensor, which can be represented by the detected range variance
Implications on SLAM application
Given the above modeling and proposed SLAM framework, we see fit to discuss the limitations of various SLAM algorithms, given the special characteristics of bicycles and akin vehicles, which may cause some algorithms that are potent on other types of vehicles to produce poor results. Then, we can attempt to relate the main uncertainty factor, that of
At first, as seen in the above analysis, two-wheeled vehicles suffer from an inherently increased degree of uncertainty, a constant, systematic, yet rather unpredictable clutter, even for relatively simple environment types. This is apparent from the uncertainty of the vehicle's control, given the need to account for all the errors introduced by the control variables of the motion model or the errors in the odometry measurement, as explained in the "Motion and odometry models" section. Moreover, the dependence of the correct detection of landmarks on the rapid and accurate registration of the bicycle's roll rotation
Another important aspect of the bicycle SLAM problem is the data association that is often used in SLAM algorithms that rely on feature extraction, especially if cameras or RGB-D sensors are used. Many recent feature extraction techniques are becoming more robust to changes in viewing angle and perspective, offering scale-, rotation-, and even affine-transform-invariant image features. However, the inaccuracy and blurriness introduced by rapid shifts of the sensing plane and by road abnormalities may be detrimental to data association, causing many techniques to produce poor results. That is a common phenomenon for vehicles such as bicycles, whose steering direction and frame roll change frequently and very suddenly during a cruise, for instance while avoiding obstacles along the way. Such an example can be seen in Figure 6, where the viewing angle of the camera may differ vastly (left-right) and the extracted features from landmarks can vary substantially, with some features not existing in both images or existing only partially, which can result in great uncertainty in associating the landmarks expected to be registered at a specific location. Given four-wheeled vehicles' stable motion, even at high speeds, these problems do not have the same gravity for them, making them easier to overcome.

Two different views of the same environment under positive and negative bicycle roll angles
Finally, given the above uncertainties in localization and in environment object detection and recognition, loop closure, an important part of mapping, will suffer from great inaccuracy. Examining moments when bicycle-like vehicles pass through the same spatial positions, having a similar sensing FoV but different bicycle roll angles

Loop closure under different bicycle roll angles
Regardless of the specific SLAM algorithm, however, based on our proposed SLAM framework, we can determine the effect that the most important inaccuracy-introducing factor, the bicycle roll angle
Autonomous bicycle prototype and experimental setting
To demonstrate the potency of our proposed system in solving the SLAM problem for bicycles and to verify our aforementioned hypotheses, we implement SLAM on the autonomous bicycle prototype designed by our laboratory, as seen in Figure 8(a). The main units of the autonomous bicycle comprise an NI cRIO controller, which controls the throttle, steering, and brake motors and processes the bicycle roll angle data from a gyroscopic sensor, and an Intel NUC mini computer running a combination of Ubuntu and ROS, which collects and processes the perception data from a SICK LiDAR and the serial input from an Arduino that handles the laser-based speedometer signal. The control system and sensor integration are such that the bicycle maintains its lightweight, rideable, and easily maneuverable structure.

(a) Autonomous bicycle prototype and experiment venue for (b) type 1 and (c) type 2, 3 experimental procedures.
The implemented self-stabilizing control methodology is described in the work of our team. 15 We make use of the LiDAR range-bearing data to apply two-dimensional passive SLAM and recreate a mapping of the environment where landmarks are represented by a mixture of weighted Gaussians and the bicycle’s trajectory within it, as a proof-of-concept for our SLAM framework, without involving visual input at this stage. The weights of the estimated map feature Gaussians are determined by the certainty of the landmarks’ existence at that location and are updated after each measurement set is processed. Given its success and recognition as one of the most effective algorithms for SLAM of the last 15 years, we choose to apply the multi-hypothesis (MH) FastSLAM, 21 whose multiple hypotheses methodology can theoretically account for the trajectory and measurement uncertainties.
We perform a series of experiments at two different types of experimental venues and conditions. The first, regarding type 1 experiment, involves a rather small terrain of

FastSLAM results for (a) the type 1, (b) type 2, and (c) type 3 experimental cases. The ground truth trajectory is marked with green, the actual landmark locations are marked with green circles, the estimated map features are marked with blue ‘X’s and the estimated bicycle trajectory is marked with red, while odometry registration is marked with yellow.
The bicycle is remotely controlled to execute the desired trajectories, and no dynamic path planning or obstacle avoidance techniques are used, in order to ensure the accuracy of these trajectories. All trajectories are registered by our onboard GPS and are used as the ground truth, which can be seen in all the experimental result figures. The routes last 1-2 min for the type 1 experiment and 4-6 min for types 2 and 3, with the bicycle running at an average speed of 2 m/s. In order to visualize our results, we draw an occupancy grid-like map to depict landmark locations, given that the landmarks' specific structure is not taken into account. For all experiment types, we draw our estimated landmarks on specific occupancy grid cells according to the location of the corresponding Gaussians' means, and we choose the occupancy grid's resolution as 0.01 m, that is, each grid cell is 1 cm × 1 cm and all landmarks are assumed to occupy only one grid cell, without visualizing their variance. In addition, neighboring Gaussian map features within a specific radius are merged in order to ensure nearby detections are related to the same landmark. Moreover, estimated landmarks with a low probability of existence, that is, with corresponding Gaussians whose weights fall under a specific log odds threshold, are pruned after the update phase, in order to ensure the overall map feature count does not grow out of control in case of erroneous detections.
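The merging and pruning bookkeeping described above can be sketched as follows. The merge radius and minimum-weight threshold are illustrative placeholders for our tuned values, and the feature representation is simplified to a mean and a weight.

```python
import math

def merge_and_prune(features, merge_radius=0.3, min_weight=0.05):
    """Housekeeping for a Gaussian-mixture landmark map.

    features: list of dicts {'mu': (x, y), 'w': weight}
    1) merge features whose means lie within merge_radius of an
       already-kept feature (weight-averaged mean, summed weights);
    2) prune features whose weight falls below min_weight.
    Parameter values are illustrative placeholders."""
    merged = []
    for f in sorted(features, key=lambda f: -f['w']):
        for m in merged:
            if math.dist(f['mu'], m['mu']) < merge_radius:
                total = m['w'] + f['w']
                m['mu'] = tuple((m['w'] * a + f['w'] * b) / total
                                for a, b in zip(m['mu'], f['mu']))
                m['w'] = total
                break
        else:
            merged.append(dict(f))
    return [m for m in merged if m['w'] >= min_weight]
```

Running this pass after each update keeps nearby detections fused into one landmark while discarding low-confidence spurious features.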
In the experiments, we make use of the velocity-orientation registration odometry model, using the velocity obtained by a rear-wheel laser speedometer and the gyroscopic sensor orientation, and the fixed LiDAR measurement model, meaning that the sensor’s tilt angle
Experimental results
After conducting our experimental procedure, we can verify the capability of our system to provide a reliable SLAM solution. As can be seen in Figure 9(a), where the results of MH FastSLAM are presented for the first type of environment, we observe that despite the increased noise of the environment we can realize SLAM and produce a mapping of the environment. The potency of our methodology is more visible in the second type of experimental venue with experiment types 2 and 3, as shown in Figure 9(b) and (c), respectively, where we obtain a more accurate mapping of our environment and trajectory representation. In all figures, the ground truth trajectory and actual landmarks are marked with green, while the estimated trajectory is marked with red and the estimated map features are marked with blue 'X's.
In all cases, the odometry produces quite accurate results, proving the potency of our framework's odometry model. The measurement model used is also capable of predicting whether map landmarks exist, thanks to our proposed estimation of the ground distance and the compensation for missed landmarks. However, especially in the experiment with increased environment clutter, we observe that the introduced uncertainty in the registration of the bicycle roll angle
To provide a quantitative analysis of our experimental results, we also include the feature count of the resulting MH FastSLAM map compared to the real landmark count as they evolve over time for the type 2 and type 3 experimental cases, as can be seen in Figure 10(a) and (c), respectively. The real landmark count increases monotonically as new landmarks enter the vehicle’s FoV. Additionally, we provide the optimal sub-pattern assignment (OSPA) metric 22 for the comparison of the real map with the MH FastSLAM map for those cases, as can be seen in Figure 10(b) and (d). Although the calculation of the OSPA metric is rather complicated, the core equation used is as follows
where
that is, the sum of the Mahalanobis distances between all the map feature locations and their corresponding closest real landmark locations, multiplied by the weight of each feature's Gaussian, if the map features are estimated within a specific vicinity of the actual landmarks. If the Mahalanobis distance is greater than a threshold called
that is, the maximum between the unaccounted actual map weight
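Since the OSPA equations are not reproduced in this excerpt, the generic OSPA distance of Schuhmacher et al.22 for two small 2-D point sets can be computed as follows. This brute-force version is for illustration only and implements the standard Euclidean metric, not the Mahalanobis/weighted variant described above.

```python
import itertools
import math

def ospa(X, Y, c=5.0, p=2):
    """Optimal sub-pattern assignment distance between point sets X, Y.

    c: cutoff penalizing cardinality mismatch; p: order of the metric.
    Brute-force over assignments -- suitable for small sets only.
    Generic OSPA (Schuhmacher et al.), not the article's weighted
    Mahalanobis variant."""
    if not X and not Y:
        return 0.0
    if len(X) > len(Y):
        X, Y = Y, X  # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    best = math.inf
    for perm in itertools.permutations(range(n), m):
        # localization cost of the best assignment, distances capped at c
        cost = sum(min(c, math.dist(X[i], Y[perm[i]])) ** p for i in range(m))
        best = min(best, cost)
    # add the cardinality penalty for the n - m unassigned points
    return ((best + (c ** p) * (n - m)) / n) ** (1.0 / p)
```

A perfect map gives zero, while each unmatched or far-off feature pushes the value toward the cutoff c, matching the behavior of the curves in Figure 10(b) and (d).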

FastSLAM (a and c) map feature count and (b and d) OSPA results for the type 2 and type 3 experimental cases, respectively.
From the graphs, we can see that despite the inclusion of more landmarks in the maps over the course of time, the map feature count follows the increase of the real landmark count and the OSPA metric is gradually decreasing over time, especially after loop closures. The map feature number increases above the true landmark count due to the erroneous landmark detections mainly from the registration of the ground, as explained in previous sections, leading to large differences at times. This issue is rather evident in Figure 9(c) in the middle of the clockwise elliptic trajectories, where a high number of false landmarks is included in the estimated map, due to the ground registration as the bicycle tilts to the right. This difference decreases when loop closures occur, which can be seen more clearly in Figure 10(c), but still remains relatively high, which constitutes one of the major problems of bicycle SLAM.
Regarding the algorithm’s performance, after the collection of the data, the off-line process time was approximately 5 min for the type 1 experiment and approximately 10 min for types 2 and 3, while the SLAM algorithm was executed on an Intel i5 processor @2.1 GHz over ROS. Although we did not reach online performance with this proof-of-concept implementation, the algorithm’s performance can be improved with a better selection of merging and pruning techniques, as well as a more optimized SLAM algorithm running stand-alone outside ROS, in order to bring it close to real-time execution.
Regarding our hypothesis relating the rate of change of the bicycle roll angle and angular velocity to the overall SLAM performance, we examine the graphs of Figure 10. By comparing the estimated landmark counts, we see that in the type 3 experiment, where we have more direction shifts and more frequent and sudden bicycle roll changes, the difference between the map feature count and the landmark count keeps increasing and remains high, even though the elliptic trajectory passes through the same location three times. The same holds for the OSPA error, which increases after the completion of each elliptic loop, although it should be constantly decreasing, since in theory we have continuous loop closure. On the contrary, in the type 2 experiment, due to the continuous loop closures and the smoother rate of change of the bicycle roll angle and angular velocity, the map feature number stays closer to the true landmark count during most of the trajectory and is reduced significantly after the big loop closure, after time
Conclusion
In this article, we proposed for the first time a comprehensive theoretical framework for the application of SLAM to autonomous bicycles and similar two-wheeled vehicles. We introduced motion and odometry models that are specifically applicable to bicycles, based on their complex kinematics and dynamics. Furthermore, we examined two different sensor designs that are suitable for bicycles and analyzed the modification of the measurement model for these two cases. Additionally, we presented our laboratory's autonomous bicycle prototype and the systems that allow it to realize SLAM, and implemented two types of experiments based on the MH FastSLAM methodology as a proof of concept. The results of our experiments validate the potency of our bicycle SLAM framework, making it clear that our approach can also be applied to similar two-wheeled vehicles, such as motorcycles. However, several issues were raised about the specific SLAM algorithm that is applied on top of our framework. In addition, we analyzed the relationship between the rate of change of the bicycle frame roll angle and the overall SLAM performance and verified this hypothesis through our experiments.
We hope that the suggested theoretical framework will provide the proper groundwork for further and more detailed research in the area of autonomous two-wheeled vehicles, especially with regard to environment perception and SLAM. In that way, we hope that autonomous bicycles and akin vehicles can reach the same level of progress as autonomous cars. Future work should focus on further testing our proposed framework in more two-wheeled vehicle SLAM applications and on researching SLAM algorithms with which the aforementioned problems of the "Implications on SLAM application" section can be faced. The high degree of uncertainty could be addressed with SLAM methodologies that can withstand the high level of clutter and preserve the accuracy of the resulting environment map despite the measurement errors. Secondly, to face the problem of data association, methods that include visual odometry or landmark tracking could be utilized. Finally, to avoid erroneous loop closure instances, methodologies that readjust their maps either in probabilistic ways or by choosing to integrate all the detected objects could be studied. The two-wheeled vehicle SLAM problem is still a new and challenging field of research.
