Abstract
Introduction
Mobile robotic technology has been widely explored for inspection tasks in various industries. Specifically, we consider a pipe-climbing robot to inspect the integrity of pipe structures, such as the containment spray system of a nuclear power plant or the piped roof frame of a stadium. In various light-water reactors worldwide, a containment spray system is installed as part of the accident management system on the ceiling of the containment interior, as shown in Figure 1.

Containment spray-piping system in nuclear power plants.
When an accident such as a loss of coolant happens at a nuclear power plant, the heat and pressure within the containment vessel increase drastically. To handle such atmospheric conditions, the containment spray system sprays coolant from the ceiling of the containment building. The spray system is constructed by connecting various pipes together. These pipes must be inspected periodically to determine whether they have defects or faults. However, the pipes are installed in very high places, and it is difficult for a human to climb the pipes and inspect them. To carry out (mainly) visual inspections, we developed a mobile robot that climbs up and down, and crosses over, such pipes. We designed the robot to meet the following requirements.
- The mobile robot should be light enough to be practically and safely operated in a nuclear power plant; for example, humans in the field should not be injured even if the robot loses its grip on the pipe and falls.
- The robot should not fall, even after a loss of power.
- The robot should be able to climb spray pipes up to 50 m, work at the ceiling of the containment building, and then climb down again.
- The robot should be able to bypass obstacles: T-shaped branches, pipe flanges, and valves.
- The robot must work with no power or communication cables to the ground, avoiding problems with cable weight and twisting.
- The robot should be able to autonomously grip nearby pipes, because the remote operator often cannot see the target pipe well.
Thus, whereas conventional systems use a general six-degree-of-freedom (DOF) manipulator, we designed and optimized a very light five-DOF manipulator by removing unnecessary DOFs. The robot has five links and a gripper at each end, and moves along cylindrical pipes, bypassing obstacles such as flanges and valves, like an inchworm or a gibbon.
There is a body of research on climbing robots for the inspection and maintenance of vertical structures.1 Many forms of locomotion have been employed, including arms and legs (limbs),2–4 wheels and chains,5 a sliding frame,6 and wires and rails.7 As with locomotion, a number of types of adhesion have also been used, depending on the given task, including magnetic adhesion,8–10 pneumatic adhesion,11 mechanical adhesion,12–15 electrostatic adhesion,16 and chemical adhesion.17
Among these options, we adopted arms and legs for locomotion and mechanical grip for adhesion. Limb-climbing robots are highly adaptable to surface structure and can overcome obstacles and steps.18 Such grip-based adhesion has a key safety advantage: even a loss of power does not necessarily cause the system to drop off, although such systems are admittedly not very fast. Similar climbing robots have also been developed by other researchers.19,20
ROMA19 was designed to climb beam-based structures with its two grippers. InspiRat20 is a biologically inspired robot able to climb cables and tubes like a rat. Treebot21 was designed to climb trees or poles and could be helpful for inspection and maintenance of wooden structures.
Three-dimensional (3-D) CLIMBER, developed at the University of Coimbra,22,23 has a four-DOF manipulator and two grippers; it is a very useful mechanism able to climb arbitrarily bent poles. Climbot,24–26 developed at the South China University of Technology, climbs pipes using a five-DOF mechanism; it can climb cylindrical pipes as well as rectangular truss structures with high mobility and good manipulation capability. However, none of these robots is suitable for practical use on a containment spray system in a nuclear power plant in terms of the required combination of weight, automatic gripping function, and autonomy.
For these reasons, we developed a practical robot for the inspection of containment spray-piping systems in nuclear power plants. Our mobile robot is small and light enough to be operated practically in a nuclear power plant. It also has a fail-safe feature, especially for use during a blackout. The robot has no power or communication cables to the ground; it communicates with the ground control station wirelessly. It grips nearby pipes autonomously, because the remote operator often cannot see the objective pipe well. We designed and optimized a very light five-DOF manipulator by removing unnecessary DOFs. The robot has five links and a gripper at each end, and moves along cylindrical pipes, bypassing obstacles such as flanges and valves, like an inchworm or a gibbon.
The most essential technology for a pipe-climbing robot is how to grasp pipes firmly, quickly, and precisely. Several studies have tried to solve this problem,27–29 which is difficult because the robot climbs far above the place where the human operates it: from the ground, the operator cannot see the robot's hands or the objective pipe. Equipping the robot with several stereo cameras is problematic owing to their weight and volume, and remote visual operation requires highly trained operators. For these reasons, we herein propose a practical scheme, usable in real time, that allows a pipe-climbing robot to grasp pipes autonomously.
The remainder of this article is organized as follows. The configuration of the PiROB as well as its hardware and software is described in the next section. The third section includes the forward and inverse kinematics of the robot, as well as its Jacobian for master control. A vision-based grasping scheme is proposed and its achievement is described in the fourth section. Finally, we summarize the results and address items for future research.
Pipe-climbing robot and its kinematics
Our pipe-climbing robot, called PiROB, is shown in Figure 2. The robot was originally designed to climb the spray-piping system located in the containment building of a nuclear power plant, as shown in Figure 1. The piping system has obstacles such as flanges and valves along the pipes, and the ceiling is up to 40 m high. The pipes branch to feed spray water into horizontal circumferential pipes that reach the entire containment.

General view of the vision-based pipe-climbing robot.
Robot design
The PiROB is a five-DOF robot with a gripper at each end; it moves along cylindrical pipes, bypassing obstacles such as flanges and valves, like an inchworm or a gibbon. Its shape is symmetric with respect to the choice of base link, and the two grippers take turns serving as the base. Figure 3 shows a kinematic model. Suppose that gripper A is closed, gripping the pipe, and that gripper B is open. In this case, gripper A works as the base of the manipulator, and gripper B is considered the hand of the five-DOF manipulator. The robot has five joints, all rotational. Frame {0} is placed at the palm of the hand for convenience.

Kinematic modeling of the pipe-climbing robot and its coordinate systems.
DOF analysis
Generally, six DOFs are needed for a manipulator hand to be placed at an arbitrary specified position and orientation (three DOFs for position and three for orientation). Because PiROB has five DOFs, it cannot place its hand at an arbitrary position and orientation. However, since the pipe is cylindrical and symmetric about its center line, its orientation can be treated as two-DOF, just like a line. Therefore, the five-DOF PiROB can theoretically grasp the objective pipe, even though it cannot do so at an arbitrarily specified three-DOF orientation. This is a general property of five-DOF manipulators.33
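The DOF counting above can be made explicit. A full grasp pose normally requires six DOFs, but the axis of a cylinder is a line, and a line in space has only two orientational DOFs:

```latex
\underbrace{3}_{\text{position of a point on the axis}}
\;+\;
\underbrace{2}_{\text{direction of the center line}}
\;=\; 5
\;<\;
\underbrace{3+3}_{\text{full pose}} \;=\; 6
```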
If we consider the nature of the subspace of the five-DOF PiROB manipulator, we quickly realize that this subspace creates one constraint on the attainable orientation, as shown in Figure 4. The pointing direction of the gripper, the

PiROB manipulator and its pointing direction to the pipe.
It can be summarized as follows: since the robot has five DOFs, it cannot grasp the pipe at a specified position with an arbitrarily specified orientation. When the target point on the pipe is specified, the robot grasps the pipe along a pointing direction that is determined passively. The robot uses three DOFs for positioning, one DOF to passively determine the pointing direction in the plane of the arm, and the remaining DOF for the gripper to mate with the pipe.
PiROB weighs 3.8 kg, and its total length is 90 cm. The weight-reduction measures were as follows. The frame of the link between the joints was optimally designed, even though the frame deflects slightly. The motor casing was used as part of the link frame. The number of motors was minimized based on the DOF analysis, which also reduced the number of links and joints and their weight. The motors we used were light for their torque. Plastic was used in some parts, even though it introduced some gear backlash. The gear backlash and frame deflection, as well as gripper misalignment, were compensated in the robot motion control, which is not described in this article.29–32 The gripper was also optimally designed to reduce weight, removing all unnecessary parts. Even the cameras and sensors were minimized in consideration of weight. The electronic devices and battery were neatly attached to the link frame.
Robot gripper
As shown in Figure 5, the gripper was designed to grasp pipes of 3–5 inches in diameter, considering that the diameter of a spray pipe is normally 4 inches. Because of the limits on robot weight and battery power, the gripper was heavily optimized over a series of design iterations. The final reduction gear train was a combination of a worm and a partial worm gear after a bevel gear, chosen in consideration of several factors: fail-safe behavior during a blackout, reduction ratio, gripping speed, motor torque, and repeatability of motion. Because the worm and worm gear are irreversible, the gripper cannot open when electric power is lost. The gripping motor was a small DC motor (0.6 W), and the gripper opened in 3 s. The fingers were covered with rubber to improve their grip. Six proximity sensors were embedded in the palm of each hand to measure the gap between the palm and the objective pipe surface. For low weight, we chose the OSG-105LF proximity sensor (Opto Sensor), an ultra-miniature, thin, reflective sensor. Its emitter and detector face the object in the same direction; the emitter is a high-output gallium arsenide infrared light-emitting diode (LED), and the detector is a high-sensitivity silicon transistor. The sensor measures 2.7 × 3.2 × 1.4 mm, with an optimum detection distance of 0.8–1.2 mm and a wavelength of 940 nm.

Illustration of gripper design and power transmission.
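The fail-safe (irreversible) property of the worm stage can be checked with the standard self-locking condition: a worm gear cannot be back-driven when its lead angle is below the friction angle. A minimal sketch with illustrative numbers; the actual lead, pitch diameter, and friction coefficient of PiROB's gear train are not given here:

```python
import math

def worm_is_self_locking(lead_mm: float, pitch_diameter_mm: float,
                         friction_coeff: float) -> bool:
    """Self-locking (non-backdrivable) if tan(lead angle) < friction coefficient."""
    lead_angle = math.atan(lead_mm / (math.pi * pitch_diameter_mm))
    friction_angle = math.atan(friction_coeff)
    return lead_angle < friction_angle

# Illustrative values only: 4 mm lead, 16 mm pitch diameter, mu = 0.1.
print(worm_is_self_locking(4.0, 16.0, 0.1))  # True
```

Single-start worms with small lead angles are typically self-locking, which is why the gripper stays closed during a blackout.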
A camera and a light were mounted on the robot hand to help see the object to grasp. The camera used a charge-coupled device with 525 interlaced lines, and a diode laser with a slit beam was also mounted there to assist visual grasping. The image captured by the camera was transmitted to the remote control station via wireless communication. The overall specifications of PiROB are listed in Table 1.
Overall specifications of PiROB.
Forward kinematics and inverse kinematics
To analyze the kinematics of our robotic system, let us define the following coordinate frames used to model the whole robotic system:
In Figure 3, the robot is shown as grasping a horizontal pipe with gripper A, which, in this position, works as the base of the robot. For ease of calculation, the world coordinate system, {
The pipes were bent along the center line with a unit vector,
In this section, we determine the position of gripper B with respect to the {Rob-Base} frame. Thus, we will derive the kinematics of the robot manipulator. From the basic equation, the position,
and the kinematics of the robot can be described as
where
After deriving each transformation in equation (2) in Appendix 1, we can calculate the entire transformation,
where
Additionally, the transformation
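The forward kinematics in equation (2) chains one homogeneous transform per joint. The actual link parameters are given in Appendix 1 and are not reproduced here, so the sketch below uses a generic rotation-then-translation transform per revolute joint; the joint axes and link lengths are placeholders, not PiROB's real geometry:

```python
import math

def rot_z(theta):
    """4x4 homogeneous rotation about z by theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans(x, y, z):
    """4x4 homogeneous translation."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joint_angles, link_lengths):
    """Chain T = A1(q1) A2(q2) ... A5(q5); returns the 4x4 base-to-hand transform."""
    t = trans(0, 0, 0)  # identity
    for q, l in zip(joint_angles, link_lengths):
        t = matmul(t, matmul(rot_z(q), trans(l, 0, 0)))
    return t

# All joints at zero: the hand lies on the x-axis at the summed link lengths
# (placeholder lengths chosen to sum to PiROB's ~90 cm total length).
t = forward_kinematics([0.0] * 5, [0.2, 0.2, 0.1, 0.2, 0.2])
print(round(t[0][3], 3))  # 0.9
```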
Inverse kinematics
The inverse kinematics is obtained through considerable effort. When the position and orientation of the object are given by the matrix
we succeed in obtaining the final solution of the inverse kinematics:
where
To solve the inverse kinematics, various kinds of backward transforming techniques are used. For deriving
and for deriving
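The backward-transforming technique premultiplies the pose equation by inverse transforms to isolate individual joint angles. PiROB's full five-DOF derivation is lengthy, but a two-link planar analogue shows the flavor of the closed-form result; this is a textbook reduction, not the robot's actual solution:

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form IK for a 2-link planar arm (elbow-down branch)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle; clamp guards against rounding.
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))
    # Shoulder angle: direction to target minus the wrist-offset angle.
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

# Round-trip check: forward kinematics of the IK solution reproduces the target.
q1, q2 = ik_two_link(0.3, 0.2, 0.25, 0.25)
fx = 0.25 * math.cos(q1) + 0.25 * math.cos(q1 + q2)
fy = 0.25 * math.sin(q1) + 0.25 * math.sin(q1 + q2)
print(round(fx, 6), round(fy, 6))  # 0.3 0.2
```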
Vision-based grasping scheme
For the robot to grasp a pipe, we have to know the position and orientation of the pipe in the form of a homogeneous transformation
This can be calculated through a transformation of the robot-to-pipe coordinate system
where

Perspective transformation and the variables of the coordinate transformation.
Here, we make the following three assumptions:
- A 3-D cylinder is assumed to be a two-dimensional planar strip on a marked plane, as shown in Figure 7. In a real situation, the 3-D cylinder appears slightly larger than the planar strip, and the difference grows as the camera lens approaches the pipe, for example, from position (a) to (b).
- The pipe is cylindrical and therefore symmetric about its center line.
- The robot may grasp the pipe at an arbitrary point along the center line.

Difference between 3-D cylinder and planar strip (top view). 3-D: three-dimensional.
From these three assumptions,
considering that we have four unknowns:
By calculating the geometric relationship between the pipe and image plane, point
or
where the matrix,
As shown in Figure 8, the planar pattern of the pipe is projected onto the image plane as a trapezoidal shape. The straight vertical line on the pipe-mark plane will be projected onto an image plane as an oblique line with slope,
where

Planar pattern of the pipe and its projected image angles. (a) Planar pattern of the pipe. (b) Projected image of the pipe pattern.
We can obtain two slopes, four unknown values, four equations with four known values (
Solving the four nonlinear simultaneous equations (15) (
Next, we have to obtain the joint angles of the robot,
The solution can be obtained using the previous results of equations (6) and (7). Finally, we control the manipulator using the desired joint angles,
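The slope measurement above rests on the standard pinhole projection u = f·X/Z, v = f·Y/Z: a straight pipe edge projects to an image line whose slope encodes the pipe's pose relative to the camera. A minimal sketch; the focal length and 3-D points are illustrative, not calibration values from this work:

```python
def project(point, f):
    """Pinhole projection of a 3-D camera-frame point (X, Y, Z) to image (u, v)."""
    x, y, z = point
    return f * x / z, f * y / z

def image_slope(p1, p2, f):
    """Slope dv/du of the image line through the projections of two 3-D points."""
    u1, v1 = project(p1, f)
    u2, v2 = project(p2, f)
    return (v2 - v1) / (u2 - u1)

# Two illustrative points on a pipe edge at equal depth project with slope 0.5.
print(image_slope((0.1, 0.0, 1.0), (0.3, 0.1, 1.0), f=500.0))  # 0.5
```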
Practical scheme
One issue is that we have to use numerical computation to solve the four nonlinear simultaneous equations (15), and it takes a lot of time to control the manipulator properly. Thus, we propose a more practical scheme to solve this problem for use in real environments.
We turned our attention to the point that the four unknowns,
Thus, we can calculate the inverse kinematics (16) by supplying four variables for all visual feedback
where
Thus, we can command the robot manipulator using the joint
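The practical scheme replaces a per-step numerical solve of the nonlinear projection equations with a feedback loop on the measured image quantities. A toy sketch of such a proportional visual-feedback iteration; the scalar measurement model, update rule, and gain here are invented purely for illustration:

```python
def visual_feedback(measure, command, x0, target, gain=0.5, tol=1e-6, max_iter=200):
    """Drive a measured image feature to its target by proportional feedback,
    avoiding an explicit solve of the nonlinear projection equations."""
    x = x0
    for _ in range(max_iter):
        error = target - measure(x)
        if abs(error) < tol:
            break
        x = command(x, gain * error)  # apply a correction proportional to the error
    return x

# Toy model: the measured 'image slope' grows monotonically with the pose x.
measure = lambda x: 2.0 * x + 0.1 * x ** 3  # hypothetical camera measurement
command = lambda x, step: x + step          # hypothetical pose update
x = visual_feedback(measure, command, x0=0.0, target=1.0)
print(round(measure(x), 4))  # 1.0
```

Because the measurement is monotonic in the pose here, the proportional loop converges to the pose whose projected feature matches the target, with no equation solving inside the control loop.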
Robot control with visual grasping
Robot controller
Five servomotors were used for the five joints of the robot manipulator, and two DC motors were used to open and close the grippers. Each servomotor, with internal reduction gears and an absolute encoder, weighed 154 g and measured 40 × 65 × 46 mm. Its input voltage was 12 V, stall torque 8.4 Nm, stall current 5.2 A, and no-load speed 45 r/min. We used the same motor at all five joints because the robot works in various configurations, so no joint consistently requires less torque than the others.
The electronic hardware of the robot controller was mounted on the robot link frame. It consisted of three circuit boards: a power board, a CPU board, and a control board (Figure 9). The power board provided four voltages from the 14.8 V DC battery input, as the various components and sensors required different input voltages. The CPU board carried the main processor, communicated wirelessly with the remote computer system, and managed the 12 proximity sensors in the palms and the inclinometers. The microcontroller was a PIC32MX440F256H running at 80 MHz with 256 kB of flash and 32 kB of RAM. The control board drove the five servomotors at the manipulator joints and the two DC motors in the grippers. It also managed the video camera, the line-beam laser, and the illumination LED.

Configuration of the electronic hardware of the robot controller.
Communication between the remote control station and the robot was over 2.4 GHz WiFi, whereas the camera image was transmitted separately to the main control station over 5.8 GHz WiFi, because the image data volume was large and we had to guard against transmission noise. The two could communicate up to 50 m apart, particularly when there were no obstacles between them.
We used a lithium-ion rechargeable battery pack assembled from four high-quality cylindrical cells (3.7 V) in series, with built-in printed-circuit-board protection. The pack therefore supplied 14.8 V, with a capacity of 2.6 Ah, a weight of 183 g, dimensions of 68 × 37 × 38 mm, and a maximum discharge current of 5 A. With this pack, the robot could climb continuously upward for 64 min or downward for 86 min.
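The reported battery figures are mutually consistent. Assuming the pack is fully discharged over a run, climbing for 64 min on 2.6 Ah implies an average current draw of about 2.4 A, comfortably below the pack's 5 A maximum discharge current:

```python
capacity_ah = 2.6
climb_min, descend_min = 64, 86

# Average current over a full discharge (capacity / runtime in hours).
avg_current_climb = capacity_ah / (climb_min / 60)
avg_current_descend = capacity_ah / (descend_min / 60)
print(round(avg_current_climb, 2), round(avg_current_descend, 2))  # 2.44 1.81
```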
Figure 10 presents our reactor spray-pipe inspection system along with the pipe-climbing robot and the remote control station. A test bed of reactor spray-pipe structures was built in our laboratory. The mockup had typical obstacles (e.g. valves, flanges, T-branches). The remote control station is shown near the robot but would be located outside the containment building in a real situation. The robot communicated with the control station wirelessly. The robot was operated manually but also had some autonomous options.

The robot-climbing pipe structures and remote control station.
The operator commanded the robot while watching a monitor. A haptic master (Phantom Omni) was used to command the manipulator motion, sending velocity commands for the hand (
Experimental procedure of visual grasping
The graphic user interface of the main control system included a motor control module, a sensor data monitoring module, a master manipulator status module, and a camera image display module, among other parts, as shown in Figure 11. The camera image display module shows the gripper grasping the pipe (the display is switchable between the two cameras). The camera image was used to calculate the transformation variables of the gripper from the pipe position. Twenty-five image frames were processed every second. By processing the captured image, we could obtain two slopes,

Graphic user interface of the remote control computer.

The captured image of the pipe and its processing procedures.
Figure 12 shows the image processing procedure used to obtain the line equation of the objective pipe edge. It comprises lens-distortion compensation of the original image from the robot's on-hand cameras, edge detection, selection of straight lines, and so on. Detecting the edge line of the pipe is not easy, especially under poor illumination.
We also had to intelligently choose the correct line from among several straight lines behind the pipe. For this, we used a laser line beam to detect straight lines (Figure 13) and distinguish the correct line from the many background lines. The laser scanning device emits the laser line beam onto the pipe, varying the incident angle. We obtain the differential image between the currently captured image and the previously captured image; the differential image appears as contours (Figure 12(a1)). After calculating the left and right endpoints of the contours (a2), we fit a line through the endpoints (a3).

Structural line beam laser and vision system.
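The endpoint-based line fit from the differential image can be sketched as follows: collect the contour pixels, take the left and right endpoints per image row, and least-squares fit x = a·y + b through them. This is a pure-Python stand-in for the actual image pipeline, with a synthetic contour in place of real camera data:

```python
def fit_line_from_contour(pixels):
    """Fit x = a*y + b through the per-row left/right endpoints of contour pixels.

    pixels: iterable of (x, y) contour coordinates from the differential image.
    """
    rows = {}
    for x, y in pixels:
        lo, hi = rows.get(y, (x, x))
        rows[y] = (min(lo, x), max(hi, x))
    pts = [(x, y) for y, (lo, hi) in rows.items() for x in (lo, hi)]
    # Least squares for x = a*y + b via the normal equations.
    n = len(pts)
    sy = sum(y for _, y in pts)
    sx = sum(x for x, _ in pts)
    syy = sum(y * y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Synthetic 2-pixel-wide edge along x = 2*y + 3: endpoints straddle the true line.
pixels = [(2 * y + 3 + dx, y) for y in range(10) for dx in (-1, 1)]
a, b = fit_line_from_contour(pixels)
print(round(a, 6), round(b, 6))  # 2.0 3.0
```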
However, the pipe edge lines obtained this way do not always match the real pipe edge because of uncertain environmental conditions such as illumination. Thus, we also run line detection on the original image using the Hough transform and obtain many candidates for the pipe edge line (b2). Among the candidates, we choose the one (c) most similar to the line obtained by laser scanning.
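Selecting the Hough candidate closest to the laser-derived line can be done with a simple weighted distance over the line parameters; the weights and sample numbers below are illustrative choices, not the exact criterion used on the robot:

```python
def closest_line(candidates, reference, w_slope=1.0, w_intercept=0.01):
    """Pick the (slope, intercept) candidate nearest the laser-scanned reference line."""
    ref_m, ref_c = reference
    return min(candidates,
               key=lambda line: (w_slope * abs(line[0] - ref_m)
                                 + w_intercept * abs(line[1] - ref_c)))

# Hypothetical Hough candidates (slope, intercept in pixels) versus the
# laser-scanned reference line (0.50, 120).
candidates = [(0.05, 118), (0.52, 305), (0.48, 122), (1.40, 90)]
print(closest_line(candidates, (0.50, 120)))  # (0.48, 122)
```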
Figure 14 presents the details of these processing methods. To grasp the objective pipe, we moved the gripper near it manually with the master or a joystick until the pipe image was displayed on the monitor, or more specifically, until both the left and right boundaries of the pipe were visible. We then emitted the laser line beam onto the pipe and captured the pipe image. By the image processing explained above, we obtained the values of the pipe edge line,

Experimental procedures of visual grasping.
Experiments and results
We carried out a series of experiments under various conditions. We classify the pipe structures along which the robot moves as (a) a linear pipe, (b) a curved pipe, including a T-shaped branch, and (c) parallel separated pipes. Arbitrarily separated pipes were not tested in this work. When moving along a linear pipe, the robot control and its performance differ between vertical and horizontal pipes, and climbing upward and downward also show different characteristics. Curved pipes such as T-branches are classified in the same way as linear pipes. Parallel pipes are likewise divided into vertical and horizontal cases.
PiROB has three gaits with its five DOFs.25 The first is inchworm mode: the robot raises and stretches the forearm, grasps the pipe, then releases the rear-hand gripper and moves it close to the forearm. In turning-over mode, the robot releases the rear-hand gripper, turns the body over, and then grasps the pipe. In turning-around mode, it releases the rear-hand gripper, rotates about the forehand pivot, and then grasps the pipe. Brachiation, swinging from the current pipe to the objective pipe like a gibbon, is not studied in this article and is left for future work.
Table 2 shows the failure rate of visual grasping in each case. Experiments were conducted 50 times for each case under arbitrarily varied conditions, such as illumination, background, remaining battery charge, and wireless signal strength. Movement along a horizontal linear pipe is the easiest, especially in inchworm mode, because the pose of the robot need not change much and the gripper is already so close to the objective pipe that the captured pipe image is large enough to obtain the projected line equation. When there are no bumps on the pipe surface, inchworm movement can even be accomplished with only the proximity sensors in the palm, without visual grasping.
Experimental condition and failure rate of visual grasping for each case (%).a
aX denotes that we did not experiment in that case; failure rates are expressed in %.
Climbing upward is generally more difficult than climbing downward: when going downward, motor backlash and link deflection are compensated more easily, and vibrations are less severe. Curved pipes, such as T-branches, truly verify our proposed algorithm; any mistake in the algorithm leaves the robot unable to grasp an arbitrarily bent pipe. The most common cause of grasping failure is failure to obtain the projected line equation of the objective pipe edge, mainly because of illumination changes and confusing background lines. Arbitrarily separated pipes and brachiation motion were not tested here. Videos of the experiments are available on YouTube.34 In these experiments, the motor backlash was roughly compensated using the torque imposed on each motor joint and the difference between the motor position command and the corresponding measured motor position.
Figure 15 shows an experiment of climbing a vertical pipe at the hydraulic facility in our institute, which resembles the spray-piping system of a nuclear power plant. The robot climbed the pipe to a height of 23 m in 21.3 min in inchworm mode. The pipe is vertically linear, without large obstacles such as valves and flanges. The robot was wirelessly controlled from the remote control station on the ground.

Experiment of climbing at the hydraulic facility.
Summary
To inspect the reactor spray-piping system, we developed a very practical pipe-climbing robot. Our climbing robot has a five-DOF manipulator and two grippers and moves along cylindrical pipes like an inchworm, bypassing obstacles such as flanges and valves. The robot is able to work in very high places, grasping the pipes it follows semi-autonomously. We proposed a vision-based scheme that allows the pipe-climbing robot to grasp a cylindrical pipe autonomously; the scheme can be used in a practical pipe-climbing robot in real time. We described the configuration of the PiROB, including its hardware and software, derived the forward and inverse kinematics of the mobile robot, and explained the robot control with visual grasping. The robot is practical for spray-pipe inspections as well as for many other applications, such as the inspection of stadium roof frames made of pipes.
In a future study, a series of experiments will be performed, including pipe identification under the worst possible illumination conditions. In addition, error analysis based on our three assumptions and gain tuning of the visual feedback are required for better application of our scheme.






