LiDAR Robot Navigation
LiDAR robot navigation is a complex combination of mapping, localization, and path planning. This article explains these concepts and demonstrates how they work together using a simple example in which a robot navigates to a goal within a row of plants.
LiDAR sensors are low-power devices, which helps prolong the battery life of robots and reduces the amount of raw data that localization algorithms must process. This lighter computational load makes it feasible to run more demanding variants of the SLAM algorithm without overloading the onboard processor.
LiDAR Sensors
The sensor is the core of a LiDAR system. It emits laser pulses into its surroundings, and these pulses bounce off nearby objects at different angles depending on their composition. The sensor measures the time each pulse takes to return and uses that information to calculate distances. Sensors are typically placed on rotating platforms that allow them to scan the surrounding area rapidly, at rates on the order of 10,000 samples per second.
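The distance calculation itself is simple time-of-flight arithmetic: a pulse travels to the target and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Time-of-flight ranging: a pulse's round-trip time gives its distance,
# d = (c * t) / 2, since the pulse travels to the target and back.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance in metres from a pulse's round-trip travel time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds hit something about 10 m away.
print(round(tof_distance(66.7e-9), 2))  # ~10.0
```

At 10,000 samples per second, each sample window is only 100 microseconds, far longer than any realistic round trip, which is why a single sensor can sustain such rates.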
LiDAR sensors are classified according to whether they are intended for use on land or in the air. Airborne LiDAR units are often attached to helicopters or unmanned aerial vehicles (UAVs). Terrestrial LiDAR is usually mounted on a stationary platform or a ground-based robot.
To measure distances accurately, the system must always know the exact location of the sensor. This information is captured using a combination of an inertial measurement unit (IMU), GPS, and time-keeping electronics. LiDAR systems use these sensors to calculate the precise position of the scanner in space and time, which is then used to build a 3D map of the surrounding area.
LiDAR scanners can also distinguish different types of surface, which is especially useful when mapping environments with dense vegetation. For instance, when an incoming pulse passes through a forest canopy, it commonly registers multiple returns. The first return is usually attributable to the tops of the trees, while the last is attributed to the ground surface. If the sensor records each pulse as a distinct return, this is referred to as discrete return LiDAR.
Discrete return scanning is also useful for analyzing surface structure. For example, a forested region may yield a sequence of first and intermediate returns from the canopy, with a strong final return representing the ground. The ability to separate these returns and record them as a point cloud makes it possible to create detailed terrain models.
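A minimal sketch of separating discrete returns into canopy and ground points, assuming each point carries the return_number/num_returns fields used by common point-cloud formats such as LAS (the field names here are illustrative):

```python
# Sketch: classify discrete returns. First-of-many returns are treated as
# canopy hits; last returns (including single returns) as ground candidates.
# Real pipelines add ground filtering, but the return logic is the core idea.

def split_returns(points):
    """points: iterable of dicts with 'z', 'return_number', 'num_returns'."""
    canopy, ground = [], []
    for p in points:
        if p["return_number"] == 1 and p["num_returns"] > 1:
            canopy.append(p)          # first of several returns: treetops
        elif p["return_number"] == p["num_returns"]:
            ground.append(p)          # last return: likely the surface
    return canopy, ground

cloud = [
    {"z": 18.2, "return_number": 1, "num_returns": 3},  # canopy top
    {"z": 9.5,  "return_number": 2, "num_returns": 3},  # mid-canopy
    {"z": 0.3,  "return_number": 3, "num_returns": 3},  # ground under trees
    {"z": 0.1,  "return_number": 1, "num_returns": 1},  # open ground
]
canopy, ground = split_returns(cloud)
```

The last-return points can then be gridded into a terrain model, while first returns feed a canopy-height model.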
Once a 3D model of the environment has been built, the robot can begin to navigate using this data. This involves localization, planning a path to the navigation goal, and dynamic obstacle detection. The latter is the process of identifying obstacles that are not present in the original map and adjusting the planned path accordingly.
SLAM Algorithms
SLAM (simultaneous localization and mapping) is an algorithm that allows your robot to map its surroundings and determine its position relative to that map. Engineers use this information to perform a variety of tasks, including route planning and obstacle detection.
To use SLAM, your robot needs a sensor that can provide range data (e.g. a laser scanner or a camera) and a computer with the appropriate software for processing that data. You will also want an IMU to provide basic positioning information. With these components, the system can track your robot's precise location in an unmapped environment.
The SLAM problem is complex, and there are a variety of back-end options. Whichever one you choose, a successful SLAM system requires constant interaction between the range-measurement device, the software that processes its data, and the vehicle or robot itself. It is a dynamic process that runs in a nearly continuous loop.
As the robot moves, it adds new scans to its map. The SLAM algorithm compares each new scan to previous ones using a process called scan matching. This also helps establish loop closures: when a loop closure is detected, the SLAM algorithm uses that information to update its estimate of the robot's trajectory.
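At the heart of scan matching is a rigid alignment between two point sets. A minimal sketch, assuming known point correspondences (a real ICP-style matcher re-estimates correspondences each iteration), recovers the rotation and translation via the SVD-based Kabsch method:

```python
import numpy as np

# One point-to-point alignment step: find the rotation R and translation t
# that best map the new scan onto the previous one, via the SVD of the
# cross-covariance of the centred point sets.

def align_scans(prev_pts, new_pts):
    """prev_pts, new_pts: (N, 2) arrays of corresponding 2-D points."""
    mu_p, mu_n = prev_pts.mean(axis=0), new_pts.mean(axis=0)
    H = (new_pts - mu_n).T @ (prev_pts - mu_p)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_p - R @ mu_n
    return R, t

# A scan shifted and rotated by 30 degrees should be recovered exactly.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
prev = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
new = (prev - np.array([0.5, -0.2])) @ R_true.T  # simulated moved scan
R, t = align_scans(prev, new)
```

The recovered (R, t) is exactly the pose change between the two scans, which is what the SLAM front end feeds into its trajectory estimate.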
Another factor that complicates SLAM is that the environment changes over time. For instance, if your robot drives down an empty aisle at one moment and then encounters pallets there later, it will have trouble connecting these two observations in its map. Handling such dynamics is crucial, and it is a feature of many modern LiDAR SLAM algorithms.
Despite these challenges, a properly designed SLAM system is highly effective for navigation and 3D scanning. It is especially beneficial in environments where the robot cannot rely on GNSS for positioning, such as an indoor factory floor. However, it is important to keep in mind that even a well-configured SLAM system can make errors. To correct these errors, you must be able to detect them and understand their impact on the SLAM process.
Mapping
The mapping function builds a representation of the robot's surroundings, covering everything within the sensor's field of view. This map is used for localization, route planning, and obstacle detection. This is an area where 3D LiDARs can be extremely useful, since they effectively act as a 3D camera, whereas a 2D LiDAR covers only a single scan plane.
The map-building process can take some time, but the results pay off. A complete, consistent map of the surrounding area allows the robot to perform high-precision navigation as well as to maneuver around obstacles.
As a rule, the higher the resolution of the sensor, the more precise the map will be. However, not all robots need high-resolution maps: a floor sweeper, for example, may not need the same level of detail as an industrial robot navigating large factory facilities.
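The resolution trade-off can be illustrated with a toy occupancy-grid sketch (the names and numbers are illustrative): the same obstacle occupies far fewer cells on a coarse grid than on a fine one, so memory and precision both scale with the inverse of the cell size.

```python
# Sketch: map resolution as grid cell size. The same wall fills few coarse
# cells at sweeper-grade resolution and many fine cells at industrial-grade
# resolution.

def to_cells(points, cell_size):
    """Map 2-D points (metres) to the set of occupied grid cells."""
    return {(int(x // cell_size), int(y // cell_size)) for x, y in points}

wall = [(x / 10.0, 2.0) for x in range(30)]  # 3 m wall sampled every 10 cm
coarse = to_cells(wall, 0.5)   # half-metre cells: 6 cells cover the wall
fine = to_cells(wall, 0.05)    # 5 cm cells: one cell per sample point
```

A planner working on the coarse grid knows only "something is in this half-metre", while the fine grid localizes the wall to a few centimetres, at 5x the storage along each axis.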
There are many mapping algorithms that can be used with LiDAR sensors. Cartographer is a popular one that uses a two-phase pose-graph optimization technique: it corrects for drift while maintaining an accurate global map, and it is especially effective when combined with odometry.
Another option is GraphSLAM, which uses linear equations to model the constraints in a graph. The constraints are represented by an information matrix and an information vector, with an entry for each pair of poses and landmarks that constrain one another. A GraphSLAM update consists of a series of additions and subtractions on these matrix and vector elements, with the end result that both are updated to reflect the robot's new observations.
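As a hypothetical one-dimensional illustration of this information form (the function and variable names are my own, not from any particular library), each measurement adds into an information matrix Omega and vector xi, and solving the resulting linear system recovers the poses and landmark positions:

```python
import numpy as np

# GraphSLAM-style update in a 1-D world: state = [pose0, pose1, landmark0].
# Every constraint is a pure addition/subtraction into Omega and xi; the
# estimate is then recovered by solving Omega @ x = xi.

n = 3
Omega = np.zeros((n, n))
xi = np.zeros(n)

Omega[0, 0] += 1000.0  # anchor the first pose at x = 0 with high confidence

def add_constraint(i, j, measured, weight=1.0):
    """Incorporate the constraint x[j] - x[i] = measured."""
    Omega[i, i] += weight; Omega[j, j] += weight
    Omega[i, j] -= weight; Omega[j, i] -= weight
    xi[i] -= weight * measured
    xi[j] += weight * measured

add_constraint(0, 1, 5.0)   # odometry: pose1 is 5 m ahead of pose0
add_constraint(0, 2, 9.0)   # pose0 measures the landmark 9 m away
add_constraint(1, 2, 4.0)   # pose1 measures it 4 m away

x = np.linalg.solve(Omega, xi)  # -> approximately [0, 5, 9]
```

Because the three measurements are mutually consistent, the solve reproduces them exactly; with noisy measurements it would return the least-squares compromise.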
Another helpful approach is EKF-based SLAM, which combines odometry and mapping using an Extended Kalman Filter (EKF). The EKF tracks not only the uncertainty in the robot's current position but also the uncertainty in the features the sensor has observed. The mapping function uses this information to better estimate the robot's position, which in turn allows it to refine the map.
Obstacle Detection
A robot must be able to perceive its surroundings so that it can avoid obstacles and reach its goal. It uses sensors such as digital cameras, infrared scanners, sonar, and laser radar to sense the environment, and inertial sensors to determine its speed, position, and orientation. Together, these sensors allow it to navigate safely and avoid collisions.
A key element of this process is obstacle detection, which often involves a range sensor measuring the distance between the robot and nearby obstacles. The sensor can be mounted on the vehicle, the robot itself, or even a pole. Keep in mind that the sensor can be affected by a variety of conditions, including rain, wind, and fog, so it is essential to calibrate it before every use.
The results of an eight-neighbor cell clustering algorithm can be used to detect static obstacles. However, this method on its own has low detection accuracy because of occlusion, the spacing between laser lines, and the sensor's angular velocity, all of which make it difficult to detect static obstacles reliably in a single frame. To overcome this problem, multi-frame fusion was implemented to improve the accuracy of static obstacle detection.
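Eight-neighbor clustering itself can be sketched as connected-component labeling on an occupancy grid, where diagonal cells count as neighbors; each resulting cluster is a candidate static obstacle (a simplified illustration, leaving out the multi-frame fusion the text describes):

```python
from collections import deque

# Group occupied cells of a 2-D grid into clusters, treating all 8
# surrounding cells (including diagonals) as connected neighbours.

def cluster_cells(grid):
    """grid: list of lists of 0/1. Returns a list of clusters of (r, c)."""
    rows, cols = len(grid), len(grid[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                queue, cluster = deque([(r, c)]), []
                seen.add((r, c))
                while queue:                     # breadth-first flood fill
                    cr, cc = queue.popleft()
                    cluster.append((cr, cc))
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):    # all 8 neighbours (and self)
                            nr, nc = cr + dr, cc + dc
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and grid[nr][nc] and (nr, nc) not in seen):
                                seen.add((nr, nc))
                                queue.append((nr, nc))
                clusters.append(cluster)
    return clusters

grid = [[1, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
print(len(cluster_cells(grid)))  # 2 separate obstacle clusters
```

The occlusion problem mentioned above shows up here directly: if the gap between laser lines leaves an empty cell inside what is really one obstacle, the flood fill splits it into two clusters, which is one motivation for fusing several frames.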
Combining roadside-unit-based and vehicle-camera-based obstacle detection has been shown to improve data-processing efficiency and to provide redundancy for subsequent navigation tasks, such as path planning. The method produces an accurate, high-quality picture of the surroundings. In outdoor tests it was compared against other obstacle-detection methods such as YOLOv5, monocular ranging, and VIDAR.

The test results showed that the algorithm could accurately identify the height and position of obstacles, as well as their tilt and rotation. It also performed well at detecting an obstacle's size and color, and it remained robust and reliable even when obstacles were moving.