# Viam SLAM: Using Viam to Map a Room

2 January 2024 · 8 minute read · By Kevin McAleer

Tags: SLAM, Computer Vision, LIDAR, Python, Raspberry Pi, Viam, Navigation, Cartographer
## About this project

In this project, we will use Viam to map a room using SLAM, and then use that map to navigate the room. This project also adds upgrades to the Cubie-1 robot, including a new 3D-printed shelf for the motor drivers and IMU.

## What is Viam and SLAM?

Viam is an easy-to-use robotics platform that provides simple software building blocks and web-based tools for building machine learning models, navigation systems using SLAM, and computer vision systems. Viam can run on a Raspberry Pi (model 3 and up) or on a desktop computer.

*A map created with Viam and a Lidar sensor*

Lidar (Light Detection and Ranging) is a remote sensing technology that measures the distance to an object by emitting laser light and measuring how long the light takes to return after bouncing off the object. A Lidar sensor measures the time of flight (TOF) of each laser pulse and calculates the distance to the object it bounced off. Lidar can measure distances to objects in the air, on land, and underwater. It's most commonly used for mapping and navigation, but can also be used for 3D imaging and object detection.

## What is SLAM?

SLAM stands for Simultaneous Localization and Mapping. It is a technique used by robots and autonomous vehicles to build a map of an unknown environment while simultaneously keeping track of their current location within that map.

Many SLAM algorithms are based on the Kalman filter, a mathematical algorithm that uses noisy sensor measurements to produce a good estimate of the state of a system. In the case of SLAM, the system is the robot, and the state includes the robot's location and the locations of landmarks in the environment.

Here, SLAM uses a Lidar sensor to capture a 2D map of the environment. The Lidar sensor is mounted on the robot, and as the robot moves around the environment, the sensor captures a series of scans. Each scan is a 2D point cloud: a set of points in the form of (x, y) coordinates. The SLAM algorithm uses these scans to build a map of the environment and to locate the robot within it.

## How SLAM works

SLAM is a collection of algorithms used in robotics for navigation and mapping. It works by using Lidar, sonar, and other sensor data to construct a map of the environment, and then using this map to localize the robot within it. The Lidar data is used to construct a point cloud of the environment, which in turn is used to build an occupancy grid map. The occupancy grid map is then used to localize the robot and navigate it through the environment. SLAM algorithms can also use additional sensor data, such as inertial measurements and camera images, to improve the accuracy and reliability of mapping and localization.

The SLAM algorithm starts by generating an initial map of the environment and then uses the data from the sensors to refine it. SLAM algorithms can also localize the robot in the environment by tracking its motion and comparing it to the map. SLAM is a powerful tool for navigation and is used in many applications, such as self-driving cars, robotics, and augmented reality.
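To make the scan-to-map idea concrete, here is a minimal sketch (not the actual Cartographer implementation) of how a single Lidar scan of (angle, time-of-flight) readings becomes (x, y) points and then occupied cells in a grid. The function names, scan values, and dict-based grid are all made up for illustration:

```python
import math

SPEED_OF_LIGHT = 299_792_458  # metres per second

def tof_to_range(tof_seconds: float) -> float:
    """Distance is half the round-trip time multiplied by the speed of light."""
    return SPEED_OF_LIGHT * tof_seconds / 2

def scan_to_points(scan, robot_x, robot_y, robot_heading):
    """Convert (angle, time-of-flight) Lidar readings into world (x, y) points."""
    points = []
    for angle_rad, tof in scan:
        r = tof_to_range(tof)
        points.append((
            robot_x + r * math.cos(robot_heading + angle_rad),
            robot_y + r * math.sin(robot_heading + angle_rad),
        ))
    return points

def stamp_occupancy(grid, points, cell_size=0.05):
    """Mark each point's cell as occupied in a dict-based occupancy grid."""
    for x, y in points:
        grid[(int(x // cell_size), int(y // cell_size))] = 1
    return grid

# A made-up scan: two readings at 0 and 90 degrees, roughly 1 m and 2 m away.
scan = [(0.0, 6.67e-9), (math.pi / 2, 1.33e-8)]
grid = stamp_occupancy({}, scan_to_points(scan, 0.0, 0.0, 0.0))
print(grid)
```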
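The Kalman-filter estimation mentioned under "What is SLAM?" can also be shown in miniature. The following is a hypothetical one-dimensional example, separate from anything Viam or Cartographer does: a noisy range sensor repeatedly measures the distance to a wall, and the filter blends each reading into a running estimate.

```python
def kalman_1d(measurements, estimate=0.0, variance=1.0,
              process_var=0.01, sensor_var=0.25):
    """A one-dimensional Kalman filter: predict, then correct with each reading."""
    for z in measurements:
        variance += process_var            # predict: uncertainty grows
        gain = variance / (variance + sensor_var)
        estimate += gain * (z - estimate)  # correct: blend in the measurement
        variance *= (1 - gain)             # uncertainty shrinks after correction
        print(f"measured {z:.2f} -> estimate {estimate:.2f}")
    return estimate

# Noisy readings of a wall that is really 2.0 m away.
kalman_1d([2.3, 1.9, 2.1, 2.05, 1.95])
```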
## How Viam works

Viam runs on the robot itself (a Raspberry Pi, in this project) and is configured through the web-based Viam app: components such as the Lidar are added on the Config tab, services such as the SLAM Cartographer are layered on top of them, and the robot is driven from the Control tab. The rest of this article walks through exactly that process.

## What is the SLAM Process?

The SLAM process can be broken down into four main steps: pose estimation, feature matching, loop closure, and bundle adjustment. Each is described below.

## What is Pose estimation?

Pose estimation is the process of estimating the position and orientation of an object in 3D space. It uses a combination of computer vision and machine learning techniques to determine the 3D position of an object from an image or video. Pose estimation can be used to recognize objects and estimate their poses in a scene, enabling applications such as augmented reality, robotics, and virtual reality. The process typically involves detecting features in the image or video, such as keypoints or edges, and then using machine learning techniques to identify the object and estimate its pose. It can also be used to estimate the pose of a person in a video, enabling applications such as gesture recognition and tracking.

## What is Feature matching?

Feature matching is a key component of SLAM. It involves matching features between images taken from different locations and orientations in order to build a map. Features are extracted from one image and then found again in other images by comparing properties such as intensity, colour, shape, and texture. Once the features are matched, the pose (location and orientation) of the camera can be estimated, and by combining this information over time, the SLAM algorithm can build a map of the environment. Optical computer mice use the same method to track the motion of the mouse.
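As an illustration of feature matching (separate from anything Viam does internally), here is a short OpenCV sketch that detects ORB keypoints in two overlapping images and matches them; `frame1.png` and `frame2.png` are placeholder filenames.

```python
import cv2

# Load two overlapping views as greyscale images (placeholder filenames).
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors with ORB.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force match the descriptors; cross-checking keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")

# The matched keypoint pairs are what a SLAM front end would feed
# into pose estimation.
for m in matches[:5]:
    print(kp1[m.queryIdx].pt, "->", kp2[m.trainIdx].pt)
```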
## What is Loop closure?

Loop closure in SLAM is the process of recognizing when a robot has returned to a previously visited location. This enables the robot to map its environment more accurately and improve its navigation capabilities: by recognizing a previously visited area, the robot can better understand the layout of the environment and determine its own location. Loop closure also corrects drift, where small inaccuracies in sensors such as the IMU and wheel odometry build up over time, causing the pose estimate to place the robot in the wrong position so that it appears to drift around the map.

## What is bundle adjustment?

Bundle adjustment in SLAM is the process of refining the estimated camera poses and 3D point locations of a scene by minimising the reprojection error: the difference between where the estimated 3D points project onto the image and where they were actually observed. The camera poses and 3D points are adjusted in a least-squares sense, and the process is iterated until the reprojection error is minimised, yielding the best-fit solution.

## What is Cubie-1?

I created Cubie-1 with SLAM and navigation in mind. Cubie has a Slamtec RPLidar A1 mounted on top and a Raspberry Pi 4 inside. The Raspberry Pi runs Viam, and the Lidar is connected to it via USB. Cubie is powered by a USB power bank.

Cubie-1 also has a GY-521 IMU sensor, mounted on the internal shelf at the top of the robot and connected to the Raspberry Pi via I2C.

## How to Set up SLAM in Viam

To set up SLAM in Viam, we need an existing robot project. If you don't have one, you can create one by following the Viam Getting Started Guide. Once you have a robot project, you will also need a supported Lidar sensor. I chose the Slamtec RPLidar A1; these often come with a USB adapter, making it easy to connect to the Raspberry Pi.

## How to Add a RPLidar to Viam

1. From the Config tab, select the Components subtab
2. Select the Add Component button
3. Select RPLidar A1 from the list of Cameras
4. Give the sensor a name, such as RPLidar
5. Click the Save config button

There are no attributes that need to be configured.

## How to Add SLAM Cartographer to Viam

1. From the Config tab, select the Services subtab
2. Select the Add Component button
3. Select SLAM Cartographer from the list of services
4. Click the Save config button

## Note about Data Management

Note that Data Management sends data to the Cartographer; this will eventually incur a cost if you leave it running indefinitely, so make sure you disable Data Management (from the Cartographer service) when you are not using it.

## How to Configure the SLAM Cartographer

1. From the Config tab, select the Services subtab
2. Select the SLAM Cartographer service
3. Change the Mapping mode to Create new map
4. Change the Camera to RPLidar (or whatever you have called the Lidar)
5. Click the Save config button

## How to Start the SLAM Cartographer

1. From the Control tab, select the Cartographer component and click the Start Session button
2. Give the map a name, such as My Map
3. Move the robot around the environment until you have mapped the entire area
4. Click the Stop Session button
5. From the Config tab, scroll to the RPLidar component
6. Click the Off button to stop the RPLidar component's Data Capture Configuration

You can now change the Cartographer Mapping mode to Localize only and select the map you just created.

## Viewing the Lidar map

From the Control tab, open the Cartographer component. You will see a map with a red arrow showing the location and orientation of your robot. Use the control keys to move the robot around the map. You can also read the Lidar and the SLAM pose programmatically with the Viam Python SDK, as sketched below.

Visit the Viam Documentation for more information on how to use Viam.
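To close, here is a minimal sketch of reading the Lidar and the SLAM pose from Python. This assumes the Viam Python SDK (`pip install viam-sdk`); the robot address and API-key values are placeholders you would copy from the Code sample tab in the Viam app, and the component and service names (`RPLidar`, `slam`) must match whatever you configured above. Exact method names can vary between SDK versions, so treat this as a starting point rather than a definitive implementation.

```python
import asyncio

from viam.robot.client import RobotClient
from viam.components.camera import Camera
from viam.services.slam import SLAMClient

async def main():
    # Connect to the robot; the address and API key are placeholders,
    # copied from the Viam app in a real project.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>"
    )
    robot = await RobotClient.at_address("<ROBOT-ADDRESS>", opts)

    # The RPLidar is modelled as a camera component that returns point clouds.
    lidar = Camera.from_robot(robot, "RPLidar")
    pcd_bytes, mime_type = await lidar.get_point_cloud()
    print(f"Lidar returned {len(pcd_bytes)} bytes ({mime_type})")

    # Ask the SLAM service where Cartographer thinks the robot is.
    slam = SLAMClient.from_robot(robot, "slam")
    pose = await slam.get_position()
    print(f"Robot pose: x={pose.x}, y={pose.y}, theta={pose.theta}")

    await robot.close()

asyncio.run(main())
```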