Video

Watch the associated video here:


About this project

In this project, we will use Viam to map a room using SLAM, and then use the map to navigate the room.

This project also provides upgrades to the Cubie-1 robot, including a new 3D printed shelf for the Motor Drivers and IMU.


What is Viam and SLAM?

Viam is an easy-to-use robotics platform that provides simple software building blocks and web-based tools for building machine learning models, computer vision systems, and navigation systems using SLAM. Viam can run on a Raspberry Pi (model 3 and up) or on a desktop computer.


A map created with Viam and a Lidar sensor

Viam SLAM


Lidar (Light Detection and Ranging) is a remote sensing technology that measures the distance to an object by emitting a laser light and then measuring the amount of time it takes for the light to return after bouncing off the object.

Lidar uses a sensor to measure the time of flight (TOF) of the laser pulses and then calculates the distance to the object that it has bounced off of. Lidar can be used to measure distances to objects in the air, on land, and underwater. It’s most commonly used for mapping and navigation, but can also be used for 3D imaging and object detection.
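
To make the arithmetic concrete, here is a minimal Python sketch of the time-of-flight calculation. The 20 ns example reading is made up for illustration:

```python
# Time-of-flight: the pulse travels to the object and back, so the
# one-way distance is half the round-trip time times the speed of light.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance in metres for a measured laser round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# Example: a 20 nanosecond round trip is roughly 3 metres.
print(tof_distance(20e-9))  # ~2.998
```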


Lidar information


What is SLAM?

SLAM stands for Simultaneous Localization and Mapping. It is a technique used by robots and autonomous vehicles to build a map of an unknown environment while simultaneously keeping track of their current location within that map. Many SLAM algorithms are built around the Kalman filter, a mathematical algorithm that uses noisy sensor measurements to produce a good estimate of the state of a system. In the case of SLAM, the system is the robot, and the state includes the robot’s location and the locations of landmarks in the environment.
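
As an illustration of the idea, here is a one-dimensional Kalman filter in Python: a single state variable is repeatedly refined from noisy readings. Real SLAM filters track many state variables at once, but the predict/update structure is the same.

```python
def kalman_update(x_est, p_est, measurement, process_var, meas_var):
    """One predict/update cycle of a 1D Kalman filter.

    x_est, p_est: previous state estimate and its variance
    measurement:  new (noisy) sensor reading
    process_var:  how much the true state may have drifted since last step
    meas_var:     variance of the sensor noise
    """
    # Predict: the state is assumed unchanged, but our uncertainty grows.
    p_pred = p_est + process_var
    # Update: blend prediction and measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + meas_var)
    x_new = x_est + k * (measurement - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Example: noisy range readings of a wall that is really 2.0 m away.
x, p = 0.0, 1.0
for z in [2.1, 1.9, 2.05, 1.95]:
    x, p = kalman_update(x, p, z, process_var=0.01, meas_var=0.1)
print(x)  # converges towards 2.0
```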

SLAM uses a Lidar sensor to capture a 2D map of the environment. The Lidar sensor is mounted on the robot, and as the robot moves around the environment, the Lidar sensor captures a series of scans. Each scan is a 2D point cloud, which is a set of points in the form of (x, y) coordinates. The SLAM algorithm uses these scans to build a map of the environment, and to locate the robot within the map.
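
Converting a scan into (x, y) points is plain polar-to-Cartesian trigonometry. A small sketch, assuming each scan arrives as (angle in degrees, range in metres) pairs:

```python
import math

def scan_to_points(scan):
    """Convert one lidar scan of (angle_degrees, range_metres) pairs
    into (x, y) points in the robot's frame."""
    points = []
    for angle_deg, dist in scan:
        theta = math.radians(angle_deg)
        points.append((dist * math.cos(theta), dist * math.sin(theta)))
    return points

# Example: four readings at 90-degree intervals, all 1 m away.
print(scan_to_points([(0, 1.0), (90, 1.0), (180, 1.0), (270, 1.0)]))
```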


How SLAM works

SLAM (Simultaneous Localization and Mapping) is a collection of algorithms used in robotics for navigation and mapping. It works by using Lidar, sonar, and other sensor data to construct a map of the environment, and then using this map to localize the robot within it.

LIDAR (Light Detection and Ranging) is a sensing technology that uses lasers to measure distances to nearby objects by timing how long it takes for the laser to return after being emitted.

The Lidar data is used to construct a point cloud of the environment, which is then used to build an occupancy grid map. The occupancy grid map is then used to localize the robot and navigate it through the environment. SLAM algorithms can also draw on other sensor data, such as inertial measurements and camera images, to improve the accuracy and reliability of the mapping and localization process.
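
A toy version of the occupancy-grid step is sketched below: cells containing a lidar return are marked occupied. Real implementations also ray-trace the free space between the sensor and each hit and use probabilistic (log-odds) updates; the grid size and 5 cm resolution here are arbitrary choices.

```python
def build_occupancy_grid(points, size=100, resolution=0.05):
    """Mark grid cells that contain at least one lidar point as occupied.

    points:     (x, y) coordinates in metres, robot at the grid centre
    size:       grid is size x size cells
    resolution: metres per cell (5 cm here)
    """
    grid = [[0] * size for _ in range(size)]
    half = size // 2
    for x, y in points:
        col = int(x / resolution) + half
        row = int(y / resolution) + half
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1  # occupied
    return grid

grid = build_occupancy_grid([(0.5, 0.0), (0.0, -1.2)])
print(sum(map(sum, grid)))  # 2 occupied cells
```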

The SLAM algorithm starts by generating an initial map of the environment and then uses data from the sensors to refine the map. SLAM algorithms can also localize the robot in the environment by tracking its motion and comparing it to the map. SLAM algorithms are a powerful tool for navigation and can be used in many applications, such as self-driving cars, robotics, and augmented reality.


How Viam works

Viam High Level diagram


What is the SLAM Process?

SLAM Process diagram

What is Pose estimation?

Pose estimation is the process of estimating the position and orientation of an object in 3D space. It uses a combination of computer vision and machine learning techniques to determine the 3D position of an object from an image or video.

Pose estimation can be used to recognize objects and estimate their poses in a scene, allowing for applications such as augmented reality, robotics, and virtual reality.

The process typically involves using algorithms to detect features in the image or video, such as keypoints or edges, and then using machine learning techniques to identify the object and estimate its pose. It can also be used to estimate the pose of a person in a video, allowing for applications such as gesture recognition and tracking.

Pose Estimation diagram
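
As a concrete, simplified example, OpenCV’s solvePnP estimates an object’s pose from known 3D-to-2D point correspondences. All the numbers below (object corners, detected pixels, camera intrinsics) are invented for the demo:

```python
import numpy as np
import cv2

# Known 3D positions of four object corners (metres, object frame)...
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float32)

# ...and where those corners were detected in the image (pixels).
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [418.0, 342.0],
    [322.0, 344.0],
], dtype=np.float32)

# Pinhole camera intrinsics (focal length and principal point, in pixels).
camera_matrix = np.array([
    [600.0, 0.0, 320.0],
    [0.0, 600.0, 240.0],
    [0.0, 0.0, 1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print(rvec, tvec)  # object orientation (Rodrigues vector) and position
```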


What is Feature matching?

Feature matching is a key component of SLAM. It essentially involves matching features between images taken from different locations and orientations to create a map. Feature matching involves extracting features from an image and then finding the same features in other images. This is done by comparing features such as intensity, color, shape, and texture. Once the features are matched, the pose or location of the camera can be estimated. By combining this information over time, the SLAM algorithm can build a map of the environment.

Optical computer mice use the same method to track their own motion across a surface.

Features Mapping diagram
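
A minimal feature-matching sketch using OpenCV’s ORB detector and a brute-force matcher. The image file names are placeholders, and a real SLAM front end would follow the matching with outlier rejection (for example RANSAC) before estimating the camera pose:

```python
import cv2

# Load two overlapping images (paths are placeholders).
img1 = cv2.imread("scan1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors for each image.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force match descriptors; Hamming distance suits ORB's binary output.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matched features")
```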


What is Loop closure?

Loop closure in SLAM is the process of recognizing when a robot has returned to a previously visited location. This enables the robot to more accurately map its environment and improve its navigation capabilities. By recognizing a previously visited area, the robot can more accurately understand the layout of the environment and accurately determine its location.

This process helps prevent drift, where small inaccuracies in sensors such as the IMU and odometry build up over time and cause the pose estimate to position the robot incorrectly, so that it appears to drift around the map.
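
At its simplest, a loop-closure detector flags poses that are far apart in time but close together in space. Production systems compare scan or image descriptors rather than trusting the (drifting) pose estimate alone, but a toy sketch with illustrative thresholds shows the shape of the check:

```python
import math

def loop_closure_candidates(trajectory, min_gap=50, max_dist=0.5):
    """Flag past poses that are far apart in time but close in space.

    trajectory: list of (x, y) pose estimates, oldest first
    min_gap:    ignore recent poses (they are always nearby)
    max_dist:   metres; how close counts as "the same place"
    """
    candidates = []
    current = trajectory[-1]
    for i, past in enumerate(trajectory[:-min_gap]):
        if math.dist(current, past) < max_dist:
            candidates.append(i)
    return candidates

# Example: a square path that returns near its starting point.
path = [(0, 0), (5, 0), (5, 5), (0, 5), (0.2, 0.1)]
print(loop_closure_candidates(path, min_gap=2, max_dist=0.5))  # -> [0]
```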


What is bundle adjustment?

Bundle adjustment in SLAM is a process of refining the estimated camera poses and point locations of a scene by minimising the reprojection error: the difference between where the estimated 3D points project onto the image and where they were actually observed. This is done by adjusting the camera poses and 3D points in a least-squares sense. The goal is to optimise the estimates of the camera poses and 3D points to obtain the best-fit solution. This is an iterative process that is repeated until the reprojection error is minimised.
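
The sketch below shows the shape of that optimisation on a deliberately tiny problem: two cameras observing four points, with rotations fixed to the identity and the first camera pinned at the origin, and SciPy’s least_squares doing the minimising. Real bundle adjustment also optimises rotations and exploits the sparsity of the problem:

```python
import numpy as np
from scipy.optimize import least_squares

F, CX, CY = 600.0, 320.0, 240.0   # assumed pinhole intrinsics (pixels)
N_PTS = 4                          # a deliberately tiny scene

def project(cam_t, pt):
    """Project a 3D point through a pinhole camera at translation cam_t
    (rotations are fixed to the identity to keep the sketch small)."""
    x, y, z = pt - cam_t
    return np.array([F * x / z + CX, F * y / z + CY])

def residuals(params, observations):
    """Stacked reprojection errors. The first camera is pinned at the
    origin; params holds the second camera's translation, then the points."""
    cams = np.vstack([np.zeros(3), params[:3]])
    pts = params[3:].reshape(N_PTS, 3)
    errs = []
    for cam_idx, pt_idx, uv in observations:
        errs.extend(project(cams[cam_idx], pts[pt_idx]) - uv)
    return errs

# Synthesise observations from a made-up ground truth for the demo.
true_cams = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
true_pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0],
                     [0.0, 1.0, 6.0], [1.0, 1.0, 6.0]])
obs = [(c, p, project(true_cams[c], true_pts[p]))
       for c in range(2) for p in range(N_PTS)]

# Start from a perturbed guess and iteratively minimise reprojection error.
x0 = np.concatenate([true_cams[1], true_pts.ravel()]) + 0.1
result = least_squares(residuals, x0, args=(obs,))
print(result.cost)  # driven towards zero as the estimates are refined
```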


What is Cubie-1?

I created Cubie-1 with SLAM and navigation in mind. Cubie has a Slamtec RPLidar A1 mounted on top, and a Raspberry Pi 4 inside. The Raspberry Pi runs Viam, and the Lidar is connected to the Raspberry Pi via USB. Cubie is powered by a USB power bank.

Viam SLAM

Cubie-1 also has a GY-521 IMU sensor, which is mounted on the top of the robot, on the internal shelf. The IMU sensor is connected to the Raspberry Pi via I2C.

Features Mapping diagram
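
Viam normally reads an IMU through a movement-sensor component, but it can be handy to sanity-check the wiring by talking to the GY-521’s MPU-6050 chip directly over I2C. A sketch assuming the smbus2 library and the chip’s default 0x68 address:

```python
from smbus2 import SMBus

MPU_ADDR = 0x68       # default I2C address of the GY-521's MPU-6050 chip
PWR_MGMT_1 = 0x6B     # power management register
ACCEL_XOUT_H = 0x3B   # first of the accelerometer data registers

def read_word(bus, reg):
    """Read a signed 16-bit big-endian value from two registers."""
    high = bus.read_byte_data(MPU_ADDR, reg)
    low = bus.read_byte_data(MPU_ADDR, reg + 1)
    value = (high << 8) | low
    return value - 65536 if value > 32767 else value

with SMBus(1) as bus:                             # I2C bus 1 on a Raspberry Pi
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake the chip from sleep
    ax = read_word(bus, ACCEL_XOUT_H) / 16384.0   # +/-2g range -> g units
    ay = read_word(bus, ACCEL_XOUT_H + 2) / 16384.0
    az = read_word(bus, ACCEL_XOUT_H + 4) / 16384.0
    print(ax, ay, az)
```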


How to Set up SLAM in Viam

To set up SLAM in Viam, we need an existing robot project. If you don’t have one, you can create one by following the Viam Getting Started Guide.

Once you have a robot project, you will also need a supported Lidar sensor. I chose the Slamtec RPLidar A1. These often come with a USB connector, making it easy to connect to the Raspberry Pi.


How to Add a RPLidar to Viam

  1. From the Config tab, select the Components subtab
  2. Select the Add Component button
  3. Select RPLidar A1 from the list of cameras
  4. Give the sensor a name, such as RPLidar
  5. Click the Save config button

There are no attributes that need to be configured.

RPLidar Screen
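
Once the component is saved, you can verify it is serving data from Viam’s Python SDK by reading a point cloud (the RPLidar is exposed as a camera component). The address and API key below are placeholders; your robot’s Code Sample tab shows the exact, up-to-date connection boilerplate for your SDK version:

```python
import asyncio

from viam.components.camera import Camera
from viam.robot.client import RobotClient

async def main():
    # Placeholders: copy the real values from your robot's Connect tab.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>", api_key_id="<API-KEY-ID>")
    robot = await RobotClient.at_address("<ROBOT-ADDRESS>", opts)

    # The RPLidar appears as a camera component that serves point clouds.
    lidar = Camera.from_robot(robot, "RPLidar")
    pcd_bytes, mime_type = await lidar.get_point_cloud()
    print(mime_type, len(pcd_bytes), "bytes")

    await robot.close()

asyncio.run(main())
```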


How to Add SLAM Cartographer to Viam

  1. From the Config tab, select the Services subtab
  2. Select the Add Service button
  3. Select SLAM Cartographer from the list of services
  4. Click the Save config button

Note about Data Management

Notice that the Data Management service sends data to the Cartographer. This will eventually incur a cost if you leave it running indefinitely, so make sure you disable Data Management when you are not using it (from the Cartographer service).

RPLidar Screen


How to Configure the SLAM Cartographer

  1. From the Config tab, select the Services subtab
  2. Select the SLAM Cartographer service
  3. Change the Mapping mode to Create new map
  4. Change the Camera to RPLidar (or whatever you have called the lidar)
  5. Click the Save config button

Cartographer Screen


How to Start the SLAM Cartographer

  1. From the Control tab, select the Cartographer component and click the Start Session button
  2. Give the map a name, such as My Map
  3. Move the robot around the environment until you have mapped the entire area
  4. Click the Stop Session button
  5. From the Config tab, scroll to the RPLidar component
  6. Click the Off button to stop the RPLidar component’s Data Capture configuration

You can now change the Cartographer Mapping mode to Localize only and select the map you just created.
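
In Localize only mode you can also query the robot’s pose from code. Here is a sketch using the Python SDK’s SLAM client, assuming a connected robot (as in the earlier snippet) and that the service was named slam; again, the Code Sample tab shows the exact API for your SDK version:

```python
from viam.services.slam import SLAMClient

async def where_am_i(robot):
    # "slam" is whatever name you gave the Cartographer service in Config.
    slam = SLAMClient.from_robot(robot, "slam")
    pose = await slam.get_position()  # the robot's pose on your saved map
    print(f"x={pose.x:.2f} y={pose.y:.2f} theta={pose.theta:.2f}")
```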


Viewing the Lidar map

  1. From the Control tab, open the Cartographer component
  2. You will see a map with a red arrow showing the location and orientation of your robot.
  3. Use the control keys to move the robot around the map.

Visit the Viam Documentation for more information on how to use Viam.


- Kevin McAleer