AI Hub Tampere Interview with Bharath Garigipati

Bharath Garigipati

AI Hub Tampere interviewed one of our own project employees, research assistant Bharath Garigipati, who has worked on SLAM projects with Tampere University and AI Hub. Let's congratulate Bharath for submitting his thesis in Automation and Robotics right before the holiday season. Here are some thoughts from the man himself.

What led you to your current studies?
While I was completing my bachelor’s degree in Mechatronics, courses such as
Computer vision and Mobile Robotics drew my interest in robotics. I started
searching for master’s programs in Automation and Robotics across
universities in Europe and found Tampere University’s Factory Automation and
Robotics program to be the ideal fit for me.

What do you find to be the most interesting recent developments in science?
Machine Learning-based methods have been quickly advancing to replace
traditional methods in robotics. For example, in my areas of research,
Simultaneous Localization and Mapping (SLAM) and mobile robot navigation,
there have been developments in end-to-end neural network-based methods
such as HF-Net, where the network directly predicts the pose of the robot from
the input image, as well as hybrid methods such as D3VO for pose estimation.
There are also end-to-end navigation methods such as BADGR, which skips the
SLAM estimation step entirely and navigates the robot towards the set goal
based on input images.

Bharath Garigipati showing a demo at the FIMA + FCAI workshop ‘AI for Mobile Work Machines’ in November 2021

Could you briefly explain the demos you have been making and what kind of
data they create?
The main focus of my work was testing various LIDAR and visual SLAM
methods and seeing how they compare against each other. To compare their
performance, we used a setup with two LIDARs (Velodyne VLP-16, Robosense
Bpearl), two stereo cameras (Intel RealSense T265, ZED stereo camera), and a
9-axis Xsens IMU, with ground truth reference trajectories generated from a
GNSS-, IMU-, and odometry-based EKF in outdoor cases and from an OptiTrack
motion tracking system in indoor cases. In the demo, we showed how the
robot's current pose can be localized against a map previously built during a
SLAM session.
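The core of such map-based localization is estimating the rigid transform that best aligns the current sensor observations with the stored map. A minimal sketch of that alignment step, using the standard Kabsch/SVD solution for a 2D rigid transform with known point correspondences (the landmark coordinates and pose below are made up for illustration; real systems such as the ones tested here must also solve data association and work in 3D):

```python
import numpy as np

def estimate_pose_2d(map_points, scan_points):
    """Find the rigid transform (R, t) with map ~= R @ scan + t, given
    known point correspondences, via the Kabsch/SVD method."""
    mu_m = map_points.mean(axis=0)
    mu_s = scan_points.mean(axis=0)
    H = (scan_points - mu_s).T @ (map_points - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_m - R @ mu_s
    return R, t

# Hypothetical map landmarks, observed from a robot pose rotated 30
# degrees and translated by (2, 1) relative to the map frame.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, 1.0])
map_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0], [2.0, 5.0]])
scan_pts = (map_pts - t_true) @ R_true   # landmarks expressed in the robot frame

R, t = estimate_pose_2d(map_pts, scan_pts)
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(yaw, t)  # recovers ~30 degrees and (2, 1)
```

With noisy scans and unknown correspondences, this same alignment step runs inside an iterative loop (ICP) or is seeded by place-recognition features, but the rigid-transform estimate above is the building block.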

What types of algorithms have you used?
Initially, we used Kalibr and MATLAB to generate the calibration parameters
for the stereo cameras. For testing the collected datasets, we used visual
SLAM methods such as ORB-SLAM3, Basalt VIO, Kimera-VIO, and SVO2, and
LIDAR-based methods including LOAM, LeGO-LOAM, LIO-SAM, and HDL Graph
SLAM.
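To illustrate what those calibration parameters are for: once the intrinsics and the stereo baseline are known, a matched pixel pair can be triangulated back into a 3D point. A minimal sketch for an ideal rectified stereo pair, using the disparity relation Z = f·b/d (the focal length, principal point, and baseline below are hypothetical values, not those of the actual rig):

```python
import numpy as np

# Hypothetical calibration output: identical pinhole intrinsics K for
# both cameras and a pure horizontal baseline between them.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
baseline = 0.12  # metres, from the extrinsic calibration

def triangulate(uv_left, uv_right):
    """Recover a 3D point (left-camera frame) from a matched pixel pair
    in a rectified stereo setup, using depth-from-disparity."""
    d = uv_left[0] - uv_right[0]   # disparity in pixels
    Z = fx * baseline / d          # depth: Z = f * b / d
    X = (uv_left[0] - cx) * Z / fx
    Y = (uv_left[1] - cy) * Z / fy
    return np.array([X, Y, Z])

# Project a known 3D point into both cameras, then triangulate it back.
P = np.array([0.5, -0.2, 3.0])
uv_l = (K @ P)[:2] / P[2]
uv_r = (K @ (P - np.array([baseline, 0.0, 0.0])))[:2] / P[2]
point = triangulate(uv_l, uv_r)
print(point)  # ~[0.5, -0.2, 3.0]
```

Real lenses also need the distortion coefficients that Kalibr and MATLAB estimate; those are applied to undistort the pixels before a relation this simple holds.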

What opportunities do you think SLAM can open for companies and
industries?
A SLAM system can provide an automated way to map unknown environments
and a localization solution that does not require any on-site markers; the robot
pose from SLAM can then be used for navigation. The generated maps can
also be used for object detection and, in the case of a known environment, for
monitoring changes.

What problems or threats do you think companies might face with SLAM?
The reliability of SLAM systems is still an open question; they require a lot of
testing and tuning in order to avoid unexpected failures.

What are your current goals and interests when it comes to your studies?
After studying and comparing visual and LIDAR SLAM systems and gaining
insight into how they work, my focus now is on finding ways to make these
systems better and more reliable.

What do you do in your free time?
In my free time, I like to go on hikes and do photography.

Thank you for the interview!