What led you to your current studies?
Well, my master’s was in computer applications, where I studied artificial intelligence, graphics, and image processing. I developed an interest in machine vision, and after my master’s I started working as a machine learning engineer. Over time, while working on various computer vision applications, I found my way into research. So my current studies are a product of the things I have been doing for the last 3-4 years.
What do you find to be the most interesting recent developments in science?
I feel that the most promising developments are in data science (modern machine learning and big data), better communication systems, and cheaper, more efficient computational resources. These advances are helping to solve harder problems in society and enabling new applications as well. In SLAM, too, we have seen successful applications of machine learning to visual odometry, radar and Lidar odometry, and semantic mapping over the last 5 years.
Could you briefly explain your work at Tampere University?
I am a Marie Sklodowska-Curie doctoral student supervised by Prof. Reza Ghabcheloo in the Autonomous Mobile Machines group at the Faculty of Engineering and Natural Sciences. I’m working on localization and mapping for heavy mobile machines in uneven terrain. Through my research I will address real-world industry problems by collaborating with industrial R&D departments. As part of my project, I will spend around 75% of my PhD time at Hiab AB and Volvo CE.
What types of software, algorithms and tools have you used?
I am working with imaging radars, IMUs, and Lidars using ROS and ROS 2. I write algorithms and tests in Python, Matlab, and C++, and I use libraries like PCL, SciPy, and the relevant Matlab toolboxes for data capture and processing. I also use PyTorch for neural network modelling and applications, and I have used various open-source algorithms for radar and Lidar ego-motion estimation and localization.
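As an illustration of the kind of point cloud preprocessing mentioned above, here is a minimal sketch of voxel-grid downsampling in Python with NumPy. This is a generic example of a standard technique (PCL provides it as its VoxelGrid filter), not code from the project itself; the function name and parameters are illustrative.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Downsample an (N, 3) point cloud by averaging the points in each voxel.

    A common preprocessing step before Lidar registration/odometry;
    this is a simplified analogue of PCL's VoxelGrid filter.
    """
    # Assign each point to a voxel by integer-flooring its coordinates
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and replace each group by its centroid
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True
    )
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)   # sum points per voxel
    return centroids / counts[:, None]      # mean point per voxel

# Example: 1000 random points in a 2 m cube reduced to at most 64 voxels
cloud = np.random.rand(1000, 3) * 2.0
down = voxel_downsample(cloud, voxel_size=0.5)
print(down.shape)
```

Averaging per voxel (rather than keeping one representative point) smooths sensor noise while bounding the number of points by the number of occupied voxels, which keeps downstream registration tractable.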
What opportunities do you think SLAM can open for companies and industries?
In my opinion, SLAM has evolved a lot in the last decade and has shown its potential for mobile robotic applications as well as augmented reality. It is very time-consuming and hard to record and maintain maps (like HD maps) for autonomous mobile machines and cars, which limits performance in the operational environment. SLAM can mitigate those issues, whether for a small mobile robot indoors or at home, or for heavy mobile machines on construction sites and in forestry. With the emergence of powerful edge computing devices, it is now possible to run SLAM in dynamic and semi-dynamic environments.
What problems or threats do you think companies face when working with SLAM?
Despite the significant mapping capabilities demonstrated in the last decade, I feel robustness is still a challenge; by robustness I mean how well a SLAM algorithm generalizes to a new scenario. Improved robustness would let machines perform long-term localization and mapping despite changes in the environment, including weather, visibility, and so on. There are also challenges related to failure tolerance and algorithm runtime, which determine, respectively, how resilient an algorithm is to sensor failure (in multi-sensor applications) and how efficiently it can be deployed.
My plan is to work on the robustness of SLAM in semi-dynamic and dynamic environments under adverse visibility and weather. I will also explore the capabilities and limitations of sensors like Lidars and imaging radars.