Each thesis can be completed either individually or by a maximum of 2 students. Theses are supervised by at least one senior researcher and are graded by professors.

Students can also engage in practical and hands-on project courses in a group of at least 2 students. Project courses are supervised and graded by at least one senior researcher. You can also check out some videos of Past Projects.

Theses and project courses will be listed periodically. If you’re interested, do reach out to the listed contact person with your CV and a brief statement of motivation.

BSc. Theses

1. Vision-Based Safe Reinforcement Learning

Research Roadmap: Simulation & Data Driven Modeling.

Supervisors (contact): Naeim Ebrahimi Toulkani (naeim.ebrahimitoulkani@tuni.fi), Prof. Reza Ghabcheloo.

Background: Safe Reinforcement Learning enables agents to learn effective control policies while respecting safety constraints, making it crucial for real-world robotics, where unsafe exploration can cause failures or damage.

Image of a simulation where a robot navigates through obstacles.

Your Tasks: 

  1. Implementation of three safe RL algorithms (e.g., CPO, PPO-Lag, TRPO-Lag) on three different robots (Point, Unicycle, and Car-like Robot) in Safety-Gymnasium.
  2. Reproducing the results of the target paper (GitHub).
  3. A detailed project report explaining methodology, results, and insights.
  4. A GitHub repository with reproducible code, setup instructions, and documentation.
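Two of the algorithms named in the tasks above (PPO-Lag, TRPO-Lag) are Lagrangian methods: alongside the usual policy update, they run dual ascent on a multiplier that penalizes constraint violations. A minimal, purely illustrative sketch of that dual update (the toy "policy response" line is a stand-in, not part of any real algorithm; real implementations live in libraries such as OmniSafe):

```python
# Sketch of the Lagrangian-multiplier update shared by PPO-Lag /
# TRPO-Lag style safe RL algorithms (illustrative only).

def lagrangian_update(lmbda, episode_cost, cost_limit, lr=0.05):
    """Dual ascent on the safety constraint: raise lambda when the
    measured episode cost exceeds the limit, shrink it (toward 0)
    otherwise. Lambda weights the cost penalty in the policy loss."""
    return max(0.0, lmbda + lr * (episode_cost - cost_limit))

# Toy loop: as lambda grows, a hypothetical policy trades return for
# safety, so its episode cost decays toward the limit.
lmbda, cost, cost_limit = 0.0, 5.0, 1.0
for _ in range(200):
    lmbda = lagrangian_update(lmbda, cost, cost_limit)
    cost = 5.0 / (1.0 + lmbda)   # stand-in for the policy's response

print(round(cost, 2), round(lmbda, 2))
```

The fixed point sits where the episode cost equals the cost limit, which is exactly the constraint the safe RL formulation asks the policy to satisfy.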

What you learn: 

  1. Fundamental concepts behind safe RL algorithms.
  2. Working with modern robotics libraries, APIs, and simulators (MuJoCo, Isaac Lab, Safety-Gymnasium, OmniSafe, etc.).
  3. Techniques for leveraging real-machine data to improve simulators and policies.

Skills required:

  1. Software Skills: Python, PyTorch, MuJoCo.
  2. Theoretical Skills: RL basics (Model-Based vs. Model-Free RL, Off-Policy vs. On-Policy RL, Offline vs. Online RL) and comfort with mathematics.

2. Imitation Learning for Excavation

Research Roadmap: Simulation & Data Driven Modeling.

Supervisors (contact): Prof. Reza Ghabcheloo (reza.ghabcheloo@tuni.fi).

Background: Autonomous control of excavators is a key enabler for improved productivity in earth-moving operations. Traditional controllers struggle with complex contact dynamics and varying terrain. In contrast, Imitation Learning, where the system learns directly from human demonstrations, offers a promising path.

An image of a real excavator and simulated excavator with a depiction of Imitation Learning.

Your tasks: 

  1. Record expert operator trajectories (real excavator or simulator) covering representative tasks: rock-picking, bucket digging, dumping.
  2. Implement Imitation Learning algorithms (e.g., behaviour cloning) to map observation → action (joint / velocity control) using the collected dataset.
  3. A detailed project report explaining methodology, results, and insights.
  4. A GitHub repository with reproducible code, setup instructions, and documentation.
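Behaviour cloning, named in task 2 above, reduces to supervised regression from observations to expert actions. A minimal one-dimensional sketch with a synthetic "expert" (a real pipeline would use PyTorch on the recorded excavator trajectories; all data here is made up):

```python
import random

# Behaviour cloning in 1-D: fit a linear policy a = w * obs + b to
# expert (observation, action) pairs by minimising mean squared
# error with plain gradient descent. Illustrative only.

random.seed(0)
# Hypothetical expert demonstrations: action = 2*obs + 1 plus noise.
demos = [(o, 2.0 * o + 1.0 + random.gauss(0, 0.01))
         for o in [i / 10 for i in range(-10, 11)]]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    gw = gb = 0.0
    for obs, act in demos:
        err = (w * obs + b) - act          # prediction vs expert action
        gw += 2 * err * obs / len(demos)
        gb += 2 * err / len(demos)
    w -= lr * gw
    b -= lr * gb

print(w, b)   # should approach the expert's w = 2, b = 1
```

The same structure carries over to the excavation task: the observation becomes joint states and sensor readings, the action becomes joint or velocity commands, and the linear model becomes a neural network.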

Learning outcomes: 

  1. Fundamental concepts of Imitation Learning and learning-from-demonstration in robotics.
  2. How to structure a robotics-ML research pipeline: from data through learning, to policy validation and reproduction.

Skills required:

  1. Python, PyTorch.
  2. Possibly ROS2 or control interfaces for data capture and real-time control.
  3. Robot simulation tools (AGX Dynamics, digital-twin setup).

3. High-Resolution Radar-Based SLAM

Research Roadmap: Perception & Sensing

Supervisors: Dr. Prashant Rai (prashant.rai@tuni.fi), Prof. Reza Ghabcheloo.

Background: Imaging radars are an important part of the autonomy sensor stack. Their ability to perceive in adverse visibility and weather conditions makes them especially useful for safe navigation, localization, and mapping.

A GIF image of radar data compared with camera data

Your tasks: 

  1. Understanding the fundamentals of automotive imaging radars and the data (we have some data collected).
  2. Setting up and running the open-source ROS package (based on a published paper, GitHub) on our data, and porting the ROS1 code to ROS2.
  3. A detailed report and a demonstration (optional).
  4. A GitHub repository with reproducible code, setup instructions, and documentation (optional).
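The odometry-estimation learning outcome boils down to accumulating scan-to-scan motion estimates into a global pose; radar odometry pipelines do exactly this with their per-frame ego-motion output. A minimal SE(2) sketch (the square-driving example is hypothetical):

```python
import math

# Dead-reckoning sketch: integrate incremental (dx, dy, dtheta)
# motions, expressed in the robot's body frame, into a global
# SE(2) pose (x, y, theta). Scan-matching odometry accumulates
# its per-frame estimates exactly this way.

def compose(pose, delta):
    """Compose a global pose with a body-frame motion increment."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Drive a square: four 1 m forward moves, each followed by a 90° turn.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = compose(pose, (1.0, 0.0, math.pi / 2))

print(pose)   # back at the origin, heading wrapped around by 2*pi
```

In a real pipeline each `delta` comes from registering consecutive radar scans, and SLAM adds loop closures to correct the drift that pure composition accumulates.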

Learning outcomes: 

  1. Fundamentals of automotive imaging radars.
  2. Working with the sensor in real-world scenarios.
  3. Odometry estimation and SLAM.

Skills required:

  1. Software Skills: Python, C++, ROS/ROS2.
  2. Willingness and interest to work with hardware (radar sensor and sensor box).

4. Environment Modeling for Safe Offroad Navigation

Research Roadmap: Safe Motion Control

Supervisors (contact): Dr. Miloš Prágr (milos.pragr@tuni.fi), Golnaz Raja, Prof. Reza Ghabcheloo.

Background: The goal is to support real mobile robot navigation through complex environments by building high-fidelity terrain models from outdoor RGB camera, LiDAR, or radar data.

Image of wheel loader and SLAM

Your tasks: 

  1. Set up a mapping module using LiDAR or RGB data (point-cloud fusion or learning-based, depending on the student’s interest), building on open-source SLAM systems (GitHub).
  2. Modify the selected module for estimation of heavy machine traversal ability.
  3. Couple the developed module to a simple path planner.
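Task 2's traversability estimation can be prototyped very simply: bin the mapped points into a 2-D grid and flag cells whose height spread exceeds what the machine can climb. A toy sketch (cell size and step threshold are illustrative placeholders, not tuned values):

```python
from collections import defaultdict

# Minimal traversability sketch: bin 3-D points into a 2-D grid and
# mark a cell traversable when its height spread (a crude roughness/
# step measure) stays under a limit a heavy machine can handle.

def traversability(points, cell=1.0, max_step=0.3):
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {key: (max(zs) - min(zs)) <= max_step
            for key, zs in cells.items()}

# Synthetic cloud: flat ground in one cell, a 0.5 m ledge next door.
cloud = [(0.2, 0.2, 0.0), (0.8, 0.8, 0.05),   # cell (0, 0): smooth
         (1.2, 0.2, 0.0), (1.8, 0.8, 0.5)]    # cell (1, 0): ledge

grid = traversability(cloud)
print(grid)   # {(0, 0): True, (1, 0): False}
```

A path planner (task 3) can then treat the resulting boolean grid as an occupancy map, searching only through cells marked traversable.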

Learning outcomes: 

  1. Work with real outdoor sensor data.
  2. Machine learning or data processing for real-time vision.
  3. Deployment for navigation on a real work machine.

Skills required:

  1. Software Skills: Python+PyTorch or C++ depending on selected module, ROS/ROS2.
  2. Interest in working with a real mobile robot.

5. Data-driven Hydraulic Modeling for Excavators

Research Roadmap: Simulation & Data Driven Modeling.

Supervisors (contact): Prof. Reza Ghabcheloo (reza.ghabcheloo@tuni.fi).

Background: Hydraulic systems are the primary actuation mechanism of excavators and exhibit strong non-linearities due to valve dynamics, pressure–flow relationships, friction, delays, and coupling with mechanical loads. Data-driven modeling leverages system identification and machine learning techniques to learn hydraulic behavior directly from measured data.
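As a toy version of the system identification mentioned above, one can fit a discrete first-order actuator model v[k+1] = a·v[k] + b·u[k] to logged velocity/valve-command data by least squares. The sketch below uses synthetic, noiseless data; real hydraulic behavior needs far richer model structure (deadband, delays, pressure coupling):

```python
# Toy data-driven modeling: identify a first-order discrete model
# v[k+1] = a*v[k] + b*u[k] from a logged command/velocity sequence
# by solving the 2x2 least-squares normal equations by hand.

true_a, true_b = 0.9, 0.5
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]  # commands
v = [0.0]
for uk in u:
    v.append(true_a * v[-1] + true_b * uk)               # simulated log

# Normal equations for theta = (a, b) minimising
# sum_k (v[k+1] - a*v[k] - b*u[k])^2.
svv = suu = svu = svy = suy = 0.0
for k in range(len(u)):
    svv += v[k] * v[k];  suu += u[k] * u[k];  svu += v[k] * u[k]
    svy += v[k] * v[k + 1];  suy += u[k] * v[k + 1]
det = svv * suu - svu * svu
a_hat = (suu * svy - svu * suy) / det
b_hat = (svv * suy - svu * svy) / det

print(a_hat, b_hat)   # recovers a ≈ 0.9, b ≈ 0.5
```

With real measurements the same regression is typically done with regularization and richer regressors (nonlinear valve maps, delayed inputs), but the estimation principle is identical.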

Image of an excavator

Your tasks: 

  1. Design a rock-capturing task in simulation (digital twin), including realistic contact, terrain, and sensor observations.
  2. Train reinforcement learning agents to control the excavator for rock capturing using suitable state, action, and reward definitions.
  3. Evaluate and analyze policy performance across varying rock properties and initial conditions.
  4. A detailed project report explaining methodology, results, and insights.
  5. A GitHub repository with reproducible code, setup instructions, and documentation.
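Task 2's "suitable state, action, and reward definitions" might, for rock capturing, combine a distance term, a capture bonus, and an effort penalty. A hypothetical sketch (every weight, signal name, and term here is a made-up placeholder, not a tuned design):

```python
import math

# Hypothetical reward shaping for a rock-capturing task: negative
# bucket-to-rock distance, a bonus once the rock is in the bucket,
# and a small action-magnitude penalty encouraging smooth motion.

def reward(bucket_xyz, rock_xyz, rock_in_bucket, action):
    dist = math.dist(bucket_xyz, rock_xyz)       # shaping term
    effort = sum(a * a for a in action)          # smoothness penalty
    return -1.0 * dist + (10.0 if rock_in_bucket else 0.0) - 0.01 * effort

far  = reward((0, 0, 0), (3, 4, 0), False, (0.5, 0.5))  # 5 m away
done = reward((3, 4, 0), (3, 4, 0), True, (0.0, 0.0))   # captured
print(far, done)
```

Evaluating such a design across varying rock properties and initial conditions (task 3) then amounts to checking that the learned policy's return correlates with actual task success, not just with the shaping terms.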

Learning outcomes: 

  1. Fundamentals of reinforcement learning for robotic manipulation.
  2. Reward design and state/action modeling for contact-rich tasks.
  3. Training and evaluating RL policies in simulation.
  4. Applying RL to large-scale robotic systems such as excavators.

Skills required:

  1. Python, PyTorch.
  2. Basic knowledge of control systems and robotics.
  3. Experience with reinforcement learning frameworks.
  4. Familiarity with robot simulation tools (e.g., AGX Dynamics, digital twins).

MSc. Theses & Project Courses

Will be updated soon...