This project implements a robot navigation system using the Tiago robot platform in Webots, featuring LIDAR-based mapping, configuration space computation, path planning algorithms, and autonomous navigation.
- ✅ LIDAR-based occupancy grid mapping implemented and functioning
- ✅ Probability-based mapping with grayscale representation working
- ✅ Configuration space computation through convolution implemented
- 🔄 Path planning algorithm (A*) implementation in progress
- ⬜ Autonomous navigation controller still needs implementation
- ⬜ Pick and place functionality not yet implemented
Lab 5/
├── controllers/ # Tiago robot controllers
│ ├── lab5_controller2/
│ │ ├── lab5_controller2.py # main controller file
│ │ └── map.npy # map obtained by our manual LIDAR scan after filtering
│ └── lab5_joint/
├── images/
│ ├── CSpace.png
│ └── LIDAR_Map.png
├── worlds/ # Webots world files
└── ... # misc files
The image above shows the occupancy grid map our robot generated from LIDAR data. White areas represent free space, while darker pixels indicate detected obstacles.
The image above shows the configuration space after dilating obstacles to account for the robot's physical dimensions. This ensures paths maintain safe distances from obstacles.
- LIDAR-based occupancy grid mapping in real-time
- Probabilistic sensor model with grayscale representation
- Map filtering and saving capabilities
- Coordinate transformation from robot to world frame
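The mapping steps above can be sketched as follows. This is a minimal illustration, not the lab's actual code: the beam sign convention, probability increment, and function names are assumptions.

```python
import math
import numpy as np

def lidar_to_world(rho, alpha, pose_x, pose_y, pose_theta):
    # Beam endpoint in the robot frame (x forward, y left);
    # the sign of alpha depends on the sensor's mounting convention.
    rx = rho * math.cos(alpha)
    ry = rho * math.sin(alpha)
    # Rotate by the robot heading, then translate by the robot position.
    wx = rx * math.cos(pose_theta) - ry * math.sin(pose_theta) + pose_x
    wy = rx * math.sin(pose_theta) + ry * math.cos(pose_theta) + pose_y
    return wx, wy

def update_cell(prob_map, mx, my, inc=5e-3):
    # Probabilistic update: repeated hits brighten a cell toward 1.0,
    # which produces the grayscale representation when rendered.
    prob_map[mx, my] = min(1.0, prob_map[mx, my] + inc)
```

Each LIDAR hit is transformed into the world frame, mapped to a grid cell, and its occupancy probability nudged upward, so persistent obstacles accumulate confidence while transient noise stays faint.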
- Dilation of obstacles using convolution
- Binary thresholding for obstacle classification
- Ensures planned paths keep safe distance from obstacles
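A minimal sketch of the convolution-based dilation, assuming SciPy is available; the kernel size (robot radius in pixels) and the 0.5 threshold are illustrative values, not the project's exact parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def compute_cspace(prob_map, robot_radius_px=10):
    # Binary thresholding: treat cells above 0.5 probability as obstacles.
    binary = (np.asarray(prob_map) > 0.5).astype(np.uint8)
    # Square structuring element sized to the robot's footprint.
    kernel = np.ones((2 * robot_radius_px + 1, 2 * robot_radius_px + 1))
    # Convolving and re-thresholding dilates every obstacle by the radius,
    # so a path through free cells keeps the robot clear of walls.
    dilated = convolve2d(binary, kernel, mode='same')
    return (dilated > 0).astype(np.uint8)
```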
- Implementation of graph search algorithms (A* or Dijkstra's)
- Conversion between world coordinates and map coordinates
- Generation of waypoints for robot navigation
- Visualization of planned paths
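The planner could be sketched as a standard A* over the configuration-space grid with 8-connected moves and a Euclidean heuristic; this is a generic implementation, not necessarily the one in lab5_controller2.py.

```python
import heapq
import math
from itertools import count

def astar(grid, start, goal):
    """Plan a path on a 2D grid where nonzero cells are obstacles.
    start/goal are (row, col) tuples; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.dist(p, goal)  # Euclidean heuristic
    tie = count()                     # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), start, None)]
    g_cost = {start: 0.0}
    came_from = {}
    while open_set:
        _, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded via a cheaper route
            continue
        came_from[cur] = parent
        if cur == goal:               # walk parent links back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nxt = (cur[0] + dx, cur[1] + dy)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                if grid[nxt[0]][nxt[1]]:   # nonzero = obstacle in C-space
                    continue
                ng = g_cost[cur] + math.hypot(dx, dy)
                if ng < g_cost.get(nxt, math.inf):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, cur))
    return None  # no path found
```

The returned grid cells would then be converted back to world coordinates to serve as navigation waypoints.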
- Proportional control for waypoint navigation
- Error calculation for distance and heading
- Smooth velocity adjustments
- Wheel speed normalization
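The proportional controller described above might look like this; the gains and the wheel-speed limit are assumed values, not the tuned constants from the controller.

```python
import math

MAX_SPEED = 6.28  # assumed motor speed limit (rad/s)

def feedback_control(pose_x, pose_y, pose_theta, wp_x, wp_y,
                     k_rho=1.0, k_alpha=4.0):
    # Distance error to the current waypoint.
    rho = math.hypot(wp_x - pose_x, wp_y - pose_y)
    # Heading error, wrapped into [-pi, pi] for smooth turning.
    alpha = math.atan2(wp_y - pose_y, wp_x - pose_x) - pose_theta
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))
    # Proportional control: forward speed from rho, turn rate from alpha.
    v, w = k_rho * rho, k_alpha * alpha
    vl, vr = v - w, v + w
    # Wheel speed normalization: scale so neither wheel exceeds the limit.
    m = max(abs(vl), abs(vr))
    if m > MAX_SPEED:
        vl, vr = vl * MAX_SPEED / m, vr * MAX_SPEED / m
    return vl, vr
```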
- Direct control of robot through keyboard input
- Real-time mapping during manual exploration
- Saving/loading map functionality
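Saving and loading would follow the map.npy convention shown in the project tree; the 0.5 filtering threshold below is an assumption.

```python
import numpy as np

def save_filtered_map(prob_map, path='map.npy', thresh=0.5):
    # Filter: keep only confident obstacle cells, stored as a binary map.
    np.save(path, (np.asarray(prob_map) > thresh).astype(np.uint8))

def load_map(path='map.npy'):
    return np.load(path)
```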
- Loads saved map from disk
- Computes configuration space
- Plans path between specified start and goal points
- Saves planned path for autonomous execution
- Loads pre-planned path
- Navigates through waypoints using feedback controller
- Real-time obstacle detection and avoidance
- Advanced mode combining navigation with manipulation
- Path planning to reach objects
- Robot arm control for pick and place operations
The system implements a complete perception-planning-action loop:
- Perception: LIDAR data is processed to create and update an occupancy grid map
- Planning: Configuration space is computed and paths are planned using search algorithms
- Action: Robot navigates through waypoints using a feedback controller
The map is represented as a 360×360 grid corresponding to a 12×12 meter environment, with each pixel representing a 3.33×3.33 cm area.
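Given that resolution, converting between world and map coordinates reduces to a scale and an offset. The sketch below assumes the world origin sits at the map center, which is a convention choice, not something stated in the project.

```python
def world_to_map(wx, wy, map_size=360, world_dim=12.0):
    # 360 px / 12 m = 30 pixels per meter.
    res = map_size / world_dim
    mx = int((wx + world_dim / 2) * res)
    my = int((wy + world_dim / 2) * res)
    # Clamp to valid grid indices so boundary readings stay on the map.
    clamp = lambda v: min(max(v, 0), map_size - 1)
    return clamp(mx), clamp(my)
```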
- Select operation mode by modifying the `mode` variable:

```python
mode = 'manual'      # For manual control and mapping
mode = 'planner'     # For path planning
mode = 'autonomous'  # For autonomous navigation
mode = 'picknplace'  # For pick and place operations
```

