# Simulating Sensors: LiDAR, Depth Cameras, and IMUs
A robot's perception of the world depends entirely on its sensors. In this section, we will learn how to simulate these sensors in Gazebo and Unity to generate synthetic data for our AI models.
## LiDAR (Light Detection and Ranging)
A LiDAR sensor measures distance by emitting laser pulses and timing their reflections, typically sweeping a full 360 degrees to build a scan of the surrounding environment.
- Topic: `/scan` (`LaserScan`) or `/points` (`PointCloud2`).
- Usage: SLAM and obstacle avoidance (see the subscriber sketch below).
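
To see what consuming this data looks like, here is a minimal sketch of an rclpy node that subscribes to `/scan` and logs the closest valid return. The topic name follows the convention above; the node name and QoS depth are illustrative assumptions.

```python
# Minimal LiDAR subscriber sketch (ROS 2 / rclpy).
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class LidarListener(Node):
    def __init__(self):
        super().__init__('lidar_listener')
        self.subscription = self.create_subscription(
            LaserScan, '/scan', self.scan_callback, 10)

    def scan_callback(self, msg):
        # Filter out invalid returns (inf/NaN, out-of-range) before
        # taking the minimum.
        valid = [r for r in msg.ranges
                 if math.isfinite(r) and msg.range_min <= r <= msg.range_max]
        if valid:
            self.get_logger().info(f'Closest obstacle: {min(valid):.2f} m')


def main():
    rclpy.init()
    node = LidarListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```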
## Depth Cameras (RGB-D)
Cameras like the Intel RealSense or Microsoft Kinect provide both color (RGB) and depth (D) information.
- RGB Image: Standard video feed for object recognition.
- Depth Map: A single-channel image where each pixel value encodes the distance to the nearest surface (often visualized as grayscale).
```python
# Accessing depth data in Python (a callback inside an rclpy node)
def depth_callback(self, msg):
    # Convert the ROS Image message to an OpenCV image; 'passthrough'
    # keeps the native depth encoding (e.g. 32FC1, in meters).
    cv_image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    # Read the distance at the center pixel, indexed as [row, column]
    # (240, 320 is the center of a 640x480 image).
    center_dist = cv_image[240, 320]
    self.get_logger().info(f'Distance to object: {center_dist:.2f} m')
```
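
The callback above assumes the node already holds a `CvBridge` instance (`self.bridge`) and a subscription to the depth topic. Here is a minimal sketch of that surrounding node; the topic name `/camera/depth/image_raw` and the node name are assumptions, so check what your simulator actually publishes.

```python
# Minimal depth-camera subscriber sketch (ROS 2 / rclpy + cv_bridge).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthListener(Node):
    def __init__(self):
        super().__init__('depth_listener')
        self.bridge = CvBridge()
        self.subscription = self.create_subscription(
            Image, '/camera/depth/image_raw', self.depth_callback, 10)

    def depth_callback(self, msg):
        cv_image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        # Use the message's own dimensions so any resolution works.
        center_dist = cv_image[msg.height // 2, msg.width // 2]
        self.get_logger().info(f'Distance to object: {center_dist:.2f} m')


def main():
    rclpy.init()
    rclpy.spin(DepthListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```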
## IMU (Inertial Measurement Unit)
An IMU combines an accelerometer and a gyroscope to measure linear acceleration and angular velocity.
- Crucial for humanoids: used in the feedback loop that keeps the robot upright (see the sketch below).
- Topic: `/imu/data` (`Imu`).
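
As a sketch of the upright-feedback idea, the node below subscribes to `/imu/data` and estimates roll and pitch from the accelerometer's gravity vector, the kind of signal a balance controller would consume. The topic name follows the convention above; the node name is an illustrative assumption, and a real controller would also fuse in the gyroscope rather than rely on the accelerometer alone.

```python
# Minimal IMU subscriber sketch (ROS 2 / rclpy).
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu


class ImuListener(Node):
    def __init__(self):
        super().__init__('imu_listener')
        self.subscription = self.create_subscription(
            Imu, '/imu/data', self.imu_callback, 10)

    def imu_callback(self, msg):
        a = msg.linear_acceleration
        # Tilt estimate from the gravity direction. Only valid while the
        # robot is not accelerating; a real controller fuses the gyro too.
        roll = math.atan2(a.y, a.z)
        pitch = math.atan2(-a.x, math.hypot(a.y, a.z))
        self.get_logger().info(
            f'roll={math.degrees(roll):.1f} deg, '
            f'pitch={math.degrees(pitch):.1f} deg')


def main():
    rclpy.init()
    rclpy.spin(ImuListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```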