PREVENT Robotics Practical Session (Halmstad)

citizensinpower

Created on January 20, 2025


Robotics - PREVENT Project

Practical session


Index

Objectives
Exercise 1
Exercise 2
Exercise 3

source: https://emanual.robotis.com/

Objectives

  • The goal of this robotics practical session is to gain hands-on experience with robot operation, understand key concepts in robotics control, and apply problem-solving skills to real-world applications.
  • The session consists of two main scenarios:
    • visually detecting and recognising a fire
    • finding the nearest exit while avoiding a fire
  • The practical session is based on ROS2 and Gazebo simulations, which can easily be transferred to physical robots.
Robotics technology is transforming disaster response, making search, rescue, and recovery operations faster, safer, and more effective.

GitHub repository


This is an intelligent robotics simulation that combines computer vision and autonomous navigation to detect fires and guide the robot through hazardous environments to the safest exit.

Key features

  • Color-based Arrow Detection for navigation guidance
  • Fire/Flame Detection using computer vision
  • Autonomous Robot Navigation in simulated environments
  • Real-time Control via keyboard input
  • Live Camera Feed from robot's perspective


What will you experience

  • Vision Detection: Watch as your algorithms identify colored arrows and flames
  • 3D Simulation: See your robot navigate through a realistic school environment
  • Live Camera: View the world through your robot's eyes
  • Interactive Control: Drive your robot using WASD keys
  • Autonomous Navigation: Let the robot make decisions based on visual cues

Prerequisites

  • Basic Python knowledge
  • Computer vision fundamentals
  • OpenCV library

Exercise 1

Exercise 1 - Vision detection

The goal of this vision detection exercise is to build foundational computer vision skills by starting with the practical exercise of color detection. Students will specifically learn and practice arrow color detection for the three primary colors (red, green, blue) and detecting flames using various sample images. This hands-on practice provides the necessary basis for understanding how image processing algorithms function in real-world applications.


Start with the exercise

# Install OpenCV for the exercises
pip install opencv-contrib-python

# Try the detection algorithms
cd Exercise/
python detect.py


The color detection code (detect.py) and the exemplary pictures can be downloaded from here

Vision detection

import cv2
import numpy as np
import sys

def detect_arrow_and_show(image_path: str):
    image = cv2.imread(image_path)
    if image is None:
        print(f"Error: Could not read {image_path}")
        return
    image_bgr = image.copy()
    hsv_image = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # HSV colour ranges
    red_lower = np.array([0, 50, 50])
    red_upper = np.array([10, 255, 255])
    green_lower = np.array([35, 50, 50])
    green_upper = np.array([85, 255, 255])
    blue_lower = np.array([100, 50, 50])
    blue_upper = np.array([130, 255, 255])
    orange_lower = np.array([8, 100, 140])
    orange_upper = np.array([20, 255, 255])
    yellow_lower = np.array([20, 60, 160])
    yellow_upper = np.array([45, 255, 255])

    red_mask = cv2.inRange(hsv_image, red_lower, red_upper)
    green_mask = cv2.inRange(hsv_image, green_lower, green_upper)
    blue_mask = cv2.inRange(hsv_image, blue_lower, blue_upper)
    orange_mask = cv2.inRange(hsv_image, orange_lower, orange_upper)
    yellow_mask = cv2.inRange(hsv_image, yellow_lower, yellow_upper)

    red_pixels = cv2.countNonZero(red_mask)
    green_pixels = cv2.countNonZero(green_mask)
    blue_pixels = cv2.countNonZero(blue_mask)
    flame_pixels = (cv2.countNonZero(red_mask)
                    + cv2.countNonZero(orange_mask)
                    + cv2.countNonZero(yellow_mask))

    color_counts = {
        'Red': red_pixels,
        'Green': green_pixels,
        'Blue': blue_pixels,
        'Flame': flame_pixels,
    }
    max_color = max(color_counts, key=color_counts.get)
    max_count = color_counts[max_color]
    detected_color = max_color if max_count > 100 else "Unknown"
    # return detected_color

    cv2.putText(image, f'Detected: {detected_color}', (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow(f'{image_path} - {detected_color}', image)
    print(f"{image_path}: {detected_color} detected")
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python detect.py <filename>")
    else:
        detect_arrow_and_show(sys.argv[1])


And the flame detection code (liveFlameDetection.py):

import cv2
import numpy as np

def flamedetector():
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        print("Error: Could not open webcam.")
        return

    # Optional: make the image a bit sharper / consistent
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    # Parameters you can tune
    MIN_AREA_RATIO = 0.001   # 0.1% of frame area
    SAT_MIN = 140            # min saturation (reduce skin detections)
    VAL_MIN = 190            # min brightness (flames are bright)
    COOLDOWN_FRAMES = 8      # keep text on a few frames to avoid flicker
    cooldown = 0

    print("Press 'q' to quit.")
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Error: Could not read frame.")
            break
        frame = cv2.flip(frame, 1)  # mirror only after a successful read

        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)

        # Hue ranges for red/orange/yellow flames
        # red wrap-around (0-10)
        mask_r1 = cv2.inRange(hsv, np.array([0, SAT_MIN, VAL_MIN]),
                              np.array([10, 255, 255]))
        # orange/yellow (15-45) - adjust upper bound if your flame looks more yellow
        mask_r2 = cv2.inRange(hsv, np.array([15, SAT_MIN, VAL_MIN]),
                              np.array([45, 255, 255]))
        mask = cv2.bitwise_or(mask_r1, mask_r2)

        # Clean up noise
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8), iterations=1)
        mask = cv2.morphologyEx(mask, cv2.MORPH_DILATE, np.ones((3, 3), np.uint8), iterations=2)

        # Find large enough regions
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        frame_area = frame.shape[0] * frame.shape[1]
        min_area = max(200, int(MIN_AREA_RATIO * frame_area))  # never below 200 px
        detected = any(cv2.contourArea(c) > min_area for c in contours)

        # Debounce flicker
        if detected:
            cooldown = COOLDOWN_FRAMES
        elif cooldown > 0:
            cooldown -= 1
        if cooldown > 0:
            cv2.putText(frame, 'Flame Detected!', (12, 34),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 3, cv2.LINE_AA)

        cv2.imshow('Flame Detection (color)', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    flamedetector()

Exercise 2

Exercise 2 - Robot Simulation

The goal of this exercise is to integrate fundamental concepts by having students apply their computer vision and control skills in a realistic simulation environment. Specifically, students will practice using a simulated vision system to detect navigation cues and implement decision-making logic to successfully navigate a TurtleBot3 through a model of a school. This exercise is designed to mimic real-world hazards that can follow earthquakes and wildfires. In an earthquake, numerous ignition sources—such as ruptured gas lines and electrical shorts—are created. During a wildfire, buildings are mainly ignited by wind-blown embers, or by intense radiant heat from nearby burning structures.
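The decision-making side can be illustrated with a tiny mapping from a detected cue to a velocity command. This mapping is a made-up sketch; the project's actual vision_detector.py may react to cues differently:

```python
# Hypothetical cue-to-command mapping: (linear m/s, angular rad/s)
def cue_to_command(cue: str):
    commands = {
        'Green': (0.2, 0.0),    # green arrow: path is safe, drive forward
        'Blue':  (0.0, 0.5),    # blue arrow: turn left toward the exit
        'Red':   (0.0, -0.5),   # red arrow: turn right
        'Flame': (-0.2, 0.0),   # fire ahead: back away
    }
    return commands.get(cue, (0.0, 0.0))  # unknown cue: stop

print(cue_to_command('Flame'))  # (-0.2, 0.0)
```

In the simulation, a ROS2 node would publish the resulting pair on the robot's velocity topic each time the vision system reports a cue.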


Prerequisites

  • Windows/Linux OS
  • GPU: NVIDIA or AMD graphics card - Highly recommended
  • RAM: 8GB+ recommended
  • Docker Desktop software: if you are a Windows user
Apple Silicon Macs cannot be used because their GPUs are not supported.


Setup ROS2 Environment

Step 1 Clone the repository

Step 2 Build Docker Container

Step 3 Launch the Container

Step 4 Access Your Virtual Desktop


Quick start installation Open a terminal in your virtual desktop and run:

Step 1 Install Dependencies

Step 2 Fix Python Compatibility

Step 3 Build the Project

Step 4 Environment Setup (IMPORTANT!)


Launch the Simulation Once everything is installed, open 3 terminals and follow these steps:

Terminal 1: Start the World

Terminal 2: Robot Vision

Terminal 3: Drive the Robot

Exercise 3

Exercise 3 - Real-world implementation

If you have access to physical robots, you can do this exercise, in which you will run the previous simulations in a real-world environment.

source: https://www.ros.org/robots/turtlebot3/


You will need to:

  • construct a model of a school for the simulation environment
  • run the same code as in Exercise 1 (detect.py)
  • run the same code as in Exercise 2 (vision_detector.py)


source: https://forum.robotis.com/t/awesome-turtlebot3-projects/4206

Course completed!

  1. Open your browser
  2. Navigate to: http://localhost:6080/
  3. Login with:
  • Username: ubuntu
  • Password: ubuntu
Boom! You now have a Linux desktop in your browser!
# Setup environment first
source /opt/ros/humble/setup.bash
source /code/ros2_ws/install/setup.bash
export TURTLEBOT3_MODEL=waffle_pi
# Launch the simulation
ros2 launch prevent sim_tb3.launch.py

You'll see the robot spawned in a 3D Gazebo world!

cd ~/code/ros2_ws/
colcon build --symlink-install

Compiling magic happening...

docker run -p 6080:80 --gpus all --shm-size=32gb \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $(pwd)/:/code/ \
  --privileged -it --name humble \
  tiryoh/ros2-desktop-vnc:humble bash

Run these commands in EVERY new terminal:

source /opt/ros/humble/setup.bash
source /code/ros2_ws/install/setup.bash
export TURTLEBOT3_MODEL=waffle_pi
git clone https://github.com/SivadineshPonrajan/PREVENT.git
cd PREVENT
sudo apt update && sudo apt install -y \
  ros-humble-turtlebot3* \
  ros-humble-turtlebot3-gazebo \
  ros-humble-gazebo-ros-pkgs \
  ros-humble-gazebo-ros \
  ros-humble-image-view \
  python3-numpy \
  python3-opencv \
  ros-humble-vision-opencv
sudo apt install python3-colcon-common-extensions
# Setup environment first
source /opt/ros/humble/setup.bash
source /code/ros2_ws/install/setup.bash
export TURTLEBOT3_MODEL=waffle_pi
# Control the robot with keyboard
ros2 run turtlebot3_teleop teleop_keyboard

Use WASD keys to drive your robot around!

docker build -t tiryoh/ros2-desktop-vnc:humble .

This might take a few minutes...

sudo python3 -m pip uninstall -y numpy || true
sudo apt-get update
sudo apt-get install -y --reinstall python3-numpy
python3 -m pip install "numpy<2" --no-cache-dir
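A quick way to confirm the NumPy pin took effect (a sanity check, not part of the official steps):

```python
import numpy as np

# The compatibility fix pins NumPy below 2.0; report whether that took effect
major = int(np.__version__.split(".")[0])
if major < 2:
    print(f"NumPy {np.__version__}: OK")
else:
    print(f"NumPy {np.__version__}: still 2.x, re-run the pin step")
```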


# Setup environment first
source /opt/ros/humble/setup.bash
source /code/ros2_ws/install/setup.bash
export TURTLEBOT3_MODEL=waffle_pi
# View what the robot sees
ros2 run image_view image_view --ros-args --remap /image:=/camera/image_raw

Watch the world through the robot's camera!