
ROAD LANE LINE DETECTION

Ambala Siva Tarun Reddy

Created on August 25, 2021


Road lane line detection

Internship

Date: 31-08-2021

Index

01. Introduction

04. Development

02. Advantages

05. Conclusion

03. Process

06. Thanks

01. Introduction

Specific

  • An autonomous car is a vehicle capable of sensing its environment and operating without human involvement. A human passenger is not required to take control of the vehicle at any time, nor is a human passenger required to be present in the vehicle at all. An autonomous car can go anywhere a traditional car goes and do everything that an experienced human driver does.
  • The basic requirement for self-driving cars is to detect the lanes and keep the car between the lanes.


02. Advantages

Real world motivation behind this project

There are many advantages of self-driving cars. The main ones are:

1. Our roads will be safer
2. We'll be more productive
3. We can save money
4. We'll move more efficiently
5. The environment will thank us

2A. Our roads will be safer

  • Many car accidents have been caused by some sort of human error, be it speeding, driving recklessly, inattentiveness, or worse, impaired driving.
  • It turns out that an overwhelming majority of accidents are caused by humans.
  • In fact, a study by the National Highway Traffic Safety Administration (NHTSA) revealed that 94% of accidents were caused by the drivers themselves.

2B. We’ll be more productive

  • More than 60% of Indians spend 30–100 minutes commuting to work each day. Counting both directions, that can easily add up to a few hours a day, and it largely goes to waste.
  • Indians spend an astonishing cumulative total of 2.5 billion hours commuting each year. Imagine what we could do with all that time back.
  • A self-driving car would allow them to get some work done, knock a few emails out, or even get a little extra sleep if they have to wake up early to get to work.

2C. We can save money

  • Because self-driving cars are safer, they'll cut down on accident-induced costs. Additionally, self-driving cars don't sustain the same kind of wear and tear as human-piloted vehicles. They don't floor the gas pedal or slam the brakes unless an emergency is detected, making for better upkeep, slower depreciation, and even better fuel efficiency. These little things can quickly add up, meaning that self-driving cars, while more expensive up front, can easily put money back in your pocket in the long term.

2D. We’ll move more efficiently

  • For one, since self-driving cars are connected to the internet, their navigation will use GPS programs like Google Maps to automatically generate the quickest possible route.
  • The connectivity between self-driving cars' software means the car is also intelligent enough to communicate with other self-driving cars and detect delays and accidents before you arrive at them, so it can reroute the vehicle's path without running into any impediments.

2E. The environment will thank us

  • According to the Union of Concerned Scientists, transportation, in general, was responsible for over half of the carbon monoxide and nitrogen oxide air pollution as well as a quarter of the hydrocarbons emitted into our atmosphere.
  • While many self-driving cars might still emit these same materials, their improved efficiency would be a huge step forward toward a cleaner future.
  • If future manufacturers of self-driving cars make electric vehicles, the positive impact on the environment could be even greater.

03. DETECTION PROCESS

The lane detection pipeline follows these steps:

  • Pre-process the image using grayscale conversion and Gaussian blur
  • Apply Canny edge detection to the image
  • Apply a masking region to the image
  • Apply the Hough transform to the image
  • Extrapolate the lines found by the Hough transform to construct the left and right lane lines
  • Add the extrapolated lines to the input image
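
As an illustration only, a minimal Python/OpenCV sketch of this pipeline is shown below. The threshold values, region-of-interest vertices, and Hough parameters are assumptions chosen for illustration, not the project's exact settings; the extrapolation to a single left and right line is detailed in the Development sections that follow.

import cv2
import numpy as np

def detect_lane_lines(frame):
    height, width = frame.shape[:2]

    # 1. Pre-process: grayscale and Gaussian blur
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # 2. Canny edge detection (illustrative thresholds)
    edges = cv2.Canny(blurred, 50, 150)

    # 3. Mask a triangular region of interest in front of the car
    mask = np.zeros_like(edges)
    triangle = np.array([[(0, height), (width // 2, int(height * 0.6)), (width, height)]],
                        dtype=np.int32)
    cv2.fillPoly(mask, triangle, 255)
    masked = cv2.bitwise_and(edges, mask)

    # 4. Probabilistic Hough transform to find line segments
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=100,
                            minLineLength=40, maxLineGap=5)

    # 5/6. Draw the segments (extrapolation to one line per lane is covered later)
    #      and overlay them on the input image
    line_image = np.zeros_like(frame)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(line_image, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 10)
    return cv2.addWeighted(frame, 0.8, line_image, 1, 0)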

Development Index

01. Perspective Transform

06. Make Points and Average Slope Intercept

02. Canny Edge Detection

07. Display Lines Average

08. Camera Calibration

03. Region of Interest

09. Thank You

04. Hough Lines P

05. Display Lines

01. Perspective Transform

In perspective transformation, we can change the perspective of a given image or video to get a better view of the required information. We need to provide the points on the image from which we want to gather information by changing the perspective, and also the points inside which we want to display our image. Then we compute the perspective transform from the two given sets of points and warp the original image with it.

1A. Perspective Transform

cv2.getPerspectiveTransform(src, dst)

Parameters:
src → coordinates of the trapezium in the source image.
dst → coordinates of the corresponding trapezium in the destination image.

1B. Perspective Transform

cv2.warpPerspective(src, M, dsize, dst)

Parameters:
src → input image.
M → 3×3 transformation matrix.
dsize → size of the output image.
dst → output image that has the size dsize and the same type as src.

Transformation Matrix

  • To calculate the value of each pixel location in the destination image, the source pixel coordinates are multiplied by the transformation matrix.
  • Step 1: The transformation matrix is multiplied with the pixel's (x, y) coordinate values (in homogeneous form).
  • Step 2: The values obtained in the first two rows are divided by the third row to obtain the destination (x, y).
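
As a sketch, the two calls can be combined as follows; the trapezium coordinates and the file name are assumptions chosen for illustration, since the real source points depend on the camera mounting:

import cv2
import numpy as np

image = cv2.imread("road.jpg")                      # hypothetical input frame
h, w = image.shape[:2]

# Trapezium around the lane in the source (front-view) image
src = np.float32([[w * 0.45, h * 0.65], [w * 0.55, h * 0.65],
                  [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
# Corresponding rectangle in the destination (top-view) image
dst = np.float32([[w * 0.20, 0], [w * 0.80, 0],
                  [w * 0.80, h], [w * 0.20, h]])

M = cv2.getPerspectiveTransform(src, dst)           # 3x3 transformation matrix
M_inv = cv2.getPerspectiveTransform(dst, src)       # inverse, for warping back to front view

top_view = cv2.warpPerspective(image, M, (w, h))           # front view -> top view
front_view = cv2.warpPerspective(top_view, M_inv, (w, h))  # top view -> front view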

Example

Scanner in our phones

Project Image:

Perspective transform from front view to top view and from top view to front view

02. CANNY EDGE DETECTION

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. The Canny edge detection algorithm can be broken down into five steps:

1. Apply a Gaussian filter to smooth the image in order to remove noise
2. Find the intensity gradients of the image

2A. CANNY EDGE DETECTION

3. Apply gradient magnitude thresholding or lower-bound cut-off suppression to get rid of spurious responses to edge detection
4. Apply a double threshold to determine potential edges
5. Track edges by hysteresis: finalize the detection of edges by suppressing all other edges that are weak and not connected to strong edges.

2B. CANNY EDGE DETECTION

The image has been converted to grayscale and smoothed using a 5×5 Gaussian filter

The original image

2C. CANNY EDGE DETECTION

The intensity gradient of the previous image. The image borders have been handled by replication.

Non-maximum suppression applied to the previous image.

2D. CANNY EDGE DETECTION

Double thresholding applied to the previous image. Weak pixels are removed

Hysteresis applied to the previous image

2E. CANNY EDGE DETECTION

cv2.cvtColor(src, code)

Converts an image from one color space to another.

Parameters:
src → input image.
code → color space conversion code (for example, cv2.COLOR_BGR2GRAY for BGR to grayscale).
dst (optional) → output image of the same size and depth as src.

2F. CANNY EDGE DETECTION

cv2.GaussianBlur(src, ksize, sigmaX)

Blurs an image using a Gaussian filter.

Parameters:
src → input image.
ksize → Gaussian kernel size; ksize.width and ksize.height can differ, but both must be positive and odd.
sigmaX → Gaussian kernel standard deviation in the X direction (pass 0 to have it computed from ksize).
dst (optional) → output image of the same size and depth as src.

2G. CANNY EDGE DETECTION

cv2.Canny(image, threshold1, threshold2)

Finds edges in an image using the Canny algorithm.

Parameters:
image → input image.
threshold1 → first threshold for the hysteresis procedure.
threshold2 → second threshold for the hysteresis procedure.
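
Putting the three calls above together, a minimal sketch could look like this (the file name and threshold values are illustrative assumptions, not the project's exact settings):

import cv2

image = cv2.imread("road.jpg")                    # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # reduce to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # smooth with a 5x5 Gaussian kernel
edges = cv2.Canny(blurred, 50, 150)               # hysteresis thresholds 50 and 150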


Example

Project Images

Canny edge detection image

Original image

03. REGION OF INTEREST

  • The region of interest for the self-driving car's camera is only the two lane lines immediately in its field of view, not anything extraneous. We can filter out the other pixels by defining a triangular region of interest and removing all pixels that are not inside the triangle.
  • We make the mask (a white triangle on a black image) and combine it with the main image.

black image with white triangle + original image = region-of-interest image
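
A minimal sketch of this masking step is shown below; the triangle vertices are assumptions picked for illustration and would be tuned to the project's camera view:

import cv2
import numpy as np

def region_of_interest(edges):
    height, width = edges.shape[:2]
    mask = np.zeros_like(edges)                    # black image the same size as the edge image
    triangle = np.array([[(0, height),                         # bottom-left corner
                          (width // 2, int(height * 0.55)),    # apex near the horizon
                          (width, height)]], dtype=np.int32)   # bottom-right corner
    cv2.fillPoly(mask, triangle, 255)              # white triangle on the black mask
    return cv2.bitwise_and(edges, mask)            # keep only edge pixels inside the triangle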

Project Images

Region of interest image

Full image after canny edge detection

04. HOUGH LINES P

The Hough transform is a popular technique to detect any shape, provided the shape can be represented in mathematical form. It can detect the shape even if it is broken or slightly distorted. A line can be represented as y = mx + c, or in parametric form as ρ = x cos θ + y sin θ, where ρ is the perpendicular distance from the origin to the line and θ is the angle between this perpendicular and the horizontal axis, measured counter-clockwise.

4A. HOUGH LINES P

cv2.HoughLinesP(image, rho, theta, threshold, lines, minLineLength, maxLineGap)

Finds line segments in a binary image using the probabilistic Hough transform.

4B. HOUGH LINES P

Parameters:
image → single-channel binary source image.
rho → distance resolution of the accumulator, in pixels.
theta → angle resolution of the accumulator, in radians.
threshold → only lines that receive more than this number of votes in the accumulator are returned.
lines → output vector of lines.
minLineLength → minimum line length; segments shorter than this are rejected.
maxLineGap → maximum allowed gap between points on the same line to link them.
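
For example, a call with parameter values in the range typically used for road images (all values here are illustrative assumptions):

import cv2
import numpy as np

# 'masked_edges' stands for the Canny output restricted to the region of interest
masked_edges = cv2.imread("masked_edges.png", cv2.IMREAD_GRAYSCALE)   # placeholder input

lines = cv2.HoughLinesP(masked_edges,
                        rho=2,               # distance resolution: 2 pixels
                        theta=np.pi / 180,   # angle resolution: 1 degree
                        threshold=100,       # minimum number of votes in the accumulator
                        minLineLength=40,    # reject segments shorter than 40 pixels
                        maxLineGap=5)        # join segments separated by gaps of up to 5 pixels
# 'lines' has shape (N, 1, 4); each entry is one segment (x1, y1, x2, y2)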

Example

05. DISPLAY LINES

  • The Hough transform gives us small line segments based on the intersections in Hough space. Now we can take this information and construct one global left lane line and one right lane line.
  • We do this by separating the segments into two groups, one with a positive gradient and the other with a negative gradient. Because of the perspective of the camera's view, the lane lines slant towards each other as they recede into the distance, so the left and right lines have opposite gradients.
  • The result may still contain multiple lines for a single lane (left or right).
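
A minimal sketch of drawing the detected segments on a blank image (the colour and thickness are arbitrary choices for illustration):

import cv2
import numpy as np

def display_lines(image, lines):
    line_image = np.zeros_like(image)              # black image the same size as the input
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            # draw each (x1, y1)-(x2, y2) segment as a thick blue line
            cv2.line(line_image, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 10)
    return line_image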

Example

Lines are drawn through the respective points (from the Hough lines)

06. MAKE POINTS AND AVERAGE SLOPE INTERCEPT

  • We have to construct lines from the points generated by the HoughLinesP method.
  • Using these points, the left and right lane lines are generated and plotted on the image to indicate the prediction.
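
One common way to implement this step is sketched below; the choice of drawing lines up to 60% of the image height is an assumption for illustration, and vertical or missing segments are not handled in this sketch:

import numpy as np

def make_points(image, line_parameters):
    # Convert one (slope, intercept) pair into two (x, y) end points for drawing
    slope, intercept = line_parameters
    y1 = image.shape[0]                  # bottom of the image
    y2 = int(y1 * 0.6)                   # draw up to ~60% of the image height
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return np.array([x1, y1, x2, y2])

def average_slope_intercept(image, lines):
    # Average the Hough segments into one left and one right lane line
    left_fit, right_fit = [], []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)   # fit y = slope*x + intercept
        if slope < 0:
            left_fit.append((slope, intercept))    # left lane slants with a negative gradient
        else:
            right_fit.append((slope, intercept))   # right lane slants with a positive gradient
    left_line = make_points(image, np.average(left_fit, axis=0))
    right_line = make_points(image, np.average(right_fit, axis=0))
    return np.array([left_line, right_line])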

07. DISPLAY LINES AVERAGE

  • If the lane line is broad, or multiple lines are detected for the same lane line, we average all of them and display only one line (one piece of data) to our machine so it can act without any distractions.
  • We then overlay the extrapolated lines on the input image. We do this by adding a weighted value of the line image to the original image based on the detected lane line coordinates.
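
The weighted overlay described above is typically done with cv2.addWeighted; continuing from the display_lines sketch earlier, the weights below are illustrative:

import cv2

# 'frame' is the original image; 'line_image' contains only the averaged lane lines
combined = cv2.addWeighted(frame, 0.8, line_image, 1.0, 0.0)   # 0.8*frame + 1.0*line_image + 0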

Example

All lines are averaged and only one line per lane is drawn

08. CAMERA CALIBRATION

  • While optical distortion is caused by the optical design of lenses, perspective distortion is caused by the position of the camera relative to the subject or by the position of the subject within the image frame.
  • To eliminate this distortion we need camera calibration.
  • Camera calibration is the process of finding the true parameters of the camera that took your photographs. Some of these parameters are focal length, format size, principal point, and lens distortion.

8A. CAMERA CALIBRATION

retval, corners = cv2.findChessboardCorners(image, patternSize, corners, flags)

Finds the positions of internal corners of the chessboard.

8B. CAMERA CALIBRATION

Parameters:
image → chessboard image.
patternSize → number of inner corners per chessboard row and column.
corners → output array of detected corners.
flags → various operation flags.
retval → True if all the inner corners were found, False otherwise.

8D. CAMERA CALIBRATION

retval, cameraMatrix, distCoeffs, _, _ = cv2.calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs)

Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

8E. CAMERA CALIBRATION

Parameters:
objectPoints → vector of vectors of calibration pattern points in the calibration pattern coordinate space (3D points).
imagePoints → vector of vectors of the projections of the calibration pattern points in the images (2D points).
imageSize → size of the image, used to initialize the camera intrinsic matrix.
cameraMatrix → input/output 3×3 floating-point camera intrinsic matrix.
distCoeffs → input/output vector of distortion coefficients.
retval → overall RMS re-projection error.

8G. CAMERA CALIBRATION

dst = cv2.undistort(src, cameraMatrix, distCoeffs, dst, newCameraMatrix)

Transforms an image to compensate for lens distortion.

8H. CAMERA CALIBRATION

Parameters:
src → input (distorted) image.
cameraMatrix → input camera matrix (3×3).
distCoeffs → input vector of distortion coefficients.
dst → output image that has the same size and type as src.
newCameraMatrix → camera matrix of the distorted image; by default it is the same as cameraMatrix.
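
Putting the three calibration calls together, a minimal sketch might look like the following; the chessboard pattern size and file names are assumptions for illustration:

import cv2
import glob
import numpy as np

pattern_size = (9, 6)                    # inner corners per chessboard row and column (assumed)

# 3D coordinates of the chessboard corners in the pattern's own coordinate space
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

object_points, image_points = [], []
for fname in glob.glob("calibration*.jpg"):          # hypothetical chessboard photos
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)

# Estimate the intrinsic matrix and distortion coefficients from all views
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)

# Undistort an image with the calibrated parameters
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)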

Example

Undistorted Image

Original Image (Distorted Image)

05. Conclusion

60%

of people spend 30–100 minutes a day commuting, time that largely goes to waste

  • Accidents can be avoided
  • Time can be saved
  • Good for the environment
  • Works more efficiently
  • And many more

94%

of accidents occur due to human error

Thank you

Submitted by: Ambala Siva Tarun Reddy (1814120), Yelmela Nandini (1814122), Korada Manikanta (1814109)