How do self-driving cars see the world?

For computers, an image is just an array of red, green, and blue pixel values. So in self-driving cars, perception is critical: it emulates the brain by making meaningful sense of the world surrounding the vehicle.
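To make that concrete, here is a minimal Python sketch (using Pillow and NumPy; the file name `street_scene.jpg` is a placeholder) showing that an image is nothing more than a height × width × 3 array of red, green, and blue intensities:

```python
# A minimal sketch of what "an image" is to a computer: a 3-D array
# of red, green, and blue intensities. "street_scene.jpg" is a
# hypothetical example file.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("street_scene.jpg").convert("RGB"))
print(img.shape)   # e.g. (720, 1280, 3): height x width x RGB channels
print(img[0, 0])   # the top-left pixel as [R, G, B] values in 0..255
```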

Perception tasks are often associated with computer vision, machine learning, and neural networks. In a nutshell, these are the four core tasks a self-driving car performs to perceive the world:

  1. Detection to recognize that an object is present and figure out where it is in the environment.
  2. Classification to determine what exactly the object is (detection and classification are sketched together in the first example after this list).
  3. Tracking to follow moving objects over time, e.g. a walking pedestrian. This is useful for monitoring the speed and direction (i.e., the velocity) of surrounding objects relative to the ego vehicle (the car itself); see the tracking sketch below.
  4. Segmentation to assign each pixel in an image a semantic category, such as road, car, or sky (see the final sketch below).
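
For detection and classification, modern pipelines typically run a convolutional neural network over each camera frame. As an illustrative sketch, not the exact stack any particular vehicle uses, here is how a pretrained Faster R-CNN from torchvision can both localize objects (boxes) and classify them (labels); the random tensor stands in for a real camera image:

```python
# Sketch: object detection + classification with a pretrained model.
# This is an illustrative example, not a production self-driving stack.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

# A dummy 3 x H x W RGB frame in [0, 1]; in practice this would
# come from one of the vehicle's cameras.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    (pred,) = model([frame])  # one prediction dict per input image

categories = weights.meta["categories"]
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.8:  # keep confident detections only
        print(categories[label], box.tolist(), float(score))
```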
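For tracking, a simple way to see the idea is to difference an object's position across consecutive frames to estimate its velocity relative to the ego vehicle. This toy sketch assumes detections have already been associated across frames; real trackers add data association and filtering (e.g. Kalman filters), and all positions here are hypothetical:

```python
# Toy tracking sketch: estimate a tracked object's velocity relative
# to the ego vehicle from its positions in two consecutive frames.
# Positions are hypothetical (x, y) coordinates in meters, ego frame.

def relative_velocity(prev_pos, curr_pos, dt):
    """Finite-difference velocity (m/s) between frames dt seconds apart."""
    return tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))

# A pedestrian seen ahead of the car, drifting across the road:
prev_pos = (10.0, 2.0)   # frame t:   10 m ahead, 2 m to the left
curr_pos = (9.5, 2.2)    # frame t+1: slightly closer, further left
dt = 0.1                 # 10 Hz camera -> 0.1 s between frames

vx, vy = relative_velocity(prev_pos, curr_pos, dt)
print(f"relative velocity: {vx:.1f} m/s forward, {vy:.1f} m/s lateral")
# -> -5.0 m/s forward (closing on the car) and +2.0 m/s lateral
```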
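For segmentation, a semantic-segmentation network assigns every pixel a category index. Again as a hedged sketch, this uses a generic pretrained torchvision model rather than an automotive-specific one (production systems are typically trained on driving datasets such as Cityscapes):

```python
# Sketch: per-pixel semantic segmentation with a pretrained model.
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()

# Dummy batch of one 3 x H x W frame; a real pipeline would feed
# camera images preprocessed with weights.transforms().
frame = torch.rand(1, 3, 480, 640)

with torch.no_grad():
    logits = model(frame)["out"]   # shape: (1, num_classes, H, W)

class_map = logits.argmax(dim=1)   # one class index per pixel
print(class_map.shape)             # torch.Size([1, 480, 640])
```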
