How do self-driving cars see the world?

To a computer, an image is just arrays of red, green, and blue intensity values. Perception in self-driving cars therefore plays the role of the brain: it turns that raw data into a meaningful picture of the world surrounding the vehicle.
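As a minimal sketch of that idea, here is how a tiny image looks to a computer (plain Python, no libraries; the pixel values and the `brightness` helper are made up for illustration):

```python
# A 2x2 image is just a grid of (red, green, blue) values in 0-255.
image = [
    [(255, 0, 0), (0, 255, 0)],   # row 0: a red pixel, a green pixel
    [(0, 0, 255), (30, 30, 30)],  # row 1: a blue pixel, a dark gray pixel
]

def brightness(pixel):
    """Average the three channels -- one simple way perception code
    begins turning raw RGB numbers into something meaningful."""
    r, g, b = pixel
    return (r + g + b) / 3

print(brightness(image[0][0]))  # red pixel -> 85.0
```

Everything the perception stack does starts from grids of numbers like this one.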

Perception draws on computer vision, machine learning, and neural networks. In a nutshell, these are the four core tasks a self-driving car performs to perceive the world:

  1. Detection to recognize and figure out where an object is in the environment.
  2. Classification to determine what exactly the object is.
  3. Tracking to observe moving objects over time, e.g. a walking pedestrian. This is useful for monitoring the velocity of surrounding objects relative to ego (the vehicle itself).
  4. Segmentation to match each pixel in an image with semantic categories, such as road, car, and sky.
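Task 3 above can be sketched in a few lines: given an object's detected positions across frames, estimate its velocity relative to ego with a finite difference. This is a toy illustration, not a production tracker; the timestamps and positions are invented, and positions are assumed to be in meters in the ego vehicle's coordinate frame:

```python
# Each observation: (timestamp in seconds, (x, y) position in meters,
# expressed relative to ego). All numbers are made up.
observations = [
    (0.0, (10.0, 2.0)),   # t=0.0 s: pedestrian 10 m ahead, 2 m to the left
    (0.5, (9.5, 2.0)),
    (1.0, (9.0, 2.0)),    # closing along x at about 1 m/s
]

def estimate_velocity(obs):
    """Velocity from the first and last observation (finite difference)."""
    (t0, (x0, y0)), (t1, (x1, y1)) = obs[0], obs[-1]
    dt = t1 - t0
    return ((x1 - x0) / dt, (y1 - y0) / dt)

vx, vy = estimate_velocity(observations)
print(vx, vy)  # -> -1.0 0.0  (the pedestrian is closing at 1 m/s)
```

Real trackers use filters (e.g. Kalman filters) over many noisy detections, but the underlying idea is the same: associate detections over time and differentiate position.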


News Desk
