How do self-driving cars see the world?
To a computer, an image is just arrays of red, green, and blue pixel values. So in self-driving cars, perception is critical: it plays the role of the brain, turning that raw data into a meaningful picture of the world surrounding the vehicle.
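To make that first sentence concrete, here is a minimal sketch in plain Python (the tiny 2×2 image and its pixel values are invented for illustration): to a computer, an image is nothing but a grid of pixels, each a red/green/blue triple.

```python
# To a computer, an image is just a grid of (R, G, B) values, 0-255 each.
# A hypothetical 2x2 image: red and green on top, blue and white below.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# "Seeing" the pixel at row 0, column 1 means reading three numbers:
r, g, b = image[0][1]
print(r, g, b)  # -> 0 255 0
```

Everything perception does, from detection to segmentation, starts from grids of numbers like this.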
Perception tasks typically draw on computer vision, machine learning, and neural networks. In a nutshell, a self-driving car perceives the world through four core tasks:
- Detection to recognize and figure out where an object is in the environment.
- Classification to determine what exactly the object is.
- Tracking to follow moving objects over time, e.g. a walking pedestrian. This is useful for estimating the velocity of surrounding objects relative to ego (the vehicle itself).
- Segmentation to assign each pixel in an image to a semantic category, such as road, car, or sky.
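Tracking, for instance, boils down to estimating relative motion from successive observations. A toy sketch of the idea (the function, coordinate frame, and units are assumptions for illustration; real systems use filters such as Kalman filters over noisy detections):

```python
def track_velocity(pos_t0, pos_t1, dt):
    """Tracking: estimate an object's velocity relative to ego from
    two observed positions taken dt seconds apart.

    Positions are (forward, lateral) offsets from ego in metres
    (an assumed coordinate frame for this sketch)."""
    return (
        (pos_t1[0] - pos_t0[0]) / dt,
        (pos_t1[1] - pos_t0[1]) / dt,
    )

# A pedestrian detected 20 m ahead, then 19 m ahead half a second later:
vx, vy = track_velocity((20.0, 3.0), (19.0, 3.0), 0.5)
print(vx, vy)  # -> -2.0 0.0  (closing on ego at 2 m/s)
```

In practice each of the four tasks is handled by trained neural networks, but the outputs they exchange, boxes, labels, velocities, and per-pixel categories, are as simple as this.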