Patent Title: Assisted perception for autonomous vehicles
Assignee: Google Inc.
- Autonomous vehicle – A vehicle (car/truck/motorcycle/bus) which can navigate roads without a human driver. More on Wikipedia.
- LIDAR – A mechanism for measuring objects/obstacles around a point of interest (POI). A laser beam is shot out from the POI and objects are detected using their reflections. Wikipedia
- Radar – A detection system that uses radio waves to determine the range, angle, or velocity of objects. Wikipedia
The invention lays out a framework for assisted driving in autonomous vehicles. If an autonomous vehicle is unable to understand its environment and make decisions, the decision can be offloaded to a more powerful computer or human using the apparatus described in this patent.
Breakdown of an autonomous vehicle
1. Propulsion system
2. Sensor system
- GPS – Determines the position of the vehicle using satellite
- Inertial measurement unit – Gyroscopes and accelerometers sense position and orientation of the vehicle
- Radar unit – Uses radio signals to sense objects and in some cases their speed and direction
- Camera – Captures photos/video of the environment of the vehicle
- LIDAR – Also called Laser rangefinder, uses lasers to detect objects around the vehicle
- Steering sensor – Detects steering angle of the vehicle
- Throttle/brake sensor – Senses either throttle or brake position of the vehicle
- Audio sensors – Capture audio from the environment of the vehicle
3. Control system
- Steering unit – Controls the steering of the vehicle
- Throttle unit – Controls operating speed of engine/motor
- Brake unit – Decelerates the vehicle using friction or by converting the kinetic energy of the wheels to electric current
- Sensor fusion algorithm – Takes sensor data as input. May evaluate individual objects or assess situations to provide possible outcomes
- Computer vision system – Hardware and software to process and analyze images. Detects and classifies objects such as stop signs, roadway boundaries etc.
- Navigation pathing system – Could be configured to determine a driving path for the vehicle.
- Obstacle avoidance system – Configured to evaluate potential obstacles based on sensor data and determine an alternate path
Peripherals include touchscreen, in-cabin microphone, speaker and wireless communication system (3G/4G etc.)
Note that the vehicle may have multiple cameras or other sensors (Radar/LIDAR/mics) for mapping the environment more accurately. For example, the above vehicle has one camera mounted on the top and another behind the windshield (front camera).
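With multiple sensors observing the same object, the sensor fusion algorithm can combine their measurements into one estimate. Here is a minimal sketch using inverse-variance weighting; this particular method and the numbers are illustrative assumptions, not something the patent prescribes:

```python
def fuse_ranges(measurements: list[tuple[float, float]]) -> float:
    """Fuse independent range estimates given as (value, variance) pairs
    by inverse-variance weighting: more precise sensors get more weight."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(w * value for (value, _), w in zip(measurements, weights))
    return total / sum(weights)

# Radar reports 20.0 m (variance 4.0), LIDAR reports 19.0 m (variance 1.0);
# the fused estimate leans toward the more precise LIDAR reading.
fused = fuse_ranges([(20.0, 4.0), (19.0, 1.0)])
print(round(fused, 2))  # 19.2
```

The LIDAR reading dominates because its variance is four times smaller, which is exactly the behavior you want when one sensor is known to be more precise than another.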
Object detection and identification
Autonomous vehicles need to detect and identify objects around them so they can respond accordingly. Based on sensor data, the control system estimates its environment with a certain degree of confidence. This is called the confidence score.
Each object detected in the environment has a certain confidence score associated with it. If the confidence score is above a confidence threshold, the vehicle can act on the detection on its own. If the confidence is below the threshold, however, the vehicle needs a more powerful computer to make a better prediction, or a human to help it understand the environment.
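The offloading decision can be sketched as a simple threshold check. This is a minimal illustration; the threshold value, the `Detection` class, and the response strings are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical threshold; the patent does not specify a value.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Detection:
    label: str         # e.g. "stop sign"
    confidence: float  # confidence score in [0, 1]

def handle_detection(det: Detection) -> str:
    """Act autonomously when confident; otherwise offload the decision."""
    if det.confidence >= CONFIDENCE_THRESHOLD:
        return f"act on {det.label}"
    # Below threshold: send the sensor data to a more powerful
    # computer or a human operator for identification.
    return f"request assistance for possible {det.label}"

print(handle_detection(Detection("stop sign", 0.95)))  # act on stop sign
print(handle_detection(Detection("stop sign", 0.40)))
```

A clearly visible stop sign clears the threshold and is handled locally; an obstructed one falls below it and triggers a request for assistance.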
Imagine a situation where an autonomous vehicle can't see a stop sign on the road because it is tailing a truck and its view is obstructed.
The views from the front camera (behind the windshield) and the sensor unit camera are shown below.
Since the autonomous vehicle can't confidently identify the stop sign, it sends the sensor data to a remote assistor (a more powerful computer or a human operator) for identification.
Once the operator punches in their response, the identification is confirmed and is added to a global map. Other vehicles passing the same spot may not have to request assistance for the same object.
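Because confirmed identifications are shared, a vehicle can consult the global map before asking for help. A minimal sketch of that lookup; the map structure, location quantization, and `operator_confirm` stand-in are all assumptions for illustration:

```python
# Hypothetical global map keyed by a coarse location; confirmed
# identifications are cached so later vehicles can reuse them.
global_map: dict[tuple[float, float], str] = {}

def round_location(lat: float, lon: float) -> tuple[float, float]:
    """Quantize coordinates so nearby detections share a map cell."""
    return (round(lat, 4), round(lon, 4))

def operator_confirm(guess: str) -> str:
    # Stand-in for the remote operator's confirmed response.
    return guess

def identify(lat: float, lon: float, guess: str) -> str:
    key = round_location(lat, lon)
    if key in global_map:
        return global_map[key]           # reuse a confirmed identification
    confirmed = operator_confirm(guess)  # offload to the operator
    global_map[key] = confirmed          # share with other vehicles
    return confirmed

identify(37.4220, -122.0841, "stop sign")  # first vehicle asks the operator
identify(37.4220, -122.0841, "stop sign")  # later vehicles hit the global map
```

The second call at the same location is answered from the map without contacting the operator, which is the saving the patent describes for subsequent vehicles.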
If you liked this summary and want more delivered to your inbox every week, just subscribe to Fast Science below!