Differences
Member:sungbeanJo_paper [2021/03/04 18:14] sungbean
Member:sungbeanJo_paper [2021/04/21 22:08] (current) sungbean
Line 1:
- Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands.

+ get_config_param active timestamp_mode
+ TIME_FROM_INTERNAL_OSC
+ get_config_param active multipurpose_io_mode
+ OUTPUT_OFF
+ get_config_param active sync_pulse_in_polarity
+ ACTIVE_LOW
+ get_config_param active nmea_in_polarity
+ ACTIVE_HIGH
+ get_config_param active nmea_baud_rate
+ BAUD_9600
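The added lines query an Ouster LiDAR's active configuration over its plain-text TCP command interface: each get_config_param command is answered with a single newline-terminated value. Below is a minimal Python sketch of issuing those same queries; the host address is a placeholder, and port 7501 (the command port on older Ouster firmware) is an assumption to adjust for your sensor.

<code python>
import socket

# Assumptions: SENSOR_HOST is a placeholder address; port 7501 is the
# text command port on older Ouster firmware -- adjust both as needed.
SENSOR_HOST = "192.168.1.100"
COMMAND_PORT = 7501

def get_config_param(param: str) -> str:
    """Send one get_config_param query and return the one-line reply."""
    with socket.create_connection((SENSOR_HOST, COMMAND_PORT), timeout=2.0) as sock:
        sock.sendall(f"get_config_param active {param}\n".encode())
        # The sensor answers each command with a newline-terminated value.
        return sock.makefile().readline().strip()

if __name__ == "__main__":
    for param in ("timestamp_mode", "multipurpose_io_mode",
                  "sync_pulse_in_polarity", "nmea_in_polarity",
                  "nmea_baud_rate"):
        print(f"{param}: {get_config_param(param)}")
</code>

Run against the sensor recorded in the diff above, this would print the same values shown there (TIME_FROM_INTERNAL_OSC, OUTPUT_OFF, ACTIVE_LOW, ACTIVE_HIGH, BAUD_9600).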