Differences
This shows you the differences between two versions of the page.
Member:sungbeanJo_paper [2021/03/04 18:03] sungbean
Member:sungbeanJo_paper [2021/04/21 22:08] (current) sungbean
Line 1:

- We use L1 as loss function ℓ instead of the mean squared error (MSE), as it is more correlated to driving performance [11]. As our NoCrash benchmark consists of complex realistic driving conditions in the presence of dynamic agents, we collect demonstrations from an expert game AI using privileged information to drive correctly (i.e. always respecting rules of the road and not crashing into any obstacle). Robustness to heavy noise in the demonstrations is beyond the scope of our work, as we aim to explore limitations of behavior cloning methods in spite of good demonstrations. Finally, we pre-trained our perception backbone on ImageNet to reduce initialization variance and benefit from generic transfer learning, a standard practice in deep learning seldom explored for behavior cloning.

+ get_config_param active timestamp_mode
+ TIME_FROM_INTERNAL_OSC
+ get_config_param active multipurpose_io_mode
+ OUTPUT_OFF
+ get_config_param active sync_pulse_in_polarity
+ ACTIVE_LOW
+ get_config_param active nmea_in_polarity
+ ACTIVE_HIGH
+ get_config_param active nmea_baud_rate
+ BAUD_9600
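
The removed excerpt notes that an L1 loss is used instead of mean squared error for the control regression. As a quick illustration of the difference, here is a minimal plain-Python sketch; the prediction and target values are made up for the example, not taken from the paper:

def l1_loss(preds, targets):
    # Mean absolute error: errors are penalized linearly.
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def mse_loss(preds, targets):
    # Mean squared error: large errors are penalized quadratically.
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Illustrative steering predictions vs. expert targets (values are made up).
preds   = [0.10, 0.00, 0.90]
targets = [0.05, 0.00, 0.20]
print(l1_loss(preds, targets))   # 0.25
print(mse_loss(preds, targets))  # ~0.164, dominated by the single large error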
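
The added lines appear to be queries against an Ouster lidar's TCP command interface together with the sensor's single-line replies. A minimal sketch that would reproduce these queries, assuming the sensor is reachable at a placeholder hostname and listens on TCP command port 7501 (both are assumptions, not stated in the page):

import socket

# Hypothetical sensor hostname; replace with the address of the actual unit.
SENSOR_HOST = "os1-991234567890.local"
TCP_PORT = 7501  # assumed TCP command port of the sensor

# The parameters queried in the revision above.
PARAMS = [
    "timestamp_mode",
    "multipurpose_io_mode",
    "sync_pulse_in_polarity",
    "nmea_in_polarity",
    "nmea_baud_rate",
]

with socket.create_connection((SENSOR_HOST, TCP_PORT), timeout=5.0) as sock:
    # A file wrapper lets us read one newline-terminated response per command.
    stream = sock.makefile("rw", newline="\n")
    for name in PARAMS:
        stream.write(f"get_config_param active {name}\n")
        stream.flush()
        print(name, "=", stream.readline().strip())

Each command is answered with one line holding the current value, matching the pairs recorded above (e.g. TIME_FROM_INTERNAL_OSC for timestamp_mode, BAUD_9600 for nmea_baud_rate).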