Differences
This shows you the differences between two versions of the page.
Member:sungbeanJo_paper [2021/03/04 18:00] sungbean
Member:sungbeanJo_paper [2021/04/21 22:08] (current) sungbean
- To cope with the inertia problem without an explicit mapping of potential causes or on-policy interventions, we jointly train a sensorimotor controller with a network that predicts the ego vehicle's speed. Both neural networks share the same representation via our ResNet perception backbone. Intuitively, this joint optimization forces the perception module to encode speed-related features in the learned representation. This reduces the dependency on input speed as the only way to capture the dynamics of the scene, leveraging instead visual cues that are predictive of the car's velocity (e.g., free space, curves, traffic light states, etc.).
+ get_config_param active timestamp_mode
+ TIME_FROM_INTERNAL_OSC
+ get_config_param active multipurpose_io_mode
+ OUTPUT_OFF
+ get_config_param active sync_pulse_in_polarity
+ ACTIVE_LOW
+ get_config_param active nmea_in_polarity
+ ACTIVE_HIGH
+ get_config_param active nmea_baud_rate
+ BAUD_9600
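The removed paragraph describes a training setup: a driving controller and a speed-prediction network share one perception backbone, and the joint loss pushes speed-related cues into the shared representation. A toy, stdlib-only sketch of that objective (the random linear maps stand in for the ResNet and the two heads; the feature dimension and the weighting `lam` are assumptions, not values from the text):

```python
import random

random.seed(0)

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

F = 4  # feature dimension (assumption; the paper uses a ResNet backbone)
backbone = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(F)]
control_head = [random.uniform(-1, 1) for _ in range(F)]
speed_head = [random.uniform(-1, 1) for _ in range(F)]

def forward(image):
    # Both heads read the SAME feature vector: this sharing is what
    # couples the control task with speed prediction.
    feats = [dot(row, image) for row in backbone]
    action = dot(control_head, feats)  # sensorimotor output
    speed = dot(speed_head, feats)     # predicted ego-vehicle speed
    return action, speed

def joint_loss(image, target_action, target_speed, lam=0.5):
    # Joint objective: control error plus a weighted speed-prediction
    # error. Minimizing it forces the shared backbone to encode
    # speed-related visual cues.
    a, v = forward(image)
    return (a - target_action) ** 2 + lam * (v - target_speed) ** 2
```

Training would then backpropagate `joint_loss` through both heads and the shared backbone; here the sketch only illustrates how the two losses meet in one representation.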
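The added lines read like a transcript of `get_config_param active <key>` queries, each answered on the following line (the parameter names match the Ouster OS1 lidar's TCP configuration interface, though the page does not say which sensor was used). A minimal sketch that parses such a transcript into a dict, assuming the query-then-reply layout shown above:

```python
def parse_config_transcript(lines):
    """Pair each `get_config_param active <key>` query with the reply
    printed on the line that follows it."""
    params = {}
    replies = iter(lines)
    for line in replies:
        parts = line.split()
        if parts[:2] == ["get_config_param", "active"] and len(parts) == 3:
            params[parts[2]] = next(replies).strip()
    return params

# Transcript taken verbatim from the new revision above.
transcript = """\
get_config_param active timestamp_mode
TIME_FROM_INTERNAL_OSC
get_config_param active multipurpose_io_mode
OUTPUT_OFF
get_config_param active sync_pulse_in_polarity
ACTIVE_LOW
get_config_param active nmea_in_polarity
ACTIVE_HIGH
get_config_param active nmea_baud_rate
BAUD_9600"""

params = parse_config_transcript(transcript.splitlines())
print(params["timestamp_mode"])  # TIME_FROM_INTERNAL_OSC
```

On a live sensor these replies would come back over the sensor's TCP configuration port (7501 on Ouster OS1 devices, per their documentation) rather than from a captured dump.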