Activation Function: Key to Cloning from Human Learning to Deep Learning
Keywords:
BC, end-to-end learning, saliency map, computer vision, behavioral cloning, autonomous vehicles, self-driving, obstacle mitigation
Abstract
Maneuvering around a stationary on-road obstacle at high speed involves making multiple decisions in split seconds, and an inaccurate decision may result in a crash. One of the key decisions is whether the stationary on-road obstacle can be driven over. The model learns to clone the driver's behavior of maneuvering around a non-surpassable obstacle and driving over a surpassable one. No labels of "surpassable" and "non-surpassable" were provided during training. We have developed an array of test cases to verify the robustness of CNN models used in autonomous driving. Experimenting with activation functions and dropout rates, the model achieves an accuracy of 87.33% and a run time of 4,478 seconds with an input of only 4,881 images (training + testing). The model is trained for a limited set of stationary on-road obstacles. This paper provides a unique method to verify the robustness of CNN models for obstacle mitigation in autonomous vehicles.
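The abstract mentions experimenting with activation functions and dropout but does not specify the architecture or the candidate functions. As a minimal sketch only, the snippet below illustrates two common activation choices in such experiments (ReLU and ELU) and an inverted-dropout mask in NumPy; the function names and the choice of activations are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x), a common CNN activation."""
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for x > 0, smooth negative tail."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def dropout(x, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of units during training
    and rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))   # negative inputs clamped to 0, positives passed through
print(elu(x))    # negative inputs mapped to a smooth curve above -alpha
rng = np.random.default_rng(0)
print(dropout(np.ones(8), rate=0.5, rng=rng))  # roughly half zeroed, rest scaled to 2.0
```

Swapping the activation (and the dropout rate) while holding the rest of the network fixed is the kind of ablation the abstract's accuracy/run-time comparison implies.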
Published: 2020-01-15
License
Copyright (c) 2020 Authors and Global Journals Private Limited
This work is licensed under a Creative Commons Attribution 4.0 International License.