How can you guarantee that your self-driving software is safe when verification moves beyond testing subroutines and functions to assuring that AI software, which learns and applies new knowledge in the real world, always operates correctly?
ISO 26262 guides us to define safety objectives, derive requirements from them, and then demonstrate that a rigorous design and verification methodology was applied to the software. Yet the advanced neural networks and machine learning techniques behind autonomous driving software seem to break this traditional software development process. Download this paper to find out how to approach the dilemma.