Inverse Abstraction of Neural Networks Using Symbolic Interpolation
From Murray Wiki
| Title | Inverse Abstraction of Neural Networks Using Symbolic Interpolation | 
|---|---|
| Authors | Sumanth Dathathri, Sicun Gao and Richard M. Murray | 
| Source | To appear, 2019 AAAI Conference on Artificial Intelligence | 
| Abstract | Neural networks in real-world applications have to satisfy critical properties such as safety and reliability. The analysis of such properties typically involves extracting information through computing pre-images of neural networks, but it is well-known that explicit computation of pre-images is intractable. We introduce new methods for computing compact symbolic abstractions of pre-images. Our approach relies on computing approximations that provably overapproximate and underapproximate the pre-images at all layers. The abstraction of pre-images enables formal analysis and knowledge extraction without modifying standard learning algorithms. We show how to use inverse abstractions to automatically extract simple control laws and compact representations for pre-images corresponding to unsafe outputs. We illustrate that the extracted abstractions are often interpretable and can be used for analyzing complex properties. | 
| Type | Conference paper | 
| URL | http://www.cds.caltech.edu/~murray/preprints/dgm19-aiaa.pdf | 
| DOI | |
| Tag | dgm19-aiaa | 
| ID | 2018e | 
| Funding | NSF VeHICaL | 
| Flags | |

