Inverse Abstraction of Neural Networks Using Symbolic Interpolation

Title: Inverse Abstraction of Neural Networks Using Symbolic Interpolation
Authors: Sumanth Dathathri, Sicun Gao and Richard M. Murray
Source: To appear, 2019 AAAI Conference on Artificial Intelligence
Abstract: Neural networks in real-world applications have to satisfy critical properties such as safety and reliability. The analysis of such properties typically involves extracting information through computing pre-images of neural networks, but it is well-known that explicit computation of pre-images is intractable. We introduce new methods for computing compact symbolic abstractions of pre-images. Our approach relies on computing approximations that provably overapproximate and underapproximate the pre-images at all layers. The abstraction of pre-images enables formal analysis and knowledge extraction without modifying standard learning algorithms. We show how to use inverse abstractions to automatically extract simple control laws and compact representations for pre-images corresponding to unsafe outputs. We illustrate that the extracted abstractions are often interpretable and can be used for analyzing complex properties.
Type: Conference paper
URL: http://www.cds.caltech.edu/~murray/preprints/dgm19-aiaa.pdf
DOI:
Tag: dgm19-aiaa
ID: 2018e
Funding: NSF VeHICaL
Flags:
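
The abstract hinges on over- and under-approximating pre-images of a network layer by layer; the paper does this with symbolic interpolation. As a much simpler stand-in (not the paper's method), the Python sketch below over-approximates the pre-image of a single ReLU layer's output constraint by a box, using interval arithmetic. The function names, the box representation, and the single-layer setting are all illustrative assumptions.

 # Illustrative sketch only -- interval arithmetic standing in for the
 # paper's symbolic interpolation. Over-approximates the pre-image
 # {x in [lo, hi] : relu(W @ x + b) <= y_hi} of one ReLU layer by a box.
 import numpy as np
 
 def tighten_box_halfspace(w, c, lo, hi):
     """Coordinate-wise tightening of the box [lo, hi] against the
     half-space {x : w @ x <= c}; sound (the result still contains
     the intersection with the box) but generally loose."""
     lo, hi = lo.copy(), hi.copy()
     # Smallest possible value of each term w[k] * x[k] over the box.
     term_min = np.where(w > 0, w * lo, w * hi)
     for j in range(len(w)):
         rest = term_min.sum() - term_min[j]  # best case for the other terms
         if w[j] > 0:
             hi[j] = min(hi[j], (c - rest) / w[j])
         elif w[j] < 0:
             lo[j] = max(lo[j], (c - rest) / w[j])
     return lo, hi
 
 def relu_preimage_overapprox(W, b, y_hi, lo, hi):
     """Box over-approximation of {x in [lo, hi] : relu(W @ x + b) <= y_hi}.
     For y_hi[i] >= 0, relu(z_i) <= y_hi[i] holds exactly when
     W[i] @ x + b[i] <= y_hi[i], so each output contributes one half-space."""
     for i in range(W.shape[0]):
         lo, hi = tighten_box_halfspace(W[i], y_hi[i] - b[i], lo, hi)
     return lo, hi
 
 # Example: constrain relu(x1 + x2) <= 0.5 over the unit box [0, 1]^2.
 W, b = np.array([[1.0, 1.0]]), np.array([0.0])
 lo, hi = relu_preimage_overapprox(W, b, np.array([0.5]),
                                   np.array([0.0, 0.0]), np.array([1.0, 1.0]))
 print(lo, hi)  # box [0, 0.5]^2, which contains {x : x1 + x2 <= 0.5} here

Each output unit contributes one half-space and the box is tightened against each in turn. A box that only contains the pre-image is an over-approximation; an under-approximation would instead require every point of the box to satisfy the constraints, which is why the abstract pairs both directions for sound analysis.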