
A Summary of Interpretability Methods for Deep Neural Networks, with TensorFlow Implementations

Published: 2023-03-18 13:42:05 · 科技观察

Understanding neural networks: deep learning has long been regarded as weakly interpretable, yet research on understanding neural networks has never stopped. This article summarizes several interpretability methods for deep neural networks; the linked notebooks can be run in Jupyter.

1. Activation Maximization

Explaining a deep neural network by maximizing its activations comes in two variants:

1.1 Activation Maximization (AM)

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.1%20Activation%20Maximization.ipynb

1.2 Performing AM in Code Space

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.3%20Performing%20AM%20in%20Code%20Space.ipynb

2. Layer-wise Relevance Propagation

This family comprises five interpretability methods: sensitivity analysis, simple Taylor decomposition, layer-wise relevance propagation, deep Taylor decomposition, and DeepLIFT. Sensitivity analysis introduces the notion of a relevance score, simple Taylor decomposition explores basic relevance decomposition, and the various layer-wise relevance propagation methods then build on both. In detail:

2.1 Sensitivity Analysis

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.1%20Sensitivity%20Analysis.ipynb

2.2 Simple Taylor Decomposition

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.2%20Simple%20Taylor%20Decomposition.ipynb

2.3 Layer-wise Relevance Propagation

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%281%29.ipynb
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%282%29.ipynb

2.4 Deep Taylor Decomposition

Code:
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%281%29.ipynb
http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%282%29.ipynb

2.5 DeepLIFT

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.5%20DeepLIFT.ipynb

3. Gradient-Based Methods

Gradient-based methods include deconvolution, backpropagation, guided backpropagation, integrated gradients, and SmoothGrad. For an overview see:
https://github.com/1202kbs/Understanding-NN/blob/master/models/grad.py

In detail:

3.1 Deconvolution

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.1%20Deconvolution.ipynb

3.2 Backpropagation

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.2%20Backpropagation.ipynb

3.3 Guided Backpropagation

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.3%20Guided%20Backpropagation.ipynb

3.4 Integrated Gradients

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.4%20Integrated%20Gradients.ipynb

3.5 SmoothGrad

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.5%20SmoothGrad.ipynb

4. Class Activation Map

There are three class-activation mapping methods: Class Activation Map (CAM), Grad-CAM, and Grad-CAM++. For code on MNIST see:
https://github.com/deepmind/mnist-cluttered

The individual methods:

4.1 Class Activation Map

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.1%20CAM.ipynb

4.2 Grad-CAM

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.2%20Grad-CAM.ipynb

4.3 Grad-CAM++

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.3%20Grad-CAM-PP.ipynb

5. Quantifying Explanation Quality

Although every explanation technique rests on its own intuition or mathematical principle, it is also important to characterize good explanations at a more abstract level and to test those characteristics quantitatively. Two quality-and-evaluation-oriented methods are recommended here:

5.1 Explanation Continuity

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/5.1%20Explanation%20Continuity.ipynb

5.2 Explanation Selectivity

Code: http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/5.2%20Explanation%20Selectivity.ipynb
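To make the listed techniques concrete, the sections below sketch each one in plain NumPy rather than the TensorFlow of the linked notebooks; every model, weight, and hyperparameter in these sketches is an illustrative assumption, not code from the repository. First, the gradient-ascent idea behind activation maximization (§1.1), shown on a toy linear "class logit" with an L2 penalty:

```python
import numpy as np

# Toy class logit f(x) = w . x, standing in for a trained network's output
# neuron; the weights and hyperparameters are illustrative assumptions.
w = np.array([1.0, -2.0, 3.0])
lam = 0.5    # L2 penalty keeps the optimized input bounded
lr = 0.1     # gradient-ascent step size

# Activation maximization: x* = argmax_x  f(x) - lam * ||x||^2,
# found by plain gradient ascent from a zero input.
x = np.zeros_like(w)
for _ in range(500):
    grad = w - 2 * lam * x      # gradient of the regularized objective
    x += lr * grad

# For this quadratic objective the optimum is w / (2 * lam), so the loop
# should converge there; a real network would supply grad via autodiff.
```

In the notebook the same loop runs on a real network's class score, which is what produces the familiar "preferred input" visualizations.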
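The sensitivity analysis of §2.1 scores each input dimension by the squared partial derivative of the output. A sketch on a tiny bias-free two-layer ReLU network with hand-rolled backpropagation (the random weights are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))     # hidden-layer weights (illustrative)
v = rng.normal(size=4)          # output-layer weights
x = rng.normal(size=3)          # input to explain

z = W @ x                       # pre-activations
a = np.maximum(z, 0.0)          # ReLU
f = v @ a                       # scalar model output

# Manual backprop through both layers: df/dx = W^T (v * 1[z > 0]).
grad = W.T @ (v * (z > 0))

# Sensitivity analysis: relevance R_i = (df/dx_i)^2 -- how strongly the
# output reacts to a small change of input i.
relevance = grad ** 2
```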
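The simple Taylor decomposition of §2.2 writes f(x) ≈ Σ_i R_i with R_i = ∂f/∂x_i · (x_i − x̃_i) at a root point x̃ where f(x̃) = 0. For a bias-free ReLU network, f is piecewise linear through the origin, so taking x̃ → 0 inside the input's own linear region gives R_i = grad_i · x_i. A sketch under exactly that assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))     # bias-free two-layer ReLU net (illustrative)
v = rng.normal(size=4)
x = rng.normal(size=3)

z = W @ x
f = v @ np.maximum(z, 0.0)
grad = W.T @ (v * (z > 0))      # df/dx in the linear region containing x

# Simple Taylor decomposition with root point x~ = 0, valid here because
# the network has no biases, so f(t * x) = t * f(x) for t >= 0:
R = grad * x

# Conservation: the relevances sum exactly to the function value.
```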
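Layer-wise relevance propagation (§2.3) redistributes the output score backward layer by layer while (approximately) conserving its total. A sketch of one common variant, the ε-rule, on bias-free layers with small hand-picked weights (the rule choice and all numbers are assumptions; the notebooks cover several variants):

```python
import numpy as np

def lrp_eps(a_in, W, R_out, eps=1e-9):
    """Redistribute relevance R_out of the outputs z = W @ a_in back onto
    the inputs a_in (LRP-epsilon rule for a bias-free linear layer)."""
    z = W @ a_in
    s = R_out / (z + eps * np.sign(z))   # stabilized relevance ratios
    return a_in * (W.T @ s)              # R_i = a_i * sum_j W_ji * s_j

W = np.array([[1.0, -0.5, 0.2],
              [0.3,  0.8, -1.0],
              [-0.7, 0.4,  0.6],
              [0.5,  0.5,  0.5]])
v = np.array([1.0, -1.0, 0.5, 0.8])
x = np.array([1.0, 2.0, -1.0])

z = W @ x
a = np.maximum(z, 0.0)          # ReLU hidden activations
f = v @ a                       # scalar output; its relevance is f itself

R_hidden = lrp_eps(a, v[None, :], np.array([f]))   # output -> hidden
R_input = lrp_eps(x, W, R_hidden)                  # hidden -> input
```

Inactive hidden units (a_j = 0) automatically receive zero relevance, and each propagation step conserves the total up to the ε stabilizer.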
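The methods of §§3.1–3.3 differ only in how they send a signal backward through each ReLU: plain backpropagation gates by the forward activation, the deconvnet gates by the sign of the backward signal, and guided backpropagation gates by both. A one-layer sketch of the three rules (the example vectors are arbitrary):

```python
import numpy as np

def relu_backward(z, g, mode):
    """Backward signal through a ReLU with pre-activation z and incoming
    backward signal g, under the three rules compared in the article."""
    if mode == "backprop":      # true gradient: keep where z > 0
        return g * (z > 0)
    if mode == "deconv":        # deconvnet: keep where g > 0
        return g * (g > 0)
    if mode == "guided":        # guided backprop: keep where both hold
        return g * (z > 0) * (g > 0)
    raise ValueError(mode)

z = np.array([-1.0, 2.0, 3.0, -4.0])   # forward pre-activations
g = np.array([ 5.0, -6.0, 7.0, 8.0])   # signal arriving from above
```

Guided backpropagation's double gating is why its saliency maps are visibly cleaner than either of the other two.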
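Integrated gradients (§3.4) average the gradient along a straight path from a baseline to the input and scale by the input difference; the attributions then satisfy the completeness axiom Σ_i IG_i = f(x) − f(baseline). A sketch with an analytic toy model (a real network would supply grad_f via autodiff):

```python
import numpy as np

def f(x):                 # toy smooth model output (illustrative assumption)
    return np.sum(x ** 3)

def grad_f(x):            # its analytic gradient
    return 3 * x ** 2

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann sum of the gradient along the straight path
    # baseline + alpha * (x - baseline), alpha in (0, 1).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = [grad_f(baseline + a * (x - baseline)) for a in alphas]
    return (x - baseline) * np.mean(grads, axis=0)

x = np.array([1.0, -2.0, 0.5])
baseline = np.zeros_like(x)
ig = integrated_gradients(x, baseline)

# Completeness: the attributions account for the full output change.
```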
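SmoothGrad (§3.5) averages gradients over noisy copies of the input to denoise the saliency map. A sketch on a toy gradient that is brittle near a kink (f(x) = Σ|x_i|, whose raw gradient is a hard ±1; the function, σ, and sample count are all illustrative assumptions):

```python
import numpy as np

def grad_f(x):
    # Gradient of f(x) = sum(|x_i|): the hard +/-1 jump at zero makes the
    # raw saliency unstable for inputs close to the kink.
    return np.sign(x)

def smoothgrad(x, sigma=0.5, n=2000, seed=0):
    # Average the gradient over n Gaussian-perturbed copies of x.
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n, x.size))
    return np.mean([grad_f(x + eps) for eps in noise], axis=0)

x = np.array([0.1, -3.0])
sg = smoothgrad(x)
# Far from the kink the smoothed and raw gradients agree (about -1 at -3.0);
# near the kink the smoothed value is attenuated toward 0.
```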
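The class activation map of §4.1 weights each feature map of the last convolutional layer by the class's weight in the final dense layer that follows global average pooling. A shape-level sketch (channel count, map size, and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
K, H, Wd = 4, 5, 5
A = rng.random(size=(K, H, Wd))   # last-conv feature maps (illustrative)
w = rng.normal(size=K)            # this class's final dense-layer weights

# CAM: per-pixel weighted sum of the feature maps, one weight per channel.
cam = np.tensordot(w, A, axes=1)  # shape (H, Wd)

# Because global average pooling and the weighted sum commute, the mean of
# the CAM equals the class score computed from the pooled features.
score = w @ A.mean(axis=(1, 2))
```

For this GAP-plus-dense architecture, Grad-CAM's channel weights (global-average-pooled gradients of the score with respect to A) reduce to the same w up to scale; its value is that the gradient formulation also works for architectures without a GAP head.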
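Explanation continuity (§5.1) asks that near-identical inputs with near-identical predictions receive near-identical explanations. A minimal sketch of how a plain gradient explanation violates this at a ReLU kink (the toy function is an assumption):

```python
import numpy as np

def f(x):
    # f(x) = max(0, x1 - x2): continuous everywhere, kinked at x1 = x2.
    return max(0.0, x[0] - x[1])

def grad_explanation(x):
    # Sensitivity-style explanation: the gradient of f at x.
    on = 1.0 if x[0] - x[1] > 0 else 0.0
    return np.array([on, -on])

# Two nearly identical inputs straddling the kink:
a = np.array([1.0 + 1e-6, 1.0])
b = np.array([1.0 - 1e-6, 1.0])

# The predictions are almost equal, but the explanation jumps by a full
# unit -- a failure of explanation continuity.
gap = np.max(np.abs(grad_explanation(a) - grad_explanation(b)))
```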
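Explanation selectivity (§5.2) deletes features in order of decreasing relevance and checks that the model output drops quickly ("pixel-flipping"). A sketch on a linear model whose exact per-feature contributions are known, so the ideal behavior is easy to verify (the positive-weight setup is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
w = np.abs(rng.normal(size=10))   # positive weights: every feature helps
x = np.abs(rng.normal(size=10))

def f(x):
    return w @ x

relevance = w * x                 # exact contribution of each feature to f
order = np.argsort(-relevance)    # most relevant first

# Delete features one at a time in that order, recording f after each step.
curve = []
xx = x.copy()
for i in order:
    xx[i] = 0.0
    curve.append(f(xx))

# A selective explanation makes this curve fall steeply from the start;
# here it decreases monotonically and reaches zero once all features go.
```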