ID 原文 译文
56488 近几年,深度模型在诸多任务中取得了巨大成功,但是深度模型需要大量的存储和计算资源实现精确决策,研究者为了将深度模型应用到资源受限的终端设备中,设计了模型压缩的优化策略来降低模型占存和计算量. Although deep learning models have recently achieved remarkable success in many tasks, they require massive memory and computing resources to make accurate decisions. To deploy deep models on resource-constrained terminal devices, researchers have designed model compression strategies that reduce model size and computation.
56489 本文基于剪枝压缩框架,从卷积核重要度评价的角度提出了两种模型剪枝算法. Based on the pruning compression framework, two pruning methods are proposed from the perspective of filter importance evaluation.
56490 (1)由于每个卷积核都可以学习到其独有特征信息,因此本文提出了一种归因评价机制用于评价卷积核所学特征与因果特征的相关度,将模型中与因果特征相关度较低的卷积核进行裁剪,以实现模型压缩的目的,同时也能够保留原模型的归因特征,称此算法为归因剪枝. (1) Since every filter can learn unique feature information, we propose an attribution mechanism to evaluate the correlation between the features learned by a filter and the causal features. Filters with low correlation to the causal features are pruned, which compresses the model while retaining the attribution characteristics of the original model; this method is called attribution pruning.
56491 (2)第2种剪枝算法基于迭代优化剪枝框架,采用卷积通道和梯度中正相关特征评价相应卷积核重要度,以便于提高剪枝冗余卷积核的精准度,称为Taylor-guided剪枝算法. (2) The second pruning method, built on an iterative optimization pruning framework, evaluates filter importance using the positively correlated features of the convolution channel and its gradient, thereby improving the accuracy of pruning redundant filters; it is called Taylor-guided pruning.
56492 本文在VGGNet和ResNet两种网络架构上进行实验验证,结果表明:归因剪枝算法可以极大地保留原模型的归因特征; We evaluate the two pruning methods on the VGGNet and ResNet architectures. Extensive experiments demonstrate that attribution pruning largely retains the attribution characteristics of the original model.
56493 并且两种剪枝算法能够取得比当前主流剪枝算法更优异的压缩效果. Moreover, the two pruning methods can achieve better compression than current mainstream pruning methods.
56494 本文研究K-近邻分类器的鲁棒性验证问题. We study the robustness verification problem for K-NN classifiers.
56495 形式化鲁棒性验证的目标是计算分类器在给定样本点上的最小对抗扰动的精确值或者最小对抗扰动的非平凡下界. The goal of formal robustness verification is to compute the exact minimal adversarial perturbation of the classifier at a given sample, or a non-trivial lower bound of that perturbation.
56496 我们将计算K-近邻分类器的最小对抗扰动形式化为一组二次规划问题. We formalize the computation of the minimal adversarial perturbation of a K-NN classifier as a set of quadratic programming problems.
56497 二次规划问题的数目随近邻参数K的增大呈指数级增长,精确求解该组二次规划问题往往不可行.约束放松法通过放松优化的约束项,可以在多项式时间内求解最小对抗扰动的下界.然而,本文通过理论分析和实验发现,当近邻参数K取值较大时,约束放松法求得的下界往往过于宽松,甚至会出现K越大下界越小的反直觉结果. The number of quadratic programming problems grows exponentially with the neighborhood parameter K, so solving them exactly is generally infeasible. The constraint relaxation method relaxes the optimization constraints to compute a lower bound of the minimal adversarial perturbation in polynomial time. However, through theoretical analysis and experiments, we find that when K is large, the resulting lower bound tends to be extremely loose, and may even yield the counterintuitive result that the bound decreases as K increases.