ID Source Text Translation
38896 与传统三维卷积神经网络相比,通道可分离卷积神经网络加入模型正则化,通过降低训练精度同时提高测试精度,降低了模型的过度拟合。 Compared with traditional 3D convolutional neural networks, the channel-separable convolutional neural network introduces model regularization, which reduces overfitting by lowering training accuracy while improving test accuracy.
38897 在UCF-101和HMDB-51数据集上的实验分别达到92.7%和64.5%的准确率。 Experiments on the UCF-101 and HMDB-51 datasets achieve accuracies of 92.7% and 64.5%, respectively.
38898 结果表明,通道可分离卷积神经网络可以提高准确率并降低计算复杂度。 The results show that the channel-separable convolutional neural network can improve accuracy while reducing computational complexity.
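Entry 38896 rests on the factorization behind channel-separable convolution. As a minimal sketch of that idea (not the paper's implementation; the class name, layer choices, and tensor sizes below are illustrative assumptions), the PyTorch module splits a dense k×k×k 3D convolution into a per-channel depthwise spatiotemporal convolution followed by a 1×1×1 pointwise convolution:

```python
import torch
import torch.nn as nn

class ChannelSeparableConv3d(nn.Module):
    """Factorizes a dense 3D convolution into a depthwise 3D convolution
    (one spatiotemporal filter per channel, groups=in_channels) followed by
    a 1x1x1 pointwise convolution that mixes information across channels."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Depthwise: each input channel is convolved with its own k x k x k filter.
        self.depthwise = nn.Conv3d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False)
        # Pointwise: a 1x1x1 convolution recombines information across channels.
        self.pointwise = nn.Conv3d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (N, C, T, H, W)
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

# A dense 3D conv needs C_in * C_out * k^3 weights; the separable version needs
# C_in * k^3 + C_in * C_out, which is where the computational savings come from.
block = ChannelSeparableConv3d(64, 128)
clip = torch.randn(2, 64, 16, 56, 56)  # a batch of 16-frame feature clips
print(block(clip).shape)               # torch.Size([2, 128, 16, 56, 56])
```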
38899 阴影检测向来是计算机视觉领域的一个基础性挑战。 Shadow detection has long been a fundamental challenge in computer vision.
38900 它需要网络理解图像的全局语义和局部细节信息。 It requires the network to understand both the global semantics and the local details of an image.
38901 本文提出了一种检测阴影区域的先验特征金字塔网络结构。 In this paper, we propose a Prior Feature Pyramid Network architecture for detecting shadow regions.
38902 该网络搭建了先验加权模块来提取图像中蕴含的阴影先验信息,通过使用阴影先验信息加权卷积特征,引导网络学习到阴影区域。 The network builds a prior weighting module to extract the shadow prior information contained in the image and uses this prior to weight the convolutional features, guiding the network to learn shadow regions.
38903 同时,该网络还应用了特征融合模块来融合粗略的语义信息和自上而下路径中的精细特征,并且加入了后处理,进一步优化网络的预测结果。 Meanwhile, the network also applies a feature fusion module to fuse the coarse-level semantic information with the fine-level features in the top-down pathway, and adds a post-processing step to further refine the prediction results.
38904 本文在两个公开的阴影检测基准数据集上进行了实验来评估其网络性能。 We conducted experiments on two public shadow detection benchmark datasets to evaluate the performance of our network.
38905 实验表明,本文的方法能够更准确地检测到阴影,和过去最先进的方法相比也表现出色,在SBU数据集上正确率达到了96.6%,平衡检测错误因子为6.22。 Experimental results show that our approach detects shadows more accurately and performs favorably against previous state-of-the-art methods, achieving 96.6% accuracy and a balanced error rate (BER) of 6.22 on the SBU dataset.
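Entries 38902-38903 describe two components of the shadow network: a prior weighting module that reweights convolutional features with an extracted shadow prior, and a feature fusion module that merges coarse semantics with fine features along the top-down pathway. The sketch below gives one plausible reading of both, plus the balanced error rate metric cited in entry 38905; every module name, layer choice, and the residual "1 + prior" weighting scheme are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorWeighting(nn.Module):
    """Predict a single-channel shadow prior from a feature map, then use it
    to reweight the features so later layers focus on likely shadow regions
    (hypothetical reading of the prior weighting module)."""

    def __init__(self, channels):
        super().__init__()
        self.prior_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):                          # feats: (N, C, H, W)
        prior = torch.sigmoid(self.prior_head(feats))  # shadow prior in [0, 1]
        # Residual weighting keeps the original signal while emphasising
        # locations the prior marks as shadow.
        return feats * (1.0 + prior), prior

class FeatureFusion(nn.Module):
    """FPN-style fusion: upsample the coarse semantic map to the fine
    resolution and merge it with the fine-level features."""

    def __init__(self, coarse_ch, fine_ch, out_ch):
        super().__init__()
        self.reduce = nn.Conv2d(coarse_ch, out_ch, 1)  # align channel widths
        self.lateral = nn.Conv2d(fine_ch, out_ch, 1)
        self.smooth = nn.Conv2d(out_ch, out_ch, 3, padding=1)

    def forward(self, coarse, fine):
        up = F.interpolate(self.reduce(coarse), size=fine.shape[-2:],
                           mode="bilinear", align_corners=False)
        return self.smooth(self.lateral(fine) + up)

def balanced_error_rate(pred, gt):
    """BER as commonly defined in the shadow detection literature:
    100 * (1 - 0.5 * (TP / N_pos + TN / N_neg)) over binary masks."""
    pred, gt = pred.bool(), gt.bool()
    tp = (pred & gt).sum().float()
    tn = (~pred & ~gt).sum().float()
    return 100.0 * (1.0 - 0.5 * (tp / gt.sum() + tn / (~gt).sum()))

# Example: fuse a coarse 1/32-resolution map with 1/16-resolution features.
fuse = FeatureFusion(coarse_ch=256, fine_ch=128, out_ch=128)
out = fuse(torch.randn(1, 256, 8, 8), torch.randn(1, 128, 16, 16))
print(out.shape)  # torch.Size([1, 128, 16, 16])
```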