ID 原文 译文
26085 通过 Rayleigh 杂波中 Swerling 1 型起伏目标的检测及跟踪结果证明了所提算法的性能。 The performance of the proposed algorithm is verified by the detection and tracking results of Swerling 1 fluctuating targets in Rayleigh clutter.
26086 为构建拥有 2D 神经网络速度同时保持 3D 神经网络性能的视频行为识别模型,提出 3D 多支路聚合轻量网络行为识别算法。 To construct a video action recognition model with the speed of a 2D neural network while maintaining the performance of a 3D neural network, a 3D multi-branch aggregation lightweight network action recognition algorithm is proposed.
26087 首先,利用分组卷积将神经网络分割成多个支路; Firstly, the neural network is divided into multiple branches using grouped convolution.
26088 其次,为促进支路间信息流动,加入具有信息聚合功能的多路复用模块; Secondly, to promote the information exchange between branches, a multiplexer module with an information aggregation function is added.
26089 最后,引入自适应注意力机制,对通道与时空信息进行重定向。 Finally, an adaptive attention mechanism is introduced to redirect channel and spatio-temporal information.
26090 实验表明,本算法在 UCF101数据集上的计算成本为 11.5GFlops,准确率为 96.2%; Experiments show that the computational cost of the algorithm on the UCF101 dataset is 11.5 GFLOPs and the accuracy is 96.2%;
26091 HMDB51 数据集上的计算成本为 11.5GFlops,准确率为74.7%。 The computational cost on the HMDB51 dataset is 11.5 GFLOPs and the accuracy is 74.7%.
26092 与其他行为识别算法相比,提高了视频识别网络的效率,体现出一定识别速度和准确率优势。 Compared with other action recognition algorithms, the proposed method improves the efficiency of the video recognition network and shows advantages in both recognition speed and accuracy.
26093 针对现有 RGBD 场景流计算模型在复杂场景、非刚性运动和运动遮挡等情况下易产生场景过度平滑和运动边缘模糊的问题,提出一种基于 FRFCM (Fast and Robust Fuzzy C-Means)聚类与深度优化的 RGBD 场景流计算方法。 To address the scene over-smoothing and motion edge blurring produced by existing RGBD scene flow models under complex scenes, non-rigid motion and motion occlusions, this paper proposes an RGBD scene flow method based on FRFCM (Fast and Robust Fuzzy C-Means) clustering and depth optimization.
26094 首先以图像序列连续帧间光流信息为基准,利用 FRFCM 聚类算法对输入图像进行初始分割,然后根据深度图像的运动边缘信息优化初始分割结果,提取高置信度的运动分层信息。 First, taking the optical flow information between consecutive frames of the image sequence as the benchmark, the FRFCM clustering algorithm is used to obtain an initial segmentation of the input images. Then, the initial segmentation is optimized according to the motion edge information of the depth image to extract high-confidence hierarchical motion information.