ID 原文 (source) 译文 (translation)
19705 针对这一不足,该文提出一种基于双模板Siamese网络的视觉跟踪算法。 To address this deficiency, a visual tracking algorithm based on a dual-template Siamese network is proposed.
19706 首先,保留响应图中响应值稳定的初始帧作为基准模板R,同时使用改进的APCEs模型更新策略确定动态模板T。 First, the initial frame, whose response-map values are stable, is retained as the base template R, while the dynamic template T is determined by an improved APCEs model update strategy.
19707 然后,通过对候选目标区域与2个模板匹配度结果的综合分析,对结果响应图进行融合,以得到更加准确的跟踪结果。 Then, the matching results between the candidate target region and the two templates are comprehensively analyzed, and the resulting response maps are fused to obtain more accurate tracking results.
19708 在OTB2013和OTB2015数据集上的实验结果表明,与当前5种主流跟踪算法相比,该文算法的跟踪精度和成功率具有明显优势,不仅在尺度变化、平面内旋转、平面外旋转、遮挡、光照变化情况下具有较好的跟踪效果,而且达到了46 帧/s的跟踪速度。 Experimental results on the OTB2013 and OTB2015 datasets show that, compared with 5 current mainstream tracking algorithms, the proposed algorithm has clear advantages in tracking precision and success rate. It not only delivers better tracking performance under scale variation, in-plane rotation, out-of-plane rotation, occlusion, and illumination variation, but also achieves real-time tracking at 46 frames per second.
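The dual-template fusion described in this abstract can be sketched as follows. The fixed fusion weight `w` and the APCE (Average Peak-to-Correlation Energy) formula below follow the commonly used definitions and are illustrative assumptions; they are not the paper's exact formulation.

```python
# Minimal sketch of dual-template response-map fusion (illustrative only).
# `w` weights the base-template map against the dynamic-template map; the
# paper's actual fusion rule and APCE-based update threshold may differ.

def apce(response):
    """Average Peak-to-Correlation Energy of a 2-D response map (common definition)."""
    flat = [v for row in response for v in row]
    f_max, f_min = max(flat), min(flat)
    denom = sum((v - f_min) ** 2 for v in flat) / len(flat)
    return (f_max - f_min) ** 2 / denom if denom else 0.0

def fuse(resp_base, resp_dyn, w=0.5):
    """Element-wise weighted fusion of the two templates' response maps."""
    return [[w * a + (1 - w) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(resp_base, resp_dyn)]

base = [[0.1, 0.2], [0.3, 0.9]]   # response map against base template R
dyn  = [[0.2, 0.1], [0.4, 0.7]]   # response map against dynamic template T
fused = fuse(base, dyn, w=0.6)
```

A sharp, single-peaked map yields a high APCE score, which is why APCE-style criteria are used to decide when the dynamic template is reliable enough to update.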
19709 在机器视觉领域,预测人体运动对于及时的人机交互及人员跟踪等是非常有必要的。 In the field of computer vision, predicting human motion is essential for timely human–computer interaction and person tracking.
19710 为了改善人机交互及人员跟踪等的性能,该文提出一种基于双向门控循环单元(GRU)的编-解码器模型(EBiGRU-D)来学习3D人体运动并给出一段时间内的运动预测。 To improve the performance of human–computer interaction and person tracking, an encoder-decoder model called the Bi-directional Gated Recurrent Unit Encoder-Decoder (EBiGRU-D), based on the Gated Recurrent Unit (GRU), is proposed to learn 3D human motion and predict motion over a period of time.
19711 EBiGRU-D是一种深递归神经网络(RNN),其中编码器是一个双向GRU(BiGRU)单元,解码器是一个单向GRU单元。 EBiGRU-D is a deep Recurrent Neural Network (RNN) in which the encoder is a Bidirectional GRU (BiGRU) unit and the decoder is a unidirectional GRU unit.
19712 BiGRU使原始数据从正反两个方向同时输入并进行编码,编成一个状态向量然后送入解码器进行解码。BiGRU将当前的输出与前后时刻的状态关联起来,使输出充分考虑了前后时刻的特征,从而使预测更加准确。 BiGRU takes the raw data in both the forward and backward directions simultaneously and encodes it into a state vector, which is then sent to the decoder for decoding. BiGRU associates the current output with the states of the preceding and following time steps, so that the output fully considers the features of both, making the prediction more accurate.
19713 在human3.6m数据集上的实验表明EBiGRU-D不仅极大地改善了3D人体运动预测的误差还大大地增加了准确预测的时间。 Experiments on the human3.6m dataset show that EBiGRU-D not only greatly reduces the error of 3D human motion prediction but also greatly extends the time span over which predictions remain accurate.
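The bidirectional encoding this abstract describes can be sketched with a scalar GRU cell: the sequence is read forward and backward, and the pair of final hidden states forms the code vector passed to the decoder. The fixed scalar weights below are illustrative assumptions; real BiGRU layers use learned weight matrices and vector-valued states.

```python
import math

# Scalar sketch of a GRU cell and a bidirectional (BiGRU) encoder pass.
# The fixed weights (0.5) are placeholders, not trained parameters.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, wz=0.5, uz=0.5, wr=0.5, ur=0.5, wh=0.5, uh=0.5):
    """One GRU update: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(wz * x + uz * h)          # how much of the state to rewrite
    r = sigmoid(wr * x + ur * h)          # how much past state feeds the candidate
    h_tilde = math.tanh(wh * x + uh * r * h)
    return (1.0 - z) * h + z * h_tilde

def bigru_encode(seq):
    """Encode seq in both directions; the pair of final states is the code vector."""
    h_fwd = 0.0
    for x in seq:
        h_fwd = gru_cell(x, h_fwd)
    h_bwd = 0.0
    for x in reversed(seq):
        h_bwd = gru_cell(x, h_bwd)
    return (h_fwd, h_bwd)  # concatenated and handed to the unidirectional decoder

code = bigru_encode([0.1, 0.4, -0.2, 0.3])
```

Because each hidden state is a convex combination of the previous state and a tanh-bounded candidate, the encoded states stay in (-1, 1), which keeps the recurrence numerically stable over long sequences.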
19714 针对微机电惯性导航系统(MEMS-INS)定位解算存在积累误差及低功耗蓝牙技术iBeacon指纹定位存在跳变误差等问题,该文提出一种基于无迹卡尔曼滤波器(UKF)的iBeacon/MEMS-INS数据融合定位算法。 To overcome the accumulated error in Micro-Electro-Mechanical System Inertial Navigation System (MEMS-INS) positioning and the jump error in iBeacon fingerprint positioning based on Bluetooth Low Energy, an iBeacon/MEMS-INS data fusion localization algorithm based on the Unscented Kalman Filter (UKF) is proposed.
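The UKF predict/update cycle underlying such a fusion algorithm can be sketched for a one-dimensional state. The identity process and measurement models, the noise values Q and R, and the simulated position fixes below are placeholder assumptions, not the paper's iBeacon/MEMS-INS models.

```python
import math

# Minimal 1-D Unscented Kalman Filter sketch (illustrative only).
# Sigma points are propagated through identity process/measurement models;
# a real iBeacon/MEMS-INS fusion would use the INS motion model and the
# fingerprint-positioning measurement model instead.

def ukf_step(x, P, z, Q=0.01, R=0.1, alpha=0.1, kappa=0.0, beta=2.0):
    """One predict/update cycle of a scalar-state UKF."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n

    def sigma_points(m, v):
        s = math.sqrt((n + lam) * v)
        return [m, m + s, m - s]

    wm = [lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)]
    wc = [wm[0] + (1 - alpha ** 2 + beta), wm[1], wm[2]]

    # Predict: propagate sigma points through the (identity) process model.
    sig = sigma_points(x, P)
    x_pred = sum(w * s for w, s in zip(wm, sig))
    P_pred = sum(w * (s - x_pred) ** 2 for w, s in zip(wc, sig)) + Q

    # Update: regenerate sigma points, map through the (identity) measurement model.
    sig = sigma_points(x_pred, P_pred)
    z_pred = sum(w * s for w, s in zip(wm, sig))
    S = sum(w * (s - z_pred) ** 2 for w, s in zip(wc, sig)) + R        # innovation cov.
    C = sum(w * (s - x_pred) * (s - z_pred) for w, s in zip(wc, sig))  # cross cov.
    K = C / S                                                          # Kalman gain
    return x_pred + K * (z - z_pred), P_pred - K * S * K

x, P = 0.0, 1.0
for z in [0.5, 0.52, 0.48, 0.51]:  # simulated position fixes (placeholder data)
    x, P = ukf_step(x, P, z)
```

The estimate is pulled toward the noisy fixes while the covariance P shrinks with each update, which is how the fusion smooths iBeacon jump errors while bounding INS drift.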