ID 原文 (Source) 译文 (Translation)
25115 模板张量可以在双向 LSTM 神经网络分类模型的训练过程中不断地被优化,使得分解后的核心张量包含更加有效的张量结构和特征信息,有助于提高后续分类模型的准确性,实现案件罪名的精准认定。 The template tensor can be continuously optimized during the training of the Bi-LSTM neural network classification model, so that the decomposed core tensor contains more effective tensor structure and feature information; this helps improve the accuracy of the subsequent classification model and enables accurate determination of the charges in judicial cases.
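Entry 25115 hinges on projecting data onto a trainable template tensor and feeding the resulting core tensor to the classifier. A minimal Tucker-style sketch of how a core tensor is obtained (the Tucker form, the shapes, and the factor names are illustrative assumptions; the abstract does not specify which decomposition is used):

```python
import numpy as np

# Tucker-style projection: the core tensor G = X ×1 U1ᵀ ×2 U2ᵀ ×3 U3ᵀ carries
# the compressed structure that would be fed to the downstream classifier.
# All shapes and factor matrices below are illustrative, not from the paper.
X = np.random.rand(8, 6, 4)    # input data tensor
U1 = np.random.rand(8, 3)      # mode-1 factor ("template") matrix
U2 = np.random.rand(6, 2)      # mode-2 factor matrix
U3 = np.random.rand(4, 2)      # mode-3 factor matrix

# Contract each mode of X with the corresponding factor matrix.
G = np.einsum('ijk,ia,jb,kc->abc', X, U1, U2, U3)
print(G.shape)                 # (3, 2, 2) core tensor
```

In a trainable setting the factor matrices would be updated jointly with the classifier, which is what lets the core tensor retain increasingly task-relevant structure.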
25116 实验结果表明:所提出的基于张量分解和双向 LSTM 的司法案件定罪方法比现有方法具有更好的准确性。 Experimental results show that the proposed tensor-decomposition and Bi-LSTM based method for charge determination in judicial cases achieves better accuracy than existing methods.
25117 针对卷积神经网络中卷积运算复杂度高、计算量大及算法在 CPU 和 GPU 上计算时存在延时及功耗限制问题,从提高现有硬件平台计算速率、降低功耗角度出发,设计了一种基于 ZYNQ 的具有高吞吐率和低功耗的可重构神经网络加速系统。 To address the high complexity and heavy computational load of convolution operations in convolutional neural networks, as well as the latency and power-consumption limitations when the algorithm runs on CPUs and GPUs, a reconfigurable neural network acceleration system with high throughput and low power consumption based on ZYNQ is designed, with the aim of increasing the computing speed and reducing the power consumption of existing hardware platforms.
25118 为充分利用运算资源,探索了一种卷积运算循环优化电路; To make full use of computing resources, a loop-optimization circuit for convolution operations is explored;
25119 为降低带宽访问量,设计了一种数据在内存中的特殊排列方式。 To reduce bandwidth traffic, a special arrangement of the data in memory is designed.
25120 以 VGG16 网络为例,利用 ZYNQ 对系统进行加速,在计算性能上达到 62.00 GOPS 的有效算力,分别是 GPU 和 CPU 的 2.58 倍和 6.88 倍,其 MAC 利用率高达 98.20%,逼近 Roofline 模型理论值。 Taking the VGG16 network as an example and using ZYNQ to accelerate the system, an effective computing performance of 62.00 GOPS is achieved, 2.58 times and 6.88 times that of the GPU and CPU respectively; the MAC utilization is as high as 98.20%, approaching the theoretical bound of the Roofline model.
25121 加速器的计算功耗为 2.0 W,能效比为 31.00 GOPS/W,是 GPU 的 112.77 倍和 CPU 的 334.41 倍。 The computing power consumption of the accelerator is 2.0 W, and its energy-efficiency ratio is 31.00 GOPS/W, 112.77 times that of the GPU and 334.41 times that of the CPU.
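The figures in entries 25120 and 25121 are internally consistent; a quick check (throughput and power are taken from the text, while the baseline throughputs are derived here, not stated in the abstract):

```python
# Sanity-check the accelerator figures quoted in the abstract.
throughput_gops = 62.00          # effective compute (GOPS)
power_w = 2.0                    # computing power consumption (W)

efficiency = throughput_gops / power_w
print(efficiency)                # 31.0 GOPS/W, matching the stated ratio

# Baselines implied by the stated speedups (derived, not given in the text):
gpu_gops = throughput_gops / 2.58
cpu_gops = throughput_gops / 6.88
```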
25122 针对图像序列病态区域匹配歧义性以及稠密视差图连通性的问题,本文提出一种基于特征级联卷积神经网络的双目立体匹配计算方法。 To overcome the matching ambiguity in ill-posed regions of image sequences and to improve the connectivity of the dense disparity map, this paper proposes a binocular stereo matching method based on a feature-cascade convolutional neural network.
25123 构造特征重用的全卷积密集块,利用“跳连接”机制将浅层提取的特征图级联到后续子层,对深层卷积丢失的局部特征信息进行补偿。 A fully convolutional dense block with feature reuse is constructed; a "skip-connection" mechanism cascades the feature maps extracted in shallow layers to subsequent sub-layers, compensating for the local feature information lost in deep convolutions.
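The dense block in entry 25123 cascades shallow feature maps to every later sub-layer via skip connections. A minimal sketch of that wiring (the toy 1×1 layers and all names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def dense_block(x, layers):
    """Feature-reuse dense block (sketch): every layer receives the
    concatenation of the block input and all earlier layers' outputs,
    so shallow features are cascaded to each subsequent sub-layer."""
    features = [x]                                     # channel-first: (C, H, W)
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))  # skip-connect all prior maps
        features.append(out)
    return np.concatenate(features, axis=0)            # output reuses every map

# Illustrative 1x1 "conv" layers: each maps its input maps to 2 new feature maps.
def make_layer(seed):
    rng = np.random.default_rng(seed)
    def layer(f):
        w = rng.standard_normal((2, f.shape[0]))       # (out_channels, in_channels)
        return np.maximum(np.einsum('oc,chw->ohw', w, f), 0)  # 1x1 mix + ReLU
    return layer

x = np.random.rand(3, 5, 5)                            # 3-channel input
y = dense_block(x, [make_layer(0), make_layer(1)])
print(y.shape)                                         # (3 + 2 + 2, 5, 5) = (7, 5, 5)
```

The growing channel count (3 → 5 → 7 here) is exactly the feature-reuse property the abstract describes: deep layers still see the shallow maps directly instead of only their processed remnants.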
25124 引入指示函数划分一定大小的训练集,将其批量输入特征级联卷积网络模型进行前向传播,同时通过小批量梯度下降(Mini-Batch Gradient Descent,MBGD)策略更新初始权重和偏置参数。 An indicator function is introduced to partition the training set into batches of a given size, which are fed into the feature-cascade convolutional network model for forward propagation, while the initial weights and bias parameters are updated via the Mini-Batch Gradient Descent (MBGD) strategy.
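Entry 25124 updates the weights and biases with mini-batch gradient descent over fixed-size batches. A minimal MBGD sketch on a least-squares model (the linear model, quadratic loss, and all hyperparameters are illustrative stand-ins for the paper's network):

```python
import numpy as np

def mbgd(X, y, batch_size=32, lr=0.01, epochs=100, seed=0):
    """Mini-Batch Gradient Descent on 0.5 * mean squared error (illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)                          # initial weights
    b = 0.0                                  # initial bias
    for _ in range(epochs):
        idx = rng.permutation(n)             # reshuffle, then slice into batches
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            err = Xb @ w + b - yb            # forward pass: prediction error
            grad_w = Xb.T @ err / len(batch) # gradient of the batch loss w.r.t. w
            grad_b = err.mean()              # gradient w.r.t. b
            w -= lr * grad_w                 # parameter updates
            b -= lr * grad_b
    return w, b

X = np.linspace(0, 1, 200).reshape(-1, 1)
y = 2 * X[:, 0] + 1                          # noiseless line: slope 2, intercept 1
w, b = mbgd(X, y, batch_size=20, lr=0.5, epochs=500)
print(w[0], b)                               # recovers slope ≈ 2, intercept ≈ 1
```

In the paper's setting the same loop runs over the cascade network's parameters; only the forward pass and gradient computation differ.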