ID |
原文 (Source) |
译文 (Translation) |
58718 |
在收敛速度最快的前提下,很大程度避免了其他几种优化算法易陷入局部最优的问题,整体性能最佳。 |
While achieving the fastest convergence speed, the QOGSA largely avoids the problem that the other optimization algorithms tend to get trapped in local optima, and attains the best overall performance.
58719 |
针对分布式同步正交匹配跟踪算法精度不高的问题,基于多次迭代搜索的思想,提出两种高精度宽带欠定信号到达角估计方法。 |
To address the limited accuracy of the distributed compressed sensing-simultaneous orthogonal matching pursuit (DCS-SOMP) algorithm, two high-accuracy underdetermined wideband direction-of-arrival (DOA) estimation methods are proposed based on the idea of multiple iterative searches.
58720 |
首先,构建稀疏阵列宽带信号处理模型,并通过稀疏表示将到达角估计转化为分布式压缩感知问题; |
First, a wideband signal processing model based on the sparse array is established, and the DOA estimation is converted into a distributed compressed sensing problem through sparse representation.
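To make the sparse-representation step concrete, the following is a minimal numpy sketch with hypothetical array geometry, frequency bins, and source angles (not the paper's exact model): each frequency bin yields a measurement over a common angle grid, and all bins share one sparse support, which is what turns wideband DOA estimation into a distributed (joint-sparse) compressed sensing problem.

```python
# A minimal sketch (hypothetical geometry, frequencies, and angles). Each bin f
# gives x_f = A_f(theta_grid) s_f + n_f, where A_f is an overcomplete steering
# dictionary over a fixed angle grid; all bins share one sparse support.
import numpy as np

rng = np.random.default_rng(0)
c = 3e8                                            # propagation speed (m/s)
freqs = np.array([0.8e9, 1.0e9, 1.2e9])            # assumed frequency bins (Hz)
d0 = c / (2 * freqs.max())                         # half-wavelength at the highest bin
sensor_pos = np.array([0, 1, 4, 6]) * d0           # assumed sparse (non-uniform) array
grid = np.deg2rad(np.arange(-90.0, 90.0, 1.0))     # angle grid for the sparse representation

def steering_dict(f, pos, angles):
    """Steering dictionary at frequency f: one column per candidate angle."""
    return np.exp(-2j * np.pi * f / c * np.outer(pos, np.sin(angles)))

true_doas = np.deg2rad([-20.0, 35.0])              # assumed on-grid sources
measurements, dicts = [], []
for f in freqs:
    A = steering_dict(f, sensor_pos, grid)         # (sensors) x (grid points)
    s = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    x = steering_dict(f, sensor_pos, true_doas) @ s
    x += 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
    measurements.append(x)                         # per-bin measurement x = A s_sparse + n
    dicts.append(A)
```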
58721 |
其次,利用矩阵变换去除噪声污染项,以消除噪声功率的影响; |
Then, the noise-contaminated terms are removed through a matrix transformation to eliminate the influence of the noise power.
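The sentence above does not spell out the exact transformation; one common way a noise-power term is eliminated, shown here purely as an illustrative assumption, relies on the fact that spatially white noise contributes only to the diagonal of the array covariance.

```python
# Illustrative assumption (the paper's exact matrix transformation may differ):
# with spatially white noise, R = A Rs A^H + sigma^2 * I, so the noise power
# appears only on the diagonal and can simply be discarded.
import numpy as np

def remove_noise_diagonal(R):
    """Return a copy of the covariance with its diagonal (the sigma^2 * I term) removed."""
    R_clean = R.copy()
    np.fill_diagonal(R_clean, 0.0)
    return R_clean

# Usage: R_hat = X @ X.conj().T / X.shape[1]   # sample covariance from snapshots X
#        R_denoised = remove_noise_diagonal(R_hat)
```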
58722 |
然后,分别利用邻近网格搜索和网格精细化搜索两种方式进行改进,以提高无网格失配和有网格失配条件下的到达角估计精度。 |
After that, two improvements, a neighboring-grid search and a grid-refinement search, are introduced to raise the DOA estimation accuracy under the conditions without and with grid mismatch, respectively.
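As an illustration of the grid-refinement idea only (not the paper's exact algorithm), the sketch below re-searches a finer grid around a coarse on-grid estimate, accumulating a matched-filter score over the frequency bins; `measurements`, `freqs`, and `sensor_pos` are the hypothetical quantities from the earlier sketch.

```python
# Illustrative grid-refinement search: place a finer grid around a coarse on-grid
# DOA estimate and pick the candidate that maximizes a score accumulated over bins.
import numpy as np

def refine_estimate(x_bins, freqs, sensor_pos, theta_coarse,
                    width_deg=1.0, step_deg=0.05, c=3e8):
    """Search a fine grid of half-width width_deg (deg) around theta_coarse (rad)."""
    fine = theta_coarse + np.deg2rad(np.arange(-width_deg, width_deg + step_deg, step_deg))
    score = np.zeros(fine.size)
    for f, x in zip(freqs, x_bins):
        A = np.exp(-2j * np.pi * f / c * np.outer(sensor_pos, np.sin(fine)))
        score += np.abs(A.conj().T @ x) ** 2      # accumulate matched-filter response
    return fine[np.argmax(score)]

# e.g. refine_estimate(measurements, freqs, sensor_pos, np.deg2rad(35.0))
```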
58723 |
仿真结果表明,所提算法是有效的,在保持运算速度优势的前提下,较分布式同步正交匹配跟踪算法显著提升了估计精度。 |
Simulation results show that the proposed algorithms are effective: while retaining the advantage in computational speed, they achieve a significantly higher estimation accuracy than DCS-SOMP.
58724 |
虽然批归一化算法能有效加速深度卷积网络模型的收敛速度,但其数据依赖性复杂,训练时会导致严重的“存储墙”瓶颈。 |
Although Batch Normalization (BN) can effectively accelerate the convergence of deep convolutional network models, its complex data dependence causes a severe "memory wall" bottleneck during training.
58725 |
故对使用批归一化算法的卷积神经网络,提出多层融合且重构批归一化层的训练方法,减少模型训练过程中的访存量。 |
Therefore, for convolutional neural networks (CNNs) that use BN, a training method that fuses multiple layers and reconstructs the BN layer is proposed to reduce the amount of memory access during model training.
58726 |
首先,通过分析训练时批归一化层的数据依赖、访存特征及模型训练时的访存特征,分析访存瓶颈的关键因素; |
First, the data dependence and memory access pattern of the BN layer during training, together with the memory access characteristics of the whole model training process, are analyzed to identify the key factors behind the memory access bottleneck.
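A back-of-the-envelope calculation, under assumed FP32 precision and a hypothetical NCHW activation shape, of why a standalone BN layer is dominated by feature-map traffic rather than arithmetic:

```python
# Estimate of the feature-map traffic of one standalone BN layer per training step
# (assumed FP32, hypothetical activation shape); BN is memory-bound, not compute-bound.
N, C, H, W = 32, 256, 56, 56
bytes_per_elem = 4                              # FP32
feature_map = N * C * H * W * bytes_per_elem    # one activation tensor in bytes

forward_traffic = 2 * feature_map               # read x, write y
backward_traffic = 3 * feature_map              # read saved x and dy, write dx
total = forward_traffic + backward_traffic
print(f"standalone BN layer: ~{total / 2**20:.0f} MiB of feature-map traffic per step")
```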
58727 |
其次,使用“计算换访存”思想,提出融合“卷积层+批归一化层+激活层”结构的方法, |
Second, following the idea of trading computation for memory access, a method is proposed that fuses the "convolution + BN + activation (ReLU, Rectified Linear Unit)" structure into a single computational block, reducing memory access during training through recomputation.
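The sketch below illustrates the general "trade computation for memory access" technique in PyTorch using activation checkpointing; it is a minimal stand-in for such a fused block, not the paper's implementation, and the layer sizes in the usage note are hypothetical.

```python
# Minimal PyTorch sketch of trading computation for memory access: the BN/ReLU
# intermediates are not kept after the forward pass and are recomputed from the
# convolution output during backward (activation checkpointing).
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class FusedConvBNReLU(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU()

    def _bn_relu(self, x):
        # Recomputed in the backward pass; note that recomputation updates the BN
        # running statistics a second time, which a full implementation would guard against.
        return self.act(self.bn(x))

    def forward(self, x):
        y = self.conv(x)
        return checkpoint(self._bn_relu, y, use_reentrant=False)

# Usage (hypothetical sizes): out = FusedConvBNReLU(64, 128)(torch.randn(8, 64, 56, 56))
```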