ID Source Text Translation
20725 算法均以网络连通性为基础,且均以传播时延为目标重新更新控制器集合。 Both algorithms are based on network connectivity, and both re-update the controller set with propagation delay as the optimization objective.
20726 仿真实验表明,该算法在保证任意时刻网络负载均衡的同时,可以保证较低的传播时延, Simulation results show that the proposed algorithm not only guarantees load balancing of the network at any time, but also maintains a low propagation delay,
20727 与Pareto模拟退火算法、改进的K-Means算法等相比,可以使网络负载均衡情况平均提高40.65%。 Compared with the Pareto Simulated Annealing (PSA) algorithm, the improved K-Means algorithm, etc., it improves the Network Load Balancing Index (NLBI) by 40.65% on average.
20728 基于灰度图像隐写算法直接应用于彩色图像引起的安全性问题,该文针对彩色分量提出一种动态更新失真代价的空域隐写算法。 Considering the security problems caused by directly applying steganographic algorithms designed for gray-scale images to color images, a spatial-domain steganographic algorithm that dynamically updates the distortion costs of color components is proposed.
20729 首先,分析了彩色分量内容特性与通道间相关性的关系,提出中心元素的失真更新准则。 First, the relationship between the content characteristics of color components and the inter-channel correlation is analyzed, and a distortion-update criterion for the center element is proposed.
20730 随后,考虑到隐写过程中邻域分量嵌入修改产生的交互影响,得到维持邻域相关性的最优修改方式。 Then, considering the mutual influence of embedding modifications on neighboring components during steganography, the optimal modification mode that maintains neighborhood correlation is obtained.
20731 最后,提出彩色分量的失真代价动态更新策略(CCMS) Finally, a dynamic distortion-cost update strategy for color components, the Modification Strategy for Color Components (CCMS), is proposed.
20732 实验表明,在5种嵌入率下HILL-CCMS,WOW-CCMS算法对彩色隐写特征CRM,SCCRM的抗检测能力明显高于HILL和WOW算法。 Experimental results show that, at five embedding rates, the proposed HILL-CCMS and WOW-CCMS algorithms resist the color steganalytic features CRM and SCCRM significantly better than the HILL and WOW algorithms.
20733 双向长短时记忆模型(BLSTM)由于其强大的时间序列建模能力,以及良好的训练稳定性,已经成为语音识别领域主流的声学模型结构。 Owing to its strong time-series modeling capability and good training stability, the Bidirectional Long Short-Term Memory (BLSTM) model has become the mainstream acoustic model structure in speech recognition.
20734 但是该模型结构很容易过拟合。在实际应用中,通常会使用一些技巧来缓解过拟合问题,例如在待优化的目标函数中加入L2正则项就是常用的方法之一。 However, this model structure is prone to overfitting. In practice, some tricks are commonly used to alleviate the overfitting problem; adding an L2 regularization term to the objective function to be optimized is one common method.
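The L2 regularization mentioned in entry 20734 can be illustrated with a minimal sketch: the penalty is the sum of squared weights scaled by a coefficient, added to the base training loss. The function name `l2_regularized_loss`, the coefficient value, and the weight arrays below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def l2_regularized_loss(base_loss, weights, lam=1e-4):
    """Return base_loss plus an L2 penalty lam * sum_i ||W_i||^2,
    a common trick to alleviate overfitting (illustrative sketch)."""
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return base_loss + penalty

# Illustrative values: two small weight arrays and a base loss of 1.0.
weights = [np.ones((2, 2)), np.full((3,), 2.0)]
loss = l2_regularized_loss(1.0, weights, lam=0.01)
# penalty = 0.01 * (4 * 1.0 + 3 * 4.0) = 0.16, so loss = 1.16
```

Because the penalty grows with the squared magnitude of the weights, minimizing the regularized objective pushes the BLSTM weights toward smaller values, which is what discourages overfitting.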