ID | Source (原文) | Translation (译文) |
7804 | 利用连续时间马尔可夫链捕获中继行为和系统随机状态两者之间的关系,通过实验得到了中继的不可信行为概率,验证了该预测模型的有效性。 | A continuous-time Markov chain is used to capture the relationship between relay behavior and the random state of the system; the probability of untrusted relay behavior is obtained experimentally, verifying the effectiveness of the proposed prediction model. |
7805 | 针对深度卷积神经网络(deep convolutional neural network,DCNN)迁移至高分辨率遥感场景分类的问题。 | To address the problem of transferring deep convolutional neural networks (DCNNs) to high-resolution remote sensing scene classification. |
7806 | 设计了有效的网络结构用于增强DCNN在高分辨率遥感场景分类任务中的泛化能力。 | An effective network structure is designed to enhance the generalization ability of the DCNN in high-resolution remote sensing scene classification tasks. |
7807 | 首先,线性主成分分析网络被用于整合高分辨率遥感图像的空间信息,减小DCNN在迁移过程中源数据集与目标数据集之间的空间差异。 | First, a linear principal component analysis network is used to integrate the spatial information of high-resolution remote sensing images, reducing the spatial discrepancy between the source and target datasets during DCNN transfer. |
7808 | 随后,经整合的图像输入预训练的DCNN,提取到更具泛化性能的全局特征表达。 | Then, the integrated images are fed into a pre-trained DCNN to extract global feature representations with stronger generalization performance. |
7809 | 两个公开遥感数据集(UC Merced 21和WHU-RS 19)的试验结果表明,在不改变DCNN结构参数的情况下,相比现有方法,所设计的网络结构能够有效提升遥感场景分类精度。 | Experimental results on two public remote sensing datasets (UC Merced 21 and WHU-RS 19) show that, without changing the structural parameters of the DCNN, the designed network structure effectively improves remote sensing scene classification accuracy compared with existing methods. |
7810 | 强化学习作为自学习和在线学习方法,以试错的方式与动态环境进行持续交互,进而学习到最优策略,成为机器学习领域一个重要的分支。 | As a self-learning and online learning method, reinforcement learning continuously interacts with a dynamic environment in a trial-and-error manner to learn the optimal policy, and has become an important branch of machine learning. |
7811 | 针对当前无线通信干扰策略研究依赖先验信息以及学习速度过慢的缺点,提出了基于正强化学习-正交分解的干扰策略选择算法。 | To address the drawbacks of current wireless communication jamming strategy research, namely its dependence on prior information and its slow learning speed, a jamming strategy selection algorithm based on positive reinforcement learning and orthogonal decomposition is proposed. |
7812 | 该算法利用正强化的思想提高了最优动作被选中的概率,进而加快了系统的学习速度。 | The algorithm uses the idea of positive reinforcement to increase the probability that the optimal action is selected, thereby accelerating the learning speed of the system. |
7813 | 特别地,当通信信号星座图因诸多因素而产生畸变时,利用提出的正交分解算法能够学习到最佳干扰信号的同相分量和正交分量,即通过学习获得最佳干扰样式。 | In particular, when the constellation diagram of the communication signal is distorted by various factors, the proposed orthogonal decomposition algorithm can learn the in-phase and quadrature components of the optimal jamming signal, i.e., the optimal jamming pattern is obtained through learning. |