ID 原文 (Source) 译文 (Translation)
643 该算法结合 HRRP 数据特性,利用平均像在散射点不发生越距离单元走动的方位帧内具有稳健物理特性的性质,基于变分自编码器构建了稳健变分自编码模型。 Combining the characteristics of HRRP data and exploiting the robust physical properties of the average profile within aspect frames in which scatterers do not migrate through range cells, the robust variational auto-encoder (RVAE) model is built upon the variational auto-encoder.
644 该模型不仅能够获取稳健有效的识别特征,而且在一定程度上保存了数据的帧内结构信息,较大地提高了目标的平均识别率。 The model not only extracts robust and effective recognition features but also preserves, to some extent, the intra-frame structural information of the data, which considerably improves the average recognition rate of targets.
645 基于实测 HRRP 数据验证了所提算法的有效性。 Experiments on measured HRRP data verify the effectiveness of the proposed algorithm.
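The abstract does not give the RVAE network details; as a point of reference, a minimal sketch of the plain variational auto-encoder that RVAE builds on, assuming a PyTorch implementation with hypothetical layer and latent sizes, could look like this:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Plain variational auto-encoder for an HRRP vector of length n_range."""
    def __init__(self, n_range=256, n_latent=32):   # hypothetical sizes
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_range, 128), nn.ReLU())
        self.mu = nn.Linear(128, n_latent)
        self.logvar = nn.Linear(128, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_range))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    """Reconstruction term plus KL divergence to the standard normal prior."""
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

The RVAE described in the abstract additionally exploits the average profile of each aspect frame to make the latent features robust, which this sketch does not model.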
646 生成适应模型利用生成对抗网络实现模型结构,并在领域适应学习上取得了突破。 The generate-to-adapt model uses a generative adversarial network to implement its model structure and has achieved a breakthrough in domain adaptation learning.
647 但其部分网络结构缺少信息交互,且仅使用对抗学习不足以完全减小域间距离,从而使分类精度受到影响。 However, some of its network structures lack information interaction, and adversarial learning alone is not sufficient to fully reduce the inter-domain distance, which degrades classification accuracy.
648 为此,提出一种基于生成对抗网络的无监督域适应分类模型(Unsupervised Domain Adaptation classification model based on GAN,UDAG)。 To address this, an unsupervised domain adaptation classification model based on generative adversarial networks (UDAG) is proposed.
649 该模型通过联合使用生成对抗网络和多核最大均值差异度量准则优化域间差异,并充分利用无监督对抗训练及监督分类训练之间的信息传递以学习源域分布和目标域分布之间的共享特征。 The model reduces the inter-domain discrepancy by jointly using a generative adversarial network and the multi-kernel maximum mean discrepancy (MK-MMD) criterion, and makes full use of the information transfer between unsupervised adversarial training and supervised classification training to learn features shared by the source and target domain distributions.
650 通过在四种域适应情况下的实验结果表明,UDAG 模型学习到更优的共享特征嵌入并实现了域适应图像分类,且分类精度有明显提高。 Experimental results under four domain adaptation settings show that the UDAG model learns a better shared feature embedding and achieves domain-adaptive image classification with significantly improved accuracy.
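The abstract names the multi-kernel maximum mean discrepancy (MK-MMD) as the criterion used alongside adversarial training to reduce the inter-domain distance; a minimal sketch of such a loss, assuming PyTorch, Gaussian kernels, and hypothetical bandwidths, could be:

```python
import torch

def gaussian_kernels(a, b, gammas):
    """Sum of Gaussian kernels between the rows of a and b (multi-kernel)."""
    dist2 = torch.cdist(a, b) ** 2                      # pairwise squared distances
    return sum(torch.exp(-g * dist2) for g in gammas)

def mk_mmd2(source, target, gammas=(0.5, 1.0, 2.0)):   # hypothetical bandwidths
    """Squared multi-kernel MMD between source and target feature batches."""
    k_ss = gaussian_kernels(source, source, gammas).mean()
    k_tt = gaussian_kernels(target, target, gammas).mean()
    k_st = gaussian_kernels(source, target, gammas).mean()
    return k_ss + k_tt - 2.0 * k_st

# Usage: add mk_mmd2(f_src, f_tgt) to the supervised classification loss so the
# shared encoder pulls the source and target feature distributions together.
```

How this term is weighted against the adversarial and classification losses in UDAG is not specified in the abstract.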
651 视频帧中复杂的环境背景、照明条件等与行为无关的视觉信息给行为空间特征带来了大量的冗余和噪声,一定程度上影响了行为识别的准确性。 In video frames, complex backgrounds, lighting conditions, and other visual information unrelated to the action introduce considerable redundancy and noise into the spatial features of the action, which affects the accuracy of action recognition to some extent.
652 针对这一点,本文提出了一种循环区域关注单元以捕捉空间特征中与行为相关的区域视觉信息,并根据视频的时序特性又提出了循环区域关注模型。 To address this, this paper proposes a recurrent region attention cell to capture the action-related regional visual information in the spatial features, and, based on the temporal nature of video, further proposes a recurrent region attention model (RRA).
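The abstract does not specify the internal form of the recurrent region attention cell; a minimal sketch of the general idea (spatial regions of each frame weighted by attention conditioned on the previous hidden state, then accumulated over time), assuming PyTorch and hypothetical module names, could be:

```python
import torch
import torch.nn as nn

class RegionAttentionCell(nn.Module):
    """Weights the K spatial regions of a frame feature map using the previous
    recurrent hidden state, then returns the attended feature vector."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim + hidden_dim, 1)

    def forward(self, regions, h_prev):
        # regions: (batch, K, feat_dim); h_prev: (batch, hidden_dim)
        h_rep = h_prev.unsqueeze(1).expand(-1, regions.size(1), -1)
        scores = self.score(torch.cat([regions, h_rep], dim=-1))   # (batch, K, 1)
        alpha = torch.softmax(scores, dim=1)                       # region weights
        return (alpha * regions).sum(dim=1)                        # (batch, feat_dim)

class RecurrentRegionAttention(nn.Module):
    """Applies the attention cell frame by frame and accumulates with a GRU."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.attend = RegionAttentionCell(feat_dim, hidden_dim)
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)

    def forward(self, frames):
        # frames: (batch, T, K, feat_dim) regional CNN features of T video frames
        h = frames.new_zeros(frames.size(0), self.rnn.hidden_size)
        for t in range(frames.size(1)):
            h = self.rnn(self.attend(frames[:, t], h), h)
        return h   # final state summarizes the action-relevant regions over time
```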