ID 原文 (Source) 译文 (Translation)
24685 最后通过仿真算例检验所提方法的有效性。 Finally, the validity of the proposed method is verified by simulation examples.
24686 结果表明,该定位方法在一定条件下可以较精确地确定场源位置、分界面方位。 The results show that, under certain conditions, the localization method can determine the position of the field source and the orientation of the interface with good accuracy.
24687 该方法经适当完善还可以应用于鱼雷等水下航行器的制导过程。 With suitable refinement, the method can also be applied to the guidance of torpedoes and other underwater vehicles.
24688 针对合成孔径雷达(Synthetic Aperture Radar,SAR)图像中飞机目标散射点离散化程度高,周围背景干扰复杂,现有算法对飞机浅层语义特征表征能力弱等问题,本文提出了基于注意力特征融合网络(Attention Feature Fusion Network,AFFN)的 SAR 图像飞机目标检测算法。 To address the highly discrete scattering points of aircraft targets and the complex surrounding background interference in Synthetic Aperture Radar (SAR) images, as well as the weak capability of existing algorithms to represent the shallow semantic features of aircraft, this paper proposes an aircraft detection algorithm for SAR images based on an Attention Feature Fusion Network (AFFN).
24689 通过引入瓶颈注意力模块(Bottleneck Attention Module,BAM),本文在 AFFN 中构建了包含注意力双向特征融合模块(Attention Bidirectional Feature Fusion Module,ABFFM)与注意力传输连接模块(Attention Transfer Connection Block,ATCB)的注意力特征融合策略并合理优化了网络结构,提升了算法对飞机离散化散射点浅层语义特征的提取与判别。 By introducing the Bottleneck Attention Module (BAM), an attention feature fusion strategy consisting of an Attention Bidirectional Feature Fusion Module (ABFFM) and an Attention Transfer Connection Block (ATCB) is constructed in AFFN, and the network structure is rationally optimized, strengthening the algorithm's extraction and discrimination of the shallow semantic features of the discrete scattering points of aircraft.
24690 基于自建的 Gaofen-3 TerraSAR-X 卫星图像混合飞机目标实测数据集,实验对 AFFN 与基于深度学习的通用目标检测以及 SAR 图像特定目标检测算法进行了比较,其结果验证了 AFFN对SAR图像飞机目标检测的准确性与高效性。 Based on a self-built mixed aircraft dataset of measured Gaofen-3 and TerraSAR-X satellite images, AFFN is compared experimentally with deep-learning-based general object detection algorithms and with detection algorithms designed for specific targets in SAR images. The results verify the accuracy and efficiency of AFFN for aircraft detection in SAR images.
24691 针对循环生成对抗网络 CycleGAN(Cycle Generative Adversarial Networks)在光学图像迁移生成水下小目标合成孔径声纳图像过程中存在质量差和速度慢的问题,本文提出一种新的特征提取单元 SDK(Selective Dilated Kernel),并利用 SDK 设计了一个新的生成器网络 SDKNet。 The original Cycle Generative Adversarial Network (CycleGAN) suffers from poor image quality and slow speed when translating optical images into synthetic aperture sonar images of small underwater targets. To address these problems, a novel feature-extraction unit, the Selective Dilated Kernel (SDK), is proposed, and a new generator network, SDKNet, is designed by stacking SDK blocks.
24692 与此同时,提出了一种新的循环一致损失函数 MS-CCLF(Multiscale Cyclic Consistent Loss Function),MS-CCLF 增加了图像多尺度结构相似性约束。 At the same time, a new cycle-consistency loss function, the Multiscale Cycle-Consistent Loss Function (MS-CCLF), is proposed, which adds a multiscale structural-similarity constraint between the input images and the reconstructed images.
24693 在自建的图像迁移数据集OPT-SAS 上,本文 SM-CycleGAN(Selective and Multiscale Cycle Generative Adversarial Networks)比原始 CycleGAN 的图像迁移质量提升 4.64%,生成器网络参数降低 4.13MB, 运算时间减少 0.143s。 On the self-built image translation dataset OPT-SAS, the proposed SM-CycleGAN (Selective and Multiscale Cycle Generative Adversarial Network) improves image translation quality by 4.64% over the original CycleGAN, while reducing the generator parameters by 4.13 MB and the computation time by 0.143 s.
24694 实验结果表明,SM-CycleGAN 更适合水下小目标光学图像到合成孔径声纳图像的迁移任务。 The experimental results show that SM-CycleGAN is better suited to the task of translating optical images of small underwater targets into synthetic aperture sonar images.
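Note on entry 24692: the MS-CCLF described there augments CycleGAN's cycle-consistency loss with a multiscale structural-similarity (MS-SSIM) constraint between an input image and its reconstruction. The following is only a minimal NumPy sketch of that general idea, not the authors' implementation: the weighting factor `alpha`, the scale count, the L1 mixing, and the use of a single global SSIM window per scale are all assumptions made for illustration.

```python
import numpy as np

def _ssim(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified global (single-window) SSIM for images scaled to [0, 1].
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def _downsample(x):
    # 2x2 average pooling (crop to even size first).
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] +
            x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def ms_ssim(x, y, scales=3):
    # Multiscale SSIM: product of per-scale SSIM values.
    vals = []
    for _ in range(scales):
        vals.append(_ssim(x, y))
        x, y = _downsample(x), _downsample(y)
    return float(np.prod(vals))

def cycle_consistency_loss(x, x_rec, alpha=0.84):
    # Hypothetical MS-SSIM-augmented cycle loss: structural term plus L1 term.
    l1 = np.abs(x - x_rec).mean()
    return alpha * (1.0 - ms_ssim(x, x_rec)) + (1.0 - alpha) * l1
```

For identical input and reconstruction the loss is zero; any structural or intensity discrepancy between the two images makes it positive, which is the property the multiscale constraint is meant to enforce.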