ID 原文 译文
40066 最后利用多向延迟嵌入的逆向操作,得到补全的数据。该算法在BraTS脑胶质瘤影像数据集上进行了高低级别肿瘤分类实验,并与7种基线模型进行了比较。 Finally, the completed data are obtained through the inverse operation of multi-way delay embedding. High-grade and low-grade tumor classification experiments are conducted with the algorithm on the BraTS brain glioma image dataset, and it is compared with seven baseline models.
40067 实验结果表明,本文提出方法的平均分类准确率可达91.31%,与传统补齐算法相比具有较好的准确性。 Experimental results show that the average classification accuracy of the proposed method reaches 91.31%, which is better than that of traditional completion algorithms.
40068 AVS3作为中国第三代国家数字音视频编码技术标准,在消除视频时域/空域冗余信息方面发挥了重要的作用,但在消除感知冗余方面仍存在进一步优化的空间。 As China's third-generation national digital audio and video coding standard, AVS3 plays an important role in eliminating temporal/spatial redundancy in video, but there is still room for further optimization in eliminating perceptual redundancy.
40069 本文提出一种数据驱动的AVS3像素域最小可觉差(Just noticeable distortion,JND)预测模型,在尽量保证视觉主观质量的前提下,对AVS3视频编码器进行优化。 This paper proposes a data-driven pixel-domain just noticeable distortion (JND) prediction model for AVS3, which optimizes the AVS3 video encoder while preserving subjective visual quality as much as possible.
40070 首先基于主流的大型JND主观数据库,获取符合人眼视觉特性的像素域JND阈值; Firstly, pixel-domain JND thresholds consistent with the characteristics of human vision are obtained from a mainstream large-scale subjective JND database.
40071 然后基于深度神经网络构建像素域JND预测模型; Secondly, a pixel-domain JND prediction model is constructed based on a deep neural network.
40072 最后通过预测的像素域JND阈值建立残差滤波器,消除AVS3的感知冗余,降低编码比特率。 Finally, a residual filter is built from the predicted pixel-domain JND thresholds to eliminate perceptual redundancy in AVS3 and reduce the coding bitrate.
40073 实验结果表明,与AVS3的标准测试模型HPM5.0相比,在人眼主观感知质量几乎无损的情况下,所提出的像素域JND模型最高可节省21.52%的码率,平均可节省5.11%的码率。 Experimental results show that, compared with HPM5.0, the AVS3 reference test model, the proposed pixel-domain JND model saves up to 21.52% of the bitrate, and 5.11% on average, with almost no loss of subjectively perceived quality.
40074 光流信息是图像像素的运动表示,现有光流估计方法在应对图像遮挡、大位移和细节呈现等复杂情况时难以保证高精度。 Optical flow is a motion representation of image pixels. Existing optical flow estimation methods struggle to maintain high accuracy in complex situations such as image occlusion, large displacement, and fine-detail rendering.
40075 为了克服这些难点问题,本文建立一种新型的卷积神经网络模型,通过改进卷积形式和特征融合的方式来提高估计精度。 To overcome these difficulties, this paper builds a new convolutional neural network model that improves estimation accuracy by refining the convolution form and the feature fusion method.