ID 原文 (Source) 译文 (Translation)
1793 然后根据松弛时间,通过使截止时间流尽可能接近其规定截止时间完成,降低非截止时间流的完成时间; Then, based on slack time, the completion time of non-deadline flows is reduced by finishing deadline flows as close to their deadlines as possible.
1794 最后,利用最小累计发送量优先策略进一步降低非截止时间流的平均完成时间。 Finally, a least-accumulated-sent-bytes-first policy is used to schedule non-deadline flows, further reducing their average flow completion time.
1795 仿真结果表明,该机制能有效降低非截止时间流的平均完成时间,同时保证较低的截止时间错失率。 Simulation results show that the proposed mechanism can effectively reduce the average flow completion time of non-deadline flows while maintaining a low deadline miss rate.
1796 针对传统去雾算法出现色彩失真、去雾不完全、出现光晕等现象,本文提出了一种基于霾层学习的卷积神经网络的单幅图像去雾算法。 To address the color distortion, incomplete haze removal, and halo artifacts of traditional dehazing algorithms, a single-image dehazing algorithm based on a convolutional neural network with haze-layer learning is proposed.
1797 首先,依据大气散射物理模型进行理论推导,本文设计了一种能够直接学习和估计有雾图像和霾层图像之间的映射关系的网络模型。 First, derived from the atmospheric scattering model, an end-to-end network is designed that directly learns and estimates the mapping between hazy images and their corresponding haze layers.
1798 采用有雾图像作为输入,并输出有雾图像与无雾图像之间的残差图像,随后直接从有雾图像中去除此霾层图像,即可恢复出无雾图像。 The network takes a hazy image as input and outputs the residual image between the hazy image and its haze-free counterpart; the haze-free image is then recovered by subtracting this haze layer directly from the hazy image.
1799 残差学习的引入,使得网络来直接估计初始霾层,利用相对大的学习率,减少计算量,加快收敛过程。 Residual learning allows the network to estimate the initial haze layer directly with a relatively large learning rate, which reduces computational complexity and speeds up the convergence process.
1800 再利用引导滤波进行细化,使得恢复出的无雾图像更接近真实场景。 In addition, a guided filter is applied to refine the result, avoiding halos and block artifacts and making the recovered image closer to the real scene.
1801 本文对不同雾浓度的有雾图片的去雾效果进行测试,并与当前主流深度学习去雾算法及其他经典算法进行对比。 The dehazing performance is tested on hazy images of different haze densities and compared with current mainstream deep-learning dehazing algorithms and other classical methods.
1802 实验结果显示,本文设计的卷积神经网络模型在图像去雾的应用,不论在主观效果还是客观指标上,都有优势。 Experimental results demonstrate that the proposed convolutional neural network model outperforms state-of-the-art dehazing methods on both synthetic and real-world images, qualitatively and quantitatively.