• ISSN 0258-2724
  • CN 51-1277/U
  • EI Compendex
  • Indexed in Scopus
  • Chinese Core Journal
  • Chinese Science and Technology Paper Statistical Source Journal
  • Chinese Science Citation Database (CSCD) Source Journal

Real-Time Enhancement Algorithm Based on DenseNet Structure for Railroad Low-Light Environment

WANG Yin, WANG Lide, QIU Ji

Citation: WANG Yin, WANG Lide, QIU Ji. Real-Time Enhancement Algorithm Based on DenseNet Structure for Railroad Low-Light Environment[J]. Journal of Southwest Jiaotong University, 2022, 57(6): 1349-1357. doi: 10.3969/j.issn.0258-2724.20210199


doi: 10.3969/j.issn.0258-2724.20210199
Funding: Science and Technology Research and Development Program of China State Railway Group Co., Ltd. (N2020J007)
    Author biography:

    WANG Yin (1989—), male, Ph.D. candidate; research interests: image processing, computer vision detection technology. E-mail: 16117393@bjtu.edu.cn

    Corresponding author:

    WANG Lide (1960—), male, professor, M.S.; research interests: detection technology and fault diagnosis, computer control network technology. E-mail: ldwang@bjtu.edu.cn

  • CLC number: TP751

Real-Time Enhancement Algorithm Based on DenseNet Structure for Railroad Low-Light Environment

  • Abstract:

    Onboard vision systems will be an important safeguard for the safe operation of future urban rail transit, but the weak illumination a train encounters when running in enclosed environments or at night severely degrades the detection performance of such systems. To address this, a real-time visual enhancement algorithm is proposed for low-illumination images captured in enclosed railway environments or during night operation. The algorithm uses a densely connected network (DenseNet) structure as the backbone to build a feature-size-invariant network that extracts illumination, color, and other information from the image and outputs an illumination enhancement-rate map; the illumination intensity of each pixel is then adjusted by a nonlinear mapping function, and a hierarchical cascade progressively raises the exposure of the low-illumination input image from lower to higher levels. The deep network is trained in a self-supervised manner: the loss function is built from features of the low-illumination images themselves together with prior knowledge, and consists of three components: an exposure loss, a color constancy loss, and an illumination smoothness loss. Low-light enhancement experiments in multiple scenes show that the proposed algorithm adapts to the exposure of the input image, dynamically adjusting the enhancement rate in both under-exposed and over-exposed regions to improve the visual quality of low-illumination images, while processing up to 160 frames per second, which meets real-time requirements. Comparative experiments on track segmentation and pedestrian detection before and after low-light enhancement demonstrate that the proposed algorithm greatly improves visual detection in dark environments: on the RSDS (railroad segmentation dataset), the F-measure of track segmentation increases by more than 5%, and both the false-detection and missed-detection rates of pedestrian detection in railway scenes are effectively reduced.
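The abstract describes a cascaded, per-pixel brightness adjustment driven by a network-predicted enhancement-rate map. The page does not give the exact nonlinear mapping function, so the sketch below assumes the quadratic curve LE(x) = x + αx(1 − x) that is standard in curve-based low-light enhancement (e.g. Zero-DCE), and stands in a constant enhancement-rate map for the DenseNet prediction; it is an illustration, not the paper's implementation.

```python
import numpy as np

def enhance_level(img: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Apply one cascade level of per-pixel brightness adjustment.

    img   -- image with values normalized to [0, 1]
    alpha -- per-pixel enhancement-rate map in [-1, 1], standing in for
             the map the DenseNet backbone would predict for this level
    """
    # Quadratic curve: assumed form; fixes LE(0)=0 and LE(1)=1 so the
    # adjustment stays within the valid brightness range.
    return img + alpha * img * (1.0 - img)

def cascaded_enhance(img: np.ndarray, alphas: list) -> np.ndarray:
    """Raise exposure level by level, feeding each level's output into
    the next, mirroring the hierarchical structure in the abstract."""
    out = img
    for alpha in alphas:
        out = enhance_level(out, alpha)
    return np.clip(out, 0.0, 1.0)

# Example: brighten a uniformly dark image over 8 cascade levels with a
# constant enhancement rate of 0.5 per level (both values hypothetical).
dark = np.full((4, 4, 3), 0.1)
alphas = [np.full(dark.shape, 0.5)] * 8
bright = cascaded_enhance(dark, alphas)
print(dark.mean(), bright.mean())  # mean brightness increases
```

Because each level's output is fed into the next, dark regions are lifted aggressively at first and then ever more gently as they approach mid-tones, which matches the abstract's claim of dynamically adjusting the enhancement rate in under- and over-exposed regions.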

     

  • Figure 1.  Overall structure of the cascaded low-light enhancement network

    Figure 2.  Pixel brightness conversion

    Figure 3.  Enhancement outputs of different layers

    Figure 4.  Experimental results after removing each loss component separately

    Figure 5.  Effect of $ \varepsilon $ on output

    Figure 6.  Visualization of the enhancement rate of each pixel of low-light images in the network

    Figure 7.  Statistics of per-pixel enhancement rates in layers 1 to 8 of one channel of the enhancement network

    Figure 8.  Comparative experiment of railroad detection in low-light environment

    Figure 9.  Comparative experiment of pedestrian detection in low-light railroad environment

    Figure 10.  Image enhancement results in low-light railroad environment

    Table 1.  Performance comparison of different segmentation algorithms before and after low-light enhancement (before / after)

    | Algorithm     | Pixel accuracy | Mean IoU      | Precision     | F-measure     |
    | ------------- | -------------- | ------------- | ------------- | ------------- |
    | FCN[18]       | 0.747 / 0.798  | 0.716 / 0.762 | 0.743 / 0.808 | 0.745 / 0.803 |
    | DeconvNet[19] | 0.794 / 0.852  | 0.765 / 0.817 | 0.780 / 0.799 | 0.787 / 0.825 |
    | Mask RCNN[20] | 0.832 / 0.895  | 0.818 / 0.878 | 0.752 / 0.804 | 0.790 / 0.847 |
    | RailNet[2]    | 0.845 / 0.907  | 0.833 / 0.886 | 0.796 / 0.819 | 0.820 / 0.861 |
  • [1] DONG Yu, GUO Bi. Railway track detection algorithm based on Hu invariant moment feature[J]. Journal of the China Railway Society, 2018, 40(10): 64-70.
    [2] WANG Y, WANG L D, HU Y H, et al. RailNet: a segmentation network for railroad detection[J]. IEEE Access, 2019, 7: 143772-143779. doi: 10.1109/ACCESS.2019.2945633
    [3] WANG Yao, YU Zujun, ZHU Liqiang, et al. Foreign object intrusion detection method for high-speed railway based on higher-order fully-connected conditional random fields[J]. Journal of the China Railway Society, 2019, 41(5): 82-92.
    [4] SHI Hongmei, CHAI Hua, WANG Yao, et al. Study on embedded detection algorithm for railway intrusion based on object recognition and tracking[J]. Journal of the China Railway Society, 2015, 37(7): 58-65.
    [5] WANG Yin, WANG Lide, QIU Ji, et al. Research on pedestrian detection method based on multilayer RBM network and SVM[J]. Journal of the China Railway Society, 2018, 40(3): 95-100.
    [6] REZA A M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement[J]. Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, 2004, 38(1): 35-44. doi: 10.1023/B:VLSI.0000028532.53893.82
    [7] CHEN S D, RAMLI A R. Minimum mean brightness error bi-histogram equalization in contrast enhancement[J]. IEEE Transactions on Consumer Electronics, 2003, 49(4): 1310-1319. doi: 10.1109/TCE.2003.1261234
    [8] LAND E H. The retinex theory of color vision[J]. Scientific American, 1977, 237(6): 108-128. doi: 10.1038/scientificamerican1277-108
    [9] LI M, LIU J, YANG W, et al. Joint denoising and enhancement for low-light images via retinex model[C]//International Forum on Digital TV and Wireless Multimedia Communications. Shanghai: [s.n.], 2017: 91-99.
    [10] LI M D, LIU J Y, YANG W H, et al. Structure-revealing low-light image enhancement via robust retinex model[J]. IEEE Transactions on Image Processing, 2018, 27(6): 2828-2841. doi: 10.1109/TIP.2018.2810539
    [11] LORE K G, AKINTAYO A, SARKAR S. LLNet: a deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650-662. doi: 10.1016/j.patcog.2016.06.008
    [12] WEI C, WANG W J, YANG W H, et al. Deep retinex decomposition for low-light enhancement[DB/OL]. (2018-08-14). https://doi.org/10.48550/arXiv.1808.04560
    [13] CHEN C, CHEN Q F, XU J, et al. Learning to see in the dark[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 3291-3300.
    [14] CAI J R, GU S H, ZHANG L. Learning a deep single image contrast enhancer from multi-exposure images[J]. IEEE Transactions on Image Processing, 2018, 27: 2049-2062. doi: 10.1109/TIP.2018.2794218
    [15] JIANG Y F, GONG X Y, LIU D. EnlightenGAN: deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349.
    [16] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 2261-2269.
    [17] BUCHSBAUM G. A spatial processor model for object colour perception[J]. Journal of the Franklin Institute, 1980, 310(1): 1-26. doi: 10.1016/0016-0032(80)90058-7
    [18] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation [C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston: IEEE, 2015: 3431-3440.
    [19] NOH H, HONG S, HAN B. Learning deconvolution network for semantic segmentation[C]//IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2015: 1520-1528.
    [20] HE K M, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]//IEEE International Conference on Computer Vision (ICCV). Venice: IEEE, 2017: 2980-2988.
Figures (10) / Tables (1)
Publication history
  • Received: 2021-03-17
  • Revised: 2021-06-09
  • Published online: 2022-08-05
  • Issue date: 2021-09-08
