• ISSN 0258-2724
  • CN 51-1277/U
  • EI Compendex
  • Scopus
  • Indexed by Core Journals of China, Chinese S&T Journal Citation Reports
  • Chinese Science Citation Database
Citation: YANG Bin, HU Jinming, ZHANG Qilin, WANG Congjun. Location Information Perception of Onsite Construction Crew Based on Person Re-identification[J]. Journal of Southwest Jiaotong University. doi: 10.3969/j.issn.0258-2724.20230125

Location Information Perception of Onsite Construction Crew Based on Person Re-identification

doi: 10.3969/j.issn.0258-2724.20230125
  • Received Date: 28 Mar 2023
  • Rev Recd Date: 21 Jun 2023
  • Available Online: 19 Nov 2024
  • To continuously obtain the location information of onsite construction crews under the dynamic changes, occlusion, and high appearance similarity typical of construction scenes, a computer vision-based location information perception method for onsite construction crews was proposed. Firstly, a deep learning-based object detection method was used for preliminary target detection. Then, a data association method based on person re-identification was applied, in which ID assignment was completed by matching deep learning-based appearance features. A re-ranking-based distance metric was used to refine the similarity measurements, and the matching results were post-processed with a buffering mechanism and a dynamic feature updating mechanism to mitigate mismatches caused by the difficulties of construction scenes. The 2D coordinates and movement information corresponding to each ID were then obtained by perspective transformation of the images, providing basic data for productivity analysis. Finally, standard test videos were created from images collected at different construction stages to evaluate the proposed method. The test results show that, across different scenes, the average ID F1 score (IDF1) and multiple object tracking accuracy (MOTA) of the algorithm are 85.4% and 75.4%, respectively. The proposed re-ranking method and matching post-processing mechanisms effectively improve tracking accuracy: compared with the algorithm with these optimization mechanisms removed, the average improvements in IDF1 and MOTA are 52.8% and 3.8%, respectively.
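    The association and localization steps summarized above can be illustrated with a minimal Python sketch. This is not the authors' implementation: it assumes per-detection appearance features from an off-the-shelf person re-identification network, plain cosine distance with Hungarian assignment (the paper additionally applies a re-ranking-based distance metric and a buffering mechanism with its own thresholds), an exponential-moving-average feature update, and a known 3×3 homography H for the perspective transformation; all function names and values below are illustrative assumptions.

        # Minimal, illustrative sketch of re-identification-based ID assignment and
        # perspective transformation to 2D plan coordinates (not the authors' code).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def cosine_distance(track_feats: np.ndarray, det_feats: np.ndarray) -> np.ndarray:
            """Pairwise cosine distance between L2-normalized appearance features."""
            t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
            d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
            return 1.0 - t @ d.T

        def assign_ids(track_feats, det_feats, max_dist=0.4):
            """Match existing worker IDs to current detections via Hungarian assignment;
            unmatched detections would be buffered before a new ID is created."""
            cost = cosine_distance(track_feats, det_feats)
            rows, cols = linear_sum_assignment(cost)
            matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
            unmatched = sorted(set(range(det_feats.shape[0])) - {c for _, c in matches})
            return matches, unmatched

        def update_feature(track_feat, det_feat, momentum=0.9):
            """Dynamic feature update: exponential moving average, then re-normalize."""
            f = momentum * track_feat + (1.0 - momentum) * det_feat
            return f / np.linalg.norm(f)

        def image_to_plan(points_px: np.ndarray, H: np.ndarray) -> np.ndarray:
            """Perspective transformation of bottom-center bounding-box points (pixels)
            to 2D site-plan coordinates, given a 3x3 homography H."""
            pts = np.hstack([points_px, np.ones((points_px.shape[0], 1))])
            mapped = (H @ pts.T).T
            return mapped[:, :2] / mapped[:, 2:3]

    Applying these pieces frame by frame yields ID-stamped 2D trajectories, from which the movement information used for productivity analysis can be derived.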

     
