• ISSN 0258-2724
  • CN 51-1277/U
  • EI Compendex
  • Scopus
  • Indexed by Core Journals of China, Chinese S&T Journal Citation Reports
  • Chinese Science Citation Database
GONG Xun, ZHANG Zhiying, LIU Lu, MA Bing, WU Kunlun. A Survey of Human-Object Interaction Detection[J]. Journal of Southwest Jiaotong University, 2022, 57(4): 693-704. doi: 10.3969/j.issn.0258-2724.20210339

A Survey of Human-Object Interaction Detection

doi: 10.3969/j.issn.0258-2724.20210339
  • Received Date: 28 Apr 2021
  • Rev Recd Date: 14 Sep 2021
  • Publish Date: 27 Oct 2021
  • As an interdisciplinary subject spanning object detection, action recognition, and visual relationship detection, human-object interaction (HOI) detection aims to identify the interactions between humans and objects in specific application scenarios. This survey systematically summarizes recent work on image-based HOI detection. First, according to how the interaction is modeled, HOI detection methods are divided into two categories, global-instance-based and local-instance-based, and representative methods in each category are elaborated and analyzed in detail. The global-instance-based methods are further subdivided, according to the visual features they exploit, into those fusing spatial information, those fusing appearance information, and those fusing body-posture information. Finally, the applications of zero-shot learning, weakly supervised learning, and the Transformer model to HOI detection are discussed; the challenges facing HOI detection are listed from the three aspects of HOI ambiguity, visual distraction, and motion perspective; and domain generalization, real-time detection, and end-to-end networks are identified as future development trends.
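The global-instance pipeline summarized above can be illustrated with a minimal sketch: detect humans and objects, enumerate candidate human-object pairs, and describe each pair with a spatial feature that an interaction classifier would consume. All names here (`Detection`, `spatial_feature`, `hoi_pairs`) are hypothetical illustrations, not the API of any surveyed method; real systems feed learned CNN appearance features and trained classifiers rather than this hand-crafted geometry.

```python
# Illustrative sketch of a two-stage, global-instance HOI pipeline:
# (1) assume humans/objects were detected, (2) enumerate human-object
# pairs, (3) encode each pair's spatial layout for an interaction scorer.
from dataclasses import dataclass
from itertools import product
from typing import List, Tuple

@dataclass
class Detection:
    label: str                               # e.g. "person" or "bicycle"
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2)

def spatial_feature(h: Detection, o: Detection) -> List[float]:
    """Offset of the object center from the human center, normalized by
    the human box size, plus the width/height ratios of the two boxes."""
    hx, hy = (h.box[0] + h.box[2]) / 2, (h.box[1] + h.box[3]) / 2
    ox, oy = (o.box[0] + o.box[2]) / 2, (o.box[1] + o.box[3]) / 2
    hw, hh = h.box[2] - h.box[0], h.box[3] - h.box[1]
    ow, oh = o.box[2] - o.box[0], o.box[3] - o.box[1]
    return [(ox - hx) / hw, (oy - hy) / hh, ow / hw, oh / hh]

def hoi_pairs(dets: List[Detection]):
    """Enumerate (human, object, spatial_feature) candidate triplets,
    as two-stage HOI detectors do before interaction classification."""
    humans = [d for d in dets if d.label == "person"]
    objects = [d for d in dets if d.label != "person"]
    return [(h, o, spatial_feature(h, o)) for h, o in product(humans, objects)]
```

The quadratic human-object pairing shown here is exactly the bottleneck that the one-stage and Transformer-based methods discussed later try to avoid.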

     


