Using Deep Convolutional Neural Network Architectures for Object Classification and Detection Within X-Ray Baggage Security Imagery

Cited: 159
Authors
Akcay, Samet [1 ]
Kundegorski, Mikolaj E. [1 ,2 ]
Willcocks, Chris G. [1 ]
Breckon, Toby P. [1 ]
Affiliations
[1] Univ Durham, Dept Comp Sci, Durham DH1 3LE, England
[2] WHO, CH-1211 Geneva, Switzerland
Keywords
Deep convolutional neural networks; transfer learning; image classification; detection; X-ray baggage security;
DOI
10.1109/TIFS.2018.2812196
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
We consider the use of deep convolutional neural networks (CNNs) with transfer learning for the image classification and detection problems posed within the context of X-ray baggage security imagery. The CNN approach requires large amounts of data to facilitate a complex end-to-end feature extraction and classification process. Within the context of X-ray security screening, the limited availability of object-of-interest data examples can thus pose a problem. To overcome this issue, we employ a transfer learning paradigm such that a pre-trained CNN, primarily trained for generalized image classification tasks where sufficient training data exists, can be explicitly optimized as a later secondary process towards this application domain. To provide a consistent feature-space comparison between this approach and traditional feature-space representations, we also train a support vector machine (SVM) classifier on CNN features. We empirically show that fine-tuned CNN features yield superior performance to conventional hand-crafted features on object classification tasks within this context. Overall, we achieve 0.994 accuracy based on AlexNet features trained with an SVM classifier. In addition to classification, we also explore the applicability of multiple CNN-driven detection paradigms, such as sliding window-based CNN (SW-CNN), Faster region-based CNN (F-RCNN), region-based fully convolutional networks (R-FCN), and YOLOv2. We train numerous networks tackling both single and multiple detections over SW-CNN/F-RCNN/R-FCN/YOLOv2 variants. YOLOv2, Faster-RCNN, and R-FCN provide superior results to the more traditional SW-CNN approaches. With the use of YOLOv2, using input images of size 544 x 544, we achieve 0.885 mean average precision (mAP) for a six-class object detection problem. The same approach with an input of size 416 x 416 yields 0.974 mAP for the two-class firearm detection problem and requires approximately 100 ms per image.
Overall we illustrate the comparative performance of these techniques and show that object localization strategies cope well with cluttered X-ray security imagery, where classification techniques fail.
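The sliding-window paradigm (SW-CNN) referred to above scores a classifier at every window position and must then merge overlapping detections. A minimal NumPy sketch of the greedy non-maximum suppression step, over hypothetical `[x1, y1, x2, y2]` box coordinates (not the paper's implementation):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.

    Repeatedly keeps the highest-scoring box and discards any remaining
    box whose IoU with it exceeds iou_thresh.
    """
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

# Two heavily overlapping windows plus one distant one:
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # -> [0, 2]; box 1 is suppressed by box 0
```

Region-based detectors (F-RCNN, R-FCN) and YOLOv2 fold proposal generation into the network itself, which is one reason they outperform exhaustive sliding-window evaluation in both accuracy and runtime.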
Pages: 2203-2215
Page count: 13