Title page for etd-0613122-103530
Title
使用5G與WiFi的監控攝影機預測入侵物的停留時間
Estimation of Intruder’s Residence Time Using 5G and WiFi Surveillance Cameras
Department
Year, semester
Language
Degree
Number of pages
54
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2022-07-08
Date of Submission
2022-07-13
Keywords
Intruder, Moving speed, Residence time, 5G, WiFi
Statistics
This thesis/dissertation has been browsed 314 times and downloaded 0 times.
Abstract (Chinese)
When an intruder invades a monitored area, the intruder's residence time in the surveillance frame must be predicted in advance so that the monitoring software can react in real time. This thesis proposes an Estimation Mechanism for Intruder's Residence Time (EMIRT). Within EMIRT we design an action-mode judgment module that identifies three movement modes according to the intruder's moving speed: continuous walking, continuous running, and switching from walking to running. When an intruder enters the camera's surveillance frame, the module uses the intruder's coordinates to compute the difference between its average moving speed and its maximum speed, and thereby distinguishes the three modes. In addition, surveillance cameras may be connected to different wireless network environments, such as 5G mobile communication networks and wireless local area networks (WiFi). Different wireless networks produce different data rates, different data rates lead to different frames per second (FPS), and a lower FPS causes recognition errors; this thesis overcomes the problem by collecting a different number of intruder coordinates for each wireless network. To verify that the proposed mechanism can indeed predict an intruder's residence time in the camera frame, we implement EMIRT in Mavenir 5G and WiFi network environments. After the implementation, we compare two quantities: the percentage error between the intruder's actual residence time (RT) and the residence time predicted by our mechanism, and the CPU time required to predict the residence time in the different network environments (5G and WiFi).
Abstract
When an intruder invades a surveillance area, it is necessary to predict the intruder's residence time in advance so that the monitoring software can respond to the emergency immediately. In this thesis, we propose an estimation mechanism for the intruder's residence time (EMIRT). In EMIRT, we design an action-mode judgment module that identifies three types of movement modes according to the moving speed of the intruder: continuous walking, continuous running, and walking before running. When an intruder enters the camera's surveillance area, the action-mode judgment module calculates the difference between the intruder's average moving speed and maximum speed to distinguish the three movement modes. Additionally, we consider that surveillance cameras may be deployed in different wireless network environments, such as 5G mobile communication networks and wireless local area networks (WiFi). Different wireless technologies may provide different data rates, which result in different frames per second (FPS), and a lower FPS can easily lead to recognition errors. We overcome this problem by using different numbers of intruder coordinates for 5G and WiFi, respectively. Finally, we implement the proposed EMIRT mechanism on 5G and WiFi networks and measure the residence time of an intruder in the surveillance area. We first compare the percentage of estimation error between the actual residence time and the time predicted using EMIRT. We then compare the CPU time required to predict the residence time in the different network environments (5G and WiFi).
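To make the action-mode judgment concrete, the following Python sketch illustrates one way the classification could be realized. It is a minimal illustration, not the thesis's actual pseudocode: the function names, the speed thresholds, the pixel-to-meter calibration, and the per-network sample counts are all assumptions invented for the example; only the overall rule (classify by the gap between the average and the maximum speed, and collect fewer coordinates on lower-FPS links) comes from the abstract.

# Hypothetical sketch of the action-mode judgment; the thresholds,
# calibration, and sample counts below are assumed values, not the
# thesis's actual parameters.
from typing import List, Tuple

# Lower data rates (e.g., over WiFi) yield a lower FPS, so fewer
# coordinates are collected before judging the mode; counts assumed.
SAMPLE_COUNT = {"5G": 30, "WiFi": 15}

PX_PER_METER = 100.0  # assumed camera calibration (pixels per meter)
GAP_THRESHOLD = 0.5   # m/s; assumed gap separating steady from mixed modes
RUN_THRESHOLD = 2.0   # m/s; assumed boundary between walking and running

def per_frame_speeds(coords: List[Tuple[float, float]], fps: float) -> List[float]:
    """Speeds (m/s) between consecutive object-center coordinates."""
    return [
        (((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / PX_PER_METER) * fps
        for (x1, y1), (x2, y2) in zip(coords, coords[1:])
    ]

def judge_mode(coords: List[Tuple[float, float]], fps: float) -> str:
    """Classify the movement mode from the average-vs-maximum speed gap."""
    v = per_frame_speeds(coords, fps)
    avg, peak = sum(v) / len(v), max(v)
    if peak - avg > GAP_THRESHOLD:
        # A large gap means the speed changed partway through the track.
        return "walking then running"
    return "continuous running" if avg > RUN_THRESHOLD else "continuous walking"

# Example: a track of SAMPLE_COUNT["WiFi"] coordinates sampled at 15 FPS,
# moving 8 pixels per frame (1.2 m/s) -> prints "continuous walking".
track = [(100.0 + 8.0 * i, 240.0) for i in range(SAMPLE_COUNT["WiFi"])]
print(judge_mode(track, fps=15.0))

Under these assumptions, a residence-time estimate would then extrapolate the intruder's remaining path across the frame at the speed implied by the judged mode; the thesis's own EMIRT algorithm (Section 3.3 in the table of contents) is not reproduced here.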
Table of Contents
Thesis Certification i
Acknowledgments ii
Abstract (Chinese) iii
Abstract iv
Table of Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Methods 1
1.3 Chapter Overview 2
Chapter 2 Intruder Recognition 3
2.1 Intelligent Video Surveillance 3
2.1.1 Intruder Fencing 3
2.1.2 Intruder Recognition 4
2.2 YOLO 4
2.2.1 Object Location Detection 5
2.2.2 Object Class Recognition 6
2.3 Related Work 6
Chapter 3 Estimation of the Intruder's Residence Time 10
3.1 EMIRT System Architecture 10
3.2 Action-Mode Judgment Module 11
3.3 EMIRT Algorithm 12
Chapter 4 Implementation and Result Analysis 22
4.1 Experimental Environment and Equipment Specifications 22
4.2 Implementation on the YOLOv4 Server 23
4.2.1 Pseudocode of the Action-Mode Judgment Module 23
4.2.2 Pseudocode of the ID Table 26
4.2.3 Pseudocode for Computing the Residence Time 28
4.2.4 Pseudocode for Outputting the Results 29
4.3 Implementation Results and Analysis 32
4.3.1 Experiment Description and Parameter Settings 32
4.3.2 Result Analysis 33
Chapter 5 Conclusions and Future Work 36
5.1 Conclusions 36
5.2 Problems Encountered 37
5.3 Future Work 37
References 38
Acronyms 42
Index 43
References
[1] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788, 27-30 Jun. 2016.
[2] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv preprint arXiv:1804.02767, Apr. 2018.
[3] A. Bochkovskiy, C. -Y. Wang, and H. -Y. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv preprint arXiv:2004.10934, Apr. 2020.
[4] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” arXiv preprint arXiv:1506.01497, Jun. 2015.
[5] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 42, Issue 2, pp.386-397, Feb. 2020.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” arXiv preprint arXiv:1311.2524, Oct. 2014.
[7] J.R.R. Uijlings, K.E.A. van de Sande, T. Gevers, and A.W.M. Smeulders, “Selective Search for Object Recognition,” International Journal of Computer Vision, pp.154-171, Sep. 2013.
[8] J. Tobias Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for Simplicity: The All Convolutional Net,” arXiv preprint arXiv:1412.6806, Dec. 2014.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems 25, 2012.
[10] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN Features off-the-shelf: An Astounding Baseline for Recognition,” arXiv preprint arXiv:1403.6382, Mar. 2014.
[11] M. Tz. Pavlova, “A Comparison of the Accuracies of a Convolution Neural Network Built on Different Types of Convolution Layers,” 2021 56th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), Sozopol, Bulgaria, Jul. 2021.
[12] T. -Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal Loss for Dense Object Detection,” 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 2999-3007, Oct. 2017.
[13] Z. -Q. Zhao, P. Zheng, S. -T. Xu, and X. Wu, “Object Detection With Deep Learning: A Review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212-3232, Nov. 2019.
[14] A. P. Jana, A. Biswas, and Mohana, “YOLO Based Detection and Classification of Objects in Video Records,” 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, pp. 2448-2452, May 2018.
[15] A. Neubeck and L. Van Gool, “Efficient Non-Maximum Suppression,” 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, Aug. 2006.
[16] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis, “Soft-NMS — Improving Object Detection with One Line of Code,” 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, Oct. 2017.
[17] J. Hosang, R. Benenson, and B. Schiele, “Learning Non-maximum Suppression,” arXiv preprint arXiv: 1705.02950, May 2017.
[18] T. Rahman, M. A. L. Siregar, A. Kurniawan, S. Juniastuti, and E. M. Yuniarno, “Vehicle Speed Calculation from Drone Video Based on Deep Learning,” 2020 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), Surabaya, Indonesia, pp. 229-233, Nov. 2020.
[19] B. Krishnakumar, K. Kousalya, R.S. Mohana, E.K. Vellingiriraj, K. Maniprasanth, and E. Krishnakumar, “Detection of Vehicle Speeding Violation using Video Processing Techniques,” 2022 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, Jan. 2022.
[20] G. Garibotto, P. Castello, E. Del Ninno, P. Pedrazzi, and G. Zan, “Speed-vision: Speed Measurement by License Plate Reading and Tracking,” ITSC 2001. 2001 IEEE Intelligent Transportation Systems, Oakland, CA, USA, pp. 585-590, Aug. 2001.
[21] C. Lin, S. Jeng, and H. Liao, “A Real-Time Vehicle Counting, Speed Estimation, and Classification System Based on Virtual Detection Zone and YOLO,” Mathematical Problems in Engineering, vol. 2021, Article ID 1577614, 2021.
[22] A. P. Samant, K. Warhade, and K. Gunale, “Pedestrian Intent Detection using Skeleton-based Prediction for Road Safety,” 2021 2nd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS), Ernakulam, India, pp.238-242, Sep. 2021.
[23] Iu. Chyrka and V. Kharchenko, “1D Direction Estimation with a YOLO Network,” 2019 European Microwave Conference in Central Europe (EuMCE), Prague, Czech Republic, pp. 358-361, May 2019.
[24] Z. Rahman, A. M. Ami, and M. A. Ullah, “A Real-Time Wrong-Way Vehicle Detection Based on YOLO and Centroid Tracking,” 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, pp.916-920, Jun. 2020.
[25] Y. Cai, L. Dai, H. Wang, L. Chen, Y. Li, M. A. Sotelo, and Z. Li, “Pedestrian Motion Trajectory Prediction in Intelligent Driving from Far Shot First-Person Perspective Video,” IEEE Transactions on Intelligent Transportation Systems, pp. 5298-5313, Jan. 2021.

Fulltext
The electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined access period
Available:
On campus: available for download from 2025-07-13
Off campus: available for download from 2027-07-13

Printed copies
Public-access information for printed theses is relatively complete from academic year 102 (2013-14) onward. To look up access information for printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: 2027-07-13
