Design and Implementation of an IoT-Enabled Deep Learning Vision System for Automated Dimensional Measurement in Smart Manufacturing


Waluyo Nugroho
Afianto
Agus Ponco

Abstract

The rapid advancement of Industry 4.0 has driven the convergence of the Internet of Things (IoT), computer vision, and deep learning to enhance automation and precision in manufacturing. This paper presents the design and implementation of an IoT-enabled deep learning vision system for automated dimensional measurement, integrated with programmable logic controller (PLC) control and real-time monitoring. The system employs a Raspberry Pi 5 as an edge computing unit, a Logitech C270 camera for visual data acquisition, and an Omron CP2E PLC for process control. A YOLOv5 deep learning model is trained to detect objects and measure their dimensions with sub-millimeter accuracy. The Node-RED platform provides dashboard visualization and communication, interfaced with the PLC through the Omron FINS protocol, with MySQL as the database for data logging. Experimental results show a detection accuracy of 98.6% and an average measurement error below 0.5 mm, demonstrating the system's effectiveness for smart manufacturing applications.
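The core measurement step described in the abstract, deriving physical dimensions from a detected bounding box, can be sketched as follows. This is a minimal illustration under common fixed-camera assumptions, not the authors' implementation: the function names, the gauge-block reference, and all numeric values are hypothetical, and it assumes a constant millimeters-per-pixel scale obtained by calibrating against an object of known size at a fixed working distance.

```python
# Hypothetical sketch: converting a YOLOv5-style bounding box (pixels)
# into physical dimensions (mm) via a fixed-camera calibration factor.
# All names and values are illustrative assumptions, not the paper's code.

def calibration_factor(ref_width_mm: float, ref_width_px: float) -> float:
    """mm-per-pixel scale from a reference object of known width."""
    return ref_width_mm / ref_width_px

def box_dimensions_mm(box_xyxy: tuple, mm_per_px: float) -> tuple:
    """Convert an (x1, y1, x2, y2) pixel box to (width_mm, height_mm)."""
    x1, y1, x2, y2 = box_xyxy
    return ((x2 - x1) * mm_per_px, (y2 - y1) * mm_per_px)

# Example: a 50.0 mm reference block spanning 400 px gives 0.125 mm/px.
scale = calibration_factor(50.0, 400.0)
w_mm, h_mm = box_dimensions_mm((120, 80, 360, 240), scale)
print(f"{w_mm:.2f} mm x {h_mm:.2f} mm")  # 30.00 mm x 20.00 mm
```

In a deployment like the one described, the calibration factor would be re-derived whenever the camera mounting or optics change, since a single scale is only valid for a fixed object-to-camera distance.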


How to Cite
Nugroho, W., Afianto, & Agus Ponco. (2026). Design and Implementation of an IoT-Enabled Deep Learning Vision System for Automated Dimensional Measurement in Smart Manufacturing. Jurnal E-Komtek (Elektro-Komputer-Teknik), 9(2), 537-554. https://doi.org/10.37339/e-komtek.v9i2.2855

References

[1] L. Palazzetti, D. Giannetti, A. Verolino, D. A. Grasso, C. M. Pinotti, and F. B. Sorbelli, “AntPi: A Raspberry Pi based edge-cloud system for real-time ant species detection using YOLO $,” 2025, doi: 10.5281/zenodo.1674045.
[2] Y. R, S. K. D, S. A. Bhalerao, K. Murugesan, S. Vellaiyan, and N. Van Minh, “Real-time fire detection and suppression system using YOLO11n and Raspberry Pi for thermal safety applications,” Case Studies in Thermal Engineering, vol. 75, p. 107159, Nov. 2025, doi: 10.1016/j.csite.2025.107159.
[3] W. Nugroho, R. R. Isnanto, and A. F. Rochim, “Comparison of Mycobacterium Tuberculosis Image Detection Accuracy Using CNN and Combination CNN-KNN,” Jurnal RESTI, vol. 7, no. 1, pp. 80–86, Feb. 2023, doi: 10.29207/resti.v7i1.4626.
[4] J. Qiu, W. Zhang, S. Xu, and H. Zhou, “DP-YOLO: A lightweight traffic sign detection model for small object detection,” Digital Signal Processing: A Review Journal, vol. 165, Oct. 2025, doi: 10.1016/j.dsp.2025.105311.
[5] S. E. Mathe, H. K. Kondaveeti, S. Vappangi, S. D. Vanambathina, and N. K. Kumaravelu, “A comprehensive review on applications of Raspberry Pi,” May 01, 2024, Elsevier Ireland Ltd. doi: 10.1016/j.cosrev.2024.100636.
[6] S. G. Koustas, S. J. Oks, and K. M. Möslein, “Developing industrial smart product-service systems: Opportunities and challenges for manufacturing firms,” in Procedia CIRP, Elsevier B.V., 2025, pp. 1008–1013. doi: 10.1016/j.procir.2025.08.171.
[7] H. Kabir, J. Wu, S. Dahal, T. Joo, and N. Garg, “Automated estimation of cementitious sorptivity via computer vision,” Nature Communications , vol. 15, no. 1, Dec. 2024, doi: 10.1038/s41467-024-53993-w.
[8] H. Zia et al., “Gesture-controlled omnidirectional autonomous vehicle: A web-based approach for gesture recognition,” Array, vol. 26, Jul. 2025, doi: 10.1016/j.array.2025.100408.
[9] B. Mills, M. N. Zervas, and J. A. Grant-Jacob, “Imaging pollen using a Raspberry Pi and LED with deep learning,” Science of the Total Environment, vol. 955, Dec. 2024, doi: 10.1016/j.scitotenv.2024.177084.
[10] M. Raisul Islam et al., “Deep Learning and Computer Vision Techniques for Enhanced Quality Control in Manufacturing Processes,” IEEE Access, vol. 12, pp. 121449–121479, 2024, doi: 10.1109/ACCESS.2024.3453664.
[11] Á. Kálnai et al., “Real-time component-based particle size measurement and dissolution prediction during continuous powder feeding using machine vision and artificial intelligence-based object detection,” European Journal of Pharmaceutical Sciences, vol. 209, Jun. 2025, doi: 10.1016/j.ejps.2025.107080.
[12] V. Mahore, P. Soni, P. Patidar, H. Nagar, A. Chouriya, and R. Machavaram, “Development and implementation of a raspberry Pi-based IoT system for real-time performance monitoring of an instrumented tractor,” Smart Agricultural Technology, vol. 9, Dec. 2024, doi: 10.1016/j.atech.2024.100530.
[13] W. Nugroho, Rifdah Zahabiyah, Afianto, and Mada Jimmy Fonda Arifianto, “Application of Deep Learning YOLO in IoT System for Personal Protective Equipment Detection,” Jurnal E-Komtek (Elektro-Komputer-Teknik), vol. 8, no. 2, pp. 428–437, Dec. 2024, doi: 10.37339/e-komtek.v8i2.2187.
[14] W. Nugroho, R. Zahabiyah, M. J. F. Arifiant, and A. Afianto, “Automated Component Detection for Quality PCB Using YOLO Algorithm with IoT Real-Time Streaming on Raspberry Pi,” JURNAL INFOTEL, vol. 17, no. 2, Jul. 2025, doi: 10.20895/infotel.v17i2.1313.
[15] K. Tan, J. Wu, H. Zhou, Y. Wang, and J. Chen, “Integrating Advanced Computer Vision and AI Algorithms for Autonomous Driving Systems,” www.centuryscipub.com, vol. 4, p. 2024, doi: 10.53469/jtpes.2024.04(01).06.
[16] K. Rzepka, P. Szary, K. Cabaj, and W. Mazurczyk, “Performance evaluation of Raspberry Pi 4 and STM32 Nucleo boards for security-related operations in IoT environments,” Computer Networks, vol. 242, Apr. 2024, doi: 10.1016/j.comnet.2024.110252.
[17] S. Liawatimena and N. Isworo, “Annotated drowsiness detection dataset captured using Raspberry Pi 5,” Data Brief, vol. 63, p. 112211, Dec. 2025, doi: 10.1016/j.dib.2025.112211.
[18] T. Ederer and I. Ivkić, “Implementing video monitoring capabilities by using hardware-based encoders of the Raspberry Pi Zero 2 W,” SoftwareX, vol. 31, Sep. 2025, doi: 10.1016/j.softx.2025.102274.
[19] M. D. Rakesh, M. Jeevankumar, and S. B. Rudraswamy, “Implementation of real time root crop leaf classification using CNN on raspberry-Pi microprocessor,” Smart Agricultural Technology, vol. 10, Mar. 2025, doi: 10.1016/j.atech.2024.100714.
[20] P. Rajkumar, “Humidity and temperature monitoring using Raspberry Pi via RS232 networking,” Array, vol. 27, Sep. 2025, doi: 10.1016/j.array.2025.100464.
[21] X. Zhao, L. Wang, Y. Zhang, X. Han, M. Deveci, and M. Parmar, “A review of convolutional neural networks in computer vision,” Artif Intell Rev, vol. 57, no. 4, Apr. 2024, doi: 10.1007/s10462-024-10721-6.
[22] C. Lin, J. Li, and H. Hao, “Deep learning-based motion magnification and frames matching for structural displacement measurement using computer vision,” Eng Struct, vol. 346, p. 121647, Jan. 2026, doi: 10.1016/j.engstruct.2025.121647.
[23] T. D. Phuc and B. C. Son, “Development of an autonomous chess robot system using computer vision and deep learning,” Results in Engineering, vol. 25, Mar. 2025, doi: 10.1016/j.rineng.2025.104091.
[24] M. T. Okano, W. A. Celestino Lopes, and S. M. Ruggero, “Automated dimensional inspection of automotive components using computer vision through YOLO,” Procedia Comput Sci, vol. 270, pp. 3133–3141, 2025, doi: 10.1016/j.procs.2025.09.438.
[25] V. Lepetit, “Object size measurement and camera distance evaluation for electronic components using Fixed-Position camera,” Computer Vision Studies, pp. 13–16, 2023, doi: 10.58396/cvs020101.
[26] Y. Zhang, Y. Xu, T. Xu, C. Wang, C. Li, and H. Wang, “RSD-YOLO: An improved YOLOv7-tiny framework for oat disease severity identification with integration of ReXNet and decoupled head,” Smart Agricultural Technology, vol. 12, Dec. 2025, doi: 10.1016/j.atech.2025.101433.