Recent Advances in Robotics and Intelligent Robots Applications


Conflicts of Interest

List of Contributions

  • Bai, X.W.; Kong, D.Y.; Wang, Q.; Yu, X.H.; Xie, X.X. Bionic Design of a Miniature Jumping Robot. Appl. Sci. 2023, 13, 4534. https://doi.org/10.3390/app13074534.
  • Yang, M.Y.; Xu, L.; Tan, X.; Shen, H.H. A Method Based on Blackbody to Estimate Actual Radiation of Measured Cooperative Target Using an Infrared Thermal Imager. Appl. Sci. 2023, 13, 4832. https://doi.org/10.3390/app13084832.
  • Dai, S.; Song, K.F.; Wang, Y.L.; Zhang, P.J. Two-Dimensional Space Turntable Pitch Axis Trajectory Prediction Method Based on Sun Vector and CNN-LSTM Model. Appl. Sci. 2023, 13, 4939. https://doi.org/10.3390/app13084939.
  • Gao, L.Y.; Xiao, S.L.; Hu, C.H.; Yan, Y. Hyperspectral Image Classification Based on Fusion of Convolutional Neural Network and Graph Network. Appl. Sci. 2023, 13, 7143. https://doi.org/10.3390/app13127143.
  • Yang, T.; Xu, F.; Zeng, S.; Zhao, S.J.; Liu, Y.W.; Wang, Y.B. A Novel Constant Damping and High Stiffness Control Method for Flexible Space Manipulators Using Luenberger State Observer. Appl. Sci. 2023, 13, 7954. https://doi.org/10.3390/app13137954.
  • Ma, Z.L.; Zhao, Q.L.; Che, X.; Qi, X.D.; Li, W.X.; Wang, S.X. An Image Denoising Method for a Visible Light Camera in a Complex Sky-Based Background. Appl. Sci. 2023, 13, 8484. https://doi.org/10.3390/app13148484.
  • Liu, L.D.; Long, Y.J.; Li, G.N.; Nie, T.; Zhang, C.C.; He, B. Fast and Accurate Visual Tracking with Group Convolution and Pixel-Level Correlation. Appl. Sci. 2023, 13, 9746. https://doi.org/10.3390/app13179746.
  • Kee, E.; Chong, J.J.; Choong, Z.J.; Lau, M. Development of Smart and Lean Pick-and-Place System Using EfficientDet-Lite for Custom Dataset. Appl. Sci. 2023, 13, 11131. https://doi.org/10.3390/app132011131.
  • Li, Y.F.; Wang, Q.H.; Liu, Q. Developing a Static Kinematic Model for Continuum Robots Using Dual Quaternions for Efficient Attitude and Trajectory Planning. Appl. Sci. 2023, 13, 11289. https://doi.org/10.3390/app132011289.
  • Yu, J.; Zhang, Y.; Qi, B.; Bai, X.T.; Wu, W.; Liu, H.X. Analysis of the Slanted-Edge Measurement Method for the Modulation Transfer Function of Remote Sensing Cameras. Appl. Sci. 2023, 13, 13191. https://doi.org/10.3390/app132413191.
  • Jia, L.F.; Zeng, S.; Feng, L.; Lv, B.H.; Yu, Z.Y.; Huang, Y.P. Global Time-Varying Path Planning Method Based on Tunable Bezier Curves. Appl. Sci. 2023, 13, 13334. https://doi.org/10.3390/app132413334.
  • Lin, H.Y.; Quan, P.K.; Liang, Z.; Wei, D.B.; Di, S.C. Low-Cost Data-Driven Robot Collision Localization Using a Sparse Modular Point Matrix. Appl. Sci. 2024, 14, 2131. https://doi.org/10.3390/app14052131.
  • Cai, R.G.; Li, X. Path Planning Method for Manipulators Based on Improved Twin Delayed Deep Deterministic Policy Gradient and RRT*. Appl. Sci. 2024, 14, 2765. https://doi.org/10.3390/app14072765.
  • Muñoz-Barron, B.; Sandoval-Castro, X.Y.; Castillo-Castaneda, E.; Laribi, M.A. Characterization of a Rectangular-Cut Kirigami Pattern for Soft Material Tuning. Appl. Sci. 2024, 14, 3223. https://doi.org/10.3390/app14083223.

Share and Cite

Song, Q.; Zhao, Q. Recent Advances in Robotics and Intelligent Robots Applications. Appl. Sci. 2024, 14, 4279. https://doi.org/10.3390/app14104279

Advancements in Humanoid Robots: A Comprehensive Review and Future Prospects

Y. Tong, H. Liu, and Z. Zhang, "Advancements in humanoid robots: A comprehensive review and future prospects," IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 2, pp. 301–328, Feb. 2024. doi: 10.1109/JAS.2023.124140.

  • Yuchuang Tong
  • Haotian Liu
  • Zhengtao Zhang

Yuchuang Tong (Member, IEEE) received the Ph.D. degree in mechatronic engineering from the State Key Laboratory of Robotics, Shenyang Institute of Automation (SIA), Chinese Academy of Sciences (CAS) in 2022. Currently, she is an Assistant Professor with the Institute of Automation, Chinese Academy of Sciences. Her research interests include humanoid robots, robot control and human-robot interaction. Dr. Tong has authored more than ten publications in journals and conference proceedings in the areas of her research interests. She was the recipient of the Best Paper Award from the 2020 International Conference on Robotics and Rehabilitation Intelligence, the Dean's Award for Excellence of CAS, and the CAS Outstanding Doctoral Dissertation Award.

Haotian Liu received the B.Sc. degree in traffic equipment and control engineering from Central South University in 2021. He is currently a Ph.D. candidate in control science and control engineering at the CAS Engineering Laboratory for Industrial Vision and Intelligent Equipment Technology, Institute of Automation, Chinese Academy of Sciences (IACAS) and University of Chinese Academy of Sciences (UCAS). His research interests include robotics, intelligent control and machine learning.

Zhengtao Zhang (Member, IEEE) received the B.Sc. degree in automation from the China University of Petroleum in 2004, the M.Sc. degree in detection technology and automatic equipment from the Beijing Institute of Technology in 2007, and the Ph.D. degree in control science and engineering from the Institute of Automation, Chinese Academy of Sciences in 2010. He is currently a Professor with the CAS Engineering Laboratory for Industrial Vision and Intelligent Equipment Technology, IACAS. His research interests include industrial vision inspection and intelligent robotics.

This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure, control and decision-making, and perception and interaction, a holistic overview of the current state of humanoid robot research is presented. Furthermore, emerging challenges in the field are identified, emphasizing the necessity for a deeper understanding of biological motion mechanisms, improved structural design, enhanced material applications, advanced drive and control methods, and efficient energy utilization. The integration of bionics, brain-inspired intelligence, mechanics, and control is underscored as a promising direction for the development of advanced humanoid robotic systems. This paper serves as an invaluable resource, offering insightful guidance to researchers in the field, while contributing to the ongoing evolution and potential of humanoid robots across diverse domains.

Keywords: future trends and challenges; humanoid robots; human-robot interaction; key technologies; potential applications

Highlights

  • The current state, advancements and future prospects of humanoid robots are outlined
  • Fundamental techniques including structure, control, learning and perception are investigated
  • This paper highlights the potential applications of humanoid robots
  • This paper outlines future trends and challenges in humanoid robot research

  • Figure 1. Historical progression of humanoid robots.
  • Figure 2. The mapping knowledge domain of humanoid robots: (a) co-citation analysis; (b) country and institution analysis; (c) cluster analysis of keywords.
  • Figure 3. The number of papers published each year.
  • Figure 4. Research status of humanoid robots.
  • Figure 5. Comparison of child-size and adult-size humanoid robots.
  • Figure 6. Potential applications of humanoid robots.
  • Figure 7. Key technologies of humanoid robots.


Title: Software Engineering for Robotics: Future Research Directions; Report from the 2023 Workshop on Software Engineering for Robotics

Abstract: Robots are experiencing a revolution as they permeate many aspects of our daily lives, from performing house maintenance to infrastructure inspection, from efficiently warehousing goods to autonomous vehicles, and more. This technical progress and its impact are astounding. This revolution, however, is outstripping the capabilities of existing software development processes, techniques, and tools, which have largely remained unchanged for decades. These capabilities are ill-suited to handling the challenges unique to robotics software, such as dealing with a wide diversity of domains, heterogeneous hardware, programmed and learned components, complex physical environments captured and modeled with uncertainty, emergent behaviors that include human interactions, and scalability demands that span multiple dimensions. The need to develop software for robots that are ever more ubiquitous, autonomous, and reliant on complex adaptive components, hardware, and data motivated an NSF-sponsored community workshop on the subject of Software Engineering for Robotics, held in Detroit, Michigan in October 2023. The goal of the workshop was to bring together thought leaders across robotics and software engineering to coalesce a community and identify key problems in the area of SE for robotics that this community should aim to solve over the next 5 years. This report summarizes the motivation, activities, and findings of that workshop, in particular by articulating the challenges unique to robot software and identifying a vision for fruitful near-term research directions to tackle them.
Comments: 16 pages
Subjects: Robotics (cs.RO); Software Engineering (cs.SE)


500 research papers and projects in robotics – Free Download


The recent history of robotics is full of fascinating moments that accelerated rapid technological advances in artificial intelligence, automation, engineering, energy storage, and machine learning. These results transformed the capabilities of robots and their ability to take over tasks once carried out by humans in factories, hospitals, farms, etc.

These technological advances don't occur overnight; they require years of research and development to solve some of the biggest engineering challenges in navigation, autonomy, AI, and machine learning and to build robots that are much safer and more efficient in real-world situations. Universities, institutes, and companies across the world are working tirelessly in these research areas to make this a reality.

In this post, we have listed 500+ recent research papers and projects for those who are interested in robotics. These free, downloadable research papers can shed light on some of the most complex areas in robotics, such as navigation, motion planning, robotic interactions, obstacle avoidance, actuators, machine learning, computer vision, artificial intelligence, collaborative robotics, nano robotics, social robotics, cloud robotics, swarm robotics, sensors, mobile robotics, humanoid robots, service robots, automation, and autonomy. Feel free to download. Share your own research papers with us to be added to this list. Also, you can ask a professional academic writer from CustomWritings – research paper writing service – to assist you online on any related topic.

Navigation and Motion Planning

  • Robotics Navigation Using MPEG CDVS
  • Design, Manufacturing and Test of a High-Precision MEMS Inclination Sensor for Navigation Systems in Robot-assisted Surgery
  • Motion Control of a Three Active Wheeled Mobile Robot and Collision-Free Human Following Navigation in Outdoor Environment
  • One Point Perspective Vanishing Point Estimation for Mobile Robot Vision Based Navigation System
  • Application of Ant Colony Optimization for finding the Navigational path of Mobile Robot-A Review
  • Robot Navigation Using a Brain-Computer Interface
  • Path Generation for Robot Navigation using a Single Ceiling Mounted Camera
  • Exact Robot Navigation Using Power Diagrams
  • Learning Socially Normative Robot Navigation Behaviors with Bayesian Inverse Reinforcement Learning
  • Pipelined, High Speed, Low Power Neural Network Controller for Autonomous Mobile Robot Navigation Using FPGA
  • Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology
  • Optimality and limit behavior of the ML estimator for Multi-Robot Localization via GPS and Relative Measurements
  • Aerial Robotics: Compact groups of cooperating micro aerial vehicles in clustered GPS denied environment
  • Disordered and Multiple Destinations Path Planning Methods for Mobile Robot in Dynamic Environment
  • Integrating Modeling and Knowledge Representation for Combined Task, Resource and Path Planning in Robotics
  • Path Planning With Kinematic Constraints For Robot Groups
  • Robot motion planning for pouring liquids
  • Implan: Scalable Incremental Motion Planning for Multi-Robot Systems
  • Equilibrium Motion Planning of Humanoid Climbing Robot under Constraints
  • POMDP-lite for Robust Robot Planning under Uncertainty
  • The RoboCup Logistics League as a Benchmark for Planning in Robotics
  • Planning-aware communication for decentralised multi- robot coordination
  • Combined Force and Position Controller Based on Inverse Dynamics: Application to Cooperative Robotics
  • A Four Degree of Freedom Robot for Positioning Ultrasound Imaging Catheters
  • The Role of Robotics in Ovarian Transposition
  • An Implementation on 3D Positioning Aquatic Robot
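Many of the navigation and planning papers above build on classical grid search. As a purely illustrative sketch (not drawn from any listed paper), the following Python snippet implements a minimal A* planner on a 4-connected occupancy grid; the grid encoding, unit step costs, and Manhattan heuristic are simplifying assumptions:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; cells equal to 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]    # entries: (f = g + h, g, cell, parent)
    best_g, parent_of = {start: 0}, {}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in parent_of:                   # already expanded via a cheaper route
            continue
        parent_of[cur] = parent
        if cur == goal:                        # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent_of[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1                     # every move costs one step
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cur))
    return None                                # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))             # detours around the wall of 1s
```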

Robotic Interactions

  • On Indexicality, Direction of Arrival of Sound Sources and Human-Robot Interaction
  • OpenWoZ: A Runtime-Configurable Wizard-of-Oz Framework for Human-Robot Interaction
  • Privacy in Human-Robot Interaction: Survey and Future Work
  • An Analysis Of Teacher-Student Interaction Patterns In A Robotics Course For Kindergarten Children: A Pilot Study
  • Human Robotics Interaction (HRI) based Analysis–using DMT
  • A Cautionary Note on Personality (Extroversion) Assessments in Child-Robot Interaction Studies
  • Interaction as a bridge between cognition and robotics
  • State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction
  • Eliciting Conversation in Robot Vehicle Interactions
  • A Comparison of Avatar, Video, and Robot-Mediated Interaction on Users’ Trust in Expertise
  • Exercising with Baxter: Design and Evaluation of Assistive Social-Physical Human- Robot Interaction
  • Using Narrative to Enable Longitudinal Human- Robot Interactions
  • Computational Analysis of Affect, Personality, and Engagement in HumanRobot Interactions
  • Human-robot interactions: A psychological perspective
  • Gait of Quadruped Robot and Interaction Based on Gesture Recognition
  • Graphically representing child- robot interaction proxemics
  • Interactive Demo of the SOPHIA Project: Combining Soft Robotics and Brain-Machine Interfaces for Stroke Rehabilitation
  • Interactive Robotics Workshop
  • Activating Robotics Manipulator using Eye Movements
  • Wireless Controlled Robot Movement System Designed using Microcontroller
  • Gesture Controlled Robot using LabVIEW
  • RoGuE: Robot Gesture Engine

Obstacle Avoidance

  • Low Cost Obstacle Avoidance Robot with Logic Gates and Gate Delay Calculations
  • Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance
  • Controlling Obstacle Avoiding And Live Streaming Robot Using Chronos Watch
  • Movement Of The Space Robot Manipulator In Environment With Obstacles
  • Assis-Cicerone Robot With Visual Obstacle Avoidance Using a Stack of Odometric Data.
  • Obstacle detection and avoidance methods for autonomous mobile robot
  • Moving Domestic Robotics Control Method Based on Creating and Sharing Maps with Shortest Path Findings and Obstacle Avoidance
  • Control of the Differentially-driven Mobile Robot in the Environment with a Non-Convex Star-Shape Obstacle: Simulation and Experiments
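Several of the obstacle-avoidance entries above refine the classic artificial potential field method. The sketch below is our own toy illustration of one gradient step, not code from any listed paper; the gains k_att and k_rep, the influence radius d0, and the fixed step size are arbitrary assumptions, and real deployments must also cope with the method's well-known local minima:

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """One step of a textbook attractive/repulsive potential field in 2-D."""
    # Attractive force pulls the robot straight toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive forces push away from obstacles inside the influence radius d0.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0           # avoid division by zero at equilibria
    step = 0.1                                 # fixed step length along the force
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos = (0.0, 0.0)
for _ in range(50):                            # walk toward the goal, skirting the obstacle
    pos = potential_field_step(pos, goal=(5.0, 0.0), obstacles=[(2.5, 0.3)])
print(pos)
```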

Machine Learning

  • A survey of typical machine learning based motion planning algorithms for robotics
  • Linear Algebra for Computer Vision, Robotics , and Machine Learning
  • Applying Radical Constructivism to Machine Learning: A Pilot Study in Assistive Robotics
  • Machine Learning for Robotics and Computer Vision: Sampling methods and Variational Inference
  • Rule-Based Supervisor and Checker of Deep Learning Perception Modules in Cognitive Robotics
  • The Limits and Potentials of Deep Learning for Robotics
  • Autonomous Robotics and Deep Learning
  • A Unified Knowledge Representation System for Robot Learning and Dialogue

Computer Vision

  • Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot
  • Non-Euclidean manifolds in robotics and computer vision: why should we care?
  • Topology of singular surfaces, applications to visualization and robotics
  • On the Impact of Learning Hierarchical Representations for Visual Recognition in Robotics
  • Focused Online Visual-Motor Coordination for a Dual-Arm Robot Manipulator
  • Towards Practical Visual Servoing in Robotics
  • Visual Pattern Recognition In Robotics
  • Automated Visual Inspection: Position Identification of Object for Industrial Robot Application based on Color and Shape
  • Automated Creation of Augmented Reality Visualizations for Autonomous Robot Systems
  • Implementation of Efficient Night Vision Robot on Arduino and FPGA Board

Artificial Intelligence

  • On the Relationship between Robotics and Artificial Intelligence
  • Artificial Spatial Cognition for Robotics and Mobile Systems: Brief Survey and Current Open Challenges
  • Artificial Intelligence, Robotics and Its Impact on Society
  • The Effects of Artificial Intelligence and Robotics on Business and Employment: Evidence from a survey on Japanese firms
  • Artificially Intelligent Maze Solver Robot
  • Artificial intelligence, Cognitive Robotics and Human Psychology
  • Minecraft as an Experimental World for AI in Robotics
  • Impact of Robotics, RPA and AI on the insurance industry: challenges and opportunities

Probabilistic Programming

  • On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics
  • Exploration strategies in developmental robotics: a unified probabilistic framework
  • Probabilistic Programming for Robotics

Actuators

  • New design of a soft-robotics wearable elbow exoskeleton based on Shape Memory Alloy wires actuators
  • Design of a Modular Series Elastic Upgrade to a Robotics Actuator
  • Applications of Compliant Actuators to Wearing Robotics for Lower Extremity
  • Review of Development Stages in the Conceptual Design of an Electro-Hydrostatic Actuator for Robotics
  • Fluid electrodes for submersible robotics based on dielectric elastomer actuators
  • Cascaded Control Of Compliant Actuators In Friendly Robotics

Collaborative Robotics

  • Interpretable Models for Fast Activity Recognition and Anomaly Explanation During Collaborative Robotics Tasks
  • Collaborative Work Management Using SWARM Robotics
  • Collaborative Robotics : Assessment of Safety Functions and Feedback from Workers, Users and Integrators in Quebec
  • Accessibility, Making and Tactile Robotics : Facilitating Collaborative Learning and Computational Thinking for Learners with Visual Impairments
  • Trajectory Adaptation of Robot Arms for Head-pose Dependent Assistive Tasks

Mobile Robotics

  • Experimental research of proximity sensors for application in mobile robotics in greenhouse environment.
  • Multispectral Texture Mapping for Telepresence and Autonomous Mobile Robotics
  • A Smart Mobile Robot to Detect Abnormalities in Hazardous Zones
  • Simulation of nonlinear filter based localization for indoor mobile robot
  • Integrating control science in a practical mobile robotics course
  • Experimental Study of the Performance of the Kinect Range Camera for Mobile Robotics
  • Planification of an Optimal Path for a Mobile Robot Using Neural Networks
  • Security of Networking Control System in Mobile Robotics (NCSMR)
  • Vector Maps in Mobile Robotics
  • An Embedded System for a Bluetooth Controlled Mobile Robot Based on the ATmega8535 Microcontroller
  • Experiments of NDT-Based Localization for a Mobile Robot Moving Near Buildings
  • Hardware and Software Co-design for the EKF Applied to the Mobile Robotics Localization Problem
  • Design of a SESLogo Program for Mobile Robot Control
  • An Improved Ekf-Slam Algorithm For Mobile Robot
  • Intelligent Vehicles at the Mobile Robotics Laboratory, University of Sao Paolo, Brazil [ITS Research Lab]
  • Introduction to Mobile Robotics
  • Miniature Piezoelectric Mobile Robot driven by Standing Wave
  • Mobile Robot Floor Classification using Motor Current and Accelerometer Measurements

Sensors

  • Sensors for Robotics 2015
  • An Automated Sensing System for Steel Bridge Inspection Using GMR Sensor Array and Magnetic Wheels of Climbing Robot
  • Sensors for Next-Generation Robotics
  • Multi-Robot Sensor Relocation To Enhance Connectivity In A WSN
  • Automated Irrigation System Using Robotics and Sensors
  • Design Of Control System For Articulated Robot Using Leap Motion Sensor
  • Automated configuration of vision sensor systems for industrial robotics
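Several localization entries above (EKF-SLAM, nonlinear filtering) build on the Kalman filter's predict/update cycle. As a minimal illustration only, assuming a robot on a 1-D track with additive Gaussian noise, the sketch below shows that cycle; the noise variances q and r and the simple motion model are made-up simplifications:

```python
def kalman_1d(measurements, controls, q=0.01, r=0.25):
    """Scalar Kalman filter: x' = x + u (motion), z = x + noise (range sensor)."""
    x, p = 0.0, 1.0                     # initial position estimate and its variance
    estimates = []
    for z, u in zip(measurements, controls):
        x, p = x + u, p + q             # predict: apply control, uncertainty grows
        k = p / (p + r)                 # gain: how much to trust the measurement
        x += k * (z - x)                # update: correct by the innovation
        p *= (1 - k)                    # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Robot commanded to move +1.0 per step; sensor readings are noisy:
print(kalman_1d([1.2, 1.9, 3.1, 4.05], [1.0, 1.0, 1.0, 1.0]))
```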

Nano Robotics

  • Light Robotics: an all-optical nano-and micro-toolbox
  • Light-driven Nano-robotics
  • Light Robotics: a new technology and its applications
  • Light Robotics: Aiming towards all-optical nano-robotics
  • NanoBiophotonics Applications of Light Robotics
  • System Level Analysis for a Locomotive Inspection Robot with Integrated Microsystems
  • High-Dimensional Robotics at the Nanoscale Kino-Geometric Modeling of Proteins and Molecular Mechanisms
  • A Study Of Insect Brain Using Robotics And Neural Networks

Social Robotics

  • Integrative Social Robotics Hands-On
  • ProCRob Architecture for Personalized Social Robotics
  • Definitions and Metrics for Social Robotics, along with some Experience Gained in this Domain
  • Transmedia Choreography: Integrating Multimodal Video Annotation in the Creative Process of a Social Robotics Performance Piece
  • Co-designing with children: An approach to social robot design
  • Toward Social Cognition in Robotics: Extracting and Internalizing Meaning from Perception
  • Human Centered Robotics : Designing Valuable Experiences for Social Robots
  • Preliminary system and hardware design for Quori, a low-cost, modular, socially interactive robot
  • Socially assistive robotics: Human augmentation versus automation
  • Tega: A Social Robot

Humanoid Robots

  • Compliance Control and Human-Robot Interaction – International Journal of Humanoid Robotics
  • The Design of Humanoid Robot Using C# Interface on Bluetooth Communication
  • An Integrated System to approach the Programming of Humanoid Robotics
  • Humanoid Robot Slope Gait Planning Based on Zero Moment Point Principle
  • Literature Review Real-Time Vision-Based Learning for Human-Robot Interaction in Social Humanoid Robotics
  • The Roasted Tomato Challenge for a Humanoid Robot
  • Remotely teleoperating a humanoid robot to perform fine motor tasks with virtual reality

Cloud Robotics

  • CR3A: Cloud Robotics Algorithms Allocation Analysis
  • Cloud Computing and Robotics for Disaster Management
  • ABHIKAHA: Aerial Collision Avoidance in Quadcopter using Cloud Robotics
  • The Evolution Of Cloud Robotics: A Survey
  • Sliding Autonomy in Cloud Robotics Services for Smart City Applications
  • CORE: A Cloud-based Object Recognition Engine for Robotics
  • A Software Product Line Approach for Configuring Cloud Robotics Applications
  • Cloud robotics and automation: A survey of related work
  • ROCHAS: Robotics and Cloud-assisted Healthcare System for Empty Nester

Swarm Robotics

  • Evolution of Task Partitioning in Swarm Robotics
  • GESwarm: Grammatical Evolution for the Automatic Synthesis of Collective Behaviors in Swarm Robotics
  • A Concise Chronological Reassess Of Different Swarm Intelligence Methods With Multi Robotics Approach
  • The Swarm/Potential Model: Modeling Robotics Swarms with Measure-valued Recursions Associated to Random Finite Sets
  • The TAM: Abstracting complex tasks in swarm robotics research
  • Task Allocation in Foraging Robot Swarms: The Role of Information Sharing
  • Robotics on the Battlefield Part II
  • Implementation Of Load Sharing Using Swarm Robotics
  • An Investigation of Environmental Influence on the Benefits of Adaptation Mechanisms in Evolutionary Swarm Robotics
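A primitive underlying several of the swarm entries above (collective behaviors, task allocation) is a simple consensus or aggregation rule. The toy rendezvous step below is our own construction for illustration; the synchronous update and the gain parameter are assumptions, not a method taken from any listed paper:

```python
def rendezvous_step(positions, gain=0.05):
    """One synchronous consensus step: every robot nudges toward the swarm centroid."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [(x + gain * (cx - x), y + gain * (cy - y)) for x, y in positions]

swarm = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
for _ in range(100):                    # positions contract toward the centroid (2.0, 1.0)
    swarm = rendezvous_step(swarm)
print(swarm)
```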

Soft Robotics

  • Soft Robotics: The Next Generation of Intelligent Machines
  • Soft Robotics: Transferring Theory to Application ("Soft Components for Soft Robots")
  • Advances in Soft Computing, Intelligent Robotics and Control
  • The BRICS Component Model: A Model-Based Development Paradigm For ComplexRobotics Software Systems
  • Soft Mechatronics for Human-Friendly Robotics
  • Seminar Soft-Robotics
  • Special Issue on Open Source Software-Supported Robotics Research.
  • Soft Brain-Machine Interfaces for Assistive Robotics: A Novel Control Approach
  • Towards a Robot Hardware Abstraction Layer (R-HAL) Leveraging the XBot Software Framework

Service Robotics

  • Fundamental Theories and Practice in Service Robotics
  • Natural Language Processing in Domestic Service Robotics
  • Localization and Mapping for Service Robotics Applications
  • Designing of Service Robot for Home Automation-Implementation
  • Benchmarking Speech Understanding in Service Robotics
  • The Cognitive Service Robotics Apartment
  • Planning with Task-oriented Knowledge Acquisition for A Service Robot

Cognitive Robotics

  • Meta-Morphogenesis theory as background to Cognitive Robotics and Developmental Cognitive Science
  • Experience-based Learning for Bayesian Cognitive Robotics
  • Weakly supervised strategies for natural object recognition in robotics
  • Robotics-Derived Requirements for the Internet of Things in the 5G Context
  • A Comparison of Modern Synthetic Character Design and Cognitive Robotics Architecture with the Human Nervous System
  • PREGO: An Action Language for Belief-Based Cognitive Robotics in Continuous Domains
  • The Role of Intention in Cognitive Robotics
  • On Cognitive Learning Methodologies for Cognitive Robotics
  • Relational Enhancement: A Framework for Evaluating and Designing Human-Robot Relationships
  • A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering
  • Spatial Cognition in Robotics
  • IOT Based Gesture Movement Recognize Robot

Autonomous Robotics

  • Deliberative Systems for Autonomous Robotics: A Brief Comparison Between Action-oriented and Timelines-based Approaches
  • Formal Modeling and Verification of Dynamic Reconfiguration of Autonomous Robotics Systems
  • Robotics on its feet: Autonomous Climbing Robots
  • Implementation of Autonomous Metal Detection Robot with Image and Message Transmission using Cell Phone
  • Toward autonomous architecture: The convergence of digital design, robotics, and the built environment

Automation

  • Advances in Robotics Automation
  • Data-centered Dependencies and Opportunities for Robotics Process Automation in Banking
  • On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications
  • Meshworm with Segment-Bending Anchoring for Colonoscopy. IEEE Robotics and Automation Letters, 2(3), pp. 1718–1724.
  • Recent Advances in Robotics and Automation
  • Key Elements Towards Automation and Robotics in Industrialised Building System (IBS)

Educational Robotics

  • Knowledge Building, Innovation Networks, and Robotics in Math Education
  • The potential of a robotics summer course On Engineering Education
  • Robotics as an Educational Tool: Impact of Lego Mindstorms
  • Effective Planning Strategy in Robotics Education: An Embodied Approach
  • An innovative approach to School-Work turnover programme with Educational Robotics
  • The importance of educational robotics as a precursor of Computational Thinking in early childhood education
  • Pedagogical Robotics A way to Experiment and Innovate in Educational Teaching in Morocco
  • Learning by Making and Early School Leaving: an Experience with Educational Robotics
  • Robotics and Coding: Fostering Student Engagement
  • Computational Thinking with Educational Robotics
  • New Trends In Education Of Robotics
  • Educational robotics as an instrument of formation: a public elementary school case study
  • Developmental Situation and Strategy for Engineering Robot Education in China University
  • Towards the Humanoid Robot Butler
  • YAGI-An Easy and Light-Weighted Action-Programming Language for Education and Research in Artificial Intelligence and Robotics
  • Simultaneous Tracking and Reconstruction (STAR) of Objects and its Application in Educational Robotics Laboratories
  • The importance and purpose of simulation in robotics
  • An Educational Tool to Support Introductory Robotics Courses
  • Lollybot: Where Candy, Gaming, and Educational Robotics Collide
  • Assessing the Impact of an Autonomous Robotics Competition for STEM Education
  • Educational robotics for promoting 21st century skills
  • New Era for Educational Robotics: Replacing Teachers with a Robotic System to Teach Alphabet Writing
  • Robotics as a Learning Tool for Educational Transformation
  • The Herd of Educational Robotic Devices (HERD): Promoting Cooperation in Robotics Education
  • Robotics in physics education: fostering graphing abilities in kinematics
  • Enabling Rapid Prototyping in K-12 Engineering Education with BotSpeak, a Universal Robotics Programming Language
  • Innovating in robotics education with Gazebo simulator and JdeRobot framework
  • How to Support Students’ Computational Thinking Skills in Educational Robotics Activities
  • Educational Robotics At Lower Secondary School
  • Evaluating the impact of robotics in education on pupils’ skills and attitudes
  • Imagining, Playing, and Coding with KIBO: Using Robotics to Foster Computational Thinking in Young Children
  • How Does a First LEGO League Robotics Program Provide Opportunities for Teaching Children 21st Century Skills
  • A Software-Based Robotic Vision Simulator For Use In Teaching Introductory Robotics Courses
  • Robotics Practical
  • A project-based strategy for teaching robotics using NI’s embedded-FPGA platform
  • Teaching a Core CS Concept through Robotics
  • Ms. Robot Will Be Teaching You: Robot Lecturers in Four Modes of Automated Remote Instruction
  • Robotic Competitions: Teaching Robotics and Real-Time Programming with LEGO Mindstorms
  • Visegrad Robotics Workshop-different ideas to teach and popularize robotics
  • LEGO® Mindstorms® EV3 Robotics Instructor Guide
  • MOKASIT: Multi Camera System for Robotics Monitoring and Teaching
  • Autonomous Robot Design and Build: Novel Hands-on Experience for Undergraduate Students
  • Semi-Autonomous Inspection Robot
  • Sumo Robot Competition
  • Engagement of students with Robotics-Competitions-like projects in a PBL Bsc Engineering course
  • Robo Camp K12 Inclusive Outreach Program: A Three-Step Model for Effectively Introducing Middle School Students to Computer Programming and Robotics
  • The Effectiveness of Robotics Competitions on Students’ Learning of Computer Science
  • Engaging with Mathematics: How mathematical art, robotics and other activities are used to engage students with university mathematics and promote
  • Design Elements of a Mobile Robotics Course Based on Student Feedback
  • Sixth-Grade Students’ Motivation and Development of Proportional Reasoning Skills While Completing Robotics Challenges
  • Student Learning of Computational Thinking in A Robotics Curriculum: Transferrable Skills and Relevant Factors
  • A Robotics-Focused Instructional Framework for Design-Based Research in Middle School Classrooms
  • Transforming a Middle and High School Robotics Curriculum
  • Geometric Algebra for Applications in Cybernetics: Image Processing, Neural Networks, Robotics and Integral Transforms
  • Experimenting and validating didactical activities in the third year of primary school enhanced by robotics technology

Construction

  • Bibliometric analysis on the status quo of robotics in construction
  • AtomMap: A Probabilistic Amorphous 3D Map Representation for Robotics and Surface Reconstruction
  • Robotic Design and Construction Culture: Ethnography in Osaka University’s Miyazaki Robotics Lab
  • Infrastructure Robotics: A Technology Enabler for Lunar In-Situ Resource Utilization, Habitat Construction and Maintenance
  • A Planar Robot Design And Construction With Maple
  • Robotics and Automation in Construction: Advanced Construction and Future Technology
  • Why robotics in mining
  • Examining Influences on the Evolution of Design Ideas in a First-Year Robotics Project
  • Mining Robotics
  • TIRAMISU: Technical survey, close-in-detection and disposal mine actions in Humanitarian Demining: challenges for Robotics Systems
  • Robotics for Sustainable Agriculture in Aquaponics
  • Design and Fabrication of Crop Analysis Agriculture Robot
  • Enhance Multi-Disciplinary Experience for Agriculture and Engineering Students with Agriculture Robotics Project
  • Work in progress: Robotics mapping of landmine and UXO contaminated areas
  • Robot Based Wireless Monitoring and Safety System for Underground Coal Mines using Zigbee Protocol: A Review
  • Minesweepers uses robotics’ awesomeness to raise awareness about landmines and explosive remnants of war
  • Intelligent Autonomous Farming Robot with Plant Disease Detection using Image Processing
  • Automatic Pick and Place Robot
  • Video Prompting to Teach Robotics and Coding to Students with Autism Spectrum Disorder
  • Bilateral Anesthesia Mumps After Robot-Assisted Hysterectomy Under General Anesthesia: Two Case Reports
  • Future Prospects of Artificial Intelligence in Robotics Software, A healthcare Perspective
  • Designing new mechanism in surgical robotics
  • Open-Source Research Platforms and System Integration in Modern Surgical Robotics
  • Soft Tissue Robotics–The Next Generation
  • CORVUS Full-Body Surgical Robotics Research Platform
  • OP: Sense, a rapid prototyping research platform for surgical robotics
  • Preoperative Planning Simulator with Haptic Feedback for Raven-II Surgical Robotics Platform
  • Origins of Surgical Robotics: From Space to the Operating Room
  • Accelerometer Based Wireless Gesture Controlled Robot for Medical Assistance using Arduino Lilypad
  • The preliminary results of a force feedback control for Sensorized Medical Robotics
  • Medical robotics Regulatory, ethical, and legal considerations for increasing levels of autonomy
  • Robotics in General Surgery
  • Evolution of Minimally Invasive Surgery: Conventional Laparoscopy to Robotics
  • Robust trocar detection and localization during robot-assisted endoscopic surgery
  • How can we improve the Training of Laparoscopic Surgery thanks to the Knowledge in Robotics
  • Discussion on robot-assisted laparoscopic cystectomy and Ileal neobladder surgery preoperative care
  • Robotics in Neurosurgery: Evolution, Current Challenges, and Compromises
  • Hybrid Rendering Architecture for Realtime and Photorealistic Simulation of Robot-Assisted Surgery
  • Robotics, Image Guidance, and Computer-Assisted Surgery in Otology/Neurotology
  • Neuro-robotics model of visual delusions
  • Neuro-Robotics
  • Robotics in the Rehabilitation of Neurological Conditions
  • What if a Robot Could Help Me Care for My Parents
  • A Robot to Provide Support in Stigmatizing Patient-Caregiver Relationships
  • A New Skeleton Model and the Motion Rhythm Analysis for Human Shoulder Complex Oriented to Rehabilitation Robotics
  • Towards Rehabilitation Robotics: Off-The-Shelf BCI Control of Anthropomorphic Robotic Arms
  • Rehabilitation Robotics 2013
  • Combined Estimation of Friction and Patient Activity in Rehabilitation Robotics
  • Brain, Mind and Body: Motion Behaviour Planning, Learning and Control in view of Rehabilitation and Robotics
  • Reliable Robotics – Diagnostics
  • Robotics for Successful Ageing
  • Upper Extremity Robotics Exoskeleton: Application, Structure And Actuation

Defence and Military

  • Voice Guided Military Robot for Defence Application
  • Design and Control of Defense Robot Based On Virtual Reality
  • AI, Robotics and Cyber: How Much will They Change Warfare
  • Border Security Robot
  • Brain Controlled Robot for Indian Armed Force
  • Autonomous Military Robotics
  • Wireless Restrained Military Discoursed Robot
  • Bomb Detection And Defusion In Planes By Application Of Robotics
  • Impacts Of The Robotics Age On Naval Force Design, Effectiveness, And Acquisition

Space Robotics

  • Lego robotics teacher professional learning
  • New Planar Air-bearing Microgravity Simulator for Verification of Space Robotics Numerical Simulations and Control Algorithms
  • The Artemis Rover as an Example for Model Based Engineering in Space Robotics
  • Rearrangement planning using object-centric and robot-centric action spaces
  • Model-based Apprenticeship Learning for Robotics in High-dimensional Spaces
  • Emergent Roles, Collaboration and Computational Thinking in the Multi-Dimensional Problem Space of Robotics
  • Reaction Null Space of a multibody system with applications in robotics

Other Industries

  • Robotics in clothes manufacture
  • Recent Trends in Robotics and Computer Integrated Manufacturing: An Overview
  • Application Of Robotics In Dairy And Food Industries: A Review
  • Architecture for theatre robotics
  • Human-multi-robot team collaboration for efficient warehouse operation
  • A Robot-based Application for Physical Exercise Training
  • Application Of Robotics In Oil And Gas Refineries
  • Implementation of Robotics in Transmission Line Monitoring
  • Intelligent Wireless Fire Extinguishing Robot
  • Monitoring and Controlling of Fire Fighting Robot using IOT
  • Robotics An Emerging Technology in Dairy Industry
  • Robotics and Law: A Survey
  • Increasing ECE Student Excitement through an International Marine Robotics Competition
  • Application of Swarm Robotics Systems to Marine Environmental Monitoring

Future of Robotics / Trends

  • The future of Robotics Technology
  • Robotics & Automation Are Killing Jobs: A Roadmap for the Future is Needed
  • The next big thing(s) in robotics
  • Robotics in Indian Industry-Future Trends
  • The Future of Robot Rescue Simulation Workshop
  • Quantum Robotics: Primer on Current Science and Future Perspectives
  • Emergent Trends in Robotics and Intelligent Systems


Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure

  • Open access
  • Published: 29 April 2022
  • Volume 2, article number 8 (2022)


  • Michael O. Macaulay (ORCID: orcid.org/0000-0003-2027-0545)
  • Mahmood Shafiee


Machine learning and in particular deep learning techniques have demonstrated the most efficacy in training, learning, analyzing, and modelling large complex structured and unstructured datasets. These techniques have recently been commonly deployed in different industries to support robotic and autonomous system (RAS) requirements and applications, ranging from planning and navigation to machine vision and robot manipulation in complex environments. This paper reviews the state of the art in RAS technologies (including unmanned marine robot systems, unmanned ground robot systems, climbing and crawler robots, unmanned aerial vehicles, and space robot systems) and their application to the inspection and monitoring of mechanical systems and civil infrastructure. We explore the various types of data provided by such systems and the analytical techniques adopted to process and analyze these data. This paper provides a brief overview of machine learning and deep learning techniques and, more importantly, a classification of the literature that has reported the deployment of such techniques for RAS-based inspection and monitoring of utility pipelines, wind turbines, aircraft, power lines, pressure vessels, bridges, etc. Our research provides documented information on the use of advanced data-driven technologies in the analysis of critical assets and examines the main challenges to the application of such technologies in industry.



1 Introduction

There is considerable literature concerning the deterioration of critical systems and infrastructure around the world, and the resulting health and safety implications, whether these are roads, bridges, or energy-related infrastructure. As reported by [ 1 ], there are at least 150,000 bridges in the United States alone that have lost their structural integrity and are no longer fit for purpose. Mechanical systems and civil infrastructure, deemed critical assets by both government and industry, are vulnerable to damage mechanisms which can adversely affect social services and the overall productivity of an economy.

This has made regular inspection and maintenance standard practice. The operation and maintenance (O&M) costs resulting from standardized inspection and maintenance practices are considerable for both government and industry. O&M cost accounts for a large proportion of lifecycle costs in critical systems; for instance, O&M expenditure in the wind energy industry amounts to 25%-30% of total costs [ 2 ]. A key challenge to conventional maintenance and inspection practices for civil infrastructure and mechanical systems is that most of the methods and protocols employed are bureaucratic and labour intensive. The inspection and monitoring of assets are usually undertaken manually, with technicians and operators sometimes having to travel to locations hundreds of miles away. In some cases, operators and technicians must work in harsh environments subject to heat, cold, noise, wetness, dryness, etc. In other cases, the location may be inaccessible to human technicians, as in the case of large storage tanks or underground pipelines.

Technological advancements and the emergence of robotics and autonomous systems (RAS) have begun to revolutionize the monitoring and inspection of mechanical systems and civil infrastructure. This revolution has created interest in and demand for the use of RAS technologies to support the monitoring, inspection and maintenance of offshore wind farms, gas and utility pipelines, power lines, bridges, railways, high-rise buildings, vessels, storage tanks, underwater infrastructure, etc., in order to mitigate the health and safety risks that human operators currently face while inspecting or monitoring such infrastructure within the energy, transport, aerospace and manufacturing sectors [ 3 ]. There is a drive in both industry and government for the development and availability of RAS technologies that can be deployed to provide data on the condition of assets and help technicians undertake whatever actions are deemed necessary, based on the information provided by RAS. This information can be signals provided by hardware instruments, or images taken by cameras of damaged, shadowed, rough, or rusty surfaces.

Robot inspections have proven to be more efficient and faster than human inspections. For instance, the inspection of wind turbines using unmanned aerial vehicles (UAVs) takes considerably less time than conventional visual inspection [ 3 ]. As indicated in a case study reported by [ 4 ], the traditional rope-access method can inspect only one wind turbine per day, whereas a UAV can inspect up to three wind turbines in a day. Vast amounts of data in diverse formats (such as audio, video, or digital codes) can be collected by the numerous RAS technologies deployed to monitor and inspect infrastructure. However, it would be quite time-consuming, if not impossible, for human operators to analyze this volume of incoming data using conventional computing models. Machine learning (ML) techniques provide advanced computational tools to process and analyze the data provided by RAS technologies efficiently, speedily, and accurately. The evolution from teleoperated robot systems, which require remote human control, to autonomous systems, which when pre-programmed can operate without human intervention, has helped in the maturity and ascendance of RAS technologies, removing the need for travel, bureaucratic paperwork requirements, etc. While there might still be a number of RAS technologies that work offline, a growing number are wireless, remotely transferring data, e.g., images of structures and materials, to a control office through inter-networks for analysis.

The aim of this paper is to provide an academic contribution by reporting on literature and research related to the use of ML in RAS-based inspection and monitoring of mechanical systems and civil infrastructure. It also proposes a classification and analysis of the different ML techniques used for the analysis of data yielded from RAS-based inspections. This means that the research in this paper investigates and identifies which study, in which literature, has used which ML technique to support the different RAS technologies deployed for inspection purposes. To achieve this aim, we identify the relevant literature with keywords including: robotics, inspection, machine learning, maintenance, mechanical engineering, civil infrastructure, and asset. We also provide a review and classification of ML techniques; the types of damage mechanisms being considered, e.g., corrosion, erosion, fatigue, cracks, etc.; the types of inspections; and the robotic platforms that have been used to support both industry and academic research. In addition, a review is conducted on the characteristics of datasets collected during RAS inspections of civil and mechanical infrastructure, including: sources of data (public or non-public); types of data (e.g., image, video, documents, etc.); size of data; velocity or rate of data generation and transmission; and variety of data (structured or unstructured). Following on from this, there is an evaluation of the results and findings. Finally, there is an exploration of potential developments in RAS for the inspection and monitoring of future assets.

The rest of this paper is organized as follows. Section 2 reviews different types of RAS technologies that have been proposed and designed to support the inspection and monitoring of mechanical systems and civil infrastructure. Section 3 reviews the characteristics of the data collected by RAS systems for inspection and monitoring purposes. Section 4 reviews the various types of ML techniques that can be and have been used to process and analyze data from RAS inspections. Section 5 discusses the findings of the literature review undertaken in this research, and finally, Section 6 reviews some of the current technology gaps and challenges in the application of ML techniques for RAS-based inspection of mechanical systems and civil infrastructure. The organization of this literature review is schematically illustrated in Fig. 1.

Figure 1. Schematic illustration of the organization of the literature review

2 RAS technologies for monitoring and inspection

Today, a variety of robotic and autonomous systems are being developed and deployed in various industries, including aerospace, manufacturing, energy, transport, agriculture, healthcare, etc. RAS systems are widely used to support the monitoring, maintenance and inspection of mechanical systems and civil infrastructure. These technologies are provided with artificial intelligence (AI) and machine learning (ML) capabilities to enable and complete complex tasks, as well as to process vast amounts of data. The mechanical design of an RAS system used to support inspection, monitoring and maintenance purposes can be categorized by its specific locomotion and adhesion mechanisms. The choice of an inspection robot’s locomotion and adhesion mechanisms is sometimes offset against task- or application-specific requirements such as payload, power requirements, velocity, and mobility [ 5 , 6 ].

Locomotion in robotics refers to directional movement that makes it possible for an object to move from one location to another; it is the mechanism that makes a robot capable of moving in its environment. The literature states that there are four main types of locomotion with which a robot system could be fitted, depending on the task and environment it is being built to support [ 5 ]. These four locomotion types are: arms and legs, wheels and chains, sliding frames, and wires and rails. Considering the pros and cons of each type, arm- and leg-based robots are better suited to maneuvering around obstacles in the environment than other locomotion systems. Conversely, wheel- and chain-driven locomotion is best suited to environments with a flat and even surface and ill-suited to navigating obstacles. The sliding frame locomotion mechanism comprises a mechanical design with two frames which move against one another in rotation; this design, however, provides only low speeds. Finally, locomotion involving wires and rails comprises a simple system where the robot is held in place by wires and rails [ 5 ]. Adhesion in robotics refers to the mechanism by which robot systems can attach or cling to surfaces in their environment. Common adhesion mechanisms for RAS systems that support inspection tasks range from magnetic adhesion and pneumatic adhesion (whether of the passive suction cup, active suction chamber, or vortex thrust type) to vacuum suckers, propellers, and dry adhesion.

In the following subsections, we review RAS-based inspection systems that are currently being used in different industry sectors. These systems range from platforms operating below sea level to those operating within the troposphere (ground level to about 10-20 km above sea level) and the ones purposed to operate in the thermosphere and exosphere (space and beyond). We therefore suggest five robot categories, including: unmanned marine robots, ground-based robots, climbing and crawler inspection robots, unmanned aerial robots, and space inspection robot systems.

2.1 Unmanned marine robot systems

Some literature uses the term ‘unmanned marine vehicles (UMV)’ as an umbrella term for unmanned surface vehicles (USV) and unmanned underwater vehicles (UUV) [ 6 ]. UUVs can be further classified as either autonomous underwater vehicles (AUV) or remotely operated vehicles (ROV) [ 7 ].

AUVs are unmanned, pre-programmed robot vehicles deployed into the ocean depths autonomously, without the support of cabling or human intervention. When an AUV completes its task, it returns to a pre-programmed location, where its data can be retrieved, downloaded, processed, and analyzed. An ROV is an unmanned robot which is also deployed into ocean depths; the difference is that an ROV is connected to a ship by cables. An operator located on the ship pilots the ROV, and the cables attached to it are used to transmit commands and data between the operator and the robot. AUVs can be deployed to support the inspection of hazardous objects, the surveying and mapping of wrecks, and deep underwater infrastructure (e.g., subsea cables). ROVs are usually deployed into deep-water environments that are dangerous or challenging for human divers. Both AUV and ROV robot systems are fitted with and supported by a variety of sensors to collect data. The data provided may be used for military or civilian surveys, inspections, surveillance, and exploration purposes. AUVs and ROVs are usually equipped with cameras for obtaining video images underwater. ROVs use cameras to transmit video telemetry to human operators for analysis and decision-making. Sound navigation and ranging (sonar) and fiber optic gyros (FOG) support object detection, obstacle avoidance and navigation. ROVs might also be fitted with robotic arms for collecting underwater samples [ 8 ].

2.2 Unmanned ground robot systems

Unmanned ground-based robots operate autonomously on ground surfaces. They are sometimes referred to as mobile robots in the academic literature, or alternatively as unmanned ground vehicles (UGV) or land-based robots. Ground-based robots are also sometimes categorized in the literature based on their locomotion, which among other criteria reflects the environment they are deployed into, usually even, stable environments. A ground-based robot has the advantage of being able to carry large payloads where appropriate; however, the disadvantage of these robot types is their limited mobility on uneven terrain [ 6 ].

Unmanned ground-based robots can be categorized as wheeled robots, walking (or legged) robots, tracked robots, or hybrids of wheeled, legged and tracked designs [ 9 ]. Wheeled robots navigate on the ground using motorized wheels to propel themselves [ 9 ]. The literature states that there are four types of wheel, which can be differentiated by the number of degrees of freedom (DOF) they hold. DOF is defined as the number of independent variables that can define the motion or position of an object (or mechanism) in space. These four types are the fixed standard wheel, the castor wheel, the Swedish wheel, and the ball or spherical wheel. There are also several types of wheeled robots, including the single-wheeled robot, two-wheeled robot, three-wheeled robot, and so on, each with their own unique mobility characteristics [ 9 ].

Legged (or walking) robots, unlike wheeled robots, navigate on both even and uneven surfaces, hard and soft surfaces, and can detect obstacles in their path or environment. Legged robots can be classified as one-legged (hoppers), two-legged (humanoid), three-legged, four legged (quadruped), five-legged, six-legged (hexapod), and so on [ 9 ]. Hybrid ground-based robots are robots that combine legged, wheel and track locomotion systems in any given configuration [ 9 ].

The applications of unmanned ground robots are numerous, ranging from the nuclear industry, where human operators are replaced by robots to operate in radioactive environments, to military operations for surface repairs, navigating minefields, explosive ordnance disposal (EOD), carrying and transporting payloads, etc. Other state-of-the-art applications include reconnaissance, surveillance, and target acquisition operations, and space exploration, as in the case of NASA’s planetary rovers [ 10 ]. Unmanned ground-based vehicles are also fitted with an array of sensor payload options to support autonomous operations, navigation through the environment and data collection. Cameras are used to scan the robot’s environment and support calculation of its position. Furthermore, motion detectors, infrared (IR) sensors, and temperature and contact sensors support object detection, obstacle and collision avoidance and obstacle localization. Laser range finder sensors, which use a laser beam to generate distance measurements, producing range data, also support object detection and obstacle avoidance [ 8 ].

2.3 Climbing and crawler inspection robots

Wall climbing and crawler robots were developed for movement on vertical surfaces for the inspection and maintenance of a range of assets such as storage tanks, nuclear power facilities, and high-rise buildings [ 11 , 12 ]. Oil refineries contain storage tanks that require cleaning, along with routine inspection and non-destructive testing (NDT) to check for cracks and leaks. The traditional, manual implementation of these routine inspection and maintenance tasks results in very high labor and financial costs. The development and deployment of climbing and crawling robot systems can help automate these tasks [ 12 ].

Climbing robots adopt an adhesion mechanism based on the type of environment they are deployed into [ 6 ]. These robots employ magnetic adhesion or pneumatic (negative pressure) adhesion, depending on the suction or thrust type. According to the literature, climbing robot systems are usually fitted with arm and leg locomotion mechanisms. The number of arms and legs can vary from two to eight, albeit eight-legged robots are not as common. Alternatively, climbing and crawler robots can be fitted with wheeled or chain-driven locomotion [ 5 ]. The adhesion mechanisms available today make climbing and crawling robots capable of attaching to structures and materials, while also providing a reliable platform for attached payloads and tools [ 6 ].

The literature reports some information about the velocity and mobility of climbing and crawler robots. Climbing robot systems might need to reach high velocity on a vertical plane for optimal movement between inspection locations. With respect to mobility, climbing robot systems which are fitted with arms and legs can navigate uneven surfaces, steps, and other objects in the environment [ 5 ]. When considering the payload requirements for climbing and crawler robots, sensors such as ultrasonic sensors, gravity sensors, acceleration sensors, etc. are used to measure and provide data about the distance of objects or obstacles ahead [ 5 , 6 , 11 – 13 ]. The literature provides varied advice on payload capacity, ranging between 10 kg and 30 kg. Obviously, the weight requirements will depend on the tasks that the robot system is deployed to complete [ 5 , 11 , 12 ]. However, robotics engineers must engage with the problem that climbing and crawling robots need to suitably offset the take-up of heavy payloads against securing adhesion on the challenging surfaces they are designed for [ 6 ].

2.4 Unmanned aerial robot systems (or drones)

Unmanned aerial robot systems are interchangeably referred to as unmanned aerial vehicles (UAVs), and more commonly as drones, in the literature. Various disciplines, ranging from environmental monitoring to civil engineering, are increasingly deploying drones to support inspection-type applications [ 14 ]. This is because research has consistently shown that the use of drones for inspection-based tasks reduces the need for human actors, their risk of and exposure to injury and fatality, and maintenance and downtime costs [ 14 – 16 ]. Seo et al . [ 14 ] indicate that the selection of a drone for a particular application is based on criteria including mission duration, battery life, camera and video resolution, payload capacity, GPS and collision avoidance, and cost performance [ 14 ]. Locomotion in drones is provided by propellers, referred to as rotors in certain literature. The terms propeller and rotor are used interchangeably in the literature, although technically a rotor could be considered a horizontal propeller (such as those mounted on a helicopter), while a propeller could be the vertical rotor mounted on an airplane; nevertheless, they are identical objects seen from different angles. A propeller propels an object, using thrust as the force for horizontal movement and lift for vertical movement, providing a vertical take-off and landing (VTOL) capability. Motors power the propellers and spin them at high speeds, which in turn creates the thrust or lift that provides the drone with the required locomotion [ 15 ]. Lattanzi et al . [ 6 ] contended that drones trade reduced stability and payload capacity for the advantage of increased movement and mobility [ 6 ]. In the literature, flight time or duration of operation is inextricably linked to battery life or the number of batteries available in a drone. The battery provides the electricity to power the drone, and most drones are only capable of providing enough power to support 20-30 minutes of flight time [ 14 ].

Drones are described in the literature as technology platforms that can support a variety of applications and carry a variety of sensor payloads. Sensor payloads can vary from thermal and infrared to optical cameras. Light detection and ranging (LIDAR) is used to measure and provide data on the distance of objects, and, with radar, the angle and velocity of objects as well [ 16 ]. The literature indicates that the higher the number of propellers (or rotors) supporting the drone, the greater the drone’s payload capacity, although there are caveats to this guidance, one of which is that increased payload results in a decrease in the drone’s flight time and range capacity [ 17 ]. Drones can support payloads from 150 g to 830 g, depending on battery life [ 17 ]. Literature and research that have deployed drones for infrastructure inspection have indicated the use of commercial cameras with resolutions of 12 to 18 megapixels, with each 15-minute flight providing over 1200 images [ 18 ].

2.5 Space inspection robot systems

RAS systems have increasingly become a critical aspect of space technology, supporting a variety of space missions. Space robots can be classified as either small or large manipulators, or humanoid robots [ 19 ]. Space robots, also referred to as space manipulators in some literature, experience reduced gravitational forces; in most cases, these robots rotate, hover, and glide in orbit [ 20 ]. The literature demonstrates two types of applications for space-based robotics: on-orbit assembly and on-orbit maintenance (OMM). OMM applications involve repair, refueling, debris removal, inspection, etc.

Since the scope of this paper is concerned with inspection-purposed platforms, we review only those space robot systems that are developed to support inspection tasks. This includes the development of the orbiter boom sensor system (OBSS) by the Canadian Space Agency (CSA). The OBSS was deployed to inspect the façade of the thermal protection system of space shuttles [ 19 ]. Space robots can be deployed to ferry payloads ranging from kilograms to tons on space installations [ 21 ].

[ 22 ] provided a description of a teleoperated robotic flying camera called the Autonomous Extra Vehicular Robotic Camera (AERCam), used to support astronauts by providing them with a way to inspect and monitor the shuttle and space station. The first version of the robot, called AERCam Sprint, was deployed on a shuttle in 1997 and was fitted with a ring of twelve infrared detectors and two color cameras to enable vision capability [ 22 ]. [ 23 ] reported on a space inspection robot developed by NASA, called Tendril. The Tendril is a manipulator-type robot purposed to support space missions by inspecting difficult-to-reach locations, e.g., fissures, craters, etc. Nishida et al . [ 24 ] reported on a prototype model of what they describe as an ‘end-effector’ for an inspection and diagnosis space robot. Pedersen et al . [ 25 ] mentioned a space robot called Inspector, designed to inspect the Mir station, indicating however that it failed while in flight.

3 Features of data collected from RAS-based inspections

This section reviews the characteristics of data collected by RAS technologies, which are then processed and analyzed by analytical methods and techniques to support the monitoring and inspection of mechanical systems and civil infrastructure. We will explore the characteristics of input data collected by RAS systems using the four Vs data model [ 26 – 29 ]. Literature indicates that data can be categorized through its volume, which refers to the quantity of data that can be generated and stored; veracity, which refers to the quality of data collected as input; velocity, which refers to the speed at which data can be produced; and variety, which refers to the type and format of data collected. These four characteristics are briefly described below:

3.1 Volume (quantity or size of data)

Meyrowitz et al . [ 8 ] advised that there is a direct relationship between the type of sensor fitted to a robotic autonomous system and the volume of data produced. This reflects the fact that certain sensors inherently generate larger quantities of data than others; e.g., cameras, which produce video data, can generate millions of bits of data.

3.2 Variety (type and format of data)

The literature review showed that most research papers that have deployed climbing and crawler robots fit them with an array of sensors to collect a variety of data types. These data types include sound waves and their distance to an object, using ultrasonic sensors, and acceleration and velocity, using accelerometers and gravity sensors [ 13 ]. Climbing and crawling robots also collect image and video data using cameras [ 11 , 13 ]. A literature review of the types of data collected by UAVs demonstrated that they have been purposed to collect image and video data [ 15 , 18 , 30 – 32 ]. In a study by Alharam et al . [ 33 ], UAVs were also purposed to collect data on gas leakage, specifically methane (CH4), from oil and gas pipelines. The types of data collected by UUVs, specifically ROVs for underwater inspection, include image and video data, and angular, velocity, orientation, depth, and pressure data, collected by optical and gyro sensors [ 7 , 13 ]. The types of data collected by unmanned ground robots (UGRs) vary from images and videos collected by cameras, to distance measurements collected by range finder sensors, to sound wave data collected by ultrasonic sensors [ 10 ].

3.3 Velocity (speed of data generation)

While the literature indicates that certain types of sensors produce higher volumes of data than others, it also indicates that the speed of data generation and transmission has a direct correlation with the transmission medium or link used, and sometimes with the environment the data is transferred within [ 8 ]. Except for on-board RAS data processing and analysis, data rates are slowed depending on the environment; Meyrowitz et al . [ 8 ] demonstrated that, with current technology, underwater RAS systems operate in an environment that reduces the rate of data transmission [ 8 ].

3.4 Veracity (quality and accuracy of data)

The literature indicates that the quality and accuracy of data can be directly linked to the frequency and type of transmission link used for data on RAS systems. In the case of UUVs, Meyrowitz et al . [ 8 ] provided the example of sonar imaging, where the combination of low frequency and poor transmission links results in reduced resolution and substantial interference, which requires cleaning of the image and acoustic data [ 8 ]. This has led to research into the development of better, high-performance data transmission links, ranging from fibre optics to laser links. Alternatively, RAS systems with ML technology on board can process and analyze data with greater veracity, because the data has not yet been subject to the degradation directly linked to transmitting the input data.

4 Machine learning techniques

A review of the literature provides a distinct categorization of ML algorithms. These algorithms are referred to as either supervised learning, unsupervised learning, or reinforcement learning algorithms. This section provides an overview of some of the popular ML techniques documented in the literature and used to process and analyze data collected from RAS-based inspection operations.

4.1 Supervised learning

Supervised learning techniques are used to provide prediction-based solutions for problems that can be categorized as either classification or regression. These techniques require vast amounts of labelled data as input. In this approach to ML, the outputs (sometimes referred to as targets in the literature) are pre-determined and directed towards interpretation and prediction. The dataset provided is separated into a training set and a test set, and is labelled with features of interest. When trained, the system can identify and apply these labels to new data; e.g., when taken through the supervised ML process, a system can be trained to identify new images of an object. Therefore, given input (x) and output (y), a supervised learning algorithm can learn the mapping function y = f(x), so that given a new input (x), it can predict the output (y) [ 34 – 37 ].

There are two types of supervised learning approaches to ML. The first is the regression learning process, where a model predicts a continuous quantity based on its input variables. Regression predicts continuous values such as height or weight; these values are referred to as continuous because they can take any of an infinite number of possible values [ 38 , 39 ]. The second is the classification-based supervised learning process, where the output or target is categorical (a discrete or finite number of distinct groups or classes). Classification refers to the process of identifying a model that takes a given dataset and sorts the data within it into distinct classes or labels. Classification models in supervised machine learning are often described as a technique for predicting a class or label [ 34 – 37 , 40 , 41 ].

While most ML algorithms can be applied to solve both classification and regression problems, algorithms best suited to classification-based problems include K-nearest neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, naive Bayes, random forests, and artificial neural networks (ANN). Conversely, algorithms best suited to regression-based problems include linear regression as well as random forests [ 39 ]. These techniques are briefly reviewed in the following.

4.1.1 Linear regression

Linear regression is a supervised model or algorithm used to predict the value of a variable based on the value of another variable. The model assumes a linear relationship between the input variables (x) and a single output (y), such that the output (y) can be calculated from a linear combination of the input variable (x). This is referred to as simple linear regression; with multiple input variables, the literature describes the model as multiple linear regression. Linear regression models fit a straight line to a dataset to describe the relationship between two variables.
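
As a minimal sketch (not taken from the paper, and assuming scikit-learn is available), a simple linear regression can be fitted to synthetic, purely illustrative data as follows:

```python
# Minimal sketch: fitting a simple linear regression with scikit-learn.
# The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # single input variable x
y = np.array([2.1, 4.2, 5.9, 8.1])           # output y, roughly y = 2x

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)          # learned slope and offset
print(model.predict([[5.0]]))                 # predict y for a new x
```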

4.1.2 Support vector machine (SVM)

SVM is a supervised machine learning algorithm that can be applied to both classification and regression problems. The SVM algorithm identifies a “decision boundary” or “hyperplane” to separate a dataset into two distinct classes, attempting to maximize the distance between the nearest data points of the two classes within the dataset. Support vectors are the data points nearest to the decision boundary; a change in the position of the support vectors results in a change in the position of the decision boundary. The greater the distance of the data points from the decision boundary, the more certain their classification. The distance between the decision boundary and the nearest data point is called the margin. SVM is very accurate and works very well with small datasets; however, with large datasets it usually results in longer training times [ 33 , 35 , 42 – 44 ].
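
A minimal sketch of an SVM classifier, assuming scikit-learn and using synthetic two-class data; the maximum-margin boundary and support vectors are computed internally by the fit:

```python
# Minimal sketch: a linear SVM separating two synthetic clusters of points.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear").fit(X, y)
print(clf.support_vectors_[:3])   # data points nearest the decision boundary
print(clf.predict(X[:5]))         # predicted class labels
```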

4.1.3 Decision trees (DT)

Decision trees (DTs) are supervised ML algorithms that build classification or regression models. DTs take the form of a tree diagram, breaking down a dataset into smaller subsets to develop a tree with a root node and decision nodes, with each outward branch of a node representing a possible decision, outcome, or reaction. The decision tree comprises decision nodes and leaf nodes, with leaf nodes representing a classification or a decision. DTs are typically used to determine a statistical probability or, more simply, a course of action for complex problems. DTs provide a visual output of a given decision-making process, and they can process both numerical and categorical data. However, DTs are susceptible to unbalanced datasets, which generate biased models. DTs are also susceptible to overfitting, which occurs when a model fits too closely to the training data and is less accurate when introduced to new and previously unseen data [ 33 – 35 , 42 – 44 ].
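
A minimal sketch, again assuming scikit-learn: a depth-limited decision tree on the built-in Iris dataset, where the max_depth cap is one common guard against the overfitting noted above:

```python
# Minimal sketch: a depth-limited decision tree classifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)  # cap depth to limit overfitting
print(tree.score(X_te, y_te))   # accuracy on unseen test data
```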

4.1.4 Random forest (RF)

Random forest is a supervised ML algorithm that can be applied to both classification and regression-based problems. It grows multiple individual decision trees for a given problem and combines them to make a more accurate prediction. The RF technique uses randomness and ensemble learning to produce uncorrelated forests of decision trees. Ensemble learning is a method that combines various classifiers, such as decision trees, and takes the aggregation of their predictions to provide solutions. The most commonly known ensemble methods are bagging and boosting. Bagging creates different subsets from the training data, with the final output based on majority voting. Boosting, on the other hand, is a method (e.g., AdaBoost, XGBoost) that combines “weak learners” into “strong learners” by creating sequential models, so that the final model delivers the highest accuracy. Random forests are much less susceptible to overfitting than DTs; however, they are a time-consuming and resource-intensive technique [ 33 ].
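
A minimal sketch of a random forest, assuming scikit-learn; 100 trees are grown and their predictions aggregated by majority vote:

```python
# Minimal sketch: a random forest as an ensemble of 100 decision trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())   # cross-validated accuracy
```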

4.1.5 XGBoost (Extreme Gradient Boosting)

XGBoost is short for ‘Extreme Gradient Boosting’. XGBoost is a supervised ML algorithm implementing the gradient-boosted decision tree framework. The algorithm can be applied to solve classification, regression, and prediction problems. It creates and optimizes (through a boosting technique) each successive decision tree, so that the errors of each new tree are reduced compared to the tree that came before it. The boosting technique involves gradual learning from data, resulting in improved predictions for subsequent decision tree iterations.
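
A minimal sketch assuming the third-party xgboost package is installed; its scikit-learn-style wrapper is used here, and the parameter values are arbitrary illustrations rather than recommendations:

```python
# Minimal sketch: gradient-boosted trees via the xgboost sklearn wrapper.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, learning_rate=0.1)  # illustrative values
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))   # accuracy on held-out data
```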

4.1.6 K-Nearest Neighbor (K-NN)

The KNN algorithm is a supervised ML algorithm best suited to classification models. The algorithm estimates the probability that a new data point belongs to a particular group. This process involves looking at the data points in proximity and identifying which have features similar to the new data point. The new data point is then assigned to the group containing the most data points with similar features close to it. The KNN algorithm is very easy to implement and fast to execute. However, KNN does not always classify data points very well, and the accuracy of the algorithm is dependent on the quality of the dataset [ 35 , 42 – 44 ].
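
A minimal KNN sketch, assuming scikit-learn; the feature values passed to predict are arbitrary examples:

```python
# Minimal sketch: k-nearest neighbors assigns a new point to the majority
# class among its k closest training points.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
sample = [[5.1, 3.5, 1.4, 0.2]]          # arbitrary new data point
print(knn.predict(sample))               # predicted class
print(knn.predict_proba(sample))         # estimated class probabilities
```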

4.1.7 Naive Bayes

Naive Bayes is a supervised ML technique used to solve classification problems, based on counting and conditional probability. It uses Bayes’ theorem to classify data. The naive Bayes algorithm naively assumes that all characteristics of a data point are independent of one another. Bayes’ theorem is based on the understanding that the probability of an event may need to be updated as new data becomes available. The algorithm tends to perform much better with categorical data (for example, it works well when applied to document classification and spam filtering) than with numerical data [ 34 , 35 , 42 ].
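
A minimal sketch of multinomial naive Bayes on toy text data, assuming scikit-learn; the documents and spam labels are invented for illustration:

```python
# Minimal sketch: naive Bayes spam filtering on invented toy documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["win money now", "meeting at noon",
        "cheap money offer", "project meeting notes"]
labels = [1, 0, 1, 0]   # 1 = spam, 0 = not spam (toy labels)

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(docs, labels)
print(clf.predict(["free money offer"]))   # likely flagged as spam (1)
```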

4.1.8 Logistic regression

Logistic regression is a supervised learning and classification algorithm for predicting a binary outcome, where an event occurs (True) or does not occur (False). The algorithm is used to distinguish between two distinct classes. It is considered a supervised ML algorithm because it has X input features and a y target value, and uses labels on the dataset for training. The algorithm works to find the logistic function of best fit to describe the relationship between X and y . Logistic regression is similar to linear regression, except that linear regression works with continuous target variables (numbers within a range), while logistic regression is used when the target variable is categorical. The algorithm transforms its output using the sigmoid function to return a value which is then mapped to two or more discrete classes. Binary regression, multinomial logistic regression and ordinal regression are the three main types of logistic regression. Binary regression is used to process Boolean values, multinomial is used to process n ≥ 3 unordered classes, and ordinal logistic regression processes n ≥ 3 ordered classes. Logistic regression has been used to support various applications, from medical diagnosis to fraud detection in banking [ 34 , 42 ].
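
A minimal binary logistic regression sketch, assuming scikit-learn; predict_proba exposes the sigmoid-derived class probabilities described above:

```python
# Minimal sketch: binary logistic regression on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)  # raise max_iter for convergence
print(clf.predict(X[:3]))         # discrete class labels (0 or 1)
print(clf.predict_proba(X[:3]))   # probabilities from the sigmoid output
```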

4.1.9 Artificial Neural Network (ANN)

Artificial neural networks (ANNs), which can solve both regression and classification problems, are modelled on the neural networks in the human brain. Like the human brain, which contains billions of connected neuron cells that distribute signals, ANNs are made up of artificial neurons, called units, grouped into three different layers. The first layer, called the ‘input layer’, receives data and forwards it to the second layer, called the ‘hidden layer’. The hidden layer performs mathematical computations on the data received from the input layer. The last layer is the output layer, which returns the result as output. Deep neural networks (sometimes called deep learning in the academic literature) are neural networks that contain multiple hidden layers [ 35 , 42 – 46 ].
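
A minimal sketch of a small ANN, assuming scikit-learn; one hidden layer of 16 units (an arbitrary illustrative size) mirrors the input/hidden/output structure described above:

```python
# Minimal sketch: a multi-layer perceptron with a single hidden layer.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X, y)
print(ann.score(X, y))   # training accuracy
```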

4.1.10 Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a type of ANN that detects patterns and helps with the processing of vision-based tasks. A CNN is made up of ML units called perceptrons. A CNN can make predictions by analyzing an image, checking for features, and classifying the image based on this analysis. A CNN consists of multiple layers that process and extract features from data. These layers include the convolutional layer, the rectified linear unit (ReLU), the pooling layer, and the fully connected network (FCN). The convolutional layer contains filters that perform the convolution operation, while the ReLU layer performs element-wise operations and outputs a rectified feature map. The pooling layer takes the rectified feature map as input and performs a ‘down-sampling’ operation that reduces the dimensions of the feature map. The pooled feature map’s two-dimensional array is then converted into a linear vector by flattening it. The FCN layer takes the flattened matrix as input and then proceeds to classify and identify the images [ 45 – 48 ].
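
A minimal sketch of the layer sequence described above, assuming TensorFlow/Keras is available; the input shape and layer sizes are arbitrary illustrations:

```python
# Minimal sketch: convolution + ReLU, pooling, flatten, fully connected.
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),            # down-sampling of the feature map
    layers.Flatten(),                       # 2-D feature map -> linear vector
    layers.Dense(64, activation="relu"),    # fully connected layer
    layers.Dense(10, activation="softmax")  # class scores for 10 categories
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.summary()
```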

4.2 Unsupervised learning

The literature tells us that unsupervised learning is where algorithms identify patterns within a given dataset without labels. The unsupervised learning process involves searching for similarities that can be used to group data. Some of the most used unsupervised learning algorithms include the K-means clustering algorithm, hierarchical clustering, anomaly detection, principal component analysis (PCA), independent component analysis, the Apriori algorithm, and singular value decomposition [ 34 – 37 ].
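
A minimal k-means sketch, assuming scikit-learn; the blob labels are deliberately discarded, since unsupervised learning works without targets:

```python
# Minimal sketch: k-means groups unlabeled points into k clusters by similarity.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # labels ignored
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # learned group centers
print(km.labels_[:10])       # cluster assignment for the first points
```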

4.3 Reinforcement learning

Reinforcement learning (RL) takes an alternative approach to supervised and unsupervised learning. RL does not require the system to learn from data; instead, learning is the result of feedback and reward, through a series of trials and errors by a software agent. Some of the most common reinforcement learning algorithms include: SARSA(λ); Deep Q-Network (DQN); Deep Deterministic Policy Gradient (DDPG); and Asynchronous Advantage Actor-Critic (A3C) [ 35 – 37 ].
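
A minimal sketch of the feedback-and-reward loop using tabular Q-learning (a simpler relative of the algorithms listed above, used here purely for illustration) on an invented five-state corridor task:

```python
# Minimal sketch: tabular Q-learning on a toy 5-state corridor where the
# agent is rewarded for reaching the rightmost state.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Trial and error: occasionally explore a random action.
        if np.random.rand() < epsilon:
            a = np.random.randint(n_actions)
        else:
            a = int(np.argmax(Q[s]))
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward as feedback
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)   # learned action values; moving right dominates in every state
```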

4.4 Deep learning

The term deep learning refers to a subset of ML techniques that require vast amounts of data to train models to output values, interpretations, or predictions. Deep learning methods are ANNs with more than one hidden layer, and they can be supervised or unsupervised. Applications of deep learning as supervised learning include image classification, object detection and face recognition. Alternatively, deep learning techniques are applied as unsupervised learning in instances where there is no labelled data and for clustering problems, e.g., image encoding and word embedding.

4.4.1 Deep Neural Network (DNN)

Deep neural networks (DNNs) are ANNs that have more than one hidden layer (hence the term “deep”) and are trained with vast amounts of data. Each hidden layer comprises neurons that map inputs through a function to provide an output. DNNs are trained through the adjustment of their weights and biases. These types of neural networks are also supported by various techniques, such as the back-propagation algorithm, and optimization methods, such as stochastic gradient descent. Three types of deep neural networks are multi-layer perceptrons (MLP), convolutional neural networks (CNN) and recurrent neural networks (RNN). DNN features support speech recognition systems and translation systems like Google Translate [ 49 , 50 ].

4.4.2 Deep Belief Networks (DBNs)

Deep belief networks (DBNs) are unsupervised networks that comprise a stack and sequence of connected restricted Boltzmann machines (RBMs). The DBN trains each of the Boltzmann machine layers until they converge. The value of the output layer of one Boltzmann machine is input into the next Boltzmann machine in the sequence, which is then trained until convergence is reached. This process is repeated with each Boltzmann machine until the whole network has been successfully trained. Applications of DBNs range from generating images to video sequences and motion capture [ 51 – 53 ].

4.4.3 Region-based Convolutional Neural Networks (R-CNN)

Region-based convolutional neural network (R-CNN) algorithms detect and localize objects in an image. This is done by drawing rectangular bounding boxes around objects contained within an image, labelling or categorizing each defined box, extracting features from the image regions using a pre-trained CNN, and then classifying those features using an SVM algorithm. The last stage in the process brings the separate regions together to obtain the original image with the objects within it identified [ 47 , 54 – 58 ].

4.4.4 Fast R-CNN

An iteration, or evolution and improvement, of the R-CNN model can be found in the Fast R-CNN algorithm. The Fast R-CNN model takes the image as a whole and passes it through its neural network once; the resulting output is then divided into regions of interest (RoIs).

4.4.5 Faster R-CNN

A further evolution of the R-CNN model is the Faster R-CNN algorithm [ 59 ]. Faster R-CNN is a better-performing and faster algorithm than R-CNN and Fast R-CNN, because it only uses CNNs and does not use SVMs, and it provides a single feature extraction of an image instead of the region-by-region extraction used by R-CNN. According to the literature, this results in Faster R-CNN training networks at least nine times faster, and with more accuracy, than R-CNN [ 59 , 60 ]. However, what makes Faster R-CNN distinct from its predecessor Fast R-CNN is its use of the Region Proposal Network (RPN) technique [ 60 ].
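
A minimal sketch of running a detector from this family, assuming PyTorch and torchvision are installed; the pretrained Faster R-CNN here stands in for the region-based detectors discussed in this and the following subsections, and the random tensor stands in for a real image:

```python
# Minimal sketch: inference with torchvision's pretrained Faster R-CNN
# (downloads COCO weights on first use).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)   # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    pred = model([image])[0]
print(pred["boxes"].shape, pred["labels"], pred["scores"])  # bounding boxes, classes, confidences
```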

4.4.6 Mask R-CNN

The Mask R-CNN is an extension of the Faster R-CNN technique. The literature describes Mask R-CNN as an advanced image segmentation method, which takes a digital image, breaks it down into segments at the pixel level, and then categorizes those segments. For example, a single image is segmented and categorized to identify multiple objects in the image [61].

4.4.7 R-FCN

The literature describes the R-FCN model as being based on region proposals. The difference between the R-FCN and R-CNN techniques (the latter also based on region proposals) is that R-FCN applies a selective pooling technique that extracts features for prediction from the last layer of its network [62].

4.4.8 Single Shot Detector (SSD)

The Single Shot Detector (SSD) is an ML technique that breaks an image down into a grid of cells. Each cell then detects objects by predicting the category and location of objects in the region it covers. The literature indicates that the SSD model is faster than the Faster R-CNN model; however, its performance decreases when object sizes are small [63].

4.4.9 You Only Look Once (YOLO)

The YOLO (You Only Look Once) algorithm uses CNNs to detect and recognize objects in an image in real time. YOLO takes the entire image as input and divides it into grids (unlike R-CNN, which uses regions to localize objects in an image); image classification and localization are then applied to each grid cell. The algorithm then predicts the rectangular bounding boxes and their associated classes. The YOLO model does, however, find it more difficult than R-CNN to localize objects precisely [37, 64].
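
The grid idea can be illustrated with a short sketch that assigns an object's centre to the grid cell responsible for predicting it; the grid size and box coordinates are illustrative assumptions.

```python
# A minimal sketch of the YOLO grid assignment: the cell containing an
# object's centre is responsible for predicting its bounding box.
S = 7                               # grid dimension used in the original YOLO paper
img_w, img_h = 640, 480             # illustrative image size
box = (320, 240, 100, 80)           # (centre_x, centre_y, width, height) in pixels

cell_col = int(box[0] / img_w * S)  # grid column holding the object's centre
cell_row = int(box[1] / img_h * S)  # grid row holding the object's centre
print(f"object assigned to grid cell ({cell_row}, {cell_col})")
```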

4.4.10 Recurrent Neural Networks (RNNs)

Classic neural networks are described as ‘feed forward’ networks because they channel information in a single forward direction, through a series of mathematical operations performed at the nodes of the network. Data is fed through each node as input, never visiting a node more than once, before being processed and converted into an output. Feed forward networks perceive only the current sample presented to them and have no memory of previously processed samples; in other words, classic neural networks have no facility for data persistence.

RNNs are a type of deep neural network and, unlike classic neural networks, they take both the current data sample and previously received samples as input. RNNs can process data from the first input to the last output and initiate feedback loops throughout the computation, enabling data to loop back into the network. RNNs are distinguished from feed forward networks by this feedback loop connected to their past decisions. RNNs allow previous outputs to be provided as inputs, while also maintaining hidden states. RNN models are commonly used in the natural language processing (NLP) and speech recognition domains [65, 66].
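
The recurrence that distinguishes RNNs from feed forward networks can be illustrated with a minimal NumPy sketch, in which the hidden state h carries information from earlier inputs into later steps; all dimensions and weights here are illustrative assumptions.

```python
# A minimal sketch of the RNN recurrence in NumPy.
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.normal(0, 0.1, (8, 16))   # input -> hidden weights
W_hh = rng.normal(0, 0.1, (16, 16))  # hidden -> hidden weights (the feedback loop)
b_h = np.zeros(16)

h = np.zeros(16)                     # initial hidden state
for x_t in rng.normal(size=(5, 8)):  # a sequence of five input vectors
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)  # current input + previous state
print(h[:4])
```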

4.4.11 Long Short-Term Memory Networks (LSTMs)

LSTMs are a special type of RNN that help preserve the error signal as it is back-propagated through layers and time, giving recurrent networks the ability to learn over long spans. This is made possible in large part by the LSTM’s gated cell, to which data can be written, from which it can be read, and in which it can be stored, all outside the back and forth of the recurrent network [67].
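
The following minimal NumPy sketch of a single LSTM step illustrates the gated cell state described above; the weight shapes and initialization are illustrative assumptions.

```python
# A minimal sketch of one LSTM step, showing the gated cell state c that
# lets the network preserve information (and error signals) over time.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 8, 16

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per gate (forget, input, output) plus the candidate cell.
W = {g: rng.normal(0, 0.1, (n_in + n_hid, n_hid)) for g in "fiog"}
b = {g: np.zeros(n_hid) for g in "fiog"}

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(z @ W["f"] + b["f"])  # forget gate: what to erase from c
    i = sigmoid(z @ W["i"] + b["i"])  # input gate: what to write to c
    o = sigmoid(z @ W["o"] + b["o"])  # output gate: what to read from c
    g = np.tanh(z @ W["g"] + b["g"])  # candidate cell values
    c = f * c + i * g                 # gated cell-state update
    return o * np.tanh(c), c

h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c)
print(h[:4])
```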

4.4.12 Generative Adversarial Networks (GANs)

GANs are described as generative, unsupervised deep learning algorithms. The technique was introduced in 2014 by Ian Goodfellow. The premise of GANs involves a neural network called a generator, which produces fake data samples. The generator works in concert with another network called the discriminator, which has to differentiate between two kinds of input: the original data samples, and the fake data samples created and output by the generator. The discriminator has to evaluate, learn, and decide which data samples come from the actual training set and which come from the generator [68, 69].
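
The generator-discriminator game can be illustrated with a compressed PyTorch sketch; the one-dimensional "data" distribution, network sizes, and training length are illustrative assumptions, not a real application.

```python
# A compressed sketch of the GAN training loop in PyTorch.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0  # samples from the "true" data distribution
    fake = G(torch.randn(64, 4))           # the generator's fake samples

    # The discriminator learns to label real samples 1 and fake samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The generator learns to make the discriminator label its fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Mean of generated samples; drifts towards 2.0 as training progresses.
print(float(G(torch.randn(1000, 4)).mean()))
```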

4.4.13 Multilayer Perceptrons (MLPs)

A perceptron consists of a fully connected input layer and output layer, and comprises input values, weights and bias, a net sum, and an activation function. A fully connected neural network with multiple layers is called a multilayer perceptron (MLP). The MLP is a supervised, feed forward deep neural network that connects multiple layers in a directed graph; in other words, the signal passes in a single direction through the nodes between the input and output layers. In this network, every node except the input nodes applies a non-linear activation function. MLPs can be used to build speech-recognition, image-recognition, and machine-translation applications [70, 71].

4.4.14 Restricted Boltzmann Machines (RBMs)

Boltzmann machines are non-deterministic, generative deep learning models with only two types of nodes: hidden and visible. They have no output nodes, which gives them their non-deterministic character. In a Boltzmann machine, every node is connected to every other node, whether visible or hidden, allowing parameters, patterns, and correlations in the data to be shared across the whole network. Restricted Boltzmann machines (RBMs) are a special class of Boltzmann machines. An RBM is an unsupervised, two-layered (visible layer and hidden layer) neural network characterized by the restriction that every node in the visible layer is connected to every node in the hidden layer, but no two nodes in the same layer are connected to each other [51–53].
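
A single contrastive-divergence (CD-1) update, a common way of training an RBM, can be sketched as follows in NumPy; the layer sizes, learning rate, and omission of bias terms are illustrative simplifications.

```python
# A minimal sketch of one CD-1 update for a restricted Boltzmann machine:
# the two layers are fully connected to each other but have no
# within-layer connections, and bias terms are omitted for brevity.
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid, lr = 6, 4, 0.1
W = rng.normal(0, 0.1, (n_vis, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = (rng.random(n_vis) < 0.5).astype(float)       # a binary training vector
h0 = sigmoid(v0 @ W)                               # hidden probabilities given v0
h_sample = (rng.random(n_hid) < h0).astype(float)  # stochastic hidden states
v1 = sigmoid(W @ h_sample)                         # reconstructed visible layer
h1 = sigmoid(v1 @ W)                               # hidden probabilities given v1

W += lr * (np.outer(v0, h0) - np.outer(v1, h1))    # CD-1 weight update
print(W.round(3))
```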

4.4.15 Autoencoders

Autoencoders are an unsupervised type of neural network that detect patterns or structure within input data in order to learn a compressed representation of it. During training, the autoencoder learns how to compress the data based on its attributes. An autoencoder is a feed forward neural network where the output is trained to match the input. It is made up of an encoder model and a decoder model: the encoder compresses the input, and the decoder re-creates the input from the compressed version provided by the encoder. Applications of autoencoders range from anomaly detection and data denoising (audio and images) to dimensionality reduction [72–76].
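
A minimal PyTorch sketch of this encoder-decoder structure is given below; the dimensions and the random stand-in for unlabelled training data are illustrative assumptions.

```python
# A minimal autoencoder sketch: the encoder compresses a 20-dimensional
# input to a 3-value code, and the decoder reconstructs the input from it.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(256, 20)             # stand-in for unlabelled training data
for _ in range(100):
    recon = decoder(encoder(x))      # the output is trained to match the input
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```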

Table 1 summarizes the advantages and disadvantages of the different ML techniques reviewed in Section 4 of this paper.

4.5 Performance evaluation metrics for machine learning techniques

In this section, we examine the conventions in the academic literature for measuring the performance of machine learning techniques in the detection of material defects and equipment failures. The literature provides methods for evaluating the performance of computer vision and especially machine learning methods and techniques when they are applied to extract, process, and analyze datasets. It indicates that while ML techniques can be used to extract and analyze data, these techniques can also generate false results due to misclassification or misinterpretation of the collected data. This is the reason for performance measurement metrics, which evaluate ML methods in terms of the ratio of correct predictions or classifications to incorrect ones.

Classifiers or classification-based ML techniques in the literature usually use the confusion matrix, accuracy/error, precision, recall, F1 measure (or F-measure), ROC, AUC, and hypothesis tests (t-test, Wilcoxon signed-rank test, Kappa test) as evaluation measurements [78]. Regression-based problems use MSE (mean squared error), MAE (mean absolute error), MAPE (mean absolute percentage error), RMSE (root mean squared error), which is the square root of the average squared difference between the actual and predicted scores, and quantile error for evaluating the performance of machine learning methods applied as solutions [79].

4.5.1 Confusion matrix

Classification is when the output of a model is one or more discrete labels, whereas regression is when the output or prediction is a continuous quantity or value. Binary classification is where the target label vector y has only two classes or categories in the dataset (e.g., True or False, 1 or 0); conversely, multi-class classification is when the vector y has three or more classes or categories. The performance of ML techniques on classification tasks (binary or multi-class) is commonly analyzed using a confusion matrix, a two-dimensional matrix that counts the False Negatives (FN), True Negatives (TN), False Positives (FP), and True Positives (TP) for each model used. TP refers to the positive points correctly labelled by the classifier, and TN to the negative points correctly labelled. FP are the negative points incorrectly labelled as positive, and FN are the positive points mislabelled as negative.

These counts are then used to calculate the performance metrics: accuracy, precision, recall, and F1 score. Accuracy is the percentage of predictions that are correct, (TP + TN) out of all points. Precision measures how accurate the model’s positive predictions are. Recall measures the model’s sensitivity in predicting positive outcomes. The F-measure combines the precision score and the recall score [33, 80].
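
These definitions can be made concrete with a short Python sketch that derives the four metrics from the confusion-matrix counts; the labels and predictions used here are illustrative.

```python
# A minimal sketch of the metrics derived from a binary confusion matrix.
def confusion_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(confusion_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```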

4.5.2 Mean Square Error (MSE) and Mean Absolute Error (MAE)

The performance of a regression model is commonly analyzed using either the mean square error (MSE) or the mean absolute error (MAE). The MSE measures the average of the squared errors, that is, the average squared difference between the predicted value and the target value; the lower the MSE, the better. The MAE measures the average of the absolute differences between model predictions and target values.
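
Both errors reduce to one-line NumPy expressions, sketched here with illustrative values.

```python
# A minimal sketch of the two regression errors.
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)   # average squared difference
mae = np.mean(np.abs(y_true - y_pred))  # average absolute difference
print(mse, mae)
```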

4.5.3 Mean Average Precision (MAP)

Some studies have recommended the use of an evaluation metric called the mean average precision (MAP) to analyze the performance of object detection (localization and classification) models such as SSD, R-CNN, Faster R-CNN, and YOLO [31]. The MAP is also commonly applied to analyze the performance of computer vision models and image segmentation problems. In MAP, a prediction is taken as accurate on the condition that its overlap with the ground truth (e.g., the original damage annotated by human inspectors) is greater than a given threshold. This overlap is calculated using the Intersection over Union measure, which is expressed as the area of overlap between the predicted and target regions divided by the area of their union [81].
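
The Intersection over Union measure itself reduces to a short function for axis-aligned boxes, sketched here with illustrative coordinates.

```python
# A minimal sketch of Intersection over Union (IoU) for two boxes
# given as (x1, y1, x2, y2) corner coordinates.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)       # overlap / union

predicted = (50, 50, 150, 150)
ground_truth = (60, 60, 160, 160)
print(iou(predicted, ground_truth))  # accepted as a detection above a threshold, e.g. 0.5
```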

5 Findings of literature review

This section provides an overview of the findings of the literature review in terms of the types of mechanical systems and civil infrastructure assets. We then examine the different types of damage mechanisms found on these assets, followed by a review of our findings regarding the ML techniques used to support RAS-based monitoring and inspection.

5.1 Types of assets under inspection

We begin the review of our findings by discussing the various types of mechanical systems and civil infrastructure assets that the literature indicates are subject to routine inspection and monitoring due to their vulnerability to catastrophic damage.

5.1.1 Pipelines

Pipelines in the energy industry support the transport and distribution of water, oil, and gas. Like most infrastructure, pipelines are subject to internal and external mechanical stresses, which can lead to damage mechanisms ranging from corrosion and cracks to scale formation. These phenomena can be mitigated through regular monitoring, inspection, and maintenance. Research aimed at inspecting pipes and pipelines more efficiently ranges from Mohamed et al. [82], who looked at the use of mobile in-pipe inspection robots (IPIR) to inspect for corrosion and cracks in pipelines using NDT sensors (e.g., ultrasonic), to the work carried out by Bastian et al. [30], who applied CNN architecture-based techniques to detect corrosion from pipeline images and DNNs to extract features from those images.

5.1.2 Wind turbines

Wind turbines are another part of the energy industry and are generally located in remote environments subject to extreme external stresses, e.g., wind, water, and heat. Like pipelines, wind turbine infrastructure is subject to mechanical stresses resulting in damage mechanisms such as erosion and cracks, which is why industry planning and cost allocation place such emphasis on regular inspection, monitoring, and maintenance. While wind turbine inspection is generally carried out using traditional methods, involving manual climbing at height by technical inspection teams in sometimes hazardous conditions, the last decade has seen considerable research into cost-effective and safe methods and techniques to support the inspection and monitoring of asset infrastructure. This research has provided alternative robot platforms, varying from magnetic climbing robots to unmanned aerial drones, together with artificial intelligence-based algorithms and techniques to analyze the vast amounts of data collected by robot platforms during inspection. Wang and Zhang [2] researched the use of Haar-like feature algorithms for the automatic detection of surface cracks in images collected by UAVs, while Franko et al. [83] proposed applying a deep convolutional neural network (DCNN) technique to images of a wind turbine taken by a multi-robot system. Shihavuddin et al. [31] carried out research using convolutional neural network (CNN) techniques to extract feature descriptors from images of wind turbines taken by drones, and the Faster R-CNN technique to train for object detection.

5.1.3 Aircraft fuselage

In the aerospace industry, inspection of the aircraft fuselage is one of the core and most important regular tasks performed by maintenance technicians. The process involves deploying platforms that elevate technicians so that they can reach and inspect the external surface of the aircraft’s fuselage, searching for damage mechanisms or defects. The main damage or defect mechanism found on aircraft surfaces is corrosion, which is a main cause of fuselage fatigue. Alongside checks for corrosion, technicians also look for rust, cracks, and deformation of the aircraft surface during their inspections. This has traditionally been a painstaking, methodical task undertaken by a technician equipped with a flashlight and a mobile elevation platform [84]. Research into alternative methods of inspection has been published in recent years. This includes Malekzadeh et al.’s [85] work, which applied the deep learning techniques SURF, AlexNet and VGG-F to images of aircraft fuselage taken by a custom-made platform. Miranda et al. [86] did similar work when they applied a CNN-based application, along with an SVM model, to images of an aircraft fuselage taken by UAVs. The most recent work is the research carried out by Brandoli et al. [84], which applied the CNN models DenseNet and SqueezeNet to detect corrosion pillowing in images taken from an aircraft fuselage.

5.1.4 Power lines

In the energy industry, power transmission lines act as connections between the source of power (the power plants) and the endpoints (the consumers) [87]. The regular inspection of power transmission lines is considered vital to ensuring uninterrupted power supply, as damage to this part of the electrical infrastructure, through rusted conductors and the like, can result in downtime and power interruption [88]. As indicated by Titov et al. [88], the traditional power line inspection operation is characterized by high cost and safety risk, with human technicians manually taking images from the ground or from above with the support of a helicopter. Research in recent years has demonstrated the use of UAVs (drones) to support the collection of image data. Jalil et al. [87] carried out work on power lines employing a drone fitted with image capture equipment to collect data, which is then passed to a neural network model for detection and analysis of damage or defects in the image dataset. Research undertaken by Titov et al. [88] employed UAVs to take images of power lines for the detection and analysis of cracks on concrete poles, missing or dirty insulator plates, and similar defects, using the YOLO version 3 deep learning technique.

5.1.5 Vessels

The maintenance of vessels, especially maritime transport ships such as oil tankers and very large crude carriers (VLCCs), requires regular monitoring and inspection schedules [89, 90]. These vessels are subject to typical internal and external phenomena such as cracks and corrosion. Current inspection procedures are expensive and require the vessel to dock at a shipyard, where inspectors, with the support of various mobile platforms, undertake a visual assessment of the structural health and condition of the vessel. Recent research has studied the use of robot platforms to support vessel inspections [89, 90], accompanied by the development of various image processing and damage (e.g., corrosion and crack) detection algorithms to analyze image datasets. However, there is evidence of very little to no research on the use of deep learning (neural network) based techniques.

5.1.6 Bridges

Bridges are typical civil infrastructure subject to external phenomena such as wind, heat, water, and vibration. Bridge inspection currently requires manual visual inspection by human inspectors working at different levels of elevation, with various associated risks, to access and view parts of the bridge. This manual approach demands long durations, in some cases road closures, high costs, and, given the vast number of bridges in cities today, a lot of manpower. The last decade has seen a significant amount of research into the use of robotic and autonomous systems (RAS) to support bridge inspections, with robot platforms fitted with sensors ranging from infrared (IR) cameras to ultrasonic sensors [14, 91]. This has been accompanied by research into computer vision and image processing techniques that detect damage mechanisms, defects, and features (e.g., cracks) [1], and has recently moved towards the use of deep learning methods to analyze images and detect cracks [49, 92, 93].

5.1.7 Automotive vehicles (cars)

Most of the literature currently available on the analysis of automotive image datasets has focused either on vehicle make and model classification in support of the transport and security industries [94], or on defect and damage detection and analysis in support of the automotive industry and the insurance sector behind it [95, 96]. Most required automotive inspections focus on the analysis of vehicular accidents, requiring image analysis of damage such as bumper dents, door dents, shattered glass, broken head lamps and tail lamps, and scratches [95–97]. Recent years have produced research into computer vision and neural network techniques for image analysis of car damage [95–99].

5.2 Damage mechanisms on mechanical systems and civil infrastructure

Corrosion and cracks are the damage mechanisms associated with bridges, roads, rail, and levees in the literature [1, 49, 93, 100]. Most of the current literature on wind turbine inspection focuses on surface damage, specifically leading edge erosion, surface cracks, damaged lightning receptors, and damaged vortex generators [2, 31, 83]. In the case of pipelines, the literature focuses on the inspection of cracks, corrosion, or erosion [30, 33, 37]. Discussions of aircraft inspection consistently focus on the fuselage, for damage mechanisms such as corrosion pillowing and insulator surface cracks. Some literature on aircraft inspection, however, does not specify damage mechanisms and instead refers uniformly to defect regions [84–86].

In the literature, power lines and transmission lines are associated with cracks on concrete poles, identification of missing or dirty insulator plates, rusted conductors, broken cables, insulator damage, conductor corrosion, and cracks on insulator surfaces [87, 88, 101, 102]. The literature on the inspection of vessels and ships concentrates on the detection, localization, and classification of cracks, corrosion or coating breakdown, pitting, and buckling [89, 90, 103].

While a case could be made for monitoring and inspecting automotive vehicles during servicing and checkups for damage mechanisms arising from internal and external phenomena (e.g., wind, water, heat), such as corrosion and fatigue, there appears to be little to no such research available at this time. The available literature is mostly focused on damage to cars resulting from vehicular accidents [95–99]. A review of the literature shows that corrosion and cracks are the most addressed damage mechanisms, while erosion and fatigue are the least addressed. Table 2 summarizes typical research into damage mechanisms on mechanical infrastructure that machine learning techniques have been used to detect, classify, and model.

Based on the literature reviewed, manual visual data collection is still the norm for bridge inspection; however, in research where a robot platform is deployed to collect image datasets of damage mechanisms on bridges, most papers report the deployment of UAVs [49, 93, 100]. Likewise, the literature on wind turbine damage inspection shows a majority preference for drones to support data collection [17, 31, 83].

In the case of pipelines, the literature shows a mixed picture, with some preferring to deploy drones and others mobile in-pipe inspection robots (IPIR) [33, 37]. The literature provides a different landscape for the use of RAS systems in aircraft inspection: while some work in recent years has demonstrated the use of UAVs, the literature indicates a preference for D-Sight Aircraft Inspection System (DAIS) platforms, portable non-destructive devices that support the visual analysis of surface areas of the fuselage [84–86].

While piloted helicopters are still used in some cases to gather data, the literature demonstrates that power line and transmission line inspections overwhelmingly deploy UAVs for the data collection of damage mechanisms [87, 88, 101, 102]. The literature on vessels and ships presents a very mixed preference, between semi-autonomous micro-aerial vehicles (MAVs) and, more recently, climber or UAV robot platforms, for the inspection and data collection of damage mechanisms [89, 90, 103]. The literature on automotive vehicles, unfortunately, provides very little to no research on the use of robotic and autonomous system platforms for damage inspection.

5.3 Application of machine learning techniques for RAS inspection

This section reviews the application of machine learning techniques used to support RAS inspection of civil and mechanical infrastructure (wind turbines, pipelines, rail, aircraft fuselage, power lines, vessels, and automobiles) in the literature, along with performance evaluations of their use where documented. Gopalakrishnan et al. [49] looked at the processing and analysis of images of cracks on civil and mechanical infrastructure obtained through UAVs (or drones). They applied deep convolutional neural network (DCNN) models to process and analyze the image data and lauded the efficacy of DCNN as the more efficient technique for processing and analyzing both image and video data [49]. Nguyen et al. [104] reviewed the use of various deep learning techniques in support of RAS inspection of power line infrastructure and indicated Region-Based Convolutional Neural Networks (R-CNN) and the You Only Look Once (YOLO) technique as the optimal techniques for object detection in inspection tasks by RAS systems.

Shihavuddin et al. [31] explored the use of drones to obtain image data of damage on wind turbines (WT) for processing, analysis, and classification using deep learning techniques. Their paper indicates that the research used convolutional neural networks (CNN) as the backbone framework for processing and extracting features from the image data obtained from WTs using drones, and then used the Faster R-CNN technique to train models for object detection. They report achieving high accuracy compared to the other deep learning algorithms used to train models during their research. Shihavuddin et al. [31] also indicated that a technique called advanced image augmentation allows the dataset to be expanded, as it creates additional images for the training model by altering existing image data fed into the training sets. The utility of this technique is invaluable, as the larger the dataset, the more efficient the training model.
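
The augmentation idea can be sketched with torchvision transforms, as below; the specific transforms and parameters are illustrative assumptions, not those reported by Shihavuddin et al. [31].

```python
# A sketch of expanding a training set by altering existing images.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.RandomRotation(degrees=10),
])

image = torch.rand(3, 224, 224)  # stand-in for one inspection image
extra_samples = [augment(image) for _ in range(8)]  # eight altered copies per original
```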

Franko et al.’s [83] research provided findings on the use of a combined, multi-RAS platform, ranging from climbing to multicopter robots, fitted with LiDAR, RGB and ZED cameras, ultrasonic, radar, and other vision-type sensors, to inspect and detect corrosion and welding line damage mechanisms on the tower surfaces of WTs [83]. Alharam et al. [33] provided a case study on the use of UAVs for the inspection of oil and gas pipelines in Bahrain. The UAVs are fitted with GPS, thermal cameras, and gas detectors to obtain images and methane (CH4) readings from gas and oil pipelines. The research looked at the use of the Decision Tree (DT), Support Vector Machine (SVM), and Random Forest (RF) techniques to process and analyze the data obtained from the drones, and reported that the RF technique provided 93% accuracy and much better performance than the other classification techniques used in their research [33].

Bastian et al. [30] studied external corrosion on pipelines and used deep neural networks to process and analyze the image and video data obtained from inspections. In their paper, they proposed a DNN technique based on the CNN architecture to extract and distinguish images with corrosion from those without, using image data taken from pipelines by UAV. They report that CNNs produce the most optimal results in terms of object detection and image classification.

Table 3 shows that the literature demonstrates the DCNN architecture providing up to 90% accuracy for the detection of cracks in civil infrastructure [49] and 92% accuracy for defect detection in rail infrastructure [105]. Table 3 also indicates that the Random Forest algorithm is the best-performing algorithm compared with the Decision Tree and Support Vector Machine algorithms for the detection of cracks, corrosion, and erosion on pipelines, with SVM yielding the least precision and accuracy of the three [33], while custom CNN architectures have been reported to provide over 93% for precision, accuracy, and the other metrics within the confusion matrix [30]. Table 3 further shows that research on the RAS inspection and monitoring of aircraft fuselage has demonstrated that CNNs can provide up to 92% accuracy in the detection of surface and joint corrosion [84], while DNNs can provide a better-performing accuracy of 96% [85]. In the case of power lines, Table 3 shows a preference for custom CNNs, Faster R-CNN, or the YOLO v3 technique for extracting, analyzing, and classifying data collected from RAS inspections, with these techniques reported to provide over 90% precision or accuracy in classifying image data [30, 88, 104]. There currently seems to be very little literature on the use of machine learning techniques in support of RAS inspection of vessels; one exception is the research documented by Ortiz et al. [103], where an ANN technique was used to extract, classify, and analyze corrosion, cracks, and coating breakdown from image data collected by a micro-aerial vehicle [103].

Most of the limited literature examining machine learning techniques in support of RAS inspection of automobiles focuses on the classification of vehicle damage resulting from accidents and the associated insurance claims. The research shows a preference for CNNs or Mask R-CNN for object recognition and damage detection, with CNNs providing accuracy as high as 87% and Mask R-CNN as high as 94% [95, 96].

Figure 2 illustrates our findings regarding the frequency of use of popular ML techniques to process, analyze, and model damage mechanisms on mechanical systems and civil infrastructure.

Figure 2: The use of different machine learning techniques in the literature

Table 3 lists and maps the machine learning methods used in the robotic inspection of the mechanical systems and infrastructure reviewed in this paper. Following on from the performance evaluation metrics discussed in Section 4, Table 3 also provides performance evaluation figures and results for the machine learning techniques deployed to process and analyze data collected by RAS platforms for civil and mechanical infrastructure in the reviewed papers.

6 Technology gaps and challenges

This section reviews technology gaps and challenges in the application of machine learning techniques for robotic inspection of mechanical systems and civil infrastructure.

6.1 Challenges of small object detection for deep learning techniques

Object detection of small (perhaps even undetectable to the human eye) damage mechanisms on mechanical and civil infrastructure has invaluable applications in industries ranging from aerospace (detecting cracks on aircraft) to energy and utilities (detecting erosion or corrosion). A small object has been defined by [106, 107] as an object of 32 × 32 pixels within an image. The current literature acknowledges that while object detection of medium to large objects in image data is now a proven technology, accurate detection of small objects has not yet been mastered and remains a challenge for researchers [106, 108–111].

The reasons for this research gap stem from several realities and constraints of current state-of-the-art object detection technology. First, small objects are challenging to detect because the high-level feature maps characteristic of CNN architectures, used to identify large objects in images, do not support the identification of small objects, which mostly appear in images at low resolution. Second, context data is limited: there are significantly fewer pixels associated with small objects, leaving little to nothing for the detection algorithms to identify. Furthermore, there is a class imbalance in the datasets currently used to train deep learning models: image datasets usually comprise large- to medium-sized objects, resulting in an imbalance among the object-size groups available to deep learning models for training.

There is a gap in the research and development of deep learning techniques or models that could provide the higher precision required for accurate localization of small objects in images, as well as an ongoing race between researchers to improve current deep learning object detection algorithms for small objects in image datasets [106, 108–111].

6.2 Evaluating accuracy and performance of machine learning techniques

Following this paper’s review of the methods and metrics used in the academic literature to evaluate the performance of machine learning methods, techniques, and models when trained on datasets to output values, interpretations, or predictions, this section briefly reviews criticism of these metrics.

There are arguments in the literature contending that the current methods (such as the confusion matrix, accuracy, precision, MAP, RMSE, and quantile error) used to evaluate the performance and utility of machine learning techniques applied as solutions to extract or analyze data can only be understood and applied by subject matter experts in statistics, computer science, artificial intelligence (AI), and the like [78, 112].

Both Shen et al. [112] and Beauxis-Aussalet et al. [78] contended that non-subject-matter experts do not always have the background knowledge to understand terminology such as true negative (TN) or false positive (FP), which forms part of the underlying metric framework for evaluating machine learning techniques. Furthermore, Shen et al.’s [112] research found that non-experts found it challenging both to use some of the evaluation metrics and to relate them back to the problem the techniques are being applied to solve. Beauxis-Aussalet et al. [78] underscored that some existing evaluation metrics can be misunderstood, misinterpreted, or even deployed incorrectly in case studies by non-subject-matter experts [78]. It is therefore a contention in the literature that there is a gap, or a requirement, for more accessible methods of evaluating the performance of machine learning techniques, methods that can be understood and used by both subject matter experts in AI and their lay colleagues.

6.3 Machine learning challenges with unstructured data

Throughout this paper, we have reviewed the data collected by RAS systems during the inspection of mechanical infrastructure, the types of data collected, and the techniques deployed to process and analyze them. This section extends that review to examine the structure of this data and the gaps in our ability to work with it.

Structured and unstructured are descriptions that data scientists and researchers use to categorize data. Structured data conforms to a schema, meaning the data has some form of logical organization. Structured data is quantitative and is usually represented as numbers, dates, values, and strings. It can be queried, searched, and analyzed because it is organized in rows and columns, e.g., CSV files, spreadsheets, SQL databases, etc. Traditional sources of structured data vary from sensors to weblogs to network traffic.

Unstructured data, however, cannot be contained in rows and columns and has no discernable structure or logic. It is qualitative data, comprising video, audio, images, and the like. Unstructured data cannot be processed or analyzed with the same methods used for structured data, e.g., rows and columns, databases, etc.

The challenge for computer scientists and data and AI scientists is that most of the machine learning tools available today are better suited to training on structured datasets, yet most data in the world is in unstructured formats. As indicated by Rai et al. [113], the literature estimates that over 80% of the data collected in the world is unstructured [113, 114]. Traditional sources of unstructured data include social media platforms and image, video, and audio data [113]. Some AI techniques and tools can process, analyze, and train on unstructured data; for example, Natural Language Processing (NLP) techniques add structure, such as context and syntax, to unstructured text, while techniques such as autoencoders extract and analyze unstructured data. However, these tools are few, and the technology has not yet sufficiently matured. There is therefore a gap, or requirement, for research and development into effective machine learning tools and techniques that can process and analyze unstructured data [113, 114].

6.4 Big data and challenges with real-time data analytics

This paper has reviewed how the literature characterizes the data collected by RAS systems in terms of volume, veracity, variety, and velocity. It is also noteworthy that the vast volume, variety, and complexity of data being collected by modern sensors and robotic and autonomous systems have led to today’s large datasets being referred to as ‘Big Data’. The term Big Data refers to a variety of high-volume, high-velocity datasets, comprising structured, semi-structured, and unstructured data, collected from, and feeding into, social networks, academia, sensor networks, international trading markets, surveillance, and communication networks [82, 115–117].

Big Data is exceeding the capacity and capability of current technology to contain, process, and analyze it in real time, without recourse to data storage and batch processing, in support of real-time-dependent applications such as international trading markets, smart city infrastructure, and robotic autonomous systems, e.g., self-driving cars [82, 115, 116, 118]. The very fundamentals of neural network (deep learning) techniques mean that they are well suited to the processing and analysis of Big Data, as neural network algorithms require vast amounts of training data to provide meaningful predictions, pattern recognition, or representations of any real use. While these techniques have produced technologies such as speech recognition, computer vision, and natural language processing (NLP) that can be applied to volumes of unsupervised and unstructured data, these technologies are still in their infancy and have not yet matured to the point of coping with the complex variety, high volume, and velocity of Big Data [82, 115–118].

6.5 On-board integration of machine learning with RAS platforms

In this paper, we have described robotic platforms as autonomous systems and discussed the machine learning techniques and algorithms that process and analyze the data they collect. However, Panella [119] argued that while UAVs are capable of semi-autonomous operation, there is still no integrated, unitary technique providing complete autonomy for UAV platforms that allows real-time decision-making within the environment they operate in, as opposed to responding to stimuli or events based on pre-programming. This was one of the stated reasons for developing an on-board integration of various AI or machine learning techniques to deliver a fully autonomous UAV system that can “think” like a human and make decisions within its environment.

Despite current strides in research and development, there are still challenges in integrating machine learning techniques on board RAS platforms. Ono et al. [120] noted that while there is a suite of on-board algorithms that can be integrated as part of, or alongside, the Robot Operating System (ROS), providing robotic systems with the autonomy to respond to events in their environment and complete tasks (e.g., the Mars rover), there is still a gap in the available algorithms and technology that could provide complete on-board autonomy for future rover missions [120].

Furthermore, Ono et al. [120] discussed the gaps in intelligent algorithms on board robot systems, which could result in what their paper refers to as the “unnoticed green monster problem”, where human decision-makers and operators are unable to take real-time action on events or stimuli detected by the RAS system (the Mars rover in this particular case study), due to delay or loss of the data (imagery or otherwise) being fed from the robot system on Mars to the human operator, in this case a control centre on Earth. This demonstrated the need for the development and on-board integration of AI algorithms that would provide on-board decision-making on the robot platform, enabling real-time responses to what Ono et al. [120] describe as “scientific opportunities” and avoiding the “green monster problem” [120].

Hillebrand et al. [121] and Contreras et al. [122] suggested the use of deep learning (neural networks), specifically reinforcement learning, as a response to the absence of a neural network design methodology for robotic systems [121, 122]. Chen et al. [123] noted that while autonomous robot navigation is a mainstream technology, current capabilities still face challenges in managing complex and dynamic environments and in reducing misclassifications by current perception algorithms [123].

7 Conclusions

This review has reported on the types of robotic platforms deployed for the inspection of different mechanical systems and civil infrastructure, such as storage tanks, high-rise facilities, and nuclear power plants. Unmanned marine vehicles are deployed for systems located underwater (such as subsea power cables); unmanned ground robots are better suited to horizontal ground surface environments; and UAVs are mostly deployed for both indoor and outdoor remote, hazardous environments.

This paper demonstrated through an extensive literature review that machine learning has been used, with varied efficacy, to support the processing, analysis, and classification of data collected by RAS systems during the inspection of mechanical and civil infrastructure. The review revealed that there are few studies demonstrating the use of deep learning techniques for the analysis of datasets collected during structural health inspections. In these studies, deep learning techniques were shown to perform better than most machine learning methods in processing and analyzing image (damage mechanism) datasets. Furthermore, almost all the research reviewed has focused on the inspection, analysis, and classification of single damage mechanisms, e.g., corrosion, cracks, or erosion. This indicates a research gap in the use and application of machine learning techniques to analyze and classify multiple types of damage mechanisms from video or image datasets collected during the inspection of mechanical systems and civil infrastructure.

Availability of data and materials

The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials.

References

H.M. La, N. Gucunski, K. Dana, S.-H. Kee, Development of an autonomous bridge deck inspection robotic system. J. Field Robot. 2017, 1489 (2017)

L. Wang, Z. Zhang, Automatic detection of wind turbine blade surface cracks based on UAV-taken images. IEEE Trans. Ind. Electron. 64 (9), 7293–7303 (2017)

S. Bernardini, F. Jovan, Z. Jiang, S. Watson, A. Weightman, P. Moradi, T. Richardson, R. Sadeghian, S. Sareh, A multi-robot platform for the autonomous operation and maintenance of offshore wind farms blue sky ideas track, in Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 2020 , May 9–13, 2020, Auckland, New Zealand (2020)

C. Stout, D. Thompson, UAV Approaches to Wind Turbine Inspection: Reducing Reliance on Rope-Access. Offshore Renewable Energy Catapult. (2019)

D. Schmidt et al., Climbing robots for maintenance and inspections of vertical structures—A survey of design aspects and technologies. Robot. Auton. Syst. (2013). https://doi.org/10.1016/j.robot.2013.09.002

D. Lattanzi et al., Review of Robotic Infrastructure Inspection Systems. J. Infrastruct. Syst. (2017). https://doi.org/10.1061/(ASCE)IS.1943-555X.0000353

M.A.M. Yusoff et al., Development of a Remotely Operated Vehicle (ROV) for underwater inspection. Jurutera (2013)

A.L. Meyrowitz et al., Autonomous vehicles, in Proceedings of the IEEE 1996 (1996). https://doi.org/10.1109/5.533960

F. Rubio et al., A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019 (2019). https://doi.org/10.1177/1729881419839596

D.W. Gage, A Brief History of Unmanned Ground Vehicle (UGV) Development Efforts (1995)

W. Shen et al., Proposed wall climbing robot with permanent magnetic tracks for inspecting oil tanks, in IEEE International Conference Mechatronics and Automation (2005). https://doi.org/10.1109/ICMA.2005.1626882

L.P. Kalra et al., A wall climbing robot for oil tank inspection, in 2006 IEEE International Conference on Robotics and Biomimetics (2006). https://doi.org/10.1109/ROBIO.2006.340155

S. Campbell et al., Sensor technology in autonomous vehicles: a review, in 2018 29th Irish Signals and Systems Conference , ISSC, 2018 (2018). https://doi.org/10.1109/ISSC.2018.8585340

J. Seo et al., Drone-enabled bridge inspection methodology and application. Autom. Constr. (2018). https://doi.org/10.1016/j.autcon.2018.06.006 . https://www.sciencedirect.com/science/article/pii/S0926580517309755

M. Shafiee et al., Unmanned Aerial Drones for Inspection of Offshore Wind Turbines: A Mission-Critical Failure Analysis. Robotics J. (2021). https://doi.org/10.3390/robotics10010026

M.H. Frederiksen et al., Drones for inspection of infrastructure: Barriers, opportunities and successful uses. Center for Integrative Innovation Management (2019)

M. Drones Lt, Best Commercial Drones for Beginners, Sep. 02, 2019, 2018. https://www.coptrz.com/best-commercial-drones-for-beginners/

C. Eschmann et al., High-resolution multisensor infrastructure inspection with unmanned aircraft systems, in ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2013 (2013). https://doi.org/10.5194/isprsarchives-XL-1-W2-125-2013 https://ui.adsabs.harvard.edu/abs/2013ISPAr.XL1b.125E

X.L. Ding et al., A review of structures, verification, and calibration technologies of space robotic systems for on-orbit servicing (2020). https://doi.org/10.1007/s11431-020-1737-4

A. Flores-Abad et al., A Review of Space Robotics Technologies for on-Orbit Servicing (Elsevier, Amsterdam, 2014). https://doi.org/10.1016/j.paerosci.2014.03.002

P.J. Staritz et al., Skyworker: A Robot for Assembly, Inspection and Maintenance of Large-Scale Orbital Facilities. IEEE (2001). https://doi.org/10.1109/ROBOT.2001.933271

H. Choset, D. Kortenkamp, Path planning and control for free-flying inspection robot in space. J. Aerosp. Eng. (1999). https://doi.org/10.1061/(ASCE)0893-1321(1999)12:2(74)

J.S. Mehling et al., A minimally invasive tendril robot for in-space inspection, in The First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics, BioRob 2006 (2006), pp. 690–695. https://doi.org/10.1109/BIOROB.2006.1639170

S.-I. Nishida et al., Prototype of an end-effector for a space inspection robot. Adv. Robot. (2012). https://doi.org/10.1163/156855301300235788

L. Pedersen et al., A survey of space robotics, in ISAIRAS (2003)

J. Redmon et al., You only look once: unified, real-time object detection, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)

W. Fan et al., Mining big data: current status, and forecast to the future, in 2013 Association for Computing Machinery (2013). https://doi.org/10.1145/2481244.2481246

B. Matturdi et al., Big data security and privacy: a review. China Commun. 11 (14), 135–145 (2014). https://doi.org/10.1109/CC.2014.7085614

D. Laney, 3-D Data Management: Controlling Data Volume, Velocity and Variety . META Group Research Note, February, vol. 6 (2001)

B.T. Bastian, J. N, S.K. Kumar, C.V. Jiji, Visual inspection and characterization of external corrosion in pipelines using deep neural network. NDT & E International Journal 107 , 102134 (2019)

A. Shihavuddin et al., Wind turbine surface damage detection by deep learning aided drone inspection analysis. Energies 12 (4), 676 (2019). https://doi.org/10.3390/en12040676

M. Hassanalian et al., Classifications, applications, and design challenges of drones: a review. Prog. Aerosp. Sci. (2017). https://doi.org/10.1016/j.paerosci.2017.04.003 . https://www.sciencedirect.com/science/article/pii/S0376042116301348

A. Alharam et al., Real time AI-based pipeline inspection using drone for oil and gas industries in Bahrain, in 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT) (2020)

V. Nasteski, An overview of the supervised machine learning methods. Horizons B 4 (2017). https://doi.org/10.20544/HORIZONS.B.04.1.17.P05

B. Mahesh, Machine Learning Algorithms – a Review (2019). https://doi.org/10.21275/ART20203995

A. Carrio et al., A review of deep learning methods and applications for unmanned aerial vehicles. Hindawi J. Sens. (2017). https://doi.org/10.1155/2017/3296874

M.N. Mohammed et al., Design and Development of Pipeline Inspection Robot for Crack and Corrosion Detection (2018)

https://www.analyticssteps.com/blogs/how-does-k-nearest-neighbor-works-machine-learning-classification-problem

F. Hoffmann et al., Benchmarking in classification and regression. WIREs Data Mining Knowl. Discov. 9 , e1318 (2019). https://doi.org/10.1002/widm.1318

A. Geron, Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow, 2nd edn. (2019)

I. Goodfellow et al., Deep Learning (MIT Press, Cambridge, 2016)

F.Y. Osisanwo et al., Supervised machine learning algorithms: classification and comparison. Int. J. Comput. Trends. Technol. (IJCTT) 48 (3) 128–138 (2017)

C.-F. Tsai et al., Intrusion detection by machine learning: a review. Expert Syst. Appl. 36 (10), 11994–12000 (2009)

A. Matsunaga et al., On the use of machine learning to predict the time and resources consumed by applications, in 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing (2010), pp. 495–504. https://doi.org/10.1109/CCGRID.2010.98

M. Jogin et al., Feature extraction using Convolution Neural Networks (CNN) and deep learning, in 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT) (2018), pp. 2319–2323. https://doi.org/10.1109/RTEICT42901.2018.9012507

https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/

https://morioh.com/p/73fce91e9846

Y. Guo et al., Deep learning for visual understanding: a review. Neurocomputing 187 , 27–48 (2016). https://doi.org/10.1016/j.neucom.2015.09.116

K. Gopalakrishnan et al., Crack damage detection in unmanned aerial vehicle images of civil infrastructure using pre-trained deep learning model. Int. J. Traffic Transp. Eng. (IJTTE) (2017)

H. Larochelle et al., Exploring strategies for training deep neural networks. J. Mach. Learn. Res. 1 , 1–40 (2009). https://doi.org/10.1145/1577069.1577070

A. Fischer, C. Igel, An Introduction to Restricted Boltzmann Machines . Iberoamerican Congress on Pattern Recognition (Springer, Berlin, 2012)

N. Agarwalla et al., Deep learning using restricted Boltzmann machines. Int. J. Comput. Sci. Inf. Secur. 7 (3), 1552–1556 (2016)

Y. Hua et al., Deep belief networks and deep learning, in Proceedings of 2015 International Conference on Intelligent Computing and Internet of Things (2015), pp. 1–4. https://doi.org/10.1109/ICAIOT.2015.7111524

https://blog.paperspace.com/faster-r-cnn-explained-object-detection/

https://neurohive.io/en/popular-networks/r-cnn/

Z.-Q. Zhao et al., Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Syst. 30 (11), 3212–3232 (2019). https://doi.org/10.1109/TNNLS.2018.2876865

https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4

P.S. Bithas et al., A Survey on Machine-Learning Techniques for UAV-Based Communications. Sensors (Basel, Switzerland) 26 November 2019 (2019). https://europepmc.org/articles/PMC6929112 . Accessed September 2020

R. Girshick et al., Rich feature hierarchies for accurate object detection and semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)

R. Girshick, Fast r-cnn, in Proceedings of the IEEE International Conference on Computer Vision (2015)

K. He et al. Mask R-CNN. In ICCV, 2017

J. Dai et al., R-FCN: Object Detection via Region-based Fully Convolutional Networks (2016). arXiv:1605.06409



  • Open access
  • Published: 10 February 2023

Trends and research foci of robotics-based STEM education: a systematic review from diverse angles based on the technology-based learning model

  • Darmawansah Darmawansah   ORCID: orcid.org/0000-0002-3464-4598 1 ,
  • Gwo-Jen Hwang   ORCID: orcid.org/0000-0001-5155-276X 1 , 3 ,
  • Mei-Rong Alice Chen   ORCID: orcid.org/0000-0003-2722-0401 2 &
  • Jia-Cing Liang   ORCID: orcid.org/0000-0002-1134-527X 1  

International Journal of STEM Education, volume 10, Article number: 12 (2023)


Fostering students’ competence in applying interdisciplinary knowledge to solve problems has been recognized as an important and challenging issue globally. This is why STEM (Science, Technology, Engineering, Mathematics) education has been emphasized at all levels in schools. Meanwhile, the use of robotics has played an important role in STEM learning design. The purpose of this study was to fill a gap in the current review of research on Robotics-based STEM (R-STEM) education by systematically reviewing existing research in this area. This systematic review examined the role of robotics and research trends in STEM education. A total of 39 articles published between 2012 and 2021 were analyzed. The review indicated that R-STEM education studies were mostly conducted in the United States and mainly in K-12 schools. Learner and teacher perceptions were the most popular research focus in the studies that applied robots, and LEGO was the most frequently used tool for accomplishing the learning objectives. In terms of application, Technology (programming) was the predominant STEM discipline in the R-STEM studies. Moreover, project-based learning (PBL) was the most frequently employed learning strategy in robotics-related STEM research, and STEM learning and transferable skills were the most popular educational goals when applying robotics. Based on the findings, several implications and recommendations for researchers and practitioners are proposed.

Introduction

Over the past few years, the implementation of STEM (Science, Technology, Engineering, and Mathematics) education has received a positive response from researchers and practitioners alike. According to Chesloff (2013), the winning point of STEM education is its learning process, which demonstrates that students can use their creativity, collaborative skills, and critical thinking skills. STEM education thus promotes a bridge between classroom learning and authentic real-life scenarios (Erdoğan et al., 2016; Kelley & Knowles, 2016). Building this bridge, however, is also the greatest challenge facing STEM education: the connection between the learning experience and real-life situations may remain intangible in some areas owing to pre-existing and in-class conditions such as unfamiliarity with STEM content (Moomaw, 2012), unstructured learning activities (Sarama & Clements, 2009), and inadequate preparation of STEM curricula (Conde et al., 2021).

In response to these issues, the adoption of robotics in STEM education has been encouraged as part of an innovative and methodological approach to learning (Bargagna et al., 2019; Ferreira et al., 2018; Kennedy et al., 2015; Köse et al., 2015). Similarly, recent studies have reported that the use of robots in school settings has an impact on student curiosity (Adams et al., 2011), arts and craftwork (Sullivan & Bers, 2016), and logic (Bers, 2008). When robots and educational robotics are considered a core part of STEM education, they offer the possibility of promoting STEM disciplines such as engineering concepts, or even interdisciplinary practices (Okita, 2014). Anwar et al. (2019) argued that integration between robots and STEM learning is important for supporting STEM learners who do not immediately show interest in STEM disciplines. Learner interest can elicit the development of various skills such as computational thinking, creativity and motivation, collaboration and cooperation, problem-solving, and other higher-order thinking skills (Evripidou et al., 2020). To some extent, artificial intelligence (AI) has driven the use of robotics and related tools, for example in designing instructional activities (Hwang et al., 2020). The potential of research on robotics in STEM education is reflected in the rapid increase in the number of studies over the past few years. The emphasis now is on critically reviewing existing research to determine what it already tells us about R-STEM education, what it means, and how it can influence future research. Thus, this study aimed to fill the gap by conducting a systematic review to grasp the potential of R-STEM education.

To provide a clear account of the roles and research trends of R-STEM education, this study went beyond the scope of previous reviews by conducting a content analysis to capture the whole picture. Guided by the technology-based learning model (Lin & Hwang, 2019), this study analyzed published research in the Web of Science database to address the following questions:

In terms of research characteristics and features, what were the location, sample size, duration of intervention, research methods, and research foci of the R-STEM education research?

In terms of interaction between participants and robots, what were the participants, roles of the robot, and types of robot in the R-STEM education research?

In terms of application, what were the dominant STEM disciplines, contribution to STEM disciplines, integration of robots and STEM, pedagogical interventions, and educational objectives of the R-STEM research?

Literature review

Previous studies have investigated the role of robotics in R-STEM education from several research foci, such as specific robot users (Atman Uslu et al., 2022; Benitti, 2012; Jung & Won, 2018; Spolaôr & Benitti, 2017; van den Berghe et al., 2019), the potential value of R-STEM education (Çetin & Demircan, 2020; Conde et al., 2021; Zhang et al., 2021), and the types of robots used in learning practices (Belpaeme et al., 2018; Çetin & Demircan, 2020; Tselegkaridis & Sapounidis, 2021). While their findings provided a dynamic perspective on robotics, they failed to contribute to the core concept of promoting R-STEM education, and they did not summarize exemplary practices of employing robots in STEM education. For instance, Spolaôr and Benitti (2017) concluded that robots could be an auxiliary tool for learning but did not convey whether the purposes of using robots were essential to enhancing learning outcomes. At the same time, it is important to address the use and purpose of robotics in STEM learning, the connections between theoretical pedagogy and STEM practice, and the reasons for the lack of quantitative research in the literature measuring student learning outcomes.

First, Benitti (2012) reviewed research published between 2000 and 2009. This review study aimed to determine the educational potential of using robots in schools and found that most robots can feasibly support the pedagogical process of learning knowledge and skills related to science and mathematics. Five years later, Spolaôr and Benitti (2017) investigated the use of robots in higher education through the lens of learning theories that were not covered in their 2012 review. Their content analysis synthesized 15 papers from 2002 to 2015 that used robots to support instruction based on fundamental learning theories. The main finding was that project-based learning (PBL) and experiential learning, or so-called hands-on learning, were the most used theories. Both theories were found to increase learners’ motivation and foster their skills (Behrens et al., 2010; Jou et al., 2010). However, the vast majority of discussions in the selected reviews emphasized positive outcomes while overlooking negative or mixed outcomes. Along the same lines, Jung and Won (2018) reviewed theoretical approaches to robotics education in 47 studies from 2006 to 2017. Their focused review suggested that the emphasis in employing robots in learning should be shifted from technology to pedagogy, and argued for attending to student engagement in robotics education despite disagreements among pedagogical traditions. Although Jung and Won (2018) provided information about the teaching approaches applied in robotics education, they did not offer a critical discussion of how those approaches connect robots with the disciplines being taught.

On the other hand, Conde et al. (2021) identified PBL as the most common learning approach in their study by reviewing 54 papers from 2006 to 2019. Furthermore, the studies by Çetin and Demircan (2020) and Tselegkaridis and Sapounidis (2021) focused on the types of robots used in STEM education, reviewing 23 and 17 papers, respectively. Again, these studies touted learning engagement as a positive outcome while disregarding differing perspectives on how robot use in educational settings affects students’ academic performance and cognition. More recently, a meta-analysis by Zhang et al. (2021) focused on the effects of robotics on students’ computational thinking and their attitudes toward STEM learning. In addition, a systematic review by Atman Uslu et al. (2022) examined the use of educational robotics and robots in learning.

So far, the review conducted by Atman Uslu et al. (2022) may be the only study that has attempted to clarify some of the criticisms of using educational robots, reviewing studies published from 2006 to 2019 in terms of their research issues (e.g., interventions, interactions, and perceptions), theoretical models, and the roles of robots in educational settings. However, they failed to take into account several important features of robots in education research, such as thematic subjects and educational objectives: for instance, whether robot-based learning could enhance students’ competence in constructing new knowledge, or whether robots could bring a motivational facet or creativity to pedagogy to foster students’ learning outcomes. These features are essential when investigating the trends of technology-based learning research and the role of technology in education, since a review study should offer a comprehensive discussion derived from various angles and dimensions. Moreover, the role of robots in STEM education was generally ignored in the previous review studies. Hence, there is still a need for a comprehensive understanding of the role of robotics in STEM education and of research trends (e.g., research issues, interaction issues, and application issues) so as to provide researchers and practitioners with valuable references. That is, our study can remedy the shortcomings of previous reviews (Additional file 1).

The above comments demonstrate how previous scholars have understood what they call “the effectiveness of robotics in STEM education” in terms of innovative educational tools. In other words, despite their useful findings and ongoing recommendations, there has not been a thorough investigation of how robots are widely used from all angles. Furthermore, the results of existing review studies have been less than comprehensive regarding the potential role of robotics in R-STEM education when the various dimensions of the technology-based model proposed in this study are taken into account.

Method

The studies in this review were selected from the Web of Science, our sole database owing to its rigorous journal indexing and qualified studies (e.g., Huang et al., 2022) discussing the adoption of R-STEM education, and the data collection procedure followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2009), as in prior studies (e.g., Chen et al., 2021a, 2021b; García-Martínez et al., 2020). Considering publication quality, previous studies (Fu & Hwang, 2018; Martín-Páez et al., 2019) suggested using Boolean expressions to search the Web of Science database. The search terms for “robot” were “robot” OR “robots” OR “robotics” OR “Lego” (Spolaôr & Benitti, 2017). According to Martín-Páez et al. (2019), expressions for STEM education include “STEM” OR “STEM education” OR “STEM literacy” OR “STEM learning” OR “STEM teaching” OR “STEM competencies”. These search terms were entered into the WOS database to retrieve only SSCI papers, given the index’s wide recognition for high-quality publications in the field of educational technology. As a result, 165 papers were found in the database. The search was then restricted to 2012–2021, as suggested by Hwang and Tsai (2011). In addition, the number of papers was reduced to 131 by selecting only publications of the “article” type written in “English”. Subsequently, we selected the category “education and educational research”, which reduced the number to 60 papers. During the coding analysis, the two coders screened out 21 papers unrelated to R-STEM education; the screening had a Cohen’s kappa coefficient of 0.8 between the two coders (Cohen, 1960). After the screening stage, a final total of 39 articles were included in this study, as shown in Fig. 1. The selected papers are marked with an asterisk in the reference list and are listed in Appendixes 1 and 2.

Figure 1. PRISMA procedure for the selection process
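To make the selection procedure concrete, the sketch below shows how the Boolean query and the PRISMA-style filters described above could be expressed in code. This is an illustrative reconstruction, not the authors’ actual script: the TS field tag, the or_group helper, and the record schema (year, doc_type, language, and category keys) are assumptions.

```python
# Illustrative reconstruction of the Web of Science search and screening steps.
ROBOT_TERMS = ["robot", "robots", "robotics", "Lego"]
STEM_TERMS = ["STEM", "STEM education", "STEM literacy",
              "STEM learning", "STEM teaching", "STEM competencies"]

def or_group(terms):
    """Join terms into a quoted Boolean OR group, e.g. ("robot" OR "robots")."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# TS is the WoS topic-search field tag (title, abstract, keywords); assumed here.
query = f"TS={or_group(ROBOT_TERMS)} AND TS={or_group(STEM_TERMS)}"

def screen(records):
    """Apply the restrictions reported in the text to exported records (dicts)."""
    return [r for r in records
            if 2012 <= r["year"] <= 2021                      # publication window
            and r["doc_type"] == "Article"                    # article type only
            and r["language"] == "English"                    # English only
            and r["category"] == "Education & Educational Research"]

if __name__ == "__main__":
    print(query)  # the composed expression for the WoS advanced search
```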

Theoretical model, data coding, and analysis

This study employed content analysis using a coding scheme to provide insights into different aspects of the studies in question (Chen et al., 2021a, 2021b; Martín-Páez et al., 2019). The coding scheme adopted the conceptual framework proposed by Lin and Hwang (2019), comprising “STEM environments”, “learners”, and “robots”, as shown in Fig. 2. Three issues were identified:

In terms of research issues, five dimensions were included: “location”, “sample size”, “duration of intervention” (Zhong & Xia, 2020), “research methods” (Johnson & Christensen, 2000), and “research foci” (Hynes et al., 2017; Spolaôr & Benitti, 2017).

In terms of interaction issues, three dimensions were included: “participants” (Hwang & Tsai, 2011), “roles of the robot”, and “types of robot” (Taylor, 1980).

In terms of application, five dimensions were included, namely “dominant STEM disciplines”, “integration of robot and STEM” (Martín-Páez et al., 2019), “contribution to STEM disciplines”, “pedagogical intervention” (Spolaôr & Benitti, 2017), and “educational objectives” (Anwar et al., 2019). Table 1 shows the coding items in each dimension of the investigated issues.
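The inter-coder reliability check mentioned above (a Cohen’s kappa of 0.8 between the two coders) can be reproduced with a short routine. The following is a generic sketch of Cohen’s (1960) statistic with hypothetical include/exclude labels, not the authors’ analysis code.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's (1960) kappa for two coders' nominal labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for ten candidate papers.
coder_1 = ["include", "exclude", "include", "include", "exclude",
           "include", "exclude", "include", "include", "exclude"]
coder_2 = ["include", "exclude", "include", "exclude", "exclude",
           "include", "exclude", "include", "include", "exclude"]
print(round(cohen_kappa(coder_1, coder_2), 2))  # -> 0.8
```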

Figure 2. Model of the R-STEM education theme framework

Figure  3 shows the distribution of the publications selected from 2012 to 2021. The first two publications were found in 2012. From 2014 to 2017, the number of publications steadily increased, with two, three, four, and four publications, respectively. Moreover, R-STEM education has been increasingly discussed within the last 3 years (2018–2020) with six, three, and ten publications, respectively. The global pandemic in the early 2020s could have affected the number of papers published, with only five papers in 2021. This could be due to the fact that most robot-STEM education research is conducted in physical classroom settings.

Figure 3. Number of publications on R-STEM education from 2012 to 2021

Table 2 displays the journals in which the selected papers were published, the number of papers published in each journal, and each journal’s impact factor. Most of the papers on R-STEM education research were published in the Journal of Science Education and Technology and the International Journal of Technology and Design Education, with six papers each.

Research issues

The geographic distribution of the reviewed studies indicated that more than half of the studies were conducted in the United States (53.8%), while Turkey and China were the locations of five and three studies, respectively. Taiwan, Canada, and Italy accounted for two studies each, and one study each was conducted in Australia, Mexico, and the Netherlands. Figure 4 shows the distribution of the countries where the R-STEM education studies were conducted.

Figure 4. Locations where the studies were conducted (N = 39)

Sample size

Regarding sample size, four ranges were most common in the selected period (2012–2021): more than 80 participants (28.21%, or 11 out of 39 studies), 41 to 60 (25.64%, or 10 out of 39), 1 to 20 (23.08%, or 9 out of 39), and 21 to 40 (20.51%, or 8 out of 39). The range of 61 to 80 participants (2.56%, or 1 out of 39 studies) was the least common (see Fig. 5).

Figure 5. Sample size across the studies (N = 39)

Duration of intervention

Regarding the duration of the study (see Fig.  6 ), experiments were mostly conducted for less than or equal to 4 weeks (35.9% or 14 out of 39 studies). This was followed by less than or equal to 8 weeks (25.64% or 10 out of 39 studies), less than or equal to 6 months (20.51% or 8 out 39 studies), less than or equal to 12 months (10.26% or 4 out of 39 studies), while less than or equal to 1 day (7.69% or 3 out of 39 studies) was the least chosen duration.

Figure 6. Duration of interventions across the studies (N = 39)

Research methods

Figure  7 demonstrates the trends in research methods from 2012 to 2021. The use of questionnaires or surveys (35.9% or 14 out of 39 studies) and mixed methods research (35.9% or 14 out of 39 studies) outnumbered other methods such as experimental design (25.64% or 10 out of 39 studies) and system development (2.56% or 1 out of 39 studies).

Figure 7. Frequency of each research method used in 2012–2021

Research foci

In these studies, the research foci were divided into four aspects: cognition, affect, operational skills, and learning behavior. If a study involved more than one research focus, it was coded under each applicable focus.

In terms of cognitive skills, students’ learning performance was the most frequently measured focus (15 out of 39 studies). Six studies found that R-STEM education had a positive effect on learning performance, two found no significant difference, and five showed mixed or context-dependent results. For example, Chang and Chen (2020) revealed that robots in STEM learning improved students’ knowledge of design, electronic components, and computer programming.

In terms of affective skills, just over half of the reviewed studies (23 out of 39, 58.97%) addressed students’ or teachers’ perceptions of employing robots in STEM education; 14 of these studies reported positive perceptions, while nine found mixed results. For instance, Casey et al. (2018) reported students’ mixed perceptions of the use of robots in learning coding and programming.

Five studies addressed operational skills, investigating students’ psychomotor aspects such as construction and mechanical elements (Pérez & López, 2019; Sullivan & Bers, 2016) and building and modeling robots (McDonald & Howell, 2012). Three of these studies found positive results, while two reported mixed results.

In terms of learning behavior, five out of 39 studies measured students’ learning behavior, such as students’ engagement with robots (Ma et al., 2020), students’ social behavior while interacting with robots (Konijn & Hoorn, 2020), and learner–parent interactions with interactive robots (Phamduy et al., 2017). Three studies showed positive results, while two found mixed or context-dependent results (see Table 3).

Interaction issues

Participants

Regarding the educational level of the participants, elementary school students (33.33%, or 13 studies) were the most common study participants, followed by high school students (15.38%, or 6 studies). The numbers were similar for preschool, junior high school, in-service teacher, and non-designated participants (10.26%, or 4 studies each). College students, including pre-service teachers, were the least common study participants. Interestingly, some studies involved participants from more than one educational level. For example, Ucgul and Cagiltay (2014) conducted experiments with elementary and middle school students, while Chapman et al. (2020) investigated the effectiveness of robots with elementary, middle, and high school students. One study exclusively investigated gifted and talented students without reporting their level of education (Sen et al., 2021). Figure 8 shows the frequency of study participants between 2012 and 2021.

Figure 8. Frequency of research participants in the selected period

The roles of the robot

Regarding the function of robots in STEM education, as shown in Fig. 9, most of the selected articles used robots as tools (31 out of 39 studies, 79.49%), in which the robots were designed to foster students’ programming ability. For instance, Barker et al. (2014) investigated students’ building and programming of robots in hands-on STEM activities. Seven out of 39 studies (17.95%) used robots as tutees, with the aim that students and teachers learn to program them. For example, Phamduy et al. (2017) investigated a robotic fish exhibit to analyze visitors’ experience of controlling and interacting with the robot. The least frequent role was that of tutor (2.56%), with only one study programming the robot to act as a tutor or teacher for students (see Fig. 9).

Figure 9. Frequency of roles of robots

Types of robot

Furthermore, in terms of the types of robots used in STEM education, the LEGO MINDSTORMS robot was the most used (35.89%, or 14 out of 39 studies), Arduino was the second most used (12.82%, or 5 out of 39), and iRobot Create (5.12%, or 2 out of 39) and NAO (5.12%, or 2 out of 39) ranked third equal, as shown in Fig. 10. LEGO was used for STEM problem-solving tasks such as building bridges (Convertini, 2021), robots (Chiang et al., 2020), and challenge-specific game boards (Leonard et al., 2018). Furthermore, four out of 39 studies did not specify the robots used.

Figure 10. Frequency of types of robots used

Application issues

The dominant disciplines and the contribution to STEM disciplines

As shown in Table 4, the most dominant discipline in R-STEM education research published from 2012 to 2021 was technology, while engineering, mathematics, and science were less dominant. Programming was the most common subject through which robotics contributed to the STEM disciplines (25 out of 39 studies, 64.1%), followed by engineering (12.82%) and mathematical methods (12.82%). We found that interdisciplinary work was discussed in the selected period, but in relatively few studies. Nevertheless, this finding is relevant for exposing the use of robotics across the STEM disciplines as a whole. For example, Barker et al. (2014) studied how robotics instructional modules in the geospatial and programming domains could be affected by fidelity of adherence and exposure to the modules. The dominance of particular robotics-based STEM subjects makes it necessary to study the way robotics and STEM are integrated into the learning process. Therefore, the forms of STEM integration are discussed in the following sub-section to report how the teaching and learning of these disciplines can pursue learning goals in an integrated STEM environment.

Integration of robots and STEM

There are three general forms of STEM integration (see Fig. 11). Among these studies, robot-STEM content integration was the most commonly used (22 studies, 56.41%), in which robot activities had multiple STEM disciplinary learning objectives. For example, Chang and Chen (2020) employed Arduino in a robotic sailboat curriculum. This curriculum was a cross-disciplinary integration whose objectives were understanding sailboats and sensors (Science), the direction of motors and mechanical structures (Engineering), and control programming (Technology). The second most common form was supporting robot-STEM content integration (12 out of 39 studies, 30.76%). For instance, KIBO robots were used in robotics activities where the mechanical elements content area was meaningfully covered in support of the main programming learning objectives (Sullivan & Bers, 2019). The least common form was robot-STEM context integration (5 out of 39 studies, 12.82%), implemented by using the robot to situate the disciplinary content goals in another discipline’s practices. For example, Christensen et al. (2015) analyzed the impact of an after-school program that offered robots as part of students’ challenges in a STEM competition environment (geoscience and programming).

Figure 11. The forms of robot-STEM integration

Pedagogical interventions

In terms of instructional interventions, as shown in Fig. 12, project-based learning (PBL) was the preferred instructional theory for using robots in R-STEM education (38.46%, or 15 out of 39 studies), with the aim of motivating students or robot users in STEM learning activities. For example, Pérez and López (2019) argued that using low-cost robots in the teaching process increased students’ motivation and interest in STEM areas. Problem-based learning was the second most used intervention (17.95%, or 7 out of 39 studies), aiming to improve students’ motivation by giving them an early insight into practical Engineering and Technology. For example, Gomoll et al. (2017) employed robots to connect students from two different areas to work collaboratively; their study showed the importance of robotic engagement in preliminary learning activities. Edutainment (12.82%, or 5 out of 39 studies) was the third most used intervention, applied to bring students and robots together and to promote learning by doing. Christensen et al. (2015) and Phamduy et al. (2017) are sample studies that found benefits of hands-on and active learning engagement; for example, robotics competitions and robotics exhibitions could help retain a positive interest in STEM activities.

Figure 12. The pedagogical interventions in R-STEM education

Educational objectives

As far as the educational objectives of robots are concerned (see Fig. 13), the majority of robots were used for learning and transferable skills (58.97%, or 23 out of 39 studies) to enhance students’ construction of new knowledge, emphasizing the process of learning through inquiry, exploration, and making cognitive associations with prior knowledge. Chang and Chen (2020) is a sample study of how such learning objectives promote students’ ability to transfer science and engineering knowledge learned through science experiments to designing a robotic sailboat that could navigate automatically in a novel setting. Studies with this objective also explicitly aimed to examine the hands-on learning experience with robots. For example, McDonald and Howell (2012) described how robots engaged early-years students to better understand the concepts of literacy and numeracy.

Figure 13. Educational objectives of R-STEM education

Creativity and motivation were found to be the educational objectives of R-STEM education in seven out of 39 studies (17.94%), considered either from the motivational facet of social trends or from creativity in pedagogy to improve students’ interest in STEM disciplines. For instance, these studies were driven by the idea that employing robots could develop students’ scientific creativity (Guven et al., 2020), confidence and presentation ability (Chiang et al., 2020), passion for college and STEM fields (Meyers et al., 2012), and career choice (Ayar, 2015).

The general benefits of educational robots and the professional development of teachers were each addressed in four studies (10.26% each). The first objective, the general benefits of educational robotics, covered studies that found broad benefits of using robots in STEM education without highlighting a particular focus. These studies suggested that robotics in STEM could promote active learning and improve students’ learning experience through social interaction (Hennessy Elliott, 2020) and collaborative science projects (Li et al., 2016). The latter objective, teachers’ professional development, was addressed by studies that utilized robots to enhance teachers’ efficacy. Studies in this category discussed how teachers could examine and identify distinctive instructional approaches in robotics work (Bernstein et al., 2022), design meaningful learning instruction (Ryan et al., 2017) and lesson materials (Kim et al., 2015), and develop more robust culturally responsive self-efficacy (Leonard et al., 2018).

Discussion

This review was conducted using content analysis of the WOS collection of research on robotics in STEM education from 2012 to 2021. The findings are discussed under the heading of each research question.

RQ 1: In terms of research, what were the location, sample size, duration of intervention, research methods, and research foci of the R-STEM education research?

About half of the studies were conducted in North America (the USA and Canada), while limited studies were found from other continents (Europe and the Asia Pacific). This trend was identified in a previous study on robotics for STEM activities (Conde et al., 2021). Among the 39 studies, 28 (71.79%) had fewer than 80 participants, while 11 (28.21%) had more than 80. The duration of the interventions was almost equally divided between one month or less (17 out of 39 studies, 43.59%) and more than a month (22 out of 39 studies, 56.41%). The rationale behind the most popular durations is that these studies were conducted as classroom experiments within regular course conditions. For example, Kim et al. (2018) conducted their experiments in a university course whose robotics module took 3 weeks.

A total of four different research methodologies were adopted in the studies, the two most popular being mixed methods (35.89%) and questionnaires or surveys (35.89%). Although mixed methods can be daunting and time-consuming to conduct (Kucuk et al., 2013), the analysis found that they were among the most used methods in the published articles, regardless of year. A possible reason is that a mixed design lets researchers answer the main research question quantitatively while reporting the second and remaining research questions through qualitative analysis (Casey et al., 2018; Chapman et al., 2020; Ma et al., 2020; Newton et al., 2020; Sullivan & Bers, 2019); Chang and Chen (2022), for example, embedded a mixed-methods design in their study to answer their second research question qualitatively. Thus, it was concluded that mixed methods could lead to the best understanding and integration of research questions (Creswell & Clark, 2013; Creswell et al., 2003).

In contrast, system development was the least used design, as most studies employed existing robotic systems. It should be acknowledged that the most common outcome we found was enabling students to understand concepts as they relate to STEM subjects, and even without a focus on system development, robotics was identified as increasing the success of STEM learning (Benitti, 2012). Because limited studies focused on system development as their primary purpose (1 out of 39 studies, 2.56%), needs analyses may ask whether the mechanisms, types, and challenges of robotics are appropriate for learners. Future research will require further design and development of personalized robots to fill this research gap.

More than half of the studies (23 studies, 58.97%) focused on investigating the effectiveness of robots in STEM learning, primarily by collecting students’ and teachers’ opinions. This result is similar to Belpaeme et al.’s (2018) finding that users’ perceptions were common measures in studies on robotics learning. However, identifying perceptions of R-STEM education may not help us understand exactly how robots’ specific features afford STEM learning. Therefore, it is argued that researchers should move beyond such simple collective perceptions in future research. Instead, further studies may compare different robots and their features, for instance, whether robots with multiple sensors, one sensor, or no sensor affect students’ cognitive, metacognitive, emotional, and motivational outcomes in STEM areas (e.g., Castro et al., 2018). Also, instructional strategies could be embedded in R-STEM education to lead students toward higher-order thinking, such as decision-oriented problem-solving (Özüorçun & Bicen, 2017) and self-regulated, self-engaged learning (e.g., Li et al., 2016). Researchers may also compare the robotics-based approach with other technology-based approaches (e.g., Han et al., 2015; Hsiao et al., 2015) in supporting STEM learning.

RQ 2: In terms of interaction, what were the participants, roles of the robots, and types of robots of the R-STEM education research?

The majority of the reviewed studies on R-STEM education were conducted with K-12 students (27 studies, 69.23%), including preschool, elementary school, junior high, and high school students, whereas limited studies involved higher education students and teachers. This finding is similar to that of a previous review (Atman Uslu et al., 2022), which found a wide gap between K-12 students and higher education students, including teachers, as research participants. Although it is unclear why so few studies involved teachers and higher education students (including pre-service teachers), designing meaningful R-STEM learning experiences is a critical task that is likely to require professional development. In this regard, both pre- and in-service teachers could examine specific objectives, identify topics, test applications, and design potential instruction that aligns well with robots in STEM learning (Bernstein et al., 2022). At the same time, these pedagogical content skills in R-STEM disciplines might not be taught in traditional pre-service teacher education or in particular teacher development programs (Huang et al., 2022). Thus, it is recommended that future studies be conducted to understand whether robots can improve STEM education for higher education students and support teachers professionally.

Regarding the role of robots, most were used as learning tools (31 studies, 79.49%). These robots are designed with the functional ability to be commanded or programmed for analysis and processing (Taylor, 1980). For example, Leonard et al. (2018) described how pre-service teachers were trained in robotics activities to facilitate students’ learning of computational thinking. Robots therefore primarily provide opportunities for learners to construct knowledge and skills. Only one study (2.56%), however, programmed the robot to act as a tutor or teacher for students. Designing robot-assisted systems has become common in other fields such as language learning (e.g., Hong et al., 2016; Iio et al., 2019) and special education (e.g., Özdemir & Karaman, 2017), where the robots direct the learning activities for students. In contrast, R-STEM education has not looked at the robot as a tutor, focusing instead on learning how to build robots (Konijn & Hoorn, 2020). It is argued that robots with human-tutor features, such as providing personalized guidance and feedback, could assist during problem-solving activities (Fournier-Viger et al., 2013). Thus, it is worth exploring in which teaching roles the robot would work best as a tutor in STEM education.

When it comes to types of robots, the review found that LEGO dominated the employment of robots in STEM education (15 studies, 38.46%), while the other types saw limited use. LEGO tasks are arguably more often associated with STEM because learners can be more involved in engineering or technical tasks, and most researchers prefer to use LEGO in their studies (Convertini, 2021). Another interesting finding concerns the cost of the robots. Although some products are particularly low-cost and commonly available in some regions (Conde et al., 2021), the most preferred robots are still considered exclusive learning tools in developing countries and regions; in this review, only one study offered a low-cost robot (Pérez & López, 2019). This might be a reason why the selected studies were primarily conducted in countries and continents where the use of advanced technologies, such as robots, is growing rapidly (see Fig. 4). Based on this finding, there is a need for more research on the use of low-cost robots in R-STEM instruction in the least developed areas or regions of the world. For example, Nel et al. (2017) designed a STEM program for building and designing a robot that exclusively enabled students from low-income households to participate in R-STEM activities.

RQ 3: In terms of application, what were the dominant STEM disciplines, contribution to STEM disciplines, integration of robots and STEM, pedagogical interventions, and educational objectives of the R-STEM research?

While Technology and Engineering were the dominant disciplines, this review found several studies that directed their research toward interdisciplinary issues. The essence of STEM lies in interdisciplinary work that integrates one discipline into another to create authentic learning (Hansen, 2014). This means that some researchers are keen to develop students’ integrated knowledge of Science, Technology, Engineering, and Mathematics (Chang & Chen, 2022; Luo et al., 2019). However, Science and Mathematics were given less weight in the STEM learning activities than Technology and Engineering. This issue has frequently been reported as a barrier to implementing R-STEM across interdisciplinary subjects, for reasons including difficulties in pedagogy and classroom roles, lack of curriculum integration, and limited opportunities to embed one learning subject into others (Margot & Kettler, 2019). Therefore, further research is encouraged to treat these disciplines equally, as is the way STEM learning is integrated.

The subject-matter results revealed that “programming” was the most common research focus in R-STEM research (25 studies), as this particular topic was frequently emphasized in the reviewed studies (Chang & Chen, 2020, 2022; Newton et al., 2020). Similarly, programming concepts were taught through support robots for kindergarteners (Sullivan & Bers, 2019), girls attending summer camps (Chapman et al., 2020), and young learners with disabilities (Lamptey et al., 2021). Because programming naturally accompanies students’ STEM learning, we believe future research can incorporate a more dynamic and comprehensive learning focus, as robotics-based STEM education research is expected to encounter many interdisciplinary learning issues.

Researchers in the reviewed studies agreed that robots can be integrated with STEM learning in various forms. Bryan et al. (2015) argued that robots were designed to develop multiple learning goals drawn from STEM knowledge, beginning with an initial learning context. This parallels our finding that robot-STEM content integration was the most common integration form (22 studies, 56.41%). In this form, studies mainly defined their primary learning goals with one or more anchor STEM disciplines (e.g., Castro et al., 2018; Chang & Chen, 2020; Luo et al., 2019). The learning goals provided coherence between instructional activities and assessments that explicitly focused on the connections among STEM disciplines. As a result, students can develop a deep and transferable understanding of interdisciplinary phenomena and problems through content that is emphasized across disciplines (Bryan et al., 2015). However, the findings on learning instruction and evaluation in this integration are inconclusive. A better understanding of how learning contexts are embodied is needed, for instance, whether instruction is inclusive, socially relevant, and authentic in the situated context. Thus, future research is needed to identify the quality of instruction and evaluation and the specific characteristics of robot-STEM integration. This may provide better opportunities for understanding the forms of pedagogical content knowledge that enhance practitioners’ self-efficacy and pedagogical beliefs (Chen et al., 2021a, 2021b).

Project-based learning (PBL) was the most used instructional intervention with robots in R-STEM education (15 studies, 38.46%). Blumenfeld et al. (1991) credited PBL with the main purpose of engaging students in investigation; in the case of robotics, students can create robotic artifacts (Spolaôr & Benitti, 2017). McDonald and Howell (2012) used robotics to develop technological skills in the lower grades, and in another example, Leonard et al. (2016) used robots to engage students and develop their computational thinking strategies. In the aforementioned work, robots were used to support learning content in informal education, and both teachers and students designed robotics experiences aligned with the curriculum (Bernstein et al., 2022). These studies exemplify how robots can cover STEM content from the learning domain to support educational goals.

The educational goals of R-STEM education were the last finding of our study. Most of the reviewed studies focused on learning and transferable skills as their goals (23 studies, 58.97%). They targeted learning when the authors investigated the effectiveness of R-STEM learning activities (Castro et al., 2018; Convertini, 2021; Konijn & Hoorn, 2020; Ma et al., 2020) and conceptual knowledge of STEM disciplines (Barak & Assal, 2018; Gomoll et al., 2017; Jaipal-Jamani & Angeli, 2017). They targeted transferable skills when they required learners to develop individual competencies in STEM skills (Kim et al., 2018; McDonald & Howell, 2012; Sullivan & Bers, 2016) or to master STEM in competition-related skills (Chiang et al., 2020; Hennessy Elliott, 2020).

Conclusions and implications

The majority of the articles examined in this study referred to theoretical frameworks or to particular applications of pedagogical theories. This finding contradicts Atman Uslu et al. (2022), who concluded that most studies in this domain did not refer to pedagogical approaches. Although pedagogical frameworks were present in the examined articles, those articles mostly did not follow a strict instructional design when employing robots in STEM learning. Consequently, their discussions did not address how the learning–teaching process affords students' positive perceptions. Both practitioners and researchers should therefore attend to the design of learning instruction when using robots in STEM education. For example, practitioners may consider students' zone of proximal development (ZPD) when employing robots in STEM tasks; appropriate scaffolding and learning content are necessary to enhance students' operational skills, application knowledge, and emotional development. Although integration between robots and STEM education was found in the reviewed studies, the disciplines in which STEM activities have been conducted merit further investigation. This review found that Technology and Engineering were the subject areas of most concern to researchers, while Science and Mathematics did not attract as much attention. This situation can be interpreted as an inadequate evaluation of R-STEM education: although those studies aimed at an interdisciplinary subject, most assessments and evaluations were monodisciplinary and targeted only knowledge. Further studies in these under-represented subject areas are therefore needed to measure the potential of robots in every STEM field and in their integration. Moreover, the broadly consistent reporting that robotics generally supports STEM content could lead practitioners to employ robots only in mainstream STEM educational environments. To date, very few studies have investigated the prominent use of robots in varied, large-scale, multidisciplinary studies (e.g., Christensen et al., 2015).

Another finding of the reviewed studies concerned the characteristics of robot-STEM integration. Researchers and practitioners must first answer why and how integrated R-STEM can be embodied in the teaching–learning process. For example, when robots are used as a learning tool to achieve STEM learning objectives, practitioners should have application knowledge, while researchers should understand the pedagogical theories so that R-STEM integration can be flexibly merged into the learning content. This means that the learning design should build on students' existing knowledge through immersive experiences with robots and STEM activities that help them become aware of their ideas and then construct their knowledge. In such a learning experience, students come to understand STEM concepts more deeply by engaging with robots. Moreover, demonstrating R-STEM learning is not only about a coherent understanding of content knowledge. Practitioners need to apply both flexible subject-matter knowledge (e.g., the central facts, concepts, and procedures at the core of the discipline) and pedagogical content knowledge, that is, specific knowledge of approaches suitable for organizing and delivering topic-specific content, to the discipline of R-STEM education. Consequently, practitioners are required to understand the nature of robots and STEM through content and practice, for example, by taking the lead in implementing innovation through subject-area instruction, developing collaborations that enrich R-STEM learning experiences for students, and being reflective practitioners who use students' learning artifacts to inform and revise their practice.

Limitations and recommendations for future research

Overall, future research could explore the great potential of using robots in education to build students' knowledge and skills in pursuit of learning objectives. We believe the findings of this study provide insightful information for future research.

The articles reviewed in this study were limited to R-STEM education-related SSCI articles in journals indexed in the WOS database; other databases and indexes (e.g., Scopus and SCI) could be considered. In addition, the number of studies analyzed was relatively small, so further research is recommended to extend the review period to cover publications in the coming years. The results of this review provide directions for research on STEM education and robotics. Specifically, robotics combined with STEM education activities should aim to foster the development of creativity. Future research may aim to develop skills in specific areas, such as robotics STEM education combined with the humanities, as well as skills in other humanities disciplines across learning activities, social/interactive skills, and general guidelines for learners at different educational levels. Educators can design career-readiness activities to help learners build self-directed learning plans.

Availability of data and materials

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.

Abbreviations

STEM: Science, technology, engineering, and mathematics

R-STEM: Robotics-based STEM

PBL: Project-based learning

References

References marked with an asterisk indicate studies included in the systematic review.

Adams, R., Evangelou, D., English, L., De Figueiredo, A. D., Mousoulides, N., Pawley, A. L., Schiefellite, C., Stevens, R., Svinicki, M., Trenor, J. M., & Wilson, D. M. (2011). Multiple perspectives on engaging future engineers. Journal of Engineering Education, 100 (1), 48–88. https://doi.org/10.1002/j.2168-9830.2011.tb00004.x

Anwar, S., Bascou, N. A., Menekse, M., & Kardgar, A. (2019). A systematic review of studies on educational robotics. Journal of Pre-College Engineering Education Research (j-PEER), 9 (2), 19–24. https://doi.org/10.7771/2157-9288.1223

Atman Uslu, N., Yavuz, G. Ö., & Koçak Usluel, Y. (2022). A systematic review study on educational robotics and robots. Interactive Learning Environments. https://doi.org/10.1080/10494820.2021.2023890

*Ayar, M. C. (2015). First-hand experience with engineering design and career interest in engineering: An informal STEM education case study. Educational Sciences Theory & Practice, 15 (6), 1655–1675. https://doi.org/10.12738/estp.2015.6.0134

*Barak, M., & Assal, M. (2018). Robotics and STEM learning: Students’ achievements in assignments according to the P3 Task Taxonomy—Practice, problem solving, and projects. International Journal of Technology and Design Education, 28 (1), 121–144. https://doi.org/10.1007/s10798-016-9385-9

Bargagna, S., Castro, E., Cecchi, F., Cioni, G., Dario, P., Dell’Omo, M., Di Lieto, M. C., Inguaggiato, E., Martinelli, A., Pecini, C., & Sgandurra, G. (2019). Educational robotics in down syndrome: A feasibility study. Technology, Knowledge and Learning, 24 (2), 315–323. https://doi.org/10.1007/s10758-018-9366-z

*Barker, B. S., Nugent, G., & Grandgenett, N. F. (2014). Examining fidelity of program implementation in a STEM-oriented out-of-school setting. International Journal of Technology and Design Education, 24 (1), 39–52. https://doi.org/10.1007/s10798-013-9245-9

Behrens, A., Atorf, L., Schwann, R., Neumann, B., Schnitzler, R., Balle, J., Herold, T., Telle, A., Noll, T. G., Hameyer, K., & Aach, T. (2010). MATLAB meets LEGO Mindstorms—A freshman introduction course into practical engineering. IEEE Transactions on Education, 53 (2), 306–317. https://doi.org/10.1109/TE.2009.2017272

Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., Tanaka, F. (2018). Social robots for education: A review. Science Robotics, 3 (21), eaat5954. https://doi.org/10.1126/scirobotics.aat5954

Benitti, F. B. V. (2012). Exploring the educational potential of robotics in schools: A systematic review. Computers & Education, 58 (3), 978–988. https://doi.org/10.1016/j.compedu.2011.10.006

*Bernstein, D., Mutch-Jones, K., Cassidy, M., & Hamner, E. (2022). Teaching with robotics: Creating and implementing integrated units in middle school subjects. Journal of Research on Technology in Education . https://doi.org/10.1080/15391523.2020.1816864

Bers, M. U. (2008). Blocks to robots learning with technology in the early childhood classroom . Teachers College Press.

Blumenfeld, P. C., Soloway, E., Marx, R. W., Krajcik, J. S., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: sustaining the doing, supporting the learning. Educational Psychologist, 26 (3–4), 369–398. https://doi.org/10.1080/00461520.1991.9653139

Bryan, L. A., Moore, T. J., Johnson, C. C., & Roehrig, G. H. (2015). Integrated STEM education. In C. C. Johnson, E. E. Peters-Burton, & T. J. Moore (Eds.), STEM road map: A framework for integrated STEM education (pp. 23–37). Routledge.

*Casey, J. E., Gill, P., Pennington, L., & Mireles, S. V. (2018). Lines, roamers, and squares: Oh my! using floor robots to enhance Hispanic students’ understanding of programming. Education and Information Technologies, 23 (4), 1531–1546. https://doi.org/10.1007/s10639-017-9677-z

*Castro, E., Cecchi, F., Valente, M., Buselli, E., Salvini, P., & Dario, P. (2018). Can educational robotics introduce young children to robotics and how can we measure it?. Journal of Computer Assisted Learning, 34 (6), 970–977. https://doi.org/10.1111/jcal.12304

Çetin, M., & Demircan, H. Ö. (2020). Empowering technology and engineering for STEM education through programming robots: A systematic literature review. Early Child Development and Care, 190 (9), 1323–1335. https://doi.org/10.1080/03004430.2018.1534844

*Chang, C. C., & Chen, Y. (2020). Cognition, attitude, and interest in cross-disciplinary i-STEM robotics curriculum developed by thematic integration approaches of webbed and threaded models: A concurrent embedded mixed methods study. Journal of Science Education and Technology, 29 , 622–634. https://doi.org/10.1007/s10956-020-09841-9

*Chang, C. C., & Chen, Y. (2022). Using mastery learning theory to develop task-centered hands-on STEM learning of Arduino-based educational robotics: Psychomotor performance and perception by a convergent parallel mixed method. Interactive Learning Environments . https://doi.org/10.1080/10494820.2020.1741400

*Chapman, A., Rodriguez, F. D., Pena, C., Hinojosa, E., Morales, L., Del Bosque, V., Tijerina, Y., & Tarawneh, C. (2020). “Nothing is impossible”: Characteristics of Hispanic females participating in an informal STEM setting. Cultural Studies of Science Education, 15 , 723–737. https://doi.org/10.1007/s11422-019-09947-6

Chen, M. R. A., Hwang, G. J., Majumdar, R., Toyokawa, Y., & Ogata, H. (2021a). Research trends in the use of E-books in English as a foreign language (EFL) education from 2011 to 2020: A bibliometric and content analysis. Interactive Learning Environments . https://doi.org/10.1080/10494820.2021.1888755

Chen, Y. L., Huang, L. F., & Wu, P. C. (2021b). Preservice preschool teachers’ self-efficacy in and need for STEM education professional development: STEM pedagogical belief as a mediator. Early Childhood Education Journal, 49 (2), 137–147.

Chesloff, J. D. (2013). STEM education must start in early childhood. Education Week, 32 (23), 27–32.

*Chiang, F. K., Liu, Y. Q., Feng, X., Zhuang, Y., & Sun, Y. (2020). Effects of the world robot Olympiad on the students who participate: A qualitative study. Interactive Learning Environments . https://doi.org/10.1080/10494820.2020.1775097

*Christensen, R., Knezek, G., & Tyler-Wood, T. (2015). Alignment of hands-on STEM engagement activities with positive STEM dispositions in secondary school students. Journal of Science Education and Technology, 24 (6), 898–909. https://doi.org/10.1007/s10956-015-9572-6

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20 , 37–46. https://doi.org/10.1177/001316446002000104

Conde, M. Á., Rodríguez-Sedano, F. J., Fernández-Llamas, C., Gonçalves, J., Lima, J., & García-Peñalvo, F. J. (2021). Fostering STEAM through challenge-based learning, robotics, and physical devices: A systematic mapping literature review. Computer Applications in Engineering Education, 29 (1), 46–65. https://doi.org/10.1002/cae.22354

*Convertini, J. (2021). An interdisciplinary approach to investigate preschool children’s implicit inferential reasoning in scientific activities. Research in Science Education, 51 (1), 171–186. https://doi.org/10.1007/s11165-020-09957-3

Creswell, J. W., & Clark, V. L. P. (2013). Designing and conducting mixed methods research (3rd ed.). Thousand Oaks: Sage Publications Inc.

Creswell, J. W., Plano-Clark, V. L., Gutmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods research designs. Handbook of mixed methods in social and behavioral research. Sage.

Erdoğan, N., Navruz, B., Younes, R., & Capraro, R. M. (2016). Viewing how STEM project-based learning influences students’ science achievement through the implementation lens: A latent growth modeling. EURASIA Journal of Mathematics, Science & Technology Education, 12 (8), 2139–2154. https://doi.org/10.12973/eurasia.2016.1294a

Evripidou, S., Georgiou, K., Doitsidis, L., Amanatiadis, A. A., Zinonos, Z., & Chatzichristofis, S. A. (2020). Educational robotics: Platforms, competitions and expected learning outcomes. IEEE Access, 8 , 219534–219562. https://doi.org/10.1109/ACCESS.2020.3042555

Ferreira, N. F., Araujo, A., Couceiro, M. S., & Portugal, D. (2018). Intensive summer course in robotics–Robotcraft. Applied Computing and Informatics, 16 (1/2), 155–179. https://doi.org/10.1016/j.aci.2018.04.005

Fournier-Viger, P., Nkambou, R., Nguifo, E. M., Mayers, A., & Faghihi, U. (2013). A multiparadigm intelligent tutoring system for robotic arm training. IEEE Transactions on Learning Technologies, 6 (4), 364–377. https://doi.org/10.1109/TLT.2013.27

Fu, Q. K., & Hwang, G. J. (2018). Trends in mobile technology-supported collaborative learning: A systematic review of journal publications from 2007 to 2016. Computers & Education, 119 , 129–143. https://doi.org/10.1016/j.compedu.2018.01.004

García-Martínez, I., Tadeu, P., Montenegro-Rueda, M., & Fernández-Batanero, J. M. (2020). Networking for online teacher collaboration. Interactive Learning Environments . https://doi.org/10.1080/10494820.2020.1764057

*Gomoll, A., Hmelo-Silver, C. E., Šabanović, S., & Francisco, M. (2016). Dragons, ladybugs, and softballs: Girls’ STEM engagement with human-centered robotics. Journal of Science Education and Technology, 25 (6), 899–914. https://doi.org/10.1007/s10956-016-9647-z

*Gomoll, A. S., Hmelo-Silver, C. E., Tolar, E., Šabanovic, S., & Francisco, M. (2017). Moving apart and coming together: Discourse, engagement, and deep learning. Educational Technology and Society, 20 (4), 219–232.

*Guven, G., Kozcu Cakir, N., Sulun, Y., Cetin, G., & Guven, E. (2020). Arduino-assisted robotics coding applications integrated into the 5E learning model in science teaching. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1812136

Han, J., Jo, M., Hyun, E., & So, H. J. (2015). Examining young children’s perception toward augmented reality-infused dramatic play. Educational Technology Research and Development, 63 (3), 455–474. https://doi.org/10.1007/s11423-015-9374-9

Hansen, M. (2014). Characteristics of schools successful in STEM: Evidence from two states’ longitudinal data. Journal of Educational Research, 107 (5), 374–391. https://doi.org/10.1080/00220671.2013.823364

*Hennessy Elliott, C. (2020). “Run it through me:” Positioning, power, and learning on a high school robotics team. Journal of the Learning Sciences, 29 (4–5), 598–641. https://doi.org/10.1080/10508406.2020.1770763

Hong, Z. W., Huang, Y. M., Hsu, M., & Shen, W. W. (2016). Authoring robot-assisted instructional materials for improving learning performance and motivation in EFL classrooms. Journal of Educational Technology & Society, 19 (1), 337–349.

Hsiao, H. S., Chang, C. S., Lin, C. Y., & Hsu, H. L. (2015). “iRobiQ”: The influence of bidirectional interaction on kindergarteners’ reading motivation, literacy, and behavior. Interactive Learning Environments, 23 (3), 269–292. https://doi.org/10.1080/10494820.2012.745435

Huang, B., Jong, M. S. Y., Tu, Y. F., Hwang, G. J., Chai, C. S., & Jiang, M. Y. C. (2022). Trends and exemplary practices of STEM teacher professional development programs in K-12 contexts: A systematic review of empirical studies. Computers & Education . https://doi.org/10.1016/j.compedu.2022.104577

Hwang, G. J., & Tsai, C. C. (2011). Research trends in mobile and ubiquitous learning: A review of publications in selected journals from 2001 to 2010. British Journal of Educational Technology, 42 (4), E65–E70.

Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1 , 100001. https://doi.org/10.1016/j.caeai.2020.100001

Hynes, M. M., Mathis, C., Purzer, S., Rynearson, A., & Siverling, E. (2017). Systematic review of research in P-12 engineering education from 2000–2015. International Journal of Engineering Education, 33 (1), 453–462.

Iio, T., Maeda, R., Ogawa, K., Yoshikawa, Y., Ishiguro, H., Suzuki, K., Aoki, T., Maesaki, M., & Hama, M. (2019). Improvement of Japanese adults’ English speaking skills via experiences speaking to a robot. Journal of Computer Assisted Learning, 35 (2), 228–245. https://doi.org/10.1111/jcal.12325

*Jaipal-Jamani, K., & Angeli, C. (2017). Effect of robotics on elementary preservice teachers’ self-efficacy, science learning, and computational thinking. Journal of Science Education and Technology, 26 (2), 175–192. https://doi.org/10.1007/s10956-016-9663-z

Johnson, B., & Christensen, L. (2000). Educational research: Quantitative and qualitative approaches . Allyn & Bacon.

Jou, M., Chuang, C. P., & Wu, Y. S. (2010). Creating interactive web-based environments to scaffold creative reasoning and meaningful learning: From physics to products. Turkish Online Journal of Educational Technology-TOJET, 9 (4), 49–57.

Jung, S., & Won, E. (2018). Systematic review of research trends in robotics education for young children. Sustainability, 10 (4), 905. https://doi.org/10.3390/su10040905

Kelley, T. R., & Knowles, J. G. (2016). A conceptual framework for integrated STEM education. International Journal of STEM Education, 3 , 11. https://doi.org/10.1186/s40594-016-0046-z

Kennedy, J., Baxter, P., & Belpaeme, T. (2015). Comparing robot embodiments in a guided discovery learning interaction with children. International Journal of Social Robotics, 7 (2), 293–308. https://doi.org/10.1007/s12369-014-0277-4

*Kim, C., Kim, D., Yuan, J., Hill, R. B., Doshi, P., & Thai, C. N. (2015). Robotics to promote elementary education pre-service teachers’ STEM engagement, learning, and teaching. Computers and Education., 91 , 14–31. https://doi.org/10.1016/j.compedu.2015.08.005

*Kim, C. M., Yuan, J., Vasconcelos, L., Shin, M., & Hill, R. B. (2018). Debugging during block-based programming. Instructional Science, 46 (5), 767–787. https://doi.org/10.1007/s11251-018-9453-5

*Konijn, E. A., & Hoorn, J. F. (2020). Robot tutor and pupils’ educational ability: Teaching the times tables. Computers and Education, 157 , 103970. https://doi.org/10.1016/j.compedu.2020.103970

Köse, H., Uluer, P., Akalın, N., Yorgancı, R., Özkul, A., & Ince, G. (2015). The effect of embodiment in sign language tutoring with assistive humanoid robots. International Journal of Social Robotics, 7 (4), 537–548. https://doi.org/10.1007/s12369-015-0311-1

Kucuk, S., Aydemir, M., Yildirim, G., Arpacik, O., & Goktas, Y. (2013). Educational technology research trends in Turkey from 1990 to 2011. Computers & Education, 68 , 42–50. https://doi.org/10.1016/j.compedu.2013.04.016

Lamptey, D. L., Cagliostro, E., Srikanthan, D., Hong, S., Dief, S., & Lindsay, S. (2021). Assessing the impact of an adapted robotics programme on interest in science, technology, engineering and mathematics (STEM) among children with disabilities. International Journal of Disability, Development and Education, 68 (1), 62–77. https://doi.org/10.1080/1034912X.2019.1650902

*Leonard, J., Buss, A., Gamboa, R., Mitchell, M., Fashola, O. S., Hubert, T., & Almughyirah, S. (2016). Using robotics and game design to enhance children’s self-efficacy, STEM attitudes, and computational thinking skills. Journal of Science Education and Technology, 25 (6), 860–876. https://doi.org/10.1007/s10956-016-9628-2

*Leonard, J., Mitchell, M., Barnes-Johnson, J., Unertl, A., Outka-Hill, J., Robinson, R., & Hester-Croff, C. (2018). Preparing teachers to engage rural students in computational thinking through robotics, game design, and culturally responsive teaching. Journal of Teacher Education, 69 (4), 386–407. https://doi.org/10.1177/0022487117732317

*Li, Y., Huang, Z., Jiang, M., & Chang, T. W. (2016). The effect on pupils’ science performance and problem-solving ability through Lego: An engineering design-based modeling approach. Educational Technology and Society, 19 (3), 143–156. https://doi.org/10.2307/jeductechsoci.19.3.14

Lin, H. C., & Hwang, G. J. (2019). Research trends of flipped classroom studies for medical courses: A review of journal publications from 2008 to 2017 based on the technology-enhanced learning model. Interactive Learning Environments, 27 (8), 1011–1027. https://doi.org/10.1080/10494820.2018.1467462

*Luo, W., Wei, H. R., Ritzhaupt, A. D., Huggins-Manley, A. C., & Gardner-McCune, C. (2019). Using the S-STEM survey to evaluate a middle school robotics learning environment: Validity evidence in a different context. Journal of Science Education and Technology, 28 (4), 429–443. https://doi.org/10.1007/s10956-019-09773-z

*Ma, H. L., Wang, X. H., Zhao, M., Wang, L., Wang, M. R., & Li, X. J. (2020). Impact of robotic instruction with a novel inquiry framework on primary schools students. International Journal of Engineering Education, 36 (5), 1472–1479.

Margot, K. C., & Kettler, T. (2019). Teachers’ perception of STEM integration and education: A systematic literature review. International Journal of STEM Education, 6 (1), 1–16. https://doi.org/10.1186/s40594-018-0151-2

Martín-Páez, T., Aguilera, D., Perales-Palacios, F. J., & Vílchez-González, J. M. (2019). What are we talking about when we talk about STEM education? A review of literature. Science Education, 103 (4), 799–822. https://doi.org/10.1002/sce.21522

*McDonald, S., & Howell, J. (2012). Watching, creating and achieving: Creative technologies as a conduit for learning in the early years. British Journal of Educational Technology, 43 (4), 641–651. https://doi.org/10.1111/j.1467-8535.2011.01231.x

*Meyers, K., Goodrich, V. E., Brockman, J. B., & Caponigro, J. (2012). I2D2: Imagination, innovation, discovery, and design. In 2012 ASEE annual conference & exposition (pp. 25–707). https://doi.org/10.18260/1-2--21464

Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., Prisma Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine . https://doi.org/10.1371/journal.pmed.1000097

Moomaw, S. (2012). STEM Begins in the Early Years. School Science and Mathematics, 112 (2), 57–58. https://doi.org/10.1111/j.1949-8594.2011.00119.x

Nel, H., Ettershank, M., & Venter, J. (2017). AfrikaBot: Design of a robotics challenge to promote STEM in Africa. In M. Auer, D. Guralnick, & J. Uhomoibhi (Eds.), Interactive collaborative learning. ICL 2016. Advances in intelligent systems and computing. Springer. https://doi.org/10.1007/978-3-319-50340-0_44

*Newton, K. J., Leonard, J., Buss, A., Wright, C. G., & Barnes-Johnson, J. (2020). Informal STEM: Learning with robotics and game design in an urban context. Journal of Research on Technology in Education, 52 (2), 129–147. https://doi.org/10.1080/15391523.2020.1713263

Okita, S. Y. (2014). The relative merits of transparency: Investigating situations that support the use of robotics in developing student learning adaptability across virtual and physical computing platforms. British Journal of Educational Technology, 45 (5), 844–862. https://doi.org/10.1111/bjet.12101

Özdemir, D., & Karaman, S. (2017). Investigating interactions between students with mild mental retardation and humanoid robot in terms of feedback types. Education and Science, 42 (191), 109–138. https://doi.org/10.15390/EB.2017.6948

Özüorçun, N. Ç., & Bicen, H. (2017). Does the inclusion of robots affect engineering students’ achievement in computer programming courses? Eurasia Journal of Mathematics, Science and Technology Education, 13 (8), 4779–4787. https://doi.org/10.12973/eurasia.2017.00964a

*Pérez, S. E., & López, J. F. (2019). An ultra-low cost line follower robot as educational tool for teaching programming and circuit’s foundations. Computer Applications in Engineering Education, 27 (2), 288–302. https://doi.org/10.1002/cae.22074

*Phamduy, P., Leou, M., Milne, C., & Porfiri, M. (2017). An interactive robotic fish exhibit for designed settings in informal science learning. IEEE Transactions on Education, 60 (4), 273–280. https://doi.org/10.1109/TE.2017.2695173

*Ryan, M., Gale, J., & Usselman, M. (2017). Integrating engineering into core science instruction: Translating NGSS principles into practice through iterative curriculum design. International Journal of Engineering Education., 33 (1), 321–331.

*Sen, C., Ay, Z. S., & Kiray, S. A. (2021). Computational thinking skills of gifted and talented students in integrated STEM activities based on the engineering design process: The case of robotics and 3D robot modeling. Thinking Skills and Creativity, 42 , 100931. https://doi.org/10.1016/j.tsc.2021.100931

Spolaôr, N., & Benitti, F. B. V. (2017). Robotics applications grounded in learning theories on tertiary education: A systematic review. Computers & Education, 112 , 97–107. https://doi.org/10.1016/j.compedu.2017.05.001

*Stewart, W. H., Baek, Y., Kwid, G., & Taylor, K. (2021). Exploring factors that influence computational thinking skills in elementary students’ collaborative robotics. Journal of Educational Computing Research, 59 (6), 1208–1239. https://doi.org/10.1177/0735633121992479

*Sullivan, A., & Bers, M. U. (2016). Robotics in the early childhood classroom: Learning outcomes from an 8-week robotics curriculum in pre-kindergarten through second grade. International Journal of Technology and Design Education, 26 (1), 3–20. https://doi.org/10.1007/s10798-015-9304-5

*Sullivan, A., & Bers, M. U. (2019). Investigating the use of robotics to increase girls’ interest in engineering during early elementary school. International Journal of Technology and Design Education, 29 , 1033–1051. https://doi.org/10.1007/s10798-018-9483-y

*Taylor, M. S. (2018). Computer programming with pre-K through first-grade students with intellectual disabilities. Journal of Special Education, 52 (2), 78–88. https://doi.org/10.1177/0022466918761120

Taylor, R. P. (1980). Introduction. In R. P. Taylor (Ed.), The computer in school: Tutor, tool, tutee (pp. 1–10). Teachers College Press.

Tselegkaridis, S., & Sapounidis, T. (2021). Simulators in educational robotics: A review. Education Sciences, 11 (1), 11. https://doi.org/10.3390/educsci11010011

*Üçgül, M., & Altıok, S. (2022). You are an astroneer: The effects of robotics camps on secondary school students’ perceptions and attitudes towards STEM. International Journal of Technology and Design Education, 32 (3), 1679–1699. https://doi.org/10.1007/s10798-021-09673-7

*Ucgul, M., & Cagiltay, K. (2014). Design and development issues for educational robotics training camps. International Journal of Technology and Design Education, 24 (2), 203–222. https://doi.org/10.1007/s10798-013-9253-9

van den Berghe, R., Verhagen, J., Oudgenoeg-Paz, O., Van der Ven, S., & Leseman, P. (2019). Social robots for language learning: A review. Review of Educational Research, 89 (2), 259–295. https://doi.org/10.3102/0034654318821286

Zhang, Y., Luo, R., Zhu, Y., & Yin, Y. (2021). Educational robots improve K-12 students’ computational thinking and STEM attitudes: Systematic review. Journal of Educational Computing Research, 59 (7), 1450–1481. https://doi.org/10.1177/0735633121994070

Zhong, B., & Xia, L. (2020). A systematic review on exploring the potential of educational robotics in mathematics education. International Journal of Science and Mathematics Education, 18 (1), 79–101. https://doi.org/10.1007/s10763-018-09939-y

Acknowledgements

The authors would like to express their gratitude to the three anonymous reviewers for providing their precious comments to refine this manuscript.

This study was supported by the Ministry of Science and Technology of Taiwan under contract numbers MOST-109-2511-H-011-002-MY3 and MOST-108-2511-H-011-005-MY3; the National Science and Technology Council (TW) (NSTC 111-2410-H-031-092-MY2); and Soochow University (TW) (111160605-0014). Any opinions, findings, conclusions, and/or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the Ministry of Science and Technology of Taiwan.

Author information

Authors and affiliations

Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, 43, Sec. 4, Keelung Rd., Taipei, 106, Taiwan

Darmawansah Darmawansah, Gwo-Jen Hwang & Jia-Cing Liang

Department of English Language and Literature, Soochow University, Q114, No. 70, Linhsi Road, Shihlin District, Taipei, 111, Taiwan

Mei-Rong Alice Chen

Yuan Ze University, 135, Yuandong Road, Zhongli District, Taipei, Taiwan

Gwo-Jen Hwang

Contributions

DD, MR and GJ conceptualized the study. MR wrote the outline and DD wrote the draft. DD, MR and GJ contributed to the manuscript through critical reviews, revised the manuscript, and finalized the manuscript. DD edited the manuscript. MR and GJ monitored the project and provided supervision. DD, MR and JC contributed to data collection, coding, analyses, and interpretation. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mei-Rong Alice Chen .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Coded papers.

Appendix 1. Summary of selected studies from the angle of research issue

| # | Authors | Location | Sample size | Duration of intervention | Research methods | Research foci |
|---|---------|----------|-------------|--------------------------|------------------|---------------|
| 1 | Convertini | Italy | 21–40 | ≤ 1 day | Experimental design | Problem solving, collaboration or teamwork, and communication |
| 2 | Lamptey et al. | Canada | 41–60 | ≤ 8 weeks | Mixed method | Satisfaction or interest, and learning perceptions |
| 3 | Üçgül and Altıok | Turkey | 41–60 | ≤ 1 day | Questionnaire or survey | Attitude and motivation, learning perceptions |
| 4 | Sen et al. | Turkey | 1–20 | ≤ 4 weeks | Experimental design | Problem solving, critical thinking, logical thinking, creativity, collaboration or teamwork, and communication |
| 5 | Stewart et al. | USA | > 80 | ≤ 6 months | Mixed method | Higher order thinking skills, problem-solving, technology acceptance, attitude and motivation, and learning perceptions |
| 6 | Bernstein et al. | USA | 1–20 | ≤ 1 day | Questionnaire or survey | Attitude and motivation, and learning perceptions |
| 7 | Chang and Chen | Taiwan | 41–60 | ≤ 8 weeks | Mixed method | Learning performance, problem-solving, satisfaction or interest, and operational skill |
| 8 | Chang and Chen | Taiwan | 41–60 | ≤ 8 weeks | Experimental design | Learning perceptions, and operational skill |
| 9 | Chapman et al. | USA | > 80 | ≤ 8 weeks | Mixed method | Learning performance, and learning perceptions |
| 10 | Chiang et al. | China | 41–60 | ≤ 4 weeks | Questionnaire or survey | Creativity, and self-efficacy and confidence |
| 11 | Guven et al. | Turkey | 1–20 | ≤ 6 months | Mixed method | Creativity, technology acceptance, attitude and motivation, self-efficacy or confidence, satisfaction or interest, and learning perception |
| 12 | Hennessy Elliott | USA | 1–20 | ≤ 12 months | Experimental design | Collaboration, communication, and preview situation |
| 13 | Konijn and Hoorn | Netherlands | 41–60 | ≤ 4 weeks | Experimental design | Learning performance, and learning behavior |
| 14 | Ma et al. | China | 41–60 | ≤ 6 months | Mixed method | Learning performance, learning perceptions, and learning behavior |
| 15 | Newton et al. | USA | > 80 | ≤ 6 months | Mixed method | Attitude and motivation, and self-efficacy and confidence |
| 16 | Luo et al. | USA | 41–60 | ≤ 4 weeks | Questionnaire or survey | Technology acceptance, attitude and motivation, and self-efficacy |
| 17 | Pérez and López | Mexico | 21–40 | ≤ 6 months | System development | Operational skill |
| 18 | Sullivan and Bers | USA | > 80 | ≤ 8 weeks | Mixed method | Attitude and motivation, satisfaction or interest, and learning behavior |
| 19 | Barak and Assal | Israel | 21–40 | ≤ 6 months | Mixed method | Learning performance, technology acceptance, self-efficacy, and satisfaction or interest |
| 20 | Castro et al. | Italy | > 80 | ≤ 8 weeks | Questionnaire or survey | Learning performance, and self-efficacy |
| 21 | Casey et al. | USA | > 80 | ≤ 12 months | Questionnaire or survey | Learning satisfaction |
| 22 | Kim et al. | USA | 1–20 | ≤ 4 weeks | Questionnaire or survey | Problem solving, and preview situation |
| 23 | Leonard et al. | USA | 41–60 | ≤ 12 months | Questionnaire or survey | Learning performance, self-efficacy, and learning perceptions |
| 24 | Taylor | USA | 1–20 | ≤ 1 day | Experimental design | Learning performance, and preview situation |
| 25 | Gomoll et al. | USA | 21–40 | ≤ 8 weeks | Experimental design | Problem solving, collaboration, communication |
| 26 | Jaipal-Jamani and Angeli | Canada | 21–40 | ≤ 4 weeks | Mixed method | Learning performance, self-efficacy, and satisfaction or interest |
| 27 | Phamduy et al. | USA | > 80 | ≤ 4 weeks | Mixed method | Satisfaction or interest, and learning behavior |
| 28 | Ryan et al. | USA | 1–20 | ≤ 12 months | Questionnaire or survey | Learning perceptions |
| 29 | Gomoll et al. | USA | 21–40 | ≤ 6 months | Experimental design | Satisfaction or interest, and learning perceptions |
| 30 | Leonard et al. | USA | 61–80 | ≤ 4 weeks | Mixed method | Attitude and motivation, and self-efficacy |
| 31 | Li et al. | China | 21–40 | ≤ 8 weeks | Experimental design | Learning performance, and problem-solving |
| 32 | Sullivan and Bers | USA | 41–60 | ≤ 8 weeks | Experimental design | Learning performance, and operational skill |
| 33 | Ayar | Turkey | > 80 | ≤ 4 weeks | Questionnaire or survey | Attitude and motivation, satisfaction or interest, and learning perceptions |
| 34 | Christensen et al. | USA | > 80 | ≤ 6 months | Questionnaire or survey | Technology acceptance, satisfaction or interest, and learning perceptions |
| 35 | Kim et al. | USA | 1–20 | ≤ 4 weeks | Mixed method | Learning performance, satisfaction or interest, and learning perceptions |
| 36 | Barker et al. | USA | 21–40 | ≤ 4 weeks | Questionnaire or survey | Technology acceptance, attitude and motivation, and learning perceptions |
| 37 | Ucgul and Cagiltay | Turkey | 41–60 | ≤ 4 weeks | Questionnaire or survey | Learning performance, satisfaction or interest, and learning perceptions |
| 38 | McDonald and Howell | Australia | 1–20 | ≤ 8 weeks | Mixed method | Learning performance, operational skills, and learning behavior |
| 39 | Meyers et al. | USA | > 80 | ≤ 4 weeks | Questionnaire or survey | Learning perceptions |

Appendix 2. Summary of selected studies from the angles of interaction and application

| # | Authors | Participants | Role of robot | Types of robot | Dominant STEM discipline | Contribution to STEM | Integration of robot and STEM | Pedagogical intervention | Educational objectives |
|---|---------|--------------|---------------|----------------|--------------------------|----------------------|-------------------------------|--------------------------|------------------------|
| 1 | Convertini | Preschool or kindergarten | Tutee | LEGO (Mindstorms) | Engineering | Structure and construction | Context integration | Active construction | Learning and transfer skills |
| 2 | Lamptey et al. | Non-specified | Tool | LEGO (Mindstorms) | Technology | Programming | Supporting content integration | Problem-based learning | Learning and transfer skills |
| 3 | Üçgül and Altıok | Junior high school students | Tool | LEGO (Mindstorms) | Technology | Programming | Content integration | Project-based learning | Creativity and motivation |
| 4 | Sen et al. | Others (gifted and talented students) | Tutee | LEGO (Mindstorms) | Technology | Programming, and mathematical methods | Supporting content integration | Problem-based learning | Learning and transfer skills |
| 5 | Stewart et al. | Elementary school students | Tool | Botball robot | Technology | Programming, and power and dynamical systems | Content integration | Project-based learning | Learning and transfer skills |
| 6 | Bernstein et al. | In-service teachers | Tool | Non-specified | Science | Biomechanics | Content integration | Project-based learning | Teachers' professional development |
| 7 | Chang and Chen | High school students | Tool | Arduino | Interdisciplinary | Basic physics, programming, component design, and mathematical methods | Content integration | Project-based learning | Learning and transfer skills |
| 8 | Chang and Chen | High school students | Tool | Arduino | Interdisciplinary | Basic physics, programming, component design, and mathematical methods | Content integration | Project-based learning | Learning and transfer skills |
| 9 | Chapman et al. | Elementary, middle, and high school students | Tool | LEGO (Mindstorms) and Maglev trains | Engineering | Engineering | Content integration | Engaged learning | Learning and transfer skills |
| 10 | Chiang et al. | Non-specified | Tool | LEGO (Mindstorms) | Technology | Non-specified | Context integration | Edutainment | Creativity and motivation |
| 11 | Guven et al. | Elementary school students | Tutee | Arduino | Technology | Programming | Content integration | Constructivism | Creativity and motivation |
| 12 | Hennessy Elliott | Students and teachers | Tool | Non-specified | Technology | Non-specified | Supporting content integration | Collaborative learning | General benefits of educational robotics |
| 13 | Konijn and Hoorn | Elementary school students | Tutor | Nao robot | Mathematics | Mathematical methods | Supporting content integration | Engaged learning | Learning and transfer skills |
| 14 | Ma et al. | Elementary school students | Tool | Microduino and Makeblock | Engineering | Non-specified | Content integration | Experiential learning | Learning and transfer skills |
| 15 | Newton et al. | Elementary school students | Tool | LEGO (Mindstorms) | Technology | Programming | Supporting content integration | Active construction | Learning and transfer skills |
| 16 | Luo et al. | Junior high or middle school students | Tool | Vex robots | Interdisciplinary | Programming, engineering, and mathematics | Content integration | Constructivism | General benefits of educational robotics |
| 17 | Pérez and López | High school students | Tutee | Arduino | Engineering | Programming, and mechanics | Content integration | Project-based learning | Learning and transfer skills |
| 18 | Sullivan and Bers | Kindergarten and elementary school students | Tool | KIBO robots | Technology | Programming | Context integration | Project-based learning | Learning and transfer skills |
| 19 | Barak and Assal | High school students | Tool | Non-specified | Technology | Programming, and mathematical methods | Content integration | Problem-based learning | Learning and transfer skills |
| 20 | Castro et al. | Lower secondary students | Tool | Bee-bot | Technology | Programming | Content integration | Problem-based learning | Learning and transfer skills |
| 21 | Casey et al. | Elementary school students | Tool | Roamers robot | Technology | Programming | Content integration | Metacognitive learning | Learning and transfer skills |
| 22 | Kim et al. | Pre-service teachers | Tool | Non-specified | Technology | Programming | Supporting content integration | Problem-based learning | Learning and transfer skills |
| 23 | Leonard et al. | In-service teachers | Tool | LEGO (Mindstorms) | Technology | Programming | Supporting content integration | Project-based learning | Teachers' professional development |
| 24 | Taylor | Kindergarten and elementary school students | Tool | Dash robot | Technology | Programming | Content integration | Problem-based learning | Learning and transfer skills |
| 25 | Gomoll et al. | Middle school students | Tool | iRobot Create | Technology | Programming, and structure and construction | Content integration | Problem-based learning | Learning and transfer skills |
| 26 | Jaipal-Jamani and Angeli | Pre-service teachers | Tool | LEGO WeDo | Technology | Programming | Supporting content integration | Project-based learning | Learning and transfer skills |
| 27 | Phamduy et al. | Non-specified | Tutee | Arduino | Science | Biology | Context integration | Edutainment | Diversity and broadening participation |
| 28 | Ryan et al. | In-service teachers | Tool | LEGO (Mindstorms) | Engineering | Engineering | Content integration | Constructivism | Teachers' professional development |
| 29 | Gomoll et al. | Non-specified | Tool | iRobot Create | Technology | Programming | Content integration | Project-based learning | Learning and transfer skills |
| 30 | Leonard et al. | Middle school students | Tool | LEGO (Mindstorms) | Technology | Programming | Content integration | Project-based learning | Learning and transfer skills |
| 31 | Li et al. | Elementary school students | Tool | LEGO Bricks | Engineering | Structure and construction | Supporting content integration | Project-based learning | General benefits of educational robotics |
| 32 | Sullivan and Bers | Kindergarten and elementary school students | Tool | Kiwi Kits | Engineering | Digital signal processing | Content integration | Project-based learning | Learning and transfer skills |
| 33 | Ayar | High school students | Tool | Nao robot | Engineering | Component design | Content integration | Edutainment | Creativity and motivation |
| 34 | Christensen et al. | Middle and high school students | Tutee | Non-specified | Engineering | Engineering | Context integration | Edutainment | Creativity and motivation |
| 35 | Kim et al. | Pre-service teachers | Tool | RoboRobo | Technology | Programming | Supporting content integration | Engaged learning | Teachers' professional development |
| 36 | Barker et al. | In-service teachers | Tool | LEGO (Mindstorms) | Technology | Geography information system, and programming | Supporting content integration | Constructivism | Creativity and motivation |
| 37 | Ucgul and Cagiltay | Elementary and middle school students | Tool | LEGO (Mindstorms) | Technology | Programming, mechanics, and mathematics | Content integration | Project-based learning | General benefits of educational robotics |
| 38 | McDonald and Howell | Elementary school students | Tool | LEGO WeDo | Technology | Programming, and structure and construction | Content integration | Project-based learning | Learning and transfer skills |
| 39 | Meyers et al. | Elementary school students | Tool | LEGO (Mindstorms) | Engineering | Engineering | Supporting content integration | Edutainment | Creativity and motivation |

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Darmawansah, D., Hwang, GJ., Chen, MR.A. et al. Trends and research foci of robotics-based STEM education: a systematic review from diverse angles based on the technology-based learning model. IJ STEM Ed 10 , 12 (2023). https://doi.org/10.1186/s40594-023-00400-3

Received : 11 May 2022

Accepted : 13 January 2023

Published : 10 February 2023

DOI : https://doi.org/10.1186/s40594-023-00400-3

Keywords

  • STEM education
  • Interdisciplinary projects
  • Twenty-first century skills

The future of robotics

Guest Jeannette Bohg is an expert in robotics who says there is a transformation happening in her field, brought on by recent advances in large language models.

The LLMs have a certain common sense baked in and robots are using it to plan and to reason as never before. But they still lack low-level sensorimotor control – like the fine skill it takes to turn a doorknob. New models that do for robotic control what LLMs did for language could soon make such skills a reality, Bohg tells host Russ Altman on this episode of Stanford Engineering’s The Future of Everything podcast.

Related: Jeannette Bohg, assistant professor of computer science

[00:00:00] Jeannette Bohg: Through things like ChatGPT, we have been able to do reasoning and planning on the high level, meaning kind of on the level of symbols, very well known in robotics, in a very different way than we could do before.

[00:00:17] Russ Altman: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you enjoy The Future of Everything, please hit follow in whatever app you're listening to right now. This will guarantee that you never miss an episode. 

[00:00:29] Today, Professor Jeannette Bohg will tell us about robots and the status of robotic work. She'll tell us that ChatGPT is even useful for robots. And that there are huge challenges in getting reliable hardware so we can realize all of our robotic dreams. It's the future of robotics. 

[00:00:48] Before we get started, please remember to follow the show and ensure that you'll get alerted to all the new episodes so you'll never miss the future, and I love saying this, of anything.

[00:01:04] Many of us have been thinking about robots since we were little kids. When are we going to get those robots that can make our dinner, clean our house, drive us around, make life really easy? Well, it turns out that there's still some challenges and they're significant for getting robots to work. There are hardware challenges.

[00:01:20] It turns out that the human hand is way better than most robotic manipulators. In addition, robots break. They work in some situations like factories, but those are dangerous robots. They just go right through whatever's in front of them. 

[00:01:34] Well, Jeannette Bohg is a computer scientist at Stanford University and an expert on robotics. She's going to tell us that we are making good progress in building reliable hardware and in developing algorithms to help make robots do their thing. What's perhaps most surprising is even ChatGPT is helping the robotics community, even though it just does chats. 

[00:01:58] So Jeannette, there's been an increased awareness of AI in the last year, especially because of things like ChatGPT and what they call these large language models. But you work in robotics, you're building robots that sense and move around. Is that AI revolution for like chat, is that affecting your world? 

[00:02:15] Jeannette Bohg: Yeah. Um, yeah, very good question. It definitely does. Um, in, um, surprising ways, honestly. So I think for me, my research language has always been very interesting, but somewhat in the, you know, in the background from the kind of research I'm doing, which is like on robotic manipulation. And with the, um, with this rise of things like ChatGPT or large language models, suddenly, um, doors are being opened, uh, in robotics that were really pretty closed.

[00:02:46] Russ Altman: Metaphorical or physical or both? 

[00:02:48] Jeannette Bohg: Physically. That's exactly, that's a very good question, because physically robots are very bad at opening doors. But metaphorically speaking, these, uh, we can talk about that as well, metaphorically speaking, through things like ChatGPT we have been able to do reasoning and planning on the high level, meaning kind of on the level of symbols, very well known in robotics, in a very different way than we could do before.

[00:03:12] So let's say, for example, you're in a kitchen and you want to make dinner. Um, and, uh, you know, there are so many steps that you have to do for that, right? And they don't have to do something, they don't necessarily have to do something with how you move your hands and all of this.

[00:03:27] It's really just like, I'm laying out the steps of what I'm going to do for making dinner. And this kind of reasoning is suddenly possible in a much more open-ended way, right? Because we can, uh, these language models, they have this common-sense knowledge kind of baked into them. And now we can use them in robotics to do these task plans, right? That, um, that are really consisting of so many steps, and they kind of make sense. It's not always correct.

[00:03:55] Russ Altman: Right, right. 

[00:03:55] Jeannette Bohg: Um, I mean, if you try ChatGPT, you know, it's hallucinating things. It's like making stuff up. Um, but, um, that's the challenge, uh, actually, and how to use these models in robotics. But the good thing is they open up these doors, metaphorically speaking again, um, to just do this task planning in an open-ended way. Um, and you know, and they can just like, um, they also allow for a very natural interface between people and robots as well. That's another,
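
To make the pattern concrete, here is a minimal sketch of what Bohg describes: an LLM proposes symbolic steps, which are then mapped onto a fixed library of robot skills. The call_llm function and the skill names are hypothetical stand-ins, not the API of any real system:

```python
# Minimal sketch: an LLM as a high-level task planner over a fixed skill library.
# call_llm is a hypothetical stand-in for any chat-model API; the skills are toy
# placeholders that print instead of moving hardware.
from typing import Callable

SKILLS: dict[str, Callable[[str], None]] = {
    "open":     lambda obj: print(f"[robot] opening {obj}"),
    "pick_up":  lambda obj: print(f"[robot] picking up {obj}"),
    "place_in": lambda obj: print(f"[robot] placing held item in {obj}"),
}

PROMPT = (
    "You control a kitchen robot with skills open(x), pick_up(x), place_in(x).\n"
    "Reply with one skill call per line to accomplish this task: {task}"
)

def call_llm(prompt: str) -> str:
    # Hypothetical model call; swap in a real chat-model API here.
    return "open(fridge)\npick_up(tomato)\nplace_in(pot)"

def plan_and_execute(task: str) -> None:
    plan = call_llm(PROMPT.format(task=task))
    for step in plan.strip().splitlines():
        name, arg = step.rstrip(")").split("(", 1)
        skill = SKILLS.get(name.strip())
        if skill is None:
            # Plans are "not always correct": never execute a hallucinated step.
            print(f"[warn] skipping unknown skill: {step}")
            continue
        skill(arg.strip())

plan_and_execute("put the tomato in the pot")
```

Note that only the symbolic plan comes from the model; each skill's low-level motion is left to conventional robot control, which is exactly the gap discussed next.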

[00:04:26] Russ Altman: Great. So that's really fascinating. So. If I understood your answer, you said that like for a high level, here's kind of the high-level script of how to make dinner, you know, get the dishes, get the ingredients. Um, do you find that there's a level of detail, I think implied in your answer, is that there's a level of detail that you need to get the robot to do the right things that it's not yet able to specify. 

[00:04:49] Are you optimistic that it will be able to do that? Or do you think it's going to be an entirely different approach to like, you know, move the manipulator arm to this position and grasp it gently? Do you think that that's going to be in the range of ChatGPT or will that be other algorithms? 

[00:05:03] Jeannette Bohg: Yeah. So I think to some extent, again, like these, you know, common sense, um, understanding of the world is in there. So for example, the idea that a glass could be fragile and you have to pick it up in a gentle way, or, uh, let's say you have to grasp a hammer by the handle or, you know, the tool tip of, uh, that tool is like over here or something like this.

[00:05:26] These are things that actually, um, help a robot to also plan its motion. Not just kind of this high-level task planning, but actually understand where to grasp things and maybe how much pressure to apply. Um, but they still, uh, they still cannot be directly generate an action, right? Like, so the action that a robot needs to compute is basically how do I move my hand? Like where exactly, like every millisecond, uh, or at least every ten milliseconds or something like that. And that is not what these models do. Um, and that's totally fine because to do that, they need completely different, they would need completely different training data that actually has this information in there.

[00:06:09] Um, like the actual motion of the robot arm needs to be given to these models in order to do that kind of prediction. Um, and so I think, um, so yeah, so that is actually the biggest, one of the biggest challenges in robotics to get to the same level of data that you have in areas like natural language processing or computer vision, that these, uh, models like ChatGPT, have consumed so far, right?

[00:06:38] So that, these models have been trained on trillions of tokens, right? Like multiple trillions of tokens. I don't know what the current maximum is. Um, but it's like, yeah, a lot. And in robotics, we have, uh, more like on the order of hundreds of thousands of data points. Uh, the difference is a factor of millions.
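
As a quick sanity check on that gap, using rough order-of-magnitude numbers taken from the conversation (both values are assumptions, not measurements):

```python
# Back-of-the-envelope version of the data gap described above.
language_tokens = 2e12    # "multiple trillions of tokens"
robot_datapoints = 2e5    # "on the order of hundreds of thousands" of demonstrations
print(f"gap: {language_tokens / robot_datapoints:,.0f}x")  # gap: 10,000,000x
```

That is a factor of roughly ten million, consistent with the "factor of millions" Bohg describes.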

[00:07:06] Russ Altman: Now let me just ask you about that because I'm surprised you say that because I think about in many cases robots are trying to do things that we see in video by humans all the time. Like probably on television you could find many, many examples of somebody picking up a glass or opening a door, but it sounds to me like that's not enough for you. Like, in other words, these pictures of people doing things that doesn't turn into useful training data for the robot. And I guess that kind of makes sense. Although I'm a little surprised that we haven't figured out a way to take advantage of all of that human action to inform the video action. So talk to me a little bit about that. 

[00:07:43] Jeannette Bohg: Yeah, yeah. This is like a very interesting question. So the data that I said is too little right now, uh, in comparison to natural language processing and computer vision, that's really data that has been directly collected on the robot. 

[00:07:54] Russ Altman: Okay. So it's robot examples of them reaching, them touching.

[00:07:58] Jeannette Bohg: Yeah. And so that's like painstakingly collected with like joysticks and stuff like this, right? Like it's very tedious. That's why it's, I don't think possible to get to the same level of data, but you bring up a very good point, right? Like on YouTube. I mean, I'm watching YouTube all the time to just figure out like how to do something right?

[00:08:16] And how to repair something or do this and that, and yeah, we are learning from that and we are learning when we are growing up from our parents or whoever is like showing us how to do things. And, um, we want robots to do exactly the same. Uh, and that is like a super interesting research question. Uh, but the reason why it's a research question and not solved, um, is that in a video, um, you see a hand of a person, for example. But this hand, like our hand, sorry, I actually cut myself. 

[00:08:46] Russ Altman: Yes, I see that. For those who are listening, there's a little Band-Aid on Jeannette's hand. 

[00:08:51] Jeannette Bohg: But our hand is actually amazing, right? Like we have these five fingers, we have like, I don't know, it's even difficult to actually count how many degrees of freedom and joints our hand has, but it's like twenty-seven or something like that. It's soft, it can be very stiff, but it can also be very compliant. It's like, an amazing universal tool. And our robot hands are completely different. Unfortunately, I don't have one here, but basically, it's like, like a gripper. Very easy, very, um, very simple. Um, and it's because of that, it's very limited in what it can do. Um, and it might also need to do, um, things that a person does or tasks that a person does in a completely different way. 

[00:09:30] Russ Altman: I see, I see. 

[00:09:31] Jeannette Bohg: To, you know, achieve the same task, if it's even possible at all. And so if a robot looks at a video of a person, it needs to somehow understand, okay, how does this map to my body, right? Like my hand only has two fingers.

[00:09:47] Russ Altman: Yeah, so it's like somebody learning the piano: it's not very useful to show them a YouTube video of Vladimir Horowitz and say, just play it like that, because he can do things that we can't do.

[00:09:59] Jeannette Bohg: Right. That's exactly right. And I've heard that Rachmaninoff, for example, had these insanely big hands, and that's how he could play his pieces. Basically, in order to play them, you had to have a very specific distance between your thumb and your pinky, for example,

[00:10:20] Russ Altman: Span, the span of your, 

[00:10:21] Jeannette Bohg: Yeah. 

[00:10:21] Russ Altman: Okay. So that's a really good answer to my question: the videos are relevant, but the robots aren't dealing with beautiful human hands. And so there would have to be a translation of whatever was happening in the video into their world, and that would be difficult.

[00:10:37] Jeannette Bohg: Yes, that is difficult. But people are looking into this, right? Like that's a super interesting research question on actually how. 

[00:10:43] Russ Altman: And the upside, as we've talked about, is that you would then have lots and lots of training data, if you could break that code of how to turn human actions in video into instructions for a robot. Okay, that's extremely helpful.
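As a rough illustration of the translation problem just discussed, here is a minimal sketch of retargeting: mapping tracked human hand keypoints from video onto a command for a two-finger gripper. The function, its inputs, and the numbers are all hypothetical assumptions for illustration; real retargeting pipelines are far more involved.

```python
# A minimal retargeting sketch: a 27-degree-of-freedom human hand collapses
# to an end-effector pose plus a single gripper width. Hypothetical names.
import numpy as np

def retarget_hand_to_gripper(wrist_pose: np.ndarray,
                             thumb_tip: np.ndarray,
                             index_tip: np.ndarray,
                             max_width: float = 0.08) -> dict:
    """Map human hand keypoints (meters, world frame) to a gripper command."""
    # The wrist pose (4x4 homogeneous transform) stands in for the
    # end-effector pose; a real system would also have to account for the
    # kinematic differences between the robot arm and the human arm.
    aperture = float(np.linalg.norm(thumb_tip - index_tip))
    return {
        "end_effector_pose": wrist_pose,
        # Clamp the thumb-index distance into the gripper's narrow range.
        "gripper_width": min(aperture, max_width),
    }

# Example: a pinch 2 cm wide becomes a 2 cm gripper closure.
cmd = retarget_hand_to_gripper(np.eye(4),
                               np.array([0.40, 0.02, 0.30]),
                               np.array([0.40, 0.00, 0.30]))
print(cmd["gripper_width"])  # 0.02
```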

[00:10:57] But I want to get to some of the technical details of your work, because it's fascinating. Before we get there, though, another background question about the goal for the robots. I know you've written a lot about autonomous robots, but you've also talked about how robots can work with humans to augment them.

[00:11:16] And I want to ask if those are points on a continuum. It seems like autonomous would be different from augmenting a human, but maybe in your mind they work together. So how should we think about that, and what should we expect the first or second generation of robotic assistants to be like?

[00:11:34] Jeannette Bohg: Yeah, this is a very good question. So first of all, I would say, yes, these are points on a spectrum, right? There are solutions on a spectrum from teleoperation, where you basically puppeteer a robot to do something, which is typically done for data collection, to, on the other end, the fully autonomous robot, basically the humanoid that we see in movies. Right.

[00:11:59] Russ Altman: That's like the vacuum cleaner in my living room, my Roomba. 

[00:12:02] Jeannette Bohg: Right, right. Exactly. Yeah. That one is definitely autonomous. 

[00:12:05] Russ Altman: It seems fully autonomous to me. I have no idea when it's going to go on or off or where it's going to go. 

[00:12:12] Jeannette Bohg: Yeah. Nobody knows. Nobody knows. 

[00:12:15] Russ Altman: Forgive me. Forgive me. 

[00:12:16] Jeannette Bohg: You bought it. I also had one once, back in the day, and you know, I just turned it on and then I left, because I knew it would take hours and hours to do what it needed to do.

[00:12:26] Russ Altman: I'm sorry, that was a little bit of a distraction. But yeah, tell me about this spectrum.

[00:12:31] Jeannette Bohg: Yeah. So I think there are ways in which robots can really augment people in that, for example, they could theoretically have more strength, right? It's not my area, but there are lots of people who build these exoskeletons or prosthetic devices, which I actually also find really interesting. They're typically very lightweight and have an easy interface. So that's interesting, and they can support people who have to lift heavy things, for example. So that's one way you can think about augmenting people to help them. Another one is maybe still autonomous, but it's still augmenting people in a way.

[00:13:15] So one example I want to bring up, and this is a shout out to Andrea Thomaz and Vivian Chu, who are leading this startup called Diligent Robotics; I recently heard a keynote from her at a conference. And I thought they did something really smart, which is they went first into hospitals to observe what nurses are doing all day, right?

[00:13:37] Like, what are they doing with their hours? And to their surprise, what nurses really spent a lot of time on was just shuttling supplies between different places instead of actually taking care of patients, right? That's what they're trained to do and really good at, so why are we using them to shuttle stuff around?

[00:13:55] And so what they decided is, oh, we actually don't need a robot to do the patient care or do the stocking or whatever. What we actually need is a robot that just shuttles stuff around in a hospital, where it still needs a hand to push elevator buttons and door buttons and things like that, or maybe open a door again, right? Like we had in the beginning. And I thought, oh, this is such a great augmentation, if you want. The nurses can now spend time on what they're really good at, what they're needed for, and what they're trained for, which is patient care, and stop worrying about where the supplies are or where things like blood samples have to go.

[00:14:36] Russ Altman: And it sounds like it might also create a, I don't know, I'm not going to say anything is easy, but a slightly more straightforward engineering challenge to start. 

[00:14:45] Jeannette Bohg: Right. So I think we're so far away from general-purpose robots, right? I don't know how long it's going to take, but it's still going to take a lot of time. And I think a smart way to bring robotics into our everyday world is through ideas like the ones from Diligent Robotics, where you really go and analyze what people quote unquote waste their time on. It's not really a waste of time, of course. But it could actually be done in an automated way, to give people time for the things they're really good at and that robots are still very bad at.

[00:15:18] So I think we will probably, hopefully, see more of this in the future, like very small things. You can think of the Roomba, for example, doing something very small, and I don't know how good it is, but it's good enough,

[00:15:37] Russ Altman: Compared to ignoring our floors, which was our strategy for the first twenty-five years, this is a huge improvement. Because now, even if it's not a perfect sweep, it's more sweeping than we would do under normal circumstances. 

[00:15:49] Jeannette Bohg: Yeah, I agree with that. So I think these small ideas, right, that are not, again, this general-purpose robot, but some very smart ideas about where robots can help people with things that they find really annoying, and that are doable for current robotic technology. I think that's what we will see in the next few years. And again, they are still autonomous, but they are augmenting people in this way.

[00:16:16] Russ Altman: Right. That resolves what I thought was a tension; you just explained why it's not really a tension. This is The Future of Everything with Russ Altman. More with Jeannette Bohg next.

[00:16:41] Welcome back to The Future of Everything. I'm Russ Altman, your host, and I'm speaking with Professor Jeannette Bohg from Stanford University. 

[00:16:47] In the last segment, we went over some of the challenges of autonomous versus augmenting robots. We talked a little bit about the data problems. And in this next segment, we're going to talk about hardware. What are robots going to look like? How are they going to work? How expensive are they going to be? I want to get into kind of a fun topic, which is the hardware. You made a brief mention of the hands, uh, and how amazing human hands are, but the current robotic hands, uh, they're not quite human yet.

[00:17:14] Um, where are we with hardware and what are the challenges and what are some of the exciting new developments? 

[00:17:20] Jeannette Bohg: Yeah. Hardware is hard. That's a saying in Silicon Valley, I've been told, and I think hardware in robotics is one of the biggest challenges. We have very good hardware when it comes to automation in places like factories that are building cars and all of this, and it's very reliable, which is what you want. But when it comes to the kinds of platforms that we want to see at some point in our homes or in hospitals, these platforms have to be equally robust and durable and repeatable, and we're not there. We're not there. Literally, I'm constantly talking to my students, and they're constantly repairing whatever new thing is broken again on our robots. I mean, it's constant.

[00:18:12] Russ Altman: That's interesting, just to interrupt you. The folks at Ford Motor Company and the big industrial players have figured it out. Is it a question of budget? Do they just spend a lot of money on these robots, or are they too simple compared to what you need? I just want to explore a little bit why those industrial ones are so good.

[00:18:30] Jeannette Bohg: Yeah, so that is a very good question. First of all, they are still very expensive robots, actually. They still cost multiple tens of thousands of dollars. But they also follow a very specific methodology, which is that they have to be very stiff. Unlike our arms, which are naturally kind of squishy and give in to anything we may bump into, these robots are not, right? They're going to go, no matter what, to the specific point you sent them to. That is just the way they are built, and maybe that's also why they are so robust. But they are dangerous, right?

[00:19:15] Russ Altman: Yes. 

[00:19:15] Jeannette Bohg: So that's why they're in cages, and people can't get close to them. And that's of course not what we want in the real world. So the kinds of robots that we work with in the research world are more geared towards, oh, when can we bring them into someone's home, or have them at least work alongside a person in warehouses, or things like that. And this technology, I think, is just not quite as mature and as robust. It's also not produced at the same scale; there are just not as many copies of those as there are of these industrial robots. And I think they're just not as optimized yet.
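One standard way to see the stiff-versus-compliant distinction, not spelled out by name in the episode, is a simple impedance control law: the commanded force pulls the arm toward a target pose, and the stiffness gain decides how hard the arm resists being pushed off course. A minimal sketch, with made-up gains:

```python
# Impedance control sketch (standard robotics; gains are invented).
# Low stiffness lets the arm "give" when it bumps into something;
# industrial arms effectively run with very high stiffness.
import numpy as np

def impedance_force(x: np.ndarray, x_des: np.ndarray, v: np.ndarray,
                    stiffness: float, damping: float) -> np.ndarray:
    """Restoring force toward the target pose; lower stiffness is softer."""
    return stiffness * (x_des - x) - damping * v

x, x_des, v = np.zeros(3), np.array([0.1, 0.0, 0.0]), np.zeros(3)
print(impedance_force(x, x_des, v, stiffness=50.0, damping=5.0))     # gentle
print(impedance_force(x, x_des, v, stiffness=5000.0, damping=50.0))  # rigid
```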

[00:19:53] Russ Altman: So when you said the robots cost tens of thousands of dollars, are those the robots you're using? 

[00:19:58] Jeannette Bohg: Uh, yeah. 

[00:19:59] Russ Altman: That your students are fixing all day?

[00:20:01] Jeannette Bohg: Yes, unfortunately, this is exactly right. I spend so much money from my lab on buying forty-thousand-dollar robot arms, or seventy-thousand-dollar robot arms, right? That's the kind of money we need to spend to have the research platforms we need to show and test our results. So, for example, one of the projects we have is a mobile manipulator: a robot arm on top of a mobile platform. Think of a Roomba with an arm, just way more expensive. It's more like,

[00:20:36] Russ Altman: A forty thousand dollar Roomba. I gotcha. 

[00:20:39] Jeannette Bohg: At least. Yeah. So think about that. That project was really fun. It's using this mobile manipulator to clean up your house. It basically talks to you to figure out, oh, what are your preferences? Where do your dirty socks go? Where do your empty Coke cans go? And then, from your few examples, it compresses that down to some general categories of where to put stuff.
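The compress-a-few-examples-into-rules idea can be sketched in a few lines. This is a hypothetical illustration of the input/output shape, not the project's actual code; a real system would hand the prompt to a large language model rather than hard-coding the summary.

```python
# Sketch: compressing a handful of user examples into general put-away rules.
# Everything below is illustrative; the "summary" is hard-coded to show the
# shape of what an LLM would be asked to produce.
examples = {
    "dirty sock": "laundry basket",
    "dirty shirt": "laundry basket",
    "empty Coke can": "recycling bin",
    "empty water bottle": "recycling bin",
}

# What you would send to a language model:
prompt = (
    "Summarize these placement preferences as general rules:\n"
    + "\n".join(f"- {obj} -> {place}" for obj, place in examples.items())
)

# What you would hope it returns: a few categories that generalize
# past the literal examples.
summarized_rules = {
    "dirty clothing": "laundry basket",
    "empty drink containers": "recycling bin",
}

def put_away(object_category: str) -> str:
    """Look up where a newly seen object should go under the learned rules."""
    return summarized_rules.get(object_category, "ask the user")

print(put_away("empty drink containers"))  # recycling bin
print(put_away("Lego piece"))              # ask the user
```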

[00:21:05] And so that's the robot we did a project on, and people were very excited about it. They loved it. It even throws stuff into bins; it's like a basketball star, in a way. And researchers loved it too, because of this mobile base. So, basically, the thing on wheels,

[00:21:29] Russ Altman: Yeah, it can move around. 

[00:21:30] Jeannette Bohg: That one is very unique. It was a donation from some company, and it has specific capabilities, but only three of its kind exist in the world, and people can't buy it, which is very disappointing. So again, these are the arms that we are constantly repairing, and it's even scary, because if we lose this platform, we can't do our research.

[00:21:58] Russ Altman: Right.

[00:21:58] Jeannette Bohg: So one of the things I'm doing for the first time in my lab, actually, and again, I'm a computer scientist, not a mechanical engineer. But, uh, with one of my students, we're looking at how to develop a low-cost version of this, uh, mobile base that has like these special abilities and is very maneuverable.

[00:22:17] And my hope is that with this platform, first of all, it's reliable, but if not, at least you can repair it cheaply and get in there yourself, right? Even if you're a student who is a computer scientist, not a mechanical engineer. And I hope it allows you to buy many of these platforms, rather than just one that you have to baby all the time. We will hopefully open source all of this design.

[00:22:47] And then, what I'm really excited about is to use this low-cost platform to do maybe swarm-based manipulation, with many robots collaborating with each other.

[00:23:00] Russ Altman: So in your current view, what would be the basic functionality of one of these units, or is that flexible? Is it a hand? Is it two hands? Is it mobile like a Roomba?

[00:23:12] Jeannette Bohg: Yeah, you could think of it as a Roomba++, basically, which has an arm. So it's not just vacuuming your floor, it's actually putting things away, right? For those who have children, I think they are always most excited about this robot, which we call TidyBot, because it puts things into the right places instead of you stepping on Lego pieces in the middle of the night, right?

[00:23:39] So that's what we're going for. And it would be one mobile base with one arm and one hand. And then let's say you have multiple of them. You could, for example, think of when you have to move, right? I personally think moving to another place is, I mean, the worst, right?

[00:23:59] Russ Altman: Packing, packing and unpacking is the worst. 

[00:24:01] Jeannette Bohg: Packing, unpacking, but also carrying stuff around. So imagine if you had this fleet of robots, right? That just helps you get the sofa through these tight spaces and all of this. So that's kind,

[00:24:11] Russ Altman: Paint a picture for me of this version one-point-oh. How tall is it? Are we talking two feet tall or five feet tall? How big is it?

[00:24:19] Jeannette Bohg: Now you're getting me with the feets and the inches. 

[00:24:22] Russ Altman: I'm sorry. You can do meters, whatever works.

[00:24:25] Jeannette Bohg: Okay. Yeah. So the base is actually fairly low, and pretty heavy, so that it has a low center of mass. It's probably, I guess, a foot tall. Let's say twenty centimeters.

[00:24:39] Russ Altman: Yeah. 

[00:24:39] Jeannette Bohg: Um, and then the arm, if it's fully stretched out and just pointing up, it is probably like one and a half meters long on top of that. 

[00:24:48] Russ Altman: That's five feet or so. 

[00:24:50] Jeannette Bohg: Really like fully stretched out, which it usually isn't to do stuff. It's like, 

[00:24:54] Russ Altman: But then it could reach things on tables. That's what I was trying to get at. It could reach tables. It could maybe reach into the dryer or the washing machine or stuff like that. It might be within range.

[00:25:05] Jeannette Bohg: All of that, and then also just making your bed.

[00:25:09] Russ Altman: Yeah, I hate that. 

[00:25:11] Jeannette Bohg: Yeah, terrible. 

[00:25:11] Russ Altman: So let me ask, since we're talking about what it looks like: in so much of sci-fi, robots seem to have to look like humans. What's your take on that? Is it important that the robot look like a human, or maybe is it important that it not? Where are you in this whole humanoid debate?

[00:25:29] Jeannette Bohg: Okay, this is a very good question. And I'm probably going to say something contentious, or maybe not, I don't know. But yeah, I think building a humanoid robot is really exciting from a research standpoint. And I think it just looks cool. So it gives you these super cool demos that you see from all these startups right now,

[00:25:49] Russ Altman: Right, right.

[00:25:49] Jeannette Bohg: On Twitter and all. I mean, this looks very cool. I just personally don't think it's the most economical way to think about what the most useful robot is. The arguments are typically, oh, but the spaces that we walk in and work in and live in are all designed for people. So why not make a robot platform that has the same form factor and can squeeze through tight places and use all the tools and all of that? It kind of makes sense to me.

[00:26:25] But again, coming back to my earlier point, I think general-purpose robots are really, really far away. And the nearer future, not the future of everything, but the future in the next few years, is maybe going to look more like very specific-purpose robots that are maybe on wheels, because that's just easier, right? You don't have to worry about that. And they can do relatively specialized things in one environment, like going into a grocery store and doing restocking, or things like that. Right?

[00:27:04] Russ Altman: I've also heard that you have to be careful about making it humanoid, because then humans might impute to it human capabilities and human emotions. By having it look like a weird device, it reminds you that this is indeed a device, and maybe the user interaction is more natural and less misled, because you don't treat it like it's a human, and that might not be the goal. In other cases, like care for the elderly, maybe you want it to look humanoid because it might feel more natural. But okay, that's a very, very helpful answer.

[00:27:37] Jeannette Bohg: Yeah, I think this is a very good point, actually, that people probably attribute much more intelligence, however we want to define that, to a humanoid robot than to something like the TidyBot that we had, right? Which is just one arm. It really looks very robotic, I have to say.

[00:27:55] Russ Altman: So, to finish up in the last minute or so, what is the outlook? Where are we with this platform? And when are you going to start shipping?

[00:28:04] Jeannette Bohg: We published this on Twitter, basically, and there were lots of people asking, how much, when can I buy this? And yeah, again, we're pretty far away from having a robot that we can just literally give you and it's going to work, right?

[00:28:19] I think there's so much engineering. You can probably compare it to autonomous driving, right? You can get fairly easily to ninety percent, but then the rest is all these corner cases that you have to deal with, and that's going to be really hard. So I don't want to make a prediction of when we're going to have this. Again, I think it's going to be more special-purpose robots. And maybe a Roomba with an arm is not so far away, right?

[00:28:48] Russ Altman: I love it. I love it. And I know that in the academic world, ninety percent and cheap will lead to a lot of innovation. 

[00:28:56] Jeannette Bohg: Right. That's the other point: when is it affordable? Nobody's going to buy a robot that costs as much as a luxury car, right?

[00:29:04] Russ Altman: Right. 

[00:29:05] Jeannette Bohg: That can't even do anything really well. 

[00:29:07] Russ Altman: Right. 

[00:29:08] Thanks to Jeannette Bohg. That was The Future of Robotics. 

[00:29:11] Thanks for tuning in to this episode, too. With more than 250 episodes in our archives, you have instant access to a whole range of fascinating conversations with me and other people. If you're enjoying the show, please consider sharing it with friends, family, and colleagues. Personal recommendations are the best way to spread the news about The Future of Everything. You can connect with me on X or Twitter, @RBAltman. And you can connect with Stanford Engineering @StanfordENG.

