Plenary Speakers
Hesheng Wang, Professor, Shanghai Jiao Tong University, China
Vision-Based Robot Localization, Navigation, and Control
Biography: Hesheng Wang received the Ph.D. degree from The Chinese University of Hong Kong. Currently, he is a Distinguished Professor at Shanghai Jiao Tong University, China. He has published more than 200 papers in refereed journals and conference proceedings. He is an Associate Editor of IEEE Transactions on Automation Science and Engineering, IEEE Robotics and Automation Letters, Assembly Automation, and the International Journal of Humanoid Robotics, and a Technical Editor of IEEE/ASME Transactions on Mechatronics. He served as an Associate Editor of IEEE Transactions on Robotics from 2015 to 2019. He was the General Chair of IEEE RCAR 2016 and IEEE ROBIO 2022, and Program Chair of IEEE AIM 2019 and IEEE ROBIO 2014. He was a recipient of the National Science Fund for Outstanding Young Scholars in 2017, the Shanghai Shuguang Scholar award in 2019, and the National Science Fund for Distinguished Young Scholars in 2022. He will be the General Chair of IROS 2025.
Abstract: This talk focuses on the two core capabilities of service robots: mobility and manipulation. It first surveys the current state of the service robot industry and its technological development, along with the challenges it faces. It then introduces the key achievements of the team's long-term work on the core technical challenges of mobility and manipulation. To tackle velocity-perception and localization failures caused by the complex dynamics of non-holonomic mobile robots and by dynamic environmental interference, a computational method is proposed that integrates attention-based back-end optimization with explicit occlusion handling, achieving robust visual perception and localization of mobile robots in complex, large-scale scenes. To address the tendency of traditional calibrated control algorithms to fail in uncalibrated environments, an adaptive visual servoing framework is developed that relies entirely on visual feedback, without prior environmental information, solving the problem of high-precision robotic operation without calibration. Together these results establish a practical and versatile vision-based framework that raises the core technological level of service robots.
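As background for the visual servoing theme above (the classical formulation that uncalibrated, adaptive schemes build on, not the speaker's specific method), the standard image-based visual servoing law commands a camera velocity proportional to the pseudoinverse of the interaction matrix applied to the image-feature error. A minimal sketch, with the textbook interaction matrix of a single normalized image point:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical IBVS control law: v = -gain * pinv(L) @ (s - s_star).

    s, s_star : current and desired image-feature vectors
    L         : interaction (image Jacobian) matrix mapping camera twist
                to feature velocities
    Returns a 6-DoF camera velocity command (vx, vy, vz, wx, wy, wz).
    """
    e = np.asarray(s, float) - np.asarray(s_star, float)
    return -gain * np.linalg.pinv(L) @ e

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])
```

With a well-estimated interaction matrix this law drives the feature error down exponentially; uncalibrated frameworks like the one in the talk instead estimate the mapping online from visual feedback alone.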
Huajin Tang, Professor, Zhejiang University, China
Neuromorphic Cognitive Computing to Build Robotic Brain
Biography: Huajin Tang received the Ph.D. degree from the National University of Singapore in 2005. He was an R&D Engineer with STMicroelectronics, Singapore, from 2004 to 2006, and from 2006 a Postdoctoral Fellow with the Queensland Brain Institute, University of Queensland, Australia. From 2008 to 2015 he was the Lab Head of Robotic Cognition at the Institute for Infocomm Research (A*STAR), Singapore. He then served as a Professor and Director of the Neuromorphic Computing Research Center, Sichuan University, China. Currently he is a Professor with the College of Computer Science and Technology, Zhejiang University. His research on Brain GPS was reported by MIT Technology Review and Communications of the ACM, among others. He received the 2016 Outstanding IEEE TNNLS Paper Award and the 2019 Outstanding IEEE CIM Paper Award. Prof. Tang is the Editor-in-Chief of IEEE Transactions on Cognitive and Developmental Systems (TCDS) and a member of the Board of Governors of the International Neural Network Society (INNS).
Abstract: Mimicking the computation, learning, and decision-making of biological brains is a key challenge for brain science and brain-inspired intelligence research. There are various efforts to model biological brains and to transfer these modeling methods into embodied robotic intelligence. We hypothesize that the crucial steps are to emulate the computational substrates and the perception, learning, and decision-making capabilities that emerge from neural modalities. Neuromorphic cognitive computing is a new theme in computing technology that aims for brain-like computing efficiency and intelligence, and it holds high potential for building a robotic brain with computational advantages analogous to those of biological brains. This talk will cover recent research progress, including spike-based learning, sensory processing, and our efforts to build a robotic brain.
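To make the spike-based computation mentioned above concrete, the leaky integrate-and-fire (LIF) neuron is the standard building block of the spiking networks used in neuromorphic systems. A minimal discrete-time simulation (a generic textbook model, not the speaker's specific architecture):

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential leaks toward v_rest, integrates the input
    current, and emits a spike (then resets) on crossing v_thresh.
    Returns the membrane trace and the list of spike time steps.
    """
    v = v_rest
    spikes, trace = [], []
    for t, I in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + I)   # leaky integration
        if v >= v_thresh:                      # threshold crossing
            spikes.append(t)
            v = v_reset                        # reset after the spike
        trace.append(v)
    return np.array(trace), spikes
```

Driven by a constant supra-threshold current, the neuron fires periodically; spike-based learning rules then operate on these discrete spike times rather than on continuous activations.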
Steven Hartley Collins, Professor, Stanford University, US
Designing Exoskeletons and Prosthetic Limbs that Enhance Human Locomotor Performance
Biography: Steven H. Collins is an Associate Professor of Mechanical Engineering and, by courtesy, of Bioengineering at Stanford University, where he directs the Stanford Biomechatronics Laboratory. His research focuses on speeding and systematizing the design of prostheses and exoskeletons using versatile emulator hardware (Zhang et al., 2017, Science) and algorithms for human-in-the-loop optimization (Slade et al., 2022, Nature). Another interest is efficient autonomous devices, such as passive-dynamic walking robots (Collins et al., 2005, Science) and unpowered exoskeletons (Collins et al., 2015, Nature). Prof. Collins received his B.S. in Mechanical Engineering in 2002 from Cornell University, where he performed research on passive dynamics with Andy Ruina, and his Ph.D. in Mechanical Engineering in 2008 from the University of Michigan, where he performed research on the biomechanics of human walking with Art Kuo. He conducted postdoctoral research on humanoid robots with Martijn Wisse at TU Delft in the Netherlands, and was a Professor of Mechanical Engineering and Robotics at Carnegie Mellon University before joining Stanford in 2017. Prof. Collins teaches courses on design and robotics and is the Faculty Director of making@stanford. He is a member of the boards of Dynamic Walking and Science Robotics. He has received the Young Scientist Award from the American Society of Biomechanics and the Best Medical Devices Paper Award from the International Conference on Robotics and Automation. His teaching has been recognized with student-voted awards, including the Tau Beta Pi Teaching Honor Roll and Professor of the Year in his department.
Abstract: Exoskeletons and active prosthetic limbs could improve mobility for tens of millions of people, but two serious challenges must first be overcome: we need ways of identifying what a device should do to benefit an individual user, and we need cheap, efficient hardware that can do it. In this talk, we will describe an approach to the design of wearable robots based on versatile emulator systems and algorithms that automatically customize assistance, which we call human-in-the-loop optimization. We will discuss recent successes of the approach, including large improvements to the energy economy and speed of walking and running through optimized exoskeleton assistance, in both laboratory and real-world conditions. We will also discuss the design of exoskeletons that use no energy themselves yet reduce the energy cost of human walking, and ultra-efficient electroadhesive actuators that could make wearable robots substantially cheaper and more effective.
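Human-in-the-loop optimization, as described above, treats the person-plus-device system as a black box: propose assistance parameters, measure the human's response, and keep changes that help. A toy sketch, with a simple (1+1) evolution strategy standing in for the laboratory optimizer and a quadratic proxy in place of a real metabolic-cost measurement (both are illustrative assumptions, not the lab's actual algorithm):

```python
import random

def optimize_assistance(measure_cost, x0, sigma=0.2, iters=50, seed=0):
    """(1+1) evolution strategy: perturb the current best assistance
    parameters and keep the candidate only if the measured cost improves."""
    rng = random.Random(seed)
    best_x, best_cost = list(x0), measure_cost(x0)
    for _ in range(iters):
        candidate = [x + rng.gauss(0, sigma) for x in best_x]
        cost = measure_cost(candidate)
        if cost < best_cost:
            best_x, best_cost = candidate, cost
    return best_x, best_cost

# Stand-in for a metabolic measurement: quadratic bowl with its minimum
# at a hypothetical optimal (peak torque, timing) = (0.6, 0.3).
proxy_cost = lambda x: (x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2
```

In the laboratory, each cost evaluation is minutes of respirometry data from a person walking with the device, so sample-efficient optimizers (e.g., CMA-ES) are used rather than this naive loop, but the propose-measure-update structure is the same.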
Xinge Yu, Professor, City University of Hong Kong, China
Skin-Integrated Electronics as Human Machine Interface for VR/AR
Biography: Xinge Yu is currently an Associate Professor of Biomedical Engineering at City University of Hong Kong (CityU), Associate Director of the Hong Kong Centre for Cerebro-cardiovascular Health Engineering, and Associate Director of the CAS-CityU Joint Lab on Robotics. His honors include the Hong Kong RGC Research Fellowship, the NSFC Excellent Young Scientist Grant (Hong Kong & Macao), Innovators Under 35 China (MIT Technology Review), New Innovator of IEEE NanoMed, the MINE Young Scientist Award, a Gold Medal at the International Exhibition of Inventions of Geneva, the CityU Outstanding Research Award, and inclusion in Stanford's 2022 list of the top 2% most highly cited scientists. His research group focuses on skin-integrated electronics and systems for VR and biomedical applications. He serves as an Associate Editor of Microsystems & Nanoengineering and the IEEE Open Journal of Nanotechnology, and on the editorial boards of 14 journals, including Soft Science and Materials Today Physics. He has published 160 papers in Nature, Nature Materials, Nature Biomedical Engineering, Nature Machine Intelligence, Nature Communications, Science Advances, and other journals.
Abstract: Skin-integrated electronics have attracted great attention owing to their soft, lightweight, ultrathin, and stretchable/bendable architecture, and thus have potential applications in many areas, especially biomedical engineering. By engineering materials processing and device integration, the mechanical properties of flexible electronics can be matched to soft biological tissue, enabling the measurement of biosignals and the monitoring of human health. This talk will present materials, device structures, power-delivery strategies, and communication schemes as the basis for novel soft bio-integrated electronics. For instance, we will discuss a wireless, battery-free platform of electronic systems and haptic interfaces capable of softly laminating onto the skin to communicate information via spatio-temporally programmable patterns of localized mechanical vibrations. The resulting technology, which we refer to as epidermal VR, creates many opportunities in which the skin provides an electronically programmable communication and sensory input channel to the body, as demonstrated through example applications in social media/personal engagement, prosthetic control/feedback, and gaming/entertainment.
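To give a software-level intuition for "spatio-temporally programmable patterns of localized vibrations," here is a toy sketch (our own illustration, not the group's actual device firmware) that renders a stroke path as a sequence of amplitude frames for a small grid of haptic actuators:

```python
import numpy as np

def vibration_frames(path, grid=(4, 4), steps_per_point=3, amp=1.0):
    """Render a stroke path as spatio-temporal amplitude frames.

    path : list of (row, col) actuator positions traversed in order.
    Each frame drives exactly one actuator, so playing the frames in
    sequence produces a 'moving touch' across the skin.
    """
    frames = []
    for (r, c) in path:
        frame = np.zeros(grid)
        frame[r, c] = amp                      # localized vibration site
        frames.extend([frame] * steps_per_point)  # dwell time per site
    return frames
```

A real epidermal interface would additionally shape per-actuator frequency and amplitude envelopes and stream the frames wirelessly, but the core idea of a programmable space-time amplitude map is the same.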
C. L. Philip Chen, Dean, School of Computer Science and Engineering, South China University of Technology, China
Fuzzy Broad Learning (Neuro) Systems (FBLS): Explainability and Analysis on the Tradeoff between Accuracy and Complexity
Biography: C. L. Philip Chen is the Chair Professor and Dean of the School of Computer Science and Engineering, South China University of Technology. He is a Fellow of the IEEE, AAAS, IAPR, CAA, CAAI, and HKIE; a member of the Academia Europaea (AE); and a member of the European Academy of Sciences and Arts (EASA). He received the IEEE Norbert Wiener Award in 2018 for his contributions to systems and cybernetics and to machine learning, the IEEE Joseph G. Wohl Outstanding Career Award, and the Wu Wenjun (吴文俊) Outstanding Contribution Award from the Chinese AI Association, and he twice received the best transactions paper award from the IEEE Transactions on Neural Networks and Learning Systems, for papers in 2014 and 2018. He has been named a Highly Cited Researcher by Clarivate Analytics from 2018 to 2022. His current research interests include cybernetics, systems, and computational intelligence. He was the Editor-in-Chief of the IEEE Transactions on Cybernetics, the Editor-in-Chief of the IEEE Transactions on Systems, Man, and Cybernetics: Systems, and the President of the IEEE Systems, Man, and Cybernetics Society.
Abstract: The fuzzy broad learning system (FBLS) is a recently proposed neuro-fuzzy model that shares a similar structure with the broad learning system (BLS). It shows high accuracy in both classification and regression tasks and inherits the fast computation of a BLS. However, the ensemble of several fuzzy subsystems in an FBLS reduces the interpretability of the model, since fuzzy rules from different fuzzy systems are difficult to combine while maintaining consistency. To balance model accuracy against complexity, this talk discusses a synthetically simplified FBLS with better interpretability, named the compact FBLS (CFBLS), which generates far fewer, more explainable fuzzy rules. Only one traditional Takagi–Sugeno–Kang (TSK) fuzzy system is employed in the feature layer of a CFBLS, and the input universe of discourse is equally partitioned to obtain fuzzy sets with appropriate linguistic labels. A random feature-selection matrix and a rule-combination matrix are employed to reduce the total number of fuzzy rules and to avoid the "curse of dimensionality." Experiments on popular datasets indicate that the CFBLS can generate a smaller set of comprehensible fuzzy rules while achieving higher accuracy than several state-of-the-art neuro-fuzzy models. The advantage of CFBLS is also verified in a real-world application.
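For readers unfamiliar with the Takagi–Sugeno–Kang systems at the heart of the FBLS, a first-order TSK inference step computes each rule's firing strength from Gaussian memberships and blends the rules' linear consequents. A minimal sketch of this generic textbook formulation (not the CFBLS itself):

```python
import numpy as np

def tsk_infer(x, centers, sigmas, coeffs):
    """First-order Takagi-Sugeno-Kang fuzzy inference.

    centers, sigmas : (rules, dims) Gaussian membership parameters
    coeffs          : (rules, dims + 1) linear consequents [a_r | b_r]
    Output is the firing-strength-weighted average of y_r = a_r . x + b_r.
    """
    x = np.asarray(x, float)
    # rule firing strengths: product t-norm of per-dimension memberships
    w = np.exp(-((x - centers) ** 2) / (2 * sigmas ** 2)).prod(axis=1)
    # linear rule consequents
    y = coeffs[:, :-1] @ x + coeffs[:, -1]
    return float((w * y).sum() / w.sum())
```

The CFBLS described in the talk keeps this inference structure but constrains how rules are generated (one fuzzy system, equally partitioned inputs, random feature-selection and rule-combination matrices) so the surviving rules remain few and readable.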