By Ming Liu | May 23, 2016
Ground and Aerial Robots for Challenging Environments
Time & Place: 10 July 2017
Abstract: Disaster response operations and industrial inspections are among the most rewarding but also most challenging tasks for autonomous mobile robots. While robots already do a wonderful job as factory work-horses or floor-cleaning devices, operations in highly unstructured and unknown environments, such as those typically encountered after disasters, in mines or on offshore oil rigs, remain a major challenge. In this talk, our latest research results in rolling, legged and flying robot systems designed to operate in complex environments are presented and discussed. Our electrically powered legged quadruped robots are designed for high agility, efficiency and robustness in rough terrain, realized through an optimal exploitation of the natural dynamics and series elastic actuation. Equipped with laser scanners and cameras, our quadrupeds StarlETH and ANYmal are able to autonomously find their path through rough terrain, climb stairs and build a 3D map of their environment. For fast inspection of complex environments, flying robots are probably the most efficient and versatile devices. However, the limited payload and computing power of multi-copters makes autonomous navigation quite challenging. Thanks to our custom-designed visual-inertial sensor, real-time on-board localization, mapping and planning has become feasible, enabling our multi-copters to perform advanced rescue and inspection tasks even in GPS-denied environments. Overcoming the limited power autonomy and flight range of multi-copters is the main focus of our research in unmanned solar airplanes, omnidirectional blimps and hybrid systems. Our most recent design of a fixed-wing solar airplane with a 5.6 m wing span allows for unlimited flight duration, thus enabling search and rescue from the air over large environments. Thanks to on-board visual sensing, these solar airplanes are also capable of flying very close to the ground and planning their path around obstacles.
Prof. Dr. Roland Siegwart
Roland Siegwart (born in 1959) is a professor for autonomous mobile robots at ETH Zurich, founding co-director of the Wyss Zurich and a member of the board of directors of multiple high-tech companies. He studied mechanical engineering at ETH, established a spin-off company, spent ten years as a professor at EPFL Lausanne (1996–2006), was vice president for research and corporate relations of ETH Zurich (2010–2014) and held visiting positions at Stanford University and NASA Ames. He is and was the coordinator of multiple European projects and co-founder of half a dozen spin-off companies. He is an IEEE Fellow, a recipient of the IEEE RAS Inaba Technical Award and an officer of the International Foundation of Robotics Research (IFRR). He serves on the editorial boards of multiple journals in robotics and was a general chair of several conferences in robotics including IROS 2002, AIM 2007, FSR 2007 and ISRR 2009. His interests are in the design and navigation of wheeled, walking and flying robots operating in complex and highly dynamic environments.
Talk #2 Title
Time & Place: TBD
Prof. Dr. Hong Qiao
Dr. Hong Qiao is a professor at the Institute of Automation, Chinese Academy of Sciences (CASIA). In 2004, Prof. Qiao gave up a tenured position and returned to China under a grant of the “100 Talents” Program of the Chinese Academy of Sciences. After her return, she founded the “Robotic Theory and Application” group at CASIA. In 2007, she was supported by the National Science Fund for Distinguished Young Scholars. Currently she is a Core Expert of the CAS Centre for Excellence in Brain Science and Intelligence Technology (CEBSIT), the Vice Director of the Joint Lab between the Institute of Automation and the University of Science and Technology of China, and the Deputy Director & Principal Investigator of Neuro-robotics at the Research Centre for Brain-inspired Intelligence, Institute of Automation, CAS.
Vision-based robot control
Time & Place: 10 July 2017
Abstract: Vision is an important sensory channel through which humans move and act, so it is highly desirable to use visual information for the feedback control of robots. To implement a vision-based controller, an important step is to calibrate the intrinsic and extrinsic parameters of the camera, and the accuracy of this calibration significantly affects the control errors. To reduce this dependence on calibration, it is desirable to use uncalibrated visual signals directly in controller design. By directly incorporating visual feedback in the dynamic control loop, it is possible to enhance the system's stability and control performance. This talk presents various vision-based robot control methods that take the nonlinear dynamics of the robot manipulator into account. These methods have been implemented on many robot systems, including manipulators, mobile robots, soft robots and quadrotors.
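As background on the intrinsic and extrinsic parameters the abstract refers to, a minimal pinhole-projection sketch is shown below. All numerical values (focal length, principal point, camera pose, test point) are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Pinhole camera model: a 3D world point X projects to pixel u via
#   u ~ K [R | t] X
# where K holds the intrinsic parameters (focal lengths, principal point)
# and (R, t) are the extrinsic parameters (camera pose).

K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])          # camera at the world origin

def project(X):
    """Project a 3D world point to pixel coordinates."""
    Xc = R @ X + t                     # world frame -> camera frame
    uvw = K @ Xc                       # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]            # perspective division

# A point 2 m in front of the camera, offset 0.1 m to the right:
u = project(np.array([0.1, 0.0, 2.0]))
print(u)  # -> [360. 240.]: 0.1 m at 2 m depth shifts 40 px from the centre
```

Errors in K, R or t shift every projected pixel, which is why calibration accuracy propagates directly into the control error, and why uncalibrated visual servoing is attractive.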
Prof. Dr. Hesheng Wang
Hesheng Wang received the B.Eng. degree in Electrical Engineering from the Harbin Institute of Technology, Harbin, China, in 2002, and the M.Phil. and Ph.D. degrees in Automation & Computer-Aided Engineering from the Chinese University of Hong Kong, Hong Kong, in 2004 and 2007, respectively. From 2007 to 2009, he was a Postdoctoral Fellow and Research Assistant in the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong. He joined Shanghai Jiao Tong University as an Associate Professor in 2009. Currently, he is a Professor in the Department of Automation, Shanghai Jiao Tong University, China. He has worked as a visiting professor at the University of Zurich in Switzerland. His research interests include visual servoing, service robots, robot control and computer vision.
Prof. Wang has published over 100 papers in refereed professional journals and international conference proceedings. He has received a number of best paper awards from major international conferences in robotics and automation. He is an associate editor of Robotics and Biomimetics, Assembly Automation, the International Journal of Humanoid Robotics and IEEE Transactions on Robotics. He was a guest editor of Mathematical Problems in Engineering, the Journal of Applied Mathematics and the International Journal of Advanced Robotic Systems. He served as an associate editor on the Conference Editorial Board of the IEEE Robotics and Automation Society from 2011 to 2015. Prof. Wang is actively involved in the organization of international conferences, having served as an organizing committee member for many international conferences such as ICRA and IROS. He was the program chair of the 2014 IEEE International Conference on Robotics and Biomimetics and the general chair of the 2016 IEEE International Conference on Real-time Computing and Robotics, and is the program chair of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics. He was a recipient of the Shanghai Rising Star Award in 2014 and is a Senior Member of IEEE.
Vision control of mobile platforms
Time & Place: 11 July 2017
Abstract: This talk discusses recently developed vision-based control strategies for mobile platforms, first for ground mobile robots and later extended to unmanned aerial vehicles. More specifically, it introduces an efficient motion estimation strategy based on correspondences between two images; building on this estimate, visual servoing algorithms are designed to achieve visual regulation and tracking tasks on different mobile platforms under various scenarios. Some interesting experimental results are also included to show the advantages of the proposed algorithms.
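As general background on motion estimation from two-image correspondences, the standard tool is the epipolar constraint. The sketch below verifies it on synthetic data; the rotation, baseline and 3D point are illustrative assumptions, not the method presented in the talk:

```python
import numpy as np

# Epipolar constraint: for relative camera motion (R, t) between two views,
# normalized correspondences x1 <-> x2 of the same 3D point satisfy
#   x2^T E x1 = 0,   with essential matrix E = [t]_x R.
# Motion-estimation methods recover (R, t) by fitting E to many such pairs.

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[ 0.0,  -v[2],  v[1]],
                     [ v[2],  0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])

theta = np.deg2rad(5.0)                          # small yaw between the views
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.2])                    # baseline (scale unobservable)
E = skew(t) @ R                                  # essential matrix

# Synthesize one 3D point and its normalized projections in both views.
X1 = np.array([0.5, -0.3, 4.0])                  # point in view-1 camera frame
X2 = R @ X1 + t                                  # same point in view-2 frame
x1 = X1 / X1[2]                                  # normalized image coordinates
x2 = X2 / X2[2]

residual = x2 @ E @ x1                           # vanishes for a true match
print(abs(residual) < 1e-12)
```

In practice E is estimated from at least five point pairs (more, with RANSAC, under noise and outliers) and then decomposed into R and t up to scale, which is the kind of estimate the servoing algorithms then build on.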
Prof. Yongchun Fang
College of Computer and Control Engineering, CSE
Yongchun Fang received his doctoral degree in Electrical Engineering from Clemson University, Clemson, SC, in 2002, under the advisement of Dr. Dawson (chair) and Dr. Dixon (co-chair). From 2002 to 2003, he was a Postdoctoral Fellow at the Mechanical & Aerospace Engineering Department of Cornell University. He joined the Institute of Robotics and Automatic Information System, Nankai University, Tianjin, P. R. China in 2003, where he has been a professor ever since.