Keynote Speaker
Dezhen Song
Biography: Dezhen Song is a Professor and Deputy Chair of the Department of Robotics at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE, and was previously a Professor and Associate Department Head of the Department of Computer Science and Engineering at Texas A&M University, College Station, Texas, USA. Song received his Ph.D. from the University of California, Berkeley, in 2004, and his M.S. and B.S. from Zhejiang University in 1998 and 1995, respectively. His primary research areas are robot perception, networked robots, visual navigation, automation, and stochastic modeling. From 2008 to 2012, Song was an Associate Editor of the IEEE Transactions on Robotics (T-RO), and from 2010 to 2014 he was an Associate Editor of the IEEE Transactions on Automation Science and Engineering (T-ASE). He was a Senior Editor of the IEEE Robotics and Automation Letters (RA-L) from 2017 to 2021 and is currently a Senior Editor of T-ASE. He is also a multimedia editor and chapter author for the Springer Handbook of Robotics. His research has resulted in one monograph and more than 150 refereed conference and journal publications. Dr. Song received the NSF Faculty Early Career Development (CAREER) Award in 2007, the Kayamori Best Paper Award at the 2005 IEEE International Conference on Robotics and Automation (ICRA), the Best Paper Award of the LCT 2022 Affiliated Conference, first place in the GM/SAE AutoDrive Challenge II competition in 2022, and an Amazon Research Award in 2020.
Speech Information
Title: Toward Robotic Weed Removal
Abstract: In this talk, I will report on our recent progress in developing algorithms and systems for robotic weed removal in precision agriculture. Weed removal is a perennial problem in agriculture: the task is labor intensive when done by hand and carries a large environmental impact when herbicides are used. Robots can address both issues and enable an environmentally friendly approach to weed management. Here, we present three case studies. In the first, we developed a new deep learning method to effectively distinguish nutsedge weeds from background turf grass. The challenge is to reduce human data-labeling costs; we address it by designing a new network and features and by incorporating synthetic data, which together enable network training from a small amount of data with low labeling effort. In the second, we discuss the design of a robotic micro-volume weed-spraying system and the motion planning algorithms that allow it to suppress weeds in their early growth stage. If time permits, I will also discuss our latest progress in weed flaming using a mobile manipulator built from a Spot quadruped robot and a 6-degree-of-freedom (DoF) robotic arm.
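The first case study's core idea, training from a small labeled set padded with cheaper synthetic samples, can be sketched in toy form. Everything below (the feature dimensions, the synthetic-data generator, and the logistic-regression stand-in for the deep network) is an illustrative assumption, not the speaker's actual pipeline:

```python
# Toy sketch: a small hand-labeled set augmented with synthetic samples
# drawn around the real class statistics, then a simple linear classifier.
import numpy as np

rng = np.random.default_rng(0)

# Tiny "real" labeled set: 10 feature vectors per class (expensive to label).
real_weed = rng.normal(loc=1.0, scale=0.5, size=(10, 4))
real_turf = rng.normal(loc=-1.0, scale=0.5, size=(10, 4))

# Larger synthetic set generated around the real class means (cheap to make).
syn_weed = rng.normal(loc=real_weed.mean(0), scale=0.5, size=(200, 4))
syn_turf = rng.normal(loc=real_turf.mean(0), scale=0.5, size=(200, 4))

X = np.vstack([real_weed, real_turf, syn_weed, syn_turf])
y = np.array([1] * 10 + [0] * 10 + [1] * 200 + [0] * 200)

# Plain logistic regression via gradient descent stands in for the network.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted weed probability
    g = p - y                                 # gradient of the log loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is only the data mix: 20 expensive labels are stretched into a 420-sample training set by synthesizing around the labeled examples.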

Speaker: Prof. Dezhen Song
Affiliations: Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE

Speaker: Prof. Frank Allgöwer
Affiliations: University of Stuttgart, Germany
Frank Allgöwer
Biography: Frank Allgöwer is Director of the Institute for Systems Theory and Automatic Control at the University of Stuttgart in Germany and a professor in the mechanical engineering department there. His current research interests are the development of new methods for data-based control, optimization-based control, and networked control. Frank has published over 500 papers and has received several recognitions for his work, including the IFAC Outstanding Service Award, the IEEE CSS Distinguished Member Award, the State Teaching Award of the German state of Baden-Württemberg, and the Leibniz Prize of the Deutsche Forschungsgemeinschaft. Frank was President of the International Federation of Automatic Control (IFAC) for the years 2017-2020. He was an Editor of the journal Automatica from 2001 to 2015 and is an editor of the Springer Lecture Notes in Control and Information Sciences book series. He was Vice-President for Technical Activities of the IEEE Control Systems Society in 2013-2014 and served on the EUCA Council from 2001 to 2004. From 2012 until 2020, Frank also served as Vice-President of the German Research Foundation (DFG), Germany's most important research funding agency.
Speech Information
Abstract: Over the past decade, Model Predictive Control (MPC) has emerged as a cornerstone methodology for advanced robotics applications, ranging from whole-body control of legged robots to autonomous driving and aggressive aerial maneuvers. Its growing impact is driven by three key capabilities: the systematic handling of hard state and input constraints, applicability to uncertain nonlinear multivariable systems, and the ability to provide formal guarantees on closed-loop performance and safety. This keynote provides an overview of modern predictive control frameworks, spanning both classical model-based MPC and emerging data-driven approaches. We first introduce the core concepts underpinning MPC, highlighting their strengths as well as the practical and theoretical challenges that arise in real-world robotic systems. We then turn to data-driven MPC methods, which leverage measured data to compute control actions with little or no explicit model information, offering new opportunities for learning-enabled and adaptive control. Through illustrative examples, including an autonomous e-scooter platform, we demonstrate the advantages and limitations of these methods in practice. The talk aims to equip the audience with both conceptual understanding and practical insights into the rapidly evolving landscape of predictive control, offering guidance for future research at the intersection of models, data, and robotics.
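As a conceptual companion to the talk, the basic MPC recipe, repeatedly solving a finite-horizon problem and applying only the first input, can be sketched for a linear system. This toy example (a 1D double integrator with assumed cost weights and a crude input clip) is an illustration of the receding-horizon idea only; practical MPC enforces constraints inside the optimization, typically with a QP solver:

```python
# Minimal receding-horizon sketch for a 1D double integrator (position,
# velocity), discretized with dt = 0.1 s. Illustrative parameters only.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])   # state cost (assumed weights)
R = np.array([[0.1]])      # input cost
N = 20                     # prediction horizon

def mpc_control(x):
    # Finite-horizon LQ problem solved by backward Riccati recursion;
    # the last gain computed corresponds to the first step of the horizon.
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -float((K @ x)[0])

x = np.array([1.0, 0.0])   # start 1 m away from the target at rest
for _ in range(100):
    u = np.clip(mpc_control(x), -2.0, 2.0)  # crude post-hoc input saturation
    x = A @ x + B.flatten() * u             # apply first input, re-plan
print(f"final position error: {abs(x[0]):.4f} m")
```

Note the structure of the loop: plan over a horizon, execute one step, then re-plan from the new state; this re-planning is what distinguishes MPC from a one-shot open-loop trajectory optimization.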
Alessandra Sciutti
Biography: Alessandra Sciutti is a Tenure Track Researcher and head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) Unit of the Italian Institute of Technology (IIT). She received her B.S. and M.S. degrees in Bioengineering and her Ph.D. in Humanoid Technologies from the University of Genova in 2010. After two research periods in the USA and Japan, in 2018, she was awarded the ERC Starting Grant wHiSPER (www.whisperproject.eu), focused on the investigation of joint perception between humans and robots. She has published more than 100 papers and abstracts in international journals and conferences, coordinates the ERC POC Project ARIEL (Assessing Children Manipulation and Exploration Skills), and has participated in the coordination of the CODEFROR European IRSES project (https://www.codefror.eu/). She is currently Chief Editor of the HRI Section of Frontiers in Robotics and AI and Associate Editor for several journals, including the International Journal of Social Robotics, the IEEE Transactions on Cognitive and Developmental Systems, and Cognitive System Research. She is an ELLIS scholar and the corresponding co-chair of the IEEE RAS Technical Committee for Cognitive Robotics. Her research aims to investigate the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction.
Speech Information
Title: Bio-Inspired Cognitive Architecture for Adaptive Human–Robot Interaction
Abstract: Effective human–robot collaboration requires more than high task performance; it depends on alignment between human and robotic cognition. Human interaction relies heavily on non-verbal cues such as gaze, motion dynamics, and timing to support anticipation and coordination. Drawing on these principles, robotic systems can be designed to perceive and express meaning in ways that are compatible with human expectations. Embedding embodied interaction mechanisms within cognitive architectures grounded in memory and motivation enables robots to adapt to different partners and contexts over time. Bio-inspired approaches that integrate perception, action, memory, motivation, and prospection support learning from experience, anticipating others’ behavior, and generating socially meaningful responses. This perspective moves human–robot interaction beyond reactive control toward adaptive, cognitively grounded collaboration, contributing to robots that can co-adapt with humans through shared experience and operate as intuitive and trustworthy partners.

Speaker: Prof. Alessandra Sciutti
Affiliations: Istituto Italiano di Tecnologia, Italy

Speaker: Prof. Cagatay Basdogan
Affiliations: Koc University, Turkey
Cagatay Basdogan
Biography: Prof. Basdogan has been a faculty member in the College of Engineering at Koc University since 2002. Before joining Koc University, he was a senior member of the technical staff in the Information and Computer Science Division of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology (Caltech) from 1999 to 2002. At JPL, he worked on the 3D reconstruction of Martian models from stereo images captured by a rover and on their haptic visualization on Earth. He moved to JPL from the Massachusetts Institute of Technology (MIT), where he was a research scientist and principal investigator at the MIT Research Laboratory of Electronics and a member of the MIT Touch Lab from 1996 to 1999. At MIT, he was involved in the development of algorithms that enable a user to touch and feel virtual objects through a haptic device (a force-reflecting robotic arm). He received his Ph.D. from Southern Methodist University in 1994 and worked on medical simulation and robotics for Musculographics Inc. at the Northwestern University Research Park for two years before moving to MIT. Prof. Basdogan conducts research and development in the areas of human-machine interfaces, control systems, robotics, mechatronics, human-robot interaction, biomechanics, computer graphics, and virtual reality technology. In particular, he is known for his work in human and machine haptics (the sense of touch), with applications to medical robotics and simulation, robotic path planning, micro/nano/optical tele-manipulation, human-robot interaction, molecular docking, information visualization, and human perception and cognition. In addition to serving on the program and organizing committees of several conferences and journals, he chaired the IEEE World Haptics Conference in 2011.
Speech Information
Abstract: As artificial intelligence techniques become more sophisticated, we anticipate that robots collaborating with humans will develop their own intentions, leading to potential conflicts during interaction. This development calls for advanced conflict-resolution strategies in physical human–robot interaction (pHRI), a key focus of our research. We use a machine learning (ML) classifier, trained on haptic (force) data alone, to detect conflicts during co-manipulation tasks and then adapt the robot’s behavior accordingly through an admittance controller. In our approach, we focus on two classes of interaction, “harmonious” and “conflicting.” The interaction is harmonious when the human and the robot work in concert to transport an object toward a shared target, and conflicting when the human changes the manipulation plan, e.g., due to a change in the direction of movement or the parking location of the object.
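The pipeline sketched in the abstract, detect conflict from force data and retune an admittance controller, can be illustrated with a toy simulation. The threshold rule below is a hypothetical stand-in for the learned classifier, and all parameter values are assumptions for illustration, not the authors' implementation:

```python
# Toy 1D admittance control loop whose damping is raised when the measured
# human force opposes the robot's current motion (a proxy for "conflict").
import numpy as np

m, dt = 2.0, 0.01           # virtual mass (kg), control time step (s)
d_low, d_high = 5.0, 40.0   # damping for harmonious vs conflicting phases

def classify(force, velocity):
    # Stand-in for the ML classifier trained on force data: label the
    # interaction "conflicting" when the human pushes against the motion.
    return "conflicting" if force * velocity < -0.1 else "harmonious"

v, log = 0.0, []
for step in range(300):
    f_human = 4.0 if step < 150 else -4.0   # human reverses intent mid-task
    label = classify(f_human, v)
    d = d_high if label == "conflicting" else d_low
    v += dt * (f_human - d * v) / m         # admittance: m*dv/dt = f - d*v
    log.append(label)
print(log[0], log[151], sep=", ")
```

Raising the damping makes the robot stiffer to move during the detected conflict, yielding a cautious response until the motion realigns with the human's new intent and the low-damping, compliant behavior resumes.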
