The Eleventh International Workshop on the
Algorithmic Foundations of Robotics
3-5 August 2014, Boğaziçi University, İstanbul, Turkey
Vijay Kumar
University of Pennsylvania
Autonomous micro aerial robots can operate in three-dimensional, indoor and outdoor environments, with applications to search and rescue, first response, and precision farming. I will describe the challenges in developing small, agile robots and the algorithmic challenges in the areas of (a) control and planning, (b) state estimation and mapping, and (c) coordinating large teams of robots.
Vijay Kumar is the UPS Foundation Professor with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering.
Kumar's group works on creating autonomous ground and aerial robots, designing bio-inspired algorithms for collective behaviors, and studying robot swarms. The group has won many best paper awards at conferences, and its alumni are leaders in teaching, research, business, and entrepreneurship. Kumar is a fellow of ASME and IEEE and a member of the National Academy of Engineering.
Vijay Kumar has held many administrative positions in the School of Engineering and Applied Science, including director of the GRASP Laboratory, chair of Mechanical Engineering and Applied Mechanics, and Deputy Dean. He served as the assistant director for robotics and cyber-physical systems at the White House Office of Science and Technology Policy.
Cagatay Basdogan, Ph.D.
Robotics and Mechatronics Laboratory (http://rml.ku.edu.tr/)
Koc University, Istanbul, 34450
Even though robots can be programmed to share control with human operators in order to increase task performance, the interaction in such systems is still artificial compared to natural human-human cooperation. In complex tasks, cooperating human partners may have their own agendas and take initiatives during the task. Such initiatives contribute to a richer interaction between the cooperating parties, yet little research exists on how this can be established between a human and a robot. In a cooperation involving haptics, the coupling between the human and the robot should be defined such that the robot can understand the intentions of the human operator and respond accordingly. In this regard, we suggest (1) a role exchange mechanism that is activated based on the magnitude of the force applied by the cooperating parties and (2) a negotiation model that enables a more human-like coupling between them. We argue that, when presented through the haptic channel, the proposed role exchange mechanism and negotiation model allow the cooperating parties to communicate dynamically, naturally, and seamlessly, in addition to improving the task efficiency of the user.
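A force-magnitude-triggered role exchange of the kind described above can be sketched in a few lines. This is a toy illustration only: the threshold value, function names, and switching rule are hypothetical assumptions for exposition, not the authors' actual model, which also involves a negotiation component.

```python
import math

# Hypothetical activation level in newtons; the real mechanism's
# threshold and blending scheme are not specified here.
FORCE_THRESHOLD = 5.0

def control_authority(human_force, leader="robot"):
    """Decide which party leads the cooperative task.

    The human takes the initiative (role exchange) when the magnitude
    of the force they apply exceeds the threshold; otherwise the
    current leader retains control.
    """
    magnitude = math.hypot(*human_force)  # Euclidean norm of the 2D force
    if magnitude > FORCE_THRESHOLD:
        return "human"
    return leader
```

In a real haptic loop this decision would run at the servo rate and would typically blend authority smoothly rather than switch discretely, to avoid abrupt force transients felt by the operator.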
Dr. Basdogan has been a faculty member in the College of Engineering at Koc University since 2002. Before joining Koc University, he was a senior member of technical staff in the Information and Computer Science Division of the NASA Jet Propulsion Laboratory (JPL), California Institute of Technology, from 1999 to 2002. He moved to JPL from the Massachusetts Institute of Technology, where he was a research scientist and principal investigator at the MIT Research Laboratory of Electronics and a member of the MIT Touch Lab from 1996 to 1999. He received his Ph.D. degree from Southern Methodist University in 1994 and worked with Musculographics Inc. at the Northwestern University Research Park for two years before moving to MIT. Dr. Basdogan conducts research and development in the areas of human-machine interfaces, control systems, mechatronics, biomechanics, computer graphics, and virtual reality technology. He is currently an Associate Editor-in-Chief of the IEEE Transactions on Haptics and an Associate Editor of Computer Animation and Virtual Worlds. He chaired the IEEE World Haptics Conference held in Istanbul in 2011.
Oussama Khatib
Artificial Intelligence Laboratory
Department of Computer Science
Stanford University
Exploring, working, and interacting with humans, the new generation of robots being developed will increasingly touch people and their lives, in homes and workplaces, in challenging field domains and new production systems. These emerging robots will provide support in services, health care, manufacturing, entertainment, education, assistance, and intervention. While full autonomy for the performance of advanced tasks in complex environments remains challenging, the simple intervention of a human would tremendously facilitate reliable real-time robot operations. Two basic modalities are being conceived: haptically mediated interaction and direct physical contact. Human-robot interaction greatly benefits from combining the experience and cognitive abilities of the human with the strength, dependability, competence, reach, and endurance of robots. Moving beyond conventional teleoperation, the new paradigm places the human at the highest level of task abstraction, relying on highly skilled robots with the requisite competence for advanced task behavior capabilities. The discussion focuses on robot design concepts, robot perception and control architectures, and task strategies that bring human modeling, motion, and skill understanding to the development of safe, easy-to-use, and competent robotic systems. The presentation will include live, hands-on illustrative instances of human-robot interaction in various robotic applications. In particular, it will highlight interactions with a novel underwater robot, developed in collaboration between Stanford, Meka Robotics, and KAUST. The motivation for this robot is to help marine biologists safely explore the Red Sea’s fragile and previously inaccessible underwater environment. Live interactions will illustrate how bimanual haptic devices can be used to interact with the entire robot.
A 3D graphic and haptic interface allows non-expert users to operate the robot intuitively while feeling contact forces during dexterous tasks. While the operator focuses fully on the robot’s task, the robot controller autonomously handles constraints, multiple contacts, disturbances, obstacles, and robot posture, so that the task can be performed optimally in the deep sea. This robot illustrates a paradigm that is also emerging in other challenging areas of underwater robotics, such as archeology, inspection, and maintenance of pipelines and other structures. Connecting humans to increasingly competent robots will certainly fuel a wide range of new robotic applications in challenging environments.
Oussama Khatib received his Doctorate degree in Electrical Engineering from Sup’Aero, Toulouse, France, in 1980. He is Professor of Computer Science at Stanford University. His work on advanced robotics focuses on methodologies and technologies in human-centered robotics, including humanoid control architectures, human motion synthesis, interactive dynamic simulation, haptics, and human-friendly robot design. He is Co-Editor of the Springer Tracts in Advanced Robotics series, and has served on the editorial boards of several journals as well as the Chair or Co-Chair of numerous international conferences. He co-edited the Springer Handbook of Robotics, which received the PROSE Award. He is a Fellow of IEEE and has served as a Distinguished Lecturer. He is the President of the International Foundation of Robotics Research (IFRR). Professor Khatib is a recipient of the Japan Robot Association (JARA) Award in Research and Development. In 2010 he received the IEEE RAS Pioneer Award in Robotics and Automation for his fundamental pioneering contributions in robotics research, visionary leadership, and life-long commitment to the field. He received the 2013 IEEE RAS Distinguished Service Award in recognition of his vision and leadership for the Robotics and Automation Society, in establishing and sustaining conferences in robotics and related areas, publishing influential monographs and handbooks, and training and mentoring the next generation of leaders in robotics education and research. In 2014, he received the IEEE RAS George Saridis Leadership Award in Robotics and Automation.
Dan Halperin
School of Computer Science
Tel Aviv University
The Minkowski sum of two sets P and Q in Euclidean space is the set obtained by adding every point of P to every point of Q. Minkowski sums constitute a fundamental tool in geometric computing, used in a large variety of domains including motion planning (the Piano Movers problem), solid modeling, assembly planning, 3D printing, and many more. At the same time, they are an inexhaustible source of intriguing mathematical and computational problems. We survey results on the structure, complexity, algorithms, and implementation of Minkowski sums in two and three dimensions. We also describe how Minkowski sums are used to solve problems in an array of applications, primarily in robotics and automation.
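The definition above, P ⊕ Q = { p + q : p ∈ P, q ∈ Q }, can be made concrete with a brute-force sketch over finite planar point sets. The function name is illustrative; practical implementations (e.g. in CGAL, mentioned below) exploit convexity and convolution structure rather than enumerating all pairs.

```python
def minkowski_sum(P, Q):
    """Minkowski sum of two finite planar point sets:
    the set of all pairwise vector sums p + q."""
    return {(px + qx, py + qy) for (px, py) in P for (qx, qy) in Q}

# Sliding a unit square along a horizontal segment of length 2:
# the vertices of the resulting 3x1 rectangle appear among the
# |P| * |Q| candidate sums computed above.
square = {(0, 0), (1, 0), (0, 1), (1, 1)}
segment = {(0, 0), (2, 0)}
print(sorted(minkowski_sum(square, segment)))
```

For convex polygons with m and n vertices the Minkowski sum is a convex polygon with at most m + n vertices and can be computed in O(m + n) time by merging edge sequences by angle, which is why convex decomposition is a standard preprocessing step for the non-convex case.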
Dan Halperin received his Ph.D. in Computer Science from Tel Aviv University. He then spent three years at the Computer Science Robotics Laboratory at Stanford University. In 1996 he joined the Department of Computer Science at Tel Aviv University, where he is currently a full professor and where he served as department chair for two years. Halperin's main field of research is computational geometry and its applications. A major focus of his work has been the research and development of robust geometric software, principally as part of the CGAL project and library. The application areas he is interested in include robotics, automated manufacturing, and algorithmic motion planning. http://acg.cs.tau.ac.il/danhalperin