{"technology":{"slug":"robotics","name":"Robotics & Autonomous Systems","description":"Robotics research spanning manipulation, locomotion, perception, and planning. Humanoid robots, swarm robotics, surgical robots, and autonomous drones.","discipline":"Engineering / AI","icon":"🦾"},"lastUpdated":"2026-04-11T06:57:53.765Z","articleCount":15,"articles":[{"id":"oa-W2037011962","title":"Aerial manipulation robot composed of an autonomous helicopter and a 7 degrees of freedom industrial manipulator","authors":"Konstantin Kondak, Felix Huber, Marc Schwarzbach, Maximilian Laiacker, David Sommer, Manuel Béjar, Alfredo Ollero Ojeda","journal":"","pubDate":"2014-05-01","doi":"10.1109/icra.2014.6907148","abstract":"This paper is devoted to a system for aerial manipulation, composed of a helicopter and an industrial manipulator. The usage of an industrial manipulator is motivated by practical applications which were identified in different cooperation projects with the industry. We address the coupling between the manipulator and the helicopter and show that even in the case when we have an ideal controller for the manipulator and a high-performance controller for the helicopter, an unbounded energy flow can be generated by internal forces between helicopter and manipulator if both controllers are used independently. To solve this problem we propose a new kinematical coupling for control by introducing an additional manipulation DoF realized by helicopter rotation around its yaw axis. The new experimental setup and the required modifications in the manipulator controller for this purpose are described. Further, we propose a dynamical coupling which is implemented by modifying the helicopter controller to feed the interaction force/torque, measured between the manipulator base and the fuselage, directly to the actuators of the rotor blades. 
At the end, we present experimental results for aerial manipulation and their analysis.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2037011962","citationCount":141,"isOpenAccess":false,"pdfUrl":""},{"id":"oa-W1963549352","title":"An integrated system for autonomous robotics manipulation","authors":"J. Andrew Bagnell, Felipe Lira de Sá Cavalcanti, Lei Cui, Thomas Galluzzo, Martial Hebert, Moslem Kazemi, Matthew Klingensmith, Jacqueline Libby, Tian Yu Liu, Nancy S. Pollard, Mihail Pivtoraiko, Jean‐Sebastien Valois, Ranqi Zhu","journal":"","pubDate":"2012-10-01","doi":"10.1109/iros.2012.6385888","abstract":"We describe the software components of a robotics system designed to autonomously grasp objects and perform dexterous manipulation tasks with only high-level supervision. The system is centered on the tight integration of several core functionalities, including perception, planning and control, with the logical structuring of tasks driven by a Behavior Tree architecture. The advantage of the implementation is to reduce the execution time while integrating advanced algorithms for autonomous manipulation. We describe our approach to 3-D perception, real-time planning, force compliant motions, and audio processing. 
Performance results for object grasping and complex manipulation tasks are presented, from both in-house tests and an independent evaluation team.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W1963549352","citationCount":120,"isOpenAccess":true,"pdfUrl":"https://figshare.com/articles/An_Integrated_System_for_Autonomous_Robotics_Manipulation/6551918"},{"id":"oa-W2120320883","title":"Closed-Loop Behavior of an Autonomous Helicopter Equipped with a Robotic Arm for Aerial Manipulation Tasks","authors":"Konstantin Kondak, Kai Krieger, Alin Albu‐Schäffer, Marc Schwarzbach, Maximilian Laiacker, Iván Maza, Ángel Rodríguez Castaño, Anı́bal Ollero","journal":"International Journal of Advanced Robotic Systems","pubDate":"2013-01-01","doi":"10.5772/53754","abstract":"This paper is devoted to the control of aerial robots interacting physically with objects in the environment and with other aerial robots. The paper presents a controller for the particular case of a small-scale autonomous helicopter equipped with a robotic arm for aerial manipulation. Two types of influence are imposed on the helicopter by the manipulator: coherent and non-coherent. In the former case, the forces and torques imposed on the helicopter by the manipulator change with frequencies close to those of the helicopter movement. The paper shows that even small interaction forces imposed on the fuselage periodically in the proper phase could lead to low-frequency instabilities and oscillations, so-called phase circles.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2120320883","citationCount":100,"isOpenAccess":true,"pdfUrl":"https://doi.org/10.5772/53754"},{"id":"oa-W2918665764","title":"Autonomous Tissue Manipulation via Surgical Robot Using Learning Based Model Predictive Control","authors":"Changyeob Shin, Peter Walker Ferguson, Sahba Aghajani Pedram, Ji Ma, Erik P. 
Dutson, Jacob Rosen","journal":"","pubDate":"2019-05-01","doi":"10.1109/icra.2019.8794159","abstract":"Tissue manipulation is a frequently used fundamental subtask of any surgical procedure, and in some cases it may require the involvement of a surgeon's assistant. The complex dynamics of soft tissue as an unstructured environment is one of the main challenges in any attempt to automate its manipulation via a surgical robotic system. Two AI learning-based model predictive control algorithms using vision strategies are proposed and studied: (1) reinforcement learning and (2) learning from demonstration. A comparison of the performance of these AI algorithms in a simulation setting indicated that the learning from demonstration algorithm can boost the learning policy by initializing the predicted dynamics with given demonstrations. Furthermore, the learning from demonstration algorithm was implemented on a Raven IV surgical robotic system, and an experimental approach successfully demonstrated the feasibility of the proposed algorithm. This study is part of a broader vision in which the role of the surgeon will be redefined as a pure decision maker, whereas the vast majority of the manipulation will be conducted autonomously by a surgical robotic system. A supplementary video can be found at: http://bionics.seas.ucla.edu/research/surgeryproject17.html.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2918665764","citationCount":95,"isOpenAccess":true,"pdfUrl":"https://arxiv.org/pdf/1902.01459"},{"id":"oa-W4392127883","title":"Multimodal Sensors Enabled Autonomous Soft Robotic System with Self-Adaptive Manipulation","authors":"Tianhong Wang, Tao Jin, Weiyang Lin, Yangqiao Lin, Hongfei Liu, Tao Yue, Yingzhong Tian, Long Li, Quan Zhang, Chengkuo Lee","journal":"ACS Nano","pubDate":"2024-02-22","doi":"10.1021/acsnano.3c11281","abstract":"Human hands are amazingly skilled at recognizing and handling objects of different sizes and shapes. 
To date, soft robots rarely demonstrate autonomy equivalent to that of humans for fine perception and dexterous operation. Here, an intelligent soft robotic system with autonomous operation and multimodal perception ability is developed by integrating capacitive sensors with a triboelectric sensor. With multiple distributed sensors, our robot system can not only sense and memorize multimodal information but also enable an adaptive grasping method for robotic positioning and grasp control, during which the multimodal sensory information can be captured sensitively and fused at the feature level for cross-modally recognizing objects, leading to a highly enhanced recognition capability. The proposed system, combining the performance and physical intelligence of biological systems (i.e., self-adaptive behavior and multimodal perception), will greatly advance the integration of soft actuators and robotics in many fields.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W4392127883","citationCount":91,"isOpenAccess":false,"pdfUrl":""},{"id":"oa-W2533352770","title":"Team RoboSimian: Semi‐autonomous Mobile Manipulation at the 2015 DARPA Robotics Challenge Finals","authors":"Sisir Karumanchi, Kyle Edelberg, Ian Baldwin, Jeremy Nash, Jason Reid, Charles Bergh, John Leichty, Kalind Carpenter, Matthew Shekels, Matthew Gildner, David Newill‐Smith, Jason Carlton, John Koehler, Tatyana Dobreva, Matthew Frost, Paul D. N. Hebert, James Borders, Jeremy Ma, Bertrand Douillard, Paul Backes, Brett Kennedy, Brian Satzinger, Chelsea Lau, Katie Byl, Krishna Shankar, Joel W. Burdick","journal":"Journal of Field Robotics","pubDate":"2016-10-18","doi":"10.1002/rob.21676","abstract":"This paper discusses hardware and software improvements to the RoboSimian system leading up to and during the 2015 DARPA Robotics Challenge (DRC) Finals. Team RoboSimian achieved a 5th place finish by scoring 7 points in 47:59 min. 
We present an architecture that was structured to be adaptable at the lowest level and repeatable at the highest level. The low‐level adaptability was achieved by leveraging tactile measurements from force torque sensors in the wrist coupled with whole‐body motion primitives. We use the term “behaviors” to conceptualize this low‐level adaptability. Each behavior is a contact‐triggered state machine that enables execution of short‐order manipulation and mobility tasks autonomously. At a high level, we focused on a teach‐and‐repeat style of development by storing executed behaviors and navigation poses in an object/task frame for recall later. This enabled us to perform tasks with high repeatability on competition day while being robust to task differences from practice to execution.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2533352770","citationCount":85,"isOpenAccess":false,"pdfUrl":""},{"id":"oa-W3092822745","title":"Real-time deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation","authors":"Eduardo G. Ribeiro, Raul de Queiroz Mendes, Valdir Grassi","journal":"Robotics and Autonomous Systems","pubDate":"2021-02-24","doi":"10.1016/j.robot.2021.103757","abstract":"","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W3092822745","citationCount":84,"isOpenAccess":true,"pdfUrl":"https://arxiv.org/pdf/2010.06544"},{"id":"oa-W2914433398","title":"Visual Manipulation Relationship Network for Autonomous Robotics","authors":"Hanbo Zhang, Xuguang Lan, Xinwen Zhou, Zhiqiang Tian, Yang Zhang, Nanning Zheng","journal":"","pubDate":"2018-11-01","doi":"10.1109/humanoids.2018.8625071","abstract":"Robotic grasping is one of the most important fields in robotics, in which great progress has been made in recent years with the help of convolutional neural network (CNN). 
However, including multiple objects in one scene can invalidate existing CNN-based grasp detection algorithms, because the manipulation relationships among objects, which are required to guide the robot to grasp things in the right order, are not considered. This paper presents a new CNN architecture called the Visual Manipulation Relationship Network (VMRN) to help robots detect targets and predict manipulation relationships in real time, which ensures that the robot can complete tasks in a safe and reliable way. To implement end-to-end training and meet the real-time requirements of robot tasks, we propose the Object Pairing Pooling Layer (OP<sup>2</sup>L) to help predict all manipulation relationships in one forward pass. Moreover, in order to train VMRN, we collect a dataset named the Visual Manipulation Relationship Dataset (VMRD), consisting of 5185 images with more than 17000 object instances and the manipulation relationships between all possible pairs of objects in every image, labeled by the manipulation relationship tree. 
The experimental results show that the new network architecture can detect objects and predict manipulation relationships simultaneously and meet the real-time requirements in robot tasks.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2914433398","citationCount":82,"isOpenAccess":false,"pdfUrl":""},{"id":"oa-W2003349240","title":"An open-source multi-DOF articulated robotic educational platform for autonomous object manipulation","authors":"Sarah Manzoor, Raza Ul Islam, Aayman Khalid, Abdul Samad, Jamshed Iqbal","journal":"Robotics and Computer-Integrated Manufacturing","pubDate":"2013-12-12","doi":"10.1016/j.rcim.2013.11.003","abstract":"","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2003349240","citationCount":80,"isOpenAccess":true,"pdfUrl":"https://hull-repository.worktribe.com/output/3797120"},{"id":"oa-W2160539467","title":"MICRON: Small Autonomous Robot for Cell Manipulation Applications","authors":"J. Brufau, M. Puig-Vidal, J. López-Sánchez, Josep Samitier, Niklas Snis, Urban Simu, Stefan Johansson, W. Driesen, J.‐M. Breguet, Jian Gao, Thomas Velten, J. Seyfried, Ramon Estaña, Heinz Woern","journal":"","pubDate":"2005-01-01","doi":"10.1109/robot.2005.1570222","abstract":"Manipulating in the micro- or even nano world still poses a great challenge to robotics. Conventional (stationary) systems suffer from drawbacks regarding integration into process supervision and multi-robot approaches, which become highly relevant to fight scaling effects. This paper describes work currently being carried out which aims to make automated manipulation of micrometer-scaled objects possible by robots with nanometer precision. The goal is to establish a small cluster of (up to five) micro robots equipped with on-board electronics, sensors and wireless power supply. Power autonomy has been reached using inductive energy transmission from an external wireless power supply system or a battery based system. 
The electronics requirements are fulfilled by an electronic module with a full-custom integrated-circuit design for robot locomotion control and closed-loop force control of the AFM tool in cell manipulation applications. The maximum velocity obtained is about 0.4 mm/s with a sawtooth voltage signal of 20 Vpp at 2500 Hz. To keep an AFM tool on the micro-robot, a specific tip with an integrated piezoresistor, instead of the classical laser-beam methodology, is validated for force measurement.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2160539467","citationCount":80,"isOpenAccess":true,"pdfUrl":"http://infoscience.epfl.ch/record/55464"},{"id":"oa-W1509312879","title":"The I-SWARM Project: Intelligent Small World Autonomous Robots for Micro-manipulation","authors":"Jörg Seyfried, M. Szymanski, Natalie Bender, Ramon Estaña, Michael Thiel, Heinz Wörn","journal":"Lecture notes in computer science","pubDate":"2005-01-01","doi":"10.1007/978-3-540-30552-1_7","abstract":"","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W1509312879","citationCount":75,"isOpenAccess":false,"pdfUrl":""},{"id":"oa-W2920776497","title":"PMK—A Knowledge Processing Framework for Autonomous Robotics Perception and Manipulation","authors":"Mohammed Diab, Aliakbar Akbari, Muhayy Ud Din, Jan Rosell","journal":"Sensors","pubDate":"2019-03-07","doi":"10.3390/s19051166","abstract":"Autonomous indoor service robots are supposed to accomplish tasks, like <i>serve a cup</i>, which involve manipulation actions. Particularly, for complex manipulation tasks which are subject to geometric constraints, spatial information and rich semantic knowledge about objects, their types, and their functionality are required, together with the way in which these objects can be manipulated. 
Along this line, this paper presents an ontology-based reasoning framework called Perception and Manipulation Knowledge (PMK) that includes: (1) the modeling of the environment in a standardized way to provide common vocabularies for information exchange in human-robot or robot-robot collaboration, (2) a sensory module to perceive the objects in the environment and assert the ontological knowledge, and (3) an evaluation-based analysis of the situation of the objects in the environment, in order to enhance the planning of manipulation tasks. The paper describes the concepts and the implementation of PMK, and presents an example demonstrating the range of information the framework can provide for autonomous robots.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2920776497","citationCount":62,"isOpenAccess":true,"pdfUrl":"https://www.mdpi.com/1424-8220/19/5/1166/pdf?version=1551958405"},{"id":"oa-W2134858017","title":"Extracting data from human manipulation of objects towards improving autonomous robotic grasping","authors":"Diego R. Faria, Ricardo Martins, Jorge Lobo, Jorge Dias","journal":"Robotics and Autonomous Systems","pubDate":"2011-09-14","doi":"10.1016/j.robot.2011.07.020","abstract":"","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W2134858017","citationCount":52,"isOpenAccess":true,"pdfUrl":"https://www.sciencedirect.com/science/article/pii/S0921889011001527"},{"id":"oa-W4213322786","title":"Six-Dimensional Target Pose Estimation for Robot Autonomous Manipulation: Methodology and Verification","authors":"Rui Wang, Congjia Su, Hao Yu, Shuo Wang","journal":"IEEE Transactions on Cognitive and Developmental Systems","pubDate":"2022-02-15","doi":"10.1109/tcds.2022.3151331","abstract":"The autonomous and precise grasping operation of robots is considered challenging in situations where there are different objects with different shapes and postures. 
In this study, we propose a method for 6-D target pose estimation for robot autonomous manipulation. The proposed method is based on: 1) a fully convolutional neural network for scene semantic segmentation and 2) fast global registration to achieve target pose estimation. To verify the validity of the proposed algorithm, we built a robot grasping operation system and used the point cloud model of the target object and its pose estimation results to generate the robot grasping posture control strategy. Experimental results showed that the proposed method can achieve six-degree-of-freedom pose estimation for arbitrarily placed target objects and complete the autonomous grasping of the target. Comparative experiments demonstrated that the proposed target pose estimation method achieved a significant improvement in average accuracy and real-time performance compared with traditional methods.","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W4213322786","citationCount":14,"isOpenAccess":false,"pdfUrl":""},{"id":"oa-W4392967840","title":"Learning strategies for underwater robot autonomous manipulation control","authors":"Hai Huang, Tao Jiang, Zongyu Zhang, Yize Sun, Hongde Qin, Xinyang Li, Xu Yang","journal":"Journal of the Franklin Institute","pubDate":"2024-03-19","doi":"10.1016/j.jfranklin.2024.106773","abstract":"","tldr":"","source":"OpenAlex","sourceUrl":"https://openalex.org/W4392967840","citationCount":4,"isOpenAccess":false,"pdfUrl":""}],"links":{"web":"https://science-database.com/technology/robotics","llms_txt":"https://science-database.com/technology/robotics/llms.txt","api":"https://science-database.com/api/v1/technology/robotics"}}