NASA ADS 2025-00-00
Li, Xingxu, Han, Yiheng, Ma, Nan, Liu, Yongjin, Pan, Jia, Yang, Shun, Zheng, Siyi
IEEE Transactions on Robotics
Using robots for tomato truss harvesting represents a promising approach to agricultural production. However, incomplete acquisition of perception information and clumsy operations often result in low harvest success rates or crop damage. To address this issue, we designed a new method for tomato truss perception, an autonomous harvesting method, and a novel circular rotary cutting end-effector. The robot performs object detection and keypoint detection on tomato trusses using the proposed top-down fusion network (TDFNet), making decisions on suitable targets for harvesting based on phenotyping and pose estimation. The designed end-effector moves gradually from the bottom up to wrap around the tomato truss, cutting the peduncle to complete the harvest. Experiments conducted in real-world scenarios for robotic perception and autonomous harvesting of tomato trusses show that the proposed method increases accuracy by up to 11.42% and 22.29% under complete and limited dataset conditions, respectively, compared to baseline models. Furthermore, we have implemented an automatic tomato harvesting system based on TDFNet, which reaches an average harvest success rate of 89.58% in the greenhouse.
NASA ADS 2024-05-00
1 citation Huang, Hai, Jiang, Tao, Zhang, Zongyu, Sun, Yize, Qin, Hongde, Li, Xinyang, Yang, Xu
Journal of the Franklin Institute
Autonomous manipulation represents highly intelligent coordination of robotic vision and control, and is a symbol of the advances of robotic intelligence. The limitations of visual sensing and increasingly complex experimental conditions make autonomous manipulation more difficult, particularly for deep reinforcement learning methods, which can enhance robotic control intelligence but require extensive training. Because underwater operations are characterized by high-dimensional continuous state and action spaces, this paper adopts a policy-based reinforcement learning method as the foundational approach. To address the instability and low convergence efficiency of traditional policy-based reinforcement learning algorithms during learning, this paper proposes a novel policy learning method. The method adopts the Proximal Policy Optimization algorithm (PPO-Clip) and optimizes it through an actor-critic network, aiming to improve the stability and effectiveness of convergence in the learning process. For the underwater training environment, a new reward shaping scheme has been designed to address reward sparsity during training: a manually crafted dense reward function serves as attractive and repulsive potential functions for goal manipulation and obstacle avoidance, respectively. For the highly complex underwater manipulation and training environment, a transfer learning algorithm has been established to reduce training time and compensate for the differences between simulation and experiment. Simulations and tank experiments have verified the performance of the proposed strategy learning method.
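The potential-function reward shaping described in this abstract can be sketched roughly as follows; the function name, gains, and influence radius are illustrative assumptions, not details from the paper.

```python
import numpy as np

def shaped_reward(ee_pos, goal_pos, obstacle_pos,
                  k_att=1.0, k_rep=0.5, rho0=0.3):
    """Dense reward from attractive/repulsive potentials.

    Illustrative sketch: the attractive term pulls the end-effector
    toward the manipulation goal, and the repulsive term penalizes
    proximity to an obstacle inside the influence radius rho0.
    The gains k_att, k_rep and radius rho0 are assumed values.
    """
    d_goal = np.linalg.norm(ee_pos - goal_pos)
    u_att = 0.5 * k_att * d_goal ** 2          # attractive potential
    d_obs = np.linalg.norm(ee_pos - obstacle_pos)
    if d_obs < rho0:                            # inside influence zone
        u_rep = 0.5 * k_rep * (1.0 / d_obs - 1.0 / rho0) ** 2
    else:
        u_rep = 0.0
    return -(u_att + u_rep)                     # reward = negative potential
```

The reward is zero at the goal (with obstacles far away) and grows more negative as the end-effector drifts from the goal or approaches an obstacle, which densifies an otherwise sparse task reward.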
NASA ADS 2024-09-00
Wolf, Adam, Beck, Sascha, Zsoldos, Panna, Galambos, Peter, Szell, Karoly
2024 IEEE 22nd Jubilee International Symposium on Intelligent Systems and Informatics (SISY)
The increasing complexity and diversity of laboratory automation call for more adaptable and integrated solutions. This paper presents a real-life implementation of the Laboratory Automation Plug and Play (LAPP) concept, leveraging the SiLA 2 protocol to enable seamless robotic integration in heterogeneous laboratory environments. We introduce the mobERT mobile manipulator, a collaborative robot system designed to handle standard labware, such as ANSI/SLAS microtiter plates, across multiple workstations. Our approach employs a hierarchical workflow decomposition and a system architecture model to facilitate plug-and-play configuration. We implemented this system using Biosero's GBG scheduler, ensuring scalable and standardized interoperability. The implementation demonstrates the practical application of LAPP in a pharmaceutical laboratory setting, specifically automating the sample preparation workflow for High-performance Liquid Chromatography (HPLC). This work highlights the feasibility of modular, low-level control agnostic solutions in advancing laboratory automation towards higher efficiency and flexibility.
NASA ADS 2024-10-00
72 citations Ze, Yanjie, Chen, Zixuan, Wang, Wenhao, Chen, Tianyi, He, Xialin, Yuan, Ying, Peng, Xue Bin, Wu, Jiajun
arXiv e-prints
Humanoid robots capable of autonomous operation in diverse environments have long been a goal for roboticists. However, autonomous manipulation by humanoid robots has largely been restricted to one specific scene, primarily due to the difficulty of acquiring generalizable skills and the expensiveness of in-the-wild humanoid robot data. In this work, we build a real-world robotic system to address this challenging problem. Our system is mainly an integration of 1) a whole-upper-body robotic teleoperation system to acquire human-like robot data, 2) a 25-DoF humanoid robot platform with a height-adjustable cart and a 3D LiDAR sensor, and 3) an improved 3D Diffusion Policy learning algorithm for humanoid robots to learn from noisy human data. We run more than 2000 episodes of policy rollouts on the real robot for rigorous policy evaluation. Empowered by this system, we show that using only data collected in one single scene and with only onboard computing, a full-sized humanoid robot can autonomously perform skills in diverse real-world scenarios. Videos are available at https://humanoid-manipulation.github.io.
arXiv 2020-10-13
Eduardo Godinho Ribeiro, Raul de Queiroz Mendes, Valdir Grassi
Robotics and Autonomous Systems, Elsevier, vol. 139, 2021, 103757
In order to explore robotic grasping in unstructured and dynamic environments, this work addresses the visual perception phase involved in the task. This phase involves processing visual data to obtain the location of the object to be grasped, its pose, and the points at which the robot's grippers must make contact to ensure a stable grasp. For this, the Cornell Grasping dataset is used to train a convolutional neural network that, given an image of the robot's workspace containing a certain object, is able to predict a grasp rectangle that symbolizes the position, orientation, and opening of the robot's grippers before closing. In addition to this network, which runs in real time, another is designed to deal with situations in which the object moves in the environment. This second network is trained to perform visual servo control, ensuring that the object remains in the robot's field of view. It predicts the proportional values of the linear and angular velocities that the camera must have so that the object always appears in the image processed by the grasp network. The dataset used for training was automatically generated by a Kinova Gen3 manipulator. The robot is also used to evaluate real-time applicability and to obtain practical results from the designed algorithms. Moreover, the offline results obtained through validation sets are analyzed and discussed regarding their efficiency and processing speed. The developed controller was able to achieve millimeter accuracy in the final position for a target object seen for the first time. To the best of our knowledge, no other works in the literature achieve such precision with a controller learned from scratch. Thus, this work presents a new system for autonomous robotic manipulation with high processing speed and the ability to generalize to several different objects.
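The Cornell-style grasp rectangle mentioned above is commonly parameterized by a center, an orientation, and the gripper opening and plate width; a minimal sketch of decoding such a prediction into drawable corners (parameter names are assumed, not taken from the paper):

```python
import numpy as np

def grasp_rectangle_corners(x, y, theta, w, h):
    """Corners of a grasp rectangle (Cornell-style parameterization).

    (x, y): center in image coordinates, theta: gripper orientation,
    w: gripper opening, h: gripper plate width. The returned corners
    can be drawn on the workspace image to visualize the predicted
    grasp. Names follow common usage, not a specific network's API.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])             # 2-D rotation by theta
    half = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                     [w / 2,  h / 2], [-w / 2,  h / 2]])
    return half @ R.T + np.array([x, y])        # rotate, then translate
```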
arXiv 2021-12-03
Markku Suomalainen, Yiannis Karayiannidis, Ville Kyrki
Robotics and Autonomous Systems, Volume 156, 2022, 104224, ISSN 0921-8890,
In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more manipulation tasks that were previously done by humans, and there is a growing number of publications on 1) performing tasks that always require contact and 2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, then observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
arXiv 2024-06-20
Haokun Liu, Yaonan Zhu, Kenji Kato, Atsushi Tsukahara, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
IEEE Robotics and Automation Letters, vol. 9, no. 8, pp. 6904-6911, Aug. 2024
Large Language Models (LLMs) are gaining popularity in the field of robotics. However, LLM-based robots are limited to simple, repetitive motions due to the poor integration between language models, robots, and the environment. This paper proposes a novel approach to enhance the performance of LLM-based autonomous manipulation through Human-Robot Collaboration (HRC). The approach involves using a prompted GPT-4 language model to decompose high-level language commands into sequences of motions that can be executed by the robot. The system also employs a YOLO-based perception algorithm, providing visual cues to the LLM, which aids in planning feasible motions within the specific environment. Additionally, an HRC method is proposed by combining teleoperation and Dynamic Movement Primitives (DMP), allowing the LLM-based robot to learn from human guidance. Real-world experiments have been conducted using the Toyota Human Support Robot for manipulation tasks. The outcomes indicate that tasks requiring complex trajectory planning and reasoning over environments can be efficiently accomplished through the incorporation of human demonstrations.
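A Dynamic Movement Primitive of the kind combined with teleoperation above follows a standard second-order transformation system; below is a 1-D integration sketch with illustrative gains and timing (the paper's actual formulation and parameters may differ):

```python
import numpy as np

def rollout_dmp(y0, g, f=None, alpha=25.0, tau=1.0, dt=0.001, steps=2000):
    """Integrate a 1-D DMP transformation system with explicit Euler:

        tau * dz = alpha * (alpha/4 * (g - y) - z) + f(t)
        tau * dy = z

    With f == 0 this is a critically damped spring that converges to
    the goal g; learning from human guidance amounts to fitting the
    forcing term f to a demonstrated trajectory. The gains alpha, tau
    and step sizes here are illustrative assumptions.
    """
    y, z = y0, 0.0
    traj = []
    for k in range(steps):
        ft = f(k * dt) if f else 0.0
        dz = (alpha * (alpha / 4.0 * (g - y) - z) + ft) / tau
        dy = z / tau
        z += dz * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)
```

Passing a nonzero forcing term `f` reshapes the path toward a human demonstration while the spring dynamics still guarantee convergence to the goal.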
arXiv 2022-10-12
Guanrui Li, Xinyang Liu, Giuseppe Loianno
IEEE Transactions on Robotics, 2024
Human-robot interaction will play an essential role in various industries and daily tasks, enabling robots to collaborate effectively with humans and reduce their physical workload. Most existing approaches for physical human-robot interaction focus on collaboration between a human and a single ground or aerial robot. In recent years, very little progress has been made in this research area when considering multiple aerial robots, which offer increased versatility and mobility. This paper proposes a novel approach for physical human-robot collaborative transportation and manipulation of a cable-suspended payload with multiple aerial robots. The proposed method enables smooth and intuitive interaction between the transported objects and a human worker. At the same time, we consider distance constraints during operations by exploiting the internal redundancy of the multi-robot transportation system. The key elements of our approach are (a) a collaborative payload external wrench estimator that does not rely on any force sensor; (b) a 6D admittance controller for human-aerial-robot collaborative transportation and manipulation; (c) a human-aware force distribution that exploits the internal system redundancy to guarantee the execution of additional tasks, such as inter-human-robot separation, without affecting the payload trajectory tracking or the quality of interaction. We validate the approach through extensive simulation and real-world experiments. These include scenarios where the robot team assists the human in transporting and manipulating a load, or where the human helps the robot team navigate the environment. To the best of our knowledge, we experimentally demonstrate for the first time that our approach enables a quadrotor team to physically collaborate with a human in manipulating a payload in all 6 DoF in collaborative human-robot transportation and manipulation tasks.
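An admittance controller of the type listed in (b) maps an estimated external wrench to a reference twist through virtual mass-damper dynamics; a per-axis sketch with assumed diagonal parameters (a full 6-D implementation would treat orientation on SO(3), and the paper's actual gains and structure may differ):

```python
import numpy as np

class AdmittanceFilter:
    """Per-axis admittance dynamics  M*dv + D*v = w_ext, integrated with
    explicit Euler. Feeding the estimated human wrench through this
    filter yields a reference payload twist, so the load 'gives way'
    to pushes and settles when the human stops applying force. The
    diagonal mass/damping values and 1 kHz rate are illustrative.
    """
    def __init__(self, m=2.0, d=8.0, dt=0.001, dim=6):
        self.m, self.d, self.dt = m, d, dt
        self.v = np.zeros(dim)          # reference twist [vx vy vz wx wy wz]

    def step(self, wrench):
        dv = (np.asarray(wrench, float) - self.d * self.v) / self.m
        self.v = self.v + dv * self.dt
        return self.v
```

Under a constant wrench the reference velocity settles at `wrench / d`, so the damping gain sets how fast the payload drifts under a steady human push.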
OpenAlex 2012-10-01
120 citations J. Andrew Bagnell, Felipe Lira de Sá Cavalcanti, Lei Cui, Thomas Galluzzo, Martial Hebert, Moslem Kazemi, Matthew Klingensmith, Jacqueline Libby, Tian Yu Liu, Nancy S. Pollard, Mihail Pivtoraiko, Jean‐Sebastien Valois, Ranqi Zhu
We describe the software components of a robotics system designed to autonomously grasp objects and perform dexterous manipulation tasks with only high-level supervision. The system is centered on the tight integration of several core functionalities, including perception, planning, and control, with the logical structuring of tasks driven by a Behavior Tree architecture. The advantage of this implementation is reduced execution time while integrating advanced algorithms for autonomous manipulation. We describe our approach to 3-D perception, real-time planning, force-compliant motions, and audio processing. Performance results for object grasping and complex manipulation tasks, from both in-house tests and an independent evaluation team, are presented.
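A Behavior Tree like the one structuring tasks above composes actions with Sequence and Fallback nodes; a minimal sketch (the node layout and blackboard keys are hypothetical, not the paper's actual task tree):

```python
# Minimal behavior-tree primitives; the grasp task below is illustrative.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order; fails on the first child failure."""
    def __init__(self, *children): self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds on the first child success."""
    def __init__(self, *children): self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    """Leaf node wrapping a callable that reads/writes the blackboard."""
    def __init__(self, fn): self.fn = fn
    def tick(self, blackboard): return self.fn(blackboard)

# Hypothetical grasp task: perceive, then reuse-or-create a plan, then act.
grasp_tree = Sequence(
    Action(lambda bb: SUCCESS if bb.get("object_pose") else FAILURE),
    Fallback(Action(lambda bb: bb.get("plan", FAILURE)),
             Action(lambda bb: bb.setdefault("plan", SUCCESS))),
    Action(lambda bb: SUCCESS),   # stand-in for executing the grasp
)
```

Ticking the root each control cycle re-evaluates the whole task logic, which is what lets such systems recover when perception or planning transiently fails.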
OpenAlex 2022-02-15
14 citations Rui Wang, Congjia Su, Hao Yu, Shuo Wang
IEEE Transactions on Cognitive and Developmental Systems
The autonomous and precise grasping operation of robots is considered challenging when objects differ in shape and posture. In this study, we propose a method of 6-D target pose estimation for robot autonomous manipulation. The proposed method is based on: 1) a fully convolutional neural network for scene semantic segmentation and 2) fast global registration to achieve target pose estimation. To verify the validity of the proposed algorithm, we built a robot grasping operation system and used the point cloud model of the target object and its pose estimation results to generate the robot's grasping posture control strategy. Experimental results showed that the proposed method achieves six-degree-of-freedom pose estimation for arbitrarily placed target objects and completes autonomous grasping of the target. Comparative experiments demonstrated that the proposed pose estimation method achieves a significant improvement in average accuracy and real-time performance over traditional methods.
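The registration step underlying such pose estimation ultimately solves for a rigid transform between corresponding points; a sketch of the closed-form SVD (Kabsch/Umeyama) solution, assuming correspondences are already given (the paper obtains them from segmentation and feature matching, and fast global registration adds robust outlier handling on top):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping 3-D points src -> dst
    for known correspondences, via SVD of the cross-covariance matrix.
    This is the closed-form core that registration pipelines refine.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps det(R) = +1 (a proper rotation).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```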
OpenAlex 2013-01-01
100 citations Konstantin Kondak, Kai Krieger, Alin Albu‐Schäffer, Marc Schwarzbach, Maximilian Laiacker, Iván Maza, Ángel Rodríguez Castaño, Anı́bal Ollero
International Journal of Advanced Robotic Systems
This paper is devoted to the control of aerial robots interacting physically with objects in the environment and with other aerial robots. The paper presents a controller for the particular case of a small-scale autonomous helicopter equipped with a robotic arm for aerial manipulation. Two types of influence are imposed on the helicopter by the manipulator: coherent and non-coherent. In the former case, the forces and torques imposed on the helicopter by the manipulator change with frequencies close to those of the helicopter's movement. The paper shows that even small interaction forces imposed on the fuselage periodically, in the proper phase, can lead to low-frequency instabilities and oscillations, so-called phase circles.
OpenAlex 2014-05-01
141 citations Konstantin Kondak, Felix Huber, Marc Schwarzbach, Maximilian Laiacker, David Sommer, Manuel Béjar, Alfredo Ollero Ojeda
This paper is devoted to a system for aerial manipulation composed of a helicopter and an industrial manipulator. The use of an industrial manipulator is motivated by practical applications identified in several cooperation projects with industry. We address the coupling between the manipulator and the helicopter and show that, even when we have an ideal controller for the manipulator and a high-performance controller for the helicopter, an unbounded energy flow can be generated by internal forces between helicopter and manipulator if both controllers are used independently. To solve this problem, we propose a new kinematic coupling for control by introducing an additional manipulation DoF realized by helicopter rotation around its yaw axis. The new experimental setup and the required modifications to the manipulator controller are described. Further, we propose a dynamic coupling, implemented by modifying the helicopter controller to feed the interaction force/torque, measured between the manipulator base and the fuselage, directly to the actuators of the rotor blades. Finally, we present experimental results for aerial manipulation and their analysis.