Results 1 - 13 of 13

1.
Sci Robot ; 9(89): eadi9641, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38657088

ABSTRACT

Autonomous wheeled-legged robots have the potential to transform logistics systems, improving operational efficiency and adaptability in urban environments. Navigating urban environments, however, poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation. These challenges include the need for adaptive locomotion across varied terrains and the ability to navigate efficiently around complex dynamic obstacles. This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city. Using model-free reinforcement learning (RL) techniques and privileged learning, we developed a versatile locomotion controller. This controller achieves efficient and robust locomotion over various rough terrains, facilitated by smooth transitions between walking and driving modes. It is tightly integrated with a learned navigation controller through a hierarchical RL framework, enabling effective navigation through challenging terrain and various obstacles at high speed. Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain. These missions demonstrate the system's robustness and adaptability, underscoring the importance of integrated control systems in achieving seamless navigation in complex environments. Our findings support the feasibility of wheeled-legged robots and hierarchical RL for autonomous navigation, with implications for last-mile delivery and beyond.
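
To make the hierarchy described above concrete, here is a minimal Python sketch of the control split it implies: a high-level navigation policy chooses a body-velocity command and a walk/drive mode, and a low-level locomotion policy converts that command and proprioceptive state into joint and wheel targets. All names, dimensions, and the toy decision rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class NavigationPolicy:
    """Hypothetical high-level policy: maps a local goal and a terrain
    observation to a body-velocity command and a locomotion mode."""
    def act(self, goal_xy, terrain_obs):
        # Head toward the goal; prefer driving on flat ground, walking on rough ground.
        direction = goal_xy / (np.linalg.norm(goal_xy) + 1e-6)
        roughness = float(np.std(terrain_obs))
        mode = "drive" if roughness < 0.05 else "walk"
        speed = 2.0 if mode == "drive" else 1.0
        return {"v_xy": speed * direction, "yaw_rate": 0.0, "mode": mode}

class LocomotionPolicy:
    """Hypothetical low-level controller: turns the command and proprioceptive
    state into joint position targets and wheel speeds (placeholder mapping)."""
    def act(self, command, proprio_state):
        n_joints = 16
        joint_targets = 0.1 * np.tanh(np.random.randn(n_joints))  # stand-in for a learned policy
        wheel_speed = np.linalg.norm(command["v_xy"]) if command["mode"] == "drive" else 0.0
        return joint_targets, wheel_speed

# One planning step followed by several control steps (the low level runs faster).
nav, loco = NavigationPolicy(), LocomotionPolicy()
cmd = nav.act(goal_xy=np.array([5.0, 2.0]), terrain_obs=np.random.rand(64))
for _ in range(10):
    targets, wheels = loco.act(cmd, proprio_state=np.zeros(48))
```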

2.
Sci Robot ; 9(88): eadi7566, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38478592

ABSTRACT

Performing agile navigation with four-legged robots is a challenging task because of the highly dynamic motions, contacts with various parts of the robot, and the limited field of view of the perception sensors. Here, we propose a fully learned approach to training such robots to conquer scenarios reminiscent of parkour challenges. The method involves training advanced locomotion skills, such as walking, jumping, climbing, and crouching, for several types of obstacles and then using a high-level policy to select and control those skills across the terrain. Thanks to our hierarchical formulation, the navigation policy is aware of the capabilities of each skill, and it adapts its behavior depending on the scenario at hand. In addition, a perception module was trained to reconstruct obstacles from highly occluded and noisy sensory data and endows the pipeline with scene understanding. Compared with previous attempts, our method can plan a path for challenging scenarios without expert demonstrations, offline computation, a priori knowledge of the environment, or explicitly taking contacts into account. Although these modules were trained on simulated data only, our real-world experiments demonstrate successful transfer to hardware, where the robot navigated and crossed consecutive challenging obstacles at speeds of up to 2 meters per second.
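
The hierarchical formulation can be pictured as a high-level policy that scores a small catalog of pretrained skills and hands control to the most suitable one for the upcoming obstacle. The sketch below is a simplified illustration with placeholder skill names, features, and weights; it is not the paper's architecture.

```python
import numpy as np

SKILLS = ["walk", "jump", "climb", "crouch"]

def skill_scores(obstacle_features):
    """Stand-in for a learned high-level policy: returns a preference
    over skills given features of the reconstructed obstacle."""
    w = np.random.randn(len(SKILLS), obstacle_features.size)  # placeholder weights
    logits = w @ obstacle_features
    return np.exp(logits) / np.exp(logits).sum()

def select_skill(obstacle_features):
    probs = skill_scores(obstacle_features)
    return SKILLS[int(np.argmax(probs))], probs

# Example: pick a skill for an obstacle reconstructed from noisy depth data.
features = np.array([0.6, 0.4, 0.0])   # e.g. height, depth, gap width (illustrative)
skill, probs = select_skill(features)
print(skill, probs.round(2))
```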


Subject(s)
Robotics; Learning; Locomotion; Motion; Upper Extremity
3.
Sci Robot ; 9(86): eadh5401, 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38232148

ABSTRACT

Legged locomotion is a complex control problem that requires both accuracy and robustness to cope with real-world challenges. Legged systems have traditionally been controlled using trajectory optimization with inverse dynamics. Such hierarchical model-based methods are appealing because of intuitive cost function tuning, accurate planning, generalization, and, most importantly, the insight gained from more than a decade of extensive research. However, model mismatch and violation of assumptions are common sources of faulty operation. Simulation-based reinforcement learning, on the other hand, results in locomotion policies with unprecedented robustness and recovery skills. Yet, all learning algorithms struggle with sparse rewards emerging from environments where valid footholds are rare, such as gaps or stepping stones. In this work, we propose a hybrid control architecture that combines the advantages of both worlds to simultaneously achieve greater robustness, foot-placement accuracy, and terrain generalization. Our approach uses a model-based planner to roll out a reference motion during training. A deep neural network policy is trained in simulation, aiming to track the optimized footholds. We evaluated the accuracy of our locomotion pipeline on sparse terrains, where pure data-driven methods are prone to fail. Furthermore, we demonstrate superior robustness in the presence of slippery or deformable ground when compared with model-based counterparts. Last, we show that our proposed tracking controller generalizes across different trajectory optimization methods not seen during training. In conclusion, our work unites the predictive capabilities and optimality guarantees of online planning with the inherent robustness attributed to offline learning.
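
One way to picture the hybrid architecture is through the training objective: the learned policy is rewarded for tracking the planner's optimized footholds alongside the usual velocity-tracking and effort terms. The snippet below is a schematic reward of that form; the weights and term choices are assumptions made for illustration.

```python
import numpy as np

def tracking_reward(foot_positions, planned_footholds, base_vel, vel_cmd, joint_torques):
    """Schematic reward combining foothold tracking (from a model-based planner)
    with conventional locomotion terms; all weights are illustrative."""
    foothold_err = np.linalg.norm(foot_positions - planned_footholds, axis=1).mean()
    vel_err = np.linalg.norm(base_vel - vel_cmd)
    effort = np.square(joint_torques).mean()
    return (
        1.0 * np.exp(-10.0 * foothold_err)   # reward precise foot placement
        + 0.5 * np.exp(-4.0 * vel_err)       # reward velocity tracking
        - 1e-4 * effort                      # mild effort penalty
    )

# Example evaluation with dummy values for a quadruped (4 feet, 12 joints).
r = tracking_reward(
    foot_positions=np.random.rand(4, 3),
    planned_footholds=np.random.rand(4, 3),
    base_vel=np.array([0.5, 0.0, 0.0]),
    vel_cmd=np.array([0.6, 0.0, 0.0]),
    joint_torques=np.random.randn(12),
)
```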

4.
Sci Robot ; 8(84): eabp9758, 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-37992191

ABSTRACT

Automated building processes that enable efficient in situ resource utilization can facilitate construction in remote locations while simultaneously offering a carbon-reducing alternative to commonplace building practices. Toward these ends, we present a robotic construction pipeline that is capable of planning and building freeform stone walls and landscapes from highly heterogeneous local materials using a robotic excavator equipped with a shovel and gripper. Our system learns from real and simulated data to facilitate the online detection and segmentation of stone instances in spatial maps, enabling robotic grasping and textured 3D scanning of individual stones and rubble elements. Given a limited inventory of these digitized stones, our geometric planning algorithm uses a combination of constrained registration and signed-distance-field classification to determine how these should be positioned toward the formation of stable and explicitly shaped structures. We present a holistic approach for the robotic manipulation of complex objects toward dry stone construction and use the same hardware and mapping to facilitate autonomous terrain-shaping on a single construction site. Our process is demonstrated with the construction of a freestanding stone wall (10 meters by 1.7 meters by 4 meters) and a permanent retaining wall (65.5 meters by 1.8 meters by 6 meters) that is integrated with robotically contoured terraces (665 square meters). The work illustrates the potential of autonomous heavy construction vehicles to build adaptively with highly irregular, abundant, and sustainable materials that require little to no transportation and preprocessing.
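
A central geometric step mentioned above is scoring candidate stone poses against a signed distance field of the structure built so far. The toy example below illustrates that idea on a voxel-grid SDF; the resolution, penalty weights, and helper names are assumptions for the sketch, not the authors' planner.

```python
import numpy as np

def pose_score(stone_points, sdf, origin, voxel_size):
    """Toy scoring of a candidate stone pose: sampled surface points should lie
    close to the target surface (SDF ~ 0) without penetrating existing material
    (SDF < 0)."""
    idx = np.floor((stone_points - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(sdf.shape) - 1)
    d = sdf[idx[:, 0], idx[:, 1], idx[:, 2]]
    penetration = np.clip(-d, 0.0, None).sum()   # heavily penalize overlap
    surface_fit = np.abs(d).mean()               # prefer contact with the target surface
    return -(10.0 * penetration + surface_fit)

# Example: score a random point cloud against a synthetic SDF volume.
sdf = np.random.uniform(-0.05, 0.3, size=(50, 50, 50))
points = np.random.uniform(0.0, 2.5, size=(200, 3))
score = pose_score(points, sdf, origin=np.zeros(3), voxel_size=0.05)
```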

5.
Sci Robot ; 8(81): eadg5014, 2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37585544

ABSTRACT

Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. These skills can be assessed on the basis of a system's ability to coordinate complex holistic movements and multiple contact interactions when solving different tasks. However, existing approaches have only been able to shape such behaviors with hand-crafted state machines, densely engineered rewards, or prerecorded expert demonstrations. Here, we propose a minimally guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in premodeled environments. The key insight is that multimodal problems of this nature can be formulated and treated within the context of integrated task and motion planning (TAMP). An effective bilevel search strategy was achieved by incorporating domain-specific rules and adequately combining the strengths of different planning techniques: trajectory optimization and informed graph search coupled with sampling-based planning. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and nonprehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors. These behaviors were also deployed on the real system using a two-layer whole-body tracking controller.
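
The bilevel search can be summarized as a discrete search over contact and task modes whose edges are validated by a continuous trajectory optimizer. The skeleton below sketches that pattern in generic form; expand_modes, optimize_trajectory, and heuristic are placeholders standing in for the paper's informed graph search and whole-body optimization.

```python
import heapq
import itertools

def bilevel_plan(start, goal, expand_modes, optimize_trajectory, heuristic):
    """Generic bilevel TAMP skeleton: best-first search over discrete mode
    sequences (outer layer), with a trajectory optimizer validating each
    candidate transition (inner layer)."""
    tie = itertools.count()          # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start, goal), 0.0, next(tie), start, [])]
    visited = set()
    while frontier:
        _, cost, _, node, plan = heapq.heappop(frontier)
        if node == goal:
            return plan              # sequence of (mode, trajectory) pairs
        if node in visited:
            continue
        visited.add(node)
        for mode, next_node in expand_modes(node):             # discrete layer
            traj = optimize_trajectory(node, next_node, mode)  # continuous layer
            if traj is None:                                   # motion infeasible: prune edge
                continue
            new_cost = cost + traj["cost"]
            heapq.heappush(frontier, (new_cost + heuristic(next_node, goal),
                                      new_cost, next(tie), next_node,
                                      plan + [(mode, traj)]))
    return None                      # no feasible mode sequence found

# Tiny illustrative use on an abstract three-node task graph.
edges = {"start": [("approach", "at_door")], "at_door": [("push_open", "inside")]}
plan = bilevel_plan("start", "inside",
                    expand_modes=lambda n: edges.get(n, []),
                    optimize_trajectory=lambda a, b, m: {"cost": 1.0},
                    heuristic=lambda n, g: 0.0)
```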

6.
Sci Robot ; 8(80): eade9548, 2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37436970

ABSTRACT

The interest in exploring planetary bodies for scientific investigation and in situ resource utilization is ever-rising. Yet, many sites of interest are inaccessible to state-of-the-art planetary exploration robots because of the robots' inability to traverse steep slopes, unstructured terrain, and loose soil. In addition, current single-robot approaches only allow a limited exploration speed and a single set of skills. Here, we present a team of legged robots with complementary skills for exploration missions in challenging planetary analog environments. We equipped the robots with an efficient locomotion controller, a mapping pipeline for online and postmission visualization, instance segmentation to highlight scientific targets, and scientific instruments for remote and in situ investigation. Furthermore, we integrated a robotic arm on one of the robots to enable high-precision measurements. Legged robots can swiftly navigate representative terrains, such as granular slopes beyond 25°, loose soil, and unstructured terrain, highlighting their advantages compared with wheeled rover systems. We successfully verified the approach in analog deployments at the Beyond Gravity ExoMars rover test bed, in a quarry in Switzerland, and at the Space Resources Challenge in Luxembourg. Our results show that a team of legged robots with advanced locomotion, perception, and measurement skills, as well as task-level autonomy, can conduct successful, effective missions in a short time. Our approach enables the scientific exploration of planetary target sites that are currently out of human and robotic reach.

7.
Int J Rob Res ; 41(2): 189-209, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35694721

ABSTRACT

Modern robotic systems are expected to operate robustly in partially unknown environments. This article proposes an algorithm capable of controlling a wide range of high-dimensional robotic systems in such challenging scenarios. Our method is based on the path integral formulation of stochastic optimal control, which we extend with constraint-handling capabilities. Under our control law, the optimal input is inferred from a set of stochastic rollouts of the system dynamics. These rollouts are simulated by a physics engine, placing minimal restrictions on the types of systems and environments that can be modeled. Although sampling-based algorithms are typically not suitable for online control, we demonstrate in this work how importance sampling and constraints can be used to effectively curb the sampling complexity and enable real-time control applications. Furthermore, the path integral framework provides a natural way of incorporating existing control architectures as ancillary controllers for shaping the sampling distribution. Our results reveal that even in cases where the ancillary controller would fail, our stochastic control algorithm provides an additional safety and robustness layer. Moreover, in the absence of an existing ancillary controller, our method can be used to train a parametrized importance sampling policy using data from the stochastic rollouts. The algorithm may thereby bootstrap itself by learning an importance sampling policy offline and then refining it for unseen environments during online control. We validate our results on three robotic systems, including hardware experiments on a quadrupedal robot.
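
The core of the sampling-based control law is a path-integral (MPPI-style) update: perturbed input sequences are rolled out, weighted by their exponentiated negative cost, and averaged to refine the nominal input. The sketch below shows that update with a trivial integrator standing in for the physics-engine rollouts; constraint handling and the ancillary controller are omitted, and all parameters are illustrative.

```python
import numpy as np

def path_integral_update(u_nominal, rollout_cost, n_samples=256, noise_std=0.3, temperature=1.0):
    """Generic MPPI-style update: sample perturbed input sequences, weight them
    by exponentiated negative cost, and average the perturbations."""
    horizon, dim = u_nominal.shape
    noise = noise_std * np.random.randn(n_samples, horizon, dim)
    costs = np.array([rollout_cost(u_nominal + noise[k]) for k in range(n_samples)])
    costs -= costs.min()                           # numerical stabilization
    weights = np.exp(-costs / temperature)
    weights /= weights.sum()
    return u_nominal + np.einsum("k,khd->hd", weights, noise)

def rollout_cost(u_seq, dt=0.05):
    """Dummy rollout standing in for physics-engine simulation: drive a 2D point
    from a fixed start toward the origin."""
    x, cost = np.array([2.0, -1.0]), 0.0
    for u in u_seq:
        x = x + dt * u                             # trivial integrator dynamics
        cost += np.dot(x, x) + 0.01 * np.dot(u, u)
    return cost

u = np.zeros((30, 2))
for _ in range(20):                                # receding-horizon refinement
    u = path_integral_update(u, rollout_cost)
```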

8.
Sci Robot ; 7(66): eabp9742, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35613301

ABSTRACT

This article presents the core technologies and deployment strategies of Team CERBERUS that enabled our winning run in the DARPA Subterranean Challenge finals. CERBERUS is a robotic system-of-systems involving walking and flying robots that combine resilient autonomy with mapping and navigation capabilities to explore complex underground environments.


Subject(s)
Robotics
9.
Sci Robot ; 7(62): eabk2822, 2022 Jan 19.
Article in English | MEDLINE | ID: mdl-35044798

ABSTRACT

Legged robots that can operate autonomously in remote and hazardous environments will greatly increase opportunities for exploration into underexplored areas. Exteroceptive perception is crucial for fast and energy-efficient locomotion: Perceiving the terrain before making contact with it enables planning and adaptation of the gait ahead of time to maintain speed and stability. However, using exteroceptive perception robustly for locomotion has remained a grand challenge in robotics. Snow, vegetation, and water visually appear as obstacles on which the robot cannot step or are missing altogether due to high reflectance. In addition, depth perception can degrade due to difficult lighting, dust, fog, reflective or transparent surfaces, sensor occlusion, and more. For this reason, the most robust and general solutions to legged locomotion to date rely solely on proprioception. This severely limits locomotion speed because the robot has to physically feel out the terrain before adapting its gait accordingly. Here, we present a robust and general solution to integrating exteroceptive and proprioceptive perception for legged locomotion. We leverage an attention-based recurrent encoder that integrates proprioceptive and exteroceptive input. The encoder is trained end to end and learns to seamlessly combine the different perception modalities without resorting to heuristics. The result is a legged locomotion controller with high robustness and speed. The controller was tested in a variety of challenging natural and urban environments over multiple seasons and completed an hour-long hike in the Alps in the time recommended for human hikers.
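
The fusion idea, learning when to trust exteroceptive terrain samples and when to fall back on proprioception, can be caricatured as a learned gate that attenuates the exteroceptive input before a recurrent update. The sketch below uses random placeholder weights and arbitrary dimensions; it illustrates the fusion pattern only, not the paper's attention-based network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedFusionEncoder:
    """Illustrative fusion of proprioceptive and exteroceptive features:
    a gate (random placeholder weights here) decides, per feature, how much
    of the exteroceptive signal to pass through to a recurrent belief state."""
    def __init__(self, proprio_dim=48, extero_dim=52, hidden_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_gate = rng.normal(scale=0.1, size=(extero_dim, proprio_dim + extero_dim))
        self.W_out = rng.normal(scale=0.1, size=(hidden_dim, proprio_dim + extero_dim))
        self.h = np.zeros(hidden_dim)                       # recurrent belief state

    def step(self, proprio, extero):
        gate = sigmoid(self.W_gate @ np.concatenate([proprio, extero]))
        fused = np.concatenate([proprio, gate * extero])    # attenuate unreliable terrain samples
        self.h = np.tanh(self.W_out @ fused + self.h)       # simple recurrent update
        return self.h

enc = GatedFusionEncoder()
belief = enc.step(proprio=np.zeros(48), extero=np.random.rand(52))
```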


Subject(s)
Locomotion/physiology; Robotics/instrumentation; Biomimetic Materials; Biomimetics; Computer Simulation; Environment; Gait/physiology; Humans; Machine Learning; Models, Biological; Neural Networks, Computer; Proprioception/physiology; Robotics/statistics & numerical data; Seasons; Walking/physiology
10.
Sci Robot ; 5(47), 2020 Oct 21.
Article in English | MEDLINE | ID: mdl-33087482

ABSTRACT

Legged locomotion can extend the operational domain of robots to some of the most challenging environments on Earth. However, conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes. These designs have increased in complexity but fallen short of the generality and robustness of animal locomotion. Here, we present a robust controller for blind quadrupedal locomotion in challenging natural environments. Our approach incorporates proprioceptive feedback in locomotion control and demonstrates zero-shot generalization from simulation to natural environments. The controller is trained by reinforcement learning in simulation and is driven by a neural network policy that acts on a stream of proprioceptive signals. It retains its robustness under conditions that were never encountered during training: deformable terrains such as mud and snow, dynamic footholds such as rubble, and overground impediments such as thick vegetation and gushing water. The presented work indicates that robust locomotion in natural environments can be achieved by training in simple domains.
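
The controller's interface, a policy acting on a stream of proprioceptive signals, can be sketched as a fixed-length observation history fed to a stand-in policy function. The history length, feature dimension, and policy below are placeholders chosen for illustration.

```python
from collections import deque
import numpy as np

HISTORY_LEN = 50    # number of past proprioceptive samples kept (assumed value)
PROPRIO_DIM = 48    # e.g. joint positions/velocities, base orientation, last action

history = deque([np.zeros(PROPRIO_DIM)] * HISTORY_LEN, maxlen=HISTORY_LEN)

def policy(obs_history):
    """Stand-in for the learned neural network policy: maps the stacked
    proprioceptive history to joint position targets for 12 actuators."""
    x = np.concatenate(list(obs_history))
    return 0.1 * np.tanh(np.random.randn(12))  # placeholder; a real policy would use x

# Control loop: append the newest proprioceptive reading, then query the policy.
for step in range(3):
    proprio = np.random.randn(PROPRIO_DIM)     # stand-in for sensor readings
    history.append(proprio)
    joint_targets = policy(history)
```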

11.
Sci Robot ; 4(26), 2019 Jan 16.
Article in English | MEDLINE | ID: mdl-33137755

ABSTRACT

Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, reinforcement learning research for legged robots has so far been limited mainly to simulation, and only a few comparatively simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. In the present work, we introduce a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes. The approach is applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than before, and recovering from falling even in complex configurations.
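
At a high level, the sim-to-real recipe is to collect rollouts from many randomized simulated robots, update the policy on that data, and only then deploy the frozen policy on hardware. The loop below is a schematic of that workflow with placeholder stubs (simulate_rollouts, update_policy); it is not the authors' training code.

```python
def train_in_simulation(policy, n_iterations=1000, n_envs=2048):
    """Schematic sim-to-real training loop: all data comes from simulation,
    typically with randomized dynamics so the policy transfers to hardware."""
    for it in range(n_iterations):
        batch = simulate_rollouts(policy, n_envs=n_envs, randomize_dynamics=True)
        policy = update_policy(policy, batch)   # e.g. an on-policy RL update
    return policy                               # frozen policy for hardware deployment

# Placeholder stubs so the sketch is self-contained; real versions would wrap a
# physics simulator and an RL algorithm.
def simulate_rollouts(policy, n_envs, randomize_dynamics):
    return {"observations": [], "actions": [], "rewards": []}

def update_policy(policy, batch):
    return policy

policy = train_in_simulation(policy=None, n_iterations=3, n_envs=4)
```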

12.
IEEE Trans Vis Comput Graph ; 24(1): 298-308, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28866560

ABSTRACT

Labeling data instances is an important task in machine learning and visual analytics. Both fields provide a broad set of labeling strategies, whereby machine learning (in particular, active learning) follows a more model-centered approach and visual analytics employs more user-centered approaches (visual-interactive labeling). Both approaches have individual strengths and weaknesses. In this work, we conduct a three-part experiment to assess and compare the performance of these different labeling strategies. In our study, we (1) identify different visual labeling strategies for user-centered labeling, (2) investigate the strengths and weaknesses of labeling strategies for different labeling tasks and task complexities, and (3) shed light on the effect of using different visual encodings to guide the visual-interactive labeling process. We further compare labeling of single versus multiple instances at a time and quantify the impact on efficiency. We systematically compare the performance of visual-interactive labeling with that of active learning. Our main findings are that visual-interactive labeling can outperform active learning, provided that dimension reduction separates the class distributions well. Moreover, using dimension reduction in combination with additional visual encodings that expose the internal state of the learning model improves the performance of visual-interactive labeling.
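
For contrast with visual-interactive labeling, the model-centered baseline, active learning, typically queries labels for the instances the current model is least certain about. Below is a generic uncertainty-sampling loop on synthetic data using scikit-learn; the dataset, seed set, and query budget are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # synthetic oracle labels

# Seed the labeled pool with a few instances of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(20):                                   # query budget
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)             # least-confident sampling
    pick = unlabeled[int(np.argmax(uncertainty))]
    labeled.append(pick)                              # "ask the oracle" for this label
    unlabeled.remove(pick)

print("accuracy on all data:", round(model.score(X, y), 3))
```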

13.
Front Neurosci ; 7: 275, 2013.
Article in English | MEDLINE | ID: mdl-24478619

ABSTRACT

Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes combining a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to enable fast terrain reconstruction. Stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics of a scene. An adaptive temporal filter on the sensor output allows reliable reconstruction of 3D terrain surfaces. Laser stripe extraction at pulsing frequencies of up to 500 Hz was achieved with a 3-mW line laser at a distance of 45 cm, using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid-prototyped terrain samples were successfully reconstructed with an accuracy of 2 mm.
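
The event-based extraction can be illustrated with a simple temporal filter: events that arrive phase-locked to the laser's pulse period are kept as stripe candidates, while asynchronous background events are rejected. The snippet below is a toy version on synthetic events; the fixed tolerance and phase handling are assumptions, not the paper's adaptive filter.

```python
import numpy as np

PULSE_HZ = 500.0                      # laser pulsing frequency reported in the paper
PERIOD = 1.0 / PULSE_HZ
TOLERANCE = 0.1 * PERIOD              # accept events within 10% of the pulse phase (assumed)

def stripe_events(timestamps, pixels, phase_offset=0.0):
    """Toy temporal filter: keep events whose timestamps are phase-locked to the
    laser pulses; asynchronous background activity is rejected."""
    phase = (timestamps - phase_offset) % PERIOD
    mask = np.minimum(phase, PERIOD - phase) < TOLERANCE
    return pixels[mask]

# Synthetic event stream: some events locked to the pulses, plus random noise.
t_laser = np.arange(0, 0.1, PERIOD) + np.random.normal(0, 1e-5, size=50)
t_noise = np.random.uniform(0, 0.1, size=200)
timestamps = np.concatenate([t_laser, t_noise])
pixels = np.random.randint(0, 128, size=(timestamps.size, 2))
candidates = stripe_events(timestamps, pixels)
```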
