Optimal and Learning Control

Our goal is to enable the automatic generation of behaviors for robotic applications that require stable and safe control, are high-dimensional, and have nonlinear and switching dynamics, such as walking and manipulation. To this end, we propose to use a single optimization formulation that combines the planning and control problems while exploiting the robot dynamics to complete a given task. The problem is posed as an optimal control problem in which the desired behavior of the robot is encoded in a cost function, and our approach leverages information from both approximate model knowledge and samples of the real system. Preliminary results include robotic tasks solved using an iterative optimal control stage performed in simulation and/or a reinforcement learning phase that refines the controller on the real robot.
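
For reference, a generic continuous-time formulation of such a problem (standard notation, not specific to our algorithms) minimizes a terminal cost Φ plus a running cost L subject to the robot dynamics f:

$$
\min_{u(\cdot)} \; \Phi\big(x(T)\big) + \int_{0}^{T} L\big(x(t), u(t)\big)\, dt
\qquad \text{s.t.} \quad \dot{x}(t) = f\big(x(t), u(t)\big), \quad x(0) = x_0
$$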

Selected publications:

Farbod Farshidian, Michael Neunert and Jonas Buchli (2014). Learning of Closed-Loop Motion Control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014).

Cedric de Crousaz, Farbod Farshidian and Jonas Buchli (2014). Aggressive Optimal Control for Agile Flight with a Slung Load. In IROS 2014 Workshop on Machine Learning in Planning and Control of Robot Motion.

Contact: Farbod Farshidian, Michael Neunert, Alexander Winkler and Diego Pardo.

Research partner: Vijay Kumar Lab

Machine Learning for Robotics

We develop new machine learning and control methods building on our previous results on Path Integral Reinforcement Learning. The goal is highly efficient and scalable learning algorithms for robotic control applications.
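
As an illustration of the path-integral idea, the sketch below performs a PI²-style parameter update: noisy rollouts are weighted by the exponential of their negative cost and averaged. The cost normalization, noise model, and all names are illustrative assumptions, not the algorithm from the publication listed below.

<code python>
import numpy as np

# Minimal path-integral style policy update (in the spirit of PI^2).
# theta: policy parameters; rollout_cost: user-supplied cost of one noisy rollout.
# All names and constants here are illustrative assumptions.

def path_integral_update(theta, rollout_cost, n_rollouts=20, noise_std=0.1, temperature=1.0):
    # Sample parameter perturbations and evaluate the cost of each noisy rollout.
    eps = noise_std * np.random.randn(n_rollouts, theta.size)
    costs = np.array([rollout_cost(theta + e) for e in eps])

    # Exponentiate the negative normalized costs: low-cost rollouts get high weight.
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
    weights = np.exp(-s / temperature)
    weights /= weights.sum()

    # Update the parameters with the probability-weighted average perturbation.
    return theta + weights @ eps

# Example: minimize a toy quadratic "trajectory cost" over policy parameters.
theta = np.ones(5)
for _ in range(100):
    theta = path_integral_update(theta, rollout_cost=lambda th: np.sum(th ** 2))
print(theta)  # parameters move toward zero
</code>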

Selected publications:

Farbod Farshidian and Jonas Buchli (2013). Path Integral Stochastic Optimal Control for Reinforcement Learning. In The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), pp. 5-8.

Contact: Farbod Farshidian

Black-Box Optimization

We investigate what it takes to learn efficiently from a minimum of interaction with the real system. We have developed a highly efficient black-box optimization algorithm (ROCK*) that requires significantly fewer learning trials on standard benchmarks and robotic closed-loop control problems.
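
To make the problem setting concrete, the sketch below optimizes policy parameters purely from rollout costs using a simple cross-entropy method; it illustrates generic black-box policy search and is not the ROCK* algorithm.

<code python>
import numpy as np

# Generic black-box policy search via the cross-entropy method.
# This only illustrates the problem setting; it is NOT the ROCK* algorithm.

def cross_entropy_search(objective, dim, iters=50, pop=32, elite_frac=0.25, init_std=1.0):
    mean, std = np.zeros(dim), init_std * np.ones(dim)
    n_elite = max(1, int(elite_frac * pop))
    for _ in range(iters):
        # Sample candidate policy parameters and evaluate each with a rollout cost.
        samples = mean + std * np.random.randn(pop, dim)
        scores = np.array([objective(s) for s in samples])
        # Refit the sampling distribution to the lowest-cost candidates.
        elite = samples[np.argsort(scores)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Example: the "rollout cost" is a toy quadratic; on a robot it would be a real trial.
best = cross_entropy_search(lambda p: np.sum((p - 3.0) ** 2), dim=4)
print(best)  # approaches [3, 3, 3, 3]
</code>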

Selected publications:

J. Hwangbo, C. Gehring, H. Sommer, R. Siegwart and J. Buchli (2014). ROCK* - Efficient Black-box Optimization for Policy Learning. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots.

Source code: ROCK* Example Implementation

Legged Mobile Manipulation

In order to enable walking robots to manipulate their environment in service tasks, we are developing methods for whole-body control and motion planning, including whole-body impedance control. These methods are needed to fully exploit the capabilities of robots with arms and legs for legged mobile manipulation.
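
As a minimal illustration of the impedance control idea (not our whole-body controller), the sketch below renders a virtual spring-damper at the end effector and maps it to joint torques through the Jacobian; the gains, Jacobian, and gravity term are placeholder assumptions.

<code python>
import numpy as np

# Minimal task-space impedance control law: a virtual spring-damper at the end
# effector mapped to joint torques through the manipulator Jacobian.
# Gains and the Jacobian/gravity terms are placeholders, not our whole-body controller.

def impedance_torques(x, x_des, xdot, xdot_des, jacobian, gravity_comp,
                      K=np.diag([300.0] * 3), D=np.diag([30.0] * 3)):
    # Virtual wrench from position and velocity errors (spring-damper behavior).
    f = K @ (x_des - x) + D @ (xdot_des - xdot)
    # Map the task-space wrench to joint torques and add gravity compensation.
    return jacobian.T @ f + gravity_comp

# Toy usage with a fictitious 3-DoF arm (identity Jacobian, no gravity term).
tau = impedance_torques(x=np.zeros(3), x_des=np.array([0.1, 0.0, 0.2]),
                        xdot=np.zeros(3), xdot_des=np.zeros(3),
                        jacobian=np.eye(3), gravity_comp=np.zeros(3))
print(tau)
</code>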

Contact: Alexander Winkler, Michael Neunert, Farbod Farshidian and Diego Pardo.

Research partner: Dynamic Legged Systems Lab, Istituto Italiano di Tecnologia

Paleoforensics - Use-wear analysis

This project is the first attempt to introduce robotics into archaeological micro-wear experiments. Such experiments are widely used in anthropology and archaeology to study human tool use in prehistory. The idea is to use artificial tools to cut, scrape, or pierce under well-documented conditions. The resulting microscopic wear pattern is then compared to findings on artifacts from real sites. The main advantage of using robots instead of human subjects for such experiments lies in the higher control over task execution and the ability to generate reference pieces to an extent that is unfeasible for humans. Currently, we work with a KUKA LWR iiwa as the experimental platform.

Contact: Johannes Pfleging

Research partner: Dr. Radu Iovita, MONREPOS Archaeological Research Centre and Museum

Control of Exoskeletons

Within the framework of the FP7 European project BALANCE, we investigate control methods for exoskeletons. The overall goal of the project is to realize an exoskeleton that improves the balance performance of humans while standing and walking. While a large part of the project effort focuses on better understanding how human postural control is structured, we focus on using this knowledge to guide the development of human-collaborative controllers for the exoskeleton robot. The two main control modes are: 1) transparent: in non-critical balance situations, the exoskeleton should not apply any force to the wearer; and 2) active: when the exoskeleton detects an imminent loss of balance, it reacts by applying torques to the human joints or even moving the user's leg to a more appropriate position in order to maintain balance.
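
A minimal sketch of this two-mode logic is given below; the balance-detection flag, gains, and signal names are illustrative assumptions, not the BALANCE project's controller.

<code python>
import numpy as np

# Sketch of the two control modes described above. The balance test, gains, and
# signal names are illustrative assumptions, not the actual BALANCE controller.

def exoskeleton_torques(joint_pos, joint_vel, safe_pos, balance_critical,
                        kp=80.0, kd=8.0):
    if not balance_critical:
        # Transparent mode: command (approximately) zero interaction torque.
        return np.zeros_like(joint_pos)
    # Active mode: apply corrective torques that drive the joints toward a
    # configuration judged better for keeping balance.
    return kp * (safe_pos - joint_pos) - kd * joint_vel

# Toy usage: an imminent loss of balance triggers corrective torques.
tau = exoskeleton_torques(joint_pos=np.array([0.2, -0.1]),
                          joint_vel=np.array([0.5, 0.0]),
                          safe_pos=np.array([0.0, 0.0]),
                          balance_critical=True)
print(tau)
</code>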

Contact: T. Boaventura (tboaventura (at) ethz (dot) ch)

Research partner: BALANCE

Architectural Robotics & Digital Fabrication

The Agile & Dexterous Robotics Lab is part of the NCCR Digital Fabrication, funded by the Swiss National Science Foundation. Our work focuses on developing computational methods that facilitate the use of robots in building construction. Tasks include: 1) automatic controller generation for articulated robots, and 2) learning control of non-standard mobile manipulation tasks in unstructured environments. The research platform for this work is DimRob, which was first developed by the Chair of Architecture and Digital Fabrication at ETH. ADRL joined the project in 2013 to retrofit the robot and make it capable of autonomous, intelligent operation. Our future research will enable this robot to complete complex real-world fabrication tasks.

Selected publications:

V. Helm, S. Ercan, F. Gramazio and M. Kohler (2012). Mobile Robotic Fabrication on Construction Sites: DimRob. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4335-4341.

Contact: Markus Gifthaler and Timothy Sandy

Research partner: Gramazio & Kohler Research, NCCR Digital Fabrication
