Publications
GPU-Accelerated Next-Best-View Coverage of Articulated Scenes. Stefan Oßwald and Maren Bennewitz. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.

Abstract: Next-best-view algorithms are commonly used for covering known scenes, for example in search, maintenance, and mapping tasks. In this paper, we consider the problem of planning a strategy for covering articulated environments where the robot also has to manipulate objects to inspect obstructed areas. This problem is particularly challenging due to the many degrees of freedom resulting from the articulation. We propose to exploit graphics processing units present in many embedded devices to parallelize the computations of a greedy next-best-view approach. We implemented algorithms for costmap computation, path planning, as well as simulation and evaluation of viewpoint candidates in OpenGL for Embedded Systems and benchmarked the implementations on multiple device classes ranging from smartphones to multi-GPU servers. We introduce a heuristic for estimating a utility map from images rendered with strategically placed spherical cameras and show in simulation experiments that robots can successfully explore complex articulated scenes with our system.
```bibtex
@InProceedings{osswald18iros,
  title     = {{GPU}-Accelerated Next-Best-View Coverage of Articulated Scenes},
  author    = {Stefan O{\ss}wald and Maren Bennewitz},
  booktitle = {Proc.\ of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2018}
}
```
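At its core, the method is a greedy next-best-view loop: repeatedly score every remaining viewpoint candidate by the coverage it would add relative to its cost, and commit to the best one. The sketch below illustrates that loop on a toy scene with precomputed visibility; it is a minimal CPU illustration, not the paper's GPU implementation, and the scene size, visibility model, travel costs, and coverage target are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 200 surface patches; each of 40 candidate viewpoints sees a
# random subset of them (scene, visibility, and costs are invented).
n_patches, n_views = 200, 40
visibility = rng.random((n_views, n_patches)) < 0.15   # visibility[v, p]
travel_cost = rng.uniform(1.0, 5.0, n_views)           # hypothetical path costs

def greedy_next_best_views(target=0.95):
    covered = np.zeros(n_patches, dtype=bool)
    remaining = set(range(n_views))
    plan = []
    while covered.mean() < target and remaining:
        # Score = newly covered patches per unit travel cost; pick the best.
        best = max(remaining,
                   key=lambda v: (visibility[v] & ~covered).sum() / travel_cost[v])
        if (visibility[best] & ~covered).sum() == 0:
            break                         # no remaining view adds coverage
        covered |= visibility[best]
        remaining.discard(best)
        plan.append(best)
    return plan, covered.mean()

plan, ratio = greedy_next_best_views()
print(f"{len(plan)} views cover {ratio:.0%} of the scene")
```

The paper's contribution is running the expensive parts of this loop (costmap computation, path planning, simulating candidate views) in OpenGL ES shaders rather than on the CPU.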
A Combined RGB and Depth Descriptor for SLAM with Humanoids. Rasha Sheikh, Stefan Oßwald, and Maren Bennewitz. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.

Abstract: In this paper, we present a visual simultaneous localization and mapping (SLAM) system for humanoid robots. We introduce a new binary descriptor called DLab that exploits the combined information of color, depth, and intensity to achieve robustness with respect to uniqueness, reproducibility, and stability. We use DLab within ORB-SLAM, where we replaced the place recognition module with a modification of FAB-MAP that works with newly built codebooks using our binary descriptor. In experiments carried out in simulation and with a real Nao humanoid equipped with an RGB-D camera, we show that DLab outperforms other descriptors. Its application to feature tracking and place recognition reveals that the new descriptor reliably tracks features even in sequences with severely blurred images and correctly identifies a higher percentage of similar images. As a result, our new visual SLAM system has a lower absolute trajectory error than ORB-SLAM and is able to accurately track the robot's trajectory.
```bibtex
@InProceedings{sheikh18iros,
  title     = {A Combined {RGB} and Depth Descriptor for {SLAM} with Humanoids},
  author    = {Rasha Sheikh and Stefan O{\ss}wald and Maren Bennewitz},
  booktitle = {Proc.\ of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2018}
}
```
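The abstract does not spell out DLab's sampling pattern, but binary descriptors of this family are typically built from pairwise comparisons inside an image patch. The sketch below shows one hypothetical way to mix color and depth tests into a single bit string, BRIEF-style; the test layout, patch size, and channel schedule are invented for illustration and are not the DLab design.

```python
import numpy as np

rng = np.random.default_rng(42)
# 256 random test-point pairs inside a 32x32 patch (BRIEF-style layout).
PAIRS = rng.integers(0, 32, size=(256, 2, 2))

def binary_rgbd_descriptor(lab_patch, depth_patch):
    """Toy BRIEF-like descriptor mixing color and depth comparisons.

    lab_patch:   (32, 32, 3) float array in L*a*b* space
    depth_patch: (32, 32) float array of depth values
    Returns 256 bits packed into a 32-byte vector.
    """
    bits = np.empty(256, dtype=bool)
    for i, ((r1, c1), (r2, c2)) in enumerate(PAIRS):
        if i % 4 == 3:   # every fourth test compares depth instead of color
            bits[i] = depth_patch[r1, c1] < depth_patch[r2, c2]
        else:            # cycle through the L, a, b channels for the rest
            ch = i % 4
            bits[i] = lab_patch[r1, c1, ch] < lab_patch[r2, c2, ch]
    return np.packbits(bits)

def hamming(d1, d2):
    """Matching distance between two packed descriptors."""
    return int(np.unpackbits(d1 ^ d2).sum())
```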
Efficient Coverage of 3D Environments with Humanoid Robots Using Inverse Reachability Maps. Stefan Oßwald, Philipp Karkowski, and Maren Bennewitz. Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2017. DOI: 10.1109/HUMANOIDS.2017.8239550

Abstract: Covering a known 3D environment with a robot's camera is a commonly required task, for example in inspection and surveillance, mapping, or object search applications. In addition to the problem of finding a complete and efficient set of viewpoints for covering the whole environment, humanoid robots also need to observe balance, energy, and kinematic constraints for reaching the desired view poses. In this paper, we approach this high-dimensional planning problem by introducing a novel inverse reachability map representation that can be used for fast pose generation and combine it with a next-best-view algorithm for covering a known 3D environment. We implemented our approach in ROS and tested it with a Nao robot on both simulated and real-world scenes. The experiments show that our approach enables the humanoid to efficiently cover room-sized environments with its camera.
```bibtex
@InProceedings{osswald17humanoids,
  title     = {Efficient Coverage of {3D} Environments with Humanoid Robots Using Inverse Reachability Maps},
  author    = {Stefan O{\ss}wald and Philipp Karkowski and Maren Bennewitz},
  booktitle = {Proc.\ of the {IEEE-RAS} International Conference on Humanoid Robots (HUMANOIDS)},
  year      = {2017},
  pages     = {151--157},
  doi       = {10.1109/HUMANOIDS.2017.8239550}
}
```
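An inverse reachability map inverts sampled forward kinematics: instead of asking which camera poses a given stance allows, it stores where the base was relative to the camera for each sampled posture, so candidate stance poses for any desired view pose can be generated by a cheap lookup. A minimal 2D sketch of that idea, assuming poses are (x, y, yaw) tuples and that stable whole-body samples come from elsewhere; both are simplifications of the paper's full 6D setting.

```python
import numpy as np

def build_irm(forward_samples):
    """Collect base poses expressed in the camera frame.

    forward_samples: iterable of (base_pose, camera_pose) pairs, each an
    (x, y, yaw) tuple, e.g. from sampling stable whole-body postures in
    simulation.  Inverting the forward map gives, for any desired camera
    pose, a set of candidate stance poses that reach it.
    """
    relative = []
    for (bx, by, byaw), (cx, cy, cyaw) in forward_samples:
        c, s = np.cos(-cyaw), np.sin(-cyaw)
        dx, dy = bx - cx, by - cy
        relative.append((c * dx - s * dy, s * dx + c * dy, byaw - cyaw))
    return relative

def base_candidates(irm, view_pose):
    """Transform the stored relative base poses into the frame of a view."""
    vx, vy, vyaw = view_pose
    c, s = np.cos(vyaw), np.sin(vyaw)
    return [(vx + c * rx - s * ry, vy + s * rx + c * ry, vyaw + ryaw)
            for rx, ry, ryaw in irm]
```

Each candidate would then still be checked against the map for collisions and foothold validity before being handed to the next-best-view planner.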
Real-Time Footstep Planning in 3D Environments. Philipp Karkowski, Stefan Oßwald, and Maren Bennewitz. Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2016. DOI: 10.1109/HUMANOIDS.2016.7803256
Abstract: A variety of approaches exist that tackle the problem of humanoid locomotion. The spectrum ranges from dynamic walking controllers that allow fast walking to systems that plan longer footstep paths through complicated scenes. Simple walking controllers do not guarantee collision-free steps, whereas most existing footstep planners are not capable of providing results in real time. Thus, these methods cannot be used, not even in combination, to react to sudden changes in the environment. In this paper, we propose a new fast search method that combines A* with an adaptive 3D action set. When expanding a node, we systematically search for suitable footsteps by taking into account height information. As we show in various experiments, our approach outperforms standard A*-based footstep planning in both run time and path cost and, combined with an efficient map segmentation, finds valid footstep plans in 3D environments in under 50 ms.
```bibtex
@InProceedings{karkowski16humanoids,
  author    = {Philipp Karkowski and Stefan O{\ss}wald and Maren Bennewitz},
  title     = {Real-Time Footstep Planning in {3D} Environments},
  booktitle = {Proc.\ of the {IEEE-RAS} Int.\ Conf.\ on Humanoid Robots (HUMANOIDS)},
  year      = {2016},
  pages     = {69--74},
  doi       = {10.1109/HUMANOIDS.2016.7803256}
}
```
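The search itself is standard A* over footstep states; the paper's contribution is adapting the action set to the 3D height map during node expansion. The sketch below shows the surrounding A* machinery with a fixed action set and a user-supplied `step_valid` callback standing in for the height-map check; the displacements, step costs, and tolerances are invented.

```python
import heapq
import math

# Hypothetical fixed (dx, dy, dyaw) footstep displacements in the foot frame.
ACTIONS = [(0.08, 0.0, 0.0), (0.05, 0.05, 0.0), (0.05, -0.05, 0.0),
           (0.0, 0.0, 0.4), (0.0, 0.0, -0.4)]

def plan_footsteps(start, goal, step_valid, max_iter=20000):
    """A* over footstep states (x, y, yaw).

    step_valid(state) should check the height map for a safe foothold;
    the paper adapts the action set online, this sketch keeps it fixed.
    """
    def h(s):                       # admissible straight-line heuristic
        return math.hypot(goal[0] - s[0], goal[1] - s[1])
    def key(s):                     # discretize states for the closed set
        return (round(s[0], 2), round(s[1], 2), round(s[2], 1))

    open_set = [(h(start), 0.0, start, [start])]
    closed = set()
    while open_set and max_iter:
        max_iter -= 1
        _, g, s, path = heapq.heappop(open_set)
        if h(s) < 0.1:              # close enough to the goal position
            return path
        if key(s) in closed:
            continue
        closed.add(key(s))
        for dx, dy, dyaw in ACTIONS:
            c, si = math.cos(s[2]), math.sin(s[2])
            nxt = (s[0] + c * dx - si * dy, s[1] + si * dx + c * dy, s[2] + dyaw)
            if step_valid(nxt):
                cost = g + math.hypot(dx, dy) + 0.1   # distance + per-step cost
                heapq.heappush(open_set, (cost + h(nxt), cost, nxt, path + [nxt]))
    return None
```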
Foresighted Navigation Through Cluttered Environments. Peter Regier, Stefan Oßwald, Philipp Karkowski, and Maren Bennewitz. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016. DOI: 10.1109/IROS.2016.7759234
Abstract: In this paper, we introduce an approach to efficient robot navigation through cluttered indoor environments. We propose to estimate local obstacle densities based on already detected objects and use them to predict traversal costs corresponding to potential obstacles in regions not yet observable by the robot's sensors. By taking into account the predicted costs for path planning, the robot is then able to navigate in a more foresighted manner and reduces the risk of getting stuck in cluttered regions. We thoroughly evaluated our approach in simulated and real-world experiments. As the experimental results demonstrate, our method enables the robot to efficiently navigate through environments containing cluttered regions and achieves significantly shorter completion times compared to a standard approach not using any prediction.
```bibtex
@InProceedings{regier16iros,
  author    = {Peter Regier and Stefan O{\ss}wald and Philipp Karkowski and Maren Bennewitz},
  title     = {Foresighted Navigation Through Cluttered Environments},
  booktitle = {Proc.\ of the {IEEE/RSJ} Int.\ Conf.\ on Intelligent Robots and Systems (IROS)},
  year      = {2016},
  pages     = {1437--1442},
  doi       = {10.1109/IROS.2016.7759234}
}
```
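One way to read the core idea: measure local obstacle density where the robot has already looked, and charge unobserved cells an extra traversal cost proportional to the density nearby, so the planner prefers routes through presumably uncluttered space. A rough sketch under those assumptions; the window size and weighting are made up, and the paper's actual cost model may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def predicted_costmap(obstacle_map, observed_mask, window=15, weight=5.0):
    """Inflate planning costs in unobserved regions by local clutter density.

    obstacle_map:  binary grid, 1 where an obstacle was detected
    observed_mask: boolean grid, True where the sensors have already seen
    """
    # Local obstacle density, averaged over a window around each cell.
    density = uniform_filter(obstacle_map.astype(float), size=window)
    cost = np.ones_like(density)                        # base traversal cost
    cost[observed_mask & (obstacle_map > 0)] = np.inf   # known obstacles
    unknown = ~observed_mask
    cost[unknown] += weight * density[unknown]          # predicted extra cost
    return cost
```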
Speeding-Up Robot Exploration by Exploiting Background Information. Stefan Oßwald, Maren Bennewitz, Wolfram Burgard, and Cyrill Stachniss. IEEE Robotics and Automation Letters (RA-L), vol. 1, no. 2, pp. 716–723, 2016. Presented at ICRA 2016. DOI: 10.1109/LRA.2016.2520560
Abstract: The ability to autonomously learn a model of an environment is an important capability of a mobile robot. In this paper, we investigate the problem of exploring a scene given background information in the form of a topo-metric graph of the environment. Our method is relevant for several real-world applications in which the rough structure of the environment is known beforehand. We present an approach that exploits such background information and enables a robot to cover the environment with its sensors faster compared to a greedy exploration system without this information. We implemented our exploration system in ROS and evaluated it in different environments. As the experimental results demonstrate, our proposed method significantly reduces the overall trajectory length needed to cover the environment with the robot's sensors and thus yields a more efficient exploration strategy compared to state-of-the-art greedy exploration, if the additional information is available.
```bibtex
@Article{osswald16ral,
  author  = {Stefan O{\ss}wald and Maren Bennewitz and Wolfram Burgard and Cyrill Stachniss},
  title   = {Speeding-Up Robot Exploration by Exploiting Background Information},
  journal = {{IEEE} Robotics and Automation Letters (RA-L)},
  year    = {2016},
  volume  = {1},
  number  = {2},
  pages   = {716--723},
  doi     = {10.1109/LRA.2016.2520560},
  issn    = {2377-3766}
}
```
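With the rough structure given as a topo-metric graph, deciding where to go next becomes a graph problem rather than pure frontier chasing. As a stand-in for the paper's planner, the sketch below orders the graph nodes as a nearest-neighbor tour using Dijkstra distances; networkx is assumed, and the greedy tour construction is a deliberate simplification.

```python
import networkx as nx

def exploration_tour(graph, start):
    """Order the nodes of a topo-metric background graph for coverage.

    graph: networkx.Graph whose nodes are rough places and whose edge
    attribute 'weight' is metric distance between them.
    """
    unvisited = set(graph.nodes) - {start}
    tour, current = [start], start
    while unvisited:
        # Metric distances from the current place to everything else.
        dist = nx.single_source_dijkstra_path_length(graph, current,
                                                     weight="weight")
        nxt = min(unvisited, key=lambda n: dist.get(n, float("inf")))
        tour.append(nxt)
        unvisited.discard(nxt)
        current = nxt
    return tour
```

The robot would then explore along this tour, falling back to local frontier-based behavior wherever the graph and the real scene disagree.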
Learning to Give Route Directions from Human Demonstrations. Stefan Oßwald, Henrik Kretzschmar, Wolfram Burgard, and Cyrill Stachniss. Proceedings of the IEEE International Conference on Robotics & Automation (ICRA), Hong Kong, China, 2014. DOI: 10.1109/ICRA.2014.6907334
Abstract: For several applications, robots and other computer systems must provide route descriptions to humans. These descriptions should be natural and intuitive for the human users. In this paper, we present an algorithm that learns how to provide good route descriptions from a corpus of human-written directions. Using inverse reinforcement learning, our algorithm learns how to select the information for the description depending on the context of the route segment. The algorithm then uses the learned policy to generate directions that imitate the style of the descriptions provided by humans, thus taking into account personal as well as cultural preferences and special requirements of the particular user group providing the learning demonstrations. We evaluate our approach in a user study and show that the directions generated by our policy sound similar to human-given directions and substantially more natural than directions provided by commercial web services.
```bibtex
@InProceedings{osswald14icra,
  author    = {Stefan O{\ss}wald and Henrik Kretzschmar and Wolfram Burgard and Cyrill Stachniss},
  title     = {Learning to Give Route Directions from Human Demonstrations},
  booktitle = {Proc.\ of the {IEEE} International Conference on Robotics \& Automation (ICRA)},
  year      = {2014},
  address   = {Hong Kong, China},
  pages     = {3303--3308},
  doi       = {10.1109/ICRA.2014.6907334},
  url       = {http://ais.informatik.uni-freiburg.de/publications/papers/osswald14icra.pdf}
}
```
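Once inverse reinforcement learning has produced reward weights, generating a direction reduces to scoring candidate instruction types against a route segment's features. A toy sketch of that final selection step; the feature names, instruction types, and weight values are all invented, and the paper's feature design is considerably richer.

```python
import numpy as np

# Hypothetical per-segment features and learned reward weights.  In the
# paper the weights come from IRL on human-written directions; these
# numbers are made up for illustration.
FEATURES = ["has_landmark", "is_turn", "segment_length", "at_junction"]
W = {"mention_landmark":    np.array([2.0, 0.2, 0.0, 0.5]),
     "give_turn_direction": np.array([0.1, 2.5, 0.0, 1.0]),
     "state_distance":      np.array([0.0, 0.0, 1.5, 0.1])}

def describe_segment(phi):
    """Pick the instruction type with the highest learned reward w . phi."""
    return max(W, key=lambda action: float(W[action] @ phi))

# A segment with a landmark and moderate length -> landmark-based instruction.
print(describe_segment(np.array([1.0, 0.0, 0.3, 0.0])))  # mention_landmark
```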
Monte Carlo Localization for Humanoid Robot Navigation in Complex Indoor Environments. Armin Hornung, Stefan Oßwald, Daniel Maier, and Maren Bennewitz. International Journal of Humanoid Robotics (IJHR), vol. 11, no. 2, 2014. DOI: 10.1142/S0219843614410023
Abstract: Accurate and reliable localization is a prerequisite for autonomously performing high-level tasks with humanoid robots. In this article, we present a probabilistic localization method for humanoid robots navigating in arbitrary complex indoor environments using only onboard sensing, which is a challenging task. Inaccurate motion execution of biped robots leads to an uncertain estimate of odometry, and their limited payload constrains perception to observations from lightweight and typically noisy sensors. Additionally, humanoids do not walk on flat ground only and perform a swaying motion while walking, which requires estimating a full 6D torso pose. We apply Monte Carlo localization to globally determine and track a humanoid's 6D pose in a given 3D world model, which may contain multiple levels and staircases. We present an observation model to integrate range measurements from a laser scanner or a depth camera as well as attitude data and information from the joint encoders. To increase the localization accuracy, e.g., while climbing stairs, we propose a further observation model and additionally use monocular vision data in an improved proposal distribution. We demonstrate the effectiveness of our methods in extensive real-world experiments with a Nao humanoid. As the experiments illustrate, the robot is able to globally localize itself and accurately track its 6D pose while walking and climbing stairs.
```bibtex
@Article{hornung14ijhr,
  author  = {Armin Hornung and Stefan O{\ss}wald and Daniel Maier and Maren Bennewitz},
  title   = {Monte {C}arlo Localization for Humanoid Robot Navigation in Complex Indoor Environments},
  journal = {International Journal of Humanoid Robotics (IJHR)},
  year    = {2014},
  volume  = {11},
  number  = {2},
  doi     = {10.1142/S0219843614410023}
}
```
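The backbone of the article is the standard particle-filter loop: propagate pose hypotheses with noisy odometry, reweight them by observation likelihood, and resample when the weights degenerate. Below is a minimal 2D (x, y, yaw) sketch of one such update; the article's filter tracks the full 6D torso pose and fuses laser or depth, attitude, and joint-encoder data through its observation models, which are abstracted here into a single `meas_model` callback.

```python
import numpy as np

rng = np.random.default_rng(1)

def mcl_step(particles, weights, odom, measurement, meas_model,
             motion_noise=0.02):
    """One Monte Carlo localization update (toy 2D version of the 6D filter).

    particles: (N, 3) array of (x, y, yaw) pose hypotheses
    odom:      (dx, dy, dyaw) motion measured in the robot frame
    meas_model(particle, measurement) -> likelihood of the observation
    """
    # Motion update: apply noisy odometry in each particle's own frame.
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * odom[0] - s * odom[1]
    particles[:, 1] += s * odom[0] + c * odom[1]
    particles[:, 2] += odom[2]
    particles += rng.normal(0.0, motion_noise, particles.shape)

    # Measurement update and normalization.
    weights = weights * np.array([meas_model(p, measurement) for p in particles])
    total = weights.sum()
    n = len(particles)
    weights = weights / total if total > 0 else np.full(n, 1.0 / n)

    # Resample when the effective sample size drops below half the particles.
    if 1.0 / (weights ** 2).sum() < 0.5 * n:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```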
Improved Proposals for Highly Accurate Localization Using Range and Vision Data. Stefan Oßwald, Armin Hornung, and Maren Bennewitz. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 2012. DOI: 10.1109/IROS.2012.6385657
Abstract: In order to successfully climb challenging staircases that consist of many steps and contain difficult parts, humanoid robots need to accurately determine their pose. In this paper, we present an approach that fuses the robot's observations from a 2D laser scanner, a monocular camera, an inertial measurement unit, and joint encoders in order to localize the robot within a given 3D model of the environment. We develop an extension to standard Monte Carlo localization (MCL) that draws particles from an improved proposal distribution to obtain highly accurate pose estimates. Furthermore, we introduce a new observation model based on chamfer matching between edges in camera images and the environment model. We thoroughly evaluate our localization approach and compare it to previous techniques in real-world experiments with a Nao humanoid. The results show that our approach significantly improves the localization accuracy and leads to a considerably more robust robot behavior. Our improved proposal in combination with chamfer matching can be generally applied to improve a range-based pose estimate by a consistent matching of lines obtained from vision.
```bibtex
@InProceedings{osswald12iros,
  author    = {Stefan O{\ss}wald and Armin Hornung and Maren Bennewitz},
  title     = {Improved Proposals for Highly Accurate Localization Using Range and Vision Data},
  booktitle = {Proc.\ of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2012},
  pages     = {1809--1814},
  month     = oct,
  doi       = {10.1109/IROS.2012.6385657},
  issn      = {2153-0858},
  url       = {http://hrl.informatik.uni-freiburg.de/papers/osswald12iros.pdf}
}
```
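Chamfer matching scores how well model edges projected for a candidate pose align with edges detected in the camera image, and a distance transform of the edge image makes that score cheap to evaluate. A minimal sketch using SciPy's Euclidean distance transform; the edge detector, the model projection, and any orientation weighting the paper may use are left out.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edge_image, model_points):
    """Chamfer matching cost of projected model edges against image edges.

    edge_image:   boolean array, True at detected image edges (e.g. Canny)
    model_points: (N, 2) integer array of (row, col) pixels of the projected
                  3D model edges for a candidate pose
    Lower scores mean the candidate pose aligns the model with the image.
    """
    # Distance from every pixel to the nearest detected edge pixel.
    dist = distance_transform_edt(~edge_image)
    rows, cols = model_points[:, 0], model_points[:, 1]
    inside = ((rows >= 0) & (rows < dist.shape[0]) &
              (cols >= 0) & (cols < dist.shape[1]))
    if not inside.any():
        return np.inf
    return dist[rows[inside], cols[inside]].mean()
```

In the paper this score feeds the particle weights and the improved proposal distribution; here it is just the raw matching cost.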
Accurate 6D Localization in Multi-Level Environments. Stefan Oßwald, Armin Hornung, and Maren Bennewitz. Extended Abstracts of Spatial Cognition (SC), Kloster Seeon, Germany, 2012.

```bibtex
@InProceedings{osswald12sc,
  author    = {Stefan O{\ss}wald and Armin Hornung and Maren Bennewitz},
  title     = {Accurate {6D} Localization in Multi-Level Environments},
  booktitle = {Extended Abstracts of Spatial Cognition (SC)},
  year      = {2012},
  month     = aug,
  url       = {http://sc2012.informatik.uni-freiburg.de/posters/sc20120extendedabstract-30.pdf}
}
```
Techniques for Autonomous Stair Climbing with Humanoid Robots. Stefan Oßwald. Master's Thesis, University of Freiburg, Dept. of Computer Science, Humanoid Robots Lab, 2012.

Abstract: Service robots need to be able to reliably climb stairs in order to act autonomously in indoor environments. In this thesis, we present techniques that enable a Nao humanoid robot equipped with a laser range finder to autonomously climb up a spiral staircase in a complex multi-level environment. In contrast to other approaches, we use a standard platform robot without external sensors or specialized hardware components for the stair climbing task, and we do not modify the environment in order to help the robot to sense the stairs. In order to climb up multiple steps of a complex staircase in a row, the robot needs to accurately determine its pose on the stairs. In this thesis, we first discuss and evaluate a standard Monte Carlo Localization (MCL) approach that fuses observations from the 2D laser scanner, an inertial measurement unit, and joint encoders in order to localize the robot within a given 3D model of the environment. We then present two extensions to this approach that increase the accuracy of the pose estimate by additionally integrating information from images acquired with an on-board camera. The first extension reconstructs a 3D model of the next step from edges observed in the camera images and estimates the robot's pose relative to the reconstructed model. The second approach draws particles from an improved proposal distribution, which yields a better representation of the robot's pose posterior distribution. Additionally, the second extension introduces an observation model based on chamfer matching that fits an edge model of the staircase consistently to a set of images. We thoroughly evaluate and compare our localization approaches in real-world experiments. The results show that our extensions to the standard MCL approach improve the localization accuracy significantly and enable the robot to accurately determine its pose. Using our approach, the robot can reliably climb the steps of a spiral staircase.

```bibtex
@MastersThesis{osswald12msc,
  author = {Stefan O{\ss}wald},
  title  = {Techniques for Autonomous Stair Climbing with Humanoid Robots},
  school = {University of Freiburg, Department of Computer Science, Humanoid Robots Lab},
  year   = {2012},
  type   = {Master's Thesis},
  month  = feb
}
```
From 3D Point Clouds to Climbing Stairs: A Comparison of Plane Segmentation Approaches for Humanoids. Stefan Oßwald, Jens-Steffen Gutmann, Armin Hornung, and Maren Bennewitz. Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), Bled, Slovenia, 2011. DOI: 10.1109/Humanoids.2011.6100836
Abstract: In this paper, we consider the problem of building 3D models of complex staircases based on laser range data acquired with a humanoid. These models have to be sufficiently accurate to enable the robot to reliably climb up the staircase. We evaluate two state-of-the-art approaches to plane segmentation for humanoid navigation given 3D range data about the environment. The first approach initially extracts line segments from neighboring 2D scan lines, which are successively combined if they lie on the same plane. The second approach estimates the main directions in the environment by randomly sampling points and applying a clustering technique afterwards to find planes orthogonal to the main directions. We propose extensions for this basic approach to increase the robustness in complex environments which may contain a large number of different planes and clutter. In practical experiments, we thoroughly evaluate all methods using data acquired with a laser-equipped Nao robot in a multi-level environment. As the experimental results show, the reconstructed 3D models can be used to autonomously climb up complex staircases.

```bibtex
@InProceedings{osswald11humanoids,
  author    = {Stefan O{\ss}wald and Jens-Steffen Gutmann and Armin Hornung and Maren Bennewitz},
  title     = {From 3{D} Point Clouds to Climbing Stairs: A Comparison of Plane Segmentation Approaches for Humanoids},
  booktitle = {Proc.\ of the {IEEE-RAS} International Conference on Humanoid Robots (Humanoids)},
  year      = {2011},
  pages     = {93--98},
  month     = oct,
  doi       = {10.1109/Humanoids.2011.6100836},
  url       = {http://hrl.informatik.uni-freiburg.de/papers/osswald11humanoids.pdf}
}
```
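Both evaluated families of methods ultimately fit planes to 3D points. As a common baseline for that step, the sketch below fits a single plane by RANSAC; running it repeatedly on the points left after removing each plane's inliers gives a crude multi-plane segmentation. It is a generic illustration, not either of the two compared approaches, and the iteration count and inlier threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def ransac_plane(points, iters=200, threshold=0.01):
    """Fit one plane to a 3D point cloud with RANSAC.

    points: (N, 3) array.  Returns ((normal, d), inlier_mask) with
    normal . p + d = 0 for points on the plane.
    """
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                    # degenerate, nearly collinear sample
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```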
Autonomous Climbing of Spiral Staircases with Humanoids. Stefan Oßwald, Attila Görög, Armin Hornung, and Maren Bennewitz. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 2011. DOI: 10.1109/IROS.2011.6048209
Abstract: In this paper, we present an approach to enable a humanoid robot to autonomously climb up spiral staircases. This task is substantially more challenging than climbing straight stairs since careful repositioning is needed. Our system globally estimates the pose of the robot, which is subsequently refined by integrating visual observations. In this way, the robot can accurately determine its relative position with respect to the next step. We use a 3D model of the environment to project edges corresponding to stair contours into monocular camera images. By detecting edges in the images and associating them with projected model edges, the robot is able to accurately position itself relative to the stairs and to climb them. We present experiments carried out with a Nao humanoid equipped with a 2D laser range finder for global localization and a low-cost monocular camera for short-range sensing. As we show in the experiments, the robot reliably climbs up the steps of a spiral staircase.

```bibtex
@InProceedings{osswald11iros,
  author    = {O{\ss}wald, Stefan and G\"{o}r\"{o}g, Attila and Hornung, Armin and Bennewitz, Maren},
  title     = {Autonomous Climbing of Spiral Staircases with Humanoids},
  booktitle = {Proc.\ of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2011},
  pages     = {4844--4849},
  month     = sep,
  doi       = {10.1109/IROS.2011.6048209},
  issn      = {2153-0858},
  url       = {http://hrl.informatik.uni-freiburg.de/papers/osswald11iros.pdf}
}
```
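Projecting the model's stair edges into the camera image is plain pinhole geometry: transform edge points into the camera frame and divide by depth. A minimal sketch, assuming known intrinsics (the matrix below is a generic VGA-style guess, not the Nao's calibration) and a 4x4 camera-from-world transform; edge detection and data association are separate steps.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_model_edges(edge_points_world, T_cam_world):
    """Project 3D stair-edge points into the image with a pinhole model.

    edge_points_world: (N, 3) points sampled along the model's edges
    T_cam_world:       4x4 transform taking world points into the camera frame
    Returns (M, 2) pixel coordinates of the points in front of the camera,
    ready to be associated with detected image edges for pose refinement.
    """
    homog = np.hstack([edge_points_world,
                       np.ones((len(edge_points_world), 1))])
    cam = (T_cam_world @ homog.T).T[:, :3]
    cam = cam[cam[:, 2] > 0.05]          # keep points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]      # perspective division
```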
Learning Reliable and Efficient Navigation with a Humanoid. Stefan Oßwald, Armin Hornung, and Maren Bennewitz. Proceedings of the IEEE International Conference on Robotics & Automation (ICRA), Anchorage, AK, USA, 2010. DOI: 10.1109/ROBOT.2010.5509420
Abstract: Reliable and efficient navigation with a humanoid robot is a difficult task. First, the motion commands are executed rather inaccurately due to backlash in the joints or foot slippage. Second, the observations are typically highly affected by noise due to the shaking behavior of the robot. Thus, the localization performance is typically reduced while the robot moves and the uncertainty about its pose increases. As a result, the reliable and efficient execution of a navigation task cannot be ensured anymore since the robot's pose estimate might not correspond to the true location. In this paper, we present a reinforcement learning approach to select appropriate navigation actions for a humanoid robot equipped with a camera for localization. The robot learns to reach the destination reliably and as fast as possible, thereby choosing actions to account for motion drift and trading off velocity in terms of fast walking movements against accuracy in localization. We present extensive simulated and practical experiments with a humanoid robot and demonstrate that our learned policy significantly outperforms a hand-optimized navigation strategy.

```bibtex
@InProceedings{osswald10icra,
  author    = {Stefan O{\ss}wald and Armin Hornung and Maren Bennewitz},
  title     = {Learning Reliable and Efficient Navigation with a Humanoid},
  booktitle = {Proc.\ of the {IEEE} International Conference on Robotics \& Automation (ICRA)},
  year      = {2010},
  pages     = {2375--2380},
  month     = may,
  doi       = {10.1109/ROBOT.2010.5509420},
  url       = {http://hrl.informatik.uni-freiburg.de/papers/osswald10icra.pdf}
}
```
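The abstract does not name the specific reinforcement learning algorithm, so the sketch below uses plain tabular Q-learning as a stand-in: the state would discretize quantities such as remaining distance and pose uncertainty, and the reward would penalize both elapsed time and localization failures, yielding the speed-versus-accuracy trade-off the paper describes. The action names, state layout, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical discrete action set for the walking humanoid.
ACTIONS = ["walk_fast", "walk_slow", "stop_and_relocalize"]

def q_learning(env_step, n_states, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.2):
    """Tabular Q-learning over a discretized (distance, uncertainty) state.

    env_step(state, action) -> (next_state, reward, done) is a stand-in
    for the simulator; the reward should penalize time and localization
    failures so the learned policy trades walking speed against accuracy.
    """
    q = np.zeros((n_states, len(ACTIONS)))
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy exploration over the current Q estimates.
            a = (int(rng.integers(len(ACTIONS))) if rng.random() < eps
                 else int(np.argmax(q[state])))
            nxt, reward, done = env_step(state, a)
            q[state, a] += alpha * (reward + gamma * np.max(q[nxt])
                                    - q[state, a])
            state = nxt
    return q   # greedy policy: argmax over the actions of each state
```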
Learning Adaptive Navigation Strategies for Resource-Constrained Systems. Armin Hornung, Maren Bennewitz, Cyrill Stachniss, Hauke Strasdat, Stefan Oßwald, and Wolfram Burgard. Proceedings of the 3rd International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS), Lisboa, Portugal, 2010.

Abstract: The majority of navigation algorithms for mobile robots assume that the robots possess enough computational or memory resources to carry out the necessary calculations. Especially small and lightweight devices, however, are resource-constrained and have only restricted capabilities. In this paper, we present a reinforcement learning approach for mobile robots that considers the imposed constraints on their sensing capabilities and computational resources, so that they can reliably and efficiently fulfill their navigation tasks. Our technique learns a policy that optimally trades off the speed of the robot and the uncertainty in the observations imposed by its movements. It furthermore enables the robot to learn an efficient landmark selection strategy to compactly model the environment. We describe extensive simulated and real-world experiments carried out with both wheeled and humanoid robots which demonstrate that our learned navigation policies significantly outperform strategies using advanced and manually optimized heuristics.

```bibtex
@InProceedings{hornung10erlars,
  author    = {Armin Hornung and Maren Bennewitz and Cyrill Stachniss and Hauke Strasdat and Stefan O{\ss}wald and Wolfram Burgard},
  title     = {Learning Adaptive Navigation Strategies for Resource-Constrained Systems},
  booktitle = {Proc.\ of the 3rd International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS)},
  year      = {2010},
  month     = aug,
  pages     = {1--10},
  issn      = {2190-5576},
  url       = {http://hrl.informatik.uni-freiburg.de/papers/hornung10erlars.pdf}
}
```
Reliable Vision-based Navigation with a Humanoid Robot. Stefan Oßwald. Bachelor's Thesis, University of Freiburg, Dept. of Computer Science, Humanoid Robots Lab, 2009.

Abstract: Motion blur is a severe problem in visual localization since it can prevent a robot from detecting and matching visual features reliably. Especially when using a small humanoid robot, the images taken while the robot is walking are strongly affected by motion blur. Thus, the localization performance is typically reduced in these situations, which means that the uncertainty of the robot about its pose increases. As a result, it cannot be ensured anymore that a navigation task can be executed reliably and efficiently since the robot's pose estimate might not correspond to the true location. In this thesis, we extend an existing reinforcement learning approach for wheeled robots so that it is applicable to the domain of humanoid robots. The robot learns to select appropriate navigation actions to reach the destination reliably and as fast as possible, thereby trading off velocity against accuracy in localization. The robot implicitly takes the impact of motion blur on observations into account and avoids delays caused by localization errors. Simulated and real-world experiments with the humanoid robot Nao show that our approach generates suitable policies for accomplishing the navigation task. The policies learned in our simulator can be transferred to the real robot and significantly outperform a standard navigation strategy in terms of reliability and efficiency.

```bibtex
@MastersThesis{osswald09bsc,
  author = {Stefan O{\ss}wald},
  title  = {Reliable Vision-based Navigation with a Humanoid Robot},
  school = {University of Freiburg, Department of Computer Science, Humanoid Robots Lab},
  year   = {2009},
  type   = {Bachelor's Thesis},
  month  = sep
}
```