update stuff

Paul Lesur 2022-10-19 13:58:45 +02:00
parent e3e2620048
commit 0e287ac8c9
5 changed files with 78 additions and 7 deletions

@@ -4,8 +4,11 @@ title: Home
 Hey, I'm Paul Lesur, a software engineer, currently based in Germany.
-Here are some links to my different repositories (on [GitHub](https://github.com/lesurp)),
-my [LinkedIn profile](https://linkedin.com/in/paul-lesur/),
-and a link to [my résumé](/Paul_Lesur_resume.pdf) (pdf, of course!).
-You will find the list of my publications [here]({{< ref "/publications" >}})
+Here are some links:
+* [GitHub profile](https://github.com/lesurp)
+* [LinkedIn profile](https://linkedin.com/in/paul-lesur/)
+* [Résumé](/Paul_Lesur_résumé.pdf)
+* [CV (including publications)](/Paul_Lesur_CV.pdf)
+You can also find the list of my publications [here]({{< ref "/publications" >}})

@@ -0,0 +1,21 @@
---
title: Deep Multi-State Object Pose Estimation for Augmented Reality Assembly
author: Su, Yongzhi & Rambach, Jason & Minaskan, Nareg & Lesur, Paul & Pagani, Alain & Stricker, Didier
date: '2019-08'
publications:
- deep_multistate_object_pose_estimation
---
# Deep Multi-State Object Pose Estimation for Augmented Reality Assembly
10.1109/ISMAR-Adjunct.2019.00-42
## Authors
Su, Yongzhi & Rambach, Jason & Minaskan, Nareg & Lesur, Paul & Pagani, Alain & Stricker, Didier
## Abstract
Neural network machine learning approaches are widely used for object classification or detection problems with significant success. A similar problem with specific constraints and challenges is object state estimation, dealing with objects that consist of several removable or adjustable parts. A system that can detect the current state of such objects from camera images can be of great importance for Augmented Reality (AR) or robotic assembly and maintenance applications. In this work, we present a CNN that is able to detect and regress the pose of an object in multiple states. We then show how the output of this network can be used in an automatically generated AR scenario that provides step-by-step guidance to the user in assembling an object consisting of multiple components.

@@ -0,0 +1,27 @@
---
title: Online Multi-Agent Path Planning in Runaway Scenarios
author: Lesur, Paul & Bajcinca, Naim
date: '2022-09'
publications:
- sgmapf
---
# Online Multi-Agent Path Planning in Runaway Scenarios
Not yet accepted / published
## Authors
Lesur, Paul & Bajcinca, Naim
## Abstract
In this work, we present a real-time-capable algorithm for solving path-planning problems in runaway scenarios. In such scenarios, a main agent has to follow a path while secondary agents create space for the main agent without colliding with one another. Our algorithm uses a low-level path planner, similar to what can be found in the literature on Multi-Agent Path Finding (MAPF), and combines it with our novel Planner Scheduler, a high-level scheduler that allows us to find a sub-optimal solution quickly.
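The abstract pairs a MAPF-style low-level planner with a high-level scheduler. The paper's own planner and Planner Scheduler are not shown in this commit; as an illustration only, here is a minimal space-time A* search of the kind commonly used as the per-agent low-level planner in the MAPF literature. The grid representation, the `reserved` table, and the function name are all assumptions made for this sketch.

```python
import heapq

def spacetime_astar(grid, start, goal, reserved, max_t=50):
    """Single-agent space-time A* on a 4-connected grid.

    grid: set of free (x, y) cells.
    reserved: set of (x, y, t) cells already claimed by other agents,
    which is how MAPF-style methods keep agents from colliding.
    Returns a list of cells (one per timestep) or None.
    """
    def h(cell):  # Manhattan-distance heuristic to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, t, cell, path)
    seen = set()
    while open_heap:
        f, t, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        if (cell, t) in seen or t >= max_t:
            continue
        seen.add((cell, t))
        x, y = cell
        # Waiting in place is a legal move in space-time search.
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt in grid and (nxt[0], nxt[1], t + 1) not in reserved:
                heapq.heappush(
                    open_heap, (t + 1 + h(nxt), t + 1, nxt, path + [nxt])
                )
    return None
```

A scheduler in the spirit of the abstract would call such a planner once per secondary agent, reserving each returned path before planning the next agent.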

@@ -0,0 +1,21 @@
---
title: 'SLAM in the Field: An Evaluation of Monocular Mapping and Localization on Challenging Dynamic Agricultural Environment'
author: Shu, Fangwen & Lesur, Paul & Xie, Yaxu & Pagani, Alain & Stricker, Didier
date: '2021-01'
publications:
- slam_in_the_field
---
# SLAM in the Field: An Evaluation of Monocular Mapping and Localization on Challenging Dynamic Agricultural Environment
10.1109/WACV48630.2021.00180
## Authors
Shu, Fangwen & Lesur, Paul & Xie, Yaxu & Pagani, Alain & Stricker, Didier
## Abstract
This paper demonstrates a system capable of combining a sparse, indirect, monocular visual SLAM with both offline and real-time Multi-View Stereo (MVS) reconstruction algorithms. This combination overcomes many obstacles encountered by autonomous vehicles or robots employed in agricultural environments, such as overly repetitive patterns, the need for very detailed reconstructions, and abrupt movements caused by uneven roads. Furthermore, the use of a monocular SLAM makes our system much easier to integrate with an existing device, as we do not rely on a LiDAR (which is expensive and power-consuming) or a stereo camera (whose calibration is sensitive to external perturbation, e.g. the camera being displaced). To the best of our knowledge, this paper presents the first evaluation results for monocular SLAM, and our work further explores unsupervised depth estimation in this specific application scenario by simulating RGB-D SLAM to tackle the scale ambiguity, and shows that our approach produces reconstructions that are helpful to various agricultural tasks. Moreover, we highlight that our experiments provide meaningful insight for improving monocular SLAM systems in agricultural settings.
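One concrete piece of the pipeline described above is resolving the monocular scale ambiguity; the paper does so by simulating RGB-D SLAM with estimated depth. As a generic illustration only (not the paper's implementation), a median-ratio alignment shows how an up-to-scale depth signal can be anchored to metric reference values; the function name and inputs are assumptions for this sketch.

```python
import numpy as np

def align_scale(mono_depth, metric_depth):
    """Estimate the single scale factor relating an up-to-scale
    monocular depth map to metric reference depth.

    The median of per-pixel ratios is used because it is robust to
    outliers (e.g. depth estimation failures on repetitive patterns).
    Pixels with non-positive depth on either side are ignored.
    """
    mono_depth = np.asarray(mono_depth, dtype=float)
    metric_depth = np.asarray(metric_depth, dtype=float)
    valid = (metric_depth > 0) & (mono_depth > 0)
    return float(np.median(metric_depth[valid] / mono_depth[valid]))
```

Multiplying the monocular depth map by the returned factor yields metric depth that an RGB-D-style SLAM front end can consume.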

@@ -1,7 +1,7 @@
---
title: SlamCraft
author: Rambach, Jason & Lesur, Paul & Pagani, Alain & Stricker, Didier
-date: '2016-03-27'
+date: '2019-03'
publications:
- slamcraft
---
@@ -12,11 +12,10 @@ publications:
## Authors
-Rambach, Jason & Lesur, Paul & Pagani, Alain & Stricker, Didier. (2019).
+Rambach, Jason & Lesur, Paul & Pagani, Alain & Stricker, Didier
## Abstract
Monocular Simultaneous Localization and Mapping (SLAM) approaches have progressed significantly over the last two decades. However, keypoint-based approaches only provide limited structural information in a 3D point cloud which does not fulfil the requirements of applications such as Augmented Reality (AR). SLAM systems that provide dense environment maps are either computationally intensive or require depth information from additional sensors. In this paper, we use a deep neural network that estimates planar regions from RGB input images and fuses its output iteratively with the point cloud map of a SLAM system to create an efficient monocular planar SLAM system. We present qualitative results of the created maps, as well as an evaluation of the tracking accuracy and runtime of our approach.
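The abstract describes fusing CNN-estimated planar regions with a sparse SLAM point cloud. The network and the iterative fusion scheme are specific to the paper, but as a self-contained illustration of recovering a plane from sparse map points, a classic RANSAC plane fit might look like this (all names and thresholds here are assumptions, not taken from the paper):

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane (n, d) with n . p + d = 0 to a 3-D point cloud.

    points: (N, 3) array, e.g. sparse keypoint-based SLAM map points.
    Repeatedly samples 3 points, builds the plane through them, and keeps
    the hypothesis with the most inliers within `threshold` distance.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    points = np.asarray(points, dtype=float)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate sample: collinear points
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within `threshold` of the candidate plane.
        inliers = int(np.sum(np.abs(points @ normal + d) < threshold))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model
```

In a planar SLAM setting, such a fit could be restricted to the map points falling inside a region the network labels as planar, and refined as new points are triangulated.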