Point Cloud Segmentation

🚀 A glimpse into a building block of a bigger project: LiDAR point cloud segmentation, using only the 3D points from an Ouster OS1-128 LiDAR.

🚗 That’s the daily task of Autonomous Vehicles, and Perception4D applies it to many other use cases.

🤖 Benchmarking and retraining Deep Learning networks on in-house data, from the older RandLA-Net and PointPillars to KPConv and SuperPoint Transformer.
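
When benchmarking segmentation networks like these, the usual score is mean intersection-over-union (mIoU) over the semantic classes. A minimal sketch of that standard metric (not the exact evaluation code of this project), with made-up toy labels for illustration:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Per-class intersection-over-union, averaged over the classes
    that actually appear in the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if np.sum(gt == c) > 0:          # skip classes absent from gt
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 6 points, 3 hypothetical classes (road / vegetation / cable)
gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 2, 2, 2])
print(mean_iou(pred, gt, num_classes=3))   # ≈ 0.72
```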

💻 Direct uses range from cleaning the scanned point cloud to computing road surfaces or cable lengths. And that’s a building block for other projects.

๐ŸคA huge thanks to Youssef OUCHOUID who work on this during his end of study internship supervised by Manon Cortial-Picard & Bastien Jacquet

LiDAR Simulator POC on Unity

🆛 Simulator for a car scanner with two 360° LiDARs, to tune their position & scanning coverage. Proof of concept done in a few hours using #Unity3D, building on past #UE5 work. (Sorry for the quick-and-shaky video 😊)
Stay tuned for the corresponding real-world data ๐Ÿ˜‰

Added value:
โ†ช ๐Ÿ“ Design: Choose the best Lidar positioning for robot field of view coverage.
โ†ช ๐Ÿค– Robotics: Test the algorithms with a controlled, synthetic, LiDAR datastream.
โ†ช ๐ŸŽฅ Surveillance: Place the Lidar at the best position covering intersection or parking spots.
โ†ช ๐Ÿ“ท Lidar Manufacturers: Optimize laser directions and assembly for specific needs.
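
As a rough sketch of what such a simulator computes, here is an idealized spinning-LiDAR model: it generates the sensor’s ray directions from a channel count and vertical field of view, then estimates the ground-coverage ring for a given mounting height. All parameters here are illustrative, not those of any specific sensor or of the Unity prototype:

```python
import numpy as np

def lidar_rays(n_channels=16, n_azimuth=360, fov_deg=(-15.0, 15.0)):
    """Unit ray directions of an idealized spinning LiDAR:
    n_channels elevation angles swept over 360 deg of azimuth."""
    elev = np.radians(np.linspace(fov_deg[0], fov_deg[1], n_channels))
    azim = np.radians(np.linspace(0.0, 360.0, n_azimuth, endpoint=False))
    e, a = np.meshgrid(elev, azim, indexing="ij")
    return np.stack([np.cos(e) * np.cos(a),
                     np.cos(e) * np.sin(a),
                     np.sin(e)], axis=-1).reshape(-1, 3)

def ground_coverage(sensor_height, rays, max_range=100.0):
    """Horizontal radii at which downward rays hit the ground plane
    z = 0 for a sensor at the given height: a quick coverage proxy
    when choosing where to mount the sensor."""
    down = rays[rays[:, 2] < 0]              # rays pointing downward
    t = sensor_height / -down[:, 2]          # ray length to the ground
    keep = t <= max_range                    # discard out-of-range returns
    radius = t[keep] * np.hypot(down[keep, 0], down[keep, 1])
    return radius.min(), radius.max()

r_min, r_max = ground_coverage(2.0, lidar_rays())
print(f"ground ring: {r_min:.1f} m to {r_max:.1f} m")
```

For a sensor 2 m up with a ±15° field of view, the nearest ground return is at height/tan(15°) ≈ 7.5 m out, which is exactly the kind of blind-zone trade-off the simulator lets you tune.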

🚀 Combined expertise in Robotics, LiDAR and Visualization lets us prototype 4D (3D+time) data analysis in hours.

💻 Perception4D takes your R&D concept to a tailored prototype in record time.

1 year taking R&D concepts to tested prototypes

🚀 One year ago, we were two LiDAR and 3D visualization experts, very excited to offer our expertise in 3D and 4D (3D+time) data analysis.
We were a bit anxious about finding enough projects to sustain our business and have fun.

๐Ÿ‘จโ€โš–๏ธ One year later, Perception4D is 4 people strong, having fun running multiple projects including SLAM and robotics, multi lidar integration and point cloud machine learning & AI.

💻 We are the 4D Perception experts who can take your R&D concept to a tailored prototype in record time.

🎉 We are so glad to have jumped into this thrilling adventure!
🎂 Happy 1-year anniversary, Perception4D!

The project here was about localisation and mapping, and it ran efficiently thanks to the great power of open-source software:

  • LidarView, which I created years ago
  • the SLAM algo KISS-ICP (by Ignacio Martin Vizzo, Tiziano Guadagnino, Benedikt Mersch, Louis Wiesmann, Jens Behley, and Cyrill Stachniss)
  • the SLAM algo CT-ICP (by Pierre Dellenbach, Jean-Emmanuel Deschaud, Bastien Jacquet, François Goulette)
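
Both SLAM algorithms build on the classic ICP idea: alternate nearest-neighbour matching with a closed-form rigid-transform solve. A minimal point-to-point ICP step (Kabsch/SVD), purely as an illustration of the principle rather than of either codebase:

```python
import numpy as np

def icp_step(src, dst):
    """One rigid-alignment step: match each source point to its nearest
    target point, then solve the best rotation/translation in closed
    form (Kabsch/SVD). Full ICP repeats this until convergence."""
    # Nearest-neighbour correspondences (brute force for clarity)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]

    # Closed-form rigid transform between the matched point sets
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

Real implementations replace the brute-force matching with a spatial index (k-d tree or voxel grid) and add robust point-to-plane residuals, but the alternation is the same.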

Thanks to our amazing team:
Joachim Pouderoux, Manon Cortial-Picard, Bastien Jacquet, Youssef OUCHOUID, Nikos Paragios
#robotics #lidar #slam #3Dperception #agile

🎉 She’s here! Manon just joined the Perception4D team. Our new partner is ready to tackle any LiDAR, SLAM or point cloud challenge you may have!

🎓 She has a 10+ year background in robotics, computer vision and AI, and expertise in embedded software for industrial 3D printing. As a qualified expert in 3D data analysis, she will ensure high-quality consulting and robust programming on your 3D projects.

🚀 Need a hand with mobile robotics SLAM? 3D computer vision? Point cloud data analysis? Contact us!

🔎 Want more?
👉 Visit us / contact us via perception4d.com
👉 Follow us on LinkedIn: Perception4D, Bastien Jacquet, Joachim Pouderoux, Manon Cortial, Nikos Paragios.

🚨🚗🤖 Perception4D already had many projects going on, so the team had to grow. (And let’s be honest, a bigger team means more fun!)
๐Ÿ’ช๐Ÿ˜ Amid the job market craziness in Computer Vision, Perception4D’s offer for innovative projects, small team and direct company direction involvement attracted many, hence it took only two weeks finding our Senior Computer Vision & Partner.
[So no need to hurry and email us your CV anymore: we will still read it, but our priority is now our marvelous projects.]

๐ŸŽ‰๐Ÿค๐Ÿ‘จโ€โš–๏ธ Perception4D found a rare gem: a Partner eager to tackle the amazing projects of our coming years.

🎓💻🆛 She will enhance Perception4D’s expertise in Robotics, Computer Vision, SLAM, and AI.

😊🌎🚀 We look forward to sharing her enthusiasm, which convinced us, her experience in collaborative projects, and her passion for and knowledge of innovative autonomous machines & robots.

Stay tuned for details ๐Ÿ˜‰

🔎 Want more?
👉 Visit us / contact us via perception4d.com
👉 Follow us on LinkedIn: Perception4D, Bastien Jacquet, Joachim Pouderoux, Nikos Paragios, our rare gem (soon).

#hiring #computervision #LiDAR #LiDARView #Robot #SLAM #AI

We just learnt that the ICRA 2022 paper “CT-ICP: Real-Time Elastic LiDAR Odometry with Loop Closure” by Pierre Dellenbach, Jean-Emmanuel Deschaud, Bastien Jacquet, François Goulette has been selected as one of the three finalists for the ICRA 2022 Outstanding Paper Awards!

Fig. 1: Top, in color, one LiDAR scan; the color depends on the timestamp of each point (from the oldest in blue to the newest in red). The scan is deformed elastically to align with the map (white points) by jointly optimizing two poses, at the start and end of the scan, with interpolation according to each point’s timestamp, hence creating a continuous-time scan-to-map odometry. Bottom: the formulation of our trajectory, with continuity of poses intra-scan and discontinuity between scans.
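
The per-point interpolation described in the caption can be sketched as follows, heavily simplified to a planar yaw-plus-translation pose for readability (CT-ICP itself interpolates full SE(3) poses):

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z axis (planar yaw)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def deform_scan(points, stamps, pose0, pose1):
    """Continuous-time 'elastic' correction of one scan: each point is
    transformed by a pose interpolated between the scan-start and
    scan-end poses according to its normalized timestamp in [0, 1]."""
    yaw0, t0 = pose0
    yaw1, t1 = pose1
    out = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, stamps)):
        yaw = (1.0 - s) * yaw0 + s * yaw1     # interpolate rotation
        t = (1.0 - s) * t0 + s * t1           # interpolate translation
        out[i] = rot_z(yaw) @ p + t
    return out
```

Optimizing only the two boundary poses while every point moves with its own interpolated pose is what makes the scan deform elastically instead of rigidly.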

Fig. 2: Aggregated point clouds for the NCLT dataset (top left), KITTI-CARLA (top right), Newer College Dataset (bottom left), and ParisLuco (bottom right) show the quality of the maps obtained with CT-ICP.

Fig. 4: Qualitative results of loop closure on sequence 00 of KITTI-360 (11501 scans). The top left is an elevation image built by projecting the local map. The top right shows both the CT-ICP odometry’s trajectory and the one corrected using the computed Loop Closure constraints (CT-ICP+LC). The bottom shows the different loop closure constraints (green) found for the same turn as the local map at the top left.

• New elastic LiDAR odometry based on the continuity of poses intra-scan and discontinuity between scans.
• Local map based on a dense point cloud stored in a sparse voxel structure to obtain real-time processing speed.
• Large campaign of experiments on 7 datasets in driving and high-frequency motion scenarios, all reproducible with public and permissive open-source code (C++ & Python bindings).
• Fast loop-detection method combined with a pose-graph back-end to build a complete SLAM, integrated into pyLiDAR-SLAM.
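
The sparse voxel structure mentioned above can be illustrated with a simple voxel hash map; the voxel size and per-voxel point cap below are illustrative choices, not CT-ICP’s actual parameters:

```python
import numpy as np

def voxel_key(p, voxel_size):
    """Integer grid coordinates of the voxel containing point p."""
    return tuple(np.floor(np.asarray(p) / voxel_size).astype(int))

class VoxelMap:
    """Dense points stored in a sparse voxel hash map: O(1) insertion
    and neighbour lookup restricted to nearby voxels, which is what
    makes dense scan-to-map registration tractable in real time."""
    def __init__(self, voxel_size=1.0, max_per_voxel=20):
        self.voxel_size = voxel_size
        self.max_per_voxel = max_per_voxel
        self.voxels = {}                      # key -> list of points

    def insert(self, points):
        for p in np.asarray(points, float):
            cell = self.voxels.setdefault(voxel_key(p, self.voxel_size), [])
            if len(cell) < self.max_per_voxel:    # cap density per voxel
                cell.append(p)

    def neighbours(self, p):
        """Points in the voxel of p and its 26 adjacent voxels."""
        kx, ky, kz = voxel_key(p, self.voxel_size)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    out.extend(self.voxels.get((kx + dx, ky + dy, kz + dz), []))
        return out
```

Capping the number of points per voxel bounds both memory and the cost of each nearest-neighbour query, regardless of how many scans are aggregated into the map.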

Congrats to Pierre Dellenbach, Jean-Emmanuel Deschaud, Bastien Jacquet, François Goulette!
(And especially the first two for the hands-on coding sessions over the summer 🙂)

You can download the full article on arXiv and HAL.