🎉 She’s here! Manon just joined the Perception4D team. Our new partner is ready to tackle any LiDAR, SLAM or point cloud challenge you may have!
🎓 She has a 10+ year background in robotics, computer vision and AI, and expertise in embedded software for industrial 3D printing. As a qualified expert in 3D data analysis, she will ensure high-quality consulting and robust programming on your 3D projects.
🚀 Need a hand on mobile robotics SLAM? 3D computer vision? Point cloud data analysis? Contact us!
🚨🚗🤖 Perception4D already had many projects going on, so the team had to grow. (And let’s be honest, a bigger team means more fun!) 💪😍 Amid the craziness of the Computer Vision job market, Perception4D’s offer of innovative projects, a small team, and direct involvement in the company’s direction attracted many candidates, so it took only two weeks to find our Senior Computer Vision Engineer & Partner. [So no need to rush your CV to us anymore: we will still read it, but our priority is now our marvelous projects.]
🎉🤝👨‍⚖️ Perception4D found a rare gem: a Partner eager to tackle the amazing projects of the coming years.
🎓💻 She will enhance Perception4D’s expertise in Robotics, Computer Vision, SLAM, and AI.
😊🌎🚀 We look forward to sharing the enthusiasm that convinced us, her experience in collaborative projects, and her passion for and knowledge of innovative autonomous machines & robots.
Fig. 1: Top, in color, one LiDAR scan; the color depends on the timestamp of each point (from the oldest in blue to the newest in red). The scan is deformed elastically to align with the map (white points) by the joint optimization of two poses at the start and end of the scan, with interpolation according to each point’s timestamp, hence creating a continuous-time scan-to-map odometry. Bottom, the formulation of our trajectory, with continuity of poses intra-scan and discontinuity between scans.
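The elastic deformation described in the caption can be sketched as follows: each point is transformed by a pose interpolated between the scan’s begin and end poses according to its normalized timestamp. This is an illustrative sketch of the continuous-time idea, not the paper’s actual implementation; the function name `deform_scan` and the use of SciPy’s `Slerp` are assumptions for the example.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deform_scan(points, timestamps, pose_begin, pose_end):
    """Elastically deform a scan: transform each point by a pose
    interpolated between the begin and end poses of the scan,
    according to the point's normalized timestamp in [0, 1].

    pose_begin, pose_end: (scipy Rotation, 3-vector translation) pairs.
    """
    (R_begin, t_begin), (R_end, t_end) = pose_begin, pose_end
    # Interpolate rotations on SO(3) with spherical linear interpolation.
    key_rots = Rotation.from_quat(np.vstack([R_begin.as_quat(), R_end.as_quat()]))
    slerp = Slerp([0.0, 1.0], key_rots)
    alphas = np.clip(timestamps, 0.0, 1.0)
    rotations = slerp(alphas)           # one rotation per point
    # Linearly interpolate translations.
    translations = (1.0 - alphas)[:, None] * t_begin + alphas[:, None] * t_end
    return rotations.apply(points) + translations
```

For example, with an identity begin pose and a 90° yaw end pose, a point with timestamp 0 is left unchanged while a point with timestamp 1 is fully rotated; points in between are rotated proportionally.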
Fig. 2: Aggregated point clouds for NCLT dataset (top left), KITTI-CARLA (top-right), Newer College Dataset (bottom left), and ParisLuco (bottom right) show the quality of the maps obtained with CT-ICP.
Fig. 4: Qualitative results of loop closure on the sequence 00 of KITTI-360 (11501 scans). The top left is an elevation image built by projecting the local map. The top right shows both the CT-ICP odometry’s trajectory and the one corrected using the computed Loop Closure constraints (CT-ICP+LC). The bottom shows the different loop closure constraints (green) found for the same turn as the local map at the top left.
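The trajectory correction shown in Fig. 4 comes from a pose graph back-end: odometry edges chain consecutive poses, and loop-closure edges tie revisited places together, redistributing accumulated drift. As a minimal sketch of this principle only (1D poses, unit weights, linear least squares; not the paper’s back-end, and `optimize_pose_graph` is a hypothetical name):

```python
import numpy as np

def optimize_pose_graph(odom_edges, loop_edges, n_poses):
    """Solve for 1D poses from relative constraints by linear least squares.

    odom_edges, loop_edges: lists of (i, j, measured_offset) meaning
    pose_j - pose_i should equal measured_offset. Pose 0 is softly
    anchored at the origin to fix the gauge freedom.
    """
    edges = odom_edges + loop_edges
    A = np.zeros((len(edges) + 1, n_poses))
    b = np.zeros(len(edges) + 1)
    for row, (i, j, d) in enumerate(edges):
        A[row, j] = 1.0
        A[row, i] = -1.0
        b[row] = d
    A[-1, 0] = 1.0  # anchor: pose 0 ~ 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For instance, if odometry measures three steps of 1.0 but a loop closure measures only 2.7 from the first to the last pose, the optimizer spreads the 0.3 of drift evenly across the chain.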
Contributions:
• A new elastic LiDAR odometry based on the continuity of poses intra-scan and discontinuity between scans.
• A local map based on a dense point cloud stored in a sparse voxel structure to obtain real-time processing speed.
• A large campaign of experiments on 7 datasets covering driving and high-frequency motion scenarios, all reproducible with public and permissive open-source code (C++ & Python bindings).
• A fast loop-detection method integrated with a pose graph back-end to build a complete SLAM, integrated into pyLiDAR-SLAM.
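The second contribution, a dense point cloud stored in a sparse voxel structure, can be illustrated with a voxel hash map: only occupied voxels are allocated, each holding a bounded number of points, so neighbor queries touch a constant number of buckets. This is a didactic sketch, not the paper’s C++ implementation; the class name, the 1 m voxel size, and the 20-point cap are assumptions for the example.

```python
import numpy as np
from collections import defaultdict

class SparseVoxelMap:
    """Dense point cloud stored in a sparse voxel hash map: only occupied
    voxels are allocated, each keeping a bounded list of points."""

    def __init__(self, voxel_size=1.0, max_points=20):
        self.voxel_size = voxel_size      # assumed voxel edge length (metres)
        self.max_points = max_points      # assumed capacity cap to bound memory
        self.voxels = defaultdict(list)   # (i, j, k) -> list of 3D points

    def insert(self, points):
        """Insert an (N, 3) array of points, capping each voxel's size."""
        keys = np.floor(points / self.voxel_size).astype(int)
        for key, p in zip(map(tuple, keys), points):
            bucket = self.voxels[key]
            if len(bucket) < self.max_points:
                bucket.append(p)

    def neighbors(self, point):
        """Return map points in the 3x3x3 voxel neighborhood of `point`."""
        i, j, k = np.floor(point / self.voxel_size).astype(int)
        out = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    out.extend(self.voxels.get((i + di, j + dj, k + dk), ()))
        return out
```

Storing voxels in a hash map rather than a dense grid keeps memory proportional to the occupied space, which is what makes real-time scan-to-map registration over large outdoor maps tractable.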