Date: Mar 21, 2022

Compressing Dense Point Clouds using Deep Learning
IGG-Blogpost Series | Working Group Photogrammetry


The ability to build consistent maps of the environment and use them during navigation is key for robots, self-driving cars, and other autonomous systems. Robots and cars need maps to localize themselves, plan efficient and collision-free trajectories, and perform numerous other tasks. Thus, maps are a central building block in any mapping or navigation stack.

 


High-resolution colored 3D point cloud of an office environment (© Photo: IGG / Photogrammetry).

 

 

Efficient and fast compression is key for operating in the real world

Maps of real environments quickly grow large, so sensor data and map information need to be compressed in order to be stored and processed efficiently. Finding a representation that allows for compact storage and fast querying is a long-standing topic in robotics, computer graphics, and other disciplines. A popular choice is the octree, which stores 3D space hierarchically through recursive subdivision. Techniques such as OctoMap, which build on these trees, have been around for years and are still the gold standard today.
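The recursive subdivision behind octrees can be sketched in a few lines of Python. This is a toy illustration, not OctoMap's implementation (OctoMap additionally stores an occupancy probability per node and prunes homogeneous subtrees); all names below are made up for the example:

```python
import numpy as np

class OctreeNode:
    """One cube of space; splits into 8 child cubes when subdivided."""
    def __init__(self, center, half_size):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.children = None   # list of 8 OctreeNode, or None for a leaf
        self.occupied = False

def insert(node, point, max_depth):
    """Descend to the leaf cube containing `point` and mark it occupied."""
    if max_depth == 0:
        node.occupied = True
        return
    if node.children is None:
        h = node.half_size / 2.0
        # create the 8 octants, one per sign combination of the offsets
        node.children = [
            OctreeNode(node.center + h * np.array([sx, sy, sz]), h)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
        ]
    # pick the octant whose sign pattern matches the point's offset
    offset = point - node.center
    index = ((4 if offset[0] >= 0 else 0)
             + (2 if offset[1] >= 0 else 0)
             + (1 if offset[2] >= 0 else 0))
    insert(node.children[index], point, max_depth - 1)

def occupied_leaves(node):
    """Count occupied leaf cubes — a rough measure of map size."""
    if node.children is None:
        return int(node.occupied)
    return sum(occupied_leaves(c) for c in node.children)

root = OctreeNode(center=[0.0, 0.0, 0.0], half_size=10.0)
for p in [[1.0, 1.0, 1.0], [1.1, 1.0, 0.9], [-5.0, 3.0, 2.0]]:
    insert(root, np.asarray(p), max_depth=3)
print(occupied_leaves(root))
```

Because nearby points end up in the same leaf (here, the first two points share one cube), the tree stores dense regions compactly while leaving empty space unsubdivided.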

 


A tree that recursively breaks down the local 3D space in 8 blocks sits at the heart of OctoMap (© Photo: IGG / Photogrammetry).

 

 

Machine learning offers new means for compression

A recent work by Louis Wiesmann et al. proposes a new way of compressing dense 3D point cloud maps using deep neural networks. The method computes a compact scene representation for 3D point cloud data recorded by autonomous vehicles in large environments.

 


Compressing a 40 GB point cloud into a 200 MB representation using learned compression (© Photo: IGG / Photogrammetry).
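Taking the figure's numbers at face value, the compression factor works out to roughly 200× (assuming decimal units, 1 GB = 1000 MB):

```python
raw_size_mb = 40_000        # 40 GB of raw point cloud data
compressed_size_mb = 200    # learned compressed representation
factor = raw_size_mb / compressed_size_mb
print(factor)  # → 200.0
```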

 

It tackles the compression problem by learning a set of local feature descriptors from which the point cloud can be reconstructed efficiently and effectively. The paper proposes a novel deep convolutional autoencoder architecture that operates directly on the points themselves, so no voxelization is needed. In contrast to OctoMap or occupancy voxel grids, no discretization of the space has to be computed, which is a great advantage: there is no need to commit to a certain resolution beforehand. The work also describes a deconvolution operator that upsamples point clouds from the compressed representation, decompressing the range data at an arbitrary density.
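The encode/decode pipeline can be sketched with hand-crafted stand-ins for the learned parts. The function names, the centroid summarization, and the jitter-based upsampling below are illustrative placeholders only; the actual method replaces both with learned point convolutions and deconvolutions:

```python
import numpy as np

def encode(points, voxel_size):
    """Toy stand-in for the learned encoder: summarize each local
    neighborhood (here: a grid cell, used only for grouping) by a
    single descriptor (here: the centroid of its points)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_codes = inverse.max() + 1
    sums = np.zeros((n_codes, 3))
    counts = np.zeros(n_codes)
    np.add.at(sums, inverse, points)   # accumulate points per cell
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]      # one 3D descriptor per cell

def decode(codes, points_per_code, spread):
    """Toy stand-in for the learned deconvolution: upsample each
    descriptor into a local patch. `points_per_code` controls the
    output density, mirroring the paper's arbitrary-density decoding."""
    rng = np.random.default_rng(0)
    noise = rng.normal(scale=spread, size=(len(codes), points_per_code, 3))
    return (codes[:, None, :] + noise).reshape(-1, 3)

points = np.random.default_rng(1).uniform(-2.0, 2.0, size=(1000, 3))
codes = encode(points, voxel_size=1.0)
reconstruction = decode(codes, points_per_code=8, spread=0.2)
print(len(points), len(codes), len(reconstruction))
```

The compression comes from storing only the descriptors (here, at most 64 centroids instead of 1000 points); the learned version stores richer feature vectors from which far more faithful local geometry can be reconstructed.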

 


The compression and decompression architecture to achieve that (© Photo: IGG / Photogrammetry).

 

Their paper shows that the learned compression achieves better reconstructions than other state-of-the-art compression algorithms at the same bit rate. It furthermore demonstrates that the approach generalizes well to different LiDAR sensors.

 


Different types of lossy compression/decompression techniques are available, but the proposed one works best; top: input data; bottom: decompressed data (© Photo: IGG / Photogrammetry).

 

 


Further reading:

  • L. Wiesmann, A. Milioto, X. Chen, C. Stachniss, and J. Behley, “Deep Compression for Dense Point Cloud Maps,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 2060–2067, 2021.
    https://www.doi.org/10.1109/LRA.2021.3059633

