
When starting new research, my approach is usually to test different related things until enough experience allows me to begin connecting the dots. Before I could start building custom models for 3D object detection, I acquired a LiDAR and played around with some data. The next obvious step was to find out how the research world was labeling such data before I could label my own.
There are some very popular point cloud datasets for autonomous driving out there, the most popular being the KITTI dataset, nuScenes, and the Waymo Open Dataset, among others. I spent some time studying the KITTI dataset a while ago and, in general, noticed how hard it was to find the right tools to visualize the data. That was until I discovered Open3D, which made it simple for me to process and visualize point clouds. Open3D can be optionally bundled with Open3D-ML, which includes tools to visualize annotated point cloud data and to train/build/test 3D machine learning models (more on that in a future post).

The Open3D-ML GitHub page provides easy instructions to install the library with pip, but this only works with specific versions of CUDA and TensorFlow. Because I wanted to use newer versions of those libraries, I decided to build Open3D from source. When doing this, I noticed that some steps were missing or were not clear enough. To simplify the life of anyone interested in building this library, I include below the steps that I followed to install and test Open3D-ML. Note that my system is Ubuntu 20.04.4 LTS and I have a CUDA-enabled GPU, so the instructions presented here may vary depending on your system.
Step 1: Install Conda
Using Conda is the recommended way to try anything new without risking breaking your system. To install Conda follow the official steps here.
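As a sketch of what the official instructions boil down to on Linux, the commands below download and run the Miniconda installer (the installer filename and URL are the standard ones published by Anaconda; double-check them against the official page):

```shell
# Download the latest Miniconda installer for Linux x86_64
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
# Run the installer and follow the interactive prompts
bash Miniconda3-latest-Linux-x86_64.sh
# Restart the shell (or source ~/.bashrc) so that the conda command is available
```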
Step 2: Create and activate a Conda environment
Make sure to replace myenv with the actual name that you want to use.
conda create --name myenv
conda activate myenv
Step 3: Install Node.js
To install Node.js you can follow the steps below:
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g yarn
Step 4: Install TensorFlow
To install TensorFlow follow the official steps here.
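Assuming the conda environment from Step 2 is active, the official instructions essentially reduce to a pip install; the exact TensorFlow version you pick should match your CUDA/cuDNN setup (see TensorFlow's tested build configurations):

```shell
# Install TensorFlow into the active conda environment
pip install tensorflow
# Quick sanity check: this should list your GPU if CUDA is set up correctly
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```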
Step 5: Install Jupyter Lab
conda install -c conda-forge jupyterlab
Step 6: Clone Open3D
git clone https://github.com/isl-org/Open3D
Step 7: Install dependencies
cd Open3D
./util/install_deps_ubuntu.sh
Step 8: Create the build directory and clone Open3D-ML
mkdir build
cd build
git clone https://github.com/isl-org/Open3D-ML.git
Step 9: Configure the installation
This assumes you have a CUDA-enabled GPU. Make sure to replace /path/to/your/conda/env/bin/python with the correct path to your Python. Also, do not forget the two dots at the end of the command.
cmake -DBUILD_CUDA_MODULE=ON -DGLIBCXX_USE_CXX11_ABI=ON -DBUILD_TENSORFLOW_OPS=ON -DBUNDLE_OPEN3D_ML=ON -DOPEN3D_ML_ROOT=Open3D-ML -DBUILD_JUPYTER_EXTENSION:BOOL=ON -DBUILD_WEBRTC=ON -DPython3_ROOT=/path/to/your/conda/env/bin/python ..
Step 10: Build the library
make -j$(nproc)
Step 11: Install as Python package
make install-pip-package
Step 12: Test Open3D installation
python -c "import open3d"
Step 13: Test Open3D-ML with TensorFlow installation
python -c "import open3d.ml.tf as ml3d"
Step 14: Downloading and preparing a dataset
In this step, we will download the SemanticKITTI dataset. This dataset is over 80 GB, so make sure you have plenty of space and time. The following steps will download and prepare the dataset. Make sure to replace /path/to/save/dataset with the desired path.
cd Open3D-ML/scripts/
./download_semantickitti.sh /path/to/save/dataset
Step 15: Loading and visualizing the dataset
To visualize the SemanticKITTI dataset, save the following Python code in a file and run it. Remember to replace /path/to/save/dataset/ with the path where the SemanticKITTI dataset was saved.
import open3d.ml.tf as ml3d
# construct a dataset by specifying dataset_path
dataset = ml3d.datasets.SemanticKITTI(dataset_path='/path/to/save/dataset/SemanticKitti/')
# get the 'all' split that combines training, validation and test set
all_split = dataset.get_split('all')
# print the attributes of the first datum
print(all_split.get_attr(0))
# print the shape of the first point cloud
print(all_split.get_data(0)['point'].shape)
# show the first 340 frames using the visualizer
vis = ml3d.vis.Visualizer()
vis.visualize_dataset(dataset, 'all', indices=range(340))
When you run the Python script, a visualizer opens and loads the first 340 data frames. You can change the number of frames loaded in the code. Once it is open, you can explore the point clouds based on intensity, but the most interesting part is to explore them based on the semantic label of each point. The videos below show two examples.
In the first video, you can see how by selecting multiple frames you can play them as an animation. Make sure to select labels as the data type from the presented options.
The second video shows how you can select a given frame and inspect the semantic objects present by activating and deactivating certain labels. When certain colors are too light and difficult to see, you can change the color to improve visibility.
Step 16: Troubleshooting
When performing the steps above, I encountered the following exceptions. They were easy to fix, in case you run into them as well.
If you get ModuleNotFoundError: No module named 'yapf'
pip install yapf
If you get ModuleNotFoundError: No module named 'jupyter_packaging'
pip install jupyter-packaging
And that's it. Open3D-ML is a great tool for visualizing point cloud datasets. The next step is to study the datasets to see how they were labeled. Then, I will go over training/testing 3D models with Open3D. Hopefully, this will bring me closer to performing the same operations on my custom data.