Continuing my work on Machine Learning with point clouds in the realm of autonomous robots, and coming from working with image data, I was faced with the following question: does 3D data need normalization like image data does? The answer is a clear YES (duh!). Normalization, or feature scaling, is an important preprocessing step for many machine… Continue reading “Normalizing (Feature Scaling) Point Clouds for Machine Learning”
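The normalization the post refers to can be sketched roughly as follows. This is a minimal NumPy example, and unit-sphere scaling (center at the origin, scale by the furthest point) is an assumption; the post itself may use a different scheme such as per-axis min-max scaling:

```python
import numpy as np

def normalize_point_cloud(points):
    """Center an (N, 3) point cloud at the origin and scale it into the unit sphere."""
    centroid = points.mean(axis=0)                   # per-axis mean (x, y, z)
    centered = points - centroid                     # translate centroid to origin
    scale = np.linalg.norm(centered, axis=1).max()   # distance of the furthest point
    return centered / scale                          # all points now lie within radius 1

# Example: a random cloud of 1000 points in a 10 m cube
cloud = np.random.uniform(-5, 5, size=(1000, 3))
normalized = normalize_point_cloud(cloud)
```

Centering plus uniform scaling preserves the cloud's shape while putting every sample in a comparable numeric range, which is what most point-cloud networks (e.g. PointNet-style models) expect.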
For the past two years, I have been working with robots. Earlier this year I stopped focusing only on cameras and decided to start working with LiDARs. So after much research, I settled on a 32-beam RoboSense device. I had to spend some time setting it up, especially creating a suitable mount able to… Continue reading “Creating a Point Cloud Dataset for 3D Deep Learning”
Annotating point clouds from multi-line 360° LiDAR is exceedingly difficult. Providing context in the form of camera frames and limiting the point cloud to the Field Of View (FOV) of the camera simplifies things. To achieve this, we first had to replace our old, and not so stable LiDAR mount, with a sturdier one capable… Continue reading “Simplifying Point Cloud Labeling with Contextual Images and Point Cloud Filtering”
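Limiting the cloud to the camera's FOV can be sketched as a simple angular filter. This is an illustrative NumPy-only version assuming the camera looks along the LiDAR's +x axis; the post's actual pipeline presumably uses the full camera extrinsics/intrinsics rather than this simplification:

```python
import numpy as np

def filter_to_fov(points, fov_deg=90.0):
    """Keep only points whose horizontal angle falls within the camera's FOV.

    points  : (N, 3) array in the LiDAR frame, camera assumed to face +x.
    fov_deg : full horizontal field of view of the camera, in degrees.
    """
    angles = np.degrees(np.arctan2(points[:, 1], points[:, 0]))  # azimuth of each point
    mask = np.abs(angles) <= fov_deg / 2.0                       # inside the half-angle?
    return points[mask]

# Points ahead of the camera are kept; points behind it are dropped
pts = np.array([[1.0, 0.0, 0.0],    # straight ahead  -> kept
                [-1.0, 0.0, 0.0],   # directly behind -> dropped
                [1.0, 1.0, 0.0]])   # 45° off-axis    -> kept with a 90° FOV
in_view = filter_to_fov(pts, fov_deg=90.0)
```

Cutting the 360° sweep down to the camera's frustum means every remaining point has image context, which is what makes cuboid placement during labeling tractable.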
On Saturday, May 14, 2022, we demonstrated SEDRAD at the AppWorks offices in Taipei, Taiwan. The goal was to get approval to use the robot during their upcoming Demo Day #24. The demonstration was a big success and SEDRAD is set to navigate autonomously while showing information about the participating startups in the event. AppWorks… Continue reading “Demonstrating SEDRAD, The Self Driving Ads Robot at AppWorks.”
Before you can train a supervised Deep Learning model, you must first label your data. Today I am testing Intel OpenVINO’s CVAT and MATLAB’s Lidar Labeler annotation tools for 3D data. First impressions: CVAT makes it easier to navigate the point cloud, but a small bug makes it hard to place the initial cuboid, making… Continue reading “Testing 3D Annotation Tools”
I took the new LiDAR for a night ride, here is what it sees! This kind of sensor is definitely much better suited to nighttime conditions than traditional cameras.
After a few days of playing with some Makeblock (blue) metal pieces, I finally created a temporary mount for my RTK Heading (Dual GNSS) + 32-beam LiDAR system. It should be enough to test the sensors while a more stable one is built. I also conducted a quick indoor test of the LiDAR, it has… Continue reading “RTK Heading + LiDAR (Temporary) Mount Ready”
I am pleased to announce that the first version of SEDRAD, The Self-Driving Ads Robot, is finally here. I have released it as part of the final submission of the OpenCV Spatial AI Competition #Oak2021. Watch the video to learn more about what SEDRAD is capable of doing, and if you have any questions, don’t… Continue reading “SEDRAD, The Self-Driving Ads Robot is Finally Here”
Experimenting with autonomous driving by segmenting the drivable surface and using its centroid’s location as a goal for ROS move_base.
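The centroid-as-goal idea can be sketched in a few lines. This is an illustrative example: computing the centroid of a binary drivable-surface mask in image space. Projecting that pixel into the robot frame and publishing it to `move_base` (normally as a `geometry_msgs/PoseStamped` via an action client) is omitted here, and the function name is hypothetical:

```python
import numpy as np

def drivable_centroid(mask):
    """Return the (row, col) centroid of a binary drivable-surface mask,
    or None if no drivable pixels were found."""
    ys, xs = np.nonzero(mask)          # coordinates of all drivable pixels
    if len(xs) == 0:
        return None                    # nothing drivable in this frame
    return ys.mean(), xs.mean()        # mean position = centroid in image space

# Example: a small mask with a drivable patch
mask = np.zeros((10, 10), dtype=bool)
mask[2:4, 4:8] = True
goal_px = drivable_centroid(mask)      # (2.5, 5.5)
```

The centroid pixel would then be back-projected through the camera model into a pose in the robot's frame and sent to `move_base` as the next navigation goal.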
A couple of weeks ago I was collecting and labeling driving images to teach #SEDRAD how to recognize the surface to drive on using semantic segmentation. The first deeplabv3+ model running fully on Oak-D cameras is ready and we took it for a spin. It is not perfect but it is a first step towards… Continue reading “Drivable Path Segmentation Test 1”