RTK Heading + LiDAR (Temporary) Mount Ready

After a few days of playing with some Makeblock (blue) metal pieces, I finally created a temporary mount for my RTK Heading (dual GNSS) + 32-beam LiDAR system. It should be enough to test the sensors while a more stable mount is built. I also ran a quick indoor test of the LiDAR; it has been raining for two weeks, so no chance to go outdoors yet.

SEDRAD, The Self-Driving Ads Robot is Finally Here

I am pleased to announce that the first version of SEDRAD, The Self-Driving Ads Robot, is finally here. I have released it as part of the final submission of the OpenCV Spatial AI Competition #Oak2021. Watch the video to learn more about what SEDRAD is capable of doing, and if you have any questions, don’t hesitate to contact me.

Drivable Path Segmentation Test 1

A couple of weeks ago I was collecting and labeling driving images to teach #SEDRAD how to recognize the surface to drive on using semantic segmentation.

The first DeepLabv3+ model running fully on the Oak-D cameras is ready, and we took it for a spin. It is not perfect, but it is a first step toward improving the safety of our #SelfDriving #Ads #Robot.
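For context, decoding a segmentation network's output on the host is straightforward. Here is a minimal sketch, assuming the on-device DeepLabv3+ streams per-class logits back as a `num_classes x H x W` array and that the drivable surface is class 1 (both assumptions; the real class ordering depends on how the labels were set up during training):

```python
import numpy as np

def decode_segmentation(logits, drivable_class=1):
    """Turn raw network output (num_classes x H x W logits) into a
    boolean mask of drivable pixels.
    Note: drivable_class=1 is an assumed label index."""
    class_map = np.argmax(logits, axis=0)   # per-pixel winning class
    return class_map == drivable_class

def drivable_fraction(mask):
    """Share of the image the model considers drivable -
    a cheap sanity signal (e.g. slow down if it drops near zero)."""
    return float(mask.mean())
```

The drivable fraction is not part of the model itself; it is just one easy-to-compute signal a navigation stack can watch.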

#Oak2021 #OpenCV #robotics #AI #MachineLearning #SuperAnnotate #autonomousdriving

Our Face Analysis Now Powered by Oak-D

An old Soul Hackers Labs’ trick, now powered by Oak-D. People tracking and face detection happen on the edge device. Each detected face is fed to SHL’s face analyzer (on the host) to determine the age, gender, emotions, attention level, and viewing time of the tracked person. Metadata is produced and stored to generate useful reports for advertisers.

Tracking and detecting faces was the most resource-consuming part, and the host computer has now been freed from that burden. Thanks, Oak-D!
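To illustrate the kind of per-person metadata such a pipeline can accumulate, here is a minimal sketch of a host-side record keyed by the tracker's ID. The field names and the frame-based attention metric are illustrative assumptions, not the actual SHL analyzer schema:

```python
from dataclasses import dataclass

@dataclass
class TrackStats:
    """Metadata accumulated while a tracked face stays visible.
    Illustrative only: real field names/metrics are assumptions."""
    track_id: int
    first_seen: float   # timestamp of first detection (seconds)
    last_seen: float
    attentive_frames: int = 0
    total_frames: int = 0

    def update(self, timestamp, attentive):
        """Call once per frame in which this track is detected."""
        self.last_seen = timestamp
        self.total_frames += 1
        if attentive:
            self.attentive_frames += 1

    @property
    def viewing_time(self):
        return self.last_seen - self.first_seen

    @property
    def attention_level(self):
        return self.attentive_frames / max(self.total_frames, 1)
```

Records like this can be flushed to storage when a track is lost, which is enough to build the advertiser reports mentioned above.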

#SEDRAD, the Self-Driving Ads Robot coming soon! #Oak2021 #OpenCV #AI #machinelearning #robotics #retailanalytics #digitalsignage #digitalsignagesolutions

Annotation Process Under Way

One of the steps in teaching the Self-Driving Ads Robot (SEDRAD) to navigate its environment is teaching it which surfaces are good to drive on. We began the tedious task of collecting images of the surroundings and segmenting them so we can later “teach” SEDRAD to recognize the surface to follow and stay on. This process comprises image acquisition, image annotation (segmentation in our case), and segmentation validation. It is very time-consuming. We also ran into a problem: due to technical issues, the real SEDRAD was not available over the weekend, when we collected the data. Instead, we mounted the cameras on a wagon, at the same height and separation they have on the robot. Annotation, here we go!
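One way to make the validation step less subjective is to compare a reviewer's mask against the original annotation and flag large disagreements. A minimal sketch, assuming masks are boolean arrays and that an IoU threshold (a value we'd have to pick) decides when a pair needs a second look:

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection-over-union between two boolean segmentation
    masks. 1.0 means identical; values near 0 mean the two
    annotations barely agree. Empty-vs-empty counts as agreement."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0
```

During validation, any image whose reviewer/annotator IoU falls below the chosen threshold would simply be sent back for re-annotation.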

ROS+Oak-D: Turning a depth image into a 2-D laser scan for obstacle avoidance.

When we applied to the #OpenCV Spatial AI Competition #Oak2021, the very first issue we told the organizers we would solve with an Oak-D stereo camera was our robot's inability to avoid obstacles located below the range of its 2D lidar. Back then we had no idea how we were going to do this, but we knew a stereo camera could help. In this video we present our solution. The video does not yet show it in action during autonomous navigation, but it explains how we turn the depth images from two front-facing Oak-D cameras into two virtual 2D laser scans that let the robot avoid obstacles near the floor.
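The core of the depth-image-to-laser-scan idea (the same one behind the ROS `depthimage_to_laserscan` package) can be sketched in a few lines of NumPy. This is a simplified illustration, not our exact node: the horizontal FOV value and the choice of row band near the floor are assumptions you would tune for the actual camera mounting:

```python
import numpy as np

def depth_to_laserscan(depth_m, hfov_deg=71.9, band=(180, 220)):
    """Collapse a depth image (meters, H x W) into one range per
    image column, like a virtual 2-D laser scan.

    hfov_deg: horizontal field of view of the stereo pair
      (71.9 deg is an assumed value; check your camera's spec).
    band: the rows to scan; picking rows that see near the floor
      lets the scan catch low obstacles a 2D lidar would miss.
    """
    h, w = depth_m.shape
    strip = depth_m[band[0]:band[1], :]          # rows near the floor
    strip = np.where(strip > 0, strip, np.inf)   # 0 = invalid pixel
    ranges = strip.min(axis=0)                   # nearest return per column
    # Angle of each column relative to the optical axis.
    angles = np.deg2rad(np.linspace(hfov_deg / 2, -hfov_deg / 2, w))
    # A planar lidar reports range along the beam, while a depth
    # image reports distance along the optical axis, so divide by
    # cos(angle) to convert.
    ranges = ranges / np.cos(angles)
    return angles, ranges
```

The resulting `(angle, range)` pairs map directly onto a `sensor_msgs/LaserScan` message, so the navigation stack's obstacle layer can consume the virtual scans exactly as it would a real lidar.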

The Robotics division of SHL (SHL Robotics) joins the OpenCV AI Competition #Oak2021

We are pleased to announce that we are officially part of the second phase of the OpenCV AI Competition #Oak2021. Our team joins over 200 teams selected worldwide from among hundreds of participants. As a prize, OpenCV and Luxonis have awarded us a certificate and a free Oak-D camera (to join the three we already owned) to help us develop our self-driving ads robot. Stay tuned for more updates.