A couple of weeks ago I was collecting and labeling driving images to teach #SEDRAD how to recognize the surface to drive on using semantic segmentation. The first deeplabv3+ model running fully on Oak-D cameras is ready and we took it for a spin. It is not perfect, but it is a first step towards… Continue reading “Drivable Path Segmentation Test 1”
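The camera runs the deeplabv3+ network on-device, but its raw output still has to be decoded on the host. A minimal sketch of that decoding step, assuming the model emits per-class logits of shape (num_classes, H, W) and that class index 1 means "drivable" (both are illustrative assumptions, not details from the post):

```python
import numpy as np

def drivable_mask(logits, drivable_class=1):
    """Turn raw segmentation logits (num_classes, H, W) into a
    boolean mask marking pixels predicted as drivable surface."""
    classes = logits.argmax(axis=0)   # per-pixel winning class id
    return classes == drivable_class

# Toy example: a 2-class map where the lower half scores as "drivable"
logits = np.zeros((2, 4, 4))
logits[1, 2:, :] = 5.0               # class 1 dominates in rows 2-3
mask = drivable_mask(logits)
```

The resulting boolean mask can then be overlaid on the RGB frame or passed to the navigation stack as a free-space estimate.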
Our Face Analysis Now Powered by Oak-D
An old Soul Hackers Labs’ trick now powered by Oak-D. People tracking and face detection happen on the edge device. The detected face is fed to SHL’s face analyzer (on the host) to determine the age, gender, emotions, attention level, and viewing time of the tracked person. Metadata is produced and stored to generate useful reports for advertisers. Continue reading “Our Face Analysis Now Powered by Oak-D”
Annotation Process Under Way
One of the steps in teaching the Self-Driving Ads Robot (SEDRAD) how to navigate the environment is to teach it which surfaces are good to drive on. We began the tedious task of collecting images of the surroundings and creating segmentations of the images to later “teach” SEDRAD to recognize the surface… Continue reading “Annotation Process Under Way”
49″ Semi-Outdoor Displays Arriving Soon
Two 49″ semi-outdoor displays have already shipped from China and will soon be installed on the Self-Driving Ads Robot (#SEDRAD). #OAK2021 #OpenCV
ROS+Oak-D: Turning a depth image into a 2-D laser scan for obstacle avoidance.
When we applied to the #OpenCV Spatial AI Competition #Oak2021, the very first issue we told the organizers we were going to solve using an Oak-D stereo camera was the inability of our robot to avoid obstacles located lower than the range of its 2D lidar. Back then we had no idea how we were… Continue reading “ROS+Oak-D: Turning a depth image into a 2-D laser scan for obstacle avoidance.”
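The idea named in the title — collapsing a depth image into a planar laser scan — is conceptually what ROS's depthimage_to_laserscan package does. A minimal numpy sketch of the core geometry (the band rows and camera intrinsics below are illustrative values, not the robot's actual calibration):

```python
import numpy as np

def depth_to_laserscan(depth, fx, cx, band=(100, 140)):
    """Collapse a horizontal band of a depth image (meters, H x W)
    into a 2-D laser scan: one bearing and one range per column."""
    strip = depth[band[0]:band[1], :]        # rows near the optical axis
    z = np.where(strip > 0, strip, np.inf)   # treat 0 as invalid depth
    z_min = z.min(axis=0)                    # closest obstacle per column
    u = np.arange(depth.shape[1])
    x = (u - cx) * z_min / fx                # lateral offset of each ray
    ranges = np.hypot(x, z_min)              # Euclidean range in the plane
    angles = np.arctan2(u - cx, fx)          # bearing of each column
    return angles, ranges
```

The resulting angles/ranges pair maps directly onto a sensor_msgs/LaserScan message, which lets standard 2D obstacle-avoidance nodes consume the depth camera as if it were a lidar.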
The Robotics division of SHL (SHL Robotics) joins the OpenCV AI Competition #Oak2021
We are pleased to announce that we are officially part of the second phase of the OpenCV AI Competition #Oak2021. Our team joins over 200 teams selected worldwide from among hundreds of participants. As a prize, OpenCV and Luxonis have awarded us a certificate and a free Oak-D camera (to join the three others we already… Continue reading “The Robotics division of SHL (SHL Robotics) joins the OpenCV AI Competition #Oak2021”
Soul Hackers Labs has joined AppWorks Batch #13
When it comes to Taiwan and Southeast Asia, no accelerator is bigger or more impactful than AppWorks. With 275 startups accelerated, AppWorks has raised about US$222M. With their vast network of human resources, it is a no-brainer to want to join them. We are happy to announce that starting July… Continue reading “Soul Hackers Labs has joined AppWorks Batch #13”
The Internet of Things Needs a New Kind of Sensor
“I have been looking for some time for a camera to complement my smart home and I came to the conclusion that there is no product in the market that provides a decent solution for the user”, reads the introduction to a blog post I read the other day. This is particularly true of the… Continue reading “The Internet of Things Needs a New Kind of Sensor”
Let Me Hear Your Voice and I Will Tell You How You Feel
Creating mood-sensing technology has become very popular in recent years. There is a wide range of companies trying to detect your emotions from what you write, the tone of your voice, or the expressions on your face. All of these companies offer their technology online through cloud-based programming interfaces (APIs). As part of… Continue reading “Let Me Hear Your Voice and I Will Tell You How You Feel”
Offline Emotion-Specific Speech-to-Text in Low-End Devices
From virtual assistants that fail to respond appropriately to distressed users, to chatbots that turn racist and sexist, there is an increasing urge to embed empathy and emotion in our Artificial Intelligence. Motivated by this, I started this year working on an artificial emotional brain (hardware+software) codenamed “Project Jammin”. Two components of “Project Jammin” are… Continue reading “Offline Emotion-Specific Speech-to-Text in Low-End Devices”