When we applied to the #OpenCV Spatial AI Competition #Oak2021, the very first issue we told the organizers we were going to solve using an Oak-D stereo camera was the inability of our robot to avoid obstacles located below the scanning plane of its 2D lidar. Back then we had no idea how we were going to do this, but we knew a stereo camera could help. In this video we present our solution. The video does not show it in action during autonomous navigation yet, but it explains how we turn depth images from two front-facing Oak-D cameras into two virtual 2D lidars that can detect obstacles located near the floor.
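For the curious, the core of the idea fits in a few lines of Python. This is only a rough sketch with illustrative names, assuming a depth image in meters and known camera intrinsics; the real node also has to filter noise and publish proper scan messages:

```python
import numpy as np

def depth_band_to_scan(depth_m, fx, cx, row_start, row_end):
    """Collapse a horizontal band of depth-image rows (the near-floor region)
    into a lidar-like scan: one range value per image column.

    depth_m            -- HxW depth image in meters (0 or negative = no reading)
    fx, cx             -- horizontal focal length and principal point, in pixels
    row_start, row_end -- rows covering the height band we care about
    """
    band = depth_m[row_start:row_end, :].astype(np.float32)
    band[band <= 0] = np.inf            # ignore invalid pixels
    depth = band.min(axis=0)            # closest obstacle in each column
    cols = np.arange(depth_m.shape[1])
    angles = np.arctan2(cols - cx, fx)  # bearing of each column
    ranges = depth / np.cos(angles)     # depth (z) -> planar range
    return angles, ranges
```

Each camera produces one such scan, which can then be fed to the navigation stack as if it came from a real 2D lidar; the ROS depthimage_to_laserscan package implements a similar idea.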
The Robotics division of SHL (SHL Robotics) joins the OpenCV AI Competition #Oak2021
We are pleased to announce that we are officially part of the second phase of the OpenCV AI Competition #Oak2021. Our team joins over 200 teams selected worldwide from among hundreds of participants. As a prize, OpenCV and Luxonis have awarded us a certificate and a free Oak-D camera (to join the 3 others we already owned) to help us develop our self-driving ads robot. Stay tuned for more updates.

Soul Hackers Labs has joined AppWorks Batch #13
When it comes to Taiwan and South East Asia, no accelerator is bigger and more impactful than AppWorks. With 275 startups accelerated, AppWorks has raised about US$222M. With their vast network of human resources, it is a no-brainer to want to join them.
We are happy to announce that starting July 29, 2016, we will be joining this prestigious institution as part of their batch #13. Soul Hackers Labs, together with over 30 other teams from Taiwan, Hong Kong, Macau, Malaysia, Singapore, and New York, will be spending the next 6 months building a sustainable business.
AppWorks will be providing us with office space, mentorship, connections, and access to computing resources (through partnerships with Amazon AWS and Microsoft BizSpark) to help boost our path to success. We are thrilled to see all the great things we will be building during this time. Please stay tuned for updates as the awesomeness takes place.
The Internet of Things Needs a New Kind of Sensor
“I have been looking for some time for a camera to complement my smart home and I came to the conclusion that there is no product in the market that provides a decent solution for the user”, reads the introduction to a blog post I read the other day. This is particularly true of the Smart Home market, and any other market that deals with humans. Now, making a camera that solves all needs (human tracking, face recognition, people counting, etc …) is not straightforward, but that does not mean companies should not try to cover at least a small set of such desirable solutions. I believe there can be a demand for such things, but most people don’t know they need one yet.
The problem, it seems to me, is one of perception. While most people consider a temperature or light sensor a simple piece of plug-and-play hardware that can be easily connected to a Smart Hub, camera solutions tend to be thought of as complex projects developed for a specific task and product. Why should this be the case? Many people, from Internet of Things (IoT) companies to makers, could benefit from an advanced plug-and-play “camera sensor”: one that could easily be plugged into your home hub, car, or any other Internet-enabled device and give access to its rich data (people identified, objects recognized, etc.) through its application programming interface (API).
Because I believe such a device should exist, and because I believe that, once available, many will benefit from it, I decided to create one such smart camera sensor. I am happy to introduce an early prototype of Project Jammin’s Face Sensor for the IoT, whose primary goal is to sense human emotions in real-time and without the need for cloud-based services. This sensor will offer the following functionalities and advantages out-of-the-box:
- Facial emotion analysis
- Face recognition
- Attention tracking
- Offline and real-time processing
- Small and affordable
- Privacy protection: no need to send video data to the cloud
- Convenient API to collect detected emotions, faces, attention
- Ability to build your own apps and systems with emotion sensing
This product, once it goes into production, will be suitable for retail, where it can be used to detect people’s reactions and attention to products. In education, such a sensor can be used to monitor kids and determine the best study times, preferred topics, etc. Smart homes could benefit by adding emotion-based automation: just imagine if your home could adjust the lights, temperature, and music based on how you feel. Healthcare is another area in which this sensor could be useful: placed in front of sick patients or the elderly, it could help monitor their recovery based on their emotions. The limit is your imagination. While emotion detection is not a new thing, I have not yet found an offering that just works, like the proximity, temperature, and other sensors that now proliferate.
Please find in the following video a demo of Project Jammin’s Face Sensor for the IoT:
Let Me Hear Your Voice and I Will Tell You How You Feel
Creating mood sensing technology has become very popular in recent years. There is a wide range of companies trying to detect your emotions from what you write, the tone of your voice, or from the expressions on your face. All of these companies offer their technology online through cloud-based programming interfaces (APIs).
As part of my offline emotion sensing hardware (Project Jammin), I have already built early prototypes of facial expression and speech content recognition for emotion detection. In this short article I describe the missing part, a voice tone analyzer.
In order to build a tone analyzer, it is necessary to study the properties of the speech waveform (a two-dimensional representation of a sound). Waveforms are also known as time-domain representations of sound, as they capture changes in intensity over time. For more details about the waveform you can refer to this interesting page.
Using software specifically designed to analyze speech, the idea is to extract certain characteristics of the waveform that can be used as features to train a machine learning classifier. Given a collection of speech recordings, manually labelled with the emotion expressed, we can construct vector representations of each recording using the extracted features.
The features used in emotion detection from speech vary from work to work, and sometimes even depend on the language analyzed. In general, many research and applied works use a combination of pitch, Mel Frequency Cepstral Coefficients (MFCC), and the formants of speech.
Once the features are extracted and the vector representations of speech constructed, a classifier is trained to detect emotions. Several types of classifiers have been utilized in previous works. Among the most popular are Support Vector Machines (SVM), Logistic Regressions (Logit), Hidden Markov Models (HMM), and Neural Networks (NN).
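To make this concrete, here is a minimal sketch of the feature extraction and training steps in Python, assuming librosa and scikit-learn. It is not my exact code, and the feature set is a simplified stand-in (MFCC statistics plus pitch statistics):

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def speech_features(path):
    """Summarize one recording as a fixed-length vector:
    mean/std of 13 MFCCs plus mean/std of the pitch (f0) track."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, frames)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # per-frame pitch estimate
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0)]])

# X: one feature vector per labelled recording, y: the emotion labels
# X = np.vstack([speech_features(p) for p in recording_paths])
# clf = SVC(kernel="rbf").fit(X, y)
```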
As an early prototype I have implemented a simplified version of an emotion detection classifier. Instead of detecting several emotions like joy, sadness, anger, etc., my tone analyzer performs a binary classification to detect the level of arousal of a user. A high level of arousal is associated with emotions like joy, surprise, and anger whereas a low level of arousal is associated with emotions like sadness and boredom. The video below shows my tone analyzer running on a Raspberry Pi. Enjoy!
Offline Emotion-Specific Speech-to-Text in Low-End Devices
From virtual assistants that fail to respond appropriately to distressed users, to chat bots that go racist and sexist, there is an increasing urge to embed empathy and emotions in our Artificial Intelligence. Motivated by this, I started this year working on an artificial emotional brain (hardware+software) codenamed “Project Jammin”.
Two components of “Project Jammin” are currently ready, a very basic facial expression detector, and an emotion classifier from text. Since this project is not meant to run on a phone or computer, but instead be a component of any connected hardware (or robot), the big missing part was a speech-to-text interface. In the past few days I have been working to implement this missing part.
After some research, and an attempt to balance performance, speed, and low-resource consumption, I decided to use the popular Pocketsphinx library. This library has been widely used with low cost hardware like the Raspberry Pi (which I am actually using to build my prototype). The installation process was smooth, but once I tested it with the built-in language model, the performance was terrible. The tool could not recognize a single phrase I said correctly.
After some research, I found out how to create my own language models. Since I am interested in a system that can transcribe conversational utterances (as opposed to dictation, for instance), I decided to collect a chat log to create my model. After spending some time collecting data I was able to obtain a chat dataset with over 700k sentences. I then trained a trigram language model with a dictionary consisting of the 20k most frequent words. I was very excited: this chat log seemed big enough to cover most sentences we say in a regular conversation.
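Swapping in the new model is just a matter of pointing the decoder at the new files. With the pocketsphinx Python bindings it looks roughly like this (exact option names vary a bit between versions of the bindings, and the file names are mine):

```python
from pocketsphinx import LiveSpeech

# Decode live microphone audio using the custom trigram LM and the
# 20k-word dictionary built from the chat corpus (paths are illustrative).
speech = LiveSpeech(
    lm='chat_20k.lm',      # trigram language model
    dict='chat_20k.dict',  # pronunciation dictionary
)

for phrase in speech:
    print(phrase)          # best hypothesis for each utterance
```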
After running the code with the new model for the first time, my initial smile faded away quickly. Although the accuracy was much higher than with the built-in model, the transcribed text was always very different from what I spoke into the mic. After tuning different parameters and testing over and over again, I never attained a decent performance. Out of desperation, I decided to make one last test.
After manually inspecting the dataset, I noticed that there were many sentences that did not really matter in an emotion-detection context (e.g. “I will see you tomorrow”). With this in mind, I defined a small set of mood-related keywords (happy, afraid, …) as well as some words related to relationships (family, husband, …), and filtered out any sentence not containing the keywords (the sketch below shows this step). The result was a smaller dataset of about 5k sentences. Next I trained a new language model. The model was way smaller than the previous one, and only had around 3k unique words, but surprisingly, the recognition rate jumped dramatically.
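The filtering step itself is nothing fancy; here is a sketch of the idea, with the keyword lists heavily abbreviated and purely illustrative:

```python
MOOD_WORDS = {'happy', 'sad', 'angry', 'afraid', 'love', 'hate'}        # abbreviated
RELATION_WORDS = {'family', 'husband', 'wife', 'friend', 'mom', 'dad'}  # abbreviated
KEYWORDS = MOOD_WORDS | RELATION_WORDS

def keep(sentence):
    """Keep a sentence only if it contains at least one keyword."""
    return bool(set(sentence.lower().split()) & KEYWORDS)

with open('chat_log.txt') as src, open('chat_filtered.txt', 'w') as dst:
    for line in src:
        if keep(line):
            dst.write(line)
```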
Although this simple model is unable to detect every single phrase you say, it can recognize in near real-time many, if not most, emotion-loaded key-phrases. Later I will combine this with other inputs like facial expression recognition (done) and tone of voice detection (future work). The idea is to have different weak detectors working together towards a more robust emotion classification. In the near future, when all the parts are working together, I will let you know if I was right or if I was just dreaming. Meanwhile, check out this video of my speech-to-text and textual emotion classification working together.
Detecting Emotion in Faces Using Geometric Features
Recognizing emotions in facial expressions is relatively straightforward for humans, and in recent times machines are getting better at it too. The applications of emotion-detecting computers are numerous, from improving advertising to treating depression, the possibilities are limitless. Motivated mainly by the impact in mental health that such technology can have, I started building my own emotion recognition technology.
In a previous post I described a quick test in which I used ideas drawn from research on how facial expressions are decomposed. In this simplified scenario a computer distinguished between sad and happy faces by detecting facial landmarks (points of the eyes, mouth, etc.) and using one simple geometric feature of the mouth (representing a Lip Corner Puller). That single-rule algorithm was correct 76% of the time. As usual I quickly got overexcited and started defining other geometric features to improve the accuracy and extend the approach to six basic emotions (anger, disgust, fear, joy, sadness, and surprise).
To detect a Cheek Raiser, which narrows the eyelids and is most obvious when we laugh, I used the ratio of the height to the width of the eyes. To detect an Inner Brow Raiser, which raises the inner brows and is characteristic of emotions like sadness, fear, and surprise, I computed the slope of a line crossing the landmarks representing the inner and outer brows.
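With dlib's standard 68-landmark indexing (each eye has six points, 36-41 and 42-47; each brow has five, 17-21 and 22-26), both features reduce to a few lines. This is a sketch of the idea rather than my exact code:

```python
import numpy as np

def eye_aspect_ratio(eye_pts):
    """Height-to-width ratio of one eye, given its six landmarks in order
    (dlib indices 36-41 for one eye, 42-47 for the other). A Cheek Raiser
    narrows the eye, so this ratio drops."""
    width = np.linalg.norm(eye_pts[3] - eye_pts[0])
    height = (np.linalg.norm(eye_pts[1] - eye_pts[5]) +
              np.linalg.norm(eye_pts[2] - eye_pts[4])) / 2.0
    return height / width

def brow_slope(outer_pt, inner_pt):
    """Slope of the line from the outer to the inner brow landmark.
    In image coordinates (y grows downward), an Inner Brow Raiser lifts the
    inner end of the brow and changes this slope accordingly."""
    dx = inner_pt[0] - outer_pt[0]
    dy = inner_pt[1] - outer_pt[1]
    return dy / dx if dx != 0 else 0.0
```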
As you can guess by now, manually identifying geometric features to represent the nearly 20 actions needed for the 6 basic emotions got crazy hard pretty quickly. Not to mention that many were simply impossible to define using the landmarks alone (a Brow Lowerer mostly just wrinkles the forehead). Even if I could successfully define them all, determining how to effectively combine them to detect an emotion would be impossible by hand.
So I went back to machine learning, which essentially lets a machine learn how to efficiently combine features to classify or detect things. To make my life easier, instead of manually defining the geometric features, I decided to just feed the machine a series of lengths of the lines forming a face mesh (as described here). The idea is that such lengths will vary from emotion to emotion as a representation of muscle contractions and extensions.
Given such lengths, 178 to be more precise, a classifier can be trained to recognize different emotions. In my particular case I tried the popular Support Vector Machines (SVM) and a Logistic Regression (Logit) classifier, trained on around 20,000 low-res images (48×48 pixels). Logit gave better results across 3 completely different test sets. For the NimStim Face Stimulus Set (574 faces) it achieved 54% accuracy, for a subset of images crawled from flickr user The Face We Make (850 faces) it achieved 55%, and for a set collected from Google Image Search and manually labeled by me (734 faces) it achieved 49% accuracy. The performance is not exactly human-like, and there are certainly systems that are way more accurate, but it is worth remembering that it only uses 178 features and was trained in less than a minute on a laptop (as opposed to hours on multiple GPUs for state-of-the-art systems).
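As a rough illustration of that pipeline (the exact set of 178 mesh lines comes from the face mesh mentioned above, so the edge list here is a placeholder):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def edge_lengths(landmarks, edges):
    """landmarks: (68, 2) array of (x, y) points from dlib's shape predictor.
    edges: list of (i, j) landmark-index pairs defining the face-mesh lines.
    Returns one length per edge, normalized by the outer-eye-corner distance
    so the features are roughly scale-invariant."""
    pts = np.asarray(landmarks, dtype=float)
    scale = np.linalg.norm(pts[36] - pts[45]) + 1e-6
    return np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in edges]) / scale

# X: one row of edge lengths per training face, y: emotion labels (0-5)
# logit = LogisticRegression(max_iter=1000).fit(X, y)
# svm = SVC(kernel='linear').fit(X, y)
```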
Finally, some papers I have surveyed mention that state-of-the-art accuracy can be achieved by combining geometric features with texture features. Texture features can be used to detect wrinkles in the forehead, nose, and other parts of the face resulting from certain facial expressions. In the near future I will learn how to extract and try such features.
Facial Emotion Recognition: Single-Rule 1–0 DeepLearning
In my attempt to build Artificial Emotional Intelligence I first turned my head to Deep Learning, the main reason being its recent success at cracking Computer Vision tasks, as I am currently working on the part that detects emotions from faces (I already have the part that understands the content of our written words). So I spent the last month and a half taking online courses, reading online books, and learning a Deep Learning tool. To be honest, that was the easy part. The real challenge was amassing a decent dataset of faces classified by emotion. Why is that a challenge? Because Deep Learning algorithms are data-hungry!
In order to get a decent dataset, I collected face pics from Google Images and cropped the faces with OpenCV (as described here). I was able to collect several thousand pics, but my annotation approach failed because many pics either did not contain a face or did not show the right emotion. In the end, I was left with just around 600 pics, a useless number for hungry Neural Networks. Looking around the Internet, I was able to crawl a pre-labeled, yet still small, set from flickr user The Face We Make (TFWM). In a desperate (and probably my smartest) move, I asked on the MachineLearning subreddit and someone saved my life: I was pointed to a collection from a Kaggle competition with over 35K pics labeled with 6 emotions plus a neutral class. I was suddenly filled with hope.
Armed with a larger dataset and my beginner’s skills in Deep Learning, I modified the two TensorFlow MNIST sample networks (the deeper one is sketched below) to train them with the 35k pics and test them with the TFWM set. I was thrilled and full of anticipation while the code was running; after all, the simplest code (simple MNIST), a modest softmax regression, achieves 91% accuracy, while the deeper code (deep MNIST), a two-layer Convolutional Network, achieves around 99.2% accuracy on the MNIST dataset. What a huge disappointment when the highest accuracy I got was 14.7% and 21.8% respectively. I then tried changing a few parameters like learning rates, number of iterations, and switching from Softmax to ReLU in the last layer, but things did not change much. Somehow I felt cheated, so before spending more time exploring Deep Learning in order to build more complex networks, I decided to try a small experiment.
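For reference, the deeper of the two networks (two convolutional layers, a dense layer, dropout, and a softmax output, following the "Deep MNIST for Experts" tutorial) looks roughly like this once adapted to 48×48 faces and 7 classes. I am showing it with the Keras API for brevity; the original tutorial code used lower-level TensorFlow:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),                 # grayscale face crops
    tf.keras.layers.Conv2D(32, 5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(7, activation='softmax'),    # 6 emotions + neutral
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_faces, train_labels, epochs=10,
#           validation_data=(test_faces, test_labels))
```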
Having been working with emotions for a while, I have become familiar with research from psychologist Paul Ekman. In fact, most emotions detected by systems are somehow based on his proposed 6 basic emotions. The work that inspired my experiment is the Facial Action Coding System (FACS), a common standard to systematically categorize the physical expression of emotions. In essence, FACS can describe any emotional expression by deconstructing it into the specific Action Units (the fundamental actions of individual muscles or groups of muscles) that produced the expression. FACS has proven useful to psychologists and to animators, and I believe most emotion detection systems adopt it. FACS is complex, and developing a system that uses it from scratch might take a long time. In my simple experiment, I identified 2 Action Units that are relatively easy to detect in still images: the Lip Corner Puller, which draws the angle of the mouth superiorly and posteriorly (a smile), and the Lip Corner Depressor, which is associated with frowning (and a sad face).
To perform my experiment, I considered only two emotions, namely joy and sadness. To compare with the adapted MNIST networks, I created a single-rule algorithm as follows. Using dlib, a powerful toolkit containing machine learning algorithms, I detected the faces in each image with the included face detector. For any detected face, I used the included shape detector to identify 68 facial landmarks. From all 68 landmarks, I identified 12 corresponding to the outer lips.
Once having the outer lips, I identified the topmost and the bottommost landmarks, as well as the landmarks for the corners of the mouth. You can think of such points as constructing a bounding box around the mouth.
Then the simple rule is as follows. I compute a mouth height (mh) as the difference between the y coordinates of the topmost and bottommost landmarks. I set a threshold (th) as half that height (th = mh/2). The threshold can be thought of as the y coordinate of a horizontal line dividing the bounding box into an upper and a lower region.
I then compute the two “lip corner heights” as the difference between the y coordinates of the topmost landmark and each mouth corner landmark. I take the maximum (max) of the “lip corner heights” and compare it to th. If max is smaller than the threshold, it means that the corners of the lips are in the top region of the bounding box, which represents a smile (the Lip Corner Puller). If not, then we are in the presence of a Lip Corner Depressor action, which represents a sad face.
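Put together, the whole rule fits in a handful of lines. Here is a sketch using dlib's 68-landmark indexing, where the 12 outer-lip points are indices 48-59 and the mouth corners are points 48 and 54:

```python
import numpy as np

def is_smile(landmarks):
    """Single-rule joy/sadness decision from dlib's 68 facial landmarks.
    landmarks: (68, 2) array of (x, y); note y grows downward in image coords."""
    lips = np.asarray(landmarks[48:60], dtype=float)  # 12 outer-lip points
    top_y = lips[:, 1].min()                          # topmost outer-lip landmark
    bottom_y = lips[:, 1].max()                       # bottommost outer-lip landmark
    threshold = (bottom_y - top_y) / 2.0              # th = mh / 2
    left_corner_y, right_corner_y = landmarks[48][1], landmarks[54][1]
    # "Lip corner heights": distance from the top of the mouth box to each corner
    max_corner_height = max(left_corner_y - top_y, right_corner_y - top_y)
    # Corners in the upper half of the box -> Lip Corner Puller -> joy
    return max_corner_height < threshold
```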
With this simple algorithm in place I then performed the experiment. For the MNIST-based networks, I extracted the relevant faces from the Kaggle set and ended up with 8989 joy and 6077 sad training faces. For testing I had 224 and 212 faces respectively from the TFWM set. After training and testing, the simple MNIST network obtained 51.4% accuracy and the deep MNIST network 55%, a significant improvement over the 7-class version, but still a very bad performance. I then ran the single-rule algorithm on the same test set. Surprisingly, this single rule obtained an accuracy of 76%, a 21-point improvement over the deep MNIST network.
There has been a long debate on whether Deep Learning algorithms are better than custom algorithms built based on some domain knowledge. Recently Deep Learning has outperformed many such algorithms in Computer Vision and Speech Recognition. I have no doubt about the power of Deep Learning, however, much has been said about how difficult it is to build a good custom algorithm and how easy it is to build a good neural network. The single-rule algorithm I just described is very simple and far from being a realistic system, however, this simple algorithm built in an afternoon beat something that took me over a month to understand. This is not by any means a definitive answer to the debate, but makes me wonder if custom algorithms are ready to be replaced by their Deep Learning counterparts. Custom algorithms are not only good, but as expressed in a previous post, they also give you the satisfaction of fully understanding what’s going on inside, a priceless feeling.
This was an experiment born out of desperation and curiosity; it was never meant to deny or criticize the power of Deep Learning. All opinions expressed were felt during that particular moment and may change over the course of my journey, in which I plan to build both a Deep Learning and a custom facial emotion expression classifier.
Figures 1 and 2 were obtained from the amazing online tool ARTNATOMY by Victoria Contreras Flores (Spain 2005).
Thoughts on Motivation and Self-Motivated Software
Raised by a non-traditional Latin mother, I learned from an early age how to help with the housework; I hated it nonetheless. Now that I am married to a non-traditional Vietnamese woman, she has made sure the housework is distributed almost equally between the two of us. At home I am in charge of washing the dishes and clothes, and cleaning the floors and toilets. I love being useful almost as much as I hate the tasks I have to do. That’s why every time I see my one-year-old son sporting a wide smile while using his bibs to clean his toys, walls, and basically everything in his way, I am shocked. How can he enjoy so much something that I have to push myself to do?
My wife quickly explained to me that, in order to keep a clean environment in my son’s daycare center, the teachers are always cleaning the walls, floors, and toys. One of the main ways in which babies learn is imitation, so it makes perfect sense that my son tries to replicate what the people taking care of him do. When anyone sees my son cleaning they applaud and laugh, and he bursts into laughter too. He seems to enjoy all these smiles, which probably motivates him to keep cleaning. The same seems to be true of adults: we often behave in ways that increase our acceptance by our peers, usually perceived in the form of smiles and flattery.
It is said that state-of-the-art emotion detection technologies can now achieve human-like accuracy at detecting happiness and sadness from facial expressions, tone of voice, and the content of our words. I wonder why no one has tried to replicate the motivation shown by babies to create better software. Something as simple as letting a virtual home assistant automatically play music or TV shows that it has detected we like, by reading our smiles through a home camera or by what we tweet or post on Facebook. Is it really that difficult? Artificial Intelligence has already beaten the world champions at Chess, Jeopardy, and Go, so why can’t it beat a one-year-old at showing empathy?
Don’t Feed Me, Teach Me How to Fish
“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime,” reads the old proverb. After jumping on the Deep Learning bandwagon (but this applies to Big Data, and any other bandwagon), I (my Neural Nets) have been constantly fed, and it has been great. I have been especially enchanted by how Convolutional Neural Networks (CNNs, ConvNets) have performed at classifying handwritten numbers, detecting cats, and spotting facial landmarks. The problem is that the feast was already cooked and readily served on my (NNs’) table. It was not my show, it was someone else’s.
If you are an expert in Machine Learning you will read this and think “he finally got it”. If you are just beginning and are still in the tutorials (there are many, for many frameworks/packages/libraries, all convincing you they are the best), then you probably still don’t get my feeling. You might say “but the X tutorial on Y package explained very well (and pointed to nice materials) how a CNN works”, and you are right, but what about the datasets? Data are the fish for our CNNs (and our RNNs and any other NN), but the data was not just fished (or hunted) for us (and our NNs); it was already nicely cooked and seasoned (cleaned, labeled, formatted).
When people claim Deep Learning will bring Machine Learning to the masses they are right, but except for a few experts, and others sitting on large amounts of data, the masses will be largely playing with MNIST, CIFAR-10/100, and other datasets available through Kaggle competitions. You still don’t know what I mean? I invite you to follow a TensorFlow tutorial (the same applies for any other library, they all use the same datasets), skip all the explanations, just copy and paste the code, then see the magic happening. After the ecstasy wears off, define a new task, not recognizing numbers or airplanes, but maybe detecting emotions in faces (like I did) and good luck my friend.
Deep Learning, like Big Data before it, is not bringing Machine Learning (or Data Science) to the masses in the right way. It feels more like politicians before elections, coming to the less fortunate with loads of food in order to win votes (this happens all the time in my country, Honduras). To really bring Machine Learning to the masses, it is necessary to teach us the nice tricks that top-notch scientists use to collect and prepare data. Things I have done myself come to mind, like crawling data sources (Twitter, Instagram, Google, …), crowdsourcing labeling and cleaning tasks, and other more advanced techniques. To bring Machine Learning to the masses, stop feeding them and teach them how to fish.
This is not a critique of Deep Learning. I understand there is a large number of people with some experience dealing with data, and Deep Learning can become a very useful tool for them. The raison d’être of this article is that many are irresponsibly saying Deep Learning will bring Machine Learning to the masses, and the masses will come, and the masses will get disappointed, and suffer.