Creating mood-sensing technology has become very popular in recent years. A wide range of companies are trying to detect your emotions from what you write, the tone of your voice, or the expressions on your face. All of these companies offer their technology online through cloud-based application programming interfaces (APIs).
As part of my offline emotion-sensing hardware (Project Jammin), I have already built early prototypes of facial expression and speech content recognition for emotion detection. In this short article I describe the missing part: a voice tone analyzer.
In order to build a tone analyzer, it is necessary to study the properties of the speech waveform (a two-dimensional representation of a sound). Waveforms are also known as time-domain representations of sound because they show how intensity changes over time. For more details about the waveform you can refer to this interesting page.
The idea is to use software specifically designed to analyze speech to extract certain characteristics of the waveform that can be used as features to train a machine learning classifier. Given a collection of speech recordings, manually labelled with the emotion expressed, we can construct vector representations of each recording using the extracted features.
The features used in emotion detection from speech vary from work to work, and sometimes even depend on the language analyzed. In general, many research and applied systems use a combination of pitch, Mel Frequency Cepstral Coefficients (MFCCs), and speech formants.
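To make this concrete, below is a minimal sketch of how such features could be extracted in Python with the librosa library. The file name, the choice of 13 MFCCs, and the crude pitch estimate are my own assumptions for illustration; the post does not specify the exact features or tooling used.

```python
# A minimal feature-extraction sketch, assuming librosa and a local
# recording "utterance.wav"; the exact features used in Project Jammin
# are not specified beyond pitch, MFCCs, and formants.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)        # mono audio at 16 kHz

# 13 Mel Frequency Cepstral Coefficients per frame
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# A crude per-frame pitch estimate: strongest pitch candidate in each frame
pitches, magnitudes = librosa.piptrack(y=y, sr=sr)
pitch = pitches[magnitudes.argmax(axis=0), np.arange(pitches.shape[1])]

# One fixed-length vector per recording: frame-level means and deviations
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [pitch.mean(), pitch.std()]])
```

A vector like this, one per labelled recording, is what would then feed the classifier described above.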
Two components of “Project Jammin” are currently ready: a very basic facial expression detector and an emotion classifier from text. Since this project is not meant to run on a phone or computer, but instead to be a component of any connected hardware (or robot), the big missing part was a speech-to-text interface. In the past few days I have been working to implement this missing part.
After some research, and an attempt to balance performance, speed, and low resource consumption, I decided to use the popular Pocketsphinx library. This library has been widely used with low-cost hardware like the Raspberry Pi (which I am actually using to build my prototype). The installation process was smooth, but once I tested it with the built-in language model, the performance was terrible. The tool could not correctly recognize a single phrase I said.
After more digging, I found out how to create my own language models. Since I am interested in a system that can transcribe conversational utterances (as opposed to dictations, for instance), I decided to collect a chat log to create my model. After spending some time collecting data I was able to obtain a chat dataset with over 700k sentences. I then trained a trigram language model with a dictionary consisting of the 20k most frequent words. I was very excited: a chat log this big should cover most of the sentences we say in a regular conversation.
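For reference, this is roughly how a custom language model and dictionary can be plugged into Pocketsphinx through its older Python bindings. The file names and the acoustic-model path are placeholders, and the post does not show the actual decoding code, so treat this as a sketch rather than the project's implementation.

```python
# A sketch of decoding with a custom trigram LM and dictionary, using the
# classic pocketsphinx Python Decoder API; all paths are placeholders.
from pocketsphinx.pocketsphinx import Decoder

config = Decoder.default_config()
config.set_string('-hmm', '/usr/local/share/pocketsphinx/model/en-us/en-us')  # acoustic model
config.set_string('-lm', 'chat.lm')        # trigram model trained on the chat log
config.set_string('-dict', 'chat.dict')    # 20k-word pronunciation dictionary
decoder = Decoder(config)

# Feed raw 16 kHz, 16-bit mono audio in small chunks
with open('utterance.raw', 'rb') as f:
    decoder.start_utt()
    while True:
        buf = f.read(1024)
        if not buf:
            break
        decoder.process_raw(buf, False, False)
    decoder.end_utt()

print(decoder.hyp().hypstr if decoder.hyp() else "(nothing recognized)")
```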
After running the code with the new model for the first time, my initial smile faded away quickly. Although the accuracy was much higher than with the built-in model, the transcribed text was always quite different from what I spoke into the mic. After tuning different parameters and testing over and over again, I never attained a decent performance. Out of desperation, I decided to make one last test.
After manually inspecting the dataset, I noticed that there were many sentences that did not really matter in an emotion-detection context (e.g. “I will see you tomorrow”). With this in mind, I defined a small set of mood-related keywords (happy, afraid, …) as well as some words related to relationships (family, husband, …), and filtered out any sentence not containing the keywords. The result was a smaller dataset of about 5k sentences. Next I trained a new language model. The model was way smaller than the previous one, with only around 3k unique words, but surprisingly, the recognition rate jumped dramatically.
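The filtering step itself is easy to reproduce. Below is a minimal sketch assuming the chat log is stored one sentence per line; the keyword list shown is only a small illustrative subset of the mood- and relationship-related words actually used.

```python
# A minimal sketch of the keyword filter; the real keyword list was larger.
KEYWORDS = {"happy", "sad", "afraid", "angry", "love", "hate",
            "family", "husband", "wife", "friend"}

def is_relevant(sentence):
    words = set(sentence.lower().split())
    return bool(words & KEYWORDS)       # keep sentences mentioning any keyword

with open("chat_log.txt") as src, open("chat_filtered.txt", "w") as dst:
    for line in src:
        if is_relevant(line):
            dst.write(line)
```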
Although this simple model is unable to detect every single phrase you say, it can recognize in near real-time many, if not most, emotion-loaded key phrases. Later I will combine this with other inputs like facial expression recognition (done) and tone of voice detection (future work). The idea is to have different weak detectors working together towards a more robust emotion classification. In the near future, when all the parts are working together, I will let you know if I was right or if I was just dreaming. Meanwhile, check out this video of my speech-to-text and textual emotion classification working together.
Recognizing emotions in facial expressions is relatively straightforward for humans, and in recent times machines have been getting better at it too. The applications of emotion-detecting computers are numerous; from improving advertising to treating depression, the possibilities are limitless. Motivated mainly by the impact that such technology can have on mental health, I started building my own emotion recognition technology.
In a previous post I described a quick test in which I used ideas drawn from research on how facial expressions are decomposed. In this simplified scenario a computer distinguished between sad and happy faces by detecting facial landmarks (points of the eyes, mouth, etc.) and using one simple geometric feature of the mouth (representing a Lip Corner Puller). That single-rule algorithm was correct 76% of the time. As usual I quickly got overexcited and started defining other geometric features to improve accuracy and extend the approach to the six basic emotions (anger, disgust, fear, joy, sadness, and surprise).
To detect a Cheek Raiser, which basically closes the eyelids and is most obvious when we laugh, I used the ratio of the height to the width of the eyes. To detect an Inner Brow Raiser, which raises the inner brows and is characteristic of emotions like sadness, fear, and surprise, I computed the slope of a line crossing the landmarks representing the inner and outer brows.
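As a rough illustration of these two geometric features, here is how they could be computed from the standard 68-point dlib landmark layout. The specific landmark indices and the one-eye, one-brow simplification are my assumptions, not necessarily the exact definitions used here.

```python
# A sketch of the Cheek Raiser and Inner Brow Raiser features, assuming
# "landmarks" is a list of 68 (x, y) points in the usual dlib ordering
# (right eye: 36-41, right brow: 17-21); indices are illustrative.
import numpy as np

def cheek_raiser_feature(landmarks):
    eye = np.array(landmarks[36:42], dtype=float)
    height = eye[:, 1].max() - eye[:, 1].min()   # vertical extent of the eye
    width = eye[:, 0].max() - eye[:, 0].min()    # horizontal extent of the eye
    return height / width                        # small ratio ~ closed eyelids

def inner_brow_raiser_feature(landmarks):
    outer = np.array(landmarks[17], dtype=float)   # outer end of the brow
    inner = np.array(landmarks[21], dtype=float)   # inner end of the brow
    return (inner[1] - outer[1]) / (inner[0] - outer[0] + 1e-6)  # brow slope
```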
As you can guess by now, manually identifying geometric features to represent the nearly 20 actions necessary to perform the 6 basic emotions got crazy hard pretty quickly. Not to mention that many were impossible to define using the landmarks alone (a Brow Lowerer mostly just wrinkles the forehead). Even if I could successfully define them all, determining how to effectively combine them to detect an emotion would be impossible by hand.
So I went back to machine learning, which essentially lets a machine learn how to efficiently combine features to classify or detect things. To make my life easier, instead of manually defining the geometric features, I decided to just feed the machine a series of lengths of the lines representing a face mesh (as described here). The idea is that such lengths will vary from emotion to emotion as a representation of muscle contractions and extensions.
Given such lengths, 178 to be precise, a classifier can be trained to recognize different emotions. In my particular case I tried the popular Support Vector Machine (SVM) and a Logistic Regression (Logit) classifier, trained on around 20,000 low-res images (48×48 pixels). Logit gave better results across 3 completely different test sets. For the NimStim Face Stimulus Set (574 faces) it achieved 54% accuracy, for a subset of images crawled from flickr user The Face We Make (850 faces) it achieved 55%, and for a set collected from Google Image Search and manually labeled by me (734 faces) it achieved 49% accuracy. The performance is not exactly human-like, and there are certainly systems way more accurate, but it is worth remembering that it only uses 178 features and was trained in less than a minute on a laptop (as opposed to hours on multiple GPUs for state-of-the-art systems).
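For readers who want to try something similar, a minimal scikit-learn setup looks roughly like the sketch below. The random data stands in for the 178 face-mesh lengths and their emotion labels, and the hyperparameters are illustrative defaults rather than the ones actually used.

```python
# A minimal training sketch with stand-in data; replace X and y with the
# real (n_samples, 178) length vectors and their emotion labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 178))     # stand-in for the face-mesh lengths
y = rng.integers(0, 6, size=20000)    # stand-in for the 6 emotion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("SVM accuracy:  ", svm.score(X_te, y_te))
print("Logit accuracy:", logit.score(X_te, y_te))
```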
Finally, some papers I have surveyed mention that state-of-the-art accuracy can be achieved by combining geometric features with texture features. Texture features can be used to detect wrinkles in the forehead, the nose, and other parts of the face that result from certain facial expressions. In the near future I will learn how to extract and try such features.
In order to get a decent dataset, I collected face pics from Google Images and cropped the faces with OpenCV (as described here). I was able to collect several thousand pics, but my annotation approach failed because many pics either did not contain a face or did not show the right emotion. In the end, I was left with just around 600 pics, a useless number for hungry Neural Networks. Looking around the Internet, I was able to crawl a pre-labeled, yet still small, set from flickr user The Face We Make (TFWM). In a desperate (and probably my smartest) move, I asked on the MachineLearning subreddit and someone saved my life. I was pointed to a collection from a Kaggle competition with over 35K pics labeled with 6 emotions plus a neutral class. I was suddenly filled with hope.
Armed with a larger dataset and my beginner's skills in Deep Learning, I modified the two TensorFlow MNIST sample networks to train them with the 35k pics and test them with the TFWM set. I was thrilled and full of anticipation while the code was running; after all, the simplest code (simple MNIST), a modest softmax regression, achieves 91% accuracy, while the deeper code (deep MNIST), a two-layer Convolutional Network, achieves around 99.2% accuracy on the MNIST dataset. What a huge disappointment when the highest accuracies I got were 14.7% and 21.8% respectively. I then tried changing a few parameters like learning rates, the number of iterations, and switching from Softmax to ReLU in the last layer, but things did not change much. Somehow I felt cheated, so before spending more time exploring Deep Learning in order to build more complex networks, I decided to try a small experiment.
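For orientation, the softmax-regression baseline boils down to something like the following, shown here in modern tf.keras form rather than the original TensorFlow 1.x tutorial code I adapted; the shapes match the 48×48 grayscale Kaggle images with 7 classes, and the data loading is omitted.

```python
# A rough modern equivalent of the softmax-regression baseline; this is not
# the exact tutorial code the post adapted.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(48, 48)),   # 2304 raw pixel inputs
    tf.keras.layers.Dense(7, activation="softmax"),  # one weight vector per class
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```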
Having been working with emotions for a while, I have become familiar with research from psychologist Paul Ekman. In fact, most emotions detected by systems are somehow based on his proposed 6 basic emotions. The work that inspired my experiment is the Facial Action Coding System (FACS), a common standard to systematically categorize the physical expression of emotions. In essence, FACS can describe any emotional expression by deconstructing it into the specific Action Units (the fundamental actions of individual muscles or groups of muscles) that produced the expression. FACS has proven useful to psychologists and to animators, and I believe most emotion detection systems adapt it. FACS is complex, and developing a system that uses it from scratch might take a long time. In my simple experiment, I identified 2 Action Units that are relatively easy to detect in still images: the Lip Corner Puller, which draws the angle of the mouth superiorly and posteriorly (a smile), and the Lip Corner Depressor, which is associated with frowning (and a sad face).
To perform my experiment, I considered only two emotions, namely joy and sadness. To compare with the adapted MNIST networks, I created a single-rule algorithm as follows. Using dlib, a powerful toolkit containing machine learning algorithms, I detected the faces in each image with the included face detector. For any detected face, I used the included shape predictor to identify 68 facial landmarks. From all 68 landmarks, I identified the 12 corresponding to the outer lips.
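In code, that detection pipeline looks roughly like the sketch below. The shape-predictor file is dlib's standard pretrained 68-point model (downloaded separately), and the image name is a placeholder.

```python
# A minimal sketch of the dlib face and landmark detection step.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray):
    shape = predictor(gray, rect)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    outer_lips = landmarks[48:60]    # the 12 outer-lip points
```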
Once I had the outer lips, I identified the topmost and the bottommost landmarks, as well as the landmarks for the corners of the mouth. You can think of these points as forming a bounding box around the mouth.
Then the simple rule is as follows. I compute a mouth height (mh) as the difference between the y coordinates of the topmost and bottommost landmarks. I set a threshold (th) as half that height (th = mh/2). The threshold can be thought of as the y coordinate of a horizontal line dividing the bounding box into an upper and a lower region.
I then compute the two “lip corner heights” as the difference between the y coordinates of the topmost landmark and each mouth corner landmark. I take the maximum (max) of the “lip corner heights” and compare it to th. If max is smaller than the threshold, the corners of the lips are in the top region of the bounding box, which represents a smile (the Lip Corner Puller). If not, then we are in the presence of a Lip Corner Depressor action, which represents a sad face.
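Putting the rule together (and continuing the earlier sketch, where outer_lips holds the 12 outer-lip points in the standard ordering, so indices 0 and 6 are the mouth corners), the whole classifier fits in a few lines. Remember that image y coordinates grow downward, so smaller values sit higher in the face.

```python
# A sketch of the single rule; outer_lips comes from the previous snippet.
def is_smiling(outer_lips):
    ys = [p[1] for p in outer_lips]
    top, bottom = min(ys), max(ys)
    mh = bottom - top                      # mouth height
    th = mh / 2.0                          # midline of the mouth bounding box
    left, right = outer_lips[0], outer_lips[6]          # mouth corners
    corner_heights = [left[1] - top, right[1] - top]    # "lip corner heights"
    # corners above the midline -> Lip Corner Puller (smile),
    # otherwise -> Lip Corner Depressor (sad face)
    return max(corner_heights) < th
```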
With this simple algorithm in place I then performed the experiment. For the MNIST networks, I extracted the related faces from the Kaggle set and ended up with 8989 joy and 6077 sad training faces. For testing I had 224 and 212 faces respectively from the TFWM set. After training and testing, the simple MNIST network obtained 51.4% and the deep MNIST network 55% accuracy, a significant improvement over the 7-class version, but still a very bad performance. I then used the test set and ran the single-rule algorithm. Surprisingly, this single rule obtained an accuracy of 76%, a 21-point improvement over the deep MNIST network.
There has been a long debate on whether Deep Learning algorithms are better than custom algorithms built on domain knowledge. Recently Deep Learning has outperformed many such algorithms in Computer Vision and Speech Recognition. I have no doubt about the power of Deep Learning; however, much has been said about how difficult it is to build a good custom algorithm and how easy it is to build a good neural network. The single-rule algorithm I just described is very simple and far from being a realistic system, yet this simple algorithm built in an afternoon beat something that took me over a month to understand. This is not by any means a definitive answer to the debate, but it makes me wonder if custom algorithms are ready to be replaced by their Deep Learning counterparts. Custom algorithms are not only good, but as expressed in a previous post, they also give you the satisfaction of fully understanding what's going on inside, a priceless feeling.
This was an experiment born out of desperation and curiosity; it was never meant to deny or criticize the power of Deep Learning. All opinions expressed were felt at that particular moment and may change over the course of my journey, in which I plan to build both a Deep Learning and a custom facial emotion expression classifier.
Figures 1 and 2 were obtained from the amazing online tool ARTNATOMY by Victoria Contreras Flores (Spain 2005).
Being raised by a non-traditional Latin mother, I learned from an early age how to help with the housework; I hated it nonetheless. My wife, a non-traditional Vietnamese woman, made sure that the housework was almost equally distributed between the two of us. At home I am in charge of washing the dishes and clothes, and cleaning the floors and toilets. I love being useful almost as much as I hate the tasks I have to do. That's why every time I see my one-year-old son sporting a wide smile while using his bibs to clean his toys, walls, and basically everything in his way, I am shocked. How can he enjoy so much something that I have to push myself to do?
My wife quickly explained to me that in order to keep a clean environment in my son's daycare center, the teachers are always cleaning the walls, floors, and toys. One of the main ways in which babies learn is imitation, so it makes perfect sense that my son tries to replicate what the people taking care of him do. When anyone sees my son cleaning they applaud and laugh, and he bursts into laughter too. He seems to enjoy all these smiles, which probably motivates him to keep cleaning. The same seems to be true of adults: we often behave in ways that tend to increase acceptance by our peers, usually perceived in the form of smiles and flattery.
It is said that state-of-the-art emotion detection technologies can now achieve human-like accuracy at detecting happiness and sadness from facial expressions, tone of voice, and the content of our words. I wonder why no one has tried to replicate the motivation shown by babies to create better software. Something as simple as letting a virtual home assistant automatically play music or TV shows that it has detected we like, by reading our smiles through a home camera or by what we tweet or post on Facebook. Is it really that difficult? Artificial Intelligence has already beaten the world champions at Chess, Jeopardy, and Go, so why can't it beat a one-year-old at showing empathy?
“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime,” reads the old proverb. After jumping on the Deep Learning bandwagon (but this applies to Big Data and any other bandwagon), I (or rather my Neural Nets) have been constantly fed, and it has been great. I have been especially enchanted by how Convolutional Neural Networks (CNNs, ConvNets) perform at classifying handwritten numbers, detecting cats, and spotting facial landmarks. The problem is that the feast was already cooked and readily served on my (NNs') table. It was not my show, it was someone else's.
If you are an expert in Machine Learning you will read this and think “he finally got it”. If you are just beginning and still in the tutorials (there are many, for many frameworks/packages/libraries, all convincing you they are the best) then you probably still don't get my point. You might say “but the X tutorial on Y package explained very well (and pointed to nice materials) how a CNN works”, and you are right, but what about the datasets? Data are the fish for our CNNs (and our RNNs and any other NN), but the data were not just fished (or hunted) for us (and our NNs); they were already nicely cooked and seasoned (cleaned, labeled, formatted).
When people claim Deep Learning will bring Machine Learning to the masses they are right, but except for a few experts, and others sitting on large amounts of data, the masses will be largely playing with MNIST, CIFAR-10/100, and other datasets available through Kaggle competitions. Still don't know what I mean? I invite you to follow a TensorFlow tutorial (the same applies to any other library; they all use the same datasets), skip all the explanations, just copy and paste the code, then watch the magic happen. After the ecstasy wears off, define a new task, not recognizing numbers or airplanes, but maybe detecting emotions in faces (like I did), and good luck, my friend.
Deep Learning, like Big Data before it, is not bringing Machine Learning (or Data Science) to the masses in the right way. It feels more like politicians before elections, coming to the less fortunate with loads of food in order to win votes (this happens all the time in my country, Honduras). To really bring Machine Learning to the masses it is necessary to teach us the tricks that top-notch scientists use to collect and prepare data. Things I have done myself come to mind, like crawling data sources (Twitter, Instagram, Google, …), crowdsourcing labeling and cleaning tasks, and other more advanced techniques. To bring Machine Learning to the masses, stop feeding the masses; teach them how to fish.
This is not a critique of Deep Learning. I understand there is a large number of people with some experience dealing with data, and Deep Learning can become a very useful tool for them. The raison d'être of this article is that many are irresponsibly saying Deep Learning will bring Machine Learning to the masses, and the masses will come, and the masses will get disappointed, and suffer.
During my quest to build emotion detection from faces, I had always assumed there was no available system to compare my own with. Not because there are no companies or hobbyists already building such a thing, but because the companies making the headlines keep the details of their work obscure. Turns out I was wrong. Last November Microsoft Research released, as part of their ongoing Project Oxford, a fantastic facial emotion detection demo, and I just found out. How could I miss it? I don't know, but guess what? I now have something to compare my own system (after I build it) with.
So just how good is this MS system? It is darn good! For instance, I tried to use OpenCV to detect and crop the face in the image below to add it to my own dataset, but it was unable to detect a face in it (bad news for me since I was planning to use OpenCV as part of my system). Have a look at what the MS system did:
It correctly detected the face and the expressed emotion. So it seems like I will have a tough time trying to beat Microsoft. You may think I am crazy to even try. The thing is, this is extremely motivating and exciting. I want to see how far I can get. Whether I beat MS or not is not important; this is my training, my personal journey, and who knows, maybe once again David will defeat Goliath.
Teaching a machine how to recognize objects (cars, houses, and cats) is a difficult task; teaching it to recognize emotions is another story. If you have been following my posts, then you know that I want to teach machines to recognize human emotions. One important way in which machines can detect our feelings is by reading our faces. Teaching a machine to read faces has many challenges, and now that I have started to tackle this problem I have encountered my first big one.
Deep Learning, a powerful tool for teaching machines, seems promising for the task at hand, but in order to make use of it I needed to find the materials to teach the machine. Let me use an analogy to explain. For humans to learn to recognize objects, or in our specific case facial expressions, a person has to be exposed to many faces. That's not a big deal, as we see faces everywhere from the second we are born. On the other hand, we don't really have tools to take a computer into the wild and let it learn. So my big challenge was finding pictures or videos of people showing emotions on their faces, to feed to the machine and let it learn.
Companies like Google and Facebook, and some big labs in prestigious universities, have access to an enormous amount of data (just think of how many faces people tag on Facebook). However, mere mortals like me have to find less straightforward ways to collect modest amounts of data to teach our machines. So let me start by defining exactly what data I wanted to collect. To teach my machine to recognize emotions from facial expressions, I needed to collect pictures of faces expressing some emotion (angry faces or happy faces), and at the same time I needed to explicitly tell the machine what the emotion is (this face shows anger). To be more exact, what I need to feed the machine is a collection of data pairs of the form [picture, emotion]. The question now is how to obtain such data.
First let me quickly tell you how you should not obtain this data. Many, including me, would first think about manually collecting thousands of pics from different sources (personal photos, Facebook, etc.), using a photo app to crop the faces (the learning is more efficient if the pic contains just the face), and manually defining the emotion tag. This is time-consuming and not scalable. Let me explain what I did instead. First, many companies offer automatic ways to pull data from their servers. The obvious choice for pics might then be Instagram (not Facebook, as its data is not public). The problem with Instagram is that it's not easy to specify that you want pics with faces. So in order to get exactly what I needed (faces with emotional expressions) my best choice was Google.
Google offers the Custom Search API, a tool that lets programs pull data based on queries, much like humans would using the Google website. This was perfect for me; to understand why, try the query “scared look” on Google (then go to Images). So now I had an automatic way to get faces expressing emotions, and I did not have to manually identify the emotion (it comes from the query); a minimal query sketch appears a bit further below. But wait, what about this picture:
“Big Man With Angry Eyes Points His Gun To Your Face”, obtained using the query “angry look”
The image was obtained with the query “angry look”, and it clearly has an angry face in it, but it also has an upper body, a gun, and many watermarks. This is not good, as it will confuse my machine. How about this picture, obtained with the query “sad person”:
It clearly has no sad person; it has no person at all, just a table. So while in most cases (when using appropriate queries) you will obtain faces with the intended emotion (like the angry man), they will most likely come with extra noise, or sometimes not contain a face at all. Again, the best way to deal with this is not manually, but by using Computer Vision tools to remove the noise automatically.
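Before the cleanup step, here is roughly what the query side looks like with the Custom Search JSON API. You need your own API key and custom search engine id; the parameter names follow the public API, but pagination, error handling, and rate limiting are left out of this sketch.

```python
# A minimal sketch of pulling image results from the Google Custom Search API;
# the key and cx values are placeholders for your own credentials.
import requests

API_KEY = "YOUR_API_KEY"
CX = "YOUR_SEARCH_ENGINE_ID"

def image_urls(query, start=1):
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": query,
                "searchType": "image", "start": start},
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]

for url in image_urls("angry look"):
    print(url)
```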
After submitting many queries and downloading a few thousand pics (due to rate limitations this might span a few days), I automatically processed all the pics using the popular Computer Vision library OpenCV (free, if you were wondering). OpenCV comes pre-loaded with a set of nice detectors for faces and other features (eyes, mouth, …) in pics. The results are magical:
OpenCV automatically detected a square region containing the face, and with additional commands, my program was able to automatically crop and reduce the face to a size and format appropriate for feeding to my machine. Now what happened to the image without the face? OpenCV did not detect any face in it, so it was automatically ignored. And voilà, that's how I could efficiently (and for free) start building a decent dataset of faces to later teach a computer how to detect our emotions.
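The detection-and-cropping step can be sketched as follows, using OpenCV's bundled frontal-face Haar cascade. The detection parameters and the 48×48 output size are typical choices for this kind of pipeline, not necessarily the exact ones used here.

```python
# A minimal sketch of the OpenCV face detection and cropping step.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("downloaded_pic.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for i, (x, y, w, h) in enumerate(faces):                 # empty if no face found
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # crop and shrink
    cv2.imwrite(f"face_{i}.png", face)
```

Images where the cascade finds nothing simply produce no crops, which is exactly how the face-less table picture gets ignored.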
To conclude, very often (depending on the query) you will find friends like this in the pictures:
and OpenCV will of course return you this beauty:
Whether this is bad or not for the trained machine I still don't know. I will find out when I move on to the training process. Worst case, I have to manually remove a few faces (and other possibly wrongly detected objects). Best case, I have a machine that can tell if my kids are watching happy cartoons.
Disclaimer: I don’t own any picture used in this article. Pictures will be removed if requested.
As much as I tried not to fall for the hype recently gained by Deep Learning, I could not really resist exploring its promises. Let me quickly explain. In order to build real AEI I wanted to start with the component that can understand our words. This belongs to the fields of Natural Language Processing (NLP) and Computational Linguistics (CL). Building powerful and useful NLP/CL systems is extremely challenging. It took me nearly 3 years to build a system that can guess your emotions from what you write, and the accuracy is far from perfect. The reason is that such systems are traditionally built using manually defined rules, features, and algorithms tailored for specific tasks.
Deep Learning, on the other hand, promises to replace handcrafted features with efficient algorithms able to “learn” the features automatically from some input data, saving you all the hard work. So yeah! When you think about it, it makes sense to want to give it a try. And so I did. First I studied the basics of Artificial Neural Networks using the awesome Coursera Machine Learning Course. Then, to complement that knowledge, I read this great online book and checked these fantastic video tutorials. All that taught me to play with toy Deep Networks using code fully written by me. When I was ready I jumped to TensorFlow, a full-fledged Deep Learning software library, and followed their tutorials to train Deep Networks to classify handwritten characters. My reaction? A rush of elation followed by a bit of disappointment.
Don't get me wrong, Deep Learning is awesome. There is mathematical proof that, in theory, such networks can solve any problem. The handwritten character classification tutorial, although simple, hints at that. Yet there is something about Deep Learning that leaves a sour taste in the mouth. During my previous research project, I always felt I was in control, and in most cases I could justify why things worked. With Deep Learning, it all felt like magic. Except for the valid mathematical intuition, you can't really understand what's going on inside the black box that the constructed network is. Moreover, even the state-of-the-art systems were constructed in an empirical way, by testing different network architectures until finding the best performer, with little clue of why one performs better.
So yes, Deep Learning can solve complex problems, yes it can save time and effort, but without clear understanding of what is going on inside, it might lead to many frustrations in the process. As soon as I move past the tutorials and into developing the first part of my AEI, I will post more about my feelings towards Deep Learning.
Apple just bought Emotient, a startup that uses Artificial Intelligence to read people’s emotions by analyzing their facial expressions. Also, in October of 2015 it bought VocalIQ, another startup that uses speech technology to teach machines to understand the way people speak. As usual, Apple did not disclose the reasons behind the acquisitions.
Why is Apple interested in such technologies? To me the reasons are clear. If you have interacted with Siri, Apple's virtual assistant for mobile devices, you have probably discovered how limited it is. My guess is that the company is trying to inject Siri with Artificial Emotional Intelligence (or some sort of Artificial Emotional Brain), in an attempt to make interactions with the system much more natural. The missing piece of the puzzle? Technology to understand not just our faces and tones, but the implicit emotions hidden in the meaning of our words. If you haven't yet, please have a look at my demo of Emotion Detection from Text.