Deep Feelings About Deep Learning

So I want to build Artificial Emotional Intelligence (AEI), and I have already written about one possible application: treating mental health problems. Even big guns like Apple Inc. are trying to build AEI (for reasons they keep obscure). The obvious first step when you want to build something is to study and do research.

As much as I tried not to fall for the hype that Deep Learning has recently gained, I could not resist exploring its promises. Let me quickly explain. To build real AEI, I wanted to start with the component that understands our words. This belongs to the fields of Natural Language Processing (NLP) and Computational Linguistics (CL). Building powerful and useful NLP/CL systems is extremely challenging: it took me nearly three years to build a system that can guess your emotions from what you write, and its accuracy is far from perfect. The reason is that such systems are traditionally built using manually defined rules, features, and algorithms tailored to specific tasks.

Deep Learning, on the other hand, promises to replace handcrafted features with efficient algorithms that “learn” the features automatically from input data, saving you all that hard work. So yeah! When you think about it, it makes sense to give it a try. And so I did. First I studied the basics of Artificial Neural Networks with the awesome Coursera Machine Learning course. Then, to complement that knowledge, I read this great online book and watched these fantastic video tutorials. All of that let me play with toy Deep Networks in code written entirely by me. When I was ready, I jumped to TensorFlow, a full-fledged Deep Learning library, and followed its tutorials to train Deep Networks to classify handwritten characters. My reaction? A rush of elation followed by a bit of disappointment.
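For a sense of how little code this takes, here is a minimal sketch of that kind of handwritten-digit classifier. Note that this uses today’s high-level Keras API, not the exact code from the tutorials I followed, and the architecture and hyperparameters are illustrative:

```python
# A minimal sketch of an MNIST digit classifier in TensorFlow (Keras API).
import tensorflow as tf

# Load MNIST: 28x28 grayscale images of handwritten digits (0-9).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected network: the hidden layer "learns" the features
# instead of us handcrafting them.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"test accuracy: {test_acc:.3f}")
```

A network this simple already reaches roughly 97–98% test accuracy after a few epochs, which is exactly the kind of result that produced my rush of elation.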

Don’t get me wrong, Deep Learning is awesome. There is mathematical proof that, in theory, deep networks can approximate virtually any function, and so solve almost any problem. The handwritten character classification tutorial, although simple, hints at that. Yet there is something about Deep Learning that leaves a sour taste in the mouth. During my previous research I always felt in control, and in most cases I could justify why things worked. With Deep Learning, it all felt like magic. Beyond the valid mathematical intuition, you can’t really understand what is going on inside the black box that is the trained network. Moreover, even state-of-the-art systems are constructed empirically, by testing different network architectures until the best performer is found, with little clue as to why it performs better.
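For the record, the result I am alluding to is the universal approximation theorem (Cybenko, 1989; Hornik, 1991). Loosely stated: for any continuous function $f$ on $[0,1]^n$ and any tolerance $\varepsilon > 0$, there exists a single-hidden-layer network satisfying

$$\left|\, f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon \quad \text{for all } x \in [0,1]^n,$$

where $\sigma$ is a sigmoidal activation and $N$, $v_i$, $w_i$, $b_i$ are the network’s size and weights. Note that the theorem only guarantees such a network exists; it says nothing about how to find it by training, which is part of why practice feels so empirical.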

So yes, Deep Learning can solve complex problems, and yes, it can save time and effort, but without a clear understanding of what is going on inside, it can lead to plenty of frustration along the way. As soon as I move past the tutorials and into developing the first part of my AEI, I will post more about my feelings towards Deep Learning.

Building an Emotional Artificial Brain — Apple Inc.


Apple just bought Emotient, a startup that uses Artificial Intelligence to read people’s emotions by analyzing their facial expressions. Back in October 2015 it also bought VocalIQ, another startup, which uses speech technology to teach machines to understand the way people speak. As usual, Apple did not disclose the reasons behind the acquisitions.

Why is Apple interested in such technologies? To me the reasons are clear. If you have interacted with Siri, Apple’s virtual assistant for mobile devices, you have probably discovered how limited it is. My guess is that the company is trying to inject Siri with Artificial Emotional Intelligence (or some sort of Artificial Emotional Brain), in an attempt to make interactions with the system feel much more natural. The missing piece of the puzzle? Technology that understands not just our faces and tones, but the implicit emotions hidden in the meaning of our words. If you haven’t yet, please have a look at my demo of Emotion Detection from Text.


You can read more about Apple’s acquisitions here

Building an Emotional Artificial Brain — Motivations

“An estimated one in five people in the U.S. have a diagnosable mental disorder.” and “… cost an estimated $467 billion in the U.S. in lost productivity and medical expenses ($2.5 trillion globally).” are some of the lines that can be read in an interesting article about Virtual Reality Therapy published in TechCrunch. Now, just like me, you will probably be shocked to learn that VR has been used to treat some types of mental disorders for decades. So why have most people never heard of such a thing? I suspect there are two reasons.

First, VR has so far failed to enter the tech mainstream. The main reason is that creating truly immersive virtual experiences takes an absurd amount of resources: free-to-roam worlds need a higher level of detail than the best video games out there. Moreover, running VR software and rendering such virtual worlds requires incredibly powerful hardware. Even the Oculus Rift, the device that promises to bring decent VR to the masses, requires a high-end computer that many can’t afford (or don’t need).

Second, and most important to me, is that in the realm of mental health, VR has mostly been used to treat fears, phobias, and post-traumatic stress disorder (PTSD). To treat arachnophobia, for instance, the patient can be safely exposed to virtual spiders in a virtual room to help her overcome the problem. Special hardware can be constructed to simulate a virtual airplane to help a patient overcome the fear of flying. Virtual “worlds at war” can be built to help veterans with PTSD. The complication is that treating other mental health disorders that require direct interaction with another human (like trauma after rape, or a child with autism) means most VR therapies need the (often indirect) intervention of a trained therapist, and access to mental health professionals is scarce.

So this is one case where the need for the Emotional Artificial Brain becomes evident. If such technology existed, it could be incorporated into virtual therapists created using artificial intelligence. Such virtual humans, trained on transcriptions of real sessions, could hold conversations with the patient and adapt the content and tone of the conversation to the emotions the patient expresses through tone, content, and facial expressions. That could bring VR-based therapy even closer to the masses, and who knows, maybe in 5–10 years no one will be surprised to hear about VR being used to fight The Global Mental Health Crisis.


The TechCrunch article that inspired this writing: Virtual Reality Therapy: Treating The Global Mental Health Crisis

Building an Emotional Artificial Brain — The Beginning

In October last year I finally got my long-awaited PhD degree. My research topic was Sentiment Analysis (SA), a sub-field of Artificial Intelligence and Natural Language Processing that seeks to identify the polarity of a given text. Put in simpler words, given any subjective text, SA seeks to tell whether the text is positive, negative, or neutral. The result of my long research is a set of short and poorly optimized algorithms that, when combined in a pre-defined order, yield a very simple emotions classifier. Yes, emotions, not sentiment, which means this classifier can guess (very often wrongly) which of hundreds of emotions a subjective text expresses.
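To make the task concrete, here is a toy lexicon-based polarity classifier. This is a deliberately naive sketch with a made-up six-word lexicon, nothing like the algorithms from my research:

```python
# A toy lexicon-based polarity classifier: the simplest flavor of
# Sentiment Analysis. The lexicon below is invented for illustration.
import re

POLARITY_LEXICON = {
    "love": 1, "great": 1, "happy": 1,
    "hate": -1, "awful": -1, "sad": -1,
}

def classify_polarity(text: str) -> str:
    """Sum the polarity of known words and map the total to a label."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(POLARITY_LEXICON.get(word, 0) for word in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_polarity("I love this great phone"))   # positive
print(classify_polarity("What an awful, sad day"))    # negative
print(classify_polarity("The box contains a cable"))  # neutral
```

Real systems have to handle negation (“not great”), intensity, sarcasm, and context, which is where those three years of work went.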

After I set up a working demo of the emotion detection system, my excitement grew quickly. The next obvious step was to build a company and create consumer apps that used the technology. Every single person who heard my plan and tried the demo was excited too. Nonetheless, my plan failed. The reason is simple: I realized that a system that merely guesses an emotion from your text is a very crude approximation of what an ideal empathetic system should be. So I moved on and started another, unrelated company, with the hope of one day reviving my previous dream.

Being a tech entrepreneur, I spend much of my time reading the latest tech news. Some of the most discussed trends these days are Virtual Reality, Smart everything (homes, cars, devices, etc.), and the rise (and fear) of AI. With every new article I read came a new rush of hope, but the fundamental question remained: how could I apply my algorithms and knowledge to these trending areas? I think I have finally found an answer, and that is what this writing, and the ones that will follow, are about.

During the last days of 2015 I decided to build an Emotional Brain, or, to be more specific, an Artificial Amygdala. This is half a personal project, half an attempt to predict what will become one of the main components of any future tech humans interact with. Concretely, I will attempt to build a series of algorithms that together can listen to, read, and see us, understand our feelings, and reply or act accordingly. A true empathetic system. I don’t know if it will be a full-fledged conversational agent or just one component of a whole, like an operating system module. Whatever it turns out to be, I will be telling the story in a series of short articles with varying formats. Some might feel like short research papers, others like a story, and others maybe like a random dump of my mind.