Wearables TechCon 2016: The Rise of Emotionally-Intelligent Wearable Devices (Interview with JR Alaoui of Eyeris)

Published by the Device Plus Editorial Team on September 9, 2016

The influx of wearable tech and IoT into our everyday lives means more data is collected about our functioning and habits than ever before. Monitoring devices can tell us how well we slept, when we need to drink more water, or when we should take some deep breaths to de-stress, but there is a limit to what they can tell us. These devices cannot determine our thoughts and motivations. Being able to measure the driving force behind our actions is another level of human/machine interaction.

Humans are emotional beings. Our feelings influence our consumer behavior and how we interact with our environment and each other. But emotions can be very abstract. How can a logically programmed computer device understand the complexities of human emotion?

Palo Alto-based Eyeris is taking on this challenge, enabling emotion-sensing devices through EmoVu, its embeddable, artificially intelligent vision-sensing software. EmoVu analyzes facial micro-expressions through any camera-enabled device, registering the universal expressions for joy, sadness, surprise, anger, disgust, fear, and neutral. It also recognizes added dimensions, including gender, race, age group, head position, eye tracking, and face recognition. The result is real-time, highly precise feedback on user identification (e.g., age, gender, race) and user emotion.

Picture 1. The seven emotions and additional variables tracked and analyzed by EmoVu software / ©Eyeris

Picture 2. Eyeris CEO and founder JR Alaoui demonstrating EmoVu at Wearables TechCon. The software registers this smiling expression as the emotion joy.

Picture 3. EmoVu software recognizes this frowning facial expression as the emotion sadness.

Picture 4. EmoVu software recognizes this neutral facial expression as the emotion neutral.

Eyeris’ proprietary deep-learning-based algorithm employs convolutional neural networks (CNNs) for image processing and continuous learning. The dataset was built from extensive analysis of images spanning all major races/ethnicities and a range of ages, genders, postures, attributes (glasses, hats, etc.), and lighting environments. This comprehensive sampling, combined with artificial intelligence, supports more accurate Ambient Intelligence (AmI), that is, electronically enabled environments that are sensitive and responsive to people. EmoVu’s adaptive learning enhances the accuracy of user identification and can increase the quality of a user’s interaction with their environment.
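To make the pipeline concrete, here is a minimal sketch of the two core operations in a CNN-based emotion classifier: a single 2D convolution extracting features from an image patch, followed by a softmax over the seven emotion classes EmoVu reports. This is an illustration only; Eyeris' actual model, weights, and architecture are proprietary, and the kernel and logit weights below are invented.

```python
# Toy sketch of a CNN-style emotion classifier: one convolution
# layer plus a softmax over seven emotion classes. All weights
# here are made up for illustration.
import math

EMOTIONS = ["joy", "sadness", "surprise", "anger", "disgust", "fear", "neutral"]

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A toy "face" patch and a vertical-edge-detecting kernel.
patch = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1], [-1, 1]]
features = conv2d(patch, kernel)

# A real network stacks many conv layers and a learned classifier
# head; here we fake seven logits from the total feature activation.
activation = sum(sum(row) for row in features)
logits = [activation * w for w in (0.5, -0.2, 0.1, -0.4, -0.3, -0.1, 0.3)]
probs = softmax(logits)
predicted = EMOTIONS[probs.index(max(probs))]
```

Stacking many such convolution layers, trained on the large, diverse dataset described above, is what lets the real system generalize across ages, ethnicities, and lighting conditions.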

The ability to capture emotional engagement and demographics makes EmoVu an insightful feedback tool for evaluating user interaction with games, apps, websites, and more. Third-party application developers can integrate EmoVu’s capabilities through a desktop SDK, mobile SDK, or cloud API. The mobile SDK processes information locally on the device, making it readily accessible to other applications. By leveraging EmoVu’s capabilities, developers can tailor the user experience to emotional response without a large computational overhead.
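As a rough sketch of what such an integration might look like, the class below reacts to a stream of per-frame emotion readings. The class, field names, and thresholds are all invented for illustration; the actual EmoVu SDK interfaces are defined by Eyeris and are not reproduced here.

```python
# Hypothetical sketch: an app consuming per-frame emotion readings
# from an embedded analyzer and adjusting its behaviour. Names and
# thresholds are invented, not the real EmoVu SDK.

class EmotionFeedbackApp:
    """Maps emotion readings to application actions."""

    def __init__(self):
        self.history = []  # keep readings for later analytics

    def on_frame(self, reading):
        # reading: dict like {"emotion": "joy", "confidence": 0.92}
        self.history.append(reading)
        if reading["emotion"] == "sadness" and reading["confidence"] > 0.8:
            return "offer_help"    # e.g. simplify the UI, show a tip
        if reading["emotion"] == "joy":
            return "log_positive"  # record what the user enjoyed
        return "no_action"

app = EmotionFeedbackApp()
action = app.on_frame({"emotion": "sadness", "confidence": 0.9})
```

Because the mobile SDK processes frames locally, a callback-style loop like this can run in real time without round-trips to a server.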

Picture 5. Eyeris CEO JR Alaoui presenting “Emotionally-Intelligent Wearables Using Artificially Intelligent Vision Sensing” at Wearables TechCon 2016.

Modar (JR) Alaoui, founder and CEO of Eyeris, shared with developers at Wearables TechCon how EmoVu can add functionality and contextual awareness to wearables and IoT. During the interview, he explained that as embedded cameras become a larger part of our everyday environment, they can contribute to the recognition of three things: people, places, and things. Facial analysis is the most important factor for understanding people in their environment because that is “how we can derive behavior, how we can derive predictive analytics, and this is how we can customize the environment around people—by understanding who they are, what gender, what age group they are, what their head positioning is, where they are looking at, whether their eyes are open or not, and, most importantly, what their current emotion is…” The technology aims to support individuals by customizing their environment or applications and by providing behavioral feedback to companies looking to enhance their customer interaction experience.

EmoVu has several advantages that make it well suited for embedding into wearable tech. Despite its complex functioning, the algorithm has a small footprint (less than 1 MB), requiring little RAM, ROM, and storage space. This is critical given the small form factors of wearable tech, which restrict the size of embedded systems.

EmoVu is also designed for local processing, allowing real-time data analysis. Data privacy is a major issue in wearable tech, and the capture and storage of images carries legal implications. Eyeris states that EmoVu abides by privacy laws worldwide: all images and videos are destroyed, and only the data analytics are kept.

During his presentation, Alaoui shared some significant medical applications for emotion sensing. The technology could provide feedback for people on the autism spectrum, alerting teachers or caregivers if an autistic person was becoming agitated or over-stimulated. Similarly, triggers for PTSD could be identified.

Pain management also stands to gain from this impartial feedback system. Emotion sensing through facial-expression analysis could assist healthcare providers when a person is unable to accurately verbalize their level of discomfort. It could also help in patient triage (i.e., prioritizing patients based on pain severity). Overall, this technology could add a dimension of pain data beyond self-reporting, aiding more accurate diagnosis and treatment.
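The triage idea reduces to a simple ordering problem once an inferred pain score exists. The sketch below is purely illustrative: the scores and the idea of a single scalar "pain_score" per patient are assumptions, not part of any described Eyeris product.

```python
# Toy sketch of emotion-assisted triage: order patients so that
# the highest inferred pain score is seen first. The scores here
# are invented for illustration.

def triage(patients):
    """Sort patients by inferred pain score, highest first."""
    return sorted(patients, key=lambda p: p["pain_score"], reverse=True)

queue = triage([
    {"name": "A", "pain_score": 0.3},
    {"name": "B", "pain_score": 0.9},  # strong facial pain markers
    {"name": "C", "pain_score": 0.6},
])
```

In practice such a score would only supplement, never replace, clinical judgment and self-reporting, as the article notes.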

Alaoui shared that he doesn’t believe the incorporation of the technology into hospital infrastructure will lead to increased healthcare automation, but he anticipates that it could help doctors diagnose better and get a more accurate assessment of pain levels. He does think, however, that it could have significant implications in areas such as e-health and telemedicine, where patients are communicating virtually through camera-enabled devices. He said, “If you have, for example, a virtual doctor on your computer, it can look at the color of your face, the shape; it can analyze in real-time how your general look is compared to yesterday, for example. What’s your eye-openness? What’s your pupil dilation? Whether your eyes are red or not—there’s so much that can be derived from the face to benefit the healthcare industry.”

As medical wearables are an area of high focus, developers would be prudent to explore the heightened functionality offered by emotion-recognition software.

When asked which two applications of the technology he was most excited about, Alaoui named driver monitoring and social robotics. For driver monitoring, he explained that the software tracks drivers’ inattention, cognitive awareness, and emotional distraction, which is especially relevant for self-driving cars. Even though these cars are autonomous, system failures or road hazards will always require human intervention, that is, a hand-off from autonomous driving to manual control. Before that hand-off happens, a camera needs to be looking at the driver’s face and recognizing “whether that person is authenticated (to drive), their eyes are open, their head position is looking straight and ready to take manual control of the vehicle.”

The technology also works in the transition from manual to autonomous driving. If a driver shows signs of drowsiness or inattentiveness, feedback signals could alert the driver, or the vehicle could shift into fully autonomous mode once a certain danger level is reached, such as in a medical emergency.
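The two hand-off directions Alaoui describes can be sketched as a pair of small checks: autonomous-to-manual requires an authenticated, alert driver, while manual-to-autonomous triggers on sustained drowsiness. The field names, threshold, and window size below are invented for illustration.

```python
# Sketch of the two driver-monitoring hand-off checks described
# above. All names and thresholds are hypothetical.

def ready_for_manual(state):
    """Autonomous -> manual: is the driver fit to take over?"""
    return (state["authenticated"]
            and state["eyes_open"]
            and state["head_straight"])

def should_go_autonomous(drowsiness_scores, threshold=0.8, window=3):
    """Manual -> autonomous: sustained drowsiness triggers takeover."""
    recent = drowsiness_scores[-window:]
    return len(recent) == window and all(s > threshold for s in recent)

ok = ready_for_manual(
    {"authenticated": True, "eyes_open": True, "head_straight": True})
takeover = should_go_autonomous([0.2, 0.85, 0.9, 0.95])
```

Requiring several consecutive high readings, rather than a single frame, is one simple way to avoid false takeovers from a blink or a glance away.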

Picture 6. EmoVu technology can provide feedback on the attention status of drivers. / ©Huffingtonpost

The second vertical Alaoui was most excited about is social robotics, primarily robotic companions such as SoftBank’s Pepper, Jibo, and Buddy. He said these robots would not be prohibitively expensive and are already being produced by the hundreds of thousands. He re-emphasized how eager he was to see the progression of social robotics in 2017. “Think about Amazon Echo, and the second generation of that or third generation of that will have a camera in it. A camera sees and has face-to-face interaction with you. So, think about the third-party developers that come by the thousands or tens of thousands and think about the applications they will be building just from that capability. Gaming, for example is one, e-learning, therapeutic stuff, things for seniors. We believe social robotics will be as disruptive as the iPhone was in 2007.” With EmoVu’s facial and emotion recognition capabilities, it is easy to see how the predicted explosion of social robotics could spell a parallel proliferation for Eyeris.

EmoVu technology points toward a future where our everyday devices offer greater customization and interaction through emotion monitoring. Because it can be leveraged by third-party developers through SDKs and a cloud API, a host of new uses are primed for this technology in mobile apps and camera-enabled devices. Since the value of emotional and demographic feedback is not confined to one vertical, EmoVu will continue to be integrated into many diverse areas of wearable tech and IoT, including healthcare, driver monitoring, consumer evaluation, and social robotics. As our lives become more quantified and our environments more responsive than ever before, emotion-recognition software such as EmoVu is poised to play a pivotal role in enabling devices to better sense, understand, and address our needs.

By Amanda Mintier

Wearables TechCon 2016 Series:

  • Recap – Unveiling the Future of Wearable Technology
  • Medical Wearables – Wearables in Healthcare: Sci-Fi Becoming Sci-Fact
  • Micro Wearables – Overcoming MEMS Design Challenges in Wearable Tech Space

 

 
