Here at the RE.WORK Connect Summit, the future of driving is being laid out, facial gesture by facial gesture. In a fascinating look at how we’ll all be driving in the years to come, with cameras embedded in our cars that will study our every tic and raised eyebrow, Modar Alaoui directed the crowd to watch his video showing how it all works.
With cameras fixed on his face, Alaoui smiled, then frowned, then looked surprised, then looked angry. And with each gesture, a color-coded bar on the bottom of the screen would rise or drop, indicating the algorithm had successfully “read” each of those expressions.
“Every time someone hits the brakes hard in our demos,” he said, mugging feigned fear for the camera, “this emotion happens.”
So hard braking equals fear, which equals a fearful look on our faces (without us even knowing it). With that, the cameras and the software linked to them can go to work: helping us all drive better, predicting approaching dangers on the road, and sending data back to a central point where it can be sliced and diced in a hundred ways to help us all negotiate our crowded roadways.
Alaoui was the lead-off hitter in a two-day conference dedicated to the latest and greatest developments in the world of the Internet of Things, or IoT.
There were speakers talking about “Augmented Mobility” and “Connected Ecologies.” Experts from Ford and MIT’s Senseable City Lab and even the Office of the Mayor of San Francisco shared their cutting-edge explorations into the increasingly connected world we all live in. With smartphone usage exploding around the world, an ever-greater use of sensors embedded in city streets, and cameras being loaded onto new vehicles coming off the assembly line, the Internet of Things is unfolding right before our eyes.
And they all agree: it’s all about to explode into one huge and crazy bloom in the next two to five years.
Alaoui, CEO and Founder of Eyeris, was typical of his fellow speakers in his futurist pitch. He started with killer numbers:
- 22 billion connected devices by 2020
- 61 billion dollars in revenue in the IoT space
- one-third of all devices will be (drum roll, please!) cameras
- and all those cameras, in our cars and homes and offices and cities, will be closely following three things: People. Places. Things. In Alaoui’s field, they even have a shorthand for that: PPT
And many of those all-seeing cameras, he said, will be focused directly on the faces of motorists behind the wheel. Using software that’s still being developed to its full potential, the cameras will essentially “read” our faces as we cruise down the road. Eyeris’ data bank collects information from our faces and plops it all into various categories, breaking it down into five races, four age groups, two genders, 13 head poses and 10 lighting conditions.
“It’s the most comprehensive suite of face analytics ever created,” he said.
Their software can recognize your face. It can detect multiple faces. And it can track your emotions from one second to the next.
“Our goal,” he said, “is to put our software in the back of every camera in the world.”
These cameras, he said, will be able to see when and for how long you’re taking your eyes off the road to text: “Someone who texts and takes their eyes off the road for five seconds travels the equivalent of the entire length of a football field, driving essentially blind.”
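The football-field comparison checks out with a little arithmetic. As a rough sketch (the talk didn’t specify a speed; 55 mph is the figure commonly used for this comparison):

```python
# Quick check of the claim: at highway speed, five seconds of
# looking away covers roughly a football field.
# Assumption (not stated in the talk): a speed of 55 mph.

MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph = 5280 ft per 3600 s ≈ 1.467 ft/s

speed_mph = 55
eyes_off_road_s = 5

distance_ft = speed_mph * MPH_TO_FT_PER_S * eyes_off_road_s
print(f"Distance traveled blind: {distance_ft:.0f} ft")
# A football field is 360 ft end zone to end zone (300 ft of playing field),
# so ~403 ft is indeed more than the entire field.
```

At slower city speeds the distance shrinks proportionally, but even at 30 mph, five seconds is still 220 feet driven blind.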
He cited studies that show 80 percent of all collisions are attributable to inattention or rage. When cameras can read that information, the software can then help get the driver’s attention back where it belongs: on the road.
Photo: A Google self-driving car shown in 2012. (Justin Sullivan/Getty Images)