Internet of Intelligence: A Framework for Thinking About Artificial Intelligence in the Home
Discussions of Artificial Intelligence (AI) are everywhere, and ironically enough, some of them are written and published by bots… Elon Musk continues to grab headlines for his crusade against the development of AI, citing concerns that if we pick the wrong utility function, humans could be optimized out of existence (in his talk with Walter Isaacson, he suggested that if the goal is to eliminate spam, the AI could decide that eliminating humans is the most efficient path). IBM’s Watson has been the poster child of deep learning, promised to be the panacea for everything from call center routing to finding copyright infringement to winning at Jeopardy. But what does this mean for the everyday consumer? What happens when our homes really become ‘smart?’ Does the house punish us for not cleaning the toilet? Does it decide, based on reading the news feeds, that it would be safer for the family not to leave the house and lock us in, delivering food via Grubhub and Amazon drones?
We need a way to think about AI in the home so we can better design these systems to anticipate the benefits as well as where a ‘rogue’ intelligence could go off the rails. Much the way our own brains segment functions into different sections of gray matter, home intelligence could also benefit from some sense of specialization and hierarchy. AI systems are really good at recognizing patterns and determining actions based on those patterns. Within the home, you can consider five types of intelligence needed to realize the true promise of the Smart Home (a rough sketch of how they might fit together follows the list):
- Visual Intelligence
- Behavioral Recognition
- Human Interface Engine
- Threat Detection and Abatement
- Ethics Engine
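Before digging into each, here is a minimal sketch of how that hierarchy might hang together: each specialized intelligence proposes actions based on events in the home, and a coordinating layer collects those proposals and runs them past the Ethics Engine for final sign-off. The class names and event shapes below are assumptions for illustration, not any vendor’s actual API.

```python
from typing import Protocol


class HomeIntelligence(Protocol):
    """Shape of any specialized intelligence: it turns home events into proposed actions."""

    name: str

    def propose(self, event: dict) -> list[dict]:
        ...


class SmartHomeCoordinator:
    """Fan an event out to the specialized intelligences, then gate every proposal."""

    def __init__(self, intelligences: list, ethics_gate):
        self.intelligences = intelligences  # visual, behavioral, interface, threat
        self.ethics_gate = ethics_gate      # final sign-off (see the Ethics Engine below)

    def handle(self, event: dict) -> list[dict]:
        proposals = [action
                     for intel in self.intelligences
                     for action in intel.propose(event)]
        return [action for action in proposals if self.ethics_gate(action)]
```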
Visual Intelligence is the most common in use today, examining streams of visual data, static and dynamic, coming from security cameras, phone snapshots and more. These maturing intelligences can distinguish between dogs and cats in a scene, ensure only family members are allowed in the back door, and, with Apple’s new A11 Bionic on the iPhone X, process your face fast enough to paint your expressions onto an animated emoji of poo. The trend is to push as much of this recognition to the edge so that homes do not have to push terabytes of data up to the cloud every time someone rings your doorbell camera. In the tug-of-war between privacy and security, edge processing of the visual data from your home also ensures your data stays yours and lowers the latency of recognition. The interpretation of visual information is a key requirement for any home intelligence because a correct evaluation of what is happening visually drives much of the follow-on action in the home: adjusting the thermostat based on the number of people in the room, letting people know when the dog went outside for a potty break, and more.
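As a concrete illustration of that last point, here is a hedged sketch of the decision side of a Visual Intelligence: an edge vision model (assumed, not a specific product) reports how many people it sees in a room, and the home adjusts the thermostat locally without shipping any video to the cloud. The setpoint numbers and the `send_setpoint` callback are illustrative assumptions.

```python
def setpoint_for_occupancy(occupants: int, base_f: float = 70.0,
                           per_person_f: float = 0.5, min_f: float = 66.0) -> float:
    """Each extra body adds heat to the room, so nudge the heating setpoint down a little."""
    return max(base_f - per_person_f * max(occupants - 1, 0), min_f)


class RoomClimateController:
    """Reacts to occupant counts reported by an on-device (edge) vision model."""

    def __init__(self, send_setpoint, threshold_f: float = 0.25):
        self.send_setpoint = send_setpoint  # callable into an assumed thermostat API
        self.threshold_f = threshold_f      # ignore changes too small to matter
        self.current_f = None

    def on_occupancy(self, occupants: int) -> None:
        target = setpoint_for_occupancy(occupants)
        if self.current_f is None or abs(target - self.current_f) >= self.threshold_f:
            self.send_setpoint(target)
            self.current_f = target


# The camera's local model would call on_occupancy() as it processes frames.
controller = RoomClimateController(send_setpoint=lambda f: print(f"setpoint -> {f}F"))
for count in (1, 1, 4):
    controller.on_occupancy(count)  # prints 70.0F once, then 68.5F
```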
Behavioral Recognition builds on the information within the visual scene of the home and interprets the behavior of the occupants, looking for patterns of engagement that can be automated. Vivint’s new Sky intelligence examines what actions the occupants take inside their homes and takes on those tasks over time. For example, if when you leave in the morning you turn down the thermostat, turn on the back porch light, switch off your stream of classical music from Alexa and send a text to the nanny reminding them of pickup, Sky promises to, over time, do most of these as you walk out the door. The same happens when you come home: the right lights turn on, smooth jazz is piped through the kitchen, and the house has already been working on the right comfort level from the moment it knew you were headed home. Behavioral Recognition extrapolates from your routine (and less routine) actions to determine candidates for automation while also understanding when you deviate from your normal routines and how to react appropriately. Say you come home with extra kids for a play date; smooth jazz probably isn’t the vibe you are going for. Or the thermostat shifts because Mom is visiting. Behavioral Recognition is critical to having your home feel intelligent and tailored to your context.
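This kind of routine learning can be sketched very simply (this is not Vivint’s actual algorithm, just an illustration of the idea): log what follows a trigger event like walking out the door, and only propose automating the actions that show up in nearly every instance.

```python
from collections import Counter


def suggest_automations(sessions, trigger, min_support=0.8, min_sessions=10):
    """sessions: lists of actions observed each time `trigger` started a routine.
    Return actions that followed the trigger often enough to be worth automating."""
    relevant = [s for s in sessions if s and s[0] == trigger]
    if len(relevant) < min_sessions:
        return []  # not enough history yet to trust any pattern
    follow_counts = Counter(action for s in relevant for action in set(s[1:]))
    return sorted(action for action, n in follow_counts.items()
                  if n / len(relevant) >= min_support)


# Ten mornings of leaving the house, with one morning that skipped the porch light.
history = [["leave_home", "thermostat_down", "porch_light_on", "classical_off"]] * 9
history.append(["leave_home", "thermostat_down", "classical_off"])
print(suggest_automations(history, "leave_home"))
# ['classical_off', 'porch_light_on', 'thermostat_down'] -- all clear the 80% bar
```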
Of course, the occupants need to interact with the home. The Human Interface Engine bridges that gap between human and home. We’ve seen the rapid rise of Alexa and Google Home (and eventually Bixby and HomePod) leverage voice as a key interface to our home environment. But there are also gestures, apps, touch and more that drive our engagement. The Human Interface Engine will be the collection of intelligences that gathers and interprets those actions on behalf of the human occupants. Want the kids to exercise more? Require 200 jumping jacks before they can turn on the Xbox. Wave goodnight to your living room to turn off the music and the lights before retiring for the night. Filter out your teenager’s commands every time she wants Alexa to play death metal. Without the ability to recognize and react to the ultimate users in the Smart Home, the humans, any system would only be guessing at what to do on behalf of its occupants.
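One way to picture the Human Interface Engine is as a front door for intents: every interpreted request, whether it came from voice, a gesture or an app tap, is normalized and run through per-user filters before the home acts on it. The intent shape and the “no death metal” rule below are illustrative assumptions, not a real assistant API.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Intent:
    user: str
    action: str                 # e.g. "play_music", "lights_off"
    detail: Optional[str] = None


# A filter returns a rejection reason, or None to let the intent through.
IntentFilter = Callable[[Intent], Optional[str]]


def block_genre_for(user: str, genre: str) -> IntentFilter:
    def check(intent: Intent) -> Optional[str]:
        if intent.user == user and intent.action == "play_music" and intent.detail == genre:
            return f"{genre} is blocked for {user}"
        return None
    return check


def handle(intent: Intent, filters: list, act: Callable[[Intent], None]) -> None:
    for f in filters:
        reason = f(intent)
        if reason:
            print(f"rejected: {reason}")
            return
    act(intent)


filters = [block_genre_for("teen", "death metal")]
handle(Intent("teen", "play_music", "death metal"), filters, act=print)    # rejected
handle(Intent("parent", "play_music", "smooth jazz"), filters, act=print)  # acted on
```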
Security becomes critical within the home environment as well, both because bad guys may try to get access to our resources and because nefarious actors may leverage our homes to attack others. As such, a local intelligence focused on Threat Detection and Abatement is critical to any Smart Home intelligence solution. Products like Cujo bring protections typically reserved for the enterprise into the home and are a good start toward the type of security needed there. Cujo and others leverage the attack patterns sensed across their customers to continuously update their own understanding of evolving threats. Eventually, the abatement of threats to the home could also include counter-intelligence capabilities, spoofing data and usage information as a way to throw off the intentions of the bad guys. Are you on vacation or just taking a quick trip to Costco? Home with a sick child or on a business trip? This type of misinformation becomes a new direction for these intelligences and aids in the overall security of the home and its users.
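A much-simplified sketch of how this class of threat detection can work (not Cujo’s actual method): learn a per-device baseline of which hosts each gadget normally talks to, then flag traffic to anything outside that baseline once the learning period is over.

```python
from collections import defaultdict


class DeviceBaseline:
    """Per-device allowlist learned from observed traffic; new destinations become suspicious."""

    def __init__(self, learning_flows: int = 200):
        self.learning_flows = learning_flows
        self.known = defaultdict(set)       # device -> destinations seen so far
        self.flows = defaultdict(int)

    def observe(self, device: str, destination: str) -> bool:
        """Record one network flow; return True if it looks anomalous (after learning)."""
        self.flows[device] += 1
        already_known = destination in self.known[device]
        self.known[device].add(destination)
        if self.flows[device] <= self.learning_flows:
            return False                    # still learning this device's habits
        return not already_known            # unseen destination after learning = flag it


baseline = DeviceBaseline(learning_flows=3)
for dest in ("cloud.vendor.example", "time.example", "cloud.vendor.example"):
    baseline.observe("thermostat", dest)
print(baseline.observe("thermostat", "weird-host.example"))  # True -> raise an alert
```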
The final intelligence we believe is necessary in the home is an Ethics Engine. The Ethics Engine establishes the bounds of the correct actions for the home to take. Just because you can turn the thermostat down to 40 degrees does not mean that you should. Just because the dash button can order three lifetimes of Cheetos to be delivered by Friday does not mean it should. This notion has received a lot of attention in autonomous vehicles, where the vehicle makes decisions such as whether to hit the squirrel or the telephone pole. Within the home, this is also critical as more of the home’s systems become controllable by these intelligences. This ‘brain of brains’ will have the final sign-off on any action taken by the Smart Home, the last line of defense for the users’ health, safety and wellbeing. Models of robo-ethics have been around since Isaac Asimov first published his laws of robotics, but they have not yet been applied to the Smart Home context. The Disney Channel movie Smart House saw a resurgence in popularity recently because of its campy portrayal of a home gone rogue. An Ethics Engine would prevent these and more dangerous scenarios from happening, ensuring the home does not fulfill Elon Musk’s nightmare scenario of a rampant AI that eradicates the family hamster.
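At its simplest, an Ethics Engine could start as a set of hard bounds that every proposed action must clear before the home executes it. The two rules below mirror the thermostat and Cheetos examples above; the rule structure itself is just an assumption for illustration.

```python
def thermostat_in_bounds(action: dict) -> bool:
    """Never freeze (or roast) the occupants, whatever another intelligence proposes."""
    if action.get("type") != "set_thermostat":
        return True
    return 55 <= action["value"] <= 85


def order_within_limits(action: dict) -> bool:
    """No three-lifetime snack deliveries from an over-eager dash button."""
    if action.get("type") != "place_order":
        return True
    return action["quantity"] <= 5


SAFETY_RULES = [thermostat_in_bounds, order_within_limits]


def approve(action: dict) -> bool:
    """The 'brain of brains' sign-off: every rule must pass or the action is vetoed."""
    return all(rule(action) for rule in SAFETY_RULES)


print(approve({"type": "set_thermostat", "value": 40}))                    # False: vetoed
print(approve({"type": "place_order", "item": "cheetos", "quantity": 2}))  # True: allowed
```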
Across these five intelligences, you can see how this framework helps give context to the types of AI that need to be brought together in concert in the Smart Home. True intelligence in our abodes will come from the seamless integration and collaboration of these critical capabilities. While no one company has brought all of these elements together, we see the capabilities evolving. Look for service providers like Vivint and Comcast to lead the charge, integrating solutions where feasible and building their own where necessary. Eventually, we will have that home of the future, today, and feel safe using it.