Why the Future of the Computer Is Everywhere, All the Time

Imagine this scenario in the not-too-distant future. You’re awakened at 6:11 a.m. by the gentle sounds of tinkling bells and birdsong, even though you live in a 12th-floor apartment. Your alarm clock uses radar to track your breathing, and wakes you gently, with sound and light, when it detects you’re in a lighter phase of sleep.

Your transition to wakefulness triggers a cascade of changes in your apartment. Your window shades open automatically. In the kitchen, coffee starts brewing. As you pad into the bathroom to brush your teeth, a display projected onto the mirror above the sink shows your calendar for the day. It highlights what time you’ll have to leave to get to your office for the in-person meeting you scheduled for 8:30.

Returning to your bedroom, you find your stowaway robotic bed has retracted into the ceiling, and your collapsible walk-in closet has expanded to reveal your clothes and a full-length mirror. The mirror suggests, based on your schedule and the weather, an outfit it displays as an augmented-reality overlay that moves with your body as you inspect yourself. You aren’t fond of the first option, so you make a swipe-left gesture in the air. The mirror responds by suggesting another outfit. You signal assent, and the two drawers containing the items you want glow around their edges, so you don’t have to waste time hunting for them.

As you dress, a newscast starts playing from the nearest speaker. When you walk into the kitchen, the sound follows you from a speaker in that room as well. You decide that’s enough, and ask for silence, and a moment later all you can hear are the last burbles of the coffee maker.

Slippery definition

If this morning sounds fanciful to our present-day ears, it’s only because so few of us have experienced its individual elements, all of which are possible today or likely to be so in the very near future. What will make such a morning possible, even mundane, is what tech companies call ambient computing.

As in the early days of the cloud, the definition of ambient computing is slippery, subject to revision, and more than a little aspirational. In general, ambient computing is the idea that we’ll interact with the world through a growing assortment of gadgets and sensors, many of which will be physically embedded in our environments. And we’ll interact with this technology in a growing variety of ways—from voice and gestures to simply existing in a space full of sensors that track our every action.

If this sounds reminiscent of previous ideas about the Internet of Things or the smart home, that’s because it’s an evolution of those concepts. But ambient computing is something bigger and, at least in theory, more usable. The smart home of today is largely transaction- and device-focused. We tell our connected thermostat to raise the temperature, and it does. We tell Alexa to play a song, and it does. We tell our wearable heart monitor to let us know when our heart rate goes awry, and it does.

By contrast, in the ambient world, the technology is all around us—unseeable and untouchable. Sensors know when we wake up, set the heat at what we always want, play the songs we like, get the autonomous car ready for the meeting they know we have and suggest clothes appropriate for that meeting. There are a lot of steps between where we are today and this ambient world, but most tech leaders think we’re well on our way to this destination.

The Astro home-monitoring robot is one of several devices that point to Amazon’s long-term ambitions to be everywhere in our homes. (Photo: Gabby Jones for The Wall Street Journal)

Amazon’s timetable

Today, for instance, Alexa can already do many things that we may someday think of as being part of ambient computing, from controlling the lights in our homes to walking us through a meditation routine before bedtime, says Dave Limp, senior vice president of devices and services at Amazon.com Inc. “But is this easy enough for consumers?” he asks. “The answer is no. That’s why we believe this ambient intelligence revolution is five to 10 years out.”

Amazon’s recent spate of device announcements—including an update of its home-monitoring Astro robot, the debut of its Halo Rise bedside sleep-tracking device, and new TVs that detect a person’s presence in a room—all point to its long-term ambitions to be everywhere in our homes, sensing and responding to everything.

And Amazon is hardly alone.

Alphabet Inc.’s Google also recently announced new devices to bring its computing everywhere we are, from the home—with a new Pixel tablet designed to double as a smart-home control hub—to everyplace else we go, in the form of its new Pixel smartphones and smartwatch. Since 2019, Google executives have been talking about how ambient computing is core to the company’s vision of the future, and how they think the company’s custom, AI-focused chips, which now appear in its phones and tablet, will be central to that.

Google’s array of devices—headphones, phones, smart-home hubs and the like—is meant to create a “personal, intelligent, cohesive computing experience,” wrote Rick Osterloh, Google senior vice president of devices and services, in a recent blog post. This vision, he said, is “what we have been building up to for a while.”

A new infrastructure

One of the biggest challenges to making ambient computing work for the masses is that no matter how good our voice-based assistants and other sensors are at understanding our desires, a huge amount of work still needs to be done behind the scenes to enable all that hardware and software to act on them.

“I dream of a world where I can walk through my house and say ‘What time is my flight tomorrow?’ or ‘When is my next credit-card payment due?’ But to do that you need to connect all the plumbing,” says Mark Webster, who works on audio and voice products at Adobe Inc.
Some of that “plumbing” already exists—such as that built by companies that want to make their services available through the dominant smart assistants. But for now, at least, this leads to a transactional, task-based mode of interaction with smart assistants.

“Google and Amazon talk about this assistant that’s always available to you to do actions, take requests and have some anticipation of your needs,” says Ben Bajarin, chief executive and principal analyst at consumer-tech research firm Creative Strategies. “But I don’t think that’s how consumers view it—it’s more like, I can turn my lights on, play music, do a search. For consumers, there’s no ‘always’ around sentient AI.”

Going further, and making our smart assistants capable of more than the most straightforward interactions, will require connecting those assistants not just to various services but to each other, says Mr. Limp. That is, our smart assistants have to be able to talk to any smart-home or smart-building gadget, no matter which smart assistant we own or brand of gadget we buy.

A new standard, called Matter—which Apple Inc., Google and Amazon have all signed on to—promises to do just that. There’s a lot going on under the hood, but what it amounts to is that we will no longer have to check the back of a new smart light or smart lock to see if it’s compatible with our smart assistant. Devices that support Matter will start arriving by the end of this year, and eventually the standard could supersede the proprietary communications standards that have so far held back smart-home adoption.

Smart locks like this one from Yale are among the early steps toward the ambient-computing future.



Thousands of points

Matter is in many ways just the beginning of the rollout of new ways to wirelessly connect all the smart things in our world—in homes, offices and industrial facilities. Other standards in the works could allow the connection of not just dozens of objects to a single wireless access point, but hundreds or even thousands. These standards will be necessary for realizing the part of ambient computing that is all about peppering our world with sensors and then handling all the data that results.

New wireless communications networks like these will be needed as the number of connected devices continues to grow, says Steve Statler, senior vice president of marketing at Wiliot, a supply-chain technology company based in Israel. His company recently unveiled a combination sensor and tiny computer that requires no batteries to operate and could some day be manufactured for pennies apiece. These tags are essentially stickers that get slapped onto things in supply chains that retailers want to track, like crates of goods.

Now imagine, for instance, that every item in your refrigerator has a similar smart tag on it, and the moment you run out of something, like peanut butter, your refrigerator automatically reorders it for you. Amazon tried a version of this before—buttons that let people reorder things with just a tap—but in a world where our voice assistant can also be made a party to household consumption, running out of something could trigger our smart speaker to ask our permission before an item is reordered, which could help consumer adoption.
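The ask-before-reordering flow described above can be sketched in a few lines of Python. This is purely illustrative: the function names and the confirmation step are invented for this sketch, not any vendor’s actual API.

```python
def on_item_depleted(item, ask_permission, place_order):
    """Called when a smart tag reports an item has run out.

    Rather than reordering silently, the assistant asks the user
    first -- the confirmation step that could help consumer adoption.
    `ask_permission` and `place_order` are stand-ins for the voice
    assistant prompt and the retailer's ordering service.
    """
    if ask_permission(f"You're out of {item}. Should I reorder it?"):
        place_order(item)
        return "ordered"
    return "skipped"

# A quick dry run with canned responses in place of real services:
result = on_item_depleted("peanut butter", lambda q: True, lambda i: None)
print(result)  # "ordered"
```

Passing the prompt and ordering steps in as callables keeps the decision logic separate from any particular assistant or retailer, which is roughly the interoperability problem standards like Matter aim to solve.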

Accomplishing this sort of thing would mean tracking so many objects at once—think every consumable in our homes—that it would overwhelm current base stations for interacting with wirelessly connected devices. That’s why new ways to connect all the sensors, computers and other devices are needed.

Not so intelligent

What’s also needed are more smarts.

“We often say the problem with ‘smart buildings’ is that it’s a total euphemism,” says

Troy Harvey,

chief executive of PassiveLogic, a Salt Lake City-based company that helps engineers and building managers control lighting and HVAC systems in offices and apartment buildings with complicated environmental controls. “When people say ‘smart,’ they just mean connected—so where is the smart part?”

That’s the void that Google, Amazon and other companies aim to fill.

Already, a third of all actions Alexa performs are done proactively, without an immediate prompt from a user, says Mr. Limp. Most of these are simple repetitions of a requested action—for example, when a user asks Alexa to wake them every weekday morning at a certain time. But the engineers who build Alexa are also starting to give it the ability to operate on a hunch. For example, if for the past 30 days you always ask Alexa to turn off your connected porch light before you go to bed, and on the 31st day you forget, Alexa will in some cases do it for you. Another example: If you drive away and forget to close your connected garage door, Alexa might someday recognize that and close it on your behalf.
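The “hunch” behavior described here boils down to acting when a long streak of routine actions is broken. A rough sketch of that idea, in Python, might look like the following; this is an illustrative guess at the logic, not Amazon’s actual implementation, which would weigh many more signals.

```python
from datetime import date, timedelta

def should_act_on_hunch(action_dates, today, streak=30):
    """Return True when a routine action (say, turning off the porch
    light) happened on each of the previous `streak` days but hasn't
    happened yet today -- the cue for the assistant to do it itself.
    """
    past = all(today - timedelta(days=n) in action_dates
               for n in range(1, streak + 1))
    return past and today not in action_dates

# The light went off every night for the past 30 days, but not yet tonight:
history = {date(2022, 10, 14) - timedelta(days=n) for n in range(30)}
print(should_act_on_hunch(history, date(2022, 10, 15)))  # True
```

The same check also captures the failure mode Mr. Limp describes: the function fires whenever the streak breaks, with no way to know the user left the lights on deliberately this one night.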

For ambient computing to make our lives easier, it’s going to have to start doing a lot more of this sort of thing. The challenge is that if the system guesses wrong, “customers get very frustrated, very quickly,” says Mr. Limp. For instance, if Alexa tries to turn your lights off when you go to bed on the 30th day, because you did that the previous 29, but this time you left them on because you wanted them to be on for a family member who was arriving late, well, that’s pretty annoying.

Then there’s the issue of privacy and how much of it we may have to give up to achieve new heights of convenience and life automation. “One could argue you need sensors all over your house to do machine learning to start to predict your behavior,” says Mr. Bajarin at Creative Strategies. But his own company’s research suggests that, despite the popularity of smart doorbells and outdoor security systems, one of the best sensors to accomplish this—surveillance cameras—isn’t going into people’s living rooms, bedrooms or bathrooms, for obvious reasons.

Trust is easily lost, says Mr. Limp, who points out that Amazon’s Echo Show smart speaker has a built-in cover for its camera, which also switches off the camera when it’s rotated into place. Amazon told Congress in July that 11 times in the previous year, the company gave footage from its Ring security cameras to authorities without user consent. The company has said it did so only in the event of an “emergency request” from law enforcement when the company “made a good-faith determination that there was an imminent danger of death or serious physical injury.”

“We take these requests very seriously and regularly deny those that don’t meet the standard,” says an Amazon spokesman. “This is also clearly disclosed in our privacy notice.”

Still, consumers are understandably sensitive about putting sensors, especially cameras, inside their homes, and news that Amazon may hand over footage from those cameras is likely to give them further pause.

It also isn’t clear, given the endless game of cybersecurity cat and mouse between tech companies and hackers, whether the smart home, office and factory will become a new way that ordinary people make themselves vulnerable to being hacked.

And then, of course, there’s the inescapable fact of Murphy’s Law, and the way that increasing complexity in a system increases the likelihood of its failure—like getting locked out of your house by a smart lock or being misidentified as an intruder in your home.

Can these problems be solved? Almost certainly, and just as certainly there will be bumps along the way. The question for tech companies is just how big those bumps will be, and how much they will slow the march toward an ambient world.

“We’ve been thinking of buildings as buildings,” says Mr. Harvey of PassiveLogic, “but buildings, it turns out, happen to be the world’s most complicated robots.”

Mr. Mims writes The Wall Street Journal’s Keywords column. He can be reached at christopher.mims@wsj.com.

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.
