If the future is here but unevenly distributed, as William Gibson said, then where is it concentrated?
One place, certainly, is the contract research giant SRI International. Founded by Stanford University in 1946, it’s the organization we have to thank for innovations like automated check processing, the computer mouse, hypertext, foundational work on the ARPANET (the forerunner of the Internet), and ultrasound as a medical diagnostic tool. And SRI is still innovating today—one of its recent creations is Siri, the virtual-assistant iPhone app that was spun off as a startup last year and quickly snapped up by Apple for a reported $150 million to $250 million.
SRI researchers like the legendary Douglas Engelbart have long had a knack for seeing how the rest of us will be using computers in the future. Eager to hear what SRI is cooking up these days, I talked yesterday with Bill Mark, the head of the institute’s Information and Computing Sciences Division. Aside from directing a staff of 250 scientists, Mark is a software systems designer who studies smart spaces—environments where embedded computers help people work, learn, or communicate more effectively.
Mark argues that despite our culture’s current infatuation with iPhones, iPads, and the like, mobile devices are actually ill-suited for many tasks, especially those involving group interactions. In those situations, he says, it would make more sense to embed computing smarts in the environment, be it a conference room or a classroom.
I asked Mark to lay out some of those ideas as an appetizer for Xconomy’s May 17 forum Beyond Mobile: Computing in 2021. At this evening event on the SRI campus in Menlo Park, CA, Mark will be on stage alongside Calit2 director Larry Smarr, Microsoft eXtreme Computing Group leader Dan Reed, and me to talk about the current trends shaping the way computers will fit into our lives in 10 years’ time. The following excerpts from my conversation with Mark give a partial preview of the topics we’ll unpack at the event. To hear the rest, you’ll have to buy a ticket. (Disclosure: SRI is an Xconomy underwriter.)
Wade Roush: To build the Siri mobile app—which can help users do things like buy concert tickets or book a table at a local restaurant—your scientists drew on years of defense-funded research at SRI on natural language understanding and other aspects of artificial intelligence. But the app is still limited to fairly simple query-response situations. Will we be having full conversations with future versions of Siri?
Bill Mark: Yes, we view Siri as a first step in that direction. When you say something to Siri, it understands your intent and puts together a set of services that fulfill that intent. That is great—I really think Siri did a fantastic job, and we’ll see what Apple does with that core technology. But there is much more to the story than that. One thing is dialogue. In real life, we use dialogue all the time. It’s extremely rare that you say something and your assistant goes off and does it and that’s the entire interaction. Our research right now is pushing into systems that can carry on that kind of back-and-forth dialogue.
Roush: That sounds like it would be an order of magnitude harder than just responding to a spoken search query.
Mark: It’s much harder. This sounds obvious, but one challenge is that the system needs to understand what it just told you. People in a dialogue assume that the other person, or in this case the piece of software, understood the previous utterance. Most systems don’t. There are also performance issues. The system has to come back with a reasonable response in a reasonable amount of time, otherwise it’s not dialogue. And the key piece is that the system has to