[Editor’s note: This is part of a series examining the internet’s first 50 years and predicting the next half century. Join Xconomy and World Frontiers Forum on July 16 for Net@50, an event exploring the internet’s past and future.]
No one has done more than Donald Norman to teach us that every piece of hardware and software—in fact, every human-made object, including the internet—embodies design decisions that aren’t always made with the best interests of users in mind.
On top of writing the best-selling 1988 book The Design of Everyday Things, which popularized the concept of human-centered design, Norman has spent decades working inside universities and companies (including Apple (NASDAQ: [[ticker:AAPL]]) and Hewlett-Packard (NYSE: [[ticker:HPQ]])) to remind his colleagues that technology should accommodate human needs, capabilities, and behaviors, rather than the other way around.
He’s currently director of The Design Lab at the University of California, San Diego. I talked with Norman for my feature article “Special Report 2069: Predicting the Internet’s Next 50 Years.” An edited transcript of our conversation is reproduced below.
Xconomy: Back in 2011, in a column for the website Core77, you said you were worried that in the absence of some fundamental innovation, the internet might evolve into a patchwork of walled gardens. I think that was prescient—today, more and more of the internet is behind paywalls, inside social networks, or quarantined by authoritarian nations. I know you’ve continued to think and write about the design and workings of the internet. What’s on your mind right now?
Donald Norman: I have many, many concerns about where we are today and where we’re heading. ARPANET started off as a sort of informal communications network among remote facilities, in Massachusetts and California and maybe Utah. Nobody at that time predicted what was going to happen, and I think it may have been impossible to predict.
So, the systems were kludged together. They were used by highly technical people, mostly people who were friendly with one another, and they started sharing information. And then they started developing methods of doing this over long distances, over the newly developed packet-switching networks, which eventually became ARPANET and, later, by combining multiple networks, the internet.
I remember that in the early days, not only was there no security and no attempt to have any, but people bragged about that fact. At the MIT AI Lab, they bragged that if you connected to MIT, you entered the debugging state of their PDP-10 computer. And the argument was that anybody who could work the debugger was somebody they welcomed to the community. But that’s the most dangerous place to be in a computer, because you have complete control over what everybody does!
I recall when a student at UC San Diego really violated the trust. I forget even what the student had done, but it wasn’t acceptable, and there was a big discussion about, “Oh, what should we do, how should we punish the student?” And I said, “Look, I’ll just go talk to him.” And that’s what I did. And we talked, and I said, “This isn’t the way we do things.”
And so, that was the early attitude. But the problem is that all the fundamental infrastructure was built with this kind of trust and openness in mind. And once that infrastructure is in place, to a large extent all across the world, it’s really hard to change. And that’s, I think, one of the major issues, because now we’ve let everyone in. And people discovered that they can steal. They can steal data, they can steal privacy, they can modify documents, they can spoof identities, either hiding their true identity or taking on someone else’s. They can produce fake news.
And let me keep going. The other thing is that Google, Facebook, and others—when they started, they didn’t have a clear business model in mind. And they discovered advertising, which by itself is not a bad thing. But the advertisers said, not unreasonably, that the more we know about people, the more we can provide information that’s relevant to their interests and needs, et cetera. And that has now been carried to an obscene level. We, as individuals, no longer have any control over information about ourselves.
I was just at a conference on transportation, and one of the companies there is proud of the fact that it’s introducing a new fare system for the MTA in New York City; its equipment allows you to take the New York City subways and buses and so on. And the company said proudly at this conference, “We say we don’t own the data; the city owns the data.” And I said to myself, and then I said publicly to the conference, “Why should the city own the data? Isn’t it the individual’s data about what routes they’re taking, and when and where they’re going, and who they’re with?”
And now the psychological sciences have been mined to the utmost to create little tidbits of information that are so exciting they become addictive, and the social networks have learned how to present those tidbits continually. We want to know what’s going on, and we don’t want to be left out. And this, I think, is much to the detriment of society, in decreased performance and decreased quality of the work you can do when you’re continually interrupting yourself.
And then there are the horrible license agreements that we’re forced to accept even though we can’t read them, and couldn’t understand them even if we did. The argument is: no, you’re not forced, you could always say no. But there are no alternatives to the services. And elections are no longer to be trusted.
And the other thing is that we’ve always had multiple networks and multiple ways of getting information. We had radio, which was different from TV, which was different from books; each was a different way of getting information, and moreover, we had different channels even within TV or radio. On television, there were basically three networks. So, that meant that everybody in the country got the same information, and that brought the country together. You had to listen to things that maybe you disagreed with. But today, [the internet] allows us to read only things we agree with. And if you only hear agreement, you never really use your mind, you never consider alternatives, you never understand other people’s points of view.
So yes, I am very concerned. Now, I haven’t told you anything in that whole long list, that diatribe, that others haven’t been saying. Which actually is a good thing, not a bad thing. Because it means that it isn’t just my private opinion, but it’s shared by many, many people.
X: Putting on a futurist hat, what are the most important forces likely to determine what the internet looks like 50 years from now?
DN: It’s really hard to predict. I suppose we could have imagined Facebook in the ARPANET days, but we would never have thought it would reach a billion people. That number would have been inconceivable. The word “giga” didn’t exist; I mean, it existed in some exotic dictionary, but it wasn’t in anyone’s normal vocabulary. We talked about kilobytes, not even megabytes, let alone gigabytes or terabytes. So, I don’t think it’s predictable.
My friend Herbert Simon, a Nobel laureate and all that, once made this wonderful statement, which I love: “It’s really easy to predict the future. People do it all the time. The hard part is getting it right.” And the problem is that we are talking about multiple orders of magnitude of change in the technologies. It’s hard enough to predict what a 10-times change will do, but to try to predict a thousand-times or a million-times change?
I mean, neural networks were invented in the AI Lab at MIT. We had cognitive science postdocs in the early days, and I remember once teaching them about perceptrons, early computational devices that were very limited. I was talking about the work of [Marvin] Minsky and [Seymour] Papert showing their limitations. And one of the new postdocs said I was wrong. He stood up, took the chalk away from me, and started showing some of the new work that was being done in England. That was Geoffrey Hinton. And from that conversation, he and my colleague Dave Rumelhart developed what’s called the hidden layer in neural networks, which made them dramatically more powerful. Of course, Minsky and Papert never imagined that you could do this with their technology, so their analyses no longer applied.
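[Editor’s note: A minimal sketch of the limitation Norman describes: a single-layer perceptron cannot represent XOR, Minsky and Papert’s classic counterexample, while a network with one hidden layer trained by backpropagation learns it easily. The Python code below is purely illustrative; the hidden-layer size, learning rate, and step count are arbitrary choices, not anything from the interview.]

```python
# Toy illustration: XOR is not linearly separable, so no single-layer
# perceptron can fit it; one hidden layer trained with backpropagation can.
# Hidden size, learning rate, seed, and step count are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer (4 units here, for reliable convergence).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # network output

    # Backward pass: gradients of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(0, keepdims=True)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```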
Everybody liked neural networks for a while, but then they sort of died away because