Last month, I posted a story about which Boston-area innovators subscribe to the belief that there will be a technological “singularity,” as popularized by Ray Kurzweil and Vernor Vinge.
The idea is that a superhuman artificial intelligence will emerge in a few decades, thereby creating an event horizon beyond which humans cannot fathom what comes next, and leading to all sorts of possibilities, such as the explicit merging of humans and computers. (Presumably if you believe this, it might affect your technology strategy. Or not.)
One local I ended up reaching out to was Stephen Wolfram: the computational guru, CEO of Wolfram Research, and creator of Mathematica, A New Kind of Science, and Wolfram Alpha. I predicted that his answer would be, shall we say, complicated.
Here’s what Wolfram wrote back (I would say he’s a non-believer, but you be the judge; as usual, he brings up a different way of looking at things):
“Yes, it’s complicated.
There will be more automation, and the rate of new automation will increase. More and more we’ll be able to state a goal, then everything about how the goal is executed will be figured out automatically.

There’ll also be more and more pre-emptive automation: things being done automatically without us explicitly having to ask for them.

I don’t think there’ll be one dramatic moment when humans get surpassed by technology. We’ll just see all sorts of steps taken, continuing the trend we’ve seen for centuries.

We’ll have automation that can do an incredible amount. But we humans will still have to define what we want to achieve; what the purpose is.

Now how pre-emptive automation is designed (or, effectively, evolves) will affect what we think of to do, and how we think about it.

Even today quite a lot of what we do is defined by technology; more of that will happen. And it’ll be complicated to say whether the technology is ‘in charge’…”