bringing those inventions to [market]. I’m an inventor and I’m also an executive. I’m kind of weird in that way. Most people who become executives in tech stop doing the coding or the invention, or stop working in the lab. And I don’t. I don’t think you can really lead engineering unless you’re in it.
I wish I didn’t feel so far out on a limb when I say that, but I’ve got more than 100 patents published or issued in the last five years alone. And I work and invent and have a lab at home, and I continue to kind of tinker with ideas by working physically in the lab.
After I left Facebook—I think we announced it last May and I left full time in August, but I had really been part time since last February—I was able to work on my company from that time forward, and last spring I completely reinvented the idea.
I thought that I was going to use the simple approach of bringing in the light and just looking for a signal, being more clever, and just dealing with all this scatter, the fog if you will, of your body. But then it occurred to me that I could instead take a hologram. I know a lot about holography. I made my first hologram as a teenager. I spent a decade making really advanced holographic systems, including with a group of graduate students. And I co-created the world’s first holographic video system. And a holographic video system is what’s needed for your body. Because it’s scattering, and we need to take a hologram of the scattering, the fog, of your body, then invert that. But then your body is moving, so you need to do this at a steady clip, somewhere between 10 microseconds and one second, depending upon what you’re measuring. You need to make a new hologram of your body, and then by doing this phase conjugation trick you can invert the scattering and make your body transparent, and that has really, really profound applications.
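The phase-conjugation trick described here can be sketched numerically. A minimal toy model, assuming a lossless, reciprocal medium (real tissue is lossy and time-varying, which is why the hologram must be refreshed so often): represent the scattering as a random unitary transmission matrix, record the scattered field emerging from a point source, conjugate it, and send it back. By reciprocity, the return path is the transpose of the transmission matrix, and the conjugated field refocuses on the source. All variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of optical modes in the toy model

# Random unitary "scattering matrix" standing in for tissue
# (assumption: lossless, reciprocal medium).
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T, _ = np.linalg.qr(m)

# A point source buried in the "fog".
src = np.zeros(n, dtype=complex)
src[n // 2] = 1.0

# Light emerging from the medium is speckle: energy smeared over many modes.
scattered = T @ src

# Phase-conjugate the recorded field and send it back through the medium;
# reciprocity means the return propagation is T transpose.
recovered = T.T @ np.conj(scattered)

print(np.abs(scattered).max())   # well below 1: energy spread across modes
print(np.abs(recovered).max())   # ~1.0: energy refocused on the source mode
```

For a unitary T, the round trip satisfies T.T @ conj(T) = I, so the conjugated field reconstructs the source exactly; in a real scattering medium the refocus is only approximate, which is why the hologram must be re-recorded as the tissue moves.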
But the first discovery of this basic physics effect was in 1965, by the very people, Leith and Upatnieks, who made the first display hologram that you could see with your eyes. It was of a train. What’s really interesting is that in the very next hologram they made, they put a diffuser, like a ground glass screen, in front of the train, made a hologram of it, and then inverted the image using phase conjugation. And they could see the train through the diffuser. That was like, whoa. That’s pretty amazing. But people haven’t really exploited that technique. I think it’s sort of a forgotten thing. It gets reinvented every decade or so in different fields, working in, you know, microwaves or different types of wavelengths, and it’s become of interest lately in medical imaging.
Now there are a few scattered academic groups trying to work on this. They’re using off-the-shelf components, micro displays. They’re using components that are worse than stuff that we shipped in the late ’90s at MicroDisplay Corp.
Xconomy: So it’s because you’re so ideally positioned, with your experience in display design and display manufacturing, that you can bring a perspective to this whole holography problem that nobody else has had.
MLJ: That was what occurred to me. And the strange thing is, it occurred to me last spring after I had already jumped off that cliff and started Openwater. But I completely threw out my old ideas on it and moved forward with this holography thing. And device design is really the thing that I’m best known for. I put inventions into hardware, software, consumer electronic systems. But all my systems have been really distinguished by getting into every level. Like when you look at a chip, there’s layers of metal and oxide and silicon: how thick they are, how long you bake them for, what patterns you put into them, what the design rules are, how you can move the design rules to a different space, break the design rules by making it a little longer. All kinds of different things. So there’s a very, very deep level of device physics and photonics and optoelectronics that can be brought to bear here.
After co-founding One Laptop Per Child with Nicholas Negroponte—he and I made this little nonprofit, and it catalyzed $30 billion of revenue for our for-profit partners and changed the lives of 100 million children in the developing world with a $100 laptop. A key part of the electronics was massively lowering the power consumption, and so we needed full custom chipsets for everything. And I designed and architected those. And as I came back to my team, I thought, wow, maybe there aren’t enough MIT professors sleeping on the factory floors of the world figuring out how we can use these very slim-margin multibillion-dollar manufacturing structures to make more innovative products and skip a couple of generations.
And I could probably make the same components with the grad students or post-docs at the MIT labs that they could probably make, and maybe once and not repeat them. But if I make them on a $13 billion fab in Japan, once they make one of them they can make a million of them three months later. And I just got hooked on having that kind of impact. And so I left MIT to try to continue to make use of those 2-percent-margin fabs of the world and move them to faster innovation cycles.
Xconomy: Which you’ve done now for Oculus at Facebook and Google X, presumably. And now you’re bringing it all the way back around to your own company at Openwater. So what stage are you at, Mary Lou, at Openwater? Are you building a prototype? And what do you think will be the first application areas?
MLJ: Yeah, we’re building prototypes. We’re actually spending about a year just building up prototypes, ripping them down, and trying, in parallel, you know, a dozen different approaches when you get to the nitty gritty of the design. We haven’t decided our first product.
I keep looking at my little dog too. I’d really love to know what she is thinking, following all the right protocols, of course. But I’m thinking about that as a lab test, obviously.
But the first product—there are so many people hitting us up. I mean, the implications are pretty profound for the billion people in the world who live with debilitating brain disease, be it mental disease or a degenerative disease, and who get an MRI once a year, for example. The move to a wearable with continuous diagnostics could be akin to the transformation of diabetes care once you could buy an off-the-shelf system where you could do a blood stick and see what your level was at that moment, rather than taking a standard dose of insulin every day. You can titrate it based on how you’re doing.
And we can also localize treatment, because these systems that we’re creating at Openwater that make your body effectively transparent can do reading and writing. So you can localize treatment in certain areas of your brain or body. You can read things and write things, so that as we get to thinking about telepathy, that also has some profound implications, if you think about learning and implanting thoughts and communicating at the neuronal level.
Xconomy: Do you think that you’ll try to develop the medical imaging applications and the thought reading applications simultaneously—that different people will partner with you and take it in both directions at once? Or are the so-called mind reading scenarios much farther out?
MLJ: I think the mind reading scenarios are farther out because of the ethical and legal implications. And the reason that I’m talking about them early is because they do have profound ethical and legal implications. The medical implications of having a bra that can tell you if you’re developing breast cancer are more straightforward, so to speak. I presume the medical applications will come first. But there’s a crossover when you look at mental disease, neurodegenerative diseases.
And that’s something we will walk through, and we are certainly in a lot of different conversations about it right now as we’re developing the prototypes, trying to get in front of it. One of the reasons that I wanted to do this in a small company was so that we could talk about it. It’s just difficult for large companies to speak publicly about programs that are in the very early stages. It’s too complicated for their PR departments, and for all kinds of different reasons.
And it was actually Peter Gabriel, the musician and human rights activist, who kept calling me, I think, every week for about six months, trying to convince me to take this project and make it into a startup, and not do it inside of a big company, at least in the beginning, so that we could talk openly about the ethical and legal implications of this.
Xconomy: What’s Peter Gabriel’s interest in this?
MLJ: I’ve known him for a long time. But I saw him backstage at some conference where we were both speaking and I reconnected with him, and he’s just very interested in it and wants it to happen, but in a way where the ethical implications are openly discussed, and we’re figuring out how best to do that. There are a lot of existing organizations that work on this, and we may start a new organization. We’re certainly participating in a lot of the discussions, but collectively, globally, our notions of privacy are changing every year right now.
And so the days of a committee sitting in a board room behind closed doors deciding what’s ethical seem to be over. There needs to be public discussion of it because of this rapid change of what our expectations and beliefs are on privacy.
There are other people working on different approaches to telepathic systems. In fact, if you look at it as I have, and I think others have, the National Academy of Sciences in the United States said one of the top five things you can work on as a technologist is reverse-engineering the brain. That’s true for many other countries right now, in Europe, China, Australia, and so forth. So the question is—scientists don’t like to talk about the ethics as they’re going along. It’s not really in the education system right now.
Yet with all of these top bright minds working on this, what happens if we achieve it? What do we do? What’s right, what’s wrong, why are we not talking about what happens? And I think it’s because so much of it is in the academy, where you don’t want to overstate anything in the papers you publish for promotion, for getting the professorship. And yet the logical conclusion of all of this is that we’re able to read a book by just tapping the book and it’s downloaded into our brain. That kind of stuff is the logical conclusion of this. And we’re not really talking about what that world looks like.
In many ways we see this sort of almost mass hysteria, I would say, about AIs taking over the world. But what if we do the opposite? I mean, when Marvin Minsky and John McCarthy first coined the term artificial intelligence, Doug Engelbart immediately said, “I don’t want AI, I want IA, intelligence augmentation.” How can we make people smarter? And so this technology makes people smarter.
And so maybe right now we’re really limited. Our input to our brains is pretty good. Our brains themselves are more complex than any computer we know how to make: a hundred billion neurons, each neuron having 100,000 different connections, and we don’t really understand how neurons work. As Paul Allen likes to say, there are five Nobel prizes just in understanding how a neuron works. They’re really, really complicated. So that’s pretty good. The problem is the output. We move our jaws and our tongues to talk, or move our fingers to type. What if we could communicate and dump images and music and thoughts and ideas directly to the computer or to each other, first mediated by computer? Or even, we can put a filter on it. Thoughts that you don’t want to communicate with others, you can filter. You own your thoughts; you can delete your thoughts. These are some basic tenets that we’re working on. But if you’ve shared a thought, you don’t get to pull it back.
Xconomy: You could be looking at the world’s best lie detector, right? No one would be able to mask their thoughts anymore.
MLJ: That’s why we need to teach people how to mask their thoughts. There are lots of people working on these types of systems, though I think it will be ours. I was just talking to the kernel.co people; I think they’re working on a totally different approach, an invasive one, where we’re doing noninvasive. But if the police or the military makes you wear such a hat, anybody’s system, it becomes our responsibility to inform everybody how to fool the system.
You have to want to think into the hat for it to work. That’s my goal right now. And to be very responsible about introducing this into the world.