Note to self: Next time you give a keynote speech in Second Life, tickle your avatar every once in a while to keep it awake.
I was slightly embarrassed yesterday at Life 2.0, a virtual conference organized inside the virtual world Second Life by multimedia publisher CMP, when I realized that I’d been lecturing for 10 or 15 minutes but my avatar was slumped over the podium like a narcoleptic. It’s one of the unintentionally hilarious features of Second Life that when a user is “afk” or away from the keyboard for more than about 10 minutes, their avatar falls asleep. I wasn’t technically afk, but I was gabbing away over an audio link without remembering to nudge my avatar.
Hopefully, my disrespectful posture didn’t sour the hundred or so people in the audience on my speech, which was about the current-day technologies giving rise to the “Metaverse.” That term was coined by Neal Stephenson in his cyberpunk novel Snow Crash, but it’s being used today to connote the sum product (and the future shape) of immersive 3-D computer environments as diverse as Second Life, Google Earth, Microsoft Virtual Earth, and World of Warcraft. I basically spent the first third of 2007 writing a massive cover story about the Metaverse for MIT’s Technology Review magazine—a story that, I’m pleased to say, a lot of people have ended up pointing to as a useful, centralized explanation of the current moment in the evolution of virtual-worlds technology. A few weeks ago, John Jainschigg, director of online technology and new business for CMP’s Metaverse division, invited me to give a talk about the article as the opening session of the fall edition of Life 2.0, a quarterly event that attracts software developers and businesspeople interested in using Second Life and other immersive environments to engage with customers. Despite a few qualms about being able to translate the article into a decent talk, I accepted, and yesterday I was forced to make good on my commitment.
I mainly repeated the argument from my article that anyone who has spent time in both Google Earth (the most popular map world or “geobrowser”) and Second Life (the leading social virtual world, created and operated by San Francisco-based Linden Lab) should appreciate how powerful it would be to mash up the two technologies, or at least the driving ideas behind them. Wandering around Second Life demonstrates how natural it can be to build and explore 3-D structures and environments through the medium of a human-shaped, human-acting avatar. Browsing Google Earth demonstrates the sense of freedom and mastery that comes from having fingertip access to an entire globe’s worth of geographical data at multiple levels of resolution.
The next step—either bringing avatars into map worlds, or making social virtual worlds more map-like—is so obvious that someone will figure out how to do it, whether or not there’s money in it. And from there, it’s not too many more steps to a full Metaverse—a 24/7 immersive simulation of the real world, as ubiquitous and accessible as the Web and used for everything from recreation and virtual tourism to city management, logistics and supply-chain management, military training, and environmental monitoring.
I wasn’t arguing in my piece that privately controlled platforms like Google Earth and Second Life themselves will become the cornerstones of the Metaverse (although several rudimentary attempts are underway to make avatars work in Google Earth and to make giant maps work in Second Life, as I detailed in my article). Rather, I think these two programs are serving as testbeds and training grounds for the developers who will soon go out and build a new Metaverse infrastructure that’s much more Web-like (in the sense that it will be based on open standards, so that anyone can add to it).
That’s not to say that Google and Linden Lab won’t be there as participants. Linden Lab, where the programmers and executives are smart enough to know they can’t build the Metaverse on their own, has already released an open-source version of its client viewer program and says it will eventually contribute the underlying simulation software to the open-source community. And Google recently contributed KML, the formatting language that allows users to create data overlays in Google Earth, to the Open Geospatial Consortium for consideration as an industry-wide open standard for geographic markup.
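To give a rough sense of what a KML overlay is, here’s a tiny, purely illustrative sketch (the landmark name and coordinates are placeholders I made up, not real data) of a script that writes a one-placemark KML file that Google Earth can open:

```python
# Purely illustrative: write a minimal one-placemark KML file.
# The landmark name and coordinates below are placeholders.

placemark_kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example landmark</name>
    <description>A single point overlaid on the globe.</description>
    <Point>
      <!-- KML lists coordinates as longitude,latitude,altitude -->
      <coordinates>-71.0603,42.3583,0</coordinates>
    </Point>
  </Placemark>
</kml>
"""

with open("example_landmark.kml", "w") as kml_file:
    kml_file.write(placemark_kml)
```

Opening a file like that in Google Earth drops a single pin at the given coordinates; real overlays layer many such elements, along with polygons, image overlays, and links to outside data sources.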
After I shut up, audience members posed several nice questions, such as (my paraphrases): “What business models will drive the Metaverse?” “Will there be wide enough access to the broadband Internet to make the Metaverse work for everyone?” “What critical mass of participation must be reached before Metaverse construction really takes off?” I won’t bore you by recounting my answers. But as at any well-run technology event (which—hats off to John Jainschigg—this one definitely was), there were far too many good questions at the end and too little time to talk about them.
I made sure to plug Xconomy and to tell people what we’re doing here to analyze, and hopefully cultivate, the innovation scene in greater Boston. In that connection, it’s worth noting that Microsoft and Google both have growing presences in Boston, and that several Boston-area companies are working on various aspects of the Metaverse, including the Cambridge outpost of Linden Lab, Quincy game studio 2K Boston, Red Sox pitcher Curt Schilling’s MMORPG development house 38 Studios, and stealth-mode social-virtual-world developer Conduit Labs. There’s also a project underway at Emerson College to create a virtual version of Boston, or at least its major landmarks, inside Second Life.
Life 2.0, which is being sponsored by Sun Microsystems and IBM, continues until September 21. You can register for free here to attend the remainder of the conference. I’m told that a videotape of my keynote and other sessions will be available at some point in the future.