I used Google Glass: the future, with monthly updates | The Verge
Finding Glass
The Glass project was started “about three years ago” by an engineer named Babak Parviz as part of Google’s X Lab initiative, the lab also responsible for — amongst other things — self-driving cars and neural networks. Unlike those epic, sci-fi R&D projects at Google, Glass is getting real much sooner than anyone expected. The company offered developers an option to buy into an early adopter strategy called the Explorer Program during its I/O conference last year, and just this week it extended that opportunity to people in the US through a Twitter campaign that asks potential users to explain how they would put the new technology to use. Think of it as a really aggressive beta — something Google is known for.
I was about to beta test Glass myself. But first, I had questions.
Seated in a surprisingly bland room — by Google’s whimsical office standards — I find myself opposite two of the most important players in the development of Glass, product director Steve Lee and lead industrial designer Isabelle Olsson. Steve and Isabelle make for a convincing pair of spokespeople for the product. He’s excitable, bouncy even, with big bright eyes that spark up every time he makes a point about Glass. Isabelle is more reserved, but speaks with incredible fervency about the product. And she has extremely red hair. Before we can even start talking about Glass, Isabelle and I are in a heated conversation about how you define the color navy blue. She’s passionate about design — a condition that seems to be rather contagious at Google these days — and it shows.
Though the question of design is at the front of my mind, a picture of why Glass exists at all begins to emerge as we talk, and it’s clearly not about making a new fashion accessory. Steve tries to explain it to me.
“Why are we even working on Glass? We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes. If you’re a parent — let’s say your child’s performance, watching them do a soccer game or a musical. Often friends will be holding a camera to capture that moment. Guess what? It’s gone. You just missed that amazing game.” Isabelle chimes in, “Did you see that Louis C.K. stand up when he was telling parents, ‘your kids are better resolution in real life?’” Everyone laughs, but the point is made.
Human beings have developed a new problem since the advent of the iPhone and the following mobile revolution: no one is paying attention to anything they’re actually doing. Everyone seems to be looking down at something or through something. Those perfect moments watching your favorite band play or your kid’s recital are either being captured via the lens of a device that sits between you and the actual experience, or being interrupted by constant notifications. Pings from the outside world, breaking into what used to be whole, personal moments.
Steve goes on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” I can’t stop looking at the lens above his right eye. “It’s a new wearable technology. It’s a very ambitious way to tackle this problem, but that’s really sort of the underpinning of why we worked on Glass.”
I get it. We’re all distracted. No one can pay attention. We’re missing all of life’s moments. Sure, it’s a problem, but it’s a new problem, and this isn’t the first time we’ve been distracted by a new technology. Hell, they used to think car radios would send drivers careening off the highways. We’ll figure out how to manage our distraction, right?
Maybe, but obviously the Glass team doesn’t want to wait to find out. Isabelle tells me about the moment the concept clicked for her. “One day, I went to work — I live in SF and I have to commute to Mountain View and there are these shuttles — I went to the shuttle stop and I saw a line of not 10 people but 15 people standing in a row like this,” she puts her head down and mimics someone poking at a smartphone. “I don’t want to do that, you know? I don’t want to be that person. That’s when it dawned on me that, OK, we have to make this work. It’s bold. It’s crazy. But we think that we can do something cool with it.”
Bold and crazy sounds right, especially after Steve tells me that the company expects to have Glass on the market as a consumer device by the end of this year.
Google-level design
Forget about normal eyeglasses for a moment. Forget about chunky hipster glasses. Forget about John Lennon’s circle sunglasses. Forget The Boys of Summer; forget how she looks with her hair slicked back and her Wayfarers on. Pretend that stuff doesn’t exist. Just humor me.
The design of Glass is actually really beautiful. Elegant, sophisticated. They look human and a little bit alien all at once. Futuristic but not out of time — like an artifact from the 1960s, someone trying to imagine what 2013 would be like. This is Apple-level design. No, in some ways it’s beyond what Apple has been doing recently. It’s daring, inventive, playful, and yet somehow still ultimately simple. The materials feel good in your hand and on your head, solid but surprisingly light. Comfortable. If Google keeps this up, soon we’ll be saying things like “this is Google-level design.”
Even the packaging seems thoughtful.
The system itself is made up of only a few basic pieces. The main body of Glass is a soft-touch plastic that houses the brains, battery, and counterweight (which sits behind your ear). There’s a thin metal strip that creates the arc of the glasses, with a set of rather typical pad arms and nose pads which allow the device to rest on your face.
Google is making the first version of the device in a variety of colors. If you don’t want to get creative, those colors are: gray, orange, black, white, and light blue. I joke around with Steve and Isabelle about what I think the more creative names would be. “Is the gray one Graphite? Hold on, don’t tell me. I’m going to guess.” I go down the list. “Tomato? Onyx? Powder — no Avalanche, and Seabreeze.” Steve and Isabelle laugh. “That’s good,” Isabelle says.
But seriously. Shale. Tangerine. Charcoal. Cotton. Sky. So close.
That conversation leads into discussion of the importance of color in a product that you wear every day. “It’s one of those things, you think like, ‘oh, whatever, it is important,’ but it’s a secondary thing. But we started to realize how people get attached to the device… a lot of it is due to the color,” Isabelle tells me.
And there is something to it. When I saw the devices in the different colors, and when I tried on Tangerine and Sky, I started to get emotional about which one was more “me.” It’s not like how you feel about a favorite pair of sunglasses, but it evokes a similar response. They’re supposed to feel like yours.
Isabelle came to the project and Google from Yves Behar’s design studio. She joined the Glass team when their product was little more than a bizarre pair of white eyeglass frames with comically large circuit boards glued to either side. She shows me — perhaps ironically — a Chanel box with the original prototype inside, its prism lens limply dangling from the right eye, a gray ribbon cable strewn from one side to the other. The breadboard version.
It was Isabelle’s job to make Glass into something that you could wear, even if maybe you still weren’t sure you wanted to wear it. She gets that there are still challenges.
The Explorer edition the company will ship out has an interchangeable sunglass accessory which twists on or off easily, and I must admit it makes Glass look slightly more sane. I also learn that the device actually comes apart, separating that center metal rim from the brains and lens attached on the right. The idea is that you could attach another frame fitted for Glass that would completely alter the look of the device while still allowing for the heads-up functionality. Steve and Isabelle won’t say if they’re working with partners like Ray-Ban or Tom Ford (the company that makes my glasses), but the New York Times just reported that Google is speaking to Warby Parker, and I’m inclined to believe that particular rumor. It’s obvious the company realizes the need for this thing to not just look wearable — Google needs people to want to wear it.
So yes, Glass looks beautiful to me, but I still don’t want to wear it.
Topolsky in Mirrorshades
Finally I get a chance to put the device on and find out what using Glass in the real world actually feels like. This is the moment I’ve been waiting for all day. It’s really happening.
When you activate Glass, there’s supposed to be a small screen that floats in the upper right-hand corner of your field of vision, but I don’t see the whole thing right away. Instead I’m getting a ghost of the upper portion, and the bottom half seems to melt away at the corner of my eye.
Steve and Isabelle adjust the nose pad and suddenly I see the glowing box. Victory.
It takes a moment to adjust to this spectral screen in your vision, and it’s especially odd the first time: you see it, it disappears, and you want it to reappear but don’t know how to make that happen. Luckily that really only happens once, at least for me.
Here’s what you see: the time is displayed, with a small amount of text underneath that reads “ok glass.” That’s how you get Glass to wake up to your voice commands. Actually, it’s a two-step process. First you have to touch the side of the device (which is actually a touchpad), or tilt your head upward slowly, a gesture which tells Glass to wake up. Once you’ve done that, you start issuing commands by saying “ok glass” first, or scroll through the options using your finger along the side of the device. You can scroll items by moving your finger backward or forward along the strip, you select by tapping, and move “back” by swiping down. Most of the big interaction is done by voice, however.
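For the programmatically inclined, that interaction model reduces to a small state machine: wake the device by touch or head tilt, then either speak an “ok glass” command or scroll the options by swipe. Here’s my own illustrative sketch of that flow — this is not Google’s code, and every name in it is made up:

```python
# Illustrative sketch of the Glass input model described above.
# Not Google's code; the class, menu items, and method names are hypothetical.

class GlassInput:
    """Two-step interaction: wake the device first, then command it."""

    def __init__(self):
        self.awake = False
        self.menu = ["google", "take a picture", "record a video"]
        self.index = 0

    def tap_touchpad(self):
        # Touching the side of the device wakes it.
        self.awake = True

    def tilt_head_up(self):
        # The slow upward head tilt does the same thing.
        self.awake = True

    def hear(self, phrase):
        # Voice commands only register on an awake device,
        # and must be prefixed with "ok glass".
        if self.awake and phrase.startswith("ok glass"):
            return phrase.removeprefix("ok glass").strip(", ")
        return None

    def swipe(self, direction):
        # Finger strokes along the strip scroll the visible options.
        if direction == "forward":
            self.index = (self.index + 1) % len(self.menu)
        elif direction == "backward":
            self.index = (self.index - 1) % len(self.menu)
        return self.menu[self.index]

    def swipe_down(self):
        # Swiping down acts as "back", putting the device to sleep.
        self.awake = False


glass = GlassInput()
glass.hear("ok glass, take a picture")   # ignored: device is still asleep
glass.tilt_head_up()                     # wake gesture
print(glass.hear("ok glass, take a picture"))
```

The two-step design — a deliberate wake gesture before any command registers — is what keeps the device from reacting to every stray “ok glass” it overhears.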
The device gets data through Wi-Fi on its own, or it can tether via Bluetooth to an Android device or iPhone and use its 3G or 4G data while out and about. There’s no cellular radio in Glass, but it does have a GPS chip.
Let me start by saying that using it is actually nearly identical to what the company showed off in its newest demo video. That’s not CGI — it’s what Glass is actually like to use. It’s clean, elegant, and makes relative sense. The screen is not disruptive, you do not feel burdened by it. It is there and then it is gone. It’s not shocking. It’s not jarring. It’s just this new thing in your field of vision. And it’s actually pretty cool.
Images taken with Google Glass
Glass does all sorts of basic stuff after you say “ok glass.” Things you’ll want to do right away with a camera on your face. “Take a picture” snaps a photo. “Record a video” records ten seconds of video. If you want more you can just tap the side of the device. Saying “ok glass, Google” gets you into search, which plugs heavily into what Google has been doing with Google Now and its Knowledge Graph. Most of the time when you ask Glass questions you get hyper-stylized cards full of information, much like you do in Google Now on Android.
The natural language search works most of the time, but when it doesn’t, it can be confusing, leaving you with text results that seem like a dead-end. And Glass doesn’t always hear you correctly, or the pace it’s expecting you to speak at doesn’t line up with reality. I struggled repeatedly with Glass when issuing voice commands that seemed to come too fast for the device to interpret. When I got it right, however, Glass usually responded quickly, serving up bits of information and jumping into action as expected.
Some of the issues stemmed from a more common problem: no data. A good data connection is obviously key for the device to function properly, and when taking Glass outside for a stroll, losing data or experiencing slow data on a phone put the headset into a near-unusable state.
Steve and Isabelle know the experience isn’t perfect. In fact, they tell me that the team plans to issue monthly updates to the device when the Explorer program starts rolling. This is very much a work in progress.
But the most interesting parts of Glass for many people won’t be its search functionality, at least not just its basic ability to pull data up. Yes, it can tell you how old Brad Pitt is (49 for those keeping count), but Google is more interested in what it can do for you in the moment. Want the weather? It can do that. Want to get directions? It can do that and display a real-time, turn-by-turn overlay. Want to have a Google Hangout with someone that allows them to see what you’re seeing? Yep, it does that.
But the feature everyone is going to go crazy with — and the feature you probably most want to use — is Glass’ ability to take photos and video with a “you are there” view. I won’t lie, it’s amazingly powerful (and more than a little scary) to be able to just start recording video or snapping pictures with a couple of flicks of your finger or simple voice commands.
At one point during my time with Glass, we all went out to navigate to a nearby Starbucks — the camera crew I’d brought with me came along. As soon as we got inside, however, the employees at Starbucks asked us to stop filming. Sure, no problem. But I kept Glass’ video recorder going, all the way through my order and getting my coffee. Yes, you can see a light in the prism when the device is recording, but I got the impression that most people had no idea what they were looking at. The cashier seemed to be on the verge of asking me what I was wearing on my face, but the question never came. He certainly never asked me to stop filming.
Once those Explorer editions are out in the world, you can expect a slew of use (and misuse) in this department. Maybe misuse is the wrong word here. Steve tells me that part of the Explorer program is to find out how people want to (and will) use Glass. “It’s really important,” he says, “what we’re trying to do is expand the community that we have for Glass users. Currently it’s just our team and a few other Google people testing it. We want to expand that to people outside of Google. We think it’s really important, actually, for the development of Glass because it’s such a new product and it’s not just a piece of software. We want to learn from people how it’s going to fit into their lifestyle.” He gets the point. “It’s a very intimate device. We’d like to better understand how other people are going to use it. We think they’ll have a great opportunity to influence and shape the opportunity of Glass by not only giving us feedback on the product, but by helping us develop social norms as well.”
I ask if it’s their attempt to define “Glass etiquette.” Will there be the Glass version of Twitter’s RT? “That’s what the Explorer program is about,” Steve says. But that’s not going to answer questions about what’s right and wrong to do with a camera that doesn’t need to be held up to take a photo, and often won’t even be noticed by its owner’s subjects. Will people get comfortable with that? Are they supposed to?
The privacy issue is going to be a big hurdle for Google with Glass. Almost as big as the hurdle it has to jump over to convince normal people to wear something as alien and unfashionable as Glass seems right now.
But what’s it actually like to have Glass on? To use it when you’re walking around? Well, it’s kind of awesome.
Think of it this way — if you get a text message or have an incoming call when you’re walking down a busy street, there are something like two or three things you have to do before you can deal with that situation. Most of them involve you completely taking your attention off of your task at hand: walking down the street. With Glass, that information just appears to you, in your line of sight, ready for you to take action on. And taking that action is little more than touching the side of Glass or tilting your head up — nothing that would take you away from your main task of not running into people.
It’s a simple concept that feels powerful in practice.
The same is true for navigation. When I get out of trains in New York I am constantly jumping right into Google Maps to figure out where I’m headed. Even after more than a decade in the city, I seem to never be able to figure out which way to turn when I exit a subway station. You still have to grapple with asking for directions with Glass, but removing the barrier of being completely distracted by the device in your hand is significant, and actually receiving directions as you walk is even more significant. In the city, Glass makes you feel more powerful, better equipped, and definitely less diverted.
I will admit that wearing Glass made me feel self-conscious, and maybe it’s just my paranoia acting up (or the fact that I look like a huge weirdo), but I felt people staring at me. Everyone who I made eye contact with while in Glass seemed to be just about to say “hey, what the hell is that?” and it made me uncomfortable.
Steve claims that when those questions do come, people are excited to find out what Glass is. “We’ve been wearing this for almost a year now out in public, and it’s been so interesting and exciting to do that. Before, we were super excited about it and confident in our design, but you never know until you start wearing it out and about. Of course my friends would joke with me ‘oh no girls are going to talk to you now, they’ll think it’s strange.’ The exact opposite happened.”
I don’t think Glass is right for every situation. It’s easy to see how it’s amazing for parents to capture all of the adorable things their kids are doing, or for skydivers and rock climbers who clearly don’t have their hands free and also happen to be having life-changing experiences. And yes, it’s probably helpful if you’re in Thailand and need directions or translation — but this might not be that great at a dinner party, or on a date, or watching a movie. In fact, it could make those situations very awkward, or at the least, change them in ways you might not like.
Sometimes you want to be distracted in the old-fashioned ways. And sometimes, you want people to see you — not a device you’re wearing on your face. One that may or may not be recording them right this second.
And that brings me back to the start: who would want to wear this thing in public?
Not if, but when
Honestly, I started to like Glass a lot when I was wearing it. It wasn’t uncomfortable and it brought something new into view (both literally and figuratively) that has tremendous value and potential. I don’t think my face looks quite right without my glasses on, and I didn’t think it looked quite right while wearing Google Glass, but after a while it started to feel less and less not-right. And that’s something, right?
The sunglass attachment Google is shipping with the device goes a long way to normalizing the experience. A partnership with someone like Ray-Ban or Warby Parker would go further still. It’s actually easy to see now — after using it, after feeling what it’s like to be in public with Glass on — how you could get comfortable with the device.
Is it ready for everyone right now? Not really. Does the Glass team still have huge distance to cover in making the experience work just the way it should every time you use it? Definitely.
But I walked away convinced that this wasn’t just one of Google’s weird flights of fancy. The more I used Glass the more it made sense to me; the more I wanted it. If the team had told me I could sign up to have my current glasses augmented with Glass technology, I would have put pen to paper (and money in their hands) right then and there. And it’s that kind of stuff that will make the difference between this being a niche device for geeks and a product that everyone wants to experience.
After a few hours with Glass, I’ve decided that the question is no longer ‘if,’ but ‘when?’
Video shot and edited by: Jordan Oplinger & Ryan Manning; Additional editing by Billy Disney
Photography: Michael Shane