Tech Files First Take: Google Glass

Google Glass may have a lot of potential. But my first impression of the new wearable technology product is that it needs a lot of polishing if it’s ever to become a mainstream device.

And it suffers from what may be a fundamental flaw that could prevent it from ever catching on.

For those who have somehow avoided the hype, Glass is the new computer from Google that users wear like a pair of eyeglasses. Its display is contained in a small clear box that’s connected to Glass’ frame just above a user’s right eye. Users interact with Glass either by giving it audible commands or by swiping or tapping on its touch-sensitive temple. It also includes a speaker that transmits sound to a user’s ear via bone-conduction technology.

Google has only offered Glass to some 2,000 developers and another 8,000 consumers who submitted winning entries in the company’s “If I had Glass” contest earlier this year. The early adopters had to pay a pretty penny for the product: $1,500.

I got my first chance to test Glass last night at a low-key event Google held in San Francisco. I wore a Glass unit for about 30 minutes while a company representative walked me through some of its primary features.

That’s clearly not enough time to thoroughly evaluate the device. I didn’t get to go through the steps of configuring it. I didn’t get to test its battery life.

And because of the format of the event and the way the demonstration units were configured, I didn’t get to test some of its key features. I didn’t pair it with a phone, so I couldn’t make calls through it or get directions, which requires Google Maps on an Android device. And because the only contact in it was a dummy one, I couldn’t share my view with someone else by making a video call.

Still, I did have enough time with Glass to form some initial thoughts about it.

The first thing you notice when wearing Glass is its display. The glass box containing it hovers in front of your eye. Even when the screen isn’t turned on, it’s noticeable and, at least in my short experience, annoying. I found myself lifting my head like I was wearing reading glasses to look under it.

Given how physically close the display is to your eye, I was frustrated by how difficult it was to focus on when it was turned on. While the display is only an inch or so away from your eye, the focal distance of the screen is maybe a foot away.

The descriptions of Glass that I’ve read indicate that you merely have to look up to see the display. But it’s more than that. You have to look in the right place at the right distance away. If you’re staring at something up close or something in the distance, it may take your eyes a moment or two to find Glass’ display and be able to focus on it.

It was also pretty clear from my test why some authorities are talking about banning the use of Glass while driving. The simple fact is that even though Glass has only one display, and that display only partially blocks the vision of one eye, the screen is all you see when you are focusing on it. Your mind blocks out pretty much everything else. Maybe because it’s so close to your eye, it seems to severely constrain your peripheral vision, even more so than when you walk while looking at a smartphone.

The design also has some serious limitations. Glass is not exactly made for wearers of prescription glasses, because you can’t yet get prescription lenses that fit into the device’s frame. That could leave glasses owners awkwardly trying to wear two pairs of frames at the same time or forgoing their prescription lenses entirely.

Maybe it’s because I’m north of 40, but I think people who wear Glass just look goofy. Regardless, it’s going to take some getting used to interacting with people who are wearing the device. It’s like trying to talk to someone who’s got at least one eye on the TV screen at all times.

In terms of using the device, there are some cool things about it. One fun thing I did was to ask Glass how to say “good morning” in Swahili. It quickly gave me the translation, both speaking it to me and displaying it on its screen. It was also fun to be able to take pictures or videos without having to hold a camera in my hands.

Of the two ways to interact with Glass, voice commands were much easier to use. Google’s voice technology is well developed and very accurate, and it was fairly easy to get Glass to do a Web search, take a picture or send a message just by talking to it. Glass displays what it hears you saying as you speak, so you can easily tell whether it’s understanding you.

I was far less impressed with the touch-based interface. It was hard to remember whether to tap with one finger or two, swipe up or down, or swipe fast or slow. Each of those gestures performs a different command, and if you do the wrong one, you can end up on the wrong screen. My guess is that users get accustomed to those gestures the more they use Glass, but they’re not exactly intuitive to learn.

I also didn’t like the fact that some of Glass’ key functions are severely limited. There’s no way to zoom in on a picture, for example. And because the screen doesn’t offer previews, it’s hard to know what will be in your picture until you take it. Meanwhile, videos are limited to 10 seconds unless you happen to extend the recording time by touching the right button before that 10 seconds is up.

There’s also no way to adjust the volume of Glass’ speaker, which can make it difficult to hear in places where there is a lot of ambient noise. That seems like a big shortcoming in that many of the places you are likely to use Glass — as you’re walking down a city street, say, or at a party — are noisy environments.

I was disappointed that I didn’t get to test the video calling feature of Glass, because that is perhaps its killer app. The ability to share with others what you are viewing has lots of potential uses, from doctors performing surgeries to field trips from afar, from shared sports experiences to cool new kinds of augmented reality video games.

My overall impression of Glass from my brief experience with it is that the device is a great proof-of-concept. It shows what a smart wearable technology product might look like and how it might work.

But it’s a far cry from a finished product, something Google itself recognizes. Company representatives say that the mainstream version the company plans to release next year will be influenced by the feedback from early adopters.

I’ll be interested to see the changes Google makes to Glass for version 2.0. I’m sure the company will fix some of its more picayune problems. But I’m curious as to whether and how the company can address the cluster of problems — both experiential and social — caused by placing a display right in front of a user’s eye at all times.

I’m worried that those are problems that Google can’t overcome.

