Usability Testing of Google Glass

NB: This was a paper I wrote for my library school course, LIS 644 Usability, on May 10, 2014. I have not made any revisions or updates since that time.

Usually, usability testing of a new product happens behind the scenes. One popular method, user testing, takes place in a controlled environment, where participants are gently guided by usability experts and data is meticulously collected and analyzed. Participants are carefully recruited, often to capture a representative sample of a population, and perhaps rewarded with a token of appreciation for their time and effort. User testing is a solid, reliable research method that is well-established in the usability field, even considered to be the best method of evaluation[1]. Yet when it came time for Google to test the market version of their wearable computer, Google Glass, it seems like the usability experts said, “Actually, let’s do the opposite of all that.” They decided to make participants compete to join the test, made them pay over a thousand dollars to purchase the very device they were testing, and publicized the entire research project, called the Google Explorer program.

On the surface, the Google Explorer program seems closer to a guerrilla testing method than to more traditional usability research, in that the goal of the program was to evaluate real-world usage and gather direct feedback from users without the rigidity of a lab test. But even guerrilla testing methods tend to involve a certain degree of control, such as observation by the researcher or predetermined tasks users must perform during a test. Guerrilla usability testing is an emerging research method, and there does not appear to be one single way experts carry it out. However, most approaches still retain the trappings of traditional user testing: a moderator, a single environment, a limited testing time, and note-taking or data gathering by the moderator. A key advantage of guerrilla usability testing is that it is cheap and quick; Unger and Warfel observe that guerrilla testing is ideal “when you are battling against time and money limitations”[2]. The testing for Google Glass turned all this on its head. Google didn’t need to save money: it’s Google. Recruitment doesn’t seem to have been a problem either. They actually got their participants to fight to be selected and then pay Google for the privilege of joining in on the testing. Applicants were tasked with crafting a 50-word application tagged “#ifihadglass” and posting it to Google+ or Twitter. Winners would need to purchase a device for $1,500 and attend a training session (or “special pick-up experience,” in Google’s words) in New York, Los Angeles, or San Francisco, which meant some participants would need to travel out of state to be involved in the testing[3]. Few companies but Google could make such demands on recruits and still have to turn candidates away. The Google Explorer program was clearly not the only research method Google employed during the design of Google Glass: the question is why they decided to do it at all. Time and budget constraints probably aren’t a problem for a company like Google, so what is the advantage of using something like guerrilla usability testing?

The primary purpose of the Google Explorer program was to determine what users would actually do with Glass in real life. It sent participants out into the world seemingly without direction. Google seems to have collected feedback by soliciting stories and content from “the Glass Explorer Community,” an online community where Explorers interacted with each other and with Google directly[4]. We do not know how else Google solicited feedback, whether they asked for it in a specific way (such as questionnaires or diary studies), or how they analyzed the feedback they received. And since Explorers own their Google Glass devices, it is not clear when the testing period is actually over. Does it last as long as the device holds up? Are Explorers under any kind of obligation to report back to Google for a certain period of time? Explorers heavily used public social media accounts to record their experiences with Glass, and Google collected much of that content to put on its website (http://www.google.com/glass/start/explorer-stories/). Using social media to carry out usability evaluations has exciting potential. By asking users to document their feelings and experiences on social media, researchers let users contribute in a more natural way than through a diary form or survey. It has the potential to capture more holistic feedback, encompassing both emotional and pragmatic reactions[1]. However, one would imagine this places a significant burden on the researchers, who must clean up what is likely a huge amount of inconsistent data for analysis. Unlike in many usability tests, Google did not receive feedback only from its participants: the Explorer program also inspired a huge response from outside the community.

The Explorer program was unlike a typical usability test. It took place on a large, public scale, and the very visible testing of Google Glass produced a lot of commentary from both the tech community and outside observers, including Congress. These observers expressed concerns about privacy and etiquette, even coining a new term, “glasshole,” for obnoxious Glass users (one wonders how the widespread use of the word “glasshole” would be discussed in a usability report). Some Google Glass users have experienced violence as a direct result of wearing the product. Mat Honan wore Google Glass for a year and remarks, “People get angry at Glass. They get angry at you for wearing Glass. They talk about you openly. It inspires the most aggressive of passive aggression”[5]. Honan also observed that since the application requirements to test the product were widely known and had a high cost barrier ($1,500), non-users sometimes felt Google Glass and/or its users were privileged or snobbish[5]. However, strong emotional reactions are still valuable to researchers, even when the reactions come from people who coexist with the product rather than use it: Google has been paying attention to the chatter around appropriate codes of conduct. One significant result of the Explorer program related to how users should interact with the world around them when using the product. In February 2014, Google posted an etiquette guide for Glass users (https://sites.google.com/site/glasscomms/glass-explorers), specifically asking users not to be “glassholes” and advising them how to better engage with both the product and the world around them. (Presumably, Google also made other changes to the Glass interface based on the feedback collected from Explorers, but this was not publicized.) For such a groundbreaking product, the Explorer program was an invaluable way for Google to investigate not only usability problems with the product itself, but all aspects of a user’s experience, including the reactions of the people around them.

Is the Google Glass Explorer program a sound usability testing method, or just a PR stunt? Perhaps it is a little bit of both. Some of the content from the Explorers, including photos and videos, has been compiled on the Google Glass site for anyone to view. It’s a great advertisement for Glass, but it also gives important insights into how people might use wearable technology. Putting a new interface or product out in the wild with real users gives a usability expert less control over the test, but it provides real, valuable data on how users will actually interact with the product. Additionally, public testing might help companies understand how non-users perceive their products and what effect that might have on user experience. While usability tests of other products may not command the same amount of publicity as a Google product, the Explorer program is a valuable look at how the combination of social media and in-the-wild testing can provide natural feedback on a product.

Footnotes:

[1] MacDonald, C. M., & Atwood, M. E. (2013). Changing perspectives on evaluation in HCI: Past, present, and future. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’13). ACM, New York, NY, 1969-1978.

[2] Unger, R. & Warfel, T. Z. (2011, February 15). Getting guerrilla with it. UX Magazine. Retrieved from https://uxmag.com/articles/getting-guerrilla-with-it

[3] Ulanoff, L. (2013, February 20). Want Google Glass? Tell Google how you’ll use it. Mashable. Retrieved from http://mashable.com/2013/02/20/get-google-glass/

[4] Google. (n.d.). Explorers. Google Glass. Retrieved from https://sites.google.com/site/glasscomms/glass-explorers

[5] Honan, M. (2013, December 30). I, glasshole: My year with Google Glass. Wired. Retrieved from http://www.wired.com/2013/12/glasshole/