
Okay Glass, let's get started

Anthony Marshall
31 Jul 2014

When you receive an email offering a summer of being paid to play with Google Glass, you eagerly accept. I first tried Glass in awe, knowing it was mine to toy with (insurance permitting). It took a little getting used to, and I realised I had no idea what I was going to make for it, or how. The possibilities for apps on such novel technology seemed endless. The suggestions discussed amongst the team ranged from the impossible to the unprintable, but we eventually agreed on something of potential use: an app for presenters to track their audience's reactions on Twitter, in real time, during a talk. With no way to physically type text, the app needed a way to set a custom hashtag to search Twitter for, as well as the ability to both jump to the latest tweet and scroll back through past ones.

For those who haven't yet experienced Glass, the user interface is a timeline of cards that can be scrolled through, each card containing a different piece of information. System-wide, these could be text messages, photos, emails, or other media; for this app, each card would represent a tweet. There are two main modes of input: voice control and a touchpad mounted on one side of the device. Although the touchpad only registers taps and swipes, it is a surprisingly versatile input method. With these limits in mind, I settled on the overall flow of the app: the user would tap to refresh the Twitter feed, swipe to scroll from tweet to tweet, and speak aloud the hashtag they wanted to search for. We decided to call the app YOLO.
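
To give a flavour of how that flow maps to code, here's a minimal sketch using the GDK's touchpad GestureDetector (which, jumping ahead slightly, is what the finished app was built on). The refreshTweets() helper is a hypothetical stand-in for the app's real logic.

```java
import android.content.Context;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// Wires the Glass touchpad to the app's actions: a tap refreshes the feed,
// while swipes are left to fall through to the card scroller.
public class TouchpadInput {

    private final GestureDetector gestureDetector;

    public TouchpadInput(Context context) {
        gestureDetector = new GestureDetector(context);
        gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                if (gesture == Gesture.TAP) {
                    refreshTweets(); // hypothetical helper
                    return true;
                }
                return false; // let swipes reach the scroller
            }
        });
    }

    // Forward this from the Activity's onGenericMotionEvent().
    public boolean onMotionEvent(MotionEvent event) {
        return gestureDetector.onMotionEvent(event);
    }

    private void refreshTweets() {
        // Re-run the Twitter search and update the cards (omitted).
    }
}
```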

Having decided on the overall design and input methods for the app, I turned to how to actually build it. First, I had to choose between the Mirror API (a web API for sending notification cards to Glass with custom text, images, and HTML) and the GDK, which is used to write native apps for Glass. Our Twitter app seemed perfectly suited to the Mirror API; after all, all it needed to do was send the content of each tweet as a card. So I set off using it.

Interacting with the Mirror API is surprisingly straightforward; Google provides client libraries for a number of languages. Sending content to Glass was easy, and easy to combine with the results of a Twitter search. That, however, is about the limit of the Mirror API. Setting the app up to send multiple tweets required more interaction than it should have, and the user experience couldn't be shaped much beyond what we sent: a lot of functionality that would have provided nice extra touches was missing (keeping the screen on rather than letting it dim after 15 seconds, for example). Having exhausted this API, I moved on to experiment with the GDK.
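
As a rough sketch of what that looked like, assuming the Glass owner's OAuth credential has already been obtained through the usual server-side flow, pushing a tweet to the timeline with Google's Java client library came down to a few lines:

```java
import java.io.IOException;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.NotificationConfig;
import com.google.api.services.mirror.model.TimelineItem;

public class TweetCardSender {

    private final Mirror mirror;

    // 'credential' comes from the server-side OAuth 2.0 flow (not shown).
    public TweetCardSender(Credential credential) {
        mirror = new Mirror.Builder(
                        new NetHttpTransport(), new JacksonFactory(), credential)
                .setApplicationName("YOLO")
                .build();
    }

    // Pushes one tweet to the user's timeline as a plain text card.
    public void sendTweetCard(String screenName, String text) throws IOException {
        TimelineItem item = new TimelineItem()
                .setText("@" + screenName + ": " + text)
                .setNotification(new NotificationConfig().setLevel("DEFAULT"));
        mirror.timeline().insert(item).execute();
    }
}
```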

The GDK offered yet more choices. Native apps for Glass come in two flavours: Live Cards and Immersions. Live Cards are much as the name suggests: like a regular card in the timeline, but one that can be constantly updated. If I hadn't had the requirement of giving the user control over the tweets, this approach would have worked; I could simply have kept fetching tweets on a timer and displaying them. The aim, however, was a more immersive experience, so an Immersion seemed the better fit.
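
For illustration, a minimal sketch of what the Live Card route would have looked like: a background service republishing the latest tweet on a fixed timer, with no way for the user to scroll or search. The layout, menu activity, and tweet-fetching helper here are hypothetical placeholders.

```java
import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.os.Handler;
import android.os.IBinder;
import android.widget.RemoteViews;

import com.google.android.glass.timeline.LiveCard;
import com.google.android.glass.timeline.LiveCard.PublishMode;

// The Live Card approach I decided against: the card updates itself on a
// timer, but the user has no control over which tweet is shown.
public class TweetLiveCardService extends Service {

    private static final long REFRESH_MS = 30 * 1000;

    private LiveCard liveCard;
    private final Handler handler = new Handler();

    private final Runnable refresh = new Runnable() {
        @Override
        public void run() {
            // R.layout.tweet_card and R.id.tweet_text are hypothetical resources.
            RemoteViews views = new RemoteViews(getPackageName(), R.layout.tweet_card);
            views.setTextViewText(R.id.tweet_text, fetchLatestTweetText());
            liveCard.setViews(views);
            handler.postDelayed(this, REFRESH_MS);
        }
    };

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (liveCard == null) {
            liveCard = new LiveCard(this, "yolo");
            // A live card needs an action attached before it is published;
            // MenuActivity is a hypothetical placeholder.
            Intent menuIntent = new Intent(this, MenuActivity.class);
            liveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));
            liveCard.publish(PublishMode.REVEAL);
            handler.post(refresh);
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        handler.removeCallbacks(refresh);
        if (liveCard != null && liveCard.isPublished()) {
            liveCard.unpublish();
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    // Would wrap a Twitter4J search run off the main thread; stubbed here.
    private String fetchLatestTweetText() {
        return "latest tweet goes here";
    }
}
```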

Building the app as an Immersion was relatively straightforward. The first step was extending the built-in card scroller and adapter to handle multiple cards backed by an array, which formed the basis of the app. Using the Twitter4J library for Twitter interaction, I soon had cards populated with the search results for a pre-defined hashtag, quickly fleshed out with extra metadata such as the Twitter user's screen name, the timestamp of the tweet, and an indicator of which tweet was currently being displayed. I then added a menu to let the user set the hashtag, either through voice recognition or from a list of previously used hashtags. All in all, the app was getting there, but it felt like it was missing a final touch.
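
Condensed heavily, the heart of the app was an adapter along these lines. The class and helper names are mine for illustration, but the CardScrollAdapter overrides and Twitter4J calls are the real APIs:

```java
import java.util.List;

import android.content.Context;
import android.view.View;
import android.view.ViewGroup;

import com.google.android.glass.widget.CardBuilder;
import com.google.android.glass.widget.CardScrollAdapter;

import twitter4j.Query;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;

// Maps a list of Twitter4J search results onto scrollable Glass cards.
public class TweetScrollAdapter extends CardScrollAdapter {

    private final Context context;
    private final List<Status> tweets;

    public TweetScrollAdapter(Context context, List<Status> tweets) {
        this.context = context;
        this.tweets = tweets;
    }

    // Runs the hashtag search; must be called off the UI thread, and relies
    // on OAuth keys configured in twitter4j.properties.
    public static List<Status> searchTweets(String hashtag) throws TwitterException {
        Twitter twitter = new TwitterFactory().getInstance();
        return twitter.search(new Query("#" + hashtag)).getTweets();
    }

    @Override
    public int getCount() {
        return tweets.size();
    }

    @Override
    public Object getItem(int position) {
        return tweets.get(position);
    }

    @Override
    public int getPosition(Object item) {
        return tweets.indexOf(item);
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        Status tweet = tweets.get(position);
        return new CardBuilder(context, CardBuilder.Layout.TEXT)
                .setText(tweet.getText())
                .setFootnote("@" + tweet.getUser().getScreenName()
                        + "  " + (position + 1) + "/" + tweets.size())
                .setTimestamp(tweet.getCreatedAt().toString())
                .getView();
    }
}
```

Hooking this up is then just a matter of handing the adapter to a CardScrollView, calling its activate() method, and setting it as the activity's content view.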

One particularly cool piece of functionality built into Glass is the ability to take a photo when you wink, which is much easier when you've got your hands full. I wasn't so much interested in taking photos as in the implication: Glass can detect eye gestures. Although using the touchpad to swipe through tweets isn't too intrusive, it's still a level of interaction that eye gestures could make more efficient. Eye gestures weren't officially documented as part of the GDK, but the underlying methods were accessible with a little investigation. I wired up double-blinking so that, depending on which tweet you were on, it would either jump to the latest tweet or refresh the Twitter feed. Eye gesture detection is, however, incredibly unreliable, which is probably why it remains undocumented. That said, I hope Google will make it accessible in time, as it opens up a whole new way of interacting with Glass.
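
Because the eye gesture API was never public and shifted between firmware releases, the sketch below shows only the reflection-based probing involved. The class name matches the community stubs of the time; everything else should be treated as an assumption rather than a stable API.

```java
import java.lang.reflect.Method;

import android.content.Context;
import android.util.Log;

// Probes the hidden eye gesture manager via reflection. Nothing here is a
// public or stable API; it merely reveals what a given firmware build exposes.
public final class EyeGestureProbe {

    private static final String TAG = "EyeGestureProbe";

    public static void dumpEyeGestureApi(Context context) {
        try {
            Class<?> managerClass =
                    Class.forName("com.google.android.glass.eye.EyeGestureManager");
            // Hidden Glass managers followed the usual static from(Context) pattern.
            Object manager = managerClass
                    .getMethod("from", Context.class)
                    .invoke(null, context);
            Log.d(TAG, "Got manager: " + manager);
            // List what this firmware build actually supports before trying
            // to enable a detector or register a listener against it.
            for (Method method : managerClass.getDeclaredMethods()) {
                Log.d(TAG, method.toString());
            }
        } catch (Exception e) {
            Log.w(TAG, "Eye gesture API not available on this build", e);
        }
    }
}
```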

Using (and developing for) Glass is in equal parts fascinating and frustrating. It will randomly shut down and might spontaneously take blurry photos of your desk because it thought you winked. But when Glass works, whether it's scrolling through auto-generated cards or nailing the speech recognition, it's awesome. It has great potential, and that potential is growing as both Glassware and Glass itself become more publicly accessible. Getting Glassware into MyGlass is practically impossible at the moment, but hopefully YOLO (for Glass) will eventually find its way out into the world, albeit with a better name.

Screenshots: the first page; listening for a hashtag; the hashtag displayed; swiping through tweets.

Featured image: Google Glass Presentation Photo, licensed under CC BY-SA 3.0 via Wikimedia Commons.
