Dérive is an interactive virtual guestbook targeted at travellers in spaces such as airports and hotels. Using Leap Motion, travellers can sign their names in the air and indicate their origins and destinations; a unique star is then generated from features of their signature and placed in a virtual universe alongside other travellers' stars. They can explore the universe to see other people's comments pop up on each star, and contribute their own tips or feelings by scanning a QR code and typing on their smartphones. A grab gesture transitions the universe view to a globe view, where travellers can see where their fellow travellers come from and where they are going.
Dérive is built with Leap.js, WebGL, Three.js, Express, MongoDB, and Socket.io. A live demo can be found here.
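To make the data flow concrete, here is a minimal sketch (not the actual Dérive code) of how the listed pieces could fit together: Express serves the projection page and the phone web app, and Socket.io pushes newly submitted comments to the projection in real time. The route names, event names, and in-memory store are illustrative assumptions; MongoDB persistence is omitted for brevity.

```js
// Minimal server sketch (not the actual Dérive code): Express serves the
// projection page and the phone web app, Socket.io pushes new comments to
// the projection in real time. Route and event names are assumptions, and
// MongoDB persistence is replaced with an in-memory object for brevity.
var express  = require('express');
var http     = require('http');
var socketio = require('socket.io');

var app    = express();
var server = http.createServer(app);
var io     = socketio(server);

app.use(express.json());
app.use(express.static('public'));   // projection page + phone web app

var stars = {};   // starId -> { features, origin, comments: [] }

// The phone web app (reached via the QR code) posts a comment to a star.
app.post('/stars/:id/comments', function (req, res) {
  var star = stars[req.params.id];
  if (!star) return res.sendStatus(404);
  star.comments.push(req.body.text);
  // The projection listens for 'comment' and pops the text up on that star.
  io.emit('comment', { starId: req.params.id, text: req.body.text });
  res.sendStatus(204);
});

server.listen(3000);
```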
We decided to generalize from some of the more specific needs we saw in the context of hotels and shared public spaces to the needs of travelers, whether in foreign or local situations. We felt this need ourselves during the needfinding activity: it was difficult to navigate a large city we were only somewhat familiar with, even with smartphone navigation to aid us and specific goals in mind (a list of hotels to visit). What we realized is that just as calculators have let our mental math abilities lapse, mobile maps and GPS have allowed us to neglect developing our mental maps and spatial awareness of anything beyond our immediate surroundings. These tools can fail if your battery runs out or the wifi cuts out and you don't have data, and even with these tools, a better cognitive model would help us use them to full effect. Moreover, when planning packed itineraries it would help not only to plan efficiently, but to practice, so that in reality we could be as efficient as we plan. Even in more local settings, this ability to understand and remember pathways in an intuitive and physical way could have applications: parents could rehearse walking paths with their children and “test” them to make sure they remember, for instance, so that both parties feel more confident that no one will get lost.
To summarize, the needs we seek to address:
For this prototype, we narrowed back to our initial insights that travelers (especially business travelers) need to be reminded of the city around them and what it offers, and that they need motivation to get out of their hotels and explore. We returned to this because, after discussions among ourselves and with the coach and TAs, we felt that the general need for a cognitive understanding of maps was too vague and didn't lead us to more interesting and useful ideas.
Though exploring a map on the floor might be the most intuitive approach, it's not the only option, and it raises some feasibility problems. So for this prototype we project the map on a wall and use gestures to interact and explore. Basically, it allows users to point to places of interest and draw a possible path; it then generates a suggested route and pops up related cultural or historical facts for each place. Users can export the route information to their phone and go explore the city.
Here's our sketch for this idea.
Here's our video prototype.
When a hotel customer passes a wall in the hotel lobby, she notices an RFID activator. She wonders what it is and pulls out her phone to activate it. A map of San Francisco then appears on the wall. The text instruction "Point" indicates that she can point to places of interest; a dot label is added to each place she points at. She gives a thumbs-up to indicate that she's done pointing. Then "Path" appears, indicating that she can draw a path linking these places. Once the path is complete, "Explore?" appears, asking whether she wants to explore this path. Again she gives a thumbs-up to confirm, and a suggested route is drawn on top of the map. She opens her hand to show that she's interested in more information about the places, and boxes of cultural and historical references pop up as requested. Then "Export on phone?" asks whether she would like to export everything to her phone so she can reference it on the go. She gives a thumbs-up and soon receives the detailed route information on her phone. She happily goes off to explore the city. The map and everything else gradually fade away, leaving only the RFID activator waiting for the next interested passer-by.
Here are some other ideas we're considering. One is keeping a dot random-walking on the map, with gestures used to adjust its speed and make it diverge; each time the dot passes a place of interest, some text information pops up. Another is tracking the actual travel path with the user's smartphone, if the user consents. Once users come back, they can choose whether to contribute their paths to our map, so over time we can generate the most-traveled paths of travelers in this hotel.
For this prototype, we continued with our cognitive map exploring idea.
As we were not all that excited about our previous ideas, we went back to our observations and found that the need to be present but anonymous is subtle but compelling. Then the idea of Stranger City popped up: in the airport, as travelers are waiting or wandering around, they can trace their signature in the air, which is captured by the Leap Motion to generate a personalized building in a virtual city. We felt that this idea addresses the need for engagement or entertainment in a large space while waiting, the need for both presence and anonymity, and the need to connect with strangers in a subtle way. We are very excited about this idea; it's something we really want to play with.
We then proceeded to specify the idea in more detail, at which point many possible features and design decisions started to emerge. For the wizard of oz testing, we didn't actually make two different interface prototypes. Instead, we made one PSD file including all the features and design alternatives we'd considered, and tested these variables on each individual tester. The interactions were mimicked by manually toggling the visibility of layers according to the tester's actions. A brief walkthrough of the wizard of oz testing follows. (The complete PSD file is available here.)
When a traveler approaches the app, he/she is asked to sign in the air above the Leap Motion sensor. Here we have the choice of whether to show the signature traces or not. Considering privacy and security issues, we decided to only show a dot following the user's fingertip.
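As a rough illustration of this decision (the wizard of oz round itself used only Photoshop layers), a LeapJS front end could render just the fingertip dot like this; the canvas id and dot size are assumptions:

```js
// Hypothetical LeapJS front end: clear the canvas every frame and draw only
// a dot at the fingertip, so no signature trace persists on screen.
var canvas = document.getElementById('signature');   // assumed canvas id
var ctx = canvas.getContext('2d');

Leap.loop(function (frame) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);   // no persistent trace
  if (frame.pointables.length === 0) return;

  // Map the fingertip into canvas coordinates via the interaction box's
  // normalized [0, 1] space.
  var tip = frame.pointables[0].tipPosition;
  var n = frame.interactionBox.normalizePoint(tip, true);
  var x = n[0] * canvas.width;
  var y = (1 - n[1]) * canvas.height;   // flip y: Leap's y axis points up

  ctx.beginPath();
  ctx.arc(x, y, 5, 0, 2 * Math.PI);
  ctx.fill();
});
```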
Then a unique building is generated for the user based on the characteristics of the input signature. Here, we have a choice of whether or not to display how the building is generated (the mappings between features of the building and features of the signature).
The user is then asked to say where they come from (country/state). We also considered other input methods, such as pointing on a world map or globe. After the input, the building is automatically placed in the city where the airport is located (e.g. San Francisco if the airport is SFO). The map shown here is real, except that the buildings are virtual and each represents an individual traveler. There could be options for displaying maps of other airports. If two travelers are from similar places, there could be some form of linkage (e.g. a bridge) between the two buildings.
As for where the building is placed, we have several alternatives: a purely fictional city, the city the airport is in (shown here), the user's hometown, a city chosen by the user, etc. A small GIF indication in the corner (not shown here; in the actual wizard of oz testing, one of the team members played the role of the GIF) shows that the user can perform a "grab" gesture to transform the plain view into the globe view, where each building is located where the corresponding traveler actually comes from. (To better illustrate how this transformation happens, besides Photoshop we used an orange peel to show the transition.) The idea behind this design is that in one view the travelers, coming from all over the world, seem far apart from each other, but in the other view they are in the same city because they have all been to the same airport, which creates subtle connections between stranger travelers.
In the globe view, there's also a GIF indicating that the user can perform a "flatten" gesture to go back to the plain city view.
When no one is interacting with the app, one possible design is that the buildings can pop up random conversations about the actual neighborhoods in the city, which is consistent with the choice that the building is placed in the city where the airport is located.
Another optional feature is that the weather over the city can be changed with different gestures: rain, snow, sun, or clouds.
We are also unsure whether the building should fade away after some time, and what travelers would like to do with their buildings after they leave the airport. Our suggestions include fading the building based on how long the traveler will be traveling, virtually moving the building to the next airport the traveler arrives at, and providing a way for users to monitor and interact with their building on their phone after they leave.
Tester #1
Tester #2
Tester #3
Tester #4
Tester #5
After the wizard of oz testing, we reflected on the issues that surfaced and considered possible plans. For some of them, we may need to do more observation and needfinding in airports.
Dérive: an unplanned journey through a landscape, usually urban, on which the subtle aesthetic contours of the surrounding architecture and geography subconsciously direct the travellers, with the ultimate goal of encountering an entirely new and authentic experience.
Our app introduces the user to the guestbook with this intro:
A guestbook for exploring the city * you're a star no matter where you are * gesture sign your signature * and join the rest.
The design process let us focus and narrow our ideas to combining the notion of exploration, the real functionality of a map on which to introduce random walks, and guest signatures that leave some permanence in a temporal setting. In adapting the metaphor of the guestbook, we moved from visually mapping signature features to a structure, then to flying objects, and finally to stars, which are less cognitively confusing in the context of a real city and less kitsch (a common pitfall of large installation-type apps).
Design Space
Contexts related to Travel (Hotels, Airports, Tourist Agencies).
User Need
Express presence while maintaining anonymity, gain a notion of others around them, and find encouragement to explore strange places.
Significance
Durability - using gestural input in the space of art installations calls into question a currently limiting factor of touch interfaces in multi-user environments: their inability to function properly after longer periods of use.
Chaos - developing interactions meant for multicultural and chaotic environments, where users can be from anywhere, be any age, and have very different agendas.
Input
Signature - the Leap tracks the signature and writes it to a canvas for feature analysis. The front end shows just a dot as the user signs, so they know where their finger is. Internally, the feature set could include the amount of time taken to input the signature, the width of the signature in the x direction, the height in the y direction, and the aspect ratio; potential additional features are hand size and how cursive, slanted, or top-heavy the signature is. (A feature-extraction sketch follows this list.)
Place of Origin - voice or type or TEXT.
Journey / Travel Time - Planned obsolescence. Maybe stars should just fade to super dull if never commented on.
Comments - Take in comments that later get sent to stars.
Gestures - Map navigation.
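A minimal sketch of the feature analysis described under "Signature" above. It assumes the signature is recorded as an array of { x, y, t } samples (canvas coordinates plus a timestamp in milliseconds); the exact record format is an assumption, not the shipped code.

```js
// Sketch of the feature analysis: the signature is assumed to be recorded
// as an array of { x, y, t } samples (canvas coordinates, timestamp in ms).
function signatureFeatures(samples) {
  var xs = samples.map(function (p) { return p.x; });
  var ys = samples.map(function (p) { return p.y; });

  var width  = Math.max.apply(null, xs) - Math.min.apply(null, xs);
  var height = Math.max.apply(null, ys) - Math.min.apply(null, ys);

  return {
    duration:    samples[samples.length - 1].t - samples[0].t, // time to sign
    width:       width,                                        // extent in x
    height:      height,                                       // extent in y
    aspectRatio: height === 0 ? 0 : width / height
  };
}
```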
Star Structure
Generation - Generate and draw the star based on the passed feature set. Attach labels so that users can see where their new stars are. Stars have different colors. Maybe tag stars with location. Possibly the user can add final touches to the star. (A generation sketch follows below.)
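For illustration, here is a hedged Three.js sketch of how a feature set could be mapped to a star; the particular mappings (duration to size, aspect ratio to hue) and the random placement are assumptions, not the mappings used in the live demo.

```js
// Illustrative mapping from a signature feature set to a Three.js star.
// The choices below (duration -> size, aspect ratio -> hue, random position)
// are assumptions for the sketch, not the mappings in the live demo.
function makeStar(features) {
  var size = 1 + Math.min(features.duration / 5000, 3);  // longer signing, bigger star
  var hue  = Math.min(features.aspectRatio / 4, 1);      // aspect ratio picks the color

  var star = new THREE.Mesh(
    new THREE.SphereGeometry(size, 16, 16),
    new THREE.MeshBasicMaterial({ color: new THREE.Color().setHSL(hue, 0.8, 0.6) })
  );

  // Scatter the star somewhere in the virtual universe.
  star.position.set(
    (Math.random() - 0.5) * 200,
    (Math.random() - 0.5) * 200,
    (Math.random() - 0.5) * 200
  );
  return star;
}

// scene.add(makeStar(signatureFeatures(samples)));
```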
Interaction
Exploration - Notion of context and the user's location on the map. Weather changes. Highlight local vs. global. A gesture that takes the user from the plane view to the globe view (see the sketch after this list).
Ambient Display - Comments from stars create an ambient city for passers-by.
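A minimal LeapJS sketch of the grab gesture mentioned under "Exploration" that switches between the plane view and the globe view; the 0.8/0.3 thresholds and the toggleGlobeView() callback are assumptions.

```js
// Sketch of detecting the plane-to-globe grab with LeapJS. The 0.8/0.3
// thresholds and toggleGlobeView() are assumptions.
var grabbed = false;

Leap.loop(function (frame) {
  if (frame.hands.length === 0) return;
  var strength = frame.hands[0].grabStrength;   // 0 = open hand, 1 = fist

  if (!grabbed && strength > 0.8) {
    grabbed = true;
    toggleGlobeView();            // hypothetical scene transition
  } else if (grabbed && strength < 0.3) {
    grabbed = false;              // hand reopened, ready for the next grab
  }
});
```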
Technology
Minimal prototype I
Implementation to-dos
Motivation
Our project is motivated by our impetus to emphasize the forgotten experience of "encounters," unplanned journeys, and random walks, and to address the need of travelers to express presence while maintaining anonymity. This questions the suppositions that itinerants or visitors lack ambient awareness of how others live in their current environment, and that filling this need suffices to encourage exploration by diminishing the action biases of exploring new places (by introducing opportunities for spontaneous decision making). The project chooses to do so through the tradition of guestbooks, which preserve the history of a place; from this we are left to build on people's understanding of guestbooks and extend their capabilities. In short, Dérive is a re-imagining of the guestbook that tries to tie exploratory commentary "in time" rather than "after the fact." We realized that part of the exciting tradition of guestbooks is being able to peruse other travelers' comments without their cognizance. Taking this tradition of leaving a statement, we translated the idea into being able to "text" (via the web app) to your digital signature (star) while you are exploring the city, letting those at the source of the projection see travelers' comments and begin their own journeys.
Hypotheses
From this motivation and mulling we can extract this subset of hypotheses:
Driving Questions
Critical Tasks
Data
User Recruitment**
We plan to A/B test on other HCI students and non-CS** friends before testing with real, impromptu visitors.
Then we will go to the Stanford Visitor Center to recruit random campus tourists, setting up there and occupying their waiting time.
*Users are all types of travelers with all types of agendas.
**It's important that most recruited users not be pre-planned, since a guestbook is encountered spontaneously in traveler settings such as the visitor center (or airports/hotels).
***Those who may be considered traditional purists and abhor technology, to test their understanding of the guestbook metaphor extension.
On Mar. 7th and Mar. 8th, we did user testing with 10 representative users.
User 1 (Psych Major)
User 2 (Bio Major)
User 3 (Visitor Center Desk Assistant, CEE Masters)
User 4 (Astronomy/SLAC PhD)
User 5 (New Student Visiting from NY with Mother)
User 6 (Businesswoman from San Francisco)
User 7 (Director of Architectural Design Department)
User 8 (EE Freshman, Tech Geek)
User 9 (Entrepreneur at Bytes Cafe from San Diego)
User 10 (CS PhD from South Africa)
General Conclusion
One thing we tested in this round of studies was the kind of instruction, or lack thereof, a user needs to learn the set of gestures. We recognized the newness of the input style and decided to first see how users react during the "discovery" process. When verbally instructed on the gestures we recognize for a given scene, users applied their own "understanding" of what those instructions mapped to. We saw the most variety in interpretation when we asked users to "place" themselves on a map. In this scene they are asked to bring in a second hand to "lock" their choice, and where they placed that hand caused it not to work 60% of the time (6/10 users).
When a user got tired of not getting a gesture right, we would use our nifty keyboard shortcuts to bypass that "scene/stage/step" in the app.
As noted above, our system did not present any gesture instructions (we do have animated GIFs for this), and as a result the testing process was much longer. From this we discovered that users, regardless of instruction, just start making things up when the system doesn't respond quickly enough.
We also did not include the web app/comment layer, though part of it is implemented. We discovered that even without this layer, users wanted it.
From the interaction time, and how quickly users dismissed the gestures as completely broken when they didn't respond, we determined that the system really has to be more robust to keep up with their learning curve, which was pointed out to us as being steep but quick.
Users' tendency not to know their angle of interaction, or where the Leap "picks up" their hand, suggests a need to point out the space they can move their hand in. Does this mean we put a glass box around the Leap up to a certain height? Do we tape a rectangle on the back wall? Do we limit the amount of space the hand can move in front of the Leap on the y-axis?
We also discovered that the user's frame of reference is actually the Leap itself, as it sits facing up at them. Since they expect this to be the mapping, we are considering changing the zoom gesture to be moving the hand up and down rather than backwards and forwards.
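A hedged sketch of the change under consideration: mapping the hand's height above the Leap (palm y) to zoom instead of forward/backward motion. The height range and the setZoom() callback are hypothetical.

```js
// Sketch of the proposed mapping: palm height above the Leap drives zoom.
// The 100-400 mm range and setZoom() are hypothetical.
var MIN_Y = 100, MAX_Y = 400;   // mm above the device, roughly its sweet spot

Leap.loop(function (frame) {
  if (frame.hands.length === 0) return;
  var y = frame.hands[0].palmPosition[1];                        // height in mm
  var t = Math.min(Math.max((y - MIN_Y) / (MAX_Y - MIN_Y), 0), 1);
  setZoom(1 + t * 4);             // hypothetical: zoom factor from 1x to 5x
});
```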
Users tended to keep rewriting their signature. We had decided that, for security reasons, drawing out the entire signature needed to be replaced with just a momentary trace. Users took this as the signature "not registering" and kept writing, so the program wouldn't continue in a timely manner. This leads us to consider adding the persistent line drawing back.
Users consistently had issues finding themselves in the placing stage; beyond that, they had trouble getting the cursor to where they were from and were frustrated at having to settle for somewhere nearby. This leads us to consider changing the relative mapping between finger position and the map, and doubling the size of the cursor.
We noticed that when the user was not alone, the backseat driver helped troubleshoot usage errors very quickly.
Our more advanced users (mostly from CS) asked how the star was related to the signature and how the stars mapped onto the globe. These relationships were lost amid procedural animation complexity and the timeouts in the JS files that handle the work between events. In prototype two we hope to add indicative stages that show how the stars transition to the globe view, and also illustrate how the signature is mapped to features of the star.
Based on the valuable feedback from the user testing, we compiled a list of changes, mostly about gestures, as gestures are essential and received the most requests for improvement. We also realized that we need to think about how to better guide users through the system and how to better convey the concept. Here are the detailed changes we came up with and their implementation status.
Changes on gestures:
Other changes:
As our system is relatively large and complicated, some parts were not yet fully implemented before user testing. Here are some progress updates: