LIGHTHOUSE

  • Victoria Flores - Light - Reaching to the moon.
  • Lilith Wu - Control - Keeping it down to earth.
  • Ningxia Zhang - Staircase - Putting it all together.
  • Bell Wang - Foundation - Make it work!

Dérive is an interactive virtual guestbook aimed at travellers in spaces such as airports and hotels. Using Leap Motion technology, travellers can sign their names in the air and indicate their origins and destinations; a unique star, generated from the features of each signature, then joins the other travellers' stars in a virtual universe. Travellers can explore the universe to see other people's comments pop up on each star, and contribute their own tips or feelings by scanning a QR code and typing on their smartphones. A globe gesture transitions the universe view into a globe view, where travellers can see where their fellow travellers come from and where they are going.

Dérive is made with the help of Leap.js, WebGL, Three.js, Express, MongoDB and Socket.io. A live demo can be found here.
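
For the curious, here is a minimal sketch of how these pieces might fit together on the server side. It is an illustration rather than our production code: the 'star:new' event name and the public/ directory are assumptions.

    const express = require('express');
    const app = express();
    const server = require('http').createServer(app);
    const io = require('socket.io')(server);

    app.use(express.static('public')); // the Leap.js + Three.js client

    io.on('connection', function (socket) {
      // A kiosk emits 'star:new' once a signature has been analyzed;
      // rebroadcast it so every other display adds the star in real time.
      socket.on('star:new', function (star) {
        socket.broadcast.emit('star:new', star);
      });
    });

    server.listen(3000);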

Introducing Dérive

User Need

We decided to generalize from some of the more specific needs we saw in the context of hotels and shared public spaces to the needs of travelers, whether in foreign or local situations. We felt a need ourselves during the needfinding activity: it was difficult to navigate a large city we were only somewhat familiar with, even with smartphone navigation to aid us and specific goals in mind (a list of hotels to visit). What we realized was that just as calculators have allowed our mental math abilities to lapse, mobile maps and GPS have allowed us to neglect developing our mental maps and spatial awareness of anything beyond our immediate surroundings. These tools can fail if you run out of battery or if wifi cuts out and you don't have data, and even with these tools, a better cognitive model would help us use them to their full effect. Moreover, in planning packed itineraries it would help not only to plan efficiently, but to practice, so that in actuality we could be as efficient as we plan. Even in more local settings, the ability to understand and remember pathways in an intuitive and physical way could have applications: parents could rehearse path walking with their children and "test" their kids to make sure they remember the routes, so that both parties feel more confident the children won't get lost.

To summarize, the needs we seek to address:

  • need to feel situated and present, not overwhelmed by surroundings
  • need to regain cognitive model of physical surroundings
  • need to feel confident and safer walking through unfamiliar areas
  • need to project assuredness walking through less safe areas
  • need to practice pathways
  • need to plan in advance in a more visual, tactile way
  • need to feel confident in others’ knowledge of paths (parents/guardians who need to let children travel alone. E.g. walking to/from school)

For this prototype, we narrowed back to our initial insights: travelers (especially business travelers) need to be reminded of the city around them and what it offers, and they need motivation to get out of their hotels and explore. We came back to this because, after discussions among ourselves and with the coach and TAs, we felt that the general need for a cognitive understanding of maps was too vague and didn't lead us to more interesting and useful ideas.

Though exploring maps on the floor might be the most intuitive option, it's not the only one, and it could involve some feasibility problems. So for this prototype we project the map on a wall, and gestures can be used to interact and explore. Basically, it allows users to point to places of interest and draw a possible path; the system then generates a suggested route and pops up related cultural or historical facts for each place. Users can export the route information to their phones and go on to explore the city.

Here's our sketch for this idea.

Here's our video prototype.

When a hotel customer passes a wall in the hotel lobby, she notices an RFID activator. She wonders what it is and pulls out her phone to activate it. A map of San Francisco then appears on the wall. The text instruction "Point" indicates that she can point to places of interest; a dot label is added to each place she points at. She gives a thumbs-up to signal that she's done pointing. Then "Path" appears, indicating that she can draw a path linking these places. Once the path is complete, "Explore?" appears, asking whether she wants to explore this path. Again she gives a thumbs-up to confirm, and a suggested route is drawn on top of the map. She opens her hand to show that she's interested in more information about the places, and boxes of cultural/historical references pop up as requested. Then "Export on phone?" asks whether she would like to export everything to her phone so she can reference it on the go. She gives a thumbs-up and soon receives the detailed route information on her phone. She happily goes off to explore the city. The map and everything else fade away gradually, leaving only the RFID activator waiting for the next interested passer-by.

Here are some other ideas we're considering. One is to keep a dot random-walking on the map, with gestures to adjust its speed and divert its course; each time the dot passes a place of interest, some text information pops up. Another idea is tracking the user's actual travel path with their smartphone, given consent. Once the user comes back, he/she can choose whether to contribute the path to our map, so over time we can generate the most-traveled paths of travelers in this hotel.

Prototype I

Prototype II

For this prototype, we continued with our cognitive map exploring idea.

A New Idea of Stranger City

As we were not all that excited about our previous ideas, we went back to our observations and found that the need to be present but anonymous is subtle but compelling. Then the idea of Stranger City emerged: in the airport, as travelers wait or wander around, they can trace their signature in the air, which is captured by Leap Motion and used to generate a personalized building in a virtual city. We felt this idea addresses the need for engagement or entertainment in a large space while waiting, the need for both presence and anonymity, and the need to connect with strangers in a subtle way. We are very excited about this idea; it's something we really want to play with.

Then we proceeded to specify the idea in more detail, at which point many possible features and necessary design decisions started to emerge. For the Wizard of Oz testing, we didn't actually make two different interface prototypes. Instead, we made one PSD file including all the features and design alternatives we'd considered, and tested these variables on each individual tester. The interactions were mimicked by manually toggling the visibility of layers according to testers' actions. A brief walkthrough of the Wizard of Oz testing follows. (The complete PSD file is available here.)

When a traveler approaches the app, he/she is asked to sign in the air above the Leap Motion sensor. Here we have the choice of whether to show the signature traces or not. Considering privacy and security issues, we decided to only show a dot following the user's fingertip.

Then a unique building is generated based on the characteristics of the input signature for the user. Here, we have a choice of whether to display how the building is generated (the mappings between features of the building and features of the signature) or not.

The user is then asked to say where they come from (country/state). We've also considered other input methods, such as pointing on a global map or globe. After the input, the building is automatically placed in the city where the airport is located (e.g. San Francisco if the airport is SFO). The map shown here is real, except that the buildings virtually represent individual travelers. There could be options to display maps of other airports. If two travelers are from similar places, there could be some form of linkage (e.g. a bridge) between the two buildings.

As for where the building is placed, we have several alternatives: a purely fictional city, the city the airport is in (shown here), the user's hometown, a city chosen by the user, etc. A small GIF indication will be shown in the corner (not shown here; in the actual Wizard of Oz testing, one of the team members played the role of the GIF) showing that the user can perform a "grab" gesture to transform the plain view into the globe view, where each building is located where the corresponding traveler actually comes from. (To better illustrate how this transformation happens, besides Photoshop we used an orange peel to show the transition.) The idea behind this design is that in one view travelers are from all over the world and seem far apart from each other, but in the other view they are in the same city, as they have all been to the same airport, which creates subtle connections between stranger travelers.

In the globe view, there's also a GIF indicating that the user can perform a "flatten" gesture to go back to the plain city view.

When no one is interacting with the app, one possible design is that the buildings can pop up random conversations about the actual neighborhoods in the city, which is consistent with the choice that the building is placed in the city where the airport is located.

Another optional feature is that the weather of the city can be changed with different gestures, cycling through rain, snow, sun and clouds.

We are also unsure whether the building should fade away after some time, and what travelers would like to do with their buildings after they leave the airport. Our suggestions include fading the building depending on how long the traveler will be traveling, virtually moving the building to the next airport the traveler arrives at, and providing a way for users to monitor and interact with their buildings on the phone after they leave.

Wizard of Oz Results

Tester #1

  • Needed quicker introduction of "what to do", not whole paragraphs
  • Would want to see all the features pointing to everything
  • Wanted to maybe take his building with him, but maybe just a picture is enough
  • Liked the idea of the building disappearing
  • Liked the idea of it being the same building; for every airport he visits it shows up again
  • Not super interested in changing his building
  • Tried to click on other people's buildings; would like to see who they belonged to but wouldn't want to share that information
  • Would want it in San Francisco not some fantasy land
  • Remarked the sphere as really cool
  • No sense of what gestures can be done
  • Loved the idea of weather
  • Needed clarification on passers-by/strangers
  • Loved to look at other airports
  • Wanted to walk around/through his building

Tester #2

  • When the building was generated (without tagging the features), was very confused about how it came about
  • When the tagging is shown, felt much clearer, but showed a strong desire to customize the building. Argued that the building should be co-authored
  • When asked to input the hometown information, responded immediately by speaking "San Diego" (not a state/country as requested)
  • When the city map was shown with buildings on it, was confused why, after inputting "San Diego", a map of San Francisco appeared
  • After we explained that as the background was set to be SFO, buildings were all set in SF, still felt unconvinced
  • Wanted to know where exactly his building was and why it was there, wondered if he could change that
  • Felt urged to zoom in/out the map, and point on different buildings
  • When seeing the GIF, imitated the gesture indicated by it
  • Was very confused by the transformation from the plain city view to the globe view, didn't figure out what the globe meant
  • Suggested that a smoother and slower transition would convey the globe / plain views better
  • Remarked that the idea was awesome and that he would very much like to play with it when implemented
  • Would like to take the building with him on the phone

Tester #3

  • When asked to input the signature, was a bit confused and hesitant, didn't quite get what the arrow indicated
  • When the building was generated, was confused why and how it came about
  • When the map, and then the globe transformation was shown, had almost no idea what the app was doing
  • After we explained the basic idea, realized that this was like a virtual guest book in the form of virtual cities
  • Explored the plain view, globe view, ambient city and weather change, remarked it as entertaining
  • Suggested that there should be some purpose in the design and that we should do more contextual research in airports

Tester #4

  • Remarked the interface as pretty
  • When the building was generated, was a bit confused why it appeared to be like this, even after seeing the feature tags
  • Wanted to customize the building, change the color, shape etc
  • After the building was placed on the map, wanted to zoom in to see her own building
  • Wanted to point at and inspect other people's buildings, wondering whom those buildings represented
  • When seeing the GIF, imitated the gesture indicated by it
  • Didn't quite get the purpose of globe view and plain city view
  • Was surprised by the gesture-controlled weather changes, remarked it as very interesting

Tester #5

  • Didn't sign his name at first (drew an arbitrary shape instead); after we explained, was confused why he had to sign his name
  • Was confused why there was no feedback on the screen of his signing
  • When the building was generated, wondered why it was like that
  • When the feature tags were revealed, was even more confused about what those tags meant
  • When asked to input hometown, responded by speaking "China", after the map of SF was shown, was confused why it was SF not China
  • Wanted to point to the exact place where he wanted the building to be
  • Felt urged to inspect some of the buildings, and zoom in/out to interact with the map
  • Wanted to take the building with him and wanted it to be permanent there so that he could tell other friends that he had a building in SFO
  • Would like to see other airports
  • Saw the GIF and did what the GIF indicated
  • Was very confused by the transformation and globe view. Argued that buildings couldn't be moved once built and didn't see the purpose of the globe view
  • Liked that the buildings were in SF and would like the building's name to be the actual name of the building at that position
  • Didn't like the building talking
  • Liked the gesture-controlled weather changes, remarked it as very interesting and fun
  • Would like to see connections between his building and strangers'

Summary and Plans

After the Wizard of Oz testing, we reflected on the issues that surfaced and considered possible plans. For some of them, we may need to do more observation and needfinding in airports.

  • Need to resolve the confusion of building generation and leave space for customization (e.g. colors and textures)
  • Is hometown information useful? Or should other information be used as well? (flight number by scanning boarding passes perhaps?)
  • Whether or not to preserve the two views? If so, should the globe view be shown first, since the user has just input their hometown information?
  • Users seem to prefer placing the building on the map of the city where the airport is located
  • How should the maps at different airports be related?
  • How to interact with the map needs to be specified, such as zoom-in/out, 3D rotating, what information is shown for each building
  • Need clearer instructions on gestures
  • Should preserve the weather changing feature
  • Would people like the building to be permanent or transient? What's the motivation of constructing the building?

Wizard of Oz

Functional Prototype I

Dérive: an unplanned journey through a landscape, usually urban, in which the subtle aesthetic contours of the surrounding architecture and geography subconsciously direct the travellers, with the ultimate goal of encountering an entirely new and authentic experience.

Our app introduces the user to the guestbook with this intro:

A guestbook for exploring the city * you're a star no matter where you are * gesture sign your signature * and join the rest.

The design process let us focus and narrow our ideas strictly to combining the notion of exploration, the real functionality of a map to introduce random walks on, and guest signatures that leave some permanence in a temporal setting. In adapting the metaphor of the guestbook, we moved from visually mapping signature features to a building, then to flying objects, and finally to stars, as stars are less cognitively confusing in the context of a real city and less kitsch (a common pitfall of large installation-type apps).

Design Space

Contexts related to Travel (Hotels, Airports, Tourist Agencies).

User Need

Express presence while maintaining anonymity; gain a notion of others around them; encourage exploration of strange places.

Significance

Durability - using gestural input in the space of art installations sidesteps a currently limiting factor of touch interfaces in multi-user environments: their ability to keep functioning properly over long periods of use.

Chaos - developing interactions meant for multicultural and chaotic environments: users can be from anywhere, be any age, and have very different agendas.

Input

Signature - The Leap tracks the signature and writes it to a canvas for feature analysis. The front end shows just a dot while the user signs, so they know where their finger is. Internally, the feature set could include: time taken to input the signature, width of the signature in the x direction, height in the y direction, and aspect ratio. Potential additional features: hand size, how cursive the signature is, its slant, and whether it is top-heavy.
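
To make the feature analysis concrete, here is a sketch of how such features might be computed. It assumes the Leap loop has already collected fingertip samples into a trace array of {x, y, t} points; the function names and the curviness measure are illustrative, not final code.

    // Compute signature features from collected fingertip samples.
    function signatureFeatures(trace) {
      const xs = trace.map((p) => p.x);
      const ys = trace.map((p) => p.y);
      const width = Math.max(...xs) - Math.min(...xs);   // extent in x
      const height = Math.max(...ys) - Math.min(...ys);  // extent in y
      return {
        duration: trace[trace.length - 1].t - trace[0].t, // time to sign (ms)
        width,
        height,
        aspectRatio: width / height,
        // "how cursive": total stroke length relative to bounding-box size
        curviness: pathLength(trace) / (width + height),
      };
    }

    // Sum of distances between consecutive samples.
    function pathLength(trace) {
      let len = 0;
      for (let i = 1; i < trace.length; i++) {
        len += Math.hypot(trace[i].x - trace[i - 1].x, trace[i].y - trace[i - 1].y);
      }
      return len;
    }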

Place of Origin - voice or type or TEXT.

Journey / Travel Time - Planned obsolescence. Maybe stars should just fade to super dull if never commented on.

Comments - Take in comments that later get sent to stars.

Gestures - Map navigation.

Star Structure

Generation - Generate and draw the star based on the passed feature set. Attach labels so travelers can see where their new stars are. Stars have different colors. Maybe tag stars with location. Possibly the user can add final touches to the star.
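
As an illustration of the generation step, here is a sketch using Three.js. The specific feature-to-parameter mappings (duration to hue, curviness to size, aspect ratio to scale) are placeholder assumptions; the real mapping is still being designed.

    // Map signature features (see Input above) to a simple Three.js star.
    function makeStar(features) {
      const color = new THREE.Color();
      // Longer signing times shift the hue; clamp so values stay in [0, 1].
      color.setHSL(Math.min(features.duration / 10000, 1), 0.8, 0.6);

      const radius = 1 + Math.min(features.curviness, 4); // curvier = bigger
      const star = new THREE.Mesh(
        new THREE.SphereGeometry(radius, 16, 16),
        new THREE.MeshBasicMaterial({ color: color })
      );

      // Stretch the star to echo the signature's aspect ratio.
      star.scale.x = Math.max(0.5, Math.min(features.aspectRatio, 2));
      return star;
    }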

Interaction

Exploration - Notion of context and their location on the map. Weather changes. Highlight local vs. global. Gesture that takes from plane view to globe view.

Ambient Display - Comments from stars create ambient city for passers-by.

Technology

  • HTML5
  • WebGL
  • CSS3
  • Node.js
  • MongoDB
  • three.js
  • two-way sms (twilio / Google Voice)
  • Google Maps API

Minimal prototype I

  • Map and stars in context together
  • Star appears on signature
  • Map navigation (Non-gestural)
  • Minimal Leap interactions. For now the signature is shown for the sake of testing. Three fingers erase, one finger writes, and two fingers act as a cursor (the hand can move without leaving anything on screen); see the sketch after this list.
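
Here is a sketch of how these finger-count modes can be dispatched with Leap.js; drawAt, moveCursorTo and eraseCanvas stand in for the real handlers.

    // Dispatch on the number of fingers visible in each Leap frame.
    Leap.loop(function (frame) {
      const n = frame.fingers.length;
      if (n === 1) {
        drawAt(frame.fingers[0].tipPosition);       // one finger writes
      } else if (n === 2) {
        moveCursorTo(frame.fingers[0].tipPosition); // two fingers: cursor only
      } else if (n >= 3) {
        eraseCanvas();                              // three fingers erase
      }
    });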

Implementation to-dos

  • Connect Leap to control for map movement
  • Develop feature recognition from signature
  • Draw stars based on feature
  • Store stars with comments, coords, place, unique ID (see the schema sketch after this list)
  • Have some way to create paths in google maps (hopefully randomly)
  • Refine signature input from leap
  • Develop instructions for user to understand gestural input.
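
As a working assumption for the storage to-do above, here is a Mongoose schema sketch; the field names are provisional.

    const mongoose = require('mongoose');

    // One document per star: signature features, places, and comments.
    const starSchema = new mongoose.Schema({
      starId: { type: String, unique: true },  // short ID shown to the traveler
      features: mongoose.Schema.Types.Mixed,   // signature-derived feature set
      place: String,                           // where the star was signed
      origin: { lat: Number, lng: Number },
      destination: { lat: Number, lng: Number },
      comments: [{ text: String, postedAt: Date }],
      createdAt: { type: Date, default: Date.now }
    });

    module.exports = mongoose.model('Star', starSchema);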

Introduction

Motivation

Our project is motivated by our impetus to emphasize the forgotten experience of "encounters," unplanned journeys, and random walks, and to address the need of travelers to express presence while maintaining anonymity. This questions the supposition that itinerants or visitors lack ambient awareness of how others live in their current environment, and that filling this need suffices to encourage exploration by diminishing the action biases against exploring new places (by introducing opportunities for spontaneous decision making). The project chooses to do so through the tradition of guestbooks as preservers of a place's history. From this we are left to build on users' understanding of guestbooks and extend their capabilities. In short, Dérive is a re-imagining of the guestbook that tries to tie exploratory commentary "in time" rather than "after the fact." We realized that part of the exciting tradition of guestbooks is being allowed to peruse other travelers' comments without their cognizance. Taking this tradition of leaving a statement, we translated the idea into being able to "text" (via web app) to your digital signature (star) while you are exploring the city, letting those at the source of the projection see travelers' comments -- and begin their own journeys.

Hypotheses

From this motivation and mulling we can extract this subset of hypotheses:

  • Users* gain salience on local vs. global distribution of those around them, who (if sampling is maintained) are also travelers.
    - In question: the effect of this knowledge.
  • Users* are excited by the digital guestbook with intuitive gestural interactions and engaging visuals. They love the idea of a guestbook but feel traditional guestbooks are fairly obsolete.
  • Users* are attached to the idea of their star being present in this visual/digital system.
    - In question: do they expect their star to stay? Would they ever return? Would they want to visit their representation from the remote web app? See the comment history?
  • Users* more often than not want to explore and follow up on comments they find compelling, especially if it is related to day planning.
  • Users* will need some impetus to really comment through the app after they leave, some reminder.
    - Key question on extended usage: the desire to leave more than one comment if any after leaving!

Driving Questions

  • Can we encourage spontaneous exploration in new contexts, where it is otherwise less present?
  • Will users understand the basis of how to use a digital guestbook?
  • Will users "sign" the guestbook while exploring the city?
  • How do visitors interpret the ambient comments?
  • What role do guestbooks play, and how does that role change with a real-time remote feed?
  • How far can we extend the notion of abstracted presence (in our case signatures)?
  • What is the threshold of private and public in these settings?
  • How engaging will users find interacting with the guestbook using gestures?

Methods

Critical Tasks

  • Signature signing
  • Input origin and destination
  • Switch between star view and globe view
  • Explore each view using gestures (including toggling between origin, destination and comments modes in globe view)
  • Connect to star through web app for remote use on cellular device

Data

  • Length of interactivity
  • Time spent exploring on star view
  • Time spent exploring globe view
  • Types of unspecified gesture use
  • Whether users connect to their star through the web app on a cellular device
  • Observed breakdowns or confusions
  • Interview notes on their feelings and emotions about the experience

User Recruitment**

We plan to A/B test on other HCI students and non-CS** friends before testing with real, impromptu visitors.

Then we will go to the Stanford visitor center to recruit random campus tourists, setting up there and occupying their waiting time.


*Users are all types of travelers with all types of agendas.

**It is important that most recruited users not be preplanned, since a guestbook is encountered spontaneously in traveler settings such as the visitor center (or airports/hotels).

***Those who may be considered traditional purists and abhor technology, to test their understanding of the guestbook metaphor extension.

Results

On Mar. 7th and 8th, we did user testing with 10 representative users.

User 1 (Psych Major)

  • Expressed nervousness and feelings that she was 'doing it wrong' due to inaccurate/insensitive feedback on screen from the Leap
  • Expected zoom to be triggered by a spreading-fingers gesture, or a pinch-and-pull gesture with two hands. 
  • Wanted pan (or spin in globe view) to be triggered either by one finger swipe or whole hand swipe in desired direction.
  • Also intuitively wanted to "grab" the globe with a claw-hand motion and rotate it that way (kind of like a joystick) 
  • Would not miss the zoom on the globe view if it did not exist
  • Thought that zooming in on the earth to switch to globe view might be better than having an explicit gesture.
  • Thought that perhaps we should have a terrain map globe if we only had a few colors for the data points. (This is also more coherent with the earth image in the star view. In this case, perhaps we should also use a terrain map for the location chooser - that should maybe be A/B tested as well since for that particular purpose of choosing location, the black and white map might be less distracting, but again the terrain map would match the rest of the app better).

User 2 (Bio Major)

  • Suggested that the signature input have acceleration detection like mouse acceleration: greater acceleration would map to moving greater distances across the screen rather than a 1-to-1 mapping. Said it might feel more natural (see the sketch after this list).
  • Played around a lot more, did not care so much that the data was accurate - he was mainly just curious what it could do/what he could make it do/what would happen if he tried such-and-such motion. 
  • Would miss the zoom in globe view if it did not exist
  • Wanted to be able to spin the globe with hand swipe
  • Tried "holding" globe between two hands and rotating
  • Tried using flat hand palm down as a plane of rotation 
  • Idea for gesture to return to star view was a clap - bringing both hands together, kind of like closing a book or squishing the earth 
  • Thought that it would be nice if everything rotated around the earth so the user would never have to navigate beyond the earth but could just "spin" the stars around the earth so the ones in back move to the front. (We have considered this, but the graphical calculations are complicated, could not get it to render as desired when we tried it. May think about trying again if there is time. This sort of layout would allow for more parallel motion between star view and globe view - both would have a spinning sort of navigation)
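
Here is a sketch of the acceleration mapping suggested above; the gain curve and constants are illustrative assumptions.

    // Scale a raw fingertip delta by a speed-dependent gain, like mouse
    // acceleration: faster motion travels proportionally farther on screen.
    function acceleratedDelta(dx, dy, dtMs) {
      const speed = Math.hypot(dx, dy) / dtMs;   // finger speed (mm per ms)
      const gain = 1 + 2 * Math.min(speed, 1.5); // grows with speed, capped
      return { dx: dx * gain, dy: dy * gain };
    }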

User 3 (Visitor Center Desk Assistant, CEE Masters)

  • Tries to use a much wider range of angles than the Leap can detect.
  • Does not realize/recognize/learn the mapping of gesture to screen-- that little movement can make a big dip in canvas.
  • In signature mode user actually pinches to write their name, somewhat similar to holding a pencil, rather than thinking of their hand as a pencil.
  • About a minute and fifteen seconds of interaction time before high tension sets in and they give up.
  • Tendency to touch the sensor when it doesn't work and at the very beginning.
  • Got stressed when the phone rang.

User 4 (Astronomy/SLAC PhD)

  • Fearful to use and be observed using.
  • Seemed skeptical
  • Was unclear on their angle of use for input.
  • Needed very clear gestural instructions.

User 5 (New Student Visiting from NY with Mother)

  • [In this study we didn't explain what the app can do, but asked "what do you want to do" at each stage; we found that we supported their wants!]
  • Wanted to see Earth. (we then toggled to globe view)
  • Wanted to see the entire signature.
  • Too close to leap most of the time (directed by mother to pull back)
  • Much younger (high school) not familiar with being user studied or this tech.
  • User asked whether they can talk to other people (we see our comments mechanism as fulfilling this need).
  • User had trouble finding the cursor to place themselves.

User 6 (Businesswoman from San Francisco)

  • [Most skeptical user]
  • Talked a lot about her behavior at airports, about eyeballing other people and whether or not she can speak to them.
  • Assessing whether other people are bored too.
  • Wants to meet up with other people she doesn't know to kill time in the airport.
  • A frequent international traveler, she yearns to talk to people.
  • Friends around the user gave her a hard time.

User 7 (Director of Architectural Design Department)

  • 3D more exciting to those that work with 3D in their profession/studies.
  • Learning curve steep but fast.
  • Likes the guestbook idea.

User 8 (EE Freshman, Tech Geek)

  • Played with the Leap demo a bit before testing.
  • Signed very smoothly and commented that this is very sleek, wished he could see the entire signature
  • Had trouble moving the cursor around the map, but understood the instruction and successfully confirmed by putting in another hand
  • But when inputting the destination, there was a false positive
  • When seeing the star, he naturally started to explore possible gestures to navigate through the space and quickly figured out how to pan or zoom.
  • Succeeded in transitioning to globe view with the two-hand gesture; quickly learned how to play with the globe, concluding that there was momentum in the globe's movement and that it's easier to control the globe with two hands.
  • He would like to inspect a specific region on the globe, but with the current gesture control it's not easy to do that.
  • He suggested it would be better if it's less sensitive.
  • He commented he would definitely play with it when he's traveling as it's cool to look at and fun to interact with.

User 9 (Entrepreneur at Bytes Cafe, from San Diego)

  • When signing, he moved his finger very slowly at first, then got a sense of how it worked and started to sign smoothly.
  • Questioned the definition of origin, asking whether it means place of birth or just the place he's flying from. He suggested the latter definition would make more sense in airports.
  • Had trouble moving the cursor across the map, the confirmation gesture didn't work.
  • Liked the concept of leaving marks in places he has been.
  • When navigating in the star view, he naturally moved his hand up and down but nothing happened. He didn't figure out that he could move two fingers back and forth to zoom in/out until we told him. He discovered how to pan left and right by himself.
  • Would like to zoom closer to his star and other people's stars.
  • In the globe view, he commented that the visualizations, specifically the spikes on the globe are very creative and interesting.
  • Wished to make more sense of the data presented, such as some information about the people who contributed to a spike, or the names of isolated islands (which fewer people have identified as origins or destinations).
  • Wished that there could be some way to reveal people's purposes of being here, such as vacation, conference, even events like SXSW.
  • He thought the comments were too open-ended; people might not know what comments to leave.

User 10 (CS PhD from South Africa)

  • When signing, commented that he had no idea how large the sensing area is. Suggested a calibration procedure where the user touches the four corners of the screen to get a sense of the area.
  • Commented that moving the cursor on the map using the Leap is very frustrating; he couldn't get the cursor to his origin, South Africa.
  • When navigating in the star view, he suggested that when the viewpoint is farther away, slight hand movements should trigger larger movements, while when the viewpoint is closer, slight hand movements should trigger smaller ones (see the sketch after this list).
  • Gesture to transition to the globe view didn't work out.
  • Didn't see the connection between the star view and the globe view.
  • First he naturally used two hands to control the globe.
  • Commented the visuals of the globe are very impressive but the gestures are frustrating as the globe didn't behave the way he expected.
  • He had trouble getting South Africa to face him.
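
User 10's distance-dependent sensitivity could look something like this sketch, using the Three.js camera; the constants are illustrative.

    // Pan gain grows with camera distance: coarse when zoomed out,
    // fine when zoomed in. `target` is the point the camera orbits.
    function panGain(camera, target) {
      return camera.position.distanceTo(target) * 0.002;
    }

    // Apply a hand delta (from the Leap) to the camera position.
    function panCamera(camera, target, handDx, handDy) {
      const gain = panGain(camera, target);
      camera.position.x -= handDx * gain;
      camera.position.y -= handDy * gain;
    }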

General Conclusion

  • Users thought graphics were cool, thought Leap control would be cool if it were smoother and more accurate
  • Zoom was triggered too easily, especially in globe view
  • Users understood the "put in another hand to confirm" very quickly. It was interesting to note that they put the second hand in the same pointing position as the first. But this gesture didn't work most of the time.
  • Users wanted the signature to stay on-screen. One user said otherwise she wasn't sure it was actually registering anything, or if it was just showing her where she was on the screen.
  • Users need to feel connected to their star - why is it "theirs"? How does its generation have anything to do with them (their signature).
  • Users asked what they were supposed to sign. We had supposed that signing a guestbook was familiar enough that they would assume their names, but perhaps we need more explicit instructions (Vicky's idea: to have a star swoop through the signature space and write "sign" or "signature" in a .gif before the user starts signing. This could also replace the countdown currently in place and still give users that buffer time to prepare) 
  • Users approved of older stars being farther back.
  • Users thought the comment system would be cool, wanted to see that implemented.
  • Many of the gestures users performed were in fact very close to what we had intended, but they gave up on them and kept trying different things because their hands weren't perfectly oriented, so the Leap didn't pick up the intended motion accurately. It is difficult to teach users gestures ambiently, because the gestures are best performed when you know the limits of the Leap and that it is looking for fingertips. Users used a whole-hand motion, for example, to pan in star view, but it resulted in a jerky motion that didn't map to their entire movement, because their hands were oriented slightly diagonally and their fingers occluded each other as they moved.

Discussion

One thing we tested in this round of studies was the kind of instruction, or lack thereof, a user needs to learn the set of gestures. Recognizing the newness of the input style, we decided to first see how users react in the "discovery" process. When verbally instructed on the gestures we recognize for a certain scene, users applied their own understanding of what the instruction mapped to. We saw the most variety in interpretation when we asked users to "place" themselves on a map. In this scene they are asked to put in a second hand to "lock" their choice. Where they placed that hand caused the gesture to fail 60% of the time (6/10 users).

When a user got tired of not getting a gesture right, we used our nifty keyboard shortcuts to bypass that scene/stage/step in the app.

As noted above, our system did not present any gesture instructions during testing (we do have animated GIFs for this), and as a result the testing process was much longer. From this we discovered that users, regardless of instruction, just start making things up when the system doesn't respond quickly enough.

We also did not add the web app/comment layer, though part of it is implemented. We discovered that without this layer, users wanted it.

Implications

From the short interaction times, and users' readiness to dismiss the gestures as completely not working when they don't respond, we conclude that the system has to be much more robust to keep up with users' learning curve. This curve was pointed out to us as steep but quick.

Users' tendency not to know their angle of interaction, or "pick-up", with the Leap suggests a need to delineate the space they can move their hand in. Does this mean we put a glass box around the Leap up to a certain height? Or tape a rectangle on the back wall? Do we limit the amount of space their hand can move in front of the Leap on the y-axis?
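
On the software side, one option is to lean on the Leap's interaction box, which normalizes (and can clamp) fingertip positions into a known volume; moveDot, screenWidth and screenHeight below are placeholders.

    // Clamp the fingertip into the Leap's interaction box and map it to
    // the screen; passing `true` clamps out-of-box points to the edges.
    Leap.loop(function (frame) {
      if (frame.pointables.length > 0) {
        const tip = frame.pointables[0].tipPosition;
        const norm = frame.interactionBox.normalizePoint(tip, true); // [0,1]^3
        moveDot(norm[0] * screenWidth, (1 - norm[1]) * screenHeight);
      }
    });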

We also discovered that the user's frame of reference is actually the Leap itself, as it sits facing up at them. Since they expect this to be the mapping, we are considering changing the zoom gesture to moving the hand up and down rather than backwards and forwards.

Users tended to keep writing their signature. We had decided that, for security reasons, the signature should leave only transient traces rather than a persistent drawing. Users took this as the system "not registering" and kept writing, so the program wouldn't continue in a timely manner. This leads us to consider adding the persistent line drawing back.

Users consistently had issues finding themselves in the placing stage; beyond that, they had trouble getting the cursor to where they were from and were frustrated at having to settle for somewhere nearby. This leads us to consider changing the relative mapping of finger position to the context, and also doubling the size of the cursor.

We noticed that when the user was not alone, the "backseat driver" helped troubleshoot usage errors very quickly.

Our more advanced users (mostly from CS) asked how the star was related to the signature and how the stars mapped to the globe. These relationships were lost due to procedural animation complexity and timeouts in the JS files that work between events. In prototype two we hope to add indicative stages that show how the stars move into the globe view, and also illustrate how the signature is mapped to features of the star.

User Testing

Functional Prototype II

Based on the valuable feedback from user testing, we compiled a list of changes, mostly about gestures, since gestures are essential and received the most requests for improvement. We also realized that we need to think about how to better guide users through the system and how to better convey the concept. Here are the detailed changes we came up with and their implementation status.

Changes on gestures:

  • The complete signature is shown now while signing.
  • In the placing origin/destination mode, detecting whether there were two hands to "lock" didn't work very well, so we've changed it to using two fingers to "lock", which works pretty well.
  • In the placing origin/destination mode, we've adjusted the parameters so that users can navigate across the entire map; during user testing, they sometimes couldn't reach East Asia.
  • In the placing origin/destination mode, we've changed the cursor to be larger and more salient by using a flashing bubble.
  • In the star view, now two fingers up and down are for zooming out and in respectively, as most users found it more intuitive than moving back and forth.
  • In the globe view, all gesture controls are less sensitive now; that is, hand movements trigger smaller changes in the perspective of the globe.
  • In the globe view, the zooming gestures are two fingers up and down, same as star view.
  • In the globe view, moving the whole hand up and down rotates the globe vertically, and swiping left and right rotates it horizontally (vertical rotation is more sensitive). This makes it easier to navigate to a specific region on the globe; a sketch of these mappings follows this list.
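
Here is a sketch of the revised mappings with Leap.js; the sensitivity constants and the zoomCamera/globe handles are illustrative.

    // Map Leap motion since the last frame onto zoom and globe rotation.
    let lastFrame = null;
    Leap.loop(function (frame) {
      if (lastFrame) {
        const t = frame.translation(lastFrame); // [dx, dy, dz] since last frame
        if (frame.fingers.length === 2) {
          zoomCamera(t[1] * 0.01);          // two fingers up/down: zoom out/in
        } else if (frame.hands.length === 1 && frame.fingers.length >= 4) {
          globe.rotation.x += t[1] * 0.004; // whole hand up/down: vertical (more sensitive)
          globe.rotation.y += t[0] * 0.002; // swipe left/right: horizontal
        }
      }
      lastFrame = frame;
    });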

Other changes:

  • We're making GIFs to give better instructions on gesture controls.
  • Users typically found their signatures disconnected from the stars. So we've implemented feature analysis of the signature and mapped those features to features of the stars. We'll also include a brief description of the mapping once the star is generated.

As our system is relatively large and complicated, some parts were not yet fully implemented before user testing. Here are some progress updates:

  • All the information of a newly created star is now stored into database so that the next time it can be rendered exactly the same.
  • The star view is now showing all the saved stars, not randomly generated ones.
  • The globe now takes in a dummy data file containing origin/destination data and visualizes it in two different colors. We'll implement generating this data file from the database soon.
  • A web app is implemented so that users can use their star id to access their star and post comments through it. The comments are saved into the database; a sketch of such an endpoint follows.
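
Here is a sketch of what that comment endpoint can look like with Express and Mongoose; the route shape and field names are provisional, and Star is the model sketched earlier.

    // POST /stars/:starId/comments  { text: "..." }
    app.post('/stars/:starId/comments', express.json(), function (req, res) {
      Star.findOneAndUpdate(
        { starId: req.params.starId },
        { $push: { comments: { text: req.body.text, postedAt: new Date() } } },
        { new: true },
        function (err, star) {
          if (err) return res.status(500).send(err);
          if (!star) return res.status(404).send('Unknown star');
          res.json(star.comments); // the display can also render these on the star
        }
      );
    });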