Game Design (Part 1)

Following last week's class, I was tasked with coming up with some initial game ideas. I proposed three:

Initial ideas

Responses to these three game ideas were good. People tended to prefer either Blue Creatures or Trash It. Ultimately, it turned out that enough people in the class were interested in working on the Blue Creatures idea with me, so we are trying our best to bring it to life!

Our team: 3 computer scientists and 1 designer (myself)

Blue Creatures:

You are a scuba diver! The objective of the game is to catch Blue Creatures, but you need to physically set up traps to get them! The more traps/baits you set up underwater, the better your chance of catching Blue Creatures faster. However, it also becomes harder for you to watch over all your baits. You have a limited amount of bait to use. Once it is gone … the game is over.

Game Design_2_850px Game Design_3_850px

Paper Prototype:

After a team discussion of how the game should work, we went ahead and tried out how the game might work with a paper prototype. It was very interesting to observe how people played our game!

Paper Prototype_1

Paper Prototype_2

Paper Prototype_3_850px


People like the gameplay! They feel that there are enough components going on in the game to keep them interested, yet it is not so complex that they cannot handle the level of difficulty.

What we learned:

a. Items that players use to eliminate other creatures need to be used at close range. Otherwise, people would simply stay on one side of the screen and not maneuver around.

b. The speed of the diver relative to all the creatures is crucial; we need to pin down the speed of the game.

c. People did not realize that they could move the bait up and down. We probably need a better indication/interface design.

Our next steps are to do more testing and get into more details of the game. Stay tuned for my blog next week!



Library Observation: Week 1

In the first week of my in-depth research class, we had to go out and observe how people behave in libraries. The assignment seems simple, but it actually required a lot of patience and good observation skills. You would be amazed how many people react in exactly the same way in a certain situation. Sometimes they do it without realizing. Below are some of my recordings and some insights from my observation:



10.30 am: The library just suddenly gets quiet. A moment ago I could hear loud ambient noise, but not now. There are significantly fewer people walking in and out of the library. I wonder why. (Assumption: it must be because classes start at this time, and people are probably hurrying to class. A library is not only a place where people come and study; it is also a pathway people use to get to their destination.)

Suzzallo_First Floor_850pxx

First floor elevator: the elevator opens by itself?! 12 times already. I only saw a few people use the elevator on this floor, but it opened by itself many more times than people actually used it.

Art exhibition: one person glanced at it, but no one touched it or even took a close look. I am curious why, because it is so prominent and right in the center of the open space.

Alarm: it only went off once. People looked at it, but no one did anything.

People: There were a lot of people walking in and out of the library. Many walk with headphones or earphones on. Some walk with friends and some walk by themselves. Quite normal at this point.

11.06 am: I am now on the first floor of Suzzallo Library, midway between the main entrance and the Allen Library south wing. On my right, there are many computers that students can use. On my left, there are countless books and bookshelves. All the computers are occupied, but no one is looking at any books on these shelves. Am I in a library or a computer lab? lol.

Wait, I just saw one really old guy reading a book without any computer equipment.

11.25 am: I am following people from the entrance of Suzzallo to see where the big crowd goes. Surprisingly, many people went straight to the other exit of the library, and I am back at the same spot I was before (the entrance of the south wing of Allen Library). It is loud and many people are walking relatively fast. I wonder if it will suddenly get quiet again on the half hour…

11.28 am: the alarm went off, but no one did anything.

11.31 am: The ambient noise suddenly gets very quiet. There are now far fewer people walking in and out of the library at this exact time.

Conclusion from observation

Maybe all people want is somewhere to sit down and study. Thousands of books sit on bookshelves; however, I rarely saw anyone actually looking at them. Books are apparently obsolete, and most people use computing devices in the library. The library is really not so much a place where people come to find or discover new books anymore. It is more of an open space where people can comfortably do their work. They can sit down and rest. In the Allen/Suzzallo library in particular, I did not observe much group work activity; I mainly saw individuals working by themselves.



Research Observation #2 (observing the same place for a second time)

1.45 pm: I am sitting at the same spot as last Friday (near the south wing exit of the library) to observe people's general behavior in the library. Not many people are walking around at this point. The library seems quiet.

The same art exhibition, which sits in the middle of the big empty space in front of me, is still here, and no one is looking at it. People walk right past it like nothing is there…

The elevator located directly in front of me still opens by itself, like it did the last time I was here.

2.59 pm: I am now in the Harry Potter Library. This place is still very quiet. No one is talking to each other. Everyone is working on their own thing. Many of them have either a cup of coffee or a water bottle next to them. I saw one person comfortably sleeping.

Nothing else was really different from the last time I observed.

Suzzallo_Water Bottle_850px

3.36 pm: There were many people who came to this library just to visit. They used their cell phones to capture images of the library.

Suzzallo_Taking Picture_850px

Conclusion from observation

I didn't notice much that was different from my last observation. Two new things I hadn't realized before: first, people like to have their water bottle with them when working in the library. Secondly, it is not only people who want to study who come here. There are also those who visit the library because it is so pretty : D


Coding Pong!

For this spring quarter of my master's degree in Human Computer Interaction and Design, I am taking a Game Design course in the CSE department. That's right, Computer Science … that means coding! Even though I am a designer, I have some coding background. I know basic Java programming and web languages (HTML, CSS, PHP, jQuery, Ajax, and a bit of databases). I thought I knew what I was getting myself into, but … our first assignment was to code a "Pong" game in Flash! Of course, it was tough because I had never coded in Flash using ActionScript before. We were given a spec for the assignment and … that's it! I had to figure everything else out myself.

Well, for those of you who are learning basic game programming in Flash: if you are frustrated/confused the first time around … you are not alone! I had the same experience. However, if you stick with it to the end and finish coding your game … it is such a joy to see it working : D

After coding this game, I totally understand how programmers think when they are coding. Their objective is to get the program to work! That's why they have no time to think about how the interface is going to look (this is where designers like myself come in). I now have a very clear understanding of what programmers think about and are most concerned with. Check out my Vimeo video of me playing my Pong game against myself : )
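The heart of Pong is a tiny per-frame update: move the ball, bounce it off the walls and paddles, and score when it leaves the court. My assignment was in ActionScript, but the same logic can be sketched in Python; the court size, paddle height, and function names below are hypothetical, not taken from my actual code.

```python
# Minimal sketch of Pong's per-frame ball update (logic only, no rendering).
# Dimensions and names are made up for illustration.

COURT_W, COURT_H = 640, 480   # hypothetical court size in pixels
PADDLE_H = 80                 # hypothetical paddle height

def step_ball(x, y, vx, vy, left_paddle_y, right_paddle_y):
    """Advance the ball one frame; return (new_state, scorer_or_None)."""
    x, y = x + vx, y + vy
    # Bounce off the top and bottom walls.
    if y <= 0 or y >= COURT_H:
        vy = -vy
    # At the left or right edge, bounce only if a paddle is there.
    if x <= 0:
        if left_paddle_y <= y <= left_paddle_y + PADDLE_H:
            vx = -vx
        else:
            return None, "right_scores"   # ball left the court
    elif x >= COURT_W:
        if right_paddle_y <= y <= right_paddle_y + PADDLE_H:
            vx = -vx
        else:
            return None, "left_scores"
    return (x, y, vx, vy), None
```

Seeing this loop spelled out is also what made me appreciate the programmer's mindset: every frame, the whole game is just state plus a handful of conditionals.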






Hitch Hiker

For our final project, we are working on how to make the short-distance carpooling experience better.

Team members: Shia Liang, Punchet Sangnil, David Yang

Hitch Hiker is a mobile phone application that helps match drivers who need someone to carpool with passengers who need a ride and are heading in the same direction. The application is intended to be used spontaneously by both passengers and drivers. In other words, there is no need for pre-planning on either side. Passengers and drivers can use our application to find a ride, or someone to carpool with, at the moment they need it. The fare is based on the driving distance and is automatically calculated by the Hitch Hiker app for ease.
While we were designing this system, we recognized that it is crucial for any driver who uses our application to be able to quickly identify the passenger he is supposed to pick up while he is driving.
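To make the distance-based fare idea concrete, here is a minimal sketch in Python. The base fare and per-mile rate are made-up illustration values, not the pricing we actually settled on for Hitch Hiker:

```python
# Hypothetical sketch of Hitch Hiker's distance-based fare calculation.
# BASE_FARE and RATE_PER_MILE are illustration values only.

BASE_FARE = 1.50       # flat pickup fee, in dollars (assumed)
RATE_PER_MILE = 0.60   # dollars per mile driven (assumed)

def fare(distance_miles):
    """Suggested fare for a ride of the given distance, rounded to cents."""
    if distance_miles < 0:
        raise ValueError("distance cannot be negative")
    return round(BASE_FARE + RATE_PER_MILE * distance_miles, 2)
```

The point of automating this is that neither party has to negotiate a price on the spot, which keeps the spontaneous use case frictionless.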

Check out our video at: https://vimeo.com/89218839

Chet_Hitch Hiker_Photo1_800px

Chet_Hitch Hiker_Photo2_800px

Chet_Hitch Hiker_Photo3_800px

Chet_Hitch Hiker_Photo4_800px


Looking for opportunity
We had a quick brainstorming session to generate several design questions. Ultimately, we decided on "How can we make short-distance carpooling easier and more efficient?" The problem space looked very promising for a final project with a one-week time constraint.


In-depth discussion
We then discussed the design question among ourselves and mapped out potential problems in this space. We wanted to make sure we had a good problem to tackle before moving on to finding a solution. We identified the stakeholders: drivers who need someone to carpool with and passengers who need a ride. This initial mapping of the stakeholders allowed us to sketch out the flow of interaction between them.

After we fleshed out the interaction flow of our application, we derived the research questions mentioned at the beginning of this document. We also decided that a video prototype and a behavioral prototype would be the perfect prototyping techniques to answer those questions.

Hitch Hiker Sketches_9_800px


We prepared a high-fidelity interface mockup as well as a storyboard before shooting our video prototype.

Hitch Hiker Sketches_1_800px

Hitch Hiker Sketches_5_800px

Hitch Hiker Sketches_4_800px

Hitch Hiker Sketches_3_800px

Behavioral Prototype:

Two of our classmates participated in our behavioral prototype. Below is a summary of the testing.

User Testing Result Summary:

As we stated earlier in this report, we looked for the best interaction technique for a driver to recognize the passenger he is supposed to pick up. Furthermore, the method used by the passenger should not distract other cars on the road. We tested three different types of interaction with our behavioral prototype.

Research Question:
What would be the best interaction technique for a driver to recognize the passenger he is supposed to pick up?
Interaction methods:
1. Using the flashlight on a mobile device to get the driver's attention.
2. A specific color pattern on the passenger's screen.
3. An arm gesture by the passenger.

Testing gesture_800px

Set up:

We set up the test on a long road with cars parked on the side in University Village. David, as the tester, would appear at any spot between the parked cars, and would only show himself when the user came close enough. There was also a camera set up behind the tester to record his movements.

Untitled-1-01_800px

On the other side, we asked our users to sit in the car with the other tester, drive along the road, and tell us if they could clearly notice the tester. We tested all three interaction methods with each user. To save time, we did not ask the users to stop the car to pick up the tester. Because the main purpose of this test was to compare the three interaction methods, it did not affect the test whether or not the user stopped the car.

We also set up an iPhone on the dashboard at the front of the car to record the user's reactions and the conversation inside the car.
Test Result:

After the test, we found that the most effective way to make the driver notice the passenger is the body gesture. First of all, the flashlight is not strong enough during the day. According to the participants, the flashlight was not the main thing that caught their eye.

“ I thought he’s taking a picture of me.”
“I can’t see it until the last second.”
“It’s hard to see in the day time.”


Secondly, the color bar on the screen was not visible enough for either participant. They could only see the tester holding his cell phone on the road; the content on the screen was not clear to the driver.
“Nothing I can see on the screen of his phone.”
“The fact he was holding his phone is all I can see.”


Lastly, the gesture was the most effective way to communicate the tester's location. Both participants found the waving gesture very effective; they could easily tell where the tester was. They also both understood that the tester was trying to stop them.

“That’s pretty obvious. It is obvious that I should pick him up.”
“It’s like a convention, if someone wants to stop the car, they wave.”



Interestingly, both users described the waving gesture as a convention. People are used to this kind of gesture: when someone waves at a car, they are assumed to need help or to be trying to stop the car.


Physical Prototype

This week, we were tasked with physical prototyping and 3D printing using a Makerbot! Luckily, my background before attending MHCID was industrial design, so this wasn't new to me. I had fun making stuff like I did when I was an undergraduate.

I made a "Labyrinth" (maze). Because I am familiar with the process of making things in 3D, I spent more of my time designing the maze rather than experimenting with the tools. Below are my renderings:

Labyrinth_view 1_800px

Labyrinth_view 2_800px

Labyrinth_view 4_800px

Check out the 3D printed version of the Maze!




Even though the Makerbot still has a lot of limitations, it makes 3D printing much more accessible than ever before. I am very excited to see what the future will bring to the 3D printing industry.



Fun with Arduino

This was the first time I had a chance to play with an Arduino! It was very exciting to get hands-on with electronic parts and make things work. When I was an undergraduate, I focused on how things look, their function, and their usability. However, I did not pay much attention to how they would actually be implemented. Arduino is a breadboard prototyping platform that allows me to quickly (most of the time) create functional prototypes.

This week, I tried to make a password detector! Basically, when a person enters a wrong password into the system, he gets a message saying "Password is wrong! Try it again!" along with a beep sound. Otherwise, it rewards you with a song!

Beyond the basic Arduino board and the breadboard, the project required a piezo speaker module and a numeric membrane keypad.


Here’s how all components connect with each other:


When you enter a wrong password:


When you enter the right password:


A music track is also playing when you get the password right!
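The core logic behind all of this is small: buffer the keypad presses, compare the buffer to the stored password when the user confirms, then beep or play the song. My actual code was an Arduino sketch, but the idea can be shown in Python; the password value and the "#" confirm key below are made up for illustration.

```python
# Hypothetical sketch of the password-detector logic, in Python for clarity.
# On the Arduino, keypad reads and tone() calls replace the stubs here.

SECRET = "1234"   # made-up password for illustration

def check_entry(keys_pressed):
    """Decide what feedback the device should give for one attempt."""
    if keys_pressed == SECRET:
        return "play_song"   # correct: reward with the melody
    return "beep"            # wrong: error beep + "Try it again!" message

def run_attempt(key_stream):
    """Consume keys until '#' (confirm), then check the buffered entry."""
    buffer = ""
    for key in key_stream:
        if key == "#":
            return check_entry(buffer)
        buffer += key
```

On the real board, the hard part was not this logic but wiring the keypad and piezo libraries together, as I describe below.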

Overall, I really enjoyed putting electronic parts together and seeing something actually work! However, the part I had the most difficulty with was coding … I had to spend a lot of time trying to understand the code. With no prior experience, I needed to figure out how to get the piezo speaker module as well as the keypad to work with the Arduino. To get around this, I searched various websites for tutorials and instructions on how to work with those two main components. After extensive research, I found shared libraries which contain basic code for both components. Beyond that, I needed to write more code to connect the two devices and make them function the way I intended. It was difficult, but it was a good learning experience. I would say that Arduino is good for those who like to test out electronic functions, and it is better if one has a basic background in computer programming. Otherwise, it will be very hard to execute anything with Arduino.



Mobile Prototype

Here comes App Inventor! 

This week, we were tasked with developing a working mobile application called "Tweak the Tweet" (TtT). This is a system that is part of HCDE Professor Kate Starbird's work in crisis research. She studies the "use of social media during crisis events, specifically looking at how the converging audience (aka, the 'crowd') can contribute—and is already contributing—to crisis response efforts.


"TtT uses Twitter, in a form of "digital volunteerism," to gather and direct information during crises to people who can act on it to the benefit of affected people and communities. As you can imagine, Twitter data can be very unstructured and noisy, and TtT is designed to allow digital volunteers to provide information in a form that is more easily and reliably processed and analyzed."

Last year, a group of HCDE students working with Professor Starbird, including Grace Jang, created a design for a mobile application to help volunteers build and send tweets that report crisis information in a structured format.

The Team and App Inventor:

I worked in a group of 5, Shia Liang, Albert Lui, Hadiza Ismail, David Yang, and myself, to tackle each part of the spec. We were all new to App Inventor, so this was a very good learning experience for us. I was responsible for programming the GPS location pinpointing in the application, the word counts while the user is typing, and showing the tweet status on each page.
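The character-count piece is the easiest to illustrate: as the volunteer types, show how much of Twitter's 140-character limit remains for the structured tweet. App Inventor expresses this with visual blocks rather than text code, so the Python sketch below only mirrors the logic; the label wording is my own, not the app's actual strings.

```python
# Sketch of the remaining-character logic behind the tweet status label.
# 140 was Twitter's character limit at the time.

TWEET_LIMIT = 140

def chars_remaining(text):
    """How many characters the volunteer can still type."""
    return TWEET_LIMIT - len(text)

def status_label(text):
    """Status text shown on each page next to the draft tweet (wording assumed)."""
    remaining = chars_remaining(text)
    if remaining < 0:
        return f"Over by {-remaining} characters"
    return f"{remaining} characters left"
```

Keeping the count visible on every page mattered because the structured TtT format eats into the limit quickly.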

Check out video prototype of the project here:



While we were programming the application, we found a lot of room for improvement. So beyond the basic functions in the spec, we went a step further and redesigned some parts of the user interface. We improved the interface so that it has a better flow for the user going through the application. For example, we made the first screen of the application less intimidating and created a new identity for the application.

example screen

Interface Design Credit: David Yang


Working with App Inventor was very challenging, especially working as a team on one project. At first, we divided up the project so that everyone could start working in parallel. However, App Inventor did not have an easy way to share our work, so we needed to compile our work onto one computer after each of us finished our part. Beyond the difficulty of sharing work, the program itself produces a lot of compilation errors. Because of that, we spent about a third of our time trying to figure out whether the errors came from the code we wrote. Overall, App Inventor is a good tool if you are an intermediate programmer. It is accessible and provides a lot of useful functionality.


Website Prototype

I was prompted to redesign the existing "Dub" website as well as conduct some initial user testing to evaluate the design. The newly designed web pages include the following basic functionality:

a. Events: a calendar-like system that includes information about Dub events and the weekly seminar series (schedule, presenters, abstracts, etc.).
b. Research: faculty research areas, projects, publications, collaborators, etc.
c. Blogs: a listing of news and announcements of interest to the community.
d. People: a listing of faculty, students, affiliates, etc. who are members of the dub community.
e. Login: a members section for dub members (login required) to edit their own information on the site (profile, research, etc.).

Beyond the basic functionality, the homepage is meant to give the audience a quick overview of highlighted information, including Current Events, Highlighted Research, Latest Blog Posts, and People in the Dub Community.

Please check out my prototype via Axure share:


Existing Design Analysis

Before creating a prototype, I first explored and identified problematic areas in the existing Dub website.





Information Architecture and Layout Template

After identifying the problematic areas, I reorganized the information architecture of the Dub website and grouped information with similar content under the same categories. For example, I grouped publications and projects under the Research category, and I grouped news, the calendar, and the weekly Dub seminar under Events.


Grouping information with similar content helped me a great deal in designing a new layout for the website. Since the old Dub website overloads its front page with a lot of unnecessary information, I decided to use the front page to show only the most important items from each of the high-level content categories: Events, Research, Blogs, and People.


Interactive Wireframe
After sketching out a rough layout template and reorganizing the information architecture of the Dub website, I used Axure to build the interactive wireframe prototype.


Here are screenshots of the new Dub website.






Design Evaluation

I conducted user research with a few participants. All of them stated that the overall aesthetic of the new Dub website was very well done. The layout is clean, and they appreciated the amount of relevant information presented to them on the first page. One area of improvement for the new design is the "People" section of the website. Participants stated that the list of people's names could be better presented; with the new design, it is still very difficult for audiences to look through lists of countless names.

Axure is definitely one of the better tools for prototyping websites today. However, it still has a lot of constraints and limitations. For example, it is very difficult, and sometimes not possible, to prototype in Axure the kinds of animations or transition elements which I know exist on the web. I also found myself repeating the same actions to get the same task done in different places. Furthermore, the HTML output from Axure is not maintainable, so it is not usable in the long run; the HTML code is not output in a way we could easily understand. If these issues can be resolved in a new version, Axure would be a great tool for website prototyping.

Please check out my prototype via Axure share:





Behavioral Prototype

Have you ever heard of the Wizard of Oz? Yes, you read that right. We were tasked to perform a behavioral prototype, or Wizard of Oz test, of a gestural interface for the Apple TV.

Design Prompt:

Build and test a behavior prototype for the following scenario:

Gesture recognition platform: a gestural user interface for an Apple TV system that allows basic video function controls (play, pause, stop, fast forward, rewind, etc.). The gestural UI can be via a 2D (tablet touch) or a 3D (camera sensor, like Kinect) system.
Your prototype should be designed to explore the following design research and usability questions:

• How can the user effectively control video playback using hand gestures?
• What are the most intuitive gestures for this application?
• What level of accuracy is required in this gesture recognition technology?

Design Consideration:

In considering the design prompt provided for this assignment, we decided on the following parameters for our behavioral prototype:

1. Our prototype will demonstrate a 3D gestural UI for an Apple TV system.
2. Our behavioral prototype and user testing will focus on the following 7 functions of the Apple TV:

Task icon list_800px

3. We will establish a unique gesture for each of the above 7 functions, plus an initiation gesture, ahead of the user testing.



4. Our user testing will examine and validate the following four areas:
a. Users’ ability to command the Apple TV through gestures without instructions or training on the actual available gestures.
b. Users' ability to command the Apple TV through gestures after some explicit instructions/training on the actual available gestures.
c. Different range of motions (to determine the ideal range for the users) for the gesture UI.
d. The need for a trigger/initiating motion/gesture to activate the gesture UI.

5. The user scenario is one where the user is standing in front of his large-screen TV, casually browsing and sampling different channels and shows, as he is not sure what he would like to watch at the moment.

Prototype Setup:

1. We used a 15-inch Macbook Pro situated on a moving cart (to elevate the screen level) to mimic a large-screen TV.
2. We considered the embedded camera on the Macbook Pro to be the gesture sensor that takes in user input.
3. Since we don't possess an actual Apple TV, we used the iTunes player to mimic the Apple TV interface.
4. We placed the user 5-10 feet in front of the Macbook Pro.


5. One of our team members was situated some distance behind the user/tester with an Apple remote to control the UI behavior in response to the user/tester's gestures.

6. We set up two cameras (one on a tripod and another one on a high desk) to capture the videos of the user testing from two different angles/perspectives.


7. We also recorded the actual screen on the Macbook Pro (we did not have enough room to splice the screen recording into our prototype video; instead, we used visual indicators in our final edited video to show whether a gesture from the tester resulted in the desired action on the Apple TV).
8. We asked the user/tester to think out loud and speak the actions/commands that he was gesturing (even though this is not a voice-activated system) so that we could follow the tester's intentions.

Discovery & Learning from the Evaluation Session:

1. Without any instructions or training, the majority of the gestures that the user/tester attempted during our first evaluation session actually matched closely the gestures we had designed, with the exception of play, pause, and stop. The user had a hard time guessing what the gestures for those three features should be.
2. The gestures we designed for play, pause, and stop were not distinguishable enough from each other. This caused some confusion on the user's part.
3. We found that having a trigger/initiation gesture to activate the gesture UI was important to avoid segmentation ambiguity (where the system confuses intended gestures with unintended gestures).
4. Our user preferred a range of gesture motion approximately within the width of his body. Larger gesture motions not only cause strain on the user but also (as the user insightfully pointed out) can create confusion on the system's part (e.g., it would be harder to tell where a gesture begins or ends in the motion for fast forwarding or rewinding).
5. With a very low-cost setup, we were able to observe the user's behavior and validate (or in some cases, invalidate) the assumptions we brought into the evaluation session.


Video Prototype

We finally got exposed to how to properly make a video prototype! Our lecture last Friday was super informative … we watched a couple of movie clips by Sergei Eisenstein and Alfred Hitchcock. The films were old, but it was still very worthwhile to see work produced by these two directors.

What we needed to do this week was create a video prototype that demonstrates the functionality and experience of using a product. We could either make a video of one of our prototypes from the last couple of weeks, or come up with a new product altogether. I chose to make my video prototype about "CLIP", an automatic action camera (a made-up product). The idea is that this small, hands-free camera would automatically capture your moments no matter where you are.

Sequence 02_adjust

To complete the video prototype, I reused my own footage from a road trip I took a couple of months ago. I wanted to show how such a little camera can capture such an amazing experience. Beyond that, I also staged an interview with myself in order to show off the CLIP prototype. A couple of friends helped me set up the stage and interviewed me about my experience using CLIP!



A video prototype is a great way to show the experience of using a product. Motion pictures can capture so much of what's going on in an environment. Adobe After Effects and Premiere Pro are two very handy tools for editing video, though using these editing programs for the first time can be intimidating. Another downside of video prototyping is that setup can take a very long time.