Fun with Arduino

This was the first time I had a chance to play with an Arduino! It was very exciting to get my hands on electronic parts and make things work. As an undergraduate, I focused on how things look, their function, and their usability; however, I did not pay much attention to how they would actually be implemented. Arduino is a breadboard-based prototyping platform that lets me quickly (most of the time) create functional prototypes.

For this week, I am trying to make a password detector! Basically, when a person enters a wrong password into the system, he will get a message saying “Password is wrong! Try it again!” along with a beep. Otherwise, it will reward you with a song!

Beyond the basic Arduino board and the breadboard, the project required a piezo speaker module and a numeric membrane keypad.


Here’s how all components connect with each other:


When you enter a wrong password:


When you enter the right password:


A music track also plays when you get the password right!

Overall, I really enjoyed putting electronic parts together and seeing something actually work! However, the part I had the most difficulty with was the coding … I had to spend a lot of time trying to understand the code. With no prior experience, I needed to figure out how to get the piezo speaker module as well as the keypad to work with the Arduino. To get around it, I searched various websites for tutorials and instructions on how to work with those two main components. After extensive research, I found shared libraries containing basic code for both components. Beyond that, I needed to write more code to connect the two devices and make them function the way I intended. It was difficult, but it was a good learning experience. I would say that Arduino is good for those who like to test out electronics. It also helps to have a basic grounding in computer programming; otherwise, it will be very hard to get anything done with an Arduino.
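The code I ended up with boils down to comparing the keys the user presses against a stored password and choosing the feedback sound. The real project is an Arduino sketch in C++ that reads the membrane keypad through a keypad library and drives the piezo with tone(); the Python sketch below only mirrors that decision logic, and the password "1234" and the function name are illustrative assumptions, not the actual code.

```python
# Core decision logic of the password detector, sketched in Python.
# On the real Arduino sketch, the wrong-password branch plays a short
# beep with tone() and the correct branch loops over note frequencies
# to play the reward song.

PASSWORD = "1234"  # assumed 4-digit code, not the real one

def check_entry(keys_pressed):
    """Return the feedback the system gives for an entered code."""
    if keys_pressed == PASSWORD:
        return "Correct! Playing the reward song."
    return "Password is wrong! Try it again!"

print(check_entry("9999"))  # Password is wrong! Try it again!
print(check_entry("1234"))  # Correct! Playing the reward song.
```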



Mobile Prototype

Here comes App Inventor! 

For this week, we were tasked to develop a working mobile application called “Tweak the Tweet”. This is for a system that is part of HCDE Professor Kate Starbird’s work in crisis research. She studies the “use of social media during crisis events, specifically looking at how the converging audience (aka, the ‘crowd’) can contribute—and is already contributing—to crisis response efforts.


“TtT uses Twitter, in a form of ‘digital volunteerism,’ to gather and direct information during crises to people who can act on it to the benefit of affected people and communities. As you can imagine, Twitter data can be very unstructured and noisy, and TtT is designed to allow digital volunteers to provide information in a form that is more easily and reliably processed and analyzed.”

Last year, a group of HCDE students working with Professor Starbird, including Grace Jang, created a design for a mobile application to help volunteers build and send tweets that report crisis information in a structured format.

The Team and App Inventor:

I worked in a group of five people (Shia Liang, Albert Lui, Hadiza Ismail, David Yang, and myself) to tackle each part of the spec. We were all new to App Inventor, so this was a very good learning experience for us. I was responsible for programming the GPS location pinpointing in the application, the word count while the user is typing, and the tweet status shown on each page.
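Of those pieces, the live character count is the easiest to show outside the tool. The real version was wired up from App Inventor blocks (updating a label as the tweet text changes), not written as code; the Python sketch below just mirrors the arithmetic. The 140-character limit matches Twitter's limit at the time, and the "#TtT #damage " prefix is an assumed example of the structured format, not an actual TtT template.

```python
# Sketch of the live character-count logic for a structured TtT tweet.

TWEET_LIMIT = 140  # Twitter's per-tweet limit at the time

def characters_remaining(prefix, body):
    """Characters the user has left after the structured prefix and body."""
    return TWEET_LIMIT - len(prefix) - len(body)

# Recomputed on every keystroke and shown next to the text box:
print(characters_remaining("#TtT #damage ", "Bridge out on I-90"))  # 109
```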

Check out the video prototype of the project here:



While we were programming the application, we found a lot of room for improvement. So beyond the basic functions in the spec, we went a step further and redesigned parts of the user interface. We improved the interface so that users flow through the application more smoothly. For example, we made the first screen of the application less intimidating and created a new identity for the application.

example screen

Interface Design Credit: David Yang


Working with App Inventor was very challenging, especially working as a team on one project. At first, we divided up the project so that everyone could start working in parallel. However, App Inventor did not have an easy way to share our work, so we had to merge everything onto one computer after each of us finished our part. Beyond the difficulty of sharing work on App Inventor, the program itself produced a lot of compile errors; we spent about a third of our time trying to figure out whether the errors came from the code we wrote or from the tool itself. Overall, App Inventor is a good tool if you are an intermediate programmer. It is accessible and provides a lot of useful functionality.


Website Prototype

I was prompted to redesign the existing “Dub” website and to conduct some initial user testing to evaluate the design. The newly designed web page includes the following basic functionality:

a. Events: a calendar-like system that includes information about Dub events and the weekly seminar series (schedule, presenters, abstracts, etc.).
b. Research: faculty research areas, projects, publications, collaborators, etc.
c. Blogs: listing of news and announcements of interest to the community.
d. People: listing of faculty, students, affiliates, etc. who are members of the dub community.
e. Login: a members section for dub members (login required) to edit own information on the site (profile, research, etc.).

Beyond the basic functionality, the homepage is meant to give the audience a quick overview of highlighted information, including Current Events, Highlighted Research, Latest Blog Posts, and People in the Dub Community.

Please check out my prototype via Axure share:


Existing Design Analysis

Before creating a prototype, I first explored and identified problematic areas in the existing Dub website.





Information Architecture and Layout Template

After identifying problematic areas, I reorganized the information architecture for the Dub website, grouping information with similar content under the same categories. For example, I grouped publications and projects under the Research category, and news, the calendar, and the weekly Dub seminar under Events.
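Written out as a simple category-to-pages mapping, the regrouping looks roughly like this. The four top-level categories come straight from the redesign, but some of the page names under them are my shorthand for the old site's sections rather than exact labels:

```python
# The regrouped information architecture as a category -> pages mapping.
site_map = {
    "Events":   ["News", "Calendar", "Weekly Dub Seminar"],
    "Research": ["Publications", "Projects"],
    "Blogs":    ["Announcements"],
    "People":   ["Faculty", "Students", "Affiliates"],
}

# Sanity check: every page lands in exactly one top-level category.
all_pages = [page for pages in site_map.values() for page in pages]
assert len(all_pages) == len(set(all_pages))
```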


Grouping information with similar content helped me a great deal in designing a new layout for the website. Since the old Dub website overloaded its front page with a lot of unnecessary information, I decided to use the front page to show only the most important items from each of the high-level content categories: Events, Research, Blogs, and People.


Interactive Wireframe
After sketching out a rough layout template and reorganizing the information architecture of the Dub website, I used Axure to build the interactive wireframe prototype.


Here are screenshots of the new Dub website.






Design Evaluation

I conducted user research with a few participants. All of them stated that the overall aesthetic of the new Dub website was very well done: the layout is clean, and they appreciated the amount of relevant information presented to them on the first page. One area for improvement in the new design is the “People” section of the website. Participants stated that the list of people’s names could be better presented; with the new design, it is still very difficult for audiences to look through lists of countless names.

Axure is definitely one of the better tools for prototyping websites today. However, it still has a lot of constraints and limitations. For example, it is very difficult, and sometimes impossible, to prototype in Axure the kinds of animation and transition elements that I know exist on the web. I also found myself repeating the same actions to accomplish the same task in different places. Furthermore, the HTML output from Axure is not maintainable, so it is not usable in the long run; the HTML code is not generated in a way we could easily understand. If these issues can be resolved in a new version, Axure would be a great tool for website prototyping.

Please check out my prototype via Axure share:





Behavioral Prototype

Have you ever heard of the Wizard of Oz? Yes, you read that right. We were tasked with building a behavioral prototype, also known as a Wizard of Oz prototype, to test a gestural interface for the Apple TV.

Design Prompt:

Build and test a behavior prototype for the following scenario:

Gesture recognition platform: a gestural user interface for an Apple TV system that allows basic video function controls (play, pause, stop, fast forward, rewind, etc.). The gestural UI can be via a 2D (tablet touch) or a 3D (camera sensor, like Kinect) system.
Your prototype should be designed to explore the following design research and usability questions:

• How can the user effectively control video playback using hand gestures?
• What are the most intuitive gestures for this application?
• What level of accuracy is required in this gesture recognition technology?

Design Consideration:

In considering the design prompt provided for this assignment, we decided on the following parameters for our behavior prototype:

1. Our prototype will demonstrate a 3D gestural UI for an Apple TV system.
2. Our behavior prototype and user testing will focus on the following seven functions of the Apple TV:


3. We will establish a unique gesture for each of the seven functions above, as well as an initiation gesture, ahead of the user testing.



4. Our user testing will examine and validate the following four areas:
a. Users’ ability to command the Apple TV through gestures without instructions or training on the actual available gestures.
b. Users’ ability to command the Apple TV through gestures after some explicit instructions/training on the actual available gestures.
c. Different range of motions (to determine the ideal range for the users) for the gesture UI.
d. The need for a trigger/initiating motion/gesture to activate the gesture UI.

5. The user scenario is one where the user is standing in front of his large screen TV and casually browsing and sampling through different channels and shows as he is not sure what he would like to watch at the moment.
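Since the “recognizer” in a Wizard of Oz test is a person, the gesture vocabulary is really just a lookup table, gated by the initiation gesture from point 4. The Python sketch below shows that logic; the specific gesture names (and the commands beyond the five listed in the prompt) are illustrative assumptions, since the actual set was defined ahead of the session.

```python
# The gesture vocabulary for the Wizard of Oz test, written as a lookup
# table. In the session, a teammate with the Apple remote played the
# role of the recognizer and applied this mapping by hand.

GESTURES = {
    "palm_push":   "play",
    "palm_hold":   "pause",
    "fist":        "stop",
    "swipe_right": "fast_forward",
    "swipe_left":  "rewind",
    "swipe_up":    "menu",
    "swipe_down":  "select",
}

def interpret(gesture, activated):
    """Map a gesture to a command, but only once the trigger/initiation
    gesture has activated the UI. This gating is what avoids segmentation
    ambiguity between intended and unintended movements."""
    if not activated:
        return None  # ignore everything until the initiation gesture
    return GESTURES.get(gesture)  # unknown gestures are also ignored

print(interpret("swipe_right", activated=False))  # None
print(interpret("swipe_right", activated=True))   # fast_forward
```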

Prototype Setup:

1. We used a 15-inch MacBook Pro situated on a rolling cart (to elevate the screen) to mimic a large-screen TV.
2. We considered the embedded camera on the MacBook Pro to be the gesture sensor that takes in user input.
3. Since we did not possess an actual Apple TV, we used the iTunes player to mimic the Apple TV interface.
4. We placed the user 5-10 feet in front of the MacBook Pro.


5. One of our team members was positioned at a distance behind the user/tester with an Apple remote to control the UI behavior in response to the user/tester’s gestures.

6. We set up two cameras (one on a tripod and another one on a high desk) to capture the videos of the user testing from two different angles/perspectives.


7. We also recorded the actual screen on the MacBook Pro. (We did not have enough room to splice the screen recording into our prototype video; instead, we used visual indicators in our final edited video to show whether a gesture from the tester resulted in the desired action on the Apple TV.)
8. We asked the user/tester to think out loud and speak the actions/commands that he was gesturing (even though this is not a voice activated system) so that we can follow the tester’s intention.

Discovery & Learning from the Evaluation Session:

1. Without any instructions or training, the majority of the gestures that the user/tester attempted during our first evaluation session actually matched closely the gestures we had designed, with the exception of play, pause, and stop. The user had a hard time guessing what the gestures should be for those three features.
2. The gestures we designed for play, pause, and stop were not distinct enough from each other, which caused some confusion on the user’s part.
3. We found that having a trigger/initiation gesture to activate the gesture UI was important to avoid segmentation ambiguity (where the system cannot distinguish intended gestures from unintended ones).
4. Our user preferred the range of gesture motion to stay approximately within the width of his body. Larger gesture motions not only cause strain on the user but also (as the user insightfully pointed out) can create confusion on the system’s part (e.g., it would be harder to tell where a gesture begins or ends during a fast-forward or rewind motion).
5. With a very low-cost setup, we were able to observe the user’s behavior and validate (or in some cases, invalidate) our assumptions going into the evaluation session.