Internship at IBM Canada, Fall 2014
I worked at the IBM Toronto Lab on the Web Tasking project. My responsibilities included:
- Planning/preparing/conducting in-lab user testing sessions (Tool used: Morae)
- Gathering and analyzing testing results; generating reports
- Improving the UX based on testing results
- Creating design assets and style guides for the development team
Project duration: 1.5 months
I was awarded 'Innovator of the Year' at IBM CASCON 2014.
The original Scribble prototype was rough and nearly unusable, with poor visual design and UX flow.
My job was to conduct formal user testing sessions based on this version, then redesign it into a usable prototype to showcase at IBM CASCON 2014.
The purpose of the test was to assess the overall usability and effectiveness of Scribble's interface and information flow when performing web tasking:
- Measure ease of use: how easy or difficult it is to perform web tasking using Scribble, and whether users of all types can use the application equally well.
- Identify the primary usability problems that act as barriers to users.
- Validate that performing the same task with Scribble on IISE would be more efficient than without it.
Methodology and Procedure
Qualitative vs Quantitative data
I decided to go with a hybrid approach because the product was at the strategic stage: I needed to measure prototype performance as well as explore new directions.
Rating scale design: Why the 7-point scale?
The psychometric literature suggests that having more scale points is better but there is a diminishing return after around 11 points (Nunnally 1978). Having seven points tends to be a good balance between having enough points of discrimination without having to maintain too many response options.
If there aren't enough response options, users are forced to choose the next best alternative, which introduces measurement error. For example, if users think a 5 is too high and a 4 is too low, they must settle on an option that is higher or lower than they wanted (assuming they can't pick a 4.5).
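This trade-off can be illustrated with a small simulation (my own sketch, not part of the original study): if each respondent's "true" preference is a continuous value on the 1-7 range, snapping it to the nearest option on a coarser scale produces a larger average rounding error, with diminishing improvement as points are added.

```python
import random

random.seed(0)
# 10,000 simulated "true" preferences, spread uniformly over the 1..7 range.
scores = [random.uniform(1, 7) for _ in range(10_000)]

mae_by_points = {}
for n in (3, 5, 7, 11):
    step = 6 / (n - 1)                                  # spacing between options
    options = [1 + i * step for i in range(n)]
    # Each respondent is forced to the nearest available option;
    # measure the mean absolute gap between true and recorded scores.
    mae = sum(min(abs(o - s) for o in options) for s in scores) / len(scores)
    mae_by_points[n] = mae
    print(f"{n}-point scale: mean rounding error {mae:.2f}")
```

The error shrinks quickly from 3 to 7 points but only marginally beyond that, which is the diminishing-returns pattern the psychometric literature describes.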
The study lasted about an hour. Each participant was given a short video tutorial on how to use Scribble to create a simple personalized task. Then the facilitator explained the session and asked the participant about his/her background. The participant was then asked to perform two tasks using Scribble. The facilitator asked the participant to think out loud while he/she was performing the task and made observations.
At the end of each task, the facilitator asked the participant to provide two ratings:
- The overall experience, on a 7-point scale ranging from 1 (very easy) to 7 (very difficult).
- Whether using Scribble was more efficient than the participant's own solution in completing the same task, on a 7-point scale from 1 (strongly agree) to 7 (strongly disagree).
Example of quantitative data collected during testing
- Only 23% of participants agreed that the interface was intuitive;
- 69% of participants agreed that it was easy to create tasks;
- 85% of participants agreed that Scribble would be an effective alternative in completing the tasks.
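Percentages like these come from a top-box count on the agreement scale: with 1 = strongly agree and 7 = strongly disagree, a rating of 3 or below counts as agreement. The sketch below shows that tally on made-up ratings (the numbers are illustrative only, not the actual study data).

```python
# Hypothetical ratings for illustration only -- not the actual study data.
# Scale as used in the study: 1 = strongly agree ... 7 = strongly disagree.
responses = {
    "interface was intuitive": [2, 4, 5, 3, 6, 5, 4, 5, 6, 4, 2, 5, 3],
    "easy to create tasks":    [2, 1, 3, 2, 5, 3, 2, 4, 3, 6, 2, 3, 1],
    "effective alternative":   [1, 2, 2, 3, 1, 2, 4, 2, 3, 1, 2, 5, 2],
}

def agreement_rate(ratings, cutoff=3):
    """Share of respondents whose rating is at or below the agreement cutoff."""
    return sum(r <= cutoff for r in ratings) / len(ratings)

for question, ratings in responses.items():
    print(f"{question}: {agreement_rate(ratings):.0%} agreed")
```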
After collecting the results from user testing, I started designing the next version based on the research.
To speed up the process, I suggested that the development team use Bootstrap for front-end development. This decision greatly helped the team ship the product on time.
The new version of Scribble was released in November 2014 during IBM CASCON 2014. Around 20 scholars and researchers tried the demo and gave very positive feedback on the product.
The app is no longer online but you can still watch the demo video:
As a member of the UX team, I was also awarded "Innovator of the Year".