‘The phrase “mobile usability” is pretty much an oxymoron. It’s neither easy nor pleasant to use the Web on mobile devices’ (Jakob Nielsen, Mobile Usability: Alertbox, 20/07/09).
Thus usability guru Jakob Nielsen throws down the gauntlet to the designer of a mobile app: how to get it right, so that your future users can do the tasks they need in the brightest sunlight, at the top of the highest mountain, in the far reaches of the taiga or in the loudest tropical thunderstorm – in short, accommodating all manner of environments and connectivity?
However, that’s only one of the usability challenges confronting the Rave in Context project. As Sander noted in his previous post, we are creating usable templates – patterns/models – for other developers to apply to specific contexts. We’re doing this by developing widgets based on the functionality of a specific VRE (myExperiment), which will ultimately be decontextualised into the templates. Therefore, we are not only working at one remove from our ultimate users but also, in designing and evaluating the user experience in myExperiment, we have constantly to ask whether the features that make the widgets usable by myExperiment users will be translatable, via the templates, to other – as yet unknown – tools. It’s this aspect that makes the project so intriguing from the usability perspective.
The process by which we are tackling this twin usability challenge is being documented in the Evaluation Plan on the project wiki, but I’ll summarise it here.
Scoping the usability problem space
The first task was to establish the usability ‘problem space’: specifically, our baseline definition of usability and its associated concepts:
- Learnability: How easy is it to learn to use the app?
- Ease of use, or throughput: How easily and efficiently can users perform their tasks with the app?
- Usefulness: To what extent does the app support the purpose for which it is intended: i.e. can users do what they want or need to do in it?
- Affective response: Do users like using the app?
- Memorability: How easy is it to re-familiarise oneself with the app after a period of non-use?
- Flexibility: To what extent has the app been designed so that it can accommodate changes in the tasks that users perform?
Our evaluation activities with users focus on three of these aspects: learnability, ease of use and usefulness. Users’ affective response will also be captured, though it can be expected to flow, in part, from data collected in relation to the other factors. Flexibility, in the sense of i) adaptability of the code to support different form factors and ii) adaptability of the templates for use with other tools, can only be evaluated internally by the open-source developer community.
From concepts to prototypes
The next step was to conduct a Web search for literature on usability issues relating to mobile applications, which happily revealed an emerging consensus among developers regarding good UI design practice. Combining the guidance most relevant to our needs with established principles of usability (guided largely by the work of Jakob Nielsen), we collated a set of general guidelines for designing the widget UIs (also published on our wiki). These, in turn, informed the compilation of a set of detailed requirements for the user experience.
From the UX requirements we have produced a set of paper mock-ups, which we are currently evaluating through cognitive walkthroughs with five individuals. Each participant has been selected for their expertise in at least one of the following areas: usability, mobile app design, accessibility and the use of research tools with a Web 2.0 dimension. On Thursday 15th September we visited E.A. Draffan of the ALUIAR project at the University of Southampton, who, in an energising conversation, set us right on several matters of accessibility (thank you, E.A.!).
We will complete the evaluations and modify the UI designs over the coming week. Unfortunately, resource constraints are preventing us from carrying out our original intention to evaluate digital versions of the revised designs with users. Instead, beginning in the last week of September, we will be conducting a rolling internal evaluation during the development process (which is following an agile methodology). Our final evaluation will be a workshop with potential users in the first half of November: details to follow.