Author Archives: Liz Masterman

Real open development in action

Ross Gardler, lead developer on the project, reports:

I discovered an issue: Wookie doesn’t pass cookies from widgets to remote servers, which means we can’t log in to myExperiment. I reported the issue to the Wookie project and got on with doing stuff that didn’t require login, intending to go back to this issue later.

In the meantime, a new community member on the Wookie project sent a mail identifying the same problem. I pointed him at the issue and we discussed how it impacts users, in the hope that someone would have time to fix it or find a workaround.

Two days later Scott Wilson committed a fix for the problem.
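For readers unfamiliar with the underlying problem: Wookie fetches remote resources on widgets’ behalf through a server-side proxy, and the bug was that the user’s session cookies were dropped along the way, so login-protected services like myExperiment saw only anonymous requests. As an illustration only (the actual fix lives in the Wookie codebase, which is Java; the function name and header handling here are hypothetical), the shape of such a fix is simply to copy the Cookie header onto the outgoing proxied request:

```typescript
// Hypothetical sketch of a widget proxy forwarding cookies; NOT the actual
// Wookie patch. Incoming header names are assumed lowercased, as in Node.js.
function buildProxyHeaders(incoming: Record<string, string>): Record<string, string> {
  const outgoing: Record<string, string> = {
    // Identify the proxy on outgoing requests.
    "User-Agent": "widget-proxy/0.1",
  };
  // The crucial line: without it the remote service (e.g. myExperiment)
  // sees an unauthenticated request, and login-protected actions fail.
  if (incoming["cookie"]) {
    outgoing["Cookie"] = incoming["cookie"];
  }
  return outgoing;
}
```

A real implementation would also need to think about which cookies are safe to forward to which hosts, which is part of why a fix like this benefits from review within the project’s community.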

In summary, by taking the time to collaborate in a viable community I’ve reduced the work I need to do.

Why did Scott do this? Well, whatever the reason, we are all better off thanks to his efforts, including both the Rave in Context and Wookie projects.

Open Development is more than just the reuse of open source code. It is also a sharing of expertise and the appropriate application of resources.

Ross has already thanked Scott on the Wookie list; this is a public thank-you from the rest of the Rave in Context team as well.

Evaluation findings prompt a reassessment of our assumptions

This is a somewhat belated post on the usability evaluation of our UI mock-ups which we conducted in mid-September. The evaluation was intended to:

  • Appraise the proposed form designs for tablets and smartphones for (i) layout and readability and (ii) flow;
  • Appraise the proposed functionality in terms of its usefulness: are the actions to be supported ones that users would want to perform on a mobile device?

Using iMockups, a low-cost prototyping tool for the iPad, we created mock-ups of all the forms for two form factors: the iPhone (representing smartphones) and the iPad (representing tablets). We then walked through the forms with five individuals, who among them represented expertise in the following areas:

  • Use of myExperiment or another collaborative research tool;
  • Experience in using a range of apps on mobile devices;
  • Use of a website for professional purposes that has a strong social-network component (e.g. LinkedIn);
  • Knowledge of good practice in usability in general;
  • Knowledge of good practice in accessibility.

We had prepared a list of questions relating to aspects of layout and functionality which had proved problematic to design, but we also allowed the participants to give their impressions freely.

Participants’ feedback fed directly into a redesign of the UI and a revision of the final UX requirements document, which we passed to the development team. However, the evaluation also caused us to reappraise the rationale underlying some of our original design decisions, which we note here:

Nature of the activities carried out on each form factor:
The UX design requirements envisaged variations in the likely scenarios of use on desktops, tablets and smartphones, as follows:

‘A researcher who uses myExperiment from a desktop computer, a tablet and a mobile device is likely to perform different tasks in each environment. The desktop might be most commonly used for executing workflows and analysing results, a tablet might be commonly used in meetings to search for and present available resources, and to make minor modifications to existing materials, while a mobile phone might largely be restricted to social activities such as monitoring the activity streams of friends and groups.’

Feedback from the evaluation suggests that this is not necessarily the case: people use smartphones for more demanding tasks than one might expect, and so the differences between form factors specified in the first version of the UX Design Requirements document have been considerably reduced.

‘Real estate’ on screens:
The simple inability to display very much on a smartphone led us to omit certain items from the smartphone UI, including the option to display a full-size image of a workflow. However, feedback from the evaluation suggests that people are willing (even happy) to scroll around sizeable graphics on their smartphones. We have therefore made smartphones display the same data as tablets, but have included the file size of the larger images so that users can decide whether or not to view them.

Technological limitations:
Issues associated with the robustness and speed of networks led us to reduce the functionality available on smartphones. However, feedback from the evaluation suggests that this less of an issue than the usability literature led us to expect: smartphones are often used with wireless networks, and people are willing to wait a reasonable time to access or download something which they know they want and which will be useful to them.

Horizontal swiping and fat-finger syndrome

…are just two of the topics covered in Jakob Nielsen’s latest Alertbox article, Mobile Usability Update, hot off the (virtual) press.

Noting the overall improvements in mobile usability since his first report two years ago (although I’m unsure whether it’s a good thing that the number of guidelines has increased from 85 to 210), Nielsen singles out Android apps as being markedly better – a sign, he suggests, of a growing attention to quality resulting from an increase in market share.

A mobile app still tends to be superior to a mobile website – but in the case of the Rave in Context project, we don’t have this luxury, as we want to develop a single set of widgets to run across a range of devices and form factors.

So, what guidelines have we gleaned in particular? Well, I have just emailed these two to our development team:

Horizontal swiping:

‘Swiping is still less discoverable than most other ways of manipulating mobile content, so we recommend including a visible cue when people can swipe, or they might never do so and thus miss most of your offerings. Also, you should avoid swipe ambiguity: don’t employ the same swipe gesture to mean different things on different areas of the same screen. This recommendation is the same for mobile phones and tablet usability, showing the similarity between these two gesture-based platforms.’

Fat-finger syndrome:

‘…we still see users struggle to hit tiny areas that are much smaller than their fingers. The fat-finger syndrome will be with us for years to come…’

Usability in Context

‘The phrase “mobile usability” is pretty much an oxymoron. It’s neither easy nor pleasant to use the Web on mobile devices’ (Jakob Nielsen, Mobile Usability: Alertbox, 20/07/09).

Thus usability guru Jakob Nielsen throws down the gauntlet to the designer of a mobile app: how to get it right, so that your future users can do the tasks they need in the brightest sunlight, at the top of the highest mountain, in the far reaches of the taiga or in the loudest tropical thunderstorm – in short, accommodating all manner of environments and connectivity?

However, that’s only one of the usability challenges confronting the Rave in Context project. As Sander noted in his previous post, we are creating usable templates – patterns/models – for other developers to apply to specific contexts. We’re doing this by developing widgets based on the functionality of a specific VRE (myExperiment), which will ultimately be decontextualised into the templates. Therefore, we are not only working at one remove from our ultimate users but also, in designing and evaluating the user experience in myExperiment, we have constantly to ask whether the features that make the widgets usable by myExperiment users will be translatable, via the templates, to other – as yet unknown – tools. It’s this aspect that makes the project so intriguing from the usability perspective.

The process by which we are tackling this twin usability challenge is being documented in the Evaluation Plan on the project wiki, but I’ll summarise it here.

Scoping the usability problem space

The first task was to establish the usability ‘problem space’: specifically, our baseline definition of usability and its associated concepts:

  • Learnability: How easy is it to learn to use the app?
  • Ease of use, or throughput: How easily and efficiently can users perform their tasks with the app?
  • Usefulness: To what extent does the app support the purpose for which it is intended: i.e. can users do what they want/need to do in it?
  • Affective response: Do users like using the app?
  • Memorability: How easy is it to re-familiarise oneself with the app after a period of non-use?
  • Flexibility: To what extent has the app been designed such that it can accommodate changes in the tasks that users do?

The aspects of usability on which our evaluation activities with users are focusing are learnability, ease of use and usefulness. Users’ affective response will also be captured, but it can be expected to flow, in part, from data collected in relation to the other factors. Flexibility, in the sense of (i) adaptability of the code to support different form factors and (ii) adaptability of the templates for use with other tools, can only be evaluated internally by the open-source developer community.

From concepts to prototypes

The next step was to conduct a Web search for literature on usability issues relating to mobile applications, which happily revealed an emerging consensus among developers regarding good UI design practice. Combining the guidance most relevant to our needs with established principles of usability (guided largely by the work of Jakob Nielsen), we collated a set of general guidelines for designing the widget UIs (also published on our wiki). These, in turn, informed the compilation of a set of detailed requirements for the user experience.

From the UX requirements we have produced a set of paper mock-ups, which we are currently evaluating through cognitive walkthroughs with five individuals. Each participant has been selected for their expertise in at least one of the following areas: usability, mobile app design, accessibility and the use of research tools with a Web 2.0 dimension. On Thursday 15th September we visited E.A. Draffan of the ALUIAR project at the University of Southampton, who, in an energising conversation, set us right on several matters of accessibility (thank you, E.A.!).

Where next?

We will complete the evaluations and modify the UI designs over the coming week. Unfortunately, resource constraints are preventing us from carrying out our original intention to evaluate digital versions of the revised designs with users. Instead, beginning in the last week of September, we will be conducting a rolling internal evaluation during the development process (which is following an agile methodology). Our final evaluation will be a workshop with potential users in the first half of November: details to follow.