Crazy: 90 Percent of People Don't Know How to Use CTRL+F



Aug 18 2011, 5:37 PM ET

This week, I talked with Dan Russell, a search anthropologist at Google, about the time he spends with random people studying how they search for stuff. One statistic blew my mind. 90 percent of people in their studies don't know how to use CTRL/Command + F to find a word in a document or web page! I probably use that trick 20 times per day and yet the vast majority of people don't use it at all.

"90 percent of the US Internet population does not know that. This is on a sample size of thousands," Russell said. "I do these field studies and I can't tell you how many hours I've sat in somebody's house as they've read through a long document trying to find the result they're looking for. At the end I'll say to them, 'Let me show one little trick here,' and very often people will say, 'I can't believe I've been wasting my life!'"

I can't believe people have been wasting their lives like this either! It makes me think we need a new kind of class in schools across the land immediately: electronic literacy. Just as we learn to skim tables of contents, look through an index, or scan chapter titles to find what we're looking for, we need to teach people about this CTRL+F thing.

Google itself is trying to teach people a little something with its campaign, but the ability to retrieve information via a search engine is actually much bigger than the search engine itself. We're talking about the future of almost all knowledge acquisition, and yet schools don't spend nearly as much time on this skill as they do on other equally important areas.

Comments (1)

Ingo Neitzke Oct 7, 2013

Patrice, thank you for this! Facts like these from the real world help realists wake others up, and keep them from going insane themselves, alone in the ocean of willful refusal to learn.

User Testing in the Wild: Joe’s First Computer Encounter « Boriss' Blog


This past Friday, I went to Westfield Mall in San Francisco to conduct user tests on how people browse the web, and especially how (or if) they use tabs. This was part of a larger investigation some Mozillians are doing to learn about users’ tab behavior.

The mall is a fantastic place to find user test participants, because the range of technical expertise varies widely. Also, the people I encountered tended to be bored out of their minds, impatiently waiting for their partners to shop or friends to meet them. However, rather than completing all 20 tests I was hoping to, I ended up spending three hours testing a man I’ll call Joe.

I find Joe, a 60-year-old hospital cafeteria employee, in the food court looking suitably bored out of his mind. Joe agrees to do a user test, so I begin by asking my standard demographic questions about his experience with the internet. Joe tells me he's never used a computer, and my eyes light up. It's very rare in San Francisco to meet a person who's not used a computer even once, but such people are amazingly useful: it's a unique opportunity to see how someone who hasn't been biased by any prior usage reacts. I ask Joe if I can interview him more extensively, and he agrees.

I decide to first expose Joe to the three major browsers. I begin by pulling up Internet Explorer.

Internet Explorer (as Joe encountered it)

Me: “Joe, let’s pretend you’ve sat down at this computer, and your goal is finding a local restaurant to eat at.”

Joe: “But I don’t know what to do.”

Me: “I know, but I want you to approach this computer like you approach a city you're not familiar with. I want you to investigate, look around, and try to figure out how it works. And I want you to talk out loud about what you're thinking and what you're trying.”

(I show Joe how to use a mouse. He looks skeptical, but takes it in his hand and stares at the screen.)

Joe: “I don’t know what anything means.”

(Joe reads the text on IE and clicks on “Suggested Sites”)

Me: “Why did you click on that?”

Joe: “I don’t really know what to do, so I thought this would suggest something to me.”

(Joe reads a notification that there are no suggestions because the current site is private)

Joe: “I guess not.”

Joe looks around a bit more, but he’s getting visibly frustrated with IE, so I move on to Firefox.

Firefox (as Joe encountered it)

I give him the same task: find a local restaurant. He stares at the screen for a while with his hand off the mouse, looking confused. I ask what he's looking for. “I don't know, anything that looks like it will help!” he says. Finally, he reads the Apple menu bar at the top of the screen, and his gaze falls on the word Help.

“Help, that’s what I need!” says Joe. He clicks on Help, but looks disappointed at what he sees in the menu.

“None of these can help me,” he says.

Joe is getting frustrated again, so I move on to Chrome and give him the same task.

Chrome (as Joe encountered it)

He proceeds to read all of the words on Chrome's new tab page, looking for any that may offer guidance. Luckily for Joe, he spies a link to Yelp San Francisco on the new tab page. He clicks it, and, seeing restaurants, declares he's won.

I want to put Joe through other experiments at this point, but the tests are clearly taxing him. He looks very agitated, and has frequently declared during the tests that he “just doesn't know,” “should have learned this by now,” and “has no excuse for not taking a class on computers.” No amount of assurance that I was testing the software, not him, calmed him down. So, I decide to cut Joe a break. “Alright Joe, you've helped me; maybe I can help you.”

Because Joe has mentioned a few times that he wants email, I get him an email address and show him how to access it at a public computer. We practice logging into Gmail several times, and I end up writing a very explicit list of steps for Joe, which includes items like “move mouse cursor to white box.” One of the hardest things to convey to Joe is the idea that you must first click in a text field in order to type.

When I am convinced that Joe understands how to check his email, I want to show him how he can use his new email address. So, I ask him why he had asked for an email address in the first place. I imagine he’ll say he wants to communicate with friends and relatives.

Joe: “I want discounts at Boudin Bakery.”

Me: “Sorry, what?”

Joe: “I want Boudin discounts, but they keep telling me I need email.”

(Joe takes his Boudin Bakery customer appreciation card out of his wallet and shows it to me)

I’m a little confused, but go ahead and register Joe’s Boudin Bakery card with his new email address. I show him the web summary of all the bread he’s bought lately. “Woah!” says Joe.

So, what did I learn from Joe?

  • Modern applications do little to guide people who have never used a computer. Even when focusing on new users, designers tend to take for granted that users understand basic concepts such as cursors, text boxes, and buttons. And perhaps rightfully so: if all software accommodated people like Joe, it would be little but instructions on how to do each new task. But Joe was looking for a single point of help in an unfamiliar environment, and he never truly got it, not even in a Help menu.
  • No matter their skill level, users will try to make sense of a new situation by leveraging what they know from previous situations. Joe knew nothing about computers, so he focused on the only thing he recognized: text. He ignored icons, buttons, and other interface elements completely.
  • We shouldn't assume that new users will inquisitively try to discover how new software works by clicking buttons and trying things out. Joe found using software for the first time frightening and only continued at my reassurance and (sometimes) insistence. If he had been on his own in an internet cafe, I think he would have given up and left after a minute or so. Giving visual feedback and help when someone is lost may help people like Joe feel they're getting somewhere.
  • Don’t make too many assumptions about how users will benefit from your technology – they may surprise you!


Why WSJ Mobile App Gets ** Customer Reviews


Jakob Nielsen's Alertbox, July 5, 2011  


A confusing startup screen that offends existing subscribers dooms The Wall Street Journal's iPhone app to low ratings.

The Wall Street Journal's iPhone app had just 2 stars in Apple's App Store at the time of this writing. Averaged across 68,418 consumer reviews, the rating is not just a reflection of a few irate users.

As a rough estimate, a 2-star average across 68,418 reviews means that 40,000 users gave the application a 1-star rating. Given the 90-9-1 rule for social design, most users never bother reviewing products, so 40,000 low scores represent at least half a million dissatisfied customers.

The WSJ is one of the world's most respected newspapers and has long been a digital pioneer. How can it produce a 2-star mobile app?

The answer is clear from reading the reviews. The 3 highest-rated reviews all gave 1-star ratings, and their headings were:

  • "Slap." (The first sentence? "These guys have the nerve to charge additional fees to current online subscriber.")
  • "Useless app, have to pay twice for same content."
  • "Charging for content — twice!?!"
It's clear that people are deeply offended by being asked to pay again for mobile access to the newspaper when they're already paying for a subscription.

I would agree with these users if in fact they were being charged twice for the same articles. But they're not. Mobile app access is free to paying website subscribers: they simply have to log in with their existing userid and password.

Why do so many people think they have to pay when they don't? Because of a highly confusing user interface design. The first time you launch the app, the following startup screen appears:

Screenshot of first screen shown when launching Wall St. Journal iPhone app for the first time.
The WSJ iPhone app: Startup screen.

The strongest call to action — both in terms of placement and size — is the offer to subscribe and get 2 weeks for free. (Implying, of course, that there'll be a fee after this period.) Pressing this button gives you this screen:

Screenshot of the screen you get by selecting 'Subscribe Now' from the WSJ app's startup screen
The WSJ iPhone app:
Screen shown after tapping
Subscribe Now.

The seemingly obvious conclusion is that it will cost $1.99 per week to use the mobile app after the first 2 weeks. Although this is the only logical conclusion, it's false. Click the Subscribe button on the 2nd screen and you'll get a screen saying that access is free for current website subscribers.

However, most users will never see this 3rd screen. The Subscribe button has zero information scent for current subscribers. Past experience from all websites and apps will uniformly lead people to the same conclusion: Subscribe is the call to action for establishing a new subscription. Thus, as soon as users see the 2nd screen, they'll back out if they don't want to pay twice for the same articles.

Most people will probably give up right here, because using the app so clearly seems to cost an extra $1.99/week. If some users are particularly persistent, however, they might return to the 1st screen and try the second button.

The second button leads to a registration screen, where you can create a new account that gives you free access to a small number of articles. You aren't told that you can use your existing website account if you have one. Again, the only sensible conclusion is that you can access the full set of articles only by ponying up again and paying the extra $1.99/week.

Wildly persistent users might notice the much smaller Log In area at the bottom of the startup screen. However, they're unlikely to press this button because their experience with the app so far has taught them that they must register (and pay extra) before being allowed to log in.

Those few users who do press Log In will finally see that they can use their existing credentials to access the app. However, as the many negative App Store reviews attest, few users ever make it this far.

(Also, our tests of hundreds of mobile apps have clearly shown a strong user preference for engaging with the first option; this is similar to what we see when testing mobile sites. Even though phone screens are small, users might still overlook the last option as they focus their attention on the top of the screen.)

Does it matter that existing website subscribers give up on the mobile app? After all, the company already has their money from the website subscription.

Also, the design works reasonably well for new subscribers — who are the only ones generating incremental revenue. So why not just focus on new subscribers and ignore old customers and their horrible user experience?

Two reasons:

  • Existing subscribers feel so insulted by having to pay twice that their negative ratings dominate the App Store feedback. Thus, many potential new subscribers will see the 2-star rating and immediately abandon the application download. With 500,000 alternatives, people don't have time for junk apps.
  • People who've paid for website access are the newspaper's most loyal fans. Paying for Web content is fairly unheard of; customers willing to do so should be treasured, not treated like garbage.
Newspapers have two strategic imperatives for surviving in the Internet age:
  • Retain credibility: they must be more highly respected than the random sites users dredge up on Google.
  • Deepen relationships with loyal users, so that they turn to the paper first instead of using one of the many aggregators that commoditize content.
Credibility and relationships both take a dive when customers are mistreated, particularly when they feel unfairly wronged.

The long-term impact of this confusing application UI is a severe erosion of the WSJ brand among the people who matter most: loyal, paying readers. As this case perfectly exemplifies, usability is not just a matter of whether users can press the correct button. User experience is branding in the interactive world. (We cover this key point in further depth in our Brand as Experience seminar.)

Another difficulty stems from the newspaper's convoluted subscription model: print subscribers must pay extra to access the online version. This requires an extra layer of explanation to distinguish different kinds of subscribers. For the sake of the redesign below, I'm not touching the pricing model, but it does complicate the UI and thus reduces the conversion rate.

A second reason to give print subscribers free access? They're the most valuable customers: print advertising is more effective than online advertising because broadsheet layouts have a stronger impact. Thus, a newspaper should encourage readers to maintain their print subscriptions by treating them well. However, it's beyond the scope of this article to resolve the tradeoff between making short-term money by overcharging print subscribers vs. the long-term loss of pushing people away from print.

Although overall user experience is paramount, as this example shows, you must also pay attention to the details. A single bad screen can cost millions of dollars in lost revenue and brand value.

As the saying goes, you get only one chance to make a first impression. That's why startup screens are crucial. This is particularly true for mobile users, who often have fairly low motivation to mess with apps they've downloaded for free. But even in "regular" software, the installation, setup, and initial screens can make or break an application.

Similarly, for e-commerce, a relatively minor element such as the confirmation email can have a huge branding impact. Write a poor subject line or From field, and customers will never open your messages and will conclude that you don't care about their order. Consider how much money you spend on brand advertising to get a minute uplift in how people feel about your company. Compare this to the tiny cost of writing better transactional messages, which produce a much bigger uplift in customer sentiment. Actual experience beats promotional image advertising many times over in influencing brand reputation.

If a bad screen can cost millions, a better screen is worth a lot. Here's our idea for an alternative startup screen for the WSJ app:

Our proposal to redesign the WSJ mobile app for improved usability
Proposed redesign of the WSJ startup screen.

This screen eliminates the horrible usability problem discussed above by making it clear that existing subscribers can use their existing account to access the app. Also:

  • It's clear that there are 3 possible scenarios, because the use cases are spelled out rather than implied.
  • Placing buttons side-by-side reduces the possibility that users will overlook one and simply go for the first option.
  • Simplified workflow eliminates the ambiguity between subscribing and registering and presents fewer options on the first page.
We also changed how the price for new subscribers was presented — $103.48/year, instead of $1.99/week — because the customer's credit card is charged the full annual fee at sign up. Being dinged for $103 when you expect a $2 charge is a highly negative experience, and probably the source of many customer service calls.
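The two framings describe the same amount; a quick check that the advertised weekly price really does compound to the full annual charge billed at sign-up:

```python
# The annual charge implied by the advertised weekly price.
WEEKLY_PRICE = 1.99
WEEKS_PER_YEAR = 52

annual_charge = round(WEEKLY_PRICE * WEEKS_PER_YEAR, 2)
print(annual_charge)  # 103.48 -- the amount actually billed at sign-up
```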

The 2nd screen can put greater emphasis on enticing new users to pay for full rather than limited access. For example, the screen could state that the subscription fee equates to only $1.99 per week or 33 cents per day (they publish only 6 editions per week). Here, it would be prudent to run an A/B test comparing annual vs. weekly cost information on the first screen.

That said, you should only change to a weekly price if it offers dramatically better conversion. If weekly prices are only slightly better, it's best to show the annual price rather than suffer the long-term penalties of reducing brand reputation by subjecting customers to a bait-and-switch.

One of A/B testing's main weaknesses is that it encourages an overly short-term focus on users' initial behavior. It's easy to overlook the long-term effect of design changes if you look only at clicks, which is one reason I like supplementing A/B tests with qualitative tests that give more insight into users' thinking.

Initially, I was tempted to remove the Call to Subscribe link from the startup screen. (Phone contact info should definitely be on the "detailed info" screen.) My guess is that the WSJ gets numerous customer service calls because of the current confusing design. Once we make access for existing subscribers obvious and remove the misleading pricing for new subscribers, calls should drop to a fraction of the existing volume. (The near elimination of customer support costs is one of the key ROI metrics for usability.)

However, I kept the call-in number in this redesign for two reasons:

  • Our testing shows that being willing to disclose your phone number is a strong credibility marker that enhances a design's persuasive value.
  • There could be other issues — such as expired passwords or declined credit cards — that drive call volume. The company's own support center stats will show this, but I don't have the data. Also, our studies of the mobile user experience show that many people are reluctant to enter credit card numbers on their smartphones, which aren't seen as being as safe as a wired connection. This might change, but currently it's best for m-commerce to offer an alternate payment channel.
The initial WSJ app experience involves two distinct choices:
  • New vs. existing subscribers
  • Paid full access vs. free limited access
It's possible to bundle all of this into a single screen (as shown below), but the resulting UI would be too complex and error-prone for mobile use.

Instead, we decided on a 2-step workflow: first, you branch between new and old users; next, two different screens handle each case with an appropriate design for the specific circumstances.

The 2-step workflow's obvious penalty is that users must pass through the 2nd screen each time they log in. However, because this is a low-security app, it should be possible to store the login credentials on the phone and log in users automatically upon subsequent uses. (A high-security app such as online banking couldn't do this, but it's no big loss if someone can read your newspaper for free after stealing your phone.)
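The remember-me behavior described above can be sketched roughly as follows; `CredentialStore`, `login`, and the other names are hypothetical, and a real app would keep saved credentials in the platform keychain rather than in memory:

```python
# Sketch of auto-login for a low-security app (hypothetical names throughout).

class CredentialStore:
    """Stand-in for on-device storage; a real app would use the platform keychain."""
    def __init__(self):
        self._saved = None

    def save(self, userid, password):
        self._saved = (userid, password)

    def load(self):
        return self._saved


def login(store, prompt_user):
    """Reuse saved credentials when present; otherwise prompt exactly once."""
    creds = store.load()
    if creds is None:
        creds = prompt_user()   # the login screen appears only on first launch
        store.save(*creds)
    return creds


prompts = []
def prompt():
    prompts.append(1)
    return ("reader@example.com", "secret")

store = CredentialStore()
login(store, prompt)
login(store, prompt)
print(len(prompts))  # 1 -- the user was only asked once
```

On every launch after the first, `load()` succeeds and the 2nd screen is skipped entirely, which is what makes the extra step of the 2-step workflow cheap in practice.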

The following shows an alternate workflow design that would allow existing subscribers to bypass the 2nd screen by logging in directly from the 1st screen. We discarded this approach because it would cause too many usability problems for new users. In countless test sessions we've seen users being attracted by the magnetism of open type-in fields. Users' action bias is so strong that many of them go straight where they can "do stuff" instead of reading the text or considering the totality of the options on the screen.

Alternate login workflow that's NOT recommended because of usability problems for new users
Potential launch screen with fast-track option for registered users.
(Not recommended, because the design is error-prone for new users.)

Workflow design is a big issue in application usability. In many cases, a tighter workflow best expedites the paths users take to their goals. But in many other cases, it's better to add a few steps to ensure that each step is focused and self-explanatory. What matters to usability is not the number of clicks, but the amount of user frustration and time spent. For an app, there's no delay in moving from one screen to the next, so it's often better to resolve the trade-off in favor of additional screens. In contrast, mobile sites must be more conservative in adding steps because of the higher interaction cost imposed by the (currently slow) download delay for each added screen.

Ideally, the next step is to user test the redesign. You can do this with paper prototyping, so you don't have to fully implement a new app to see whether the new design solves the current design's usability problems.

Testing typically uncovers additional opportunities for improvement and thus produces an even better design. In this case, fielding a redesign should allow the WSJ to improve its App Store rating from 2 to 4 stars in short order. It would thus quickly pay for itself through increased downloads and account activations. The main payback, however, would come from the long-term benefit of no longer infuriating the company's most loyal customers.


Utilize Available Screen Space (Jakob Nielsen's Alertbox)


Jakob Nielsen's Alertbox, May 9, 2011  

Websites and mobile apps both frequently cram options into too-small parts of the screen, making items harder to understand.

A computer screen's precious pixels are the world's most valuable real estate. Amazon's Add to Cart button is 160×27 pixels, or 0.003 square feet (0.0003 m2) at a typical 100 dpi monitor resolution. You could crowd almost 800,000 Buy buttons onto the floor space of the average American home, which currently sells for $160,000. Even a single Buy button will often bring in more than that — let alone the revenue from 800,000 buttons.
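The arithmetic is easy to verify; a quick back-of-the-envelope check, assuming the roughly 2,400-square-foot average US home that the 800,000-button comparison implies:

```python
# Back-of-the-envelope check of the button math (100 dpi assumed, per the article).
DPI = 100
w_px, h_px = 160, 27                       # Amazon's Add to Cart button, in pixels

area_sq_in = (w_px / DPI) * (h_px / DPI)   # 1.6 in x 0.27 in
area_sq_ft = area_sq_in / 144              # 144 square inches per square foot
print(round(area_sq_ft, 3))                # 0.003

AVG_HOME_SQ_FT = 2400                      # assumption: rough average US home size
print(round(AVG_HOME_SQ_FT / area_sq_ft))  # 800000 buttons
```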

Normally, when something is extremely valuable, you try to conserve it. But screen space shouldn't be hoarded, it should be spent. I see too many designs that cram highly valuable content or action items into tiny spaces while wasting vast amounts of screen space.

8 years ago, I analyzed the space allocation across 50 website homepages and found that only 40% of screen space was used for navigation or content of interest to users. This was in the days of 800×600 monitors, when space was particularly precious. Today, many websites are a bit better, and bigger monitors obviously allow for more room to play.

Still, sites frequently open important features in tiny pop-ups, violating two guidelines:

  • Don't pollute users' screens with pop-ups.
  • Show stuff in a space that's big enough to let users see everything they need to see without scrolling. For example, follow the guideline for e-commerce product pages to make photos really large when users click the Enlarge Image button.
Even on a big desktop monitor, space is at a premium and should be utilized. Mobile screens are so small that it's a sin to waste space.

Let's look at a screen from an iPad study we ran recently. This is a new study, but the style is reminiscent of the wacky designs that dominated early iPad apps:

ABC News iPad app: initial home screen
ABC News iPad app: home screen.

The user interface to the current news is a huge sphere; you can spin it using gestures. It's a neat graphic and the initial interaction is enticing: at first, users found it "kind of cool," "really cool," "fancy," "eye-catching," and "kind of fun."

Despite these positive initial reactions, users didn't like the screen and turned to an alternate view as soon as they discovered it. Regular users of the ABC app told us that they preferred the more standard-looking news listing as their main screen when using the app.

ABC News iPad app: alternative home screen
ABC News iPad app: The alternate home screen with more news.
This is what people actually use.

Why do users favor the regular, slightly boring design for everyday, actual use? Because it shows more news at a glance. The sphere offers a clear view of only a single story. Other stories have distorted images and captions that are hard-to-impossible to read. What a waste of screen space.

Higher information density = less need to move around and higher likelihood that you see what you want. Edward Tufte's call for charts with more "data ink" in his masterwork The Visual Display of Quantitative Information is related to this idea, though slightly different: printed charts are different from interactive user interfaces. In a UI, you can't cram the screen full of too much info or users will feel overwhelmed.

A second example from the recent study comes courtesy of NASA:

NASA iPad app
NASA iPad app: The home screen with drop-down menu expanded.

The main screen on NASA's iPad app is a visualization of the solar system. This is not a highly efficient use of space, but it's actually okay because it's an engaging image that offers a wide selection of choices from the opening screen, while clearly communicating what the app is all about.

In user testing, the main problem with this screen was the risk of touching the wrong heavenly body: the distance between Earth and the Moon is only 60% of our usability guideline for separating objects in touchscreen applications. (See full report with mobile usability guidelines.)

Another usability problem related to space utilization (screen space, not outer space): the drop-down menu featuring satellite missions was hard to use for two reasons. First, the button's icon looked more like a fish skeleton than a satellite, but that was a relatively minor issue.

The bigger problem was that it was physically hard to move through the long list of missions — doing so seemed to require almost infinite scrolling. Also, each satellite was hard to recognize from its picture and acronym. Most satellites look much the same unless you're even more of a space nerd than NASA has the right to expect of its users.

The designers tried to emulate a mega-menu, but failed to achieve its benefits for several reasons:

  • There's no categorization of the menu items — a good mega-menu would have broken the missions into groups and given each group a title. Doing so would make it much easier to find certain types of satellites, such as those for inner-planet exploration.
  • Labels are not explained — for example, it might be helpful to know that "ACE" measures high-energy particles, whereas "AIM" studies mesosphere ice.
  • Illustrations are meaningless. Pretty fluff, but still fluff.
Instead of the ever-scrolling menu, it would have been better to take users to a separate full-screen overview that showed all the missions in a single view, making them much easier to compare. Spending more screen space is okay, as long as you give users an easy way back to the main view.

The proliferation of small, hard-to-use interaction areas in iPad apps is partly Apple's fault because of a design mistake in the default email application. The inbox is shown as a skinny menu down the side. This is great in the landscape view, letting users quickly alternate between the inbox and individual messages so they can quickly process their mail.

In the portrait orientation, however, the inbox menu appears as an overlay that partly obscures the message content, making it impossible to work with the two panes simultaneously. Why show two panes, when you can't use both? It would be better to display the inbox across the entire tablet screen, showing more messages and/or more extensive previews of each message.

Mobile devices and tablets are inherently small, so you must optimize the use of their screen space and show things as large as possible. Desktop screens are bigger, so you can often achieve a better user experience by enlarging things even more.

There are very few exceptions to the rule that bigger is better in user interfaces. One example is found on sites that insist on opening new windows and maximizing them — even on the biggest of monitors. Once people get a 30-inch monitor, they don't want maximized browser windows anymore; it's highly annoying to find, say, a glossary entry taking up 2560×1600 pixels and obscuring the other windows you're working with.

Mainly, though, let's stop cramming information into tiny peepholes. Optimizing the UI for the available screen space is a key strategy for improving usability (and thus lifting your conversion rates).

Full-day seminar Web Page Design: The Anatomy of High-Performing Web Pages at the annual Usability Week conference.

The conference also has 2 days on mobile usability: Usability of Websites and Apps on Mobile Devices and Touchscreen Application Usability.

Optimizing a Screen for Mobile Use



A single mobile screen with almost no features still required 10 design changes to meet usability guidelines for mobile websites.

During our recent Asia-Pacific tour, we took the opportunity to conduct several usability studies. Sometimes we tested regular websites to update seminars such as Fundamental Guidelines for Web Usability. But we spent more time on issues where we expected to find bigger regional differences, such as mobile usability and social user interfaces.

One of the mobile sites we tested was AllKpop.com, which covers a topic of seemingly great fascination in many Asian countries: Korean pop stars.

AllKpop.com: the live mobile homepage, as tested in Hong Kong.

AllKpop does many things right:

  • Most important of all, it supports a task that's perfect for mobile use: celebrity gossip. We've known since our first mobile usability studies in 2000 that killing time is a killer app for mobile. Many other tasks make little sense in the mobile scenario; no matter how great the design, the mobile versions wouldn't get much use and creating them is a waste of time.
  • Almost as important, it has a separate mobile version. Desktop computers and mobile devices are so different that the only way to offer a great user experience is to create two separate designs — typically with fewer features for mobile.
  • Because the server auto-senses whether they're using a mobile or a desktop device, users don't have to manually choose their version. As we know from testing, usability drops dramatically when the mobile and full sites have different URLs because users often end up with the wrong user interface.
  • Touch targets for each headline are fairly large.
  • Content-carrying keywords usually appear at the beginning of the headlines. For this site, the pop star's name is the most important information for users, and it typically appears first.
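The server-side auto-sensing mentioned above is typically done by inspecting the request's User-Agent header. A minimal sketch of the idea (the pattern and function name are my own illustration, not AllKpop's actual implementation; real sites generally use a maintained device-detection database rather than a hand-rolled regex):

```python
import re

# Crude illustrative pattern; production sites use maintained
# device-detection libraries instead of a regex like this.
MOBILE_UA = re.compile(r"Mobile|Android|iPhone|iPad|BlackBerry", re.I)

def serve_mobile_version(user_agent: str) -> bool:
    """Return True if the request should get the mobile UI."""
    return bool(MOBILE_UA.search(user_agent or ""))
```

Because the check happens on the server, both versions live at the same URL and the user never has to choose.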
However, the site doesn't follow all the guidelines for mobile usability, so we decided to create an alternative design that did:

Our proposed redesign of AllKpop's mobile homepage.

Our redesign included 10 major changes:

  1. Fewer features, which we achieved by removing three elements:
    • bylines, because they aren't needed to choose an article (which is the only point of listing headlines on the front page);
    • selectable categories and tags, which were too small to hit reliably anyway (and categories like "music" seem worthless on a pop site); and
    • the triangle-button that displays a summary in place (instead, we always show a summary).
  2. Bigger touch targets. The entire story tile can now be tapped, and users no longer need the added precision of tapping the headline itself. (In the live design, each tile contains several tiny tappable areas, with low usability and questionable utility.)
  3. Full headlines instead of truncated headlines. This is probably the biggest redesign improvement, because the full headline provides substantially stronger information scent than the few words visible on the live site.
  4. Enhanced scannability by highlighting each pop star's name in the headlines.
  5. Even more information scent by showing a short story summary (a "dek") under each headline.
  6. Using pop star photos instead of date icons. Not only does this add some visual interest, it further enhances scannability and information scent as many users will recognize their favorite star's face faster than they can read a headline.
  7. Room for 4 full story tiles without scrolling. The slightly tighter spacing lets users view the entire 4th story summary in their first scan of the page. If users do scroll down, the ability to view more tiles in less space also means that they work a bit less for each new story, and so they're likely to want more of them. Because this second benefit is relatively small, we considered making the tiles smaller to display more stories on the first screen. On balance, the added information scent from the story summaries and pop star photos seemed a better use of the space — but testing an alternative would be worthwhile.
  8. Showing the publication date only as a divider between stories published on different dates. Because so many stories are published each day, users typically see only the current day's date when they access the site, unless they scroll down far enough to reach yesterday's news. Thus, the story date is not worth the substantial screen real estate it occupies in the live design. In general, it's good to question any mobile design that repeats the same information multiple times; such redundancy is probably a poor use of highly limited screen space.
  9. Adding more space between the navigation bar's two options so users are less likely to touch the wrong one.
  10. Labeling the drop-down menu instead of simply denoting it by a triangle. (It's just above search in the original design — a subtle presentation that's mostly overlooked by users.) Depending on which commands are actually in the menu, a different name might be better. We didn't redesign the entire navigation system, but we assumed that a revised categorization system would be the most valuable and usable way to navigate the site, after headline-tapping and search. (See our seminar on IA structuring for more information on how to determine the best access schemes.)
As this example shows, even a small mobile screen has room for many user interface intricacies. Even though this site (appropriately enough) has very few features, we made 10 usability improvements — including cutting the features even more.

(The sidebar shows 3 design iterations we tried before arriving at the redesign discussed here.)

The tighter the design constraints, the more you must polish the user interface to deliver optimal usability. And mobile is an incredibly constrained design problem.

How I learned to stop wasting my time in usability tests

James leaned back in his chair and rubbed his eyes. He had sat through 5 usability test sessions today and frankly, it had been hard going. Duplicate problems had started appearing with the second participant of the day, and James's attention had wandered a bit. By participant 3 he was checking email and by participant 4 he was on Facebook. He knew he should pay more attention during the sessions, but he had convinced himself that he worked best when he reviewed the screen recordings of each session the next day.

It was when James played back the first recording that he knew something was wrong. The screen video had recorded fine, and he could see the participant's face in the picture-in-picture recording. But there was no audio. He checked the mute button on his computer but the volume was up to maximum. He jumped to the next recording, then the next. They were all the same. For some reason the microphone hadn't worked.

He had no idea what his 5 participants were saying.

James looked back at his notes and sighed. There wasn't much on that half-page of doodling that was going to help. He could recall a couple of stand-out problems from the first few participants but he knew that wasn't going to be enough for the design team.

What would he put in his report?

The traditional approach to note-taking

If you've ever observed a usability test, you'll know that it's often hard to keep up with the tempo of what's going on. Nothing seems to be happening — and then suddenly a handful of usability problems appear at once. It seems impossible to get them all down: you write down one usability issue, but that prevents you from observing the next problem. You look at the participant — who is now struggling with a different problem — and you wonder how the participant got here and what you've missed. If you have the job of moderating the session and taking notes, it's even more difficult: how can you focus on the participant and take notes?

So it's no surprise that many people think the easiest solution to this problem is to use James's approach: skip the note-taking and just review the recordings, or simply note down the key issues after each participant has finished.

Although this sounds like the easy solution, it's not. Passively observing multiple usability sessions is tedious. It gets increasingly difficult to stay focused as you watch subsequent participants. I'm not sure if this is due to being cooped up in a room for a long time, or because subsequent participants run into many of the same problems you've seen earlier, or because some participants aren't very interesting. But whatever the cause, you need some way of helping you to pay attention, and there's only so much coffee you can drink.

Another reason why this isn't a great solution is that your notes end up as a jumble of observations. Review your notes, and you'll find participant opinions ("I prefer a larger search box") mixed up with behavioural observations ("The participant doesn't understand the 'contact' label") and both of those laced with demographic notes ("The participant spends about £30/month on their mobile phone bill").

What is datalogging?

Here's another solution.

As you watch the test, you should note down your observations of the participant's behaviour as single letter codes: an approach known as "datalogging". Datalogging is a kind of shorthand developed by students of animal and human behaviour (if you'd like more background on the method, consult the ethologist's bible, "Measuring Behaviour").

The advantages of datalogging

Datalogging has several advantages:

  • When lots of observations come at once, you need only note the observation code — you can then review this part of the session later, in the video recording.
  • When scanning your notes, the observation codes make it easy to distinguish one class of observation (e.g. the usability issues) from other observations.
  • Datalogging ensures you note all behaviours, not just the ones that stand out (this helps reduce bias in your observations caused by the reconstructive nature of human memory).
  • Lightweight documentation like this ensures that rapid, iterative design projects have a record of what drove the current design without the need for formal reporting.
  • Datalogging is one of those things you'll be glad you did when, like James found, there are problems with the video recording (e.g. when the sound is poor or when the recording is corrupted).

What to write down

As a rule of thumb, you should average about one observation per minute. But remember this is an average: observations are a bit like buses (none for ages, then three come along at once). For each observation, you should write down:

  • The time.
  • The class of observation.
  • A short description.

An excerpt from a session might look like this:

An excerpt from a typical logging session
Time Class Description
4:35 M Scans nav options but doesn't choose
5:10 C "I'm looking for a search box now"
6:07 X Doesn't seem to spot the link to the 'search' page

Where "M" is the short code for "Miscellaneous observation", "C" is the short code for "General comment" and "X" is the short code for "Usability problem".
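Because every log line has the same fixed shape (time, single-letter code, description), notes like the excerpt above are trivial to process afterwards. A hypothetical sketch in Python, using the sample lines from the excerpt:

```python
from collections import Counter

# The three rows from the example logging session above.
log_lines = [
    "4:35 M Scans nav options but doesn't choose",
    "5:10 C \"I'm looking for a search box now\"",
    "6:07 X Doesn't seem to spot the link to the 'search' page",
]

def parse(line):
    """Split a log line into its time, observation code and description."""
    time, code, description = line.split(" ", 2)
    return {"time": time, "code": code, "description": description}

observations = [parse(line) for line in log_lines]

# Tally observations by class, e.g. to compare how many usability
# problems (X) were logged versus general comments (C).
counts = Counter(obs["code"] for obs in observations)
```

This is exactly the property that makes the codes worth the discipline: one class of observation can be pulled out of the jumble with a single filter.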

Here's a list of marker definitions that I use in my tests. This list may seem a bit imposing if you've never done datalogging before; if that's the case, just use 2-3 items in the list until you feel more confident.

Marker definitions
Code Definition description
X Usability problem
D Duplicate usability problem (described earlier)
V Video highlight — an "Ah-ha!" moment
C Comment (general comment by participant)
P Positive opinion expressed by participant
N Negative opinion expressed by participant
B Bug
F Facial reaction (e.g. surprise)
H Help or documentation accessed
A Assist from moderator
G Gives up or wrongly thinks finished
I Design idea (design insight by logger)
M Misc (general observation by logger)

Technologies to help

All you really need to do datalogging is a digital watch (or an analogue watch with a second hand), a pen and a pad. You'll probably want to print a crib sheet showing the codes you're using, just in case you forget.

But there's some technology to help you too. Todd Zazelenchuk has written a great article describing several datalogging tools. In this section, I'll describe just 3 different tools to give you a flavour of how technology can help you.


The Livescribe Pulse is part pen, part digital recorder and part scanner. As you write on the special dotted paper, an infrared camera at the tip of the smartpen tracks everything you write down, and a built-in digital recorder records the audio in the room. When you review your notes, you can play back the session by pointing the pen at the appropriate part of your notes. For example, if you point your pen at the "X" usability observation you made, you'll actually hear (through mini-speakers in the pen) what was going on in the room as you made that observation. You can also import your notes and recordings into your computer. It sounds like amazing technology, and it is. Unfortunately, the audio recording quality isn't great and the pen itself is fat and slightly awkward to hold. (The consequence for me is that it makes my writing look messy, and then I find I take fewer notes.)


Morae is the software platform of choice for usability testers, and it includes a built-in datalogging tool that timestamps and synchronises your observations with the participant screen recordings. You can search across participant recordings for different observation codes and export the codes and any associated notes to Excel. TechSmith has a good video on datalogging with Morae. If you already own Morae, this is the route I'd suggest you take. But if you don't own Morae, you'll find this an expensive route to datalogging.


Most people have a copy of Excel on their computer, and Excel's table-based layout makes it a natural choice for taking notes during a usability test session. It's easy to set up Excel with a column for the time, a column for the class of observation and a final column for your notes. What's less easy to do in Excel is to timestamp your observations. You can find various macros that will do this, but you may find your company insists you disable macros in Microsoft Office to reduce the risk of viruses. And the latest version of Excel for the Mac does not support macros.

Fortunately, there's a workaround. You simply need to tell Excel to allow circular references and then create a circular formula. Here's an Excel spreadsheet I prepared that you can use to timestamp your observations:

Macro-free Excel spreadsheet to timestamp observations (xls format)
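For reference, the macro-free workaround relies on Excel's iterative calculation mode (it must be switched on under the calculation options, or the circular reference is reported as an error). A circular formula of the following shape, filled down the timestamp column, stamps the current time once the adjacent cell is filled in and then stops changing on later recalculations (the cell references here are just an example layout, with the observation code in column B and the timestamp in column A):

```
=IF(B2<>"", IF(A2="", NOW(), A2), "")
```

Format column A as a time. The trick is the self-reference: while A2 is empty the formula evaluates NOW() once; afterwards it keeps returning its own previous value.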

How to use your log to write a report in minutes

If you code observations with a program like Excel, you can sort and edit your observations easily. Add a couple of columns to the spreadsheet (like "Possible fix" and "Owner") and it becomes the basis of a bug list for the development team. You'll need to think about the findings, properly interpret the usability issues and develop fixes that will work, but once you've done this you can email the list to the development team and you've just created the world's fastest usability report.
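As a sketch of that workflow, here's how the bug list might be produced from a CSV export of the log, using only the Python standard library (the column names and codes follow the marker table above; the sample rows are made up for illustration):

```python
import csv
import io

# In practice you'd read this from a file exported from your log;
# an inline sample keeps the sketch self-contained.
raw = """time,code,description
4:35,M,Scans nav options but doesn't choose
6:07,X,Doesn't spot the link to the 'search' page
8:12,X,Taps the wrong navigation option
9:40,P,Likes the photo thumbnails
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Keep only usability problems (X) and duplicates (D), then add the
# extra columns that turn the log into a bug list for developers.
issues = [r for r in rows if r["code"] in ("X", "D")]
for issue in issues:
    issue["possible_fix"] = ""
    issue["owner"] = ""
```

From here the filtered rows can be sorted, assigned and mailed out, which is the whole "report in minutes" point.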

Overachievers might want to format the Excel observations so they look nice and pretty using Microsoft Word's mailmerge feature. Once you've set the template up, this literally takes just a few minutes to produce and everyone will think you've spent hours on it.

While your report is being read by the development team, old-school usability practitioners like James are just beginning to review the recordings of the session.

You'll find that rapid reporting like this makes you very popular with management and the development team.

Always Split Test

Split testing is extremely important. I'm going to explain to you why it's so important, using a REAL life, raw, VERY SERIOUS example. I'm going to show you that there's always a way to be split testing: never settle for the industry norm, no matter what you are doing.

So, let's go through an example of an experiment that I did with a homeless man. First, realize this... this experiment improved the homeless man's earnings by over 100% over several days thanks to me, and on top of that I paid him to let me take pictures (he was very happy about being featured in a blog article online, especially after I explained exactly what I was doing).

Boring. The rate of people giving him money is low. Time for a split test.

As you can see in the first picture, he was using a very standard "I'm homeless, help me" sign with no real call to action, just bullet points listing what's wrong with him. The biggest call to action on the sign is "Hi, I'm Keith", which isn't really a call to action and wouldn't motivate many people to come over. Those are the biggest flaws in his banner.

The next problem was that his cup, the goal of what he wanted people to do, was tucked between his sign and his leg and was not very inviting. This would be like putting an opt-in form or buy button in the footer of a lengthy website, tucked in the corner and blended with the site colors so you can barely see it. You need to call attention to what you want someone to do, not tuck it away in the corner.

So now we move on to a split test. The first thing we always do is make slight modifications to what we already have. In this case it wasn't practical to change the copy on his banner, so we decided to at least make the goal of what he wanted people to do a little more appealing, and asked him to pick up the cup and try that for a while. This didn't help his conversions much at all.

Still boring, but this time he's at least holding up the cup...

Next, we're going to do an actual split test by comparing the results of another choice with the results and information that we already have about how well his current setup works (not very well).

Introduced a higher converting banner w/ bribe, working much better.

What we did here was quite different from what most homeless people would do: we focused on a different angle. Anyone sitting on the side of the street with a cup already carries the "I'm homeless, help me" stigma, so we don't necessarily need to make that a prominent part of the banner. The next big difference is the material: we went from cardboard to white to spark the interest of people walking by, instead of triggering the automatic negative associations they have with cardboard and homeless people. We want this to look like a new-age homeless man who is really trying to make it work for himself. The biggest difference is that we are now introducing a bribe. We are basically saying, "Hey, I'm homeless, help me, donate to help feed my family and pay my medical bills... but not only that, if you donate right now I'll give you a free squirt of hand sanitizer." This can be applied almost as-is to list building and all sorts of other aspects of Internet marketing (don't try to bribe people to complete offers, though; just bribe them to get on your email list). One solid list-building strategy is to get people looking at your opt-in form and then give them something for free, aka a bribe, to get them to pull the trigger and fill out the form.

Now he's smiling, interacting and building relationships with potential customers, even more conversions.

By the way, this experiment wasn't just for shits and giggles. The homeless man started to draw a crowd VERY quickly on this busy street, and people were genuinely intrigued that a homeless man was giving away free hand sanitizer for donations to help his family and medical bills. I talked to a few people walking by and asked them what was going on (playing stupid), and they told me there was this "genius" homeless man who they were happy to give money to, because he was showing that he must be sober and really trying to make things happen with his cool concept. In fact, this may even have made the local news.

Moral of the story: always be split testing and trying different minor tweaks on your landing pages to increase conversions. Things as small as colors can play a major factor. (Notice how in the bum example I stayed strictly within his niche of being homeless and wrote in black as if a homeless person had written the sign, then added an "out of place", slightly irrational element by using a red cup.) In effect, it created a buzz among the people walking by and got them talking, debating where he got the cup and whether it was a joke or he was some genius bum.
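The "compare variant B against the baseline" arithmetic is the same for a landing page as for a street sign. A toy sketch (the traffic and donation counts are made up for illustration, not measured from Keith's corner):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of passers-by (or visitors) who take the desired action."""
    return conversions / visitors

# Hypothetical numbers: variant A is the cardboard sign,
# variant B is the white sign with the hand-sanitizer bribe.
rate_a = conversion_rate(4, 200)   # baseline
rate_b = conversion_rate(18, 200)  # challenger

# Relative improvement of B over A.
lift = (rate_b - rate_a) / rate_a
```

In a real test you'd also run the numbers through a significance check before declaring a winner, since small samples produce big swings by chance.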

The better call to action (interesting) banner with a bribe worked best!

P.S. This BARELY scratches the surface of how far you can go with your split tests. The next avenue to explore would be to see how many different variations of the 2nd sign we can do (with the copy moved around and the product switched/moved around, etc.); there would be many more tests from that point.

P.P.S. This is a visual example from real life that's meant to teach you something; hopefully you weren't offended by it. If you were, then you got the wrong message out of this. The homeless man, Keith, was proud and happy to do it. I may bring him in for an interview and do a video on him. He really has an amazing life story, and I'd like to feature him in something that can help others avoid his unfortunate situation and put some more money in his pocket to get back on track with his life.

Ten Usability Heuristics

by Jakob Nielsen

These are ten general principles for user interface design. They are called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines.

Visibility of system status
The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world
The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom
Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention
Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall
Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use
Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design
Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation
Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.