📓

About Face: The Essentials of Interaction Design

By Alan Cooper, Robert Reimann, David Cronin, and Chris Noessel
 
 

Interaction Design Principles (starts p.173)

This set of interaction design principles is largely geared toward minimizing work by giving users greater amounts of feedback and contextually useful information.
 
Christopher Alexander's architectural process work in the late 1970s laid the groundwork for design patterns: capturing useful design solutions and generalizing them to other problems (think of modals, text inputs, etc.). These should always relate back to the human aspect of what the pattern is intending to solve. What is a modal attempting to solve? These filter up into patterns in programming code.
Patterns are always context-specific. When capturing a pattern, it is important to record the context to which the solution applies, one or more specific examples of the solution, the abstracted features common to all the examples, and the rationale behind the solution (why it is a good solution). But beware that patterns are not off-the-shelf solutions: each implementation of one differs a bit from the last. Alexander notes that architectural patterns are the antithesis of prefab buildings, because the context is crucial to determining the manifest form of the assembled patterns.
The essence of an interaction design pattern is in 1) the relationships between represented (digital?) objects, and 2) between those objects and the user's goals.
 

The types of interaction design patterns

 
Postural Patterns
  • Help determine the overall product stance in relationship to the user. For example, one pattern might be "transient", meaning a person uses it for brief periods of time to achieve a larger goal elsewhere. An example might be a ticket-scanning machine in a city metro station: the interaction is designed to be brief and transactional.
Structural Patterns
  • These are related to the arrangement of info and functional elements on a screen: these are highly documented, like iOS and Android interface documentation. Some examples are views, panes, and other interface groupings.
Structural Pattern Examples
 
Behavioral Patterns
  • These solve a wide range of problems relating to specific interactions with functional or data elements. These cover many "widget behaviors", the lower-level interaction patterns.
 

 

Digital Etiquette

Clifford & Reeves' paper The Media Equation make the case that people treat computers and interactive products as if they were people. So if we want people to like a product, design it to behave in the same manner as a likable person.
Design principle: the computer does the work, and the person does the thinking.
The research also suggests that software-enabled products should be considerate: being truly concerned with the needs and goals of its users. This requires the designer to envision interactions that emulate the qualities of a sensitive and caring person.
Design principle: software should behave like a considerate human being.
For example, a considerate product:
  1. Takes an interest
      • Ex. Google Chrome and Firefox are great at remembering details that users enter into forms on websites. Software should work hard to remember our habits.
  2. Is deferential
      • Inconsiderate products pass judgment on human actions; it's presumptuous for software to judge or limit what we do. Software should submit to users.
  3. Is forthcoming
      • Software can offer up useful information when it's related to our goals, but not in an obtrusive way.
  4. Uses common sense
      • Offering functions in inappropriate places makes for a bad interactive experience – like putting controls for common functions next to ones that are rarely used.
  5. Uses discretion
      • We want our software to remember what we do and what we tell it, but there are some things it shouldn't – like credit card numbers. It should also help us protect private data by helping us choose passwords, reporting when an account is accessed from somewhere new, etc.
  6. Anticipates people's needs
      • A human assistant knows you'll need a hotel room if you're traveling, even if you don't say so. They know the type of room you like and set it up so you don't have to. Software is often idle and could use this time to better anticipate what our needs might be.
  7. Is conscientious
      • Does it understand the subtasks or related context of the larger goal at hand? For example, I ask the computer to save a folder – but it turns out it has the same name as an existing folder, so it asks me whether I want to overwrite it or not. It still gives us the choice (remains deferential).
  8. Doesn't burden you with its personal problems
      • Software often burdens us with error messages and unnecessary notifications; when possible, it should have the intelligence and ability to resolve its issues on its own.
  9. Keeps you informed
      • We do want to be kept aware of the things that matter to us. Software can provide us with rich, modeless feedback about what's going on.
  10. Is perceptive
      • Perceptive software can observe what users are doing and use those observations to offer relevant information.
  11. Can bend the rules
      • Software often doesn't account for intermediate states: in manual, human-based systems, there is a factor of fudgeability. For example, a sales clerk can post an order before having all of a customer's details, whereas a computer would reject it outright.
       

Designing 'Smart' Products

 
Smart products put idle cycles to work
Developers often optimize programs to make sure the computer is handling a limited set of instructions at a time, so that performance stays nice and snappy. But software often doesn't make good use of idle time: it should be shouldering more of the burden of daily work.
Example: OS X's Spotlight search capabilities use downtime to index the entire hard drive while the user is thinking, leading to a seamless interaction that helps users find what they're looking for almost instantaneously.
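A minimal sketch of the same idea in a web app, assuming a browser that supports requestIdleCallback; the Doc type, searchIndex, and fetchUnindexedDocuments below are hypothetical stand-ins for a real app's data layer.
```typescript
// Hypothetical stand-ins for a real app's data layer.
interface Doc { id: string; text: string; }
const searchIndex = new Map<string, Doc>();
function fetchUnindexedDocuments(): Doc[] { return []; } // placeholder

// Index documents only while the browser is idle, so the work never
// competes with whatever the user is currently doing.
function indexDuringIdleTime(pending: Doc[]): void {
  requestIdleCallback((deadline) => {
    while (deadline.timeRemaining() > 0 && pending.length > 0) {
      const doc = pending.shift()!;
      searchIndex.set(doc.id, doc); // a real indexer would tokenize doc.text
    }
    if (pending.length > 0) {
      indexDuringIdleTime(pending); // continue during the next idle period
    }
  });
}

indexDuringIdleTime(fetchUnindexedDocuments());
```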
 
Smart products have a memory
Human behavior is rarely random. Interfaces with memory, therefore, can essentially act like a sixth sense in an application. If a user did something a certain way the past few times, it can use that to inform how it should behave the next time. I think there's some broader implications here, especially with the rise of machine learning: what if we're reinforcing the same behavior over and over again? Does this not allow for people to change, which they often do? Does this conflict with the principle of deferment, where the computer is too broadly assuming a person's intent and therefore forcing them into a certain path? Computers are quite dumb: they don't know the (whole) context. This is probably a better tenet for the minutiae of product preferences, like font sizes, layouts, etc. and not creative decision-making in software.
 
Decision-set reduction
People tend to reduce an infinite set of choices down to a small, finite set of choices. And these choices often appear in groups, or decision sets. Instead of just one right way, several options are all correct. Software can gather context clues about which decision set(s) to offer to the user in order to make the decision on their end easier.
 
Preference Thresholds
People tend to make decisions in two categories: important and unimportant. Most decisions are insignificant; software can take advantage of this.
Ex. After you decide to buy groceries, the particular checkout aisle you use doesn't really matter.
A preference threshold can help simplify an interaction: if a user selects a copy function, we don't need to ask a subsequent question about how many to copy; we can make assumptions about these things and then let the user change them if needed.
A computer should go ahead and do what it thinks is right, and then allow the user to override or undo it. If our software predicts the correct action 80% of the time, we don't have to burden the user with additional options or questions in the majority of cases. A user will only have to undo 2 times out of 10, rather than deal with a dialog box that is redundant 8 times out of 10.
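A minimal sketch of that pattern (PrintSettings, sendToPrinter, and cancelPrintJob are hypothetical names): act on the most probable settings immediately, remember them, and keep the action undoable instead of asking up front.
```typescript
interface PrintSettings { copies: number; duplex: boolean; }

let lastUsedSettings: PrintSettings = { copies: 1, duplex: false };
const undoStack: Array<() => void> = [];

// Stubs standing in for the real printing subsystem.
function sendToPrinter(doc: string, s: PrintSettings): void { console.log(`printing ${doc}`, s); }
function cancelPrintJob(doc: string): void { console.log(`cancelled ${doc}`); }

function print(doc: string, overrides: Partial<PrintSettings> = {}): void {
  const settings = { ...lastUsedSettings, ...overrides };
  const previous = lastUsedSettings;
  lastUsedSettings = settings;          // remember the choice for next time

  sendToPrinter(doc, settings);         // just do the probable thing...
  undoStack.push(() => {                // ...but keep it reversible
    cancelPrintJob(doc);
    lastUsedSettings = previous;
  });
}

print("report.pdf");                    // no dialog; undoStack.pop()?.() reverses it
```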
 

Platform and Posture

A product's platform is its combination of hardware and software that enables it to function (for both user interactions and internal operations).
  • Ex. desktop software, web apps, kiosks, in-vehicle systems, etc.
  • Platform is loosely defined, but it should describe a number of important product features, like the physical form, display size/resolution, input methods, OS, etc.
  • Each factor has an impact on how a product can be designed, built, and used – it's a balancing act between business constraints, objectives, and technical capabilities.
  • Design principle: decisions about technical platforms are best made in concert with interaction design efforts.
A product's posture is its behavioral stance (how does it present itself to users). Posture is informed by an understanding of likely usage contexts and environments.
  • Platform and posture are closely related. A social media app on a mobile phone has to accommodate a different kind of user attention than, say, a page layout app on a large desktop computer.
 

Postures for the Desktop

Sovereign posture

Apps that monopolize users' attention for long, continuous periods of time are sovereign-posture applications. They tend to be kept open and running continuously, taking up the full screen. This could be a word processor, spreadsheet, or Figma. A sovereign app takes over a user's workflow as a primary tool.
Target intermediate users
  • Users in these types of apps have a vested interest in progressing up the learning curve. Every user begins as a novice, but the initial learning curve of getting acquainted with a product is pretty short. This often means that the application should be optimized for intermediate users, not beginners (or experts).
  • Sacrificing speed or power for a clumsier (but easier to learn) idiom is out of place, and so is providing only advanced power tools. However, an app should also accommodate its first-time and infrequent users to remain valuable.
Be generous with screen real estate
  • Since they're often taking up the entire screen, a sovereign app shouldn't be afraid to stretch out a bit. Don't waste space, but don't be afraid to use what you need to (need 4 toolbars? Use 4 toolbars.)
  • Design principle: optimize sovereign applications for full-screen use.
Use a minimal visual style
  • Users will stare at sovereign apps for a long time: the visual presentation should be more conservative. Big bold colors may look cool to newcomers, but they become garish after constant daily use. Users will gain an innate sense of where things are over time: the designer has the freedom to do more with less. Toolbars can be smaller than normal, you can use a reduced color palette, etc.
  • Design principle: sovereign interfaces should feature a conservative visual style.
Provide rich visual feedback
  • You can productively add bits of information to the interface: status bars at the bottom of a screen, visual indicators of the app's status, etc. (but don't hopelessly clutter it)
  • First-time users won't notice the full richness of this feedback, but over time they will learn it (so provide a way to learn about it), giving them greater control and knowledge of the program.
Support rich input
  • Every frequently used aspect of the application should be controllable in several ways: direct manipulation, keyboard shortcuts, etc.
  • Ex. Microsoft Word places the most common actions at the top of the interface, while state-changing, more expert functions are near the bottom.
Controls are segmented into the top and bottom; less-frequent and more dislocating functions are 'hidden' at the bottom.
  • Design principle: sovereign applications should exploit rich input.
Design for documents
  • Many sovereign apps are document-centric; they're centered around creating and viewing documents with rich data. Optimizing for the manipulation of documents is therefore a good starting point.
  • Design principle: maximize document views within sovereign applications
 

Transient Posture

Products with transient posture come and go: they're often a single function with a constrained set of accompanying controls. They're used temporarily, so users don't have time to get familiar with them: the interface should be obvious and helpful; controls should be clear and bold. Use big buttons, precise legends, and a large, easy-to-read typeface.
Design principle: transient applications must be simple, clear, and to the point.
  • This could be opening up a file explorer while working on a presentation, setting your speaker volume, etc.
  • It could also be that the user is referring to the entire computer system in a transient manner, like in medical equipment where the computer is a secondary reference point
 
Mac OS X dashboard widgets are good examples of transient apps; they're interacted with briefly. They have rich visual rendering to give them the right amount of gravity.
 
Make it bright and clear
Controls can be proportionally larger than those on a sovereign application, and you can use more stylized or forceful visual design. They should have instructions built into their surface; users might only see the product once a month and will have forgotten how it works.
An example could be that a button should be labeled "Set up user preferences" instead of just "set up"; the verb/object notation is quicker to digest. Feedback should be direct and explicit.
 
Keep it simple
All the info and facilities a user needs should be contained within the transient app's single window, so the user's focus never has to leave it.
Design Principle: transient applications should be limited to a single window and view.
Of course, there are exceptions: take, for example, this app designed to support Adobe InDesign, which separates its various functions into tabs.
 
Remember user choices
Giving the application a memory is a good way to ensure it suits the nature of a transient app.
Design principle: a transient app should launch to its previous position and config.
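A minimal sketch of this kind of memory in a web context, assuming localStorage is available; the storage key and Geometry shape are invented for the example.
```typescript
interface Geometry { x: number; y: number; width: number; height: number; }

const STORAGE_KEY = "widget.lastGeometry"; // hypothetical key name

// Persist the last position and size whenever the user moves or resizes the widget.
function saveGeometry(g: Geometry): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(g));
}

// On launch, reopen exactly where the user left the widget last time.
function restoreGeometry(fallback: Geometry): Geometry {
  const saved = localStorage.getItem(STORAGE_KEY);
  return saved ? (JSON.parse(saved) as Geometry) : fallback;
}

const geometry = restoreGeometry({ x: 100, y: 100, width: 320, height: 240 });
```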
 

Daemonic posture

Applications that don't normally interact with the user are daemonic-posture applications. They sit in the background, like a printer driver or network connection. They really only interact with the user at the point of installation or removal, which of course is a transient interaction.
Daemonic icons should be employed persistently only if they provide continuous and useful status information.
Here, the speaker percentage is hidden underneath a hover state. The speaker icon itself still provides modeless status info, because it changes based on the level of audio.
A more recent example is Big Sur's control panel.
This is again a transient application, giving users a consistent place to go to configure daemons (e.g., turning Bluetooth on or off).
 
 

Postures for the Web

 

Informational websites

These are the more standard linked web pages. These often take the form of personal sites, corporate marketing and support sites, intranets, etc. Wikipedia is a good example. The dominant design concerns here are the visual look and feel, layout, navigational elements, and site structure/IA. Their biggest issue is findability: the ease of finding specific information held within them.
Sovereign attributes
These sites need to display a reasonable amount of info at one time while allowing novice and infrequent users to navigate the site easily.
Assuming a sovereign stance gives designers the freedom to take all the space available to display information. But web designers need to decide early on which screen resolution to optimize the design for. This should of course be based on the common display sizes used by your users.
Transient attributes
The less frequently a user uses the sites, the more transient a stance you'll need to take. Think of this like a cooking recipe site vs. developer documentation for a library. This can also be especially relevant for mobile web access, where the user is likely multitasking and can't focus as much on the site.
 

Transactional websites

Websites with transactional functionality allow users to accomplish things beyond just getting info.
Amazon is a classic example of a transactional website.
In addition to the hierarchical structure of info websites, these sites add on layers of functional elements that can lead to complex interactions. Thus they require attention to IA for content and page organization, and attention to interaction design as it relates to enabling the appropriate behaviors with functional elements. Visual design contributes to both of these, as well as effectively communicating key brand attributes.
 

Web applications

These are more akin to robust desktop apps; even if they retain the page-based model, their pages act more like views than web documents. This could be Google Docs, Facebook, Netflix, etc. – they're designed to feel like real desktop apps. They can take the form of sovereign apps, or something more transient. Their core strength is giving people the same data and functionality from the cloud.
Sovereign web applications
These types of web apps strive to deliver info and functionality that supports more complex human activities – think of Figma, GitHub, etc.
Designing these is best approached in the same manner as a desktop app, with the constraints of the web in mind. Users should have the feeling that they are in an environment, not navigating from place to place. Re-rendering information should be minimized.
Transient web applications
Transient web apps aim to give users better access to functionality without requiring them to install every tool they might need on their computers – it could be a routine task done once a year, like generating a report. SmallPDF is a good example.
 

Postures for Mobile Devices

Smartphones and handhelds

Satellite posture is the use of a handheld as a satellite for a desktop, so it emphasizes retrieving and viewing data. These are much less common now, but were the primary focus of devices like the Palm Pilot. A Kindle could fall into this class, since it's mainly for viewing content synced from the cloud.
But now most handhelds are convergence devices, meaning they're essentially a computer in their own right.
Standalone posture was pioneered by Apple – the first iPhone transformed phones into general-purpose computing devices. These can be sovereign – they're full screen and offer functions within menus – and also transient, since their on-the-go nature means they're not likely to be used for extensive periods of time at once.
 

Optimizing for intermediates

One of the eternal conundrums of digital product development is how to address the needs of both beginning users and expert users with a single, coherent interface.

Developers typically build for the expert; after all, they're experts on the features they're building. They give everything equal weight, even if it shouldn't be. Marketing on the other hand says everything should be beginner-friendly; they want "the training wheels to be bolted on". Trying to have it both ways, for experts and beginners, means you're inconveniencing both user groups.
Most users, for most of the time they're using a product, are intermediate users.
 
 
We can design for perpetual intermediates by using a process called inflection. This means organizing the interface to minimize typical navigation within it: place the most frequently used functions in the most immediate and convenient locations; push less frequently used functions deeper into the interface where they won't be stumbled over. Advanced features can be neatly tucked away in menus, dialog boxes, or drawers for when they're needed.
Commensurate effort is the principle that people will willingly work harder for something that is more valuable. If a user is deeply motivated to explore all the functionality of a program, they'll dive deep into it. But providing all of those options to a newcomer is unnecessary noise.
Progressive disclosure is an example of commensurate effort; here, advanced controls are hidden in an expanding/collapsing pane. Adobe's programs make good use of this.
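A minimal sketch of progressive disclosure in plain DOM terms; the element ids ("advanced-pane", "advanced-toggle") are assumptions for the example.
```typescript
const pane = document.getElementById("advanced-pane") as HTMLElement;
const toggle = document.getElementById("advanced-toggle") as HTMLButtonElement;

pane.hidden = true; // perpetual intermediates see only the common controls by default

toggle.addEventListener("click", () => {
  pane.hidden = !pane.hidden; // motivated users expand the advanced controls on demand
  toggle.textContent = pane.hidden ? "Show advanced options" : "Hide advanced options";
});
```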
 

Organizing for inflection

Controls and displays should be organized in an interface according to three attributes:
  1. Frequency of use – the most frequently used controls should be immediately within reach; less frequently used ones no more than a click or two away; the rest can be two or three clicks away.
  2. Degree of dislocation – the amount of sudden change in an interface or in the information being processed by way of a function or command. Highly dislocating functions are generally better placed deeper in the interface.
  3. Degree of risk exposure – functions that are irreversible or have other dangerous ramifications. Make these more difficult to stumble across.
 

Designing for three levels of experience

Approaches should be threefold:
  1. Rapidly and painlessly move beginners into intermediacy
  2. Avoid putting obstacles in the way of intermediates who want to become experts
  3. Keep perpetual intermediates happy as they move around the middle of the skill spectrum
 

What beginners need

It's best to imagine that users are both very intelligent and very busy. They need instruction, but not too much. And they'll always learn better if they understand cause and effect: the model the interface presents should match the user's mental model.
Beginners of course need extra help to become intermediates; but that help should get out of their way once they get there. Whatever extra help you provide should not be fixed into the interface.
Beginners also rely heavily on menus; even though they're clunky, they're also thorough and reassuring. Dialog boxes that appear as a separate guiding facility can also be a good place to address beginner issues, explain functionality, etc.
 

What experts need

Experts want shortcuts to everything; it's unavoidable for them to become innately aware of how to get to the most frequently used commands. They constantly and aggressively seek to learn more, and see more connections between their actions and the product's behavior.
For some specialized products, it's appropriate to optimize for experts – in particular those that are relied on for a big portion of professional responsibility (think Boundless admin). We can expect these users to have the background knowledge and be willing to invest time and effort in mastering the application.
 

What perpetual intermediates need

They need fast access to the most common tools; they know how to use reference materials, and are motivated to learn more. Online help is therefore a useful tool for the perpetual intermediate.
They have a good idea about which functions they use a lot and which ones they rarely use (I'd argue that they don't know a lot of what they don't know in terms of what's possible in a product).
The very existence of more advanced features, even if the intermediate isn't using them, is reassuring – it shows there's room to grow within a product, when they eventually get to that point. It's more of a reason to invest time in learning it.
 

Chapter 11: Orchestration and Flow

p.249

Flow and Transparency

It is plain that we should design products and experiences to promote and enhance flow, and avoid things that might disrupt it. Keeping the amount of interaction to a minimum is a good way to do this.
Design principle: no matter how cool your interface is, less of it is better. The ultimate interface can often be no interface.
Orchestration & harmonious interactions
Just as sentences in a passage must be woven together, a designer must orchestrate the pieces of interactive software into a cohesive experience.
There are no universal rules to define harmonious interaction, but there are some good guiding strategies:
  1. Follow users' mental models
      • Each person naturally forms a mental image of how software performs a task; we instinctively look for some pattern of cause and effect to get insight into a machine's behavior. Following the model of how people perceive an activity or process is bound to create a more harmonious interaction.
  2. Less is more
      • Careful orchestration is key to using fewer interactive elements; coordinate and control a product's power without overloading it with widgets and controls.
      • Providing a view with a single task, without access to related tasks, reduces the power of an interface. For example, early Windows applications didn't allow a file to be renamed in the Save dialog – that had to be done elsewhere, creating more interface for the user to interact with.
      • We need to consider how people might use combinations of functions together in order to accomplish something.
      • Elegant design solves problems in a way that is novel, simple, economical, and graceful.
      • Simplicity and minimalism are tied to clear purpose (see: the Google homepage).
      • This can be taken too far and become reductive. It's a balancing act that requires a firm knowledge of users' mental models.
  3. Let users direct rather than discuss
      • The ideal interaction with software is like using a tool: when a carpenter drives a nail, they don't discuss the nail with the hammer – they direct the hammer onto the nail.
      • One of the most important ways to enable this is direct manipulation.
  4. Provide choices rather than ask questions
      • Dialog boxes ask questions; toolbars and palettes provide choices. Dialogs interrupt flow and demand an answer; choices keep the user in control.
  5. Keep necessary tools close at hand
      • Tools should be close at hand, usually in palettes and toolbars, or via keyboard shortcuts for experts. Diverting attention to locate a tool that isn't readily available breaks flow.
  6. Provide modeless feedback
      • When users manipulate tools and data, it's usually important to clearly present the status and effect of the manipulations; it should be clear and easy to see without obscuring or interfering with the user's actions.
      • A better way to inform users than modals is with modeless feedback. Feedback is modeless whenever information is built into the structures of the interface and doesn't stop the normal flow of activities and interaction. This could be ruler guides, thumbnail maps, what page you're on, the number of words in a document, etc.
  7. Design for the probable but anticipate the possible
      • A potent method for orchestrating interfaces is segregating the possible from the probable. Optimizing the design for what's probable leads to a cleaner experience; think of the dialog that asks if you want to save changes to a document you're closing in Microsoft Word.
      • It's good to keep in mind that developers often treat possibilities the same as probabilities.
  8. Contextualize information
      • The presentation of information should help us make sense of the facts; if software means to tell us how much disk space is left on our drive, it's more effective to do that visually with a chart than by telling us how many bytes are left. Show the data visually, rather than simply telling about it textually or numerically.
  9. Reflect object and application status
      • When an application is asleep, it should look asleep; when busy, it should look busy, etc. We use these cues in human interactions to determine someone's state – and our software should be able to do the same. For example, email clients do a good job of showing whether an email has been read or not.
  10. Avoid unnecessary reporting
      • It's distracting to know all the details of what's happening under normal conditions. We should not be using direct feedback to report normalcy – that can be done with some ambient signal.
  11. Avoid blank slates
      • Ask yourself if a particular interaction moves a person effectively and confidently towards a goal. People would much rather see what the application thinks is correct, and then manipulate those defaults to make it exactly right.
      • For example, creating a new presentation in PowerPoint starts off with a blank document with certain preset attributes rather than asking a bunch of questions up front. Ask for forgiveness, not permission.
      • Blank slates are difficult starting points; it's easier to begin where someone else has left off.
  12. Differentiate between command and configuration
      • If you ask an app to perform a function, it should simply do it with a reasonable default or the last configuration; it shouldn't badger you with configuration details each time it's used. You can always jump into the configuration interface to make tweaks. Invoking a function shouldn't demand configuration; a user uses a command ten times before configuring it once.
  13. Hide the ejector seat levers
      • Ejector-seat levers are actions that are either irreversible or cause a significant dislocation in the interface; they shouldn't be placed next to common actions. They should ideally be hidden, with confirmation that this is the action someone wants to take.
  14. Optimize for responsiveness but accommodate latency
      • Nothing is more disturbing to flow than staring at a screen waiting for the computer to respond. This is an area where collaborating with developers is quite important; some interactions can be "expensive" from a latency perspective. There's a balance between appropriately rich interaction and as little latency as possible.
      • Up to 0.1 seconds feels instantaneous; direct manipulation shines here.
      • Up to 1 second, the system feels responsive; users' thought processes stay uninterrupted.
      • Up to 10 seconds, the user's attention is going to wander – here's where you need a progress bar.
      • After 10 seconds, attention is lost. Ideally, processes that take this long should be conducted offline or in the background.
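A minimal sketch of applying those thresholds; the UI helpers (showSpinner, showProgressBar, updateProgress, hideFeedback) are hypothetical stand-ins for whatever your interface provides.
```typescript
// Hypothetical UI helpers – replace with your interface's own widgets.
function showSpinner(): void { console.log("showing spinner"); }
function showProgressBar(): void { console.log("showing progress bar"); }
function updateProgress(fraction: number): void { console.log(`progress: ${Math.round(fraction * 100)}%`); }
function hideFeedback(): void { console.log("hiding feedback"); }

// Stay quiet for near-instant work, show a busy indicator after ~1 second,
// and switch to visible progress for waits beyond ~10 seconds.
async function runWithLatencyFeedback<T>(
  operation: (onProgress: (fraction: number) => void) => Promise<T>
): Promise<T> {
  const spinnerTimer = setTimeout(showSpinner, 1_000);
  const progressTimer = setTimeout(showProgressBar, 10_000);
  try {
    return await operation(updateProgress);
  } finally {
    clearTimeout(spinnerTimer);
    clearTimeout(progressTimer);
    hideFeedback();
  }
}
```
Fast operations finish before any feedback ever appears, so the user is never taxed with feedback about normalcy.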
 

Motion, Timing, and Transitions

The use of motion and animated transitions became a critical part of digital products starting with iPhones.
Motion is a powerful mechanism for expressing the relationships between objects, creating a spatial aspect to navigation and state transitions.
However, motion must be used sparingly and judiciously, and directly in support of a user's state of flow.
Dan Saffer's Microinteractions says they should help achieve the following:
  1. Focus user attention in the right place
  2. Show relationships between objects and their actions
  3. Maintain context during transitions between views or object states
  4. Provide the perception of progression or activity (e.g., spinners)
  5. Create a virtual space that helps guide users from state to state and function to function
  6. Encourage immersion and further engagement
 
Interactions involving motion and animation should therefore be:
  1. Short, sweet, and responsive – they shouldn't slow down interactions, and should only last as long as it takes to accomplish one of the above goals.
  2. Simple, meaningful, and appropriate – when closing iPhone apps, you simply flick the screen up; a satisfying and appropriate gesture for doing so.
  3. Natural and smooth – they should feel like real interactions, mimicking things like inertia, elasticity, and gravity.
 

The ideal of effortlessness

The designer must consider how different functional elements are orchestrated to enable a sense of flow in users. The best interfaces are hardly even noticed because they can be used effortlessly.
Understanding the importance of flow, orchestrating your interface to maximize it, and making judicious use of motion and transitions to ease the user from one state or mode to another can give your apps the aura of effortlessness that helps make them seem to work like magic.
 
 

Ch. 12: Reducing work and eliminating excise

 
The goal of a designer is to minimize work done with an interface as much as possible, while still helping users achieve their goals.
Users perform 4 types of work with digital products:
  1. Cognitive work – understanding the product's behavior, text, and organization
  2. Memory work – remembering those behaviors, commands, passwords, names, locations of objects, relationships between objects
  3. Visual work – knowing where the eye should start on the screen, finding one object from many, understanding layouts, differentiating among visually coded UI elements
  4. Physical work – keystrokes, mouse movements, gestures, switching between input modes, number of clicks to navigate
We want to avoid taxing users with cognitive and physical work when they use our product. The interface should not impose roadblocks. These unnecessary actions can be considered excise tasks, because they represent extra work that doesn't contribute directly to reaching some goal. They can be separated from goal-directed tasks, which do directly relate to the goal at hand.
 

The types of excise

Navigational excise
Navigating through the functions/features of a digital product is largely excise; difficult navigation is one of the most common problems in interactive products.
📖
Navigation is any action that takes the user to a new part of the interface or requires them to locate objects, tools, or data elsewhere in the system.
Navigation can happen:
  • Across multiple windows, views, pages
    • Can be the most disorienting: the previous window's contents can be totally or partly obscured; if a user has to keep shuttling between windows or views, their flow is disrupted.
  • Across multiple panes or frames within a window, view, or page
    • Adjacent panes can solve many nav problems – they provide useful supporting functions close to the primary work area.
    • If there are too many supporting panes, or they're not well placed or matched to users' workflows, the result is confusion and visual clutter. Trying to be everything to everyone can lead to this overcrowding.
    • Tabbed panes can be used for things like multiple documents, but they obscure their contents unless paired with a succinct label or rich modeless feedback – otherwise the user has to click into each one to discover what it contains.
    • Adobe uses tabbed panes to pair multiple methods of picking a color.
    • They can be useful when there are multiple supporting panes for a primary work area, that aren't used at the same time.
  • Across tools, commands, and menus
    • Spatial organization of tools within a pane is critical; tools that are frequently used together should be grouped together and immediately available.
    • Menus require more navigational effort because their contents are not visible before clicking; menus should be reserved for infrequently accessed commands.
  • Within information displayed in a pane or frame, like scrolling, panning, zooming, or following links
    • Information can be navigated by scrolling (panning), linking (jumping), and zooming
    • Scrolling is often a necessity, but we should minimize it as much as possible. There's a tradeoff between paging and scrolling information.
    • Linking is a critical nav paradigm of the web – but it's visually dislocating, so it should be used in conjunction with visual and textual cues.
    • Zooming and panning are navigational tools for exploring 2D and 3D info
  • Directly translating mechanical representations into software can often be limiting and introduce skeuomorphic excise. This goes beyond just styling: if you turned a physical contact list into a digital one while replicating its functionality, it becomes much harder to use.
 
Modal excise
  • Error and confirmation message dialogs are quite common excise elements. The typical error modal is unnecessary – it usually tells the user something they don't care about.
  • Introducing modals interrupts flow! They should be used sparingly
  • Making users ask for permission is pure excise. If you want to change a displayed value, you should be able to change it right there – not navigate to a different place to do it. Many pieces of software have one place where the information is displayed and another where it can be changed – this follows the implementation model. If the user can modify options, they should be able to do it right where they're displayed (see the sketch below).
Otter.ai does a good job of allowing for direct change; just hovering over a title gives you the option to click in and edit it.
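A minimal sketch of that kind of edit-in-place behavior (the "note-title" id and saveTitle helper are assumptions): clicking the displayed title swaps in a text input right where the value is shown, instead of sending the user to a separate settings screen.
```typescript
function saveTitle(value: string): void { console.log(`saved: ${value}`); } // hypothetical persistence

const title = document.getElementById("note-title") as HTMLElement;

title.addEventListener("click", () => {
  const input = document.createElement("input");
  input.value = title.textContent ?? "";
  title.replaceWith(input);               // input happens where the output lives
  input.focus();

  input.addEventListener("blur", () => {
    title.textContent = input.value;
    input.replaceWith(title);             // restore the display in place
    saveTitle(input.value);
  });
});
```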
Stylistic excise
  • Visual style can create mood and enhance brand, but it shouldn't conflict with utility and usability. The use of visual style (in productivity settings) should support the clear communication of info and interface behavior.
The only way to determine if something is excise is to compare it to a user's goals – excise is contextual. Transient apps may need more assistance in communicating their functionality; the same assistance can become agonizing in a sovereign app.
 

Eliminating excise

  1. Reduce the number of places to go
      • Keep the number of windows and views to a minimum; one full-screen window with 2-3 views is best for many users. Keep dialogs to a minimum.
      • Limit the number of adjacent panes to the minimum needed for users to achieve their goals. On the web, anything more than 2 navigation areas and one content area gets busy.
      • Limit the number of controls to as few as needed – base this on your personas.
      • Minimize scrolling when possible. This can mean giving supporting panes enough room to display info.
  2. Provide signposts
      • Providing better points of reference can ease navigational stress. This can be done in the form of persistent objects, like sticky navigation with clear markers of where someone currently is within a site.
      • Menus and toolbars can also act as navigational elements. Really, any persistent object has an effect on orienting a user within a digital product; judicious use of white space and legible fonts is important here.
      • Be careful of making each page on a website look just like every other one – this can be disorienting.
  3. Provide overviews
      • These serve a similar purpose to signposts – orienting users. They can be textual or graphical.
      Two types of overviews; both provide navigational context, and can also be interacted with as a navigational interface themselves.
      • This could also exist as a breadcrumb display.
 
4. Properly map controls to functions
📖
Mapping describes the relationship between a control, the thing it affects, and the intended result. Poor mapping forces the user to think about a relationship that should be evident, breaking flow. See Norman's classic stove burner example.
  • Logical mapping can relate more to copy – consider the before and after here:
Before: what happens in ascending? Descending?
 
Much clearer language creates a logical mapping between function and output.
 
5. Avoid hierarchies
  • Hierarchies aren't natural concepts for storing and retrieving arbitrary information. Things in the natural world, like bookcases and file cabinets, never exceed a single level of nesting: this is called monocline grouping. This is the mental model most people bring to software, which can conflict with the extensive amount of nesting and hierarchical structure available in computers.
  • Monocline grouping isn't great for the implementation model, but it could be useful for the represented model. This means structures can be rendered how people imagine them, but in combination with the power of search and access tools.
 
6. Don't replicate mechanical-age models
  • People don't find it difficult to adapt to new representations if they offer a significant improvement.
 

Other common excise traps – a practical list

  1. Don't force users to go to another window to perform a function that affects the current window
  2. Don't force users to remember where they put things in a hierarchical file system
  3. Don't force users to resize windows unnecessarily
  4. Don't force users to move windows
  5. Don't force users to reenter their personal settings – fonts, colors, indentations, sounds, etc.
  6. Don't force users to ask permission – often a symptom of not allowing input in the same place as output
  7. Don't ask users to confirm their actions (which requires a robust undo function)
  8. Don't let the user's actions result in an error
 

Ch. 13: Metaphors, idioms, and affordances

 

Interface paradigms

Implementation-centric interfaces
These interfaces show us precisely how they're built: one button per function, one dialog per module of code, etc. This means users have to learn specifically how the software works internally in order to use it. These are clearly the easiest to build: code a function, slap some UI together to match it. They can be quite satisfying to a developer – but are needlessly complex for users. And most people would rather be successful than knowledgeable. Michael called IA built in this way "showing your organizational underpants."
 
Metaphoric interfaces
These rely on real-world connections people can make to visual cues & UI. These are a step forward from implementation-centric, but led to the overuse of skeuomorphism.
We can understand metaphors intuitively – but no magical amount of intuitiveness makes something easy to use. Intuition works by inference, where we can make connections between disparate things without being distracted by their differences.
📖
Metaphors can be an efficient way to take advantage of the human ability to make inferences–but it relies heavily on the language, learned experiences, and inferential power of people.
And a global metaphor introduces a ton of navigational excise – see, for example, early digital handhelds that tried to mimic real-life mechanics.
 
📖
Design principle: never bend an interface to fit a metaphor. Blind adherence to a metaphor limits the abilities of software unnecessarily!
 
Metaphors don't scale well; as a process grows in complexity or size, it starts to break down. And while it may be easy to find visual metaphors for things like printers, it becomes harder for more abstract things, like processes, relationships, or services. And if the user doesn't have the same cultural background as the designer, the metaphor will fail.
There are exceptions: video games, flight controls, music-making software. These are places where metaphor and skeuomorphism, if done well, can enhance an interface or make it more fitting.
 

Idiomatic interfaces

This is based on how we learn idioms (like "beat around the bush", "cool", etc.)
Idiomatic interfaces solve the problems of implementation-centric and metaphoric interfaces by focusing on the learning of simple, non-metaphorical visual and behavioral patterns to accomplish goals and tasks. They needn't provoke associative connections the way a metaphor does. Humans are simply geared to memorize large numbers of idioms quite easily.
Think of the mouse: its form and graphical representation on the screen aren't a metaphor for anything else. Windows aren't really metaphoric, even if the name implies it. So much of the basic graphical UI elements are idiomatic.
📖
Design principle: Good idioms must be learned only once. Think of radio buttons, close boxes, drop-down menus, etc.
 

Building idioms

A well-formed interaction vocabulary can be represented by an inverted pyramid:
Graphical UIs are easy to use because they can build complex idioms from a very small set of primitives.
This really is the most effective vocabulary for building a system–a language that deviates from this form will be difficult to learn.
 

Manual affordances

📖
Don Norman defined the affordance as "the perceived and actual properties of the thing, primarily those fundamental properties that determine how the thing could possibly be used"
Context for affordances matters: we may see a doorbell next to a door and understand 100% that it's a doorbell. But if that doorbell appeared on the roof of a car, what could it possibly be for? We'd understand it may be pushable, but would have no idea what pushing it would do.
A manual affordance is something that's clearly shaped to fit our hands or body – a handle, a little circular button, etc. – that we naturally gravitate towards interacting with.
A virtual manual affordance runs into the issue of conveying what happens when it's interacted with. We may see that a button is clickable, but how do we know what happens when we click it?
📖
Controls must have text or iconic labels on them in order to make sense; otherwise we can only learn their function by experience or training. It's quite easy to create false impressions of what an affordance will do on the web, since it isn't constrained by connections to any physical thing – make sure you fulfill the expectation set by your affordance.
 

Direct manipulation and pliancy

📖
Design Principle: Rich visual feedback is the key to successful direct manipulation. Without clear feedback, an interface fails to create the experience of direct manipulation.
 
Direct manipulation
Art and design tools are good examples to look at – they provide a ton of direct manipulation. Think about the ability to rearrange pages and layers in Figma without going into some separate editing mode – it's always available.
However, direct manipulation isn't always appropriate. It can require skill development for users to be effective at complex tasks, like using Cinema 4D, and it can demand motor coordination and a sense of purpose. Even moving files between different Finder windows can require precision and purpose.
 
Pliancy & hinting
Pliancy refers to objects or screen areas that react to user input. It's important for pliant interface elements to communicate how they can be directly manipulated. Any object that is pliant should communicate that fact visually (though it may be less important for feature-rich expert applications).
There are three basic ways to communicate pliancy:
  1. Create static visual affordances as part of the object itself
  2. Dynamically change the object's visual affordances in reaction to changes in input focus or other system events
  3. For desktop pointer-driven interfaces, change the cursor's visual affordance as it passes over and interacts with the object.
Static hinting is when an object's pliancy is communicated by the static rendering of the object itself, like a button with a shadow behind it.
  • Static hinting every object in a control-rich interface could be impractical and cluttered!
  • It's well suited for mobile interfaces: there are typically fewer items on-screen, and they need to be large enough to manipulate with fingers. More visual hierarchy with static hinting can greatly improve usability here.
Dynamic hinting is most often used in desktop UIs – the most common is a hover state or rollover. This removes the need for a persistent, static hint, thereby eliminating visual clutter. This unfortunately isn't an option on mobile devices.
Left: static hinting. Right: dynamic hinting
Pliant response hinting is when a control is clicked but not yet released, showing an intermediate state that indicates it's about to undergo a state change once released. This is an important feedback mechanism for any control that invokes an action or changes its state.
Cursor hinting on desktops communicates pliancy by changing the cursor icon when it passes over a pliant element. Think about dragging the corner of a window with the cursor to resize it.
📖
Generally speaking, controls should offer static or dynamic hinting, whereas manipulable data should more frequently offer cursor hinting.
A dense data interface like a spreadsheet benefits from cursor hinting. Smaller controls don't have the luxury of static hints like a big button might; cursor hinting helps them stay accessible to users.
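A minimal sketch of dynamic hinting and cursor hinting in a browser UI; the selectors and class name are assumptions for the example.
```typescript
const control = document.querySelector(".toolbar-button") as HTMLElement;
const resizeHandle = document.querySelector(".window-corner") as HTMLElement;

// Dynamic hinting: the control reveals its pliancy only when the pointer rolls over it.
control.addEventListener("mouseenter", () => control.classList.add("hover-hint"));
control.addEventListener("mouseleave", () => control.classList.remove("hover-hint"));

// Cursor hinting: the pointer itself signals that this region can be resized.
resizeHandle.style.cursor = "nwse-resize";
```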
 

Ch. 14: Rethinking data entry, storage, and retrieval

Come back to this one: p.326
 
 
 

Ch. 15: Preventing errors and informing decisions

Using rich modeless feedback

The dialog has historically been a blunt instrument for communicating information to users – subtle status information is simply never communicated.
Rich visual modeless feedback (RVMF):
  1. Is rich in terms of giving in-depth info about the status or attributes of a process or object
  2. Is visual in that it makes idiomatic use of pixels on the screen
  3. Is modeless in that this info is always readily displayed, requiring no special action or mode shift on the user's part to view or make sense of.
Ex: progression of an app download is communicated visually
 
Imagine if all the objects that had pertinent status information on your desktop or in your application could display their status in this manner. Printer icons could show how close the printer is to completing your print job. Icons for hard drives and removable media could show how full these items are. When an object is selected for drag and drop, all the places that could receive it would become highlighted to announce their receptiveness. p.360
After the user learns your representation of RVMF, they can tell what's going on at a glance. Can you try to replace as many modals as possible using RVMF?
RVMF isn't for beginners, though – it requires discovery on the user's part. It probably isn't the best method for communicating serious information–make distinctions between warning status and less critical RVMF.
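A minimal sketch of RVMF for a download, assuming an always-visible progress element inside the item itself (the element id and callback shape are invented): status is simply rendered in place, with no dialog and no mode shift.
```typescript
const bar = document.getElementById("download-progress") as HTMLProgressElement;
bar.max = 1;

// Called as bytes arrive; the list item's own progress bar is the feedback.
function onDownloadProgress(bytesReceived: number, bytesTotal: number): void {
  bar.value = bytesReceived / bytesTotal;
  bar.title = `${Math.round(bar.value * 100)}% downloaded`; // extra detail at a glance
}
```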

Audible feedback

So much audible feedback is negative: an annoying alert sound when you do something wrong or an action is blocked. Nobody likes hearing these, and they're often paired with a visual anyway. What if audible feedback were used positively? In the real world, this is mostly the case – e.g., the satisfying click when a camera lens snaps into place.
iPhones, lacking the tactility of a keyboard, make fake key clacking sounds because we depend on them to know if our typing is successful. The Mac OS X screenshot sound is satisfying. Our software should give us constant, small, audible clues just like our keyboards do. Silence can be a good negative auditory cue, in combination with other visual feedback.
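A minimal sketch of positive audible feedback in a browser (the audio file path and the inline-error helper are assumptions): a quiet confirmation sound on success, and visual rather than audible feedback for problems.
```typescript
const confirmSound = new Audio("/sounds/soft-click.mp3"); // hypothetical asset
confirmSound.volume = 0.2;                                // subtle, not alarming

function showInlineError(message: string): void { console.warn(message); } // stand-in visual feedback

function onSaveComplete(succeeded: boolean): void {
  if (succeeded) {
    void confirmSound.play();   // small positive cue, like a key click
  } else {
    showInlineError("Couldn't save – retrying…");
  }
}
```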
 

Undo, redo, and reversible histories

Undo should follow mental models