Design, Design Theory

Agentive Technology, or How to Design AI that Works for People, Part 2

I originally wrote and published this post on Normative Design’s Ratsnest blog as the second part of a two-part synthesis of Design+AI, a monthly meetup hosted by Normative Design to explore how we design in a world augmented by AI. On the Sept 21, 2017 edition of the meetup, Chris Noessel, author of Designing Agentive Technology, joined us to shed some light on the area of designing agentive technology; specifically:

  • What is agentive tech?
  • What is narrow AI versus general AI?
  • How do designers need to modify their practice to design agentive tech?
  • Does agentive tech pose unique operational burdens?

Part 1 of my recap covered the questions of what agentive technology is, along with a discussion of narrow versus general AI. Part 2 ventures into the pragmatic territory of what it all means for us as designers in the field.

What changes in our practice? The good news is: not a lot.

So, from a practice perspective, what are the important differences when we design for agentive technology? What parts of our design practice — process, approach, methods, and perspective — need to be adapted? What should we be thinking about and watching for? What are the gotchas?

Chris notes that what’s missing from his book Designing Agentive Technology is explicit commentary on what changes in our practice. The good news is: not a lot changes. Research should be handled identically. We still need to go out and talk to people who will be served by agents. We still need to understand how the thing in question gets done. We still need to understand the goals of the people using the agent to get things done: none of that changes.

That said, Chris sees three areas that do need to change, two of which are near term, and one that’s longer term.

1. Asking about the future

During research, ask what the future would look like for the user. This is an optional question when designing tools, because we tend to face limitations in what we can do for people; when designing agentive tech, however, this is the mission. Let’s imagine we’re designing a solution for our customer Carla. We could frame a question along the lines of “Carla, if we had infinite money to hire you an assistant who’s infinitely knowledgeable and incredibly fast, how would you hope that assistant would help you?” This uncovers what people don’t like doing, the stuff we can solve with agentive solutions. Another question we could ask might be “What would you still do?” This uncovers what people love doing, tasks or outcomes that we probably shouldn’t give to an agent (perhaps an assistant would be better, something to help people do it themselves). Or, if we do want to design an agentive solution for things Carla enjoys doing, we need to take extra care in doing so. These types of questions tend to be on the tail end of our normal research routine; with agentive technology, they’re going to be front and centre.

2. Multiple design layers, leading to more complexity

Chris acknowledges that in his book, agentive tech comes across as a pure thing. In practice, most sophisticated technologies are layered: there’s an agentive mode that sits near an automated mode that sits near an assistive mode that sits near a tool mode that gives control back to humans. For example, during his research on robo-investor work, one participant wanted to set aside 10% of his theoretical money to see if he could beat the AI, because there might be something special in humans that gives us a shot. In reality, there probably aren’t many pure agents or purely automatic things.

In designing agentive tech, we’ll need to keep in mind four modes:

  • There are times when we want agents to handle things routinely and alert us if we need to make decisions or if there are problems;
  • There are times when we only want to be notified when things are fundamentally broken;
  • There are times when we want to do it and want help; and
  • There are times when we want to do it ourselves and don’t want help.

These four modes are likely to be present in sophisticated and certainly enterprise-level projects, and thinking through them is going to be complicated. After research, our modified design process should work through these four modes. First, design it as a manual tool: how would Carla do it herself with no help? Then, add the assistive layer: how does the AI help her do it, and how does she turn it off if she wants to? Then, how might we make it agentive, so we can take the burdensome or routine parts off Carla and let her pay attention to the things she loves? Finally, what does the autonomous mode look like? If the engineering team is confident, how might we let Carla pass it off into autonomous mode, and how does she regain control? These are complicated layers to design for, but they’re necessary.
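
To make those layers concrete, here’s a minimal sketch (in Python) of how the four modes might hang together in a single feature. The Mode enum and the RebalanceFeature example are my own hypothetical illustration, not something from Chris’s book; a real product would involve far more nuance.

```python
# A minimal sketch of the four interaction modes, using a hypothetical
# investing feature for "Carla". Names are illustrative, not from any real product.
from enum import Enum, auto


class Mode(Enum):
    TOOL = auto()        # Carla does it herself, no help
    ASSISTIVE = auto()   # the software helps, Carla stays in control
    AGENTIVE = auto()    # the software acts, Carla handles exceptions
    AUTONOMOUS = auto()  # the software acts, alerts only on fundamental failure


class RebalanceFeature:
    """One feature, four layered modes; the user can always step down a layer."""

    def __init__(self, mode: Mode = Mode.ASSISTIVE):
        self.mode = mode

    def set_mode(self, mode: Mode) -> None:
        # Regaining control is just stepping back toward TOOL.
        self.mode = mode

    def on_portfolio_drift(self, drift_pct: float) -> str:
        if self.mode is Mode.TOOL:
            return "Show raw holdings; Carla decides whether and how to rebalance."
        if self.mode is Mode.ASSISTIVE:
            return f"Suggest a rebalance plan for {drift_pct:.1f}% drift; wait for approval."
        if self.mode is Mode.AGENTIVE:
            return "Rebalance routinely; alert Carla only when a decision is needed."
        return "Rebalance autonomously; notify only if something is fundamentally broken."


if __name__ == "__main__":
    feature = RebalanceFeature(Mode.AGENTIVE)
    print(feature.on_portfolio_drift(4.2))
    feature.set_mode(Mode.TOOL)  # Carla takes back control
    print(feature.on_portfolio_drift(4.2))
```

The design point is the set_mode call: whatever the default, the user can always step back toward tool mode and regain control.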

3. Interaction design will get more complicated when users and agents proliferate

In the longer-term future, users and agents will proliferate. And they’ll interact with one another, and likely with peer groups outside their immediate domains. These interactions will make design more complicated. How will we account for the touchpoint experiences and the outcomes when users and agents talk to one another? For instance, Chris wonders: is his health agent going to rat him out to his insurance agent? I expect the answer likely lies in who paid for the creation and maintenance of the agent.

Selling agentive tech

Conceptually at least, business leaders like agentive tech: there are interesting opportunities for up-sell, cross-sell, lowering the cost of human resources for routine tasks, etc. In practice, getting funding is harder. The response is typically, “Cool idea! OK, let’s get back to building this piece of junk.” How do we get past this inertia?

Find an executive champion

Chris suggests that we use the same design methodology he discussed earlier. First, design the manual solution. Get agreement that the manual solution solves the pains for the user and for the business. Once we have that agreement, we can suggest doing it one better with an agentive solution. So, for example, instead of a tool Carla could use to locate the information she needs, the agentive solution brings the information to her, with options for varying degrees of intervention. If we can get agreement on the benefits the agentive mode provides, we can then talk to engineering about the costs associated with the four different modes: tool mode, assistive mode, agentive mode, autonomous mode. Expect engineers to prioritize the tool over the agent, so internal champions are going to be mission-critical.

First mover advantage

In Wired for War, P.W. Singer talks about how technology is influencing the world’s concept of war. One of his concepts is “threshold technology”: once we adopt a technology, we’re loath to go back. For example, once a culture adopts drones for warfare, why would it ever risk flesh and blood again? The same notion applies to agents. Once you have a Roomba, pulling out the Dyson feels like a chore. The Roomba is a threshold tech.

Likewise, agentive tech is a threshold concept. It confers a competitive advantage. The first player in a domain to introduce functional agents to their users will have their attention and loyalty, or from my perspective, at least establish the inertia that prevents customers from switching or leaving. If business leaders miss out on the first mover advantage, they’ll spend time scrambling to catch up and lose market share.

Operationalizing agentive tech

What exactly does it mean to build agentive technology? In practical terms, it doesn’t require fancy specialized coding skills. What we’re building is software that watches data streams for triggers and then acts on those triggers. Designers define a set of conditions and triggers that are smart by default, and design the controls and flows that let users modify those triggers and conditions; developers build the software that watches the streams and enacts the corresponding behaviours. Pairing triggers with behaviours to enact is at the core of object-oriented programming.
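
As a rough illustration, here’s a minimal Python sketch of that trigger-and-behaviour pattern. The Trigger and Agent classes and the price example are assumptions of mine, not an excerpt from any real agentive product; smart defaults ship with the product, and the user can add, remove, or change triggers later.

```python
# A minimal sketch of the trigger/condition/behaviour pattern described above.
from dataclasses import dataclass, field
from typing import Callable, Iterable, List


@dataclass
class Trigger:
    name: str
    condition: Callable[[dict], bool]   # watches the data stream
    behaviour: Callable[[dict], None]   # acts on the user's behalf


@dataclass
class Agent:
    triggers: List[Trigger] = field(default_factory=list)

    def watch(self, stream: Iterable[dict]) -> None:
        # Persistent vigilance: evaluate every event against every trigger.
        for event in stream:
            for trigger in self.triggers:
                if trigger.condition(event):
                    trigger.behaviour(event)


if __name__ == "__main__":
    # Smart default: alert when a monitored price drops below a threshold.
    default = Trigger(
        name="price-drop",
        condition=lambda e: e.get("price", 0) < 100,
        behaviour=lambda e: print(f"Notify user: {e['symbol']} fell to {e['price']}"),
    )
    agent = Agent(triggers=[default])
    # A user-modified trigger simply replaces or adds to the defaults.
    agent.triggers.append(Trigger(
        name="price-spike",
        condition=lambda e: e.get("price", 0) > 500,
        behaviour=lambda e: print(f"Notify user: {e['symbol']} spiked to {e['price']}"),
    ))
    agent.watch([{"symbol": "ABC", "price": 95}, {"symbol": "XYZ", "price": 520}])
```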

Having humans watch for conditions and triggers and then perform the pre-defined actions would cost orders of magnitude more than the server space required to house the software. Delivering this kind of continuous vigilance is far more pragmatic and feasible with software than with manual human labour. In his book, Chris cites the Mackworth Clock Test, which taught us that even if we wanted to use humans to provide this persistent service, it isn’t something the human brain is wired for:

People are terrible at vigilance. Back in the 1940s, a researcher named Norman Mackworth made formal studies about how long people could visually monitor a system for “critical signals.” He made a “clock” with a single ticking second hand that would jump every second, but which would irregularly jump forward two seconds instead of one. He would then sit his unfortunate test subjects in front of it for two hours and ask them to press a button when the clock jumped. Then he tracked each subject’s accuracy over time. It turns out that, in general, people could go for around half an hour before their vigilance would significantly decay, and they would begin missing jumps. This loss of attention has little consequence in a testing environment, but Mackworth was studying the vigilance limits of WWII radar operators, when missing a critical signal could mean life or death. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.142)

From an engineering perspective, the challenge of agentive software isn’t an issue of skills but of operational resources. Agents, it’s argued, can be operational sinkholes. Crashed servers and other downtime conditions require human intervention, and if we build agents to mitigate that human intervention, that could spiral into meta-problems, so the argument goes. Chris points out that these problems exist today — servers go offline, software gets deprecated — we just offload them to users. We don’t need to reach far for examples: the recent release of iOS 11 created an ‘appocalypse’ of apps built with 32-bit technology. Unquestionably, agentive tech imposes an operational burden, but to a large extent, every improvement for users implies some operational burden. Ops costs need to be managed, not treated as an obstacle or a reason to avoid using agentive technology to improve the human experience.

Intelligence is a multivariate space

Humming below the surface of every Design and AI meetup is the fear of a monolithic AI that pits human intelligence against artificial intelligence. Chris argues this is a false dichotomy. Moravec’s Paradox showed us that “contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.” Put differently, machines are better than humans at some things, while humans are better at others. Chris notes that the mistake is in thinking about it as a problem of simple task allocation; it’s more complicated than that. Citing Howard Gardner’s (contentious) theory of multiple intelligences, Chris prefers to think of intelligence as a multivariate space and proposes that if we design an AI to mimic human intelligence, we’ll have missed the point.

We tend to think of an over-arching agent such as a self-driving car rather than the component parts that deliver the self-driving outcome; it’s certainly a more saleable concept, since people buy benefits, not features. Put differently, agency is multi-part: we confer agency to software over a series of tasks rather than agency over an outcome or job-to-be-done. Ultimately, how much agency is given to software should be determined by user preferences. In a future where self-driving cars dominate, it’s reasonable to assume we’ll want to impose some practice time to ensure our driving skills don’t atrophy to a dangerous state where we can’t re-take control from the software agent. It’s also reasonable to assume that individuals will want to set their own preferences for how much practice they get: the utilitarian will want the minimum recommended amount, while the enthusiast will want more.
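
One hypothetical way to picture that multi-part agency is as a set of per-task preferences rather than a single on/off switch. The task names and the practice-time preference below are my own illustration of the self-driving example, not anything Chris prescribed.

```python
# A minimal sketch of agency as a per-task setting; names are illustrative assumptions.
from enum import Enum


class Agency(Enum):
    HUMAN = "human does it"
    SHARED = "software assists, human decides"
    SOFTWARE = "software does it"


# Agency is granted over a series of tasks, not over the whole outcome.
driving_preferences = {
    "route_planning": Agency.SOFTWARE,
    "lane_keeping": Agency.SOFTWARE,
    "parking": Agency.SHARED,
    "emergency_takeover": Agency.HUMAN,
}

# The utilitarian keeps the minimum practice time; the enthusiast dials it up.
practice_minutes_per_month = 30


def describe(preferences: dict) -> None:
    for task, agency in preferences.items():
        print(f"{task}: {agency.value}")


if __name__ == "__main__":
    describe(driving_preferences)
```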

Chris expects that our agentive design practice is more likely to start with small bits that then get grouped into larger bits. Some agents’ scope will be so small that they won’t be perceived as agents at all. Take the blinkers in a car, for instance: they work with a single touch, and the salesperson at the dealership doesn’t sell them as a feature, as automatic, agentive blinkers. And yet, behind the single human touch that engages the blinker is a series of tasks and processes that get triggered. It’s only when something gets to behaviour that feels ‘more human than I thought technology could do’ that we need to have a conversation about it. Until then, software agents are just improving technologies that will likely be expressed through APIs and networked communications rather than a singular monolithic expression. The scope of artificial intelligence is still too ill-defined to get to an infrastructure of agents from the top down. It’s more likely to be emergent: lots of small, well-contained agents will be released; things will either break down, or people will introduce coordinating agents, which becomes, de facto, a weird war of escalation that will start to drive top-down organization.

Instead of the human versus artificial construct, framing our design practice space as narrow AI versus general AI offers more pragmatic, productive opportunities to explore this technological frontier responsibly. Chris’ hypothesis is that narrow AI will get safer as it gets smarter, while general AI will get more dangerous as it gets smarter — that’s the existential threat that Hawking and Musk refer to. Narrow AI offers opportunities for more people to contribute to the field of AI, and this democratizing effect is critical:

Don’t think about one agent. Think about all the agents, their capabilities, and their rules. We will be building a giant, worldwide database of behaviors, rules, and contexts by which we want to be individually treated. That stack would be impossible for any human to make sense of, but it might be the exact right thing to hand to the first artificial general intelligence.

Maybe in working on triggers and rules, we’re building the Ultimate Handbook of Helping Humans, one rule at a time. Agentive technology may be the best hope of ensuring general AI doesn’t end up being our Great Filter. Instead of [Asimov’s] four laws of robotics, we’ll have four trillion laws of humanity. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.187)

A final call to action

Designing agentive solutions means shifting “our day-to-day practice from building tools for people to do work themselves to building usable, effective, and humane agents.” And to do that effectively, ethically, and purposefully,

[w]e must build a community of practice so that we can get better at this new work. To build a shared vocabulary amongst ourselves to have good and productive discussions about what’s best. To share case studies, analysis, critiques, successes, and yes, failures. To get some numbers about effectiveness and return-on-investment to share with our business leads. To push these ideas forward and share new, better development libraries, documentation techniques, and conceptual models. To reify the ways we talk to others about what it is we’re building and why. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.192)

I couldn’t have expressed the purpose of our design and AI meetup better than Chris. So, if you’re working on software ideas that can help people, join us.

Disclaimer: The ideas and opinions expressed in this post are my synthesis of Chris Noessel’s session at a Design and AI meetup hosted by Normative, where I work. My views are not necessarily those held by Normative nor by Chris Noessel, and any technical errors or omissions are mine alone.

References

http://rosenfeldmedia.com/books/designing-agentive-technology/
https://medium.com/@christophernoessel/ani-design-skills-f0af22360570
https://articles.uie.com/new-technologies-to-consider-for-interaction/

Design, Design Theory

Agentive Technology, or How to Design AI that Works for People, Part 1

I originally wrote and published this post on Normative Design’s Ratsnest blog as the first part of a two-part synthesis of Design+AI, a monthly meetup hosted by Normative Design to explore how we design in a world augmented by AI. On the Sept 21, 2017 edition of the meetup, Chris Noessel, author of Designing Agentive Technology, joined us to shed some light on the area of designing agentive technology; specifically:

  • What is agentive tech?
  • What is narrow AI versus general AI?
  • How do designers need to modify their practice to design agentive tech?
  • Does agentive tech pose unique operational burdens?

Before the conversation got going, the group indulged me by playing a game of “what is and isn’t agentive tech”. Players considered a pair of options and identified which was agentive.

Unsurprisingly, only one person correctly identified the agentive tech. Hint: in most of the pairs, both options are agentive to some degree; some pairs contained only one example of agentive tech, and one pair contained a super artificial intelligence. So be warned…

What the heck is agentive technology?

I asked Chris to level-set on what he means when he says “agentive technology”.

Agentive tech was an idea that arose from two forces coming together. On one end, Chris had been challenged by one of his designers on his vision of the future from a designer’s perspective. On the other, Chris had started to see a pattern in his own work over the past two decades.

In short, it’s a new mode of interaction enabled by recent advances in narrow AI, in which the technology does something on behalf of the user, persistently and in a hyperpersonalized way. To understand more, we have to go back in time a bit.

What is the largest possible context for the world of interaction design?

Chris posits that the history of interaction design starts in WWII with human factors engineering. Highly trained and competent pilots were crashing planes. We learned through research that the machines were just too complicated: they represented a level of cognitive load that competed with other tasks and objectives and overwhelmed the pilots. The legacy of that research is human factors engineering.

If interaction design started with HFE, where’s the other end? When will our jobs be moot? Chris hypothesizes that “General” AI is the end of our jobs as interaction designers. Once we have something that can do what we do but is smarter and faster and can collaborate with the hive mind around the world, that’s pretty much the end of interaction design as a specialist job (if not, well, almost all jobs). The question is: where are we now as we head into general AI? How close or far are we from it?

A new mode of interaction: outsourcing the work of achieving outcomes

Over the course of two decades, Chris has worked on various solutions that all involved outsourcing work to software in order to achieve outcomes. Based on his work as well as his personal experiences of different consumer devices, Chris started to see an emergent pattern. So, what do underwater science robot towers, automatic cat feeders, robo-investors, and Roombas have in common? Let’s look at each of these examples.

At Microsoft, he worked with the University of Washington to design underwater robots with sensors for seismic measurements. The robots would be pre-programmed to watch for certain things and then travel to different areas to collect different sorts of data, areas where humans couldn’t go with measuring tools in hand. These robots weren’t directly controlled by scientists, but did work for them.

When he travels for work, Chris uses an automatic cat feeder to keep his cat from going hungry. That said, early-edition cat feeders had a limitation: when they worked, they worked, and when they didn’t, they didn’t, but you wouldn’t know either way, so either your cat would go hungry or you’d be worried about your cat going hungry. A good feeder should put food in the cat’s belly and assurances of the same into the user’s attention.

His work on robo-investor software taught him that despite knowing the algorithm had data and reaction times better than any human’s, people still wanted to see if they could beat it. People wanted room for play, for serendipity.

Roombas promised set-it-and-forget-it convenience for a household chore that many of us would rather not do. Who doesn’t love the feeling of coming home to clean floors without having had to do the work? The Roomba is not a fancy way for you to do vacuuming.

Granting agency to software

All these things started to feel like they were “of a piece”. There’s a pattern that felt like “I’m not doing the work. It’s doing the work.” Chris observed that in the past, he would design things to help people do work, build tools for people to do work, but this new type of thing was different: he was telling things how to do the work, and they would handle it from that point forward.

For example, even the humble automatic feeder wasn’t a tool to better feed his cat. Chris told it when to dispense food, told it how much food to dispense, and to continue doing so until further notice. The Roomba isn’t a tool for us to push a vacuum cleaner around: we tell it when to clean, and it does. A robo-investor isn’t a tool for data and information: we tell it our financial goals, and from that point forward, it does its best to achieve them. We can still look in and provide feedback, change up a few parameters, but it continues operating on its own. We are granting these objects agency.

Disambiguating agentive technology

Chris saw an emergent pattern: the things he was designing and using were not automatic, because automatic things didn’t need his attention at all. Think of a pacemaker as a good example of automatic tech. If the human needs to get involved, automation has failed, and that’s not a design problem but an engineering problem.

The types of things we’re talking about are not helping us do things ourselves. Smart assistants help us do things. Agentive tech, in contrast, does things for us. For example, you can tell Google Keep to remind you to do a task when you’re at a location and/or at a certain time, but that’s all it does: it reminds you but doesn’t do it for you. It’s an agentive alert because it watches for your location in space and time, but it’s not like those other agents because you, the human, still have to act on the reminder. This notion of “help me do things” versus “do things for me” (or assistants versus agents) is a way for us to explore and understand a new class of interactions.
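
To make the “help me do it” versus “do it for me” distinction concrete, here’s a tiny, hypothetical Python sketch using a bill-paying example of my own (not one from the talk): both the assistant and the agent watch the same trigger, but only the agent acts on it.

```python
# A minimal sketch of assistive versus agentive behaviour; names are illustrative.
from datetime import date


def bill_is_due(today: date, due: date) -> bool:
    return today >= due


def pay_bill() -> None:
    print("(paying bill...)")  # stand-in for a hypothetical payments call


def assistant(today: date, due: date) -> str:
    # Assistive: surfaces the task; the human still has to do it.
    if bill_is_due(today, due):
        return "Reminder: your bill is due today."
    return "Nothing to do yet."


def agent(today: date, due: date) -> str:
    # Agentive: performs the task, then reports back.
    if bill_is_due(today, due):
        pay_bill()
        return "Your bill was paid; here's the receipt."
    return "Watching for the due date."


if __name__ == "__main__":
    print(assistant(date(2017, 9, 21), date(2017, 9, 21)))
    print(agent(date(2017, 9, 21), date(2017, 9, 21)))
```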

This class or pattern of interactions is marked by software that takes our directives, then implements on our behalf. As he cast about for a way to describe this class of technology, Chris looked for the adjectival form of “agent”, which turned out to be “agentive”. Agent-ive…saying it like that helps people realize it has to do with agents. In fact, in Japan, agentive technology is translated as “agent-based AI”.

Within the context of the general trajectory of general artificial intelligence that our industry is on, this pattern of agentive interactions is a “weak” kind of AI called narrow AI. It’s narrow because it’s smart in narrowly defined domains. It can’t generalize its knowledge to new domains.

Agentive technology is persistent, always-on, domain-specific narrow AI that acts on its user’s behalf in a hyper-personal way.

AI versus Narrow AI versus General AI

Invariably, the question of “what is AI” reared its head. From Chris’ vantage point, asking “what is AI?” is both interesting and not interesting. It’s not interesting because the term AI is too ambiguous to be useful or pragmatic. We’ve been talking about it since the 1950s and still can’t quite agree on what we want it to mean. For starters, the term “artificial” somehow implies fake, or at least made by humans. Maybe? As for the term “intelligence”, we don’t have a grasp on what that is, even after 100 years of studying it. And now we’ve combined these two notions into something we call “artificial intelligence”, which raises the question: what does that even mean?

Attaching an adjective to the term “AI” perhaps gets us a bit closer to understanding what we’ve been doing, the implications and ramifications, and more important, our responsibilities. Thinking about it in terms of “general AI” versus “narrow AI” begins to unpack some of the types of “artificial intelligence” work we’ve been doing. We can posit that “General AI” refers to a human-like intelligence; specifically, an intelligence that can generalize from domain to domain. Not only can it learn, but it can learn across domains. Roomba is not that: it can’t generalize what it’s doing to other domains. It can only vacuum.

When we use the term “general AI”, we tend to mean human-like. Narrow AI is the stuff that comes before it; it’s an asymptotic approach to general AI or human-like intelligence. Narrow AI is the suite of technologies that are improving and getting human-like in their intelligence in a specific narrow domain.

As a designer, I take the maker’s approach: learn about the thing by making it, playing with it, testing its limits. To channel Richard Feynman, “What I cannot create, I do not understand.” While it’s fun in a hurts-my-brain way to participate in armchair philosophy about a monolithic AI, its breadth and non-specificity leaves me struggling to answer the “so what does it mean for me” and “now what do I do” questions. It also leaves me no closer to clarifying my own thoughts on the ethical implications of technology. Framing the mission in terms of narrow AI, however, does help me get down to where the rubber meets the road and to begin to understand what it really means to use “artificial intelligence” to solve human problems.

In Part 2, we’ll explore how our design practice needs to adapt when designing agentive technology.

References

http://rosenfeldmedia.com/books/designing-agentive-technology/
https://medium.com/@christophernoessel/ani-design-skills-f0af22360570
https://articles.uie.com/new-technologies-to-consider-for-interaction/

Disclaimer: The ideas and opinions expressed in this post are my synthesis of Chris Noessel’s session at a Design and AI meetup hosted by Normative, where I work. My views are not necessarily those held by Normative nor by Chris Noessel, and any technical errors or omissions are mine alone.

Design, Design Process, Design Theory

A Framework to Design for Impact

Design serves a purpose, solves a problem, addresses a need. This gives us the natural boundaries to create a design framework that we can use repeatedly to deliver results. To make an impact as designers, we must be motivated by delivering results. The good news is, there’s a repeatable process we can use that many designers before us have proven.

To design for impact, we need answers to three main questions:

  • Whose problem is it?
  • What’s the real problem we’re trying to solve?
  • How will we know if we’ve succeeded?

Whose problem is it?

Our goal here is to build empathy: for the business, for customers, for the technical landscape. We’re seeking to understand what other people are going through and what the system is capable or incapable of in order to solve problems and find solutions. This desire will drive our commitment and creativity. But we don’t want to get so immersed that we lose objectivity.

We need to walk that mile in someone else’s shoes, then put our own back on.

To understand the problem we’re trying to solve, we need context: business and customer context, as well as any delivery constraints. Often, business stakeholders ask for a feature to solve a customer problem when, in fact, it’s a business problem. Or worse, a problem that customers experience is actually one that’s created by the business, rather than a customer need.

Understand the business context

Why is the business asking for a feature? They almost always frame it as a feature rather than a problem. It’s our job as designers to de-construct that feature into a problem statement in order to unpack the motivations.

  • Why is the business asking for the feature / problem to be solved in the time frame that they’re asking?
  • Is there competitive pressure?
  • Is it a first-mover advantage that the business wants to take advantage of?
  • Is that team’s funding dependent on this problem being solved?
  • Etc.

For every “crazy” request, there are usually deeper reasons. Find out what those reasons are. As designers, we’ve been conditioned to have empathy for users. Go one step further: build empathy for stakeholders. This delivers a one-two punch: first, building empathy for stakeholders helps us unpack the reasons behind the seemingly crazy requests; more important, the empathetic relationships we build foster mutual trust and may actually empower us to dial down the crazy.

Understand the customer’s context

Dig into the motivations behind the requested feature or the problem statement from a customer’s perspective:

  • Why are customers asking for the feature?
  • What are the underlying motivations?
  • What conditions generated the problems that customers are experiencing?
  • Could the problem at hand be eliminated by resolving an issue upstream?
  • Etc.

Understand the delivery constraints

Often, especially in enterprise settings, the most usable/user-friendly/customer-centric solution is not feasible for a variety of reasons:

  • The existing antiquated technology infrastructure can’t support the solution
  • The solution runs counter to the organization’s business model
  • There’s no business appetite to pay for the ideal solution
  • The technology needed to support the solution doesn’t exist right now
  • Etc.

We can sit around and bemoan the delivery constraints, or we can find ways past or around them. It’s not helpful to think in terms of ideal versus compromise. The real world is all about costs and benefits, pros and cons, give and take: our job as designers is to find a balancing point that delivers a user experience that the business can feasibly fund and that engineering can technically enable.

What’s the real problem we’re trying to solve?

The business, customer, and technical context we gathered gives us the data we need to triangulate the real problem we’re trying to solve.

The project might have started off with one request, but in digging behind the request, we may uncover something deeper. True story: once upon a time, the business asked for a ‘download PDF’ feature, but digging deeper revealed that what customers actually needed was a spreadsheet they could manipulate for the custom queries they needed to run. Without the research, we would’ve agonized over a ‘download PDF’ feature, including trying to figure out how to message customers whose PDFs take 24 hours to generate, when all they really needed was a .csv export. In fact, because a PDF is a static artifact, customers were copying and pasting the data from the PDF into a spreadsheet.

Defining the right problem is more than half the battle. We can’t reliably deliver an effective solution if we focus on the wrong problem.

How will we know we’ve succeeded?

Building in a feedback loop is critical to the success of any design solution.

  • Did we frame the problem statement correctly?
  • Did our solution create any new problems?
  • Did our solution solve the problem?

Always be testing

In the problem framing stage, we talk to customers to validate our problem statements. We believe our idea Y solves problem X. First, we need to be sure that problem X represents a real business opportunity. Many ideas arise out of problems we’re solving for ourselves. For our solution to be financially viable, we need to find out if enough other people share the same problem and are willing to pay for a solution. In other situations, businesses create problems for their customers and then try to sell a new solution for those problems; is it possible to eliminate the original problem to begin with? Perhaps the real problem to be solved is further upstream.

In the solutions framing stage, we test our solutions to identify any usability issues before we release them into the market. If our solutions are complicated, hard to use, or otherwise unusable, we haven’t actually solved the problem. Testing solutions before we commit the time and resources to build them is a gut check: the goal is to identify as many usability issues as possible. Better that we discover and address usability issues than have the support team overwhelmed by customer complaints.

Analytics

In the delivery stage, we need to bake measurement points into our solution to feed us the data that tells us whether our solution worked. These measurement points can range from specific things we want people to do, such as calls-to-action, to behind-the-scenes tools such as heat map and click analytics tracking.
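
As a sketch of what baking in a measurement point might look like, here’s a hypothetical track() helper placed right next to the behaviour we care about; in practice, this would wrap whatever analytics service the product already uses.

```python
# A minimal sketch of a measurement point; track() is a hypothetical helper.
import json
from datetime import datetime, timezone


def track(event: str, **properties) -> None:
    # Stand-in for an analytics call (e.g., posting to a collector endpoint).
    payload = {
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "properties": properties,
    }
    print(json.dumps(payload))


def export_report(fmt: str) -> None:
    # The measurement point sits right next to the behaviour we care about.
    track("report_exported", format=fmt, source="dashboard")


if __name__ == "__main__":
    export_report("csv")
```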

Often, we release and walk away. As designers, we should hold ourselves and our work more accountable. From a career perspective, holding ourselves accountable gives us credibility. If we work as consultants, holding ourselves accountable is also good for business: it presents natural opportunities to check in with clients after the project is done.

Conversion

If we can’t answer the following two questions, we need to go back to problem-framing:

  • What do we want people to do?
  • How will we know if they’ve done it?

Being clear on what we want people to do helps us understand how to measure success. Maybe it’s “sign up for our services”; “buy our products”; “create better onboarding so that customers don’t have to waste time calling us for support”. Sometimes, it can even uncover business gaps or opportunities: for instance, if we want people to go to a brick-and-mortar store to complete an action, we need to be prepared to handle the increased store traffic or risk alienating hard-won new customers.

Easier said than done

Three steps. Is that all? Yes, these are admittedly three giant steps. And they’re not easy either. However, they offer a structured, foolproof way to get started. For projects that are complex, this 3-question design framework can help us manage the chaos. For projects that appear simple, this 3-question framework can sometimes expose underlying complexities early enough for us to organize around. It’s a framework that has served me well and helped me design for impact.

Design, Design Process, Shop Talk, UX

Held Hostage: the Forgotten Users of Software for Business

I had to write down step-by-step instructions so I could remember what to do.
Usage was incredibly frustrating. The dark green with black text is impossible to read at a glance, and the contrast with the bright yellow is really harsh if you’re trying to find something. You can resize the columns and sort by the columns, but resizing and sorting needs to be done every time you come back to the screen. So the filter option is the only realistic option.

Meet Valerie and Michelle – two individuals I happen to know.

I reached out to them on Facebook. They don’t describe their experience with Facebook in the same way.

The thing is, they’re still Valerie and Michelle whether they’re using software for work or for pleasure.

Business software: we have no choice but to use it.

Most business software is badly designed. Whether it’s custom software home-brewed by engineers and business analysts at big companies or professional design software like Photoshop, users of these products face steep learning curves. Those steep learning curves signal opportunities for improvement, if not downright disruption.

It’s easy to find companies that peddle consumer-grade software and care about design. The resulting usability of well-designed products reaps measurable dividends. A key reason these products succeed is the focus on the user. Before “user experience” was all the rage, “user-centred design” was the big awakening. In all the excitement, business-grade software – and the humans who are forced to use that software – were left behind and forgotten.

It’s hard to find designers with the appetite and aptitude to design business software. Paul Adams’ pithy coinage of the phrase “the dribblization of design” reflects the persistence of design being perceived as visual or graphic design. It’s a mental model that plagues designers and non-designers alike.

I’m certainly not the first or the only one to have raised this point.

Dylan Willbanks recently noted, “…when you talk to the leadership of these enterprise companies, they want a consumer-grade experience built into their SaaS-based billion dollar applications. So they bring in consumer-grade user experience designers, raised on user-centered design, taught that “innovation” is supreme. Bolstered by a “make it pretty” attitude in the executives, they set to work trying not for Olive Garden but more Eleven Madison Park — locally foraged! Haute cuisine! Sous vide! And their resulting designs end up emphasizing the wrong things. Icons get prettier. Cool new animations in a cool new iOS version of the application. The aesthetics are greatly improved. But the underlying functionality is still a mess, performance is still slow, and even as they’re defending their slick new mobile app[,] there’s a nagging doubt whether someone really does want to review complex spreadsheets on their phone. The drive is on presentation, but experience driven design goes by the wayside.”

Tom Hobbs, back in 2015, issued a ‘call to arms’ for improving the design of enterprise software. And earlier this year, Facebook’s Margaret Stewart was at O’Reilly’s Design Conference trying to rouse the troops to take up the cause of designing for business users. And yet others, such as Uday Gajendar, have felt the need to be apologists for heeding the call.

Business users are people, too!

I’d like to add my voice to the ‘call to arms’ by making the case from the perspective of the users.

The design community has made great strides in improving the experiences for many people when they use software for pleasure. We can do the same by remembering that these same people also need to use software for work. In fact, they’re the same people for whom we designed Pinterest, Uber, and all the other digital darlings du jour.

What if the software they use for work sucked a little less?

So, what’s the field of play? What’s software for business? Most people define it as software built by companies for use internally by employees. I would also include software designed, built, and sold by companies to their customers, as well as software designed and built by companies for use by their customers. Most of this stuff is terrible; some of it is really terrible. And the kicker is that lots of time and money was involved in producing this terrible stuff to be inflicted on hapless people who have to use this software every mind-numbing, rage-inspiring day.

We can do better. We should do better.

How can business software be more human-optimized?

Back to first principles. Our designer’s toolkit is still valid, though some of those tools may need a bit of imagination and creativity to be adapted to handle the more complex data models and mental models of the business landscape:

  • Develop empathy for the people who have to use it
    • users of business software are humans, too
    • go beyond tasks to understand the real reason they’re using the software
    • understand the workarounds they’re coming up with to get through their day and still catch their commuter train home despite the bad UX of the software they need to work with
    • create taxonomies, IA, and mental models that users can understand, rather than relying on the engineering vocabulary used to describe the system;
  • Design for the network
    • no enterprise software exists in isolation: it’s always connected to something upstream, something downstream, and often something adjacent
    • work flows in a single application typically traverse multiple systems, some of which aren’t even digital;
  • Consider AI augmentation
    • consider the possibility of designing parts of the system to offload work that scripts and bots can do, in order to free up humans to do the work that only humans can do; a minimal sketch follows this list.
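
Here’s the minimal sketch promised above: a hypothetical bot that offloads a routine triage step (keyword-based tagging of support tickets) and leaves the ambiguous cases for human judgment. The categories and keywords are illustrative only.

```python
# A minimal, hypothetical sketch of offloading routine triage to a script.
ROUTINE_TAGS = {
    "password": "account-access",
    "invoice": "billing",
    "crash": "stability",
}


def triage(ticket: str) -> str:
    for keyword, tag in ROUTINE_TAGS.items():
        if keyword in ticket.lower():
            return tag            # the bot handles the routine case
    return "needs-human-review"   # humans keep the judgment calls


if __name__ == "__main__":
    for t in ["Can't reset my password", "Strange totals on invoice #442", "Feature idea"]:
        print(t, "->", triage(t))
```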

Business software is the next frontier of UX design, rich with opportunities and relatively free of competition for right-minded designers to make our mark.