I originally wrote and published this post on Normative Design’s Ratsnest blog as the second part of a two-part synthesis of Design+AI, a monthly meetup hosted by Normative Design to explore how we design in a world augmented by AI. At the Sept 21, 2017 edition of the meetup, Chris Noessel, author of Designing Agentive Technology, joined us to shed some light on designing agentive technology; specifically:
- What is agentive tech?
- What is narrow AI versus general AI?
- How do designers need to modify their practice to design agentive tech?
- Does agentive tech pose unique operational burdens?
Part 1 of my recap covered the questions of what agentive technology is, along with a discussion of narrow versus general AI. Part 2 ventures into the pragmatic territory of what it all means for us as designers in the field.
What changes in our practice? The good news is: not a lot.
So, from a practice perspective, what are the important differences when we design for agentive technology? What parts of our design practice — process, approach, methods, and perspective — need to be adapted? What should we be thinking about and watching for? What are the gotchas?
Chris notes that what’s missing from his book Designing Agentive Technology is explicit commentary on what changes in our practice. The good news is: not a lot changes. Research should be handled identically. We still need to go out and talk to the people who will be served by agents. We still need to understand how the thing in question gets done. We still need to understand the goals of the people using the agent to get things done. None of that changes.
That said, Chris sees three areas that do need to change, two of which are near term, and one that’s longer term.
1. Asking about the future
During research, ask what the future could look like for the user. When designing tools, this is an optional question because we tend to face limitations in what we can do for people. When designing agentive tech, however, this is the mission. Let’s imagine we’re designing a solution for our customer Carla. We could frame a question along the lines of “Carla, if we had infinite money to hire you an assistant who’s infinitely knowledgeable and incredibly fast, how would you hope that assistant would help you?” This uncovers what people don’t like doing: the stuff we can solve with agentive solutions. Another question we could ask is “What would you still do yourself?” This uncovers what people love doing, tasks or outcomes that we probably shouldn’t hand to an agent (perhaps an assistant would be better, something to help people do it themselves). And if we do want to design an agentive solution for things Carla enjoys doing, we need to take particular care in how we design it. These types of questions tend to sit at the tail end of our normal research routine; with agentive technology, they move front and centre.
2. Multiple design layers, leading to more complexity
Chris acknowledges that in his book, agentive tech comes across as a pure thing. In practice, most sophisticated technologies operate in layered modes: an agentive mode sits near an automated mode, which sits near an assistive mode, which sits near a tool mode that hands control back to humans. For example, during his robo-investor research, one participant wanted to set aside 10% of his theoretical money to see if he could beat the AI, reasoning that there might be something special in humans that gives us a shot. In reality, there probably aren’t many pure agents or purely automatic things.
In designing agentive tech, we’ll need to keep in mind four modes:
- There are times when we want agents to handle things routinely and alert us if we need to make decisions or if there are problems;
- There are times when we only want to be notified when things are fundamentally broken;
- There are times when we want to do it and want help; and
- There are times when we want to do it ourselves and don’t want help.
These four modes are likely to be present in sophisticated, and certainly enterprise-level, projects, and thinking through them is going to be complicated. After research, our modified design process should work through the four modes in turn. First, design it as a manual tool: how would Carla do it herself with no help? Then add the assistive layer: how does the AI help her do it, and how does she turn it off if she wants to? Then, how might we make it agentive, taking the burdensome or routine parts off Carla so she can pay attention to the things she loves? Finally, what does the autonomous mode look like? If the engineering team is confident, how might we let Carla pass it off into autonomous mode, and how does she regain control? These are complicated layers to design for, but they’re necessary.
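To make the layering concrete, here’s a minimal sketch of how the four modes might be modeled in software: an escalating scale of automation that the user can move up voluntarily and drop out of at any time. All names here are hypothetical illustrations, not anything from Chris’ book.

```python
from enum import Enum


class Mode(Enum):
    TOOL = 1        # user does the work; software stays out of the way
    ASSISTIVE = 2   # software suggests; user decides and acts
    AGENTIVE = 3    # software handles routine cases; alerts user on exceptions
    AUTONOMOUS = 4  # software acts alone; user hears about it only if it breaks


class LayeredFeature:
    """A feature whose automation level the user can raise or lower at will."""

    def __init__(self, mode: Mode = Mode.TOOL):
        self.mode = mode

    def escalate(self) -> None:
        """Hand one more layer of the work to the software (user opts in)."""
        if self.mode is not Mode.AUTONOMOUS:
            self.mode = Mode(self.mode.value + 1)

    def take_back_control(self) -> None:
        """Drop straight back to manual: the user can always regain control."""
        self.mode = Mode.TOOL


feature = LayeredFeature()
feature.escalate()   # TOOL -> ASSISTIVE
feature.escalate()   # ASSISTIVE -> AGENTIVE
print(feature.mode)  # Mode.AGENTIVE
feature.take_back_control()
print(feature.mode)  # Mode.TOOL
```

The design point the sketch encodes is the last question in Chris’ process: every step up the scale needs a matching path back down, so Carla is never trapped in a mode she didn’t choose.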
3. Interaction design will get more complicated when users and agents proliferate
In the longer-term future, users and agents will proliferate. And they’ll interact with one another, and likely with peer groups outside their immediate domains. These interactions will make design more complicated. How will we account for the touchpoint experiences and the outcomes when users and agents talk to one another? For instance, Chris wonders: is his health agent going to rat him out to his insurance agent? I expect the answer likely lies in who paid for the creation and maintenance of the agent.
Selling agentive tech
Conceptually at least, business leaders like agentive tech: there are interesting opportunities for up-sell, cross-sell, lowering the cost of human resources for routine tasks, etc. In practice, getting funding is harder. The response is typically, “Cool idea! OK, let’s get back to building this piece of junk.” How do we get past this inertia?
Find an executive champion
Chris suggests that we use the same design methodology he outlined earlier. First, design the manual solution. Get agreement that the manual solution solves the pains for the user and for the business. Once we have that agreement, we can suggest doing one better with an agentive solution. So, for example, instead of a tool that Carla uses to locate the information she needs, the agentive solution brings the information to her, with options for varying degrees of intervention. If we can get agreement on the benefits the agentive mode provides, we can then talk to engineering about the costs associated with the four modes: tool, assistive, agentive, and autonomous. Expect engineers to prioritize the tool over the agent, so internal champions are going to be mission-critical.
First mover advantage
In Wired for War, P.W. Singer talks about how technology is influencing the world’s concept of war. One of his concepts is “threshold technology”: once we adopt a technology, we’re loath to go back. Drones are an example: once a culture adopts drones for warfare, why would it ever risk flesh and blood again? The notion applies to agents, too. Once you have a Roomba, pulling out the Dyson feels like a chore; the Roomba is a threshold tech.
Likewise, agentive tech is a threshold concept. It confers a competitive advantage. The first player in a domain to introduce functional agents to their users will have their attention and loyalty, or from my perspective, at least establish the inertia that prevents customers from switching or leaving. If business leaders miss out on the first mover advantage, they’ll spend time scrambling to catch up and lose market share.
Operationalizing agentive tech
What exactly does it mean to build agentive technology? In practical terms, it doesn’t require fancy specialized coding skills. What we’re building is software that watches data streams for triggers and then acts on those triggers. Developers build the machinery that watches data streams for a set of conditions or scenarios; designers make the default triggers smart, but also design the controls and flows that let users modify those triggers and conditions. What remains is triggers and behaviours to enact, which sits at the core of object-oriented programming.
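That trigger-and-behaviour loop can be sketched in a few lines. This is a minimal illustration of the pattern described above, not code from the book; the names (`Trigger`, `watch`, the temperature example) are all hypothetical, and the key design affordance is that a user could swap out either the condition or the behaviour.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Trigger:
    """A user-adjustable condition paired with a behaviour to enact."""
    name: str
    condition: Callable[[float], bool]  # designers ship a smart default...
    behaviour: Callable[[float], str]   # ...users can modify either piece


def watch(stream: Iterable[float], triggers: List[Trigger]) -> List[str]:
    """Watch a data stream; enact the behaviour of every trigger that fires."""
    actions = []
    for reading in stream:
        for t in triggers:
            if t.condition(reading):
                actions.append(t.behaviour(reading))
    return actions


# Hypothetical example: an agent watching a temperature feed.
alert_when_hot = Trigger(
    name="overheat",
    condition=lambda c: c > 30.0,            # default threshold
    behaviour=lambda c: f"alert: {c:.1f}C",  # default action
)

print(watch([22.0, 31.5, 28.0, 33.0], [alert_when_hot]))
# -> ['alert: 31.5C', 'alert: 33.0C']
```

Nothing here is exotic: it’s ordinary event-driven plumbing, which is the point Chris makes about the skills involved.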
Having humans watch for conditions and triggers and then perform the pre-defined acts would cost orders of magnitude more than the server space required to house the software. Delivering this continuousness is more pragmatic and feasible with software than with manual human labour. In his book, Chris cites the Mackworth Clock Test, which taught us that even if we wanted humans to provide this persistent service, it isn’t something the human brain is wired for:
People are terrible at vigilance. Back in the 1940s, a researcher named Norman Mackworth made formal studies about how long people could visually monitor a system for “critical signals.” He made a “clock” with a single ticking second hand that would jump every second, but which would irregularly jump forward two seconds instead of one. He would then sit his unfortunate test subjects in front of it for two hours and ask them to press a button when the clock jumped. Then he tracked each subject’s accuracy over time. It turns out that, in general, people could go for around half an hour before their vigilance would significantly decay, and they would begin missing jumps. This loss of attention has little consequence in a testing environment, but Mackworth was studying the vigilance limits of WWII radar operators, when missing a critical signal could mean life or death. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.142)
From an engineering perspective, the challenge of agentive software isn’t one of skills but of operational resources. Agents, it’s argued, can be operational sinkholes: crashed servers and other downtime conditions require human intervention, and if we build agents to mitigate that human intervention, it could spiral into meta-problems, so the argument goes. Chris points out that these problems exist today (servers go offline, software gets deprecated); we just offload them to users. We don’t need to reach far for examples: the recent release of iOS 11 created an ‘appocalypse’ for apps built with 32-bit technology. Unquestionably, agentive tech imposes an operational burden, but to a large extent, every improvement for users implies some operational burden. Ops costs need to be managed, not treated as an obstacle or a reason to avoid using agentive technology to improve the human experience.
Intelligence is a multivariate space
Humming below the surface of every Design and AI meetup is the fear of a monolithic AI that pits human intelligence against artificial intelligence. Chris argues this is a false dichotomy. Moravec’s Paradox showed us that “contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.” Put differently, machines are better than humans at some things, while humans are better at others. Chris notes that the mistake is in thinking of it as a problem of simple task allocation; it’s more complicated than that. Citing Howard Gardner’s (contentious) theory of multiple intelligences, Chris prefers to think of intelligence as a multivariate space and proposes that if we design an AI to mimic human intelligence, we’ll have missed the point.
We tend to think of an over-arching agent, such as a self-driving car, rather than the component parts that deliver the self-driving outcome; it’s certainly a more saleable concept. After all, people buy benefits, not features. Put differently, agency is multi-part: we confer agency on software over a series of tasks rather than agency over an outcome or job-to-be-done. Ultimately, how much agency is given to software should be determined by user preferences. In a future where self-driving cars dominate, it’s reasonable to assume that we’ll want to impose some degree of practice time to ensure our driving skills don’t atrophy to a dangerous point where we can’t retake control from the software agent. It’s also reasonable to assume that individuals will want to set their own preferences for how much practice they get: the utilitarian will want the minimum recommended amount, while the enthusiast will want more.
Chris expects that our agentive design practice is more likely to start with small bits that then get grouped into larger bits. Some agents’ scope will be so small that they won’t be perceived as agents at all. Take the blinkers in a car: they work with a single touch; the salesperson at the auto dealership doesn’t sell them as a feature, as an automatic, agentive blinker. And yet, behind the single human touch that engages the blinker is a series of tasks and processes that get triggered. It’s only when something exhibits behaviour that feels ‘more human than I thought technology could do’ that we need to have a conversation about it. Until then, software agents are just improving technologies, more likely to be expressed through APIs and networked communications than as a singular monolithic expression. The scope of artificial intelligence is still too ill-defined to get to an infrastructure of agents from the top down. It’s more likely to be emergent: lots of small, well-contained agents will be released; things will either break down, or people will introduce coordinating agents, which becomes, de facto, a weird war of escalation that starts to drive top-down organization.
Instead of the human versus artificial construct, framing our design practice space as narrow AI versus general AI offers more pragmatic, productive opportunities to explore this technological frontier responsibly. Chris’ hypothesis is that narrow AI will get safer as it gets smarter, while general AI will get more dangerous as it gets smarter — that’s the existential threat that Hawking and Musk refer to. Narrow AI offers opportunities for more people to contribute to the field of AI, and this democratizing effect is critical:
Don’t think about one agent. Think about all the agents, their capabilities, and their rules. We will be building a giant, worldwide database of behaviors, rules, and contexts by which we want to be individually treated. That stack would be impossible for any human to make sense of, but it might be the exact right thing to hand to the first artificial general intelligence.
Maybe in working on triggers and rules, we’re building the Ultimate Handbook of Helping Humans, one rule at a time. Agentive technology may be the best hope of ensuring general AI doesn’t end up being our Great Filter. Instead of [Asimov’s] four laws of robotics, we’ll have four trillion laws of humanity. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.187)
A final call to action
Designing agentive solutions means shifting “our day-to-day practice from building tools for people to do work themselves to building usable, effective, and humane agents.” And to do that effectively, ethically, and purposefully,
[w]e must build a community of practice so that we can get better at this new work. To build a shared vocabulary amongst ourselves to have good and productive discussions about what’s best. To share case studies, analysis, critiques, successes, and yes, failures. To get some numbers about effectiveness and return-on-investment to share with our business leads. To push these ideas forward and share new, better development libraries, documentation techniques, and conceptual models. To reify the ways we talk to others about what it is we’re building and why. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.192)
I couldn’t have expressed the purpose of our design and AI meetup better than Chris. So, if you’re working on software ideas that can help people, join us.
Disclaimer: The ideas and opinions expressed in this post are my synthesis of Chris Noessel’s session at a Design and AI meetup hosted by Normative, where I work. My views are not necessarily those held by Normative nor by Chris Noessel, and any technical errors or omissions are mine alone.