The Secret Life of Information Architecture: the Space Between Tools and Theories

What’s IA?

What comes first to mind when we think of IA? Spreadsheets, card sorts, tree tests: these are the visible tools of our trade. All this sorting, grouping, and naming is hard-wired into our brains. It’s one of the first things we’re taught to do to develop our cognitive skills, to teach us how to parse our experiences.

Child playing with a game set of multi-coloured shapes to be sorted and fitted into different pegs

Family trees and org charts show us how we see ourselves. They impose a specific order and a structure. And they tend to be hierarchical, which should give us all a clue about the underlying organizing principle: power.

At a primal level, we humans fear chaos.

Confronted with disorder, most of us feel deeply uncomfortable. We say things like “What a mess. I don’t understand what’s going on.” As designers, we see a mess, and we’re pathologically compelled to want to order it. Unsurprisingly, Chaos was the first thing to exist in the Greek cosmogony. From there on out, we’re just trying to manage the beast.

There are quite a few visualizations of the tree of life. I particularly like this one: https://www.onezoom.org/. There’s also https://itol.embl.de/.

Screen shot of onezoom.org tree of life visualization in the shape of a tree

Trees of life are a manifestation of our attempt to IA our most primal questions. We want to know what we are and where we are in relation to everything else on Earth.

IA is about wayfinding & sense-making.

We use IA to find our way to make sense of stuff.

Like much of design, our craft is about more than our tools. IA is about more than labeling, classification, and navigation schemes. More than card sorting and tree testing. More than what to put into navigation menus.

Before we dive into design tasks and theories, let’s step back to consider the whole system. Beneath the taxonomy are actors and objects. We need to find out who and what they are by starting with some questions:

  • Who’s in it?
  • Who’s related to whom?
  • How are the whos related to one another?
  • What’s in it?
  • How are the whats related to one another?
  • How are the whos and the whats connected?

When we go to work, we’re still human.

Once upon a time, I was part of a team tasked with redesigning one of our company’s websites. Imagine you went to work today and found your part of the business in the top-level navigation.

Then the website gets redesigned. The design team comes along and shows you the new site, and you see that your part of the business is now buried a couple of levels deep in the menu. The designer proudly points out that the new navigation menu is customer-centric.

Meanwhile, your mind is racing. You’re thinking, “Why is my part of the business buried deep in the navigation?” And “But we can’t service our customers the way the navigation menu is promising.” “Am I getting re-org’d?”

How might you react to this redesign?

Visible hierarchies and labels are visual representations of underlying systems.

It should surprise no one that it took 18 months to get everyone aligned on what goes in the navigation menu. And that customer-centric IA didn’t make the cut.

What is going on?

When we’re asked to design navigation menus, we need to remember that it’s bigger than the design of the surface. The real design work of IA and navigation menus is in probing these questions: What’s the organization’s business model? Or if it’s a not-for-profit, what’s the funding model?

In crime shows and movies, they like to say “follow the money”. That’s solid advice for us, too.

A purely customer-centric IA might not align with how the organization is set up to work, making it hard or impossible for the IA to come to life. I’m not saying it can’t, but what you’re actually re-designing is not the navigation menu or the IA of the site: you’re re-designing the organization.

My argument might seem self-evident, but then I hear talks and read blog posts critiquing navigation schemes as nonsensical because they’re org-centric. Well, before we critique, we should ask: did or could the designer address the underlying social, political, and operational structure?

When things appear nonsensical, it’s worth zooming out to see the broader context.

A few Fridays ago, my design team took a day off-site to plan our 2020 individual practice goals. One of the activities involved the jars in the photograph.

Row of swing-top glass jars with gaskets of different colours

What do you notice about them?

When I went to buy those jars the night before our off-site, all I saw were the colourful lids. I got very excited and grabbed two dozen jars, quickly developing a mental model of organizing them by the colour of the lids. The next day, I was setting up the jars in our workshop room.

That’s when I realized the colour of the lids wasn’t the only thing different. The jars also had DIFFERENT. SHAPES.

Part of the point of this story is that I anchored to the colour of the lids because I saw only the surface.

Information architecture embodies perspectives.

But the real moral of this story is perspective, which is at the heart of information architecture. How do we, or should we, sort and organize? By colour? By shape? Colour first, then shape? Or shape first, then colour?

These decisions depend on perspectives.

If you’re blind, and I’ve sorted the jars by colour, it’s meaningless to you. The “obvious” solution is to be inclusive: sort by shape first because everyone can tell them apart, then colour for those who can see colour.

Or is it?

If you’re not blind, what if you want to find jars of different shapes but all with yellow lids? By making shape the top hierarchy, I’ve made you work just a bit harder to find what you’re looking for.

Now, in digital space, we can tackle this with faceted navigation, filtering, and sorting.

These solutions give agency back to us as users, so we can explore the catalog of jars using whatever organizing principle we like.
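To make that concrete, here’s a minimal sketch of faceted filtering, assuming a hypothetical catalog of jars with `lidColour` and `shape` fields (all the names are mine, purely for illustration). The point is that no facet is privileged: the user chooses the organizing principle at the moment of use.

```typescript
// A minimal sketch of faceted filtering over a small catalog of jars.
// The Jar shape and field names are illustrative, not from the original post.
type Jar = { id: number; lidColour: string; shape: string };

const jars: Jar[] = [
  { id: 1, lidColour: "yellow", shape: "round" },
  { id: 2, lidColour: "blue", shape: "square" },
  { id: 3, lidColour: "yellow", shape: "square" },
];

// The facets a user has selected; an omitted facet means "don't care".
type Facets = Partial<Pick<Jar, "lidColour" | "shape">>;

// Keep only the jars that match every selected facet.
// No facet is hard-coded as "first": the user supplies the organizing principle.
function filterJars(catalog: Jar[], facets: Facets): Jar[] {
  return catalog.filter((jar) =>
    Object.entries(facets).every(
      ([key, value]) => jar[key as keyof Facets] === value
    )
  );
}

console.log(filterJars(jars, { lidColour: "yellow" })); // jars 1 and 3: yellow lids, any shape
console.log(filterJars(jars, { shape: "square" }));     // jars 2 and 3: square jars, any lid colour
```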

In meatspace, our options are different from those in digital space.

The Cotton Library: a Story of Contexts, Constraints, and Mental Models.

Portrait of Sir Robert Cotton

Sir Robert Cotton lived during the Renaissance, working in the courts of Queen Elizabeth I and, after her death, King James I of England and VI of Scotland. One of his claims to fame is his personal library of manuscripts containing some of the oldest English writing that we know of.

Now, the 16th and 17th centuries were a long way before Melvil Dewey invented the Dewey Decimal System in the 19th century. Cotton lived during the Renaissance, when Europe was just emerging from the so-called Dark Ages, rediscovering the learning of the Ancients.

When he designed his library, the best mental model he had was formed with three things: the physical space of his personal library, which measured 26 by 6 feet; some bookcases; and the busts of a bunch of Roman Emperors.

His IA strategy was to place the bust of a Roman Emperor on top of each bookcase, assign a letter of the alphabet to each shelf, and number each manuscript’s position on its shelf with a Roman numeral, starting from the left.
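To make the scheme concrete, here’s a toy sketch of how a shelfmark might be composed under Cotton’s system; the function and parameter names are my own invention, but the output pattern matches surviving shelfmarks such as “Cotton Vitellius A.xv”.

```typescript
// A toy sketch of Cotton's addressing scheme: emperor bust (bookcase),
// shelf letter, and the manuscript's position on the shelf as a Roman numeral.
// The function and parameter names are my own, purely for illustration.
function toRoman(n: number): string {
  const pairs: [number, string][] = [
    [1000, "m"], [900, "cm"], [500, "d"], [400, "cd"],
    [100, "c"], [90, "xc"], [50, "l"], [40, "xl"],
    [10, "x"], [9, "ix"], [5, "v"], [4, "iv"], [1, "i"],
  ];
  let out = "";
  for (const [value, numeral] of pairs) {
    while (n >= value) {
      out += numeral;
      n -= value;
    }
  }
  return out;
}

function shelfmark(emperor: string, shelf: string, position: number): string {
  return `Cotton ${emperor} ${shelf}.${toRoman(position)}`;
}

console.log(shelfmark("Vitellius", "A", 15)); // "Cotton Vitellius A.xv"
```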

People are still arguing about what the logic could possibly have been. Check out this post from author Matt Kuhns, who wrote a book on the Cotton Library, to start your journey down this rabbit hole: https://www.mattkuhns.com/2017/05/cottons-memory-palace.

To our modern eyes, the IA of the Cotton Library seems…well, “idiosyncratic” is a word often used. It’s not unlike our reaction when we look at the navigation menus of business websites. Now, we can sit back smugly with our Dewey Decimal System to SMH and LOL at Sir Robert Cotton and his weird IA. But we need to remember: the man was still counting with Roman numerals (like our friends in the NFL with their Super Bowl LIV). His context and constraints presented different possibilities. When we make IA decisions, we need to understand the contexts, constraints, and mental models that govern the social and political structures of the organization.

Not the “best” solution, but the solution that best fits all the conditions we need to balance.

As designers, we sometimes have the luxury of designing and solving for a narrow set of conditions. More often than not, we have a complex set of conditions to balance. We have to ask “Whose perspective do we need to consider?”

And we need to be aware that problems and opportunities that exist in one set of contexts and constraints might not exist in another. We live and work in a world that’s too complex to allow us simply to “design beautiful experiences that people love to use.”

People often frame the conversation about design decisions in terms of “compromises.” I dislike that word. It implies that if you could get rid of some constraints, then you’d have something you didn’t have to compromise on, that you could have “best” or “perfect.”

But guess what? Some of those constraints are people. You can’t get rid of people.

So, I prefer to think about it this way. It’s less about finding the best solution and more about finding solutions that BEST FIT all the conditions we need to balance.

Two cartoon characters looking down at a baby mobile toy with cute animals, one saying “I love it” and the other saying “Me, too”. A baby lying in a crib looking up at the same toy but seeing only the bottoms of the animals.

What contexts and constraints are we balancing here? What’s the best fit design?

If we hope to create change, we must know things as they truly are.

Broadening our perspective beyond designing surfaces, beyond our fixation with the tools and theories of IA is important because if we hope to create change, we must know things as they truly are. I see the act of architecting information as a cultural act, a statement of values, an expression of power. Those with the power get to define the hierarchy, and if we have that privilege, we need to zoom out to get a broader perspective before making decisions.

Consider this leaf.

Macro photo of a leaf in high detail

If we were thinking of designing just this leaf, we’d miss seeing the broader context of where this leaf fits.

View of a forest looking up from the ground

So let’s zoom out a bit. So now we see this leaf lives in this forest.

What if we zoomed out further?

Aerial view of a clear-cut forest

How does the design of the leaf matter when seen in the context of a clear-cut forest? What other questions does this zoom level open up?

People often mistake my bald statements of current reality for negativity, so I want to be clear: I make these statements not to be a Debbie Downer but to acknowledge reality, because we can’t move forward by looking at the world through a narrow lens. Or by putting on rose-tinted glasses and sweeping reality under the rug.

Progress and change come from acknowledging what needs to change and understanding why it needs to change. Otherwise, we might be solving the wrong problem, using the wrong tools for the job, or missing the real opportunity altogether.

I’d like us all to consider zooming out from the details: looking at the big picture before diving into the tools and theories gives us a chance to frame the problem or opportunity space more broadly, and it gives us more options for the way forward.

IA is fundamentally about way-finding to achieve sense-making. And we won’t find the way if we see only the details. It’s worth zooming out away from the trees to see the forest.

The Design Pyramid – draft

Jesse James Garrett’s Elements of User Experience has guided me in my practice for so long, it feels like a comfortable second skin. I continue to teach this model to new generations of designers whom I’ve had the privilege to coach and mentor.

For some time now, I’ve layered on my own flavour based on my personal field experience. I’ve drawn my design pyramid many, many times for different audiences (designers, product managers, business leads)…and in several different ways. At long last, I’ve gotten tired of drawing the thing over and over again, so I thought I’d commit a bit by publishing on my blog. For one thing, I can finally stop drawing it, at least for a while. For another, I’m curious as to how my own thinking will continue to evolve.

This is meant more for my own thinking and as a spark for those who are interested in casual intellectual conversation. It’s by no means meant as an improvement or commentary on Jesse’s work. Not by a long shot. It’s merely how I’ve used Jesse’s framework and forked it within and for my own practice.

design pyramid v1

Agentive Technology, or How to Design AI that Works for People, Part 2

I originally wrote and published this post on Normative Design’s Ratsnest blog as the second part of a two-part synthesis of Design+AI, a monthly meetup hosted by Normative Design to explore how we design in a world augmented by AI. On the Sept 21, 2017 edition of the meetup, Chris Noessel, author of Designing Agentive Technology, joined us to shed some light on the area of designing agentive technology; specifically:

  • What is agentive tech?
  • What is narrow AI versus general AI?
  • How do designers need to modify their practice to design agentive tech?
  • Does agentive tech pose unique operational burdens?

Part 1 of my recap covered the questions of what agentive technology is, along with a discussion of narrow versus general AI. Part 2 ventures into the pragmatic territory of what it all means for us as designers in the field.

What changes in our practice? The good news is: not a lot.

So, from a practice perspective, what are the important differences when we design for agentive technology? What parts of our design practice — process, approach, methods, and perspective — need to be adapted? What should we be thinking about and watching for? What are the gotchas?

Chris notes that what’s missing from his book Designing Agentive Technology is explicit commentary on what changes in our practice. The good news is: not a lot changes. Research should be handled identically. We still need to go out and talk to people who will be served by agents. We still need to understand how the thing in question gets done. We still need to understand the goals of the people using the agent to get things done: none of that changes.

That said, Chris sees three areas that do need to change, two of which are near term, and one that’s longer term.

1. Asking about the future

During research, ask what the future could look like for the user. This is an optional question when designing tools because we tend to face limitations in what we can do for people. However, when designing agentive tech, this is the mission. Let’s imagine we’re designing a solution for our customer Carla. We could frame up a question along the lines of “Carla, if we had infinite money to hire you an assistant who’s infinitely knowledgeable and incredibly fast, how would you hope that assistant would help you?” This uncovers what people don’t like doing: stuff we can solve with agentive solutions. Another question we could ask might be “What would you still do?” This will uncover what people love doing, tasks or outcomes that we probably shouldn’t give to an agent (perhaps an assistant would be better, something to help people do it themselves). Or, if we do want an agentive solution for things Carla enjoys doing, we need to design it with extra care. These types of questions tend to be on the tail end of our normal research routine; with agentive technology, they’re going to be front and centre.

2. Multiple design layers, leading to more complexity

Chris acknowledges that in his book, agentive tech comes across like a pure thing. In practice, most sophisticated technologies operate in layered modes: there’s an agentive mode that sits near an automated mode that sits near an assistive mode that sits near a tool mode that gives control back to humans. For example, during his research on the robo-investor work, one of the research participants wanted to set aside 10% of his theoretical money to see if he could beat the AI, because there might be something special in humans that gives us a shot. In reality, there probably aren’t many pure agents or purely automatic things.

In designing agentive tech, we’ll need to keep in mind four modes:

  • There are times when we want agents to handle things routinely and alert us if we need to make decisions or if there are problems;
  • There are times when we only want to be notified when things are fundamentally broken;
  • There are times when we want to do it and want help; and
  • There are times when we want to do it ourselves and don’t want help.

These four modes are likely to be present in sophisticated and certainly enterprise-level projects, and thinking through them is going to be complicated. After research, our modified design process should work through these four modes. First, design it as a manual tool: how would Carla do it herself with no help? Then, add the assistive layer: how does the AI help her do it, and how does she turn it off if she wants to? Then, how might we make it agentive, so we can take the burdensome or routine parts off Carla and let her pay attention to the things that she loves? Finally, what does the autonomous mode look like? If the engineering team is confident, how might we let Carla pass it off into autonomous mode, and how does she regain control? These are complicated layers to design for, but they’re necessary.
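As a thinking aid rather than anything from Chris’s book, here’s a sketch of those four modes as explicit, switchable states in software. The mode names mirror the list above; the `TaskContext` fields and the behaviour strings are illustrative assumptions.

```typescript
// A sketch of the four modes as explicit, user-switchable states.
// The mode names mirror the layers above; the TaskContext fields and the
// returned behaviour strings are assumptions for illustration.
type Mode = "tool" | "assistive" | "agentive" | "autonomous";

interface TaskContext {
  userWantsHelp: boolean;
  needsDecision: boolean;
  somethingBroke: boolean;
}

// Each mode determines how much the software does and when the human hears about it.
function handle(mode: Mode, ctx: TaskContext): string {
  switch (mode) {
    case "tool":
      // Carla does it herself, with no help.
      return "Present the manual tool; act on nothing on her behalf.";
    case "assistive":
      // The AI helps while she does it, and she can turn the help off.
      return ctx.userWantsHelp ? "Offer suggestions inline." : "Stay quiet.";
    case "agentive":
      // The agent handles the routine parts and alerts her for decisions or problems.
      return ctx.needsDecision || ctx.somethingBroke
        ? "Pause and notify Carla."
        : "Handle it and log the result.";
    case "autonomous":
      // Notify only when things are fundamentally broken; she can take back control.
      return ctx.somethingBroke
        ? "Alert Carla and hand back control."
        : "Carry on silently.";
  }
}

// Example: the same task handled in agentive mode when a decision is needed.
console.log(handle("agentive", { userWantsHelp: false, needsDecision: true, somethingBroke: false }));
```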

3. Interaction design will get more complicated when users and agents proliferate

In the longer-term future, users and agents will proliferate. And they’ll interact with one another, and likely with peer groups outside their immediate domains. These interactions will make design more complicated. How will we account for the touchpoint experiences and the outcomes when users and agents talk to one another? For instance, Chris wonders: is his health agent going to rat on him to his insurance agent? I expect the answer likely lies in who paid for the creation and maintenance of the agent.

Selling agentive tech

Conceptually at least, business leaders like agentive tech: there are interesting opportunities for up-sell, cross-sell, lowering the cost of human resources for routine tasks, etc. In practice, getting funding is harder. The response is typically, “Cool idea! OK, let’s get back to building this piece of junk.” How do we get past this inertia?

Find an executive champion

Chris suggests that we use the same design methodology he discussed earlier. First, design the manual solution. Get agreement that the manual solution solves the pains for the user and for the business. Once we have that agreement, we can suggest doing it one better with an agentive solution. So, for example, instead of the tool that Carla could use to locate the information she needs, with the agentive solution, the information will come to her, although she’ll have options for varying degrees of intervention. If we can get to an agreement on the benefits that the agentive mode provides, we can then talk to engineering about the costs associated with the four different modes: tool mode, assistive mode, agentive mode, autonomous mode. Expect engineers to prioritize the tool over the agent, so internal champions are going to be mission-critical.

First mover advantage

In Wired for War, P.W. Singer talks about how technology is influencing the world’s concept of war. One of his concepts is “threshold technology”: once a culture adopts drones for warfare, for example, why would it ever risk flesh and blood again? In general, once we adopt a technology, we’re loath to go back. The same notion applies to agents. Once you have a Roomba, pulling out the Dyson feels like a chore. The Roomba is a threshold tech.

Likewise, agentive tech is a threshold concept. It confers a competitive advantage. The first player in a domain to introduce functional agents to their users will have their attention and loyalty, or from my perspective, at least establish the inertia that prevents customers from switching or leaving. If business leaders miss out on the first mover advantage, they’ll spend time scrambling to catch up and lose market share.

Operationalizing agentive tech

What exactly does it mean to build agentive technology? In practical terms, it doesn’t require fancy specialized coding skills. What we’re building is software that watches data streams for triggers and then acts on those triggers. Developers build the software that watches the streams for a set of conditions or scenarios; designers define smart defaults for those triggers and design the controls and flows that let users modify the triggers, conditions, and scenarios. Triggers and the behaviours they enact map neatly onto the core of object-oriented programming.
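Here’s a minimal sketch of that trigger-and-behaviour pattern: a user-modifiable rule set and a loop that watches the stream. The `Reading` shape and the example rule are assumptions of mine, not anything prescribed by Chris.

```typescript
// A minimal sketch of the trigger/behaviour pattern: watch a data stream,
// evaluate user-modifiable rules, and enact the matching behaviour.
// The Reading shape and the example rule are assumptions for illustration.
type Reading = { metric: string; value: number };

type Rule = {
  description: string;
  trigger: (r: Reading) => boolean;  // the condition the agent watches for
  behaviour: (r: Reading) => void;   // what it does when the condition is met
};

// A smart default designed by the team; users can edit this rule or add their own.
const rules: Rule[] = [
  {
    description: "Alert when the balance drops below a user-set threshold",
    trigger: (r) => r.metric === "balance" && r.value < 100,
    behaviour: (r) => console.log(`Notify the user: balance is down to ${r.value}`),
  },
];

// The agent's core loop: for each new reading, run every rule whose trigger fires.
function onReading(reading: Reading): void {
  for (const rule of rules) {
    if (rule.trigger(reading)) {
      rule.behaviour(reading);
    }
  }
}

// Simulated stream events.
onReading({ metric: "balance", value: 250 }); // no trigger fires
onReading({ metric: "balance", value: 80 });  // fires the alert rule
```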

Having humans watch for conditions and triggers and then perform the pre-defined acts would cost orders of magnitude more than the server space required to house the software. Delivering this kind of continuous vigilance is more pragmatic and feasible with software than with manual human labour. In his book, Chris cites the Mackworth Clock Test, which taught us that even if we wanted to use humans to provide this persistent service, it isn’t something the human brain is wired for:

People are terrible at vigilance. Back in the 1940s, a researcher named Norman Mackworth made formal studies about how long people could visually monitor a system for “critical signals.” He made a “clock” with a single ticking second hand that would jump every second, but which would irregularly jump forward two seconds instead of one. He would then sit his unfortunate test subjects in front of it for two hours and ask them to press a button when the clock jumped. Then he tracked each subject’s accuracy over time. It turns out that, in general, people could go for around half an hour before their vigilance would significantly decay, and they would begin missing jumps. This loss of attention has little consequence in a testing environment, but Mackworth was studying the vigilance limits of WWII radar operators, when missing a critical signal could mean life or death. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.142)

From an engineering perspective, the challenge of agentive software isn’t an issue of skills but operational resources. Agents, it’s argued, can be operational sinkholes. Crashed servers and other downtime conditions require human intervention. And if we built agents to mitigate that human intervention, that could spiral into meta problems, so the argument goes. Chris points out that these problems exist today — servers go offline, software gets deprecated — we just offload them to users. We don’t need to reach far for examples: the recent release of iOS11 created an ‘appocalypse’ of apps built with 32-bit technology. Unquestionably, agentive tech imposes an operational burden, but to a large extent, every improvement for users implies some operational burden. Ops costs need to be managed, not seen as an obstacle or reason to avoid using agentive technology to improve the human experience.

Intelligence is a multivariate space

Humming below the surface of every Design and AI meetup is the fear of monolithic AI that pits human intelligence against artificial intelligence. Chris argues this is a false dichotomy. Moravec’s Paradox showed us that “contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.” Put differently, machines are better than humans at some things, while humans are better at others. Chris notes that the mistake is in thinking about it as a problem of simple task allocation; it’s more complicated than that. Citing Howard Gardner’s (contentious) theory of multiple intelligences, Chris prefers to think of intelligence as a multivariate space and proposes that if we design an AI to mimic human intelligence, we’ll have missed the point.

We tend to think of an over-arching agent such as a self-driving car rather than the component parts that deliver the self-driving outcome; it’s certainly a more saleable concept, since people buy benefits, not features. Put differently, agency is multi-part: we confer agency to software over a series of tasks rather than agency over an outcome or job-to-be-done. Ultimately, how much agency is given to software should be determined by user preferences. In a future when self-driving cars dominate, it’s reasonable to assume that we will want to impose some degree of practice time to ensure our driving skills don’t atrophy into a dangerous state where we aren’t able to retake control from the software agent. It’s also reasonable to assume that individuals will want to set their preferences for how much practice they want: the utilitarian will want the minimum recommended amount while the enthusiast will want more.

Chris expects that our agentive design practice is more likely to start with the small bits and then group them into larger bits. Some agents’ scope will be so small that they won’t be perceived as agents at all. Take blinkers in a car, for instance: they work with a single touch; the salesperson at the auto dealership doesn’t sell them as a feature, as automatic, agentive blinkers. And yet, behind the single human touch that engages the blinker is a series of tasks and processes that gets triggered. It’s only when something exhibits behaviour that feels ‘more human than I thought technology could do’ that we need to have a conversation about it. Until then, software agents are just improving technologies that will likely be expressed through APIs and networked communications, rather than a singular monolithic expression. The scope of artificial intelligence is still too ill-defined to get to an infrastructure of agents from the top down. It’s more likely to be emergent: lots of small, well-contained agents will be released; things will either break down, or people will introduce coordinating agents, which becomes, de facto, a weird war of escalation that will start to drive top-down organization.

Instead of the human versus artificial construct, framing our design practice space as narrow AI versus general AI offers more pragmatic, productive opportunities to explore this technological frontier responsibly. Chris’ hypothesis is that narrow AI will get safer as it gets smarter, while general AI will get more dangerous as it gets smarter — that’s the existential threat that Hawking and Musk refer to. Narrow AI offers opportunities for more people to contribute to the field of AI, and this democratizing effect is critical:

Don’t think about one agent. Think about all the agents, their capabilities, and their rules. We will be building a giant, worldwide database of behaviors, rules, and contexts by which we want to be individually treated. That stack would be impossible for any human to make sense of, but it might be the exact right thing to hand to the first artificial general intelligence.

Maybe in working on triggers and rules, we’re building the Ultimate Handbook of Helping Humans, one rule at a time. Agentive technology may be the best hope of ensuring general AI doesn’t end up being our Great Filter. Instead of [Asimov’s] four laws of robotics, we’ll have four trillion laws of humanity. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.187)

A final call to action

Designing agentive solutions means shifting “our day-to-day practice from building tools for people to do work themselves to building usable, effective, and humane agents.” And to do that effectively, ethically, and purposefully,

[w]e must build a community of practice so that we can get better at this new work. To build a shared vocabulary amongst ourselves to have good and productive discussions about what’s best. To share case studies, analysis, critiques, successes, and yes, failures. To get some numbers about effectiveness and return-on-investment to share with our business leads. To push these ideas forward and share new, better development libraries, documentation techniques, and conceptual models. To reify the ways we talk to others about what it is we’re building and why. (Noessel, Designing Agentive Technology, Rosenfeld Media, 2017, p.192)

I couldn’t have expressed the purpose of our design and AI meetup better than Chris. So, if you’re working on software ideas that can help people, join us.

Disclaimer: The ideas and opinions expressed in this post are my synthesis of Chris Noessel’s session at a Design and AI meetup hosted by Normative, where I work. My views are not necessarily those held by Normative nor by Chris Noessel, and any technical errors or omissions are mine alone.

References

  • http://rosenfeldmedia.com/books/designing-agentive-technology/
  • https://medium.com/@christophernoessel/ani-design-skills-f0af22360570
  • https://articles.uie.com/new-technologies-to-consider-for-interaction/