inkblurt
Information Architecture, User Experience & Other Obsessions
Sat, 21 Nov 2015 02:39:04 +0000

A Life More Local
Fri, 09 Oct 2015 00:28:55 +0000

I’ve been back in the on-the-road consultant game for nearly six years now, and it’s been a blast. I’ve especially loved the last 3.5 of those years, because they’ve been with the only consultancy I think I’d ever want to work for again: The Understanding Group. In fact, I can honestly say I’ve never felt a job was “like family” the way I have at TUG.

So, it was excruciatingly difficult to wrestle with the decision I’ve recently made: to change my relationship with TUG so I can have a non-travel-dependent, more locally centered day-job.

I’ve accepted a position as Senior Digital Experience Architect at State Farm — a Fortune 50 company that is investing heavily in user experience design, and creating a world-class design division at its huge new hub in Atlanta.

As I’ve been telling everyone I’ve talked to about this so far, this is the first time I’ve changed jobs before I was really ready to leave my current gig. I’ve gotten to work with such amazing people at TUG. I’ve learned so much from every person on the team. The mission, values, and vision that its founders baked into the place are second to none — as are the founders themselves, Bob Royce and Dan Klyn, who are now also my very dear friends. Luckily, we’re already hatching ways to continue an association after I change my full-time employment status.

So, why change? Mainly it’s to be more present, available, and involved with family and my local community. Since purchasing a home last summer in our quirky, fabulous neighborhood, and since some big transitions have occurred for immediate and extended family members in the area, I’ve been feeling an increasing need, and desire, to be on the road a lot less often.

Professionally, I’ve also been itching to work with longer-term challenges that I can shape over significant time horizons, rather than the few months normally available to me as an external vendor.

While I am sad to give up my full-time relationship with TUG, I’m excited about the new gig. State Farm is doing some really impressive things to re-invent how it works with its customers across its many business lines, channels, and touchpoints. They’re not afraid of architectural thinking and doing, and the sort of mindset and approach that I bring from my experience with TUG: mapping, modeling, defining, and aligning, then guiding the specifics of design to make all that understanding into working realities.

And I’m particularly chuffed that State Farm is taking part in the transformation of a key area of an Atlanta suburb into a 21st century mixed-use, urban-style environment, directly connected to Atlanta’s rail system. For those familiar with the politics of MARTA in Atlanta, you know it’s a pretty big deal here to build one of the biggest developments in the metro area as an implicit endorsement of the value of public rail transportation. It also means I get to ride the train to work: something I always assumed I’d have to live in some other city to enjoy.

So, there it is. I’ll be starting with the new job soon after I return from what will surely be a lovely trip to the Italian IA Summit, where TUG cofounder Dan Klyn and I are teaching a workshop and speaking, something we hope to continue doing together in one way or another, as TUG-brothers in mind and spirit.

So it goes, and so it goes.




Summer Update — Talks, Posts, and other things.
Mon, 27 Jul 2015 16:05:44 +0000

I’ve been pretty busy since my last blog post in December, when Understanding Context launched. Some really great work with clients, lots of travel, and a number of appearances at events have kept me happily occupied. Some highlights:

Talks and Things

O’Reilly: Webcast for Understanding Context, presented on June 10. Luckily, with a quick registration, you can watch the whole thing for free!

IA Summit: where I co-facilitated a workshop on Practical Conceptual Modeling with my TUG colleagues Kaarin Hoff and Joe Elmendorf. (See the excellent post Kaarin created at TUG summarizing choice bits of the workshop.)

SXSW Workshop: I taught an invited workshop at SXSW with my colleague Dan Klyn, on “Information Architecture Essentials” — which was wildly successful and well-reviewed. We’re happy to say we’ll be teaching versions of this workshop again this year at IA Summit Italy and WebVisions Chicago!

UX Lisbon: where I taught a workshop on analyzing and modeling context for user experiences (which I also taught in abbreviated form at IA Summit, and which I’ll be reprising at UX Week later this summer).

UX Podcast: While in Lisbon, I had the pleasure of doing a joint interview with Abby Covert, hosted by the nice folks at UX Podcast.

Upcoming Appearances

As mentioned above, there are some upcoming happenings — I encourage you to sign up for any that aren’t already sold out!

Understanding Context — Some thoughts on writing the book.
Thu, 18 Dec 2014 16:23:06 +0000

After several years of proposing, writing, revising, and production, Understanding Context is finally a real book. For obvious reasons, I’ve not been especially prolific here at Inkblurt, since every spare moment was mostly used to get the book done.

And it’s still not really done … like the old saying goes, a work of writing is never finished, only abandoned. As I say in the Preface (now online), the book is definitely an act of understanding that is in progress. It’s an invitation to readers to come along on the journey and keep it moving in their own ways, from their own perspectives.

Since this is my personal blog, I don’t mind saying that this was one of the hardest things I’ve ever done. I’ve always thought of myself as a writer, but secretly worried that I would never have the ability to finish an actual book. My professional or otherwise published writing has always been shorter pieces: poems, stories, essays, articles.  Add to that the wacky impostor-syndrome spikes, the anxieties about wading into waters that feel over my head, and the strange self-torture that is looking at one’s own prose at length, with all its bad habits, for months and months: I’m not sure how I didn’t throw in the towel. This all sounds so dramatic, when in fact (as I had to keep reminding myself) writing a book is just a matter of putting a lot of words in a particular order, until you have a book. When it comes down to it, it’s just work. I just have a terribly noisy mind, and a herd-of-cats frontal lobe.

Counterbalancing my shortcomings: I had a lot of support and interest from many kind, optimistic people; and even if I was losing interest and motivation, others were still interested and motivated and telling me to get to work. So I kept doing it. I’m glad I did, and I’m thankful for all those supportive moments of encouragement.

Another thing that’s been bemusing me … it may sound silly, but I did not mean to write such a long book. The short version of what happened: in order for me to write the second half of the book, I had to write the first half as a foundation. So, the first half is pretty dense with theoretical concepts like embodied cognition and semiotics (hopefully with enough concrete examples to make them relevant). If I removed a lot of that stuff, it would’ve taken away the frame that the rest of the book needed.

Generally, there was a frustrating tension between trying to make a book that would be an enjoyably efficient, casual read, and one that puts in print some ideas that I came to believe are essential for design practice in general (and information architecture in particular). I did my best to strike the balance.

I wish this thing well — we’ve been companions for a long time. We still are, I guess, but the relationship is different now. It’s out in the world, having its own life.

If you run across this now-independent companion of mine, let me know.

Language is Infrastructure at IA Summit 2014
Thu, 07 Aug 2014 21:10:23 +0000

I presented this talk at the IA Summit in San Diego this year, back in the spring. I’m adding it to inkblurt so it’ll have a home here, but I already wrote about it over at TUG a few months ago.

It’s all about how language makes stuff in the world that we need to treat like serious parts of our environment — material for design — and how there’s no such thing as “just semantics.”

The World is the Screen
Wed, 19 Feb 2014 18:03:32 +0000

Throughout 2013 and part of 2014, I gave various versions of a talk entitled “The World is the Screen”. (The subtitle varied.)

The general contention of the talk: as planners and makers of digital things and places that are increasingly woven into the fabric of the world around us, we have to expand our focus to understanding the whole environment that people inhabit, not just specific devices and interfaces.

As part of that mission, we need to bring a more rigorous perspective to understanding our materials. Potters and masons and painters, as they mature in their work, come to understand their materials better and more deeply than they would expect the users of their creations to understand them. I argue that our primary material is information … but we don’t have a good, shared concept of what we mean when we say “information.”

Rather than trying to define information in just one way, I picked three major ways in which information affects our world, and the characteristics behind each of those modes. Ultimately, I’m trying to create some foundations for maturing how we understand our work, and how it is more about environments than objects (though objects are certainly critical in the context of the whole).

Anyway … the last version of the talk I gave was at ConveyUX in Seattle. It’s a shorter version, but I think it’s the clearest and most concise one, so I’m embedding it below. [Other, prior (and longer) versions are also on Speakerdeck – one from IA Summit 2013, and one from Blend Conference 2013. I also posted about it at The Understanding Group.]

Meanwhile, On the Internet
Thu, 06 Feb 2014 00:50:35 +0000

I haven’t been inkblurting much here for a few months. There are a few reasons.

1. I’ve been writing and revising a book that I’ve been hammering away at for the last two years. I started writing it based on hunches about its subject, and vaguely literary aspirations of a “thought piece” sort of nonfiction tome that would be just so fascinating… only to discover that I really didn’t know what the hell I was writing about, and had to learn some actual science and stuff before I could say anything with any credibility. I mean, I’ve been doing information architecture and interaction design for a pretty long time, so I had that credibility, but when it comes to things like embodied cognition or how language works, well … it feels like I’ve been going back to grad school. But I’m glad I did the work, and it’s turning out nicely, at least from what I can tell from my bleary-eyed perspective, knotted like a homunculus in my digital bunker, gutting my overlong, meandering first draft, and wrangling what’s left into something I hope will do the job. Writing, man. Whaddya do?

2. I’ve also been posting the occasional bit over at the blog for my delightful employer, TUG (The Understanding Group). A few of them have included some thoughts on how no project is ever just what we see on its face, so we should design the “meta” side of the project as much as the thing the project is supposedly for. Another on how information architecture and business strategy have a long relationship, that’s becoming even more interdependent. And a couple of posts about stuff I presented at Midwest UX, including a workshop I co-led with colleague Dan Eizans, on Making Places with IA & Content Strategy, and my solo talk about maps and territories and how language creates places. I’m fairly obsessed with this whole language-as-infrastructure thing, which is leading me to also do a talk on that topic at IA Summit this March in San Diego.

3. Speaking of IA Summit, I’m proud and pleased to be co-teaching a new workshop there with the brilliant and wise Jorge Arango. It’s about Information Architecture Essentials, and proceeds will go to the Information Architecture Institute. We hope the content will be enlightening and useful: a nice overview of some basic IA stuff, but also of where IA is headed as a practice & discipline. We think old hands will get something out of it, not just newcomers. Fear not: although there will be “theory,” we’re packing it with practical goodness, and structuring it along a typical project timeline. Groundedness FTW.

I Remember the Miracle Strip
Sun, 01 Sep 2013 20:07:46 +0000

So what on earth would prompt me to actually write a personal blog post after about a six-month gap?

I suppose it’s a combination of things. My daughter just started her senior year of high school, which obviously brings some rumination along the lines of how the hell did that much time pass that fast. Plus it’s Labor Day weekend, which has several weighty too-large-for-carryon chunks of personal baggage for me. I won’t go into all that, but something I ran across earlier today seriously picked the lock on my memory-closet.

It was this picture, in the middle of a bunch of other pictures of abandoned amusement parks.

Abominable Snowman – from Flickr user stevesobczuk

See, when I was a kid growing up around Atlanta through the 70s and early 80s, my family would generally end up going to Panama City Beach for vacation — often around this time of year. I didn’t know it at the time, but I benefited from a unique period in that area’s history. It was in the midst of the first big boom in tourism there, which evidently had started in the early/mid-60s (my parents told me stories of sleeping on its nearly deserted beaches in the 50s, their Chevy parked beside them). But it was before PC Beach became synonymous with MTV-style spring breakers in the 80s and 90s, and before the real estate land-grab of the 2000s which razed almost everything left of the indigenous culture (such as it was) in favor of gigantic condo developments.

I wasn’t much of a beach person even as a kid. I mean, I had fun on the beach — digging for sand crabs, daring waves to knock me down, making sand castles, and getting seriously sunburned. But for me it was all prelude to visiting the amusement parks in the area at night. Especially the “Miracle Strip Amusement Park.”

This picture in particular was of the Abominable Snowman ride, where they basically had a classic Scrambler inside a big dome, with the snowman crouched over the door. The snowman dome wasn’t added until I was about 12, but I have vivid memories of waiting what seemed forever to get into the dome to get slung around in the dark, with giant speakers pumping Van Halen and a light show timed to the music — and especially the air conditioning inside, which made the wait all the more worthwhile. Few things are so wrapped up with my visceral memories of early adolescence.

My favorite parts of the park were the scary ones, though; and those are the parts that tap into my very early memories of the sort of thing that still scares and thrills me the most. Miracle Strip had two “dark” attractions: one was a Haunted Castle, which had cars on tracks that would take you through jarring, loud haunted-funhouse moments, including a terrifically psychedelic twirling tunnel.

The other was a walk-through attraction called the Old House, complete with a hidden passage behind a fireplace, and a balcony that would suddenly drop at an angle and blow air up from its floor, so it felt as though you were about to fall to the ground two floors below.

An early publicity picture for the Old House attraction, from Flickr user kingpowercinema

The Old House had a clockwork ghost or two — I recall a few on the very top of it that you could just barely see going in a circle as if in some evil ritual (impossibly high up on what seemed such a huge haunted house to a grade schooler); and there was another that would come out a door about halfway up, onto a balcony, then look out at the crowd to show its hideously frightening skeletal face, only to turn quickly back and go back into the house.

Anyway … these and more wonderful things have been preserved in the sprawling simulacra of the Internet. Thankfully so, since the Miracle Strip was closed in 2004 to make room for a real-estate-boom condo orgy that never happened, leaving the bones of the park to sit there until chunks were sold off over time, or just fall apart like the now-venerable Snowman.

One Flickr user in particular apparently has purchased much of the Haunted Castle ride and re-built it in Oxford, Alabama as both an homage and a neighborhood Halloween attraction.

The Old House was shuttered and never moved anywhere else, but the creative minds behind it designed and built a very similar attraction in Gatlinburg called the Mysterious Mansion.

There are some great collections online of photos from the wacky stuff I grew up seeing at PC Beach, of which the Miracle Strip (Miracle Strip Flickr Group) was only part. There was also something called Jungle Land that had a realistic volcano you could walk through, with throbbing “lava” red lights and face-sized holes you could look through in the fake-lava-rock walls to see scary stuff like piles of skulls and the depths of the “volcano.”

Jungleland attraction on Front Beach Rd., Panama City Beach, Florida

There are still some relics left (the amazing “Goofy Golf” assembly of grotesqueries is evidently still around, and the volcano is still there as part of a gaudy beach-gift-shop chain).

Whenever I end up on a memory-jag like this, it’s both rewarding and draining. I don’t think we’re built to experience such ease of access to artifacts from our past. Emotional vertigo — that’s how I tend to think of it.

But it was great to learn more about all this stuff, especially that there were particular individuals behind the design of so much of it: Vincent “Val” Valentine, an erstwhile animator for things like the Popeye cartoons, and Bill Tracy, an apparently legendary designer of “dark” amusement park attractions. I didn’t realize when I was a kid how odd it was that such environmental design virtuosity was right there on the “Redneck Riviera.”

Context Design Talk for World IA Day Ann Arbor
Mon, 18 Feb 2013 17:32:53 +0000

The 2013 World IA Day was a huge success. In only its 2nd year of existence, it drew big crowds in 20+ locations (15 official). Congratulations to everyone involved in organizing the day, and to the intrepid board members of the IA Institute who decided to risk transforming the more US-based IDEA conference into this terrific, global, community-driven event.

I was fortunate to be asked to speak at the event in Ann Arbor, MI, where I talked about how information shapes context — the topic I’ve been writing a book about for a while now. I’ll probably continue having new permutations of this talk for quite some time, but here’s a snapshot at least, describing some central ideas I’m fleshing out in the book. I’m calling this “beta 2” — since it has somewhat different and/or updated content vs the one I did for CHI Atlanta back in the fall of 2012.

Video and Slides-with-notes embedded below. Enjoy!



Context Book: A Shape Emerging
Thu, 17 Jan 2013 17:45:58 +0000

I’ve been writing a book on designing context for about a year now. It’s been possibly the most challenging thing I’ve ever done.

I’m starting to see the end of the draft. It’s just beyond my carpal-tunnel-throbbing clutches. Of course, there are still many weeks of revision, review, and the rest to go.

When I proposed the book to O’Reilly Media, I included an outline, as required. But I knew better than to post that outline anywhere, since I figured it would likely change as I wrote. It turns out, I was more right than I knew. So many of the hunches that nudged me into doing this work turned out to be a lot more complicated, but mostly in a good way.

One major discovery for me was how important the science around “embodied cognition” would be to sorting all this out; also, how little I actually knew about the subject. Now, I find myself fully won over by what some call the “Radical Embodied Cognition” school of thought. An overview of the main ideas can be found in a post at the Psych Science Notes blog, written by a couple of wonderful folks in the UK, from whom I’ve learned a great deal. (They also tweet via @PsychScientists)

At this point, I think the book has a fairly stable structure that’s emerged through writing it. There are 5 chapters; I have about 1/3 of the 4th chapter, and the 5th chapter, to go. (These shouldn’t take me nearly as long as the earlier stuff, for which I had to do a lot more research and learning.)

Partly to help explain this structure to myself, I came up with a diagram that shows how the points covered early on are revisited and built upon, layer by layer.



Admittedly, the topics listed here don’t sound like a typical O’Reilly book; some might look at it and say “this is too theoretical, it’s not practical enough for me.” But, as I mention in the (still in draft) Preface, “there’s nothing more practical than understanding the properties of the materials you work with, and the principles behind how people live with the things you make.”

There will be “practical examples” of course, though perhaps not every 2-3 pages like in many UX-related books. (Nothing wrong with that, of course, it’s just not as appropriate for this subject matter.)

However — I’m still in the thick of writing, so who knows what could change? Now back to the manuscript. *typetypetypetype*





Roughly Half Done
Wed, 12 Dec 2012 15:16:25 +0000

Yesterday I turned 45 years old. For some reason, as I got up this morning, it hit me more than it did the day before.

Here are a few thoughts about it.

I’m technically halfway to 90 years old. I hope to live even longer than that, though that’s older than the current average age of departure. But assuming the next 45 years bring even better technologies for keeping us alive, I’ll allow myself the optimism. Still, there’s no denying this is, officially, “middle age.”

I don’t feel middle aged though. At least not emotionally. I’m starting to learn from others that this is not unusual. We all evidently wrestle with how we perceive our relationship with “time” — which is such a reified, non-thing to begin with.

Part of me would love to feel what I assume is a level of authority and confidence that comes with hitting a time-based number that most would agree is undeniably “adult.” That’s the insecure part of me, though. The one that depends on something outside of me to tell me what and who I am. On a daily basis, I have to put that part of me in “time out.” It speaks out of turn and still isn’t quite housebroken.

Nowadays I have to work harder to catch myself resting on laurels or assumptions from prior experience. My eye-roll reflex is now nimbler and quicker on the draw than when I was younger. And that’s a dangerous reflex to exercise. When I notice myself being unthinkingly dismissive of a new idea, or an old idea revived in a new situation, I feel a little like Roy Batty in Blade Runner, catching his limbs in the first stages of rigor mortis, willing them to keep moving. Luckily I don’t have to stab myself with a nail. I just have to breathe, and remember to listen more closely. (Not that I find either of those things easy; some days I’d rather stick myself with a nail.)

I’m glad I’m working in a job where the company is still finding itself, watching itself evolve. It feels more like the old meaning of “company” — a group of companions, compatriots, fellow travelers.

I’m glad I’m working on this crazy book about stuff that I still wonder if I fully understand. I’m having to learn things in order to write it, and I’m never sure if I’m fully grasping and articulating the material; but I suppose that’s better than the comfortable stasis of writing only certainties. Even though I have weekly battles of self-doubt, I’m learning so much, and accomplishing something I never thought I’d have the wherewithal to do.

My wife and I are moving again, back to Philadelphia. We’ve been wanting to finally end up someplace where we could say “this is where we live” and put down some roots, at least for a while. So we’re going back to the place where we met, and where, when we feel homesick, it’s for that place. Is this going backward? Maybe. But only if we expect to be the same people we were when we lived there before. We’re not. We’ve grown a lot in four years; changed. Plus, now we have a dog. So we’ll see.

And watching my daughter become who she is becoming. Almost 17 now. Kind, intelligent, curious, good. Her mother has worked miracles in raising her. My daughter, who has to learn her own lessons, find her own way, no matter how much her parents would like to carve a safe, happy path in front of her. What a bracing, beautiful paradox it is, to have the power to bring a human being into the world, but be so utterly powerless in the face of their own story that only they can make.

So. Halfway. Even this far into my own story, I’m still a rough draft. I suppose I’ve always felt “half done” about most things, even the ones I’ve technically finished. I’ve always felt suspended between the poles of “making” and “unmaking.” There are more days, now, when I feel at peace with that unmoored oscillation. Not many, but more.


The “E” in E-commerce; also my “Happiness Machines” talk
Mon, 27 Aug 2012 16:29:18 +0000

I’ve been doing some writing over at the blog for The Understanding Group.

  • Last month, I posted on e-commerce, and how it’s really just commerce now, but there are still many legacy impediments for retailers.
  • And this week, I wrote a bit about the talk I gave at WebVisions Portland in May (with an interview video, and my slides) on “Happiness Machines.”


The Composition of Context: a workshop proposal
Sat, 16 Jun 2012 20:16:45 +0000

Andrea Resmini and co-organizers of the upcoming workshop on Architectures of Meaning (part of the Pervasive Computing conference at Newcastle University in the UK) asked me to participate this year. I’m not able to be there in person, unfortunately, but plan to join remotely. What follows is the “paper” I’m presenting. It’s not a fully fledged academic piece of writing — more like a practitioner-theorist missive.

I’m sharing it here because others may be curious, and it’s also the best summary I’ve done to date of the ideas in the book I’m writing on IA and designing context.

This is a straight dump from MS Word (with a few tweaks). Caveat emptor.


Information Architecture and the Composition of Context

Andrew Hinton

Final Draft for Architectures of Meaning Workshop

June 18, 2012



We lack fully articulated models for context, yet information architecture is especially significant in how context is created, changed, or communicated in digital-based information environments. This thesis proposes some principles, models, and foundational theories for the beginnings of a framework of context, and offers composition as a rubric for tying these ideas together in IA practice.

The thesis follows a line of reasoning thus:

Context is constructed.

There’s a deep and wide intellectual history around the topic of context. Suffice it to say that there are many layers and threads in the ongoing conversation among experts on the subject. Even though all those threads don’t agree on every point, they add up to some generally accepted ideas, such as:

  • Context is both internal and external. Our minds and bodies determine and influence how we perceive reality, and that internal experience is affected by external objects and interactions. Both affect one another to the point where the distinction between “inner” and “outer” is almost entirely academic.
  • Context has both stable and fluid characteristics. Certainly there are some elements of our lives that are stable enough to be considered “persistent.” But our interactions with (and understanding of) those elements still can make them mean something very different to us from moment to moment. Context exists along an undulating spectrum between those poles.
  • Context is social. Our experience of context emerges from a cognitive history as social beings, with mental models, languages, customs — really pretty much everything — originating from our interactions with others of our kind.

Context is not so simple as “object A is in surrounding circumstance X” — the roles are interchangeable and interdependent. This is why context is so hard to get our hands around as a topic.

(In particular, I’m leaning on the work of Paul Dourish, Bonnie Nardi, Jean Lave, Marcia Bates and Lucy Suchman.)

Context is about understanding.

This phenomenological & post-modern frame for context necessarily complicates the topic — but failing to point out these complexities would keep us from getting at a real comprehension of how context works.

Still, it can be helpful to have a simple model to use as a compass in this Escher-like landscape.  Hence, the following:

Context is conventionally defined as the interplay between several elements:

  • Situation: the circumstances that comprise the setting (place, time, surroundings, actions, etc.). The concept of “place” figures very heavily here.
  • Subject (Event/Person/Statement/Idea): the thing that is in the situation, and that is the subject of the attempted understanding.
  • Understanding: an apprehension of the true nature of the subject, through awareness and/or comprehension of the surrounding situation.
  • Agent: the individual who is trying to understand the subject and situation (this element is implied in most definitions, rather than called out explicitly).

Context, then, is principally about understanding. There is no need for discussion of context unless someone (agent) is trying to understand a subject in a given situation. That is, context does not exist out in the world as a thing in itself. It emerges from the act of seeking to understand.

This also forms a useful, simple model for talking about context and parsing the elements in a given scenario. However, it gets more complicated due to the ideas, mentioned above, about how context is constructed. Just a few of the wrinkles that come to light:

  • There can be multiple subjects, even if we understand them by focusing on (or foregrounding) one at a time.
  • The subject is also always part of the situation, and any of the circumstances could easily be one or more subjects.
  • In fact, in order to understand the situation, it has to be focused on as a subject in its own right.
  • All of these elements affect one another.
  • Importantly, the subject may be the agent. And there can be multiple agents, where another observer-agent may be able to understand the situation better than the subject-agent, because the subject-agent “can’t see the forest for the trees.” In design for a “user” this is an especially important point, because the user is both agent and subject — a person trying to understand and even control his or her own context.

As you can see, what looks like a simple grammar of what makes context can actually expose a lot of complexity. But this simple model of elements helps us at least start to have a framework for picking apart scenarios to figure out who is perceiving what, which elements are affecting others, and where understanding is and isn’t happening.
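
Purely as an illustration (this toy sketch and its names are my own shorthand, not any formal notation from the literature), the elements and one of the wrinkles can be rendered as a small data structure:

```python
from dataclasses import dataclass

# Toy shorthand for the elements of context discussed above:
# situation, subject(s), and agent(s). Illustrative only.

@dataclass(frozen=True)
class Agent:
    name: str

@dataclass
class ContextScenario:
    situation: str   # circumstances comprising the setting (place, time, etc.)
    subjects: list   # what is being understood; there can be several
    agents: list     # who is trying to understand

    def agent_is_also_subject(self, agent: Agent) -> bool:
        # The key wrinkle: a "user" is both agent and subject,
        # trying to understand (and control) her own context.
        return agent in self.subjects

# Hypothetical scenario: a shopper interrupted mid-checkout
shopper = Agent("shopper")
scenario = ContextScenario(
    situation="mid-checkout on a retail site, interrupted by a phone call",
    subjects=[shopper, "the checkout flow"],
    agents=[shopper],
)
print(scenario.agent_is_also_subject(shopper))  # True
```

Nothing about the model requires code, of course; the point is that even this simple grammar forces us to decide, for any given scenario, who counts as an agent and what counts as a subject.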

In order to unravel this massive tapestry, we have to grab a thread; a good one to grab is what we mean by “understanding.”

And that means we have to understand cognition, which is the engine we use for understanding much of anything.

Cognition is embodied and extended.

The embodied mind thesis is not fully settled science, and there are many varying threads and contentions in that body of work. However, the basic ideas with which I’ve aligned my thinking contend that the evolutionary history of all cognition is rooted in bodily, sensorimotor experience, and that current human cognition still relies on the body and the environment.

That means cognition is not “in the mind” and separate from “the body” and “the world” – cognition is the result of the interplay of all those dimensions. In the act of perception, cognition doesn’t make meaningful distinctions between these dimensions.

This does not mean that all cognition for the contemporary human is dependent on “on-line” sensorimotor activity. In the language of Andy Clark at University of Edinburgh, cognition is “loopy” – it loops within and without the body, in a hybrid spectrum of cognitive methods, depending on the needs in the moment.

For example: I may count with my fingers or count with sticks; or I may count with more recent abstractions like numbers and mathematical operators, whether written down or typed into a device, and I may simultaneously be doing parts of that work “in my head.” I hardly pay attention to which methods I’m using and in what combination; I just use them to get to the answer I need.

Cognition depends on Perception.

At the core of my understanding of cognition is a model of perception developed by the American psychological researcher and theorist James J. Gibson. Gibson’s theory starts with vision, but he meant for it to apply to any mode of perception.

Gibson on Perception: To Gibson, perception is “ecological.” He starts with what the environment contains (objects, shapes, material properties of gas, solid, liquid, etc.) and then explores how light interacts with those physical properties to create information that can be perceived by visually-enabled organisms. Gibson’s thesis is that the brain does not have to process all the visual information in real time in order for the organism to behave, move, and survive in the environment, because the organism’s physical capabilities already respond appropriately to most of the environmental information.

Gibson on Information: To Gibson, information is the perceptual stimulus created by the interplay of energy in the environment and the physical properties (shape, material, etc.) of the environment. In the case of vision: the way light interacts with the physical environment creates structural information that shows difference between boundaries and connection points. Water reflects light differently from land, direct light sources create visual information differently from reflected light sources, and so on.  It is only “information” because it is being perceived by an organism that can make use of it (i.e. something has to be ‘informed’).

Gibson on Affordance: Probably Gibson’s best-known idea in user-centered-design circles is that of affordance. For Gibson, affordances are action-relevant properties of the environment. The way light interacts with air is different from its interaction with water, so an organism learns the physical properties of those different experiences of light and moves through them or around them accordingly. The visual information from the interaction of light and land affords the actions of walking and standing. The information from air + light affords “moving through” and breathing. The light reflected from a fallen branch shows it to be roughly the right size and shape to be used as a tool for digging.

I dwell on Gibson’s ideas here because they form a rigorously reasoned foundation for understanding how cognition depends heavily on physical perception, which therefore is also a core concern for context.

Information: Three Modes

A small detour here to sketch out what I think of as three modes of information:

  1. Ecological (The Gibson mode): information as explained above — the interplay of energy (light, sound, etc.) and environmental properties (shape, material, etc.) and the perceiving organism (in our case, a human “agent”). The “meaning” in this sort of information is rooted in affordance, for basic needs like locomotion and physical survival as well as higher-order activities like manipulating tools.
  2. Linguistic: the perceived environment humans have collectively invented first through speech and much later through writing. The “meaning” in this mode is semantic rather than based in physical-properties-plus-energy (although written language makes use of that sort of information as well, as a way to communicate the symbolic/semantic stuff, but getting into that is more than we have time for here). We will look at this more below, when considering language as a human-made environment.
  3. Digital: information in the Claude Shannon sense, as essentially difference: one or zero, and anything that can be built up from that logical switch.  Shannon’s breakthrough was seeing that to get machines to work effectively and accurately with human-made information, we had to ignore “meaning” entirely and focus on encoding and transmitting it in such a way that semantics and human-perceived affordance are no longer involved, at least until some output needs to be human-readable, in which case it gets translated (decoded).

A big challenge with context in digital-based environments is that we don’t make an explicit, careful distinction between these modes, and the meaning-free mode ends up leaking into the others.
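To make the meaning-free quality of the digital mode concrete, here is a minimal sketch (in Python, my own illustration rather than anything from Shannon’s work): a message passing through a Shannon-style channel becomes pure difference, ones and zeros, and meaning reappears only when a human-readable decoding happens at the far end.

```python
# A toy illustration of the digital mode: encoding treats the message as
# pure difference (bits), with no regard for what it means.

def encode(text: str) -> str:
    """Turn a string into a stream of 0s and 1s (UTF-8, 8 bits per byte)."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def decode(bits: str) -> str:
    """Reverse the process; the bits mean nothing until re-read as text."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

bits = encode("About Careers")
print(bits[:16])     # the channel sees only 0s and 1s
print(decode(bits))  # semantics reappear only at the human-readable end
```

The machinery works identically on a navigation label, a poem, or noise; that indifference to meaning is exactly what Shannon needed, and exactly what leaks into the other modes when we are not careful.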

Language is a human-generated environment.

An environment, in the Gibson sense, is the collection of stuff in the world with which we interact, and which generates information through interplay with energy, which we perceive as affordance.

In that sense of environment, language can be seen as a human-generated environment. But instead of speaking words that make trees, water, land and horizon appear in front of us, the words instead create cognitive simulations – sort of like shared hallucinations – that we inhabit together simultaneously with the physical environment.

Language is the human species’ way of creating its own environment of information, with which we label the world around us, plan together about how to perform actions in that world, or anything else that we find valuable. Language both changes how we perceive the physical world and also creates new, non-physical worlds (concepts, ideas, abstractions) that we inhabit together in conversation, discussion and stories.

Language is embodied.

It’s no accident that the initial exploration of embodied cognition came from linguistics, in the work of Lakoff and Johnson (“Metaphors We Live By”). A large part of how cognition works involves the use of signs and symbols that represent or arise from abstractions of the body, and the body’s sensorimotor experience of and interactions with the world.

There is compelling evidence that language has been with our species longer than conventionally assumed, and spoken language “co-evolved” with us long enough to act as a powerful force in our natural selection. That is, language acted as a shaping constraint in the survivability and adaptation of homo sapiens. (Leaning heavily here on the work of anthropologist Terrence Deacon).

Regardless of whether language is millions or just many tens of thousands of years old, it is not just an abstracted layer of non-physical signification, tacked onto human life as a mere convenience. Language is a collectively emergent technology we’ve come to rely upon for survival over a long enough period that it’s essential to the nature of being human; it’s a metaphysical, shared organ of meaning, identity, and understanding. A human being with no language would be, compared to other humans, cognitively “broken.”

We’ve been coexisting in the language environment in just as profound a way as we coexist in the physical one.

Therefore …

Language is essential to contextual experience.

So, to understand context, we have to understand cognition; in particular we have to understand how cognition is “embodied.”

And in understanding cognition, we have to understand how language works as an organ of individual & communal cognition & understanding.

We then see that language doubles back into context as a core element: it nudges, informs, shapes and even creates contextual experience.

Since language first emerged among humans, it has been a sort of technology for constructing our shared reality. We inhabit a shared linguistic construct that literally changes what surroundings, objects and experience mean to us; and that construct also creates new surroundings — situations, conditions, contexts — that are just as immersive and meaningful to our lives as any mountain, stream, building or city square. In fact, it’s language that gives even those physical entities so much shared cultural significance.

Digital information brings new complications to Ecological & Linguistic information coherence and comprehension. 

Since human beings first used any sort of language, we’ve been inhabiting a shared “information dimension” of sorts. But the Internet has made what was tacit, analog and “meta” into something explicit, digital and “actual.”

This dimension unmoors context from many of the embodied, physical referents that our brains and bodies evolved to take for granted. This brings new complications to how context is experienced. As mentioned above, we tend to create things using digital information as raw material, without paying attention to the full needs of how we comprehend ecological & linguistic information.

Information architecture arose as a way to compose habitable structures in this new dimension.

Information architecture is a powerful framework for designing context; the challenge of designing context with information architecture can better be understood and performed if framed as an act of composition. Composition is a relevant frame in several ways:

  • We compose context in the sense of two dimensional art (e.g. “a well-composed photograph”): composition in art is about the arrangement of elements within the frame of the work of art, foreground and background, juxtaposition and relevance, what is included and not included — all these affect the user contextually as they affect the viewer of an artwork.
  • We compose context in the sense of architectural composition: the arrangement of spatial cues and physical boundaries to create specially recognized places, that afford certain actions over others. A hallway affords travel from one place to another; a foyer affords entry, meeting guests, getting settled; etc. The space is understood in an especially embodied way. Similarly for context, which is also about what activities are afforded over others, public vs private, etc.
  • We compose context in the sense of creating meaning with language (e.g. “composing a sentence”): composition of language involves syntax, semantics and semiotics to evoke messages and meanings in other people; similarly, context is highly dependent on language; for example, labels can completely change the experienced nature of two otherwise identical objects or places.

It turns out that for the information dimension, language has to do much of the work of all these forms of composition, not just the “writing sentences” variety; and an understanding of syntax, semantics and semiotics is similar to having an understanding of physics for architecture, or an understanding of perspective in visual art.

Composition of context, then, depends upon the careful and fully aware creation and manipulation of all three modes of information. As we continue to mature the practice of information architecture, we need to improve our understanding of how the materials we use function as information for cognitive affordance, and how the structure of these affordances creates and changes how the agent (user) comprehends context: the understanding of the relationships between subjects and situations.



Note: This is based on work in progress for a book on the design of context. I am not a full-time academic researcher and writer, but a theoretically-minded practitioner looking to ground these ideas in responsible research. Any critique, suggested sources or other feedback is quite welcome.





My next move is a TUG. Thu, 03 May 2012 15:41:49 +0000 I’m very happy to announce I’m joining The Understanding Group as an Information Architect.

I’m a big believer in TUG’s mission: using information architecture to make things “be good.”

Since I’ve been blathering on and on about the importance of IA for over a decade now, I figured I might as well put my career where my mouth is and join up with this exciting new firm that has IA as its organizing principle. It doesn’t hurt that the people are pretty awesome too.

For the time being I’ll still be living in Atlanta, but traveling on occasion to Michigan, NYC and wherever else necessary to collaborate with clients and team members.

But unfortunately, going on to something new means having to leave behind something else.

I want to say that I’ll miss working with the great people at Macquarium. The two years I’ve spent with “MQ” have been among the best of my career, in terms of the practitioners I’ve gotten to know, the clients I’ve been able to partner with, and the fascinating, challenging work I’ve gotten to do.

Macquarium is doing some of the most cutting-edge work I’ve heard of in the cross-channel, service-design and organizational design spaces. I’m very fortunate to have had the chance to be part of their team.



Embodied Responsiveness Tue, 01 May 2012 18:05:31 +0000 I’ve been thinking a lot lately about responsiveness in design, and how we can build systems that work well in so many different contexts, on so many different devices, for so many different scenarios. So many of our map-like ways of predicting and designing for complexity are starting to stretch at the seams. I have to think we are soon reaching a point where our maps simply will not scale.

Then there are the secret-sauce, “smart” solutions that promise they can take care of the problem. It seems to happen on at least every other project: one or more stakeholders are convinced that the way to make their site/app/system truly responsive to user needs is to employ some kind of high-tech, cutting-edge technology.

This can range from Clippy-like “helpers” that magically know what the user needs, to “conversation engines” that try to model a literal conversational interaction with users, like Jellyvision, to established technologies like the “collaborative filtering” technique pioneered by companies like Amazon.
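For a sense of how the collaborative-filtering idea works at its simplest, here is a toy sketch (my own, with invented data; production systems like Amazon’s are vastly more sophisticated): recommend things bought by people whose purchases overlap with yours.

```python
# A toy sketch of the idea behind collaborative filtering: items liked by
# users similar to you are probably relevant to you. All data is invented.

purchases = {
    "ann":  {"book_a", "book_b", "book_c"},
    "ben":  {"book_b", "book_c", "book_d"},
    "cara": {"book_x", "book_y"},
}

def recommend(user: str) -> set[str]:
    """Score unseen items by how much their owners overlap with `user`."""
    mine = purchases[user]
    scores: dict[str, int] = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)        # similarity = shared purchases
        for item in theirs - mine:          # only items the user lacks
            scores[item] = scores.get(item, 0) + overlap
    return {item for item, score in scores.items() if score > 0}

print(recommend("ann"))  # ben's purchases overlap ann's, so book_d surfaces
```

The whole trick is in that overlap count: no understanding of books, users, or taste is involved, just co-occurrence, which is why it scales and also why it sometimes recommends nonsense.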

Most of the time, these sorts of solutions hold out more promise than they can fulfill. They aren’t bad ideas — even Clippy had merit as a concept. But to my mind, more often than not, these fancy approaches to the problem are a bit like building a 747 to take people across a river — when all that’s needed is a good old-fashioned bridge. That is, most of the time the software in question isn’t doing the basics. Build a bridge first, then let’s talk about the airliner.

Of course, there are genuine design challenges that do seem to still need that super-duper genius-system approach. But I still think there are more “primitive” methods that can do most of the work by combining simple mechanisms and structures that can actually handle a great deal of complexity.

We have a cognitive bias that makes us think that anything that seems to respond to a situation in a “smart” way must be “thinking” its way through the solution. But it turns out, that’s not how nature solves complex problems — it’s not even really how our bodies and brains work.

I think the best kind of responsiveness would follow the model we see in nature — a sort of “embodied” responsiveness.

I’ve been learning a lot about this through research for the book on designing context I’m working on now. There’s a lot to say about this … a lot … but I need to spend my time writing the book rather than a blog post, so I’ll try to explain by pointing to a couple of examples that may help illustrate what I mean.

Consider two robots.

One is Honda’s famous Asimo. It’s a humanoid robot that is intricately programmed to handle situations … for which it is programmed. It senses the world, models the world in its brain and then tells the body what to do. This is, by the way, pretty much how we’ve assumed people get around in the world: the brain models a representation of the world around us and tells our body to do X or Y. What this means in practice, however, is that Asimo has a hard time getting around in the wild. Modeling the world and telling the limbs what to do based on that theoretical model is a lot of brain work, so Asimo has some major limitations in the number of situations it can handle.  In fact, it falls down a lot (as in this video) if the terrain isn’t predictable and regular, or if there’s some tiny error that throws it off. Even when Asimo’s software is capable of handling an irregularity, it often can’t process the anomaly fast enough to make the body react in time. This, in spite of the fact that Asimo has one of the most advanced “brains” ever put into a robot.

Another robot, nicknamed Big Dog, comes from a company called Boston Dynamics. This robot is not pre-programmed to calculate its every move. Instead, its body is engineered to respond in smart, contextually relevant ways to the terrain. Big Dog’s brain is actually very small and primitive, but the architecture of its body is such that its very structure handles irregularity with ease, as seen in this video where, about 30 seconds in, someone tries to kick it over and it rights itself.

The reason why Big Dog can handle unpredictable situations is that its intelligence is embodied. It isn’t performing computations in a brain — the body is structured in such a way that it “figures out” the situation by the very nature of its joints, angles and articulation. The brain is just along for the ride, and providing a simple network for the body to talk to itself. As it turns out, this is actually much more like how humans get around — our bodies handle a lot more of our ‘smartness’ than we realize.

I won’t go into much more description here. (And if you want to know more, check this excellent blog post on the topic of the robots, which links/leads to more great writing on embodied/extended cognition & related topics.)

The point I’m getting at is that there’s something to be learned here in terms of how we design information environments. Rather than trying to pre-program and map out every possible scenario, we need systems that respond intelligently by the very nature of their architectures.

A long time ago, I did a presentation where I blurted out that eventually we will have to rely on compasses more than maps. I’m now starting to get a better idea of what I meant. Simple rules, simple structures, that combine to be a “nonlinear dynamical system.” The system should perceive the user’s actions and behaviors and, rather than trying to model in some theoretical brain-like way what the user needs, the system’s body (for lack of a better way to put it) should be engineered so that its mechanisms bend, bounce and react in such a way that the user feels as if the system is being pretty smart anyway.
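A crude way to see this in code: a damped spring “rights itself” after a kick with no model of the kick at all; the recovery is entirely in the structure. The sketch below is mine, and the numbers are arbitrary, chosen just to make the simulation settle.

```python
# Sketch of structural (embodied) responsiveness: a spring-damper system
# absorbs a disturbance through its physical constants, not through any
# model of what disturbed it. Parameters are arbitrary illustrations.

def settle(position: float, velocity: float,
           stiffness: float = 10.0, damping: float = 2.0,
           dt: float = 0.01, steps: int = 2000) -> float:
    """Simulate a kicked spring-damper and return its final position."""
    for _ in range(steps):
        accel = -stiffness * position - damping * velocity
        velocity += accel * dt          # semi-implicit Euler step
        position += velocity * dt
    return position

final = settle(position=0.0, velocity=5.0)  # a hard kick
print(abs(final) < 0.01)  # back near rest, with no "brain" involved
```

The analogy is loose, of course. The point is that the “response” comes from constants baked into the structure, much the way Big Dog’s joints and articulation do most of its thinking.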

At some point I’d like to have some good examples for this, but the ones I’m working on most diligently at the moment are NDA-bound. When I have time I’ll see if I can “anonymize” some work well enough to share. In the meantime, keep an eye on those robots.




Notes on IA from 2002 Tue, 20 Mar 2012 04:05:28 +0000 Tonight, I ran across some files from 2002 (10 yrs ago), some of which were documents from the founding of the IA Institute. At some point I need to figure out what to do with all that.

But among these files was a text clipping that looks as if it was probably part of a response I was composing for a mailing list or something. And it struck me that I’ve been obsessing over the same topics for at least 10 years. Which is … comforting … but also disconcerting. I suppose I’m glad I’m finally writing a book on some of these issues because now maybe I can exorcise them and move on.

Here’s the text clipping.

I agree it’s not specific to the medium. If you can call the Internet a medium. I really think it’s about creating spaces from electrons rather than whole atoms.

If putting two bricks together is architecture (Mies), then putting two words together is writing. The point is that you’re doing architecture or writing, but not necessarily well. Both acts have to be done with a rationale, with intention and skill. And their ultimate success as designs depend upon how well they are used and/or understood.

But what about putting two ideas together, when the ideas manifest themselves not as words alone, but as conceptual spaces that are experienced physically, with clicking fingers and darting eyeballs. No walking necessary, just some control that’s quick enough to follow each connecting thought.

What really separates IA from writing? I could say that putting About and Careers together is “writing” … It’s a phrase “about careers.” But if I put About and Careers together in the global navigation of a website, with perhaps a single line between them to separate them, there’s another meaning implied altogether.

Yet those labels are just the signs representing larger concepts, that bring with them their own baggage and associations, and that get even weirder when we put them together (they tend to exert force on one another, like gravity, in their juxtaposition). The decision to name them as they are, to place the entryways (signs/labels) to these areas in a globally accessible area of the interface, to group them together, and how the resulting “rooms” of this house unfold within those concepts — that’s information architecture.

We use many tools for the structuring of this information within these conceptual rooms, and these can include controlled vocabularies, thesauri, etc. There is a whole, deep, ancient and respected science behind these tools alone. But just as physics and engineering do not make up the whole of physical Architecture, these tools do not make up the whole of Information Architecture.

Why did we not have to think about this stuff very much before the Web? Because no electron-based shared realities were quite so universally accessed before. Yes, we had HCI and LIS. Yes, we had interaction design and information design. We had application design and workflow and ethnographic discovery methods and business logic and networked information.

But the Web brings with it the serendipitous combination of language, pictures, and connections between one idea and another based on nothing but thought. Previous information systems were tied primarily to their directory structures. But marrying hypertext (older than the web) to an easy open source language (html) and nearly universal access, instantaneously from around the world (unlike hypertext applications and documents, such as we made with HyperCard) created an entirely new entity that we still haven’t gotten our heads around quite yet.

We’re still drawing on cave walls, but the drawings become new caves that connect to other caves. All we have to do is write the sign, the word, the picture, whatever, on the wall, and we’ve brought another place into being.

I wonder if Information Architecture can be seen as Architecture without having to worry so much about time and space? Traditional architecture sans protons and nuclei?

What if Jerusalem were an information space rather than a physical one? I wonder if many faiths could then somehow live there together in peace, with some clever profile-based dynamic interface control? (One user sees a temple, another sees a mosque?)

I wonder if Information Architecture is more about anthills and cowpaths than semantic hierarchies?

I wonder if MUSHes, MOOs and multiplayer Quake already took Information Architecture as far as it’ll ever go, and we’re just trying to get business-driven IA to catch up?


Reading this now is actually disturbing to me. Not unlike if I were Jack Torrance’s wife looking at his manuscript in The Shining … but then realizing I was Jack. Or something.

So. Exorcism. Gotta keep writing.


Designing Context: About the Book Sun, 19 Feb 2012 19:36:30 +0000 Thanks for checking out the post, however …

I’ve moved the information about the book over to its own page.


The Path to Fail is Paved with Good Intentions Wed, 08 Feb 2012 15:35:57 +0000 I joined Path on December 1st, 2011. I know this because it says so, under my “path” in the application on my iPhone.

That same day, I posted this message in the app:

“Wondering how Path knew whom to recommend as friends?!?”

I’ve used a lot of social software over the years (technically since 1992 when the Internet was mainly a social platform, before the e-commerce era), and I do this Internet stuff for a living, so I have a pretty solid mental model for where my data is and what is accessing it. But this was one of those moments where I realized something very non-transparent was happening.

How did it know? 

Path was very smartly recommending users on Path to me, even though it knew nothing about me other than my email address and the fact that it was on my phone. I hadn’t given it a Twitter handle; I hadn’t given it the same email address I use on Facebook (which isn’t public anyway). So how did it know?

I recall deciding, during a dinner conversation with co-workers, that it must just be checking the address book on my phone. That bugged me, but I let it slide.

Now, I’m intrigued by why I let it go so easily. I suspect a few reasons:

  • Path had positioned itself as an app for intimate connections with close friends. It set the expectation that it was going to be careful and safe, more closed than most social platforms.
  • It was a very pleasing experience to use the app; I didn’t want to just stop using it, but wanted to keep trying it out.
  • I was busy and in the middle of a million other things, so I didn’t take the time to think much about it beyond that initial note of dismay.
  • I assumed it was only checking names of contacts and running some kind of smart matching algorithm — no idea why I thought this, but I suppose the character of the app caused me to assume it was using a very light touch.

Whatever the reasons, Path set me up to assume a lot about what the app was and what it was going to do. After a few weeks of using it sporadically, I started noticing other strange things, though.

  • It announces, on its own, when I have entered a new geographical area. I had been assuming it was only showing me this information, but then I looked for a preference to set it as public or private and found none. But since I had no way of looking at my own path from someone else’s point of view, I had to ask a colleague: can you see that I just arrived in Atlanta? He said yes, and we talked about how odd that was… no matter how close your circle of friends, you don’t necessarily want them all knowing where you are without saying so.
  • When someone “visited my path” it would tell me so. But it wasn’t entirely clear what that meant. “So and so visited your path” sounds like they walked up to the front of my house and spent a while meditating on my front porch, but in reality they may have just accidentally tapped something they thought would allow them to make a comment but ended up in my “path” instead. And the only way to dismiss this announcement was to tap it, which took me to that person’s path. Were they now going to get a message saying I had visited their path? I didn’t know … but I wondered if it would misrepresent to the other users what I’d done.
  • Path also relies on user pictures to convey “who” … if someone just posts a picture, it doesn’t say the name of the person, just their user picture. If the picture isn’t of the person (or is blank) I have no idea who posted it.

All of these issues, and others, add up to what I’ve been calling Context Management — the capabilities that software should be giving us to manage the multifaceted contexts it exposes us to, and that it allows us to create. Some platforms have been getting marginally better at this (Facebook with its groups, Google + with its circles) but we’re a long way from solving these problems in our software. Since these issues are so common, I mostly gave Path a pass — I was curious to see how it would evolve, and if they’d come up with interesting solutions for context management.

It Gets Worse

And now this news … that Path is actually uploading your entire address book to Path’s servers in order to run matching software and present possible friends.

Once I thought about it for half a minute, I realized, well yeah of course they are. There’s no way the app itself has all the code and data needed to run sophisticated matching against Path’s entire database. They’d have to upload that information, the same way Evernote needs you to upload a picture of a document in order to run optical character recognition. But Evernote actually tells me it’s doing this … that there’s a cloud of my notes, and that I have to sync that picture in order for Evernote to figure out the text. But Path mentioned nothing of the sort. (I haven’t read their license agreement that I probably “signed” at some point, because nobody ever reads that stuff — I’d get nothing else done in life if I actually read the terms & conditions of every piece of software I used; it’s a broken concept; software needs to explain itself in the course of use.)
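As an aside, a lighter-touch design was available. Here is a sketch (hypothetical, and mine, not anything Path built; hashing alone has known weaknesses, since email addresses are guessable) of matching contacts by hashed identifiers so that raw addresses never leave the phone:

```python
# Hypothetical sketch: hash contact identifiers on the device and match
# hashes server-side, so raw emails are never uploaded. Addresses below
# are invented examples.

import hashlib

def digest(email: str) -> str:
    """Normalize an address and return its SHA-256 hex digest."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# On the phone: hash the address book before anything is uploaded.
address_book = ["friend@example.com", "coworker@example.com"]
uploaded = {digest(e) for e in address_book}

# On the server: compare against hashes of existing users' addresses.
known_users = {digest("friend@example.com"), digest("stranger@example.com")}
matches = uploaded & known_users
print(len(matches))  # one match found without seeing any raw addresses
```

Even this would still warrant asking permission first; the point is that “we need to match contacts” does not force “we must upload the whole address book.”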

When you read the discussion going on under the post I linked to, you see the Path CEO joining in to explain what they did. He seems like a nice chap, really. He seems to actually care about his users. But he evidently has a massive blind spot on this problem.

The Blind Spot

Here’s the deal: if you’re building an app like Path and look at user adoption as mainly an engineering problem, you’re going to come to a conclusion similar to Path’s. To get people to use Path they have to be connected to friends and family, and in order to prime that pump, you have to go ahead and grab contact information from their existing social data. And if you’re going to do that effectively, you’re going to have to upload it to a system that can crunch it all so it surfaces relevant recommendations, making it frictionless for users to start seeding their network within the Path context.

But what Path skipped was the step that most such platforms take: asking your permission to look at and use that information. They essentially made the same mistake Google Buzz and Facebook Beacon did — treating your multilayered, complex social sphere as a database where everyone is suddenly in one bucket of “friends” and assuming that grabbing that information is more important than helping you understand the rules and structures you’ve suddenly agreed to live within.

Using The Right Lenses

For Path, asking your permission to look at your contacts (or your Twitter feed, or whatever else) would add friction to adoption, which isn’t good for growing their user base. So, like Facebook has done so many times, they err on the side of what is best for their growth rather than what is best for users’ peace of mind and control of their contextual reality. It’s not an evil, calculated position. There’s no cackling villain planning how to expose people’s private information.

It’s actually worse than that: it’s well-meaning people looking only through a couple of lenses and simply not seeing the problem, which can be far more dangerous. In this case, the lenses are:

  • Aesthetics (make it beautiful so people want to touch it and look at it),
  • Small-bore interaction design (i.e. delightful & responsive interaction controls),
  • Engineering (very literally meeting a list of decontextualized requirements with functional system capabilities), and
  • Marketing (making the product as viral as possible, for growth and market valuation purposes).

What’s missing?

  • Full-fledged interaction design (considering the entire interaction framework within which the small, delightful interactions take place — creating a coherent language of interaction that actually makes sense rather than merely window-dresses with novelty)
  • Content strategy (in part affecting the narrative around the service that clearly communicates what the user’s expectations should be: is it intimate and “safe” or just another social platform?)
  • Information architecture (a coherent model for the information environment’s structure and structural rules: where the user is, where their information lives, what is being connected, and how user action is affecting contexts beyond the one the user thinks they’re in — a structural understanding largely communicated by content & interaction design, by the way)

I’m sure there’s more. But what you see above is not an anomaly. This is precisely the diagnosis I would give nearly every piece of software I’m seeing launched. Path is just an especially egregious example, in part because its beauty and other qualities stand in such stark contrast to its failings.

Path Fail is UX Fail

This is in part what some of us in the community are calling the cultural failure of “user experience design”: UX has largely become a buzzword for the first list, in the rush to crank out hip, interactively interesting software. But “business rules,” which effectively act as the architecture of the platform, are driven almost entirely by business concerns; content is mostly overlooked for any functional purpose beyond giving a fun, hip tone to the brand of the platform; and interaction design is mainly driven by designers more concerned with “taste” performance and “innovative” UI than with creating a rigorously considered, coherent experience.

If a game developer released something like this, they’d be crushed. The incoherence alone would make players throw up their hands in frustration and move on to a competitor in a heartbeat; Metacritic would destroy its ability to make sales. How is it, then, that we have such low standards and give such leeway to the applications being released for everything else?

So, there’s my rant. Will I keep using Path? Well … damn… they already have most of my most personal information, so it’s not like leaving them is going to change that. I’m going to ride it out, see if they learn from mistakes, and maybe show the rest of the hip-startup software world what it’s like to fail and truly do better. They have an opportunity here to learn and come back as a real champion of the things I mentioned above. Let’s hope for the best.

So I’m writing a book on Designing Context Mon, 06 Feb 2012 22:03:09 +0000 As I hinted in a post a couple of weeks ago, I’m writing a book. The topic: Designing Context.
If the phrase sounds a little awkward, that’s on purpose. It’s not something we’re used to talking about yet. But I believe “context” to be a medium of sorts, that we’ve been shaping for years without coming to grips with the full implications of our work.
Although I have written many things, some of them pretty long, I have never written anything this long before. I’m a little freaked out.
But I have to keep reminding myself that the job of this book isn’t to definitively and comprehensively cover everything having to do with its subject. I just want to do a good job getting some fascinating, helpful ideas about this topic into the hands of the community in a nice, readable format that gives me the room to tell the story well.
This isn’t a how-to book; it’s more of a “let’s look at things this way and see what happens” book. It’s also not an academic book: I’m not an academic and still have a 50+ hour a week job, so there’s no way I’ll ever have time to read & reference every related/relevant work on the topic, even though that seems to be what I’m trying to do in spite of myself.
And I’m going to be very honest about the fact that it’s largely a book on information architecture: how information shapes & creates context for humans.
Thanks to O’Reilly Media for working with me on getting this thing going, and to Peter Morville for the prodding & encouragement.
Now … time to write.

PS for a better idea of what I’m getting at, here are some previous writings:

Users Don’t Have Goals Fri, 03 Feb 2012 16:48:22 +0000 My talk for Interaction 12 in Dublin, Ireland.

Another 10-minute, abbreviated talk.

You can see the video on Vimeo.

The Contexts We Make Fri, 20 Jan 2012 22:12:18 +0000 I’ve been presenting on this topic for quite a while. It’s officially an obsession. And I’m happy to say there’s actually a lot of attention being paid to context lately, and that is a good thing. But it’s mainly from the perspective of designing for existing contexts in the world, and accommodating or responding appropriately to them.

For example, the ubicomp community has been researching this issue for many years — if computing is no longer tied to a few discrete devices and is essentially happening everywhere, in all sorts of parts of our environment, how can we make sure it responds in relevant, even considerate ways to its users?

Likewise, the mobile community has been abuzz about the context of particular devices, and how to design code and UI that shapes the experience based on the device’s form factor, and how to balance the strengths of native apps vs web apps.

And the Content Strategy practitioner community has been adroitly handling the challenges of writing for the existing audience, situational & media contexts that content may be published or syndicated into.

All of these are worthy subjects for our attention, and very complex challenges for us to figure out. I’m on board with any and all of these efforts.

But I genuinely think there’s a related, but different issue that is still a blind spot: we don’t only have to worry about designing for existing contexts, we also have to understand that we are often designing context itself.

In essence, we’ve created a new dimension, an information dimension that we walk around in simultaneously with the one where we evolved as a species; and this dimension can significantly change the meaning of our actions and interactions, with the change of a software rule, a link name or a label. There are no longer clear boundaries between “here” and “there” and reality is increasingly getting bent into disorienting shapes by this pervasive layer of language & soft-machinery.

My thinking on this central point has evolved over the last four to five years, since I first started presenting on the topic publicly. I’ve since been including a discussion of context design in almost every talk or article I’ve written.

I’m posting below my 10-minute “punchy idea” version developed for the WebVisions conference (iterations of this were given in Portland, Atlanta & New York City).

I’m also working on a book manuscript on the topic, but more on that later as it takes more shape (and as the publisher details are ironed out).

I’m really looking forward to delving into the topic with the attention and breadth it needs for the book project (with trepidation & anxiety, but mostly the positive kind ;-).

Of course, any and all suggestions, thoughts, conversations or critiques are welcome.

PS: as I was finishing up this post, John Seely Brown (whom I consider a patron saint) tweeted this bit: “context is something we constantly underplay… with today’s tools we can now create context almost as easily as content.” Synchronicity? More likely just a result of his writing soaking into my subconscious over the last 12-13 years. But quite validating to read, regardless :-)

I’m pasting the SlideShare-extracted notes below for reference.


1. THE CONTEXT PROBLEM | A 10 Minute “Punchy Idea” | WebVisions | NYC | 2012 | Andrew Hinton [Macquarium] | @inkblurt
2. Where are you, right now?
3. To us in the room, you’re “here.” To people online, you’re “here.” So where are you right now? Just think about that for a second. To those of us in the room, we look over and see you’re here, with us. But on Twitter, or instant messenger, or Facebook — wherever else you’re communicating at the moment — you’re “there.” This isn’t just an idle thought. These words matter because they indicate the way we cognitively comprehend reality.
4. Where is “here” in this tweet? Check out this tweet: “I’m here. Is anybody here?” How do you interpret this question? Where is “here” here? She could have arrived at a restaurant and be asking if her friends are there yet. Or she could be asking if anyone she knows is looking at Twitter at the moment. Notice I referred to even the statement as “here” — as if it’s a place: “Let’s look at what this person is saying ‘here.’”
5. Reality hacking. “Fountain” | Marcel Duchamp, ~1917. Recognize this? It was named by art experts the most influential work of art of the 20th century — not because of its beauty, but because it signaled, and partly catalyzed, a rift in how we think about culture. Duchamp and friends grabbed a urinal, signed it with a fake artist’s name, and entered it in an art show. It didn’t get in — but then they publicized the “injustice” of being rejected so widely that it became famous, and started conversations about what the nature of art really is. Who decides it? And it was all done by adding a bit of language to an object. By changing its context. It’s a sort of reality hacking. Why?
6. Information changes how we experience the physical. (Photo: flickr – uicdigital.) Because information changes how we experience the physical world. Look at this photo — there’s information everywhere in this scene. The lines on the road tell us where to drive; the traffic light is a virtual barrier that affects our behavior; the road signs give us a layer of instruction that adds meaning to the city around us. Without the information here, it would quite literally be a different place.
7. More pervasive; more immersive. (Photo: flickr – aokkone.) Now look at today. When you’re using a GPS, where are you driving? Your brain merges the information from the device with what you’re seeing in the windshield. They become essentially the same. So now we’re in even richer information environments.
8. More pervasive; more immersive. In fact, research is happening now to actually increase the detail & realism of the information dimension for drivers.
9. Information makes places, kind of like this picture makes a pipe — if you could smoke the pipe. This is the famous Magritte painting; it says “this is not a pipe.” The picture definitely shows a pipe, but it’s not a real pipe you can smoke. Information is kind of like this in the way it makes places, except for a key difference: with information, you can smoke the pipe.
10. Recognize this? It’s a home-made dungeon for Dungeons & Dragons. This is an information environment — but it’s only barely part of the physical world. It’s all just information. Yet we experienced it as feeling very real, with real consequences and meaning with our peers. OK, whatever — that’s D&D. Can’t take that seriously, right?
11. Some immersive information frameworks aren’t physical at all. (US Constitution, archives.gov.) What about this? How is it all that different from a D&D ruleset? Some people got together and wrote an information artifact — just words on pages — but it’s the framework the United States has existed within for over two centuries. Information is real, and it creates contexts that can have powerful effects on the reality we live in.
12. “Beacon.” “Buzz.” Which is why people get so upset when some of the places they live in suddenly change their rules — without representation, without explanation. What did these two platforms get so wrong? They assumed that, just because the environments they created were digital — informational — the rules of physical social context didn’t apply. They oversimplified or ignored some very complex things about how people really live. They treated these designs as software engineering solutions, rather than life solutions.
13. “Friend?” For example, they warped what the word ‘friend’ means. Sure, it’s just language. But used as an entity in a relational database, behind a massive platform where millions of people conduct big, meaningful slices of their lives, it becomes more than just a word. It becomes architecture.
14. In the information dimension, language is architecture.
15. Obvious difference. (Photos: flickr – shimonkey, anirudhkoul.) In physical space, there’s an obvious difference between a little nook in the corner of a room where you can whisper to someone, and a stage in front of thousands of people where a microphone will announce what you say to all of them.
16. D vs @ — not so obvious. But on Twitter, all it takes is D vs @ to make that difference. It changes from requiring a big, physical change to a tiny alphanumeric slip. The information environments we’re creating are littered with these dangerously thin barriers between contexts.
17. We’ve always lived in language. Now we live in software. Map = Territory. We’ve always lived in language — since the earliest beginnings of civilization, it’s been part of what makes us people. But now we also live in software, which is language made into architecture: places we inhabit. The map has become the territory. So, in a weird way, the D&D geeks won … we all live in their dungeons now.
18. Existing context vs. the context we design. We aren’t just designing for existing contexts anymore. We are designing the context itself. And the more that information dimension pervades our physical space …
19. What we make for the “screen” changes the world “outside the screen.” … the more we’re actually designing all human context. What we make for the screen changes the world outside the screen.
20. Actually, we’re turning the world into the “screen.”
21. We don’t fully understand what we have wrought. I don’t think we really understand what we have made. We keep going as if everything we do with this technology just has to be great, but we end up making mistakes and wondering how we screwed up.
22. So what do we do?
23. Be aware of, and understand, the problem. The first step is just to be aware of the problem. I think this is an area of design that we haven’t fully come to grips with yet. So let’s keep working on that.
24. “Scenario”: the whole contextual situation — tasks, needs, and the cognitive, physical & emotional factors around them. It all comes back to understanding the whole person, and the whole contextual situation in which they live and where their needs come from. We have to be careful that we’re not so focused on the individual task we’re designing for that we ignore the incredible ripple effect it has, and how it alters the reality that person is living in.
25. Punching, complete. [@inkblurt]
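The “D vs @” slip from the slides can be sketched in a few lines. This is an illustrative toy model, not Twitter’s actual routing logic; it only shows how a one-character prefix flips a message’s audience from one person to everyone.

```python
# Toy model of a dangerously thin contextual barrier.
# Not Twitter's real behavior -- purely illustrative.

def audience_for(message, followers):
    """Decide who sees a message based on its first characters."""
    if message.startswith("D "):
        recipient = message.split()[1]
        return {recipient}          # the whisper nook: one person
    return set(followers)           # the stage: everyone listening

followers = {"ana", "raj", "kim"}

assert audience_for("D ana meet at 8?", followers) == {"ana"}
assert audience_for("@ana meet at 8?", followers) == followers
```

One keystroke separates the nook from the stage; in physical space, that same change would require moving buildings.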

French Toast Tue, 03 Jan 2012 18:31:56 +0000 I’m using this post to give a home to a video clip from the show M*A*S*H. I sometimes use the clip in presentations, but it doesn’t seem to be compatible with YouTube, so I’m putting it here instead. QuickTime m4v format; just click the link to view:  French Toast

Why Second Life Matters Sun, 18 Dec 2011 04:32:32 +0000 So, the short version of my point in this post (the “tl;dr” as it were) is this: possibly the most significant value of Second Life is as a pioneering platform for navigating & comprehending the pervasive information dimension in a ubiquitous/pervasively networked physical environment.

That’s already a mouthful … But here’s the longer version, if you’re so inclined … Second Life Hand Logo

It’s easy to dismiss Second Life as kitsch now. Even though it’s still up and running, and evidently still providing a fulfilling experience for its dedicated user-base, it no longer has the sparkle of the Next Big Thing that the hype of several years ago brought to it.

I’ll admit, I was quite taken by it when I first heard of it, and I included significant commentary about it in presentations and writings I did at the time. But after only a few months, I started realizing it had serious limitations as a mainstream medium. For one thing, the learning curve for satisfying creation was too steep.

Three-dimensional modeling is hard enough with even the best tools, but Second Life’s composition toolset at the height of its popularity was frustratingly clumsy. Even if it had been state-of-the-art, however, it takes special knowledge & ability to draw in three dimensions. Unlike text-based MUDs, where anyone with a half-decent grasp of language could create relatively convincing characters, objects, and rooms, Second Life required everything to be made explicitly, literally. Prose allows room for gestalt — the reader can fill in the details with imagination. Not in an environment like Second Life, though.

Plus, to make anything interactive, you had to learn a fairly complex scripting language. Not a big deal for practiced coders, but for regular people it was daunting.

So, as Second Life attracted more users, it became more of a hideous tragedy-of-the-commons experience, with acres of random, gaudy crap lying about, and one strange shopping mall after another with people trying to make money on the platform selling clothing, dance moves, cars and houses — things that imaginative players would likely have preferred to make for themselves, but instead had to piece together through an expensive exercise in collage.

At the heart of what made so many end up dismissing the platform, though, was its claim to being the next Web … the new way everyone was supposed to interact digitally online.

I never understood why anyone was making that claim, because it always seemed untenable to me. Second Life was inspired by Neal Stephenson’s virtual reality landscape in Snow Crash (and somewhat more distantly, Gibson’s vision of “cyberspace”), and managed an adroit facsimile of how Stephenson’s fictional world sounded. But Stephenson’s vision was essentially metaphorical.

Still, beyond the metaphor issue, the essential qualities of the Web that made it so ubiquitous were absent from Second Life: the Web is decentralized, not just user-created but non-privatized and widely distributed. It exists on millions of servers run by millions of people, companies, universities and the like. The Web is also made of a technology that’s much simpler for creators to use, and perhaps most importantly, the Web is very open and easily integrated into everything else. Second Life never got very far with being integrated in that way, though it tried. The main problem was that the very experience itself was not easily transferable to other media, devices, etc. Even though they tried a URL-like linking method that could be shared anywhere as text, the *content* of Second Life was essentially a “virtual reality” 3D visual experience, something that just doesn’t transfer well to other platforms, unlike the text, static images & videos we share so easily across the Web & so many applications & devices.

Well, now that I’ve said all that somewhat negative stuff about the platform, what do I mean when I say Second Life matters?

Recent version of the SL "Viewer" UI

It seems to me Second Life is an example of how we sometimes rehearse the future before it happens. In SL, you inhabit a world that’s essentially made of information. Even the physical objects are, in essence, information — code that only pretends to be corporeal, but that can transform itself, disappear, reappear, whatever — a reality that can be changed as quickly as editing a sentence in a word processor.

While it’s true that our physical world can’t literally be changed that way, the truth is that the information layer that pervades it is becoming more substantial, more meaningful, and more influential in our experience of the world around us.

If “reality” is taken to be the sum total of all the informational and sensory experience we have of our environs, and we acknowledge that the informational (and to some degree sensory, as far as sight and sound go) layer is becoming dominated by digitally mediated, networked experience, then we are living in a place that is not too far off from what Second Life presents us.

Back when I was on some panels about Second Life, I would explain that the most significant aspect of the platform for user experience wasn’t the 3D space we were interacting with, but the “Viewer” — the mediating interface we used for navigating and manipulating that space. Linden Labs continually revised and matured the extensive menu-driven interface and search features to help inhabitants navigate that world, find other players & interest groups, or to create layers of permissions rules for all the various properties and objects. It was flawed, frustrating, volatile — but it was tackling some really fascinating, complex problems around how to live in a fluid, information-saturated world where wayfinding had more to do with the information layer *about* the actual places than the “physical” places themselves.
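As a rough illustration of the kind of layered permission rules the Viewer had to make navigable, here’s a hypothetical miniature in Python. This is not Linden Lab’s actual model; all names are invented. It just shows how every object in a pure-information world carries its own per-action rules, which the interface then has to surface.

```python
from dataclasses import dataclass, field

# Hypothetical miniature of layered object permissions in a virtual
# world. Invented names; not Linden Lab's actual permission model.

@dataclass
class VirtualObject:
    name: str
    owner: str
    can_copy: set = field(default_factory=set)
    can_modify: set = field(default_factory=set)

    def allowed(self, user, action):
        """Owners can do anything; others need an explicit grant."""
        if user == self.owner:
            return True
        grants = {"copy": self.can_copy, "modify": self.can_modify}
        return user in grants.get(action, set())

chair = VirtualObject("chair", owner="ana", can_copy={"raj"})

assert chair.allowed("ana", "modify")       # owner
assert chair.allowed("raj", "copy")         # explicit grant
assert not chair.allowed("raj", "modify")   # no grant, no access
```

Multiply this by every object, parcel, and group in the world and you get a sense of the wayfinding problem the Viewer was wrestling with.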

If we admit that the meaning & significance of our  physical world is becoming largely driven by networked, digital information, we can’t ignore the fact that Second Life was pioneering the tools we increasingly need for navigating, searching, filtering & finding our way through our “real life” environments.

What a city “means” to us is tied up as much in the information dimension that pervades it — the labels & opinions, statistics & rankings, the stuff that represents it on the grid — as in the physical atoms we touch as we walk its sidewalks or drive through its streets, or as we sit in its restaurants and theaters. All those experiences are shaped powerfully by the reviews and tips on Yelp, or the trace of a friend having checked in at a particular spot on Foursquare, or a picture we see on Flickr taken at a particular latitude and longitude. Or the real-time information about where our friends are *right now* and which places are kinda dead tonight. Not to mention the market-generated information about price, quantity & availability.

It’s always been the case that the narrative of a place has as much to do with how we experience the reality of the place as the physical sensations we have of it in person. But now that narrative has been made explicit, as a matter of record, and cumulative as well — from the interactions of everyone who has gone before us there and left some shadow of their presence, thoughts, reactions.

One day it would be interesting to compare all the ways in which various bits of software are helping us navigate this information dimension to the tools invented for inhabiting and comprehending the pure-information simulacra of Second Life. I bet we’d find a lot of similarities.


Unhappiness Machine Sat, 12 Nov 2011 00:18:57 +0000 I posted the content below over on the Macquarium Blog, but I’m repeating it here for posterity, and to add a couple of other thoughts first:

1. It’s amazing how easily corporations can fool themselves into feeling good about the experiences they create for their users by making elaborate dreamscapes & public theater — as if the fictions they’re creating somehow make up for the reality of what they deliver (and the hard work it takes to make reality square in any way with that imagined experience). This reminds me a bit of the excellent, well-executed dismemberment of this sort of thinking that Bret Victor posted this past week on the silliness & laziness behind things like the Microsoft “everything is a finger-tap slab” future-porn. Go read it.

2. Viral videos like the CocaCola Happiness Machine don’t only fool the originating brand into feeling overconfident — they make the audience seeing the videos mistake the bit of feel-good emotion they receive as substantial experience, and then wonder “how can my own company give such delight?” I’ve seen so many hours burned with brainstorming sessions where people are trying to come up with the answer to that — and they end up with more reality-numbing theatrics rather than fixing difficult problems with their actual product or service delivery.

Post after the cut — but it looks nicer on the MQ Blog ;-)

Coke Happiness Machine Video on YouTube

I don’t know about you, but I’ve been seeing the CocaCola “Happiness Machine” video off and on for several years now. It keeps being passed around as a champion of great customer experiences. If you haven’t seen it yet, feel free to check it out on YouTube. It’s up to over 4 million views and still counting.

Every time I see this movie I get a little more frustrated and annoyed. Maybe I’m a big old Scrooge for feeling this way. Or maybe I’m just human.

Because when I use a Coke machine, I don’t encounter a benevolent puppet-show-cornucopia of good feelings, free pizza and flower bouquets. I experience something very different. So I made my own little movie — and here it is.

So what happened here? And what’s my point?

The product was a Coke — but in order for me to get to the product, I had to go through some very impersonal, unpleasant machinery.

That’s not unlike what so many organizations are having to do on the web and other digital media: they’re not really in the software business, in the same way that Coca Cola isn’t really a vending-machine-design company. But the experience I have with the core product is definitely affected by the experience I have with its medium of delivery.

Using a vending machine like this hasn’t gotten much better in about 20-30 years, frankly. This machine I used is pretty new, but it’s much like the one I used in my high school break room in 1983 (one of the first I encountered that took paper money, not just coins).

Does the machine meet the requirements of its engineering? I suppose so… it takes money (sometimes), and spits out a product. It makes it clear what brand the product is and what you’ll get for your dollar. But what it doesn’t do is have the right kind of conversation with me around the product.

Why can’t I see which brands of soft drink are in stock and which aren’t? Even when I push the button it won’t tell me — I have to put in my money first. Why on earth?
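The difference is really about the order of the conversation. Here’s a hedged sketch, with invented names and inventory, contrasting the machine as it is with the machine as it could be:

```python
# Illustrative sketch of two vending-machine "conversations."
# Invented names and inventory; not any real machine's firmware.

inventory = {"cola": 0, "diet": 3}

def buy_blind(choice, paid):
    """The machine as-is: demand money first, reveal stock after."""
    if not paid:
        return "insert money first"
    return "dispensed" if inventory.get(choice, 0) > 0 else "sold out -- refund"

def buy_informed(choice):
    """The machine as it could be: show availability before payment."""
    if inventory.get(choice, 0) == 0:
        return "sold out -- pick another"
    return "please insert money"

assert buy_blind("cola", paid=True) == "sold out -- refund"
assert buy_informed("cola") == "sold out -- pick another"
```

Both flows “meet requirements,” but only the second has the conversation in an order that respects the person standing at the machine.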

And why would anyone design a machine like this so that it shakes the soda so it spews all over me when I open it? (Reducing the carbonation, btw, making it less pleasant to drink once the foam dies down …)

I’m not just picking on Coke by the way. As a native Atlantan, I grew up drinking this stuff like water, so I’m picking on it partly because it’s part of my background. But we run into things like this all the time, whether soft drink machines or school systems, auto repair or the doctor’s office. Bureaucracy is its own kind of ‘soft machine’ and often needs re-designing as much as any mechanical contraption.

When we make machinery — software or hardware — experience design is about getting at these human dimensions, and making sure the medium of delivery not only avoids spoiling how the customer feels about the product, but enhances that experience.

In Defense of D Fri, 16 Sep 2011 14:58:33 +0000 DTDT means lots of things

A long time ago, in certain communities of practice in the “user experience” family of practices, an acronym was coined: “DTDT” aka “Defining the Damned Thing”.

For good or ill, it’s been used for years now like a flag on the play in a football game. A discussion gets underway, whether heated or not, and suddenly someone says “hey can we stop defining the damned thing? I have work to do here, and you’re cluttering my [inbox / Twitter feed / ear drums / whatever …]”

Sometimes it rightly has reset a conversation that has gone well off the rails, and that’s fine. But more often, I’ve seen it used to shut down conversations that are actually very healthy, thriving and … necessary.

Why necessary? Because conversation *about* the practice is a healthy, necessary part of being a practitioner, and being in a community of other practitioners. It’s part of maturing a practice into a discipline, and getting beyond merely doing work, and on to being self-aware about how and why you do it.

It used to be that people weren’t supposed to talk about sex either. That tended to result in lots of unhappy, closeted people in unfulfilling relationships and unfulfilled desires. Eventually we learned that talking about sex made sex better. Any healthy 21st century couple needs to have these conversations — what’s sex for? how do you see sex and how is that different from how I see it? Stuff like that. Why do people tend to avoid it? Because it makes them uncomfortable … but discomfort is no reason to shun a healthy conversation.

The same goes for design or any other practice; more often than not, what people in these conversations are trying to do is develop a shared understanding of their practice, developing their professional identities, and challenging each other to see different points of view — some of which may seem mutually exclusive, but turn out to be mutually beneficial, or even interdependent.

I’ll grant that these discussions often have more noise than signal, but that’s the price you pay to get the signal. I’ll also grant that actually “defining” a practice is largely a red herring — a thriving practice continues to evolve and discover new things about itself. Even if a conversation starts out about clean, clinical definition, it doesn’t take long before lots of other more useful (but muddier, messier) stuff is getting sorted out.

It’s ironic to me that so many people in the “UX family” of practitioner communities utterly lionize “Great Figures” of design who are largely known for what they *wrote* and *said* about design as much as for the things they made, and then turn to their peers and demand they stop talking about what their practice means, and just post more pat advice, templates or tutorials.

A while back I was doing a presentation on what neuroscience is teaching us about being designers — how our heads work when we’re making design decisions, trying to be creative, and the rest. And one of the things I learned was the importance of metacognition — the ability to think about thinking. I know people who refuse to do such a thing — they just want to jump in and ACT. But more often than not, they don’t grow, they don’t learn. They just keep doing what they’re used to, usually to the detriment of themselves and the people around them. Do you want to be one of those people? Probably not.

So, enough already. It’s time we defend the D. Next time you hear someone pipe up and say “hey [eyeroll] can we stop the DTDT already?” kindly remind them that mature communities of practice discuss, dream, debate, deliberate, deconstruct and the rest … because ultimately it helps us get better, deeper and stronger at the Doing.

Identity is more than a name Fri, 05 Aug 2011 21:34:13 +0000 From the point of view of a binary mindset, identity is a pretty simple thing. You, an object = [unique identifier]. You as an object represented in a database should be known by that identifier and none other, or else the data is a mess.

The problem is, people are a mess. A glorious mess. And identity is not a binary thing. It’s much more fluid, variegated and organic than we are comfortable admitting to ourselves.

Lately there’s been some controversy over policies at Facebook and the newly ascendant Google+ that demand people use their “real” names. Both companies have gone so far as to actually pull the plug on people they suspect of not following those guidelines.

But this is actually a pretty wrong-headed thing to do. Not only does the marketplace of ideas have a long, grand tradition of the use of pseudonyms (see my post here from a couple years ago), but people have complex, multifaceted lives that often require they not put their “public identification attribute” (i.e. their ‘real name’) out there on every expression of themselves online.

There are a lot of stories emerging, such as this one about gender-diverse people who feel at risk having to expose their real names, that are showing us the canaries in the proverbial coal mine — the ones first affected by these policies — dropping off in droves.

But millions of others will feel the same pressures in more subtle ways too. Danah Boyd has done excellent work on this subject, and her recent post explains the problem as well as anyone, calling the policies essentially an “abuse of power.”

I’m sure it comes across as abusive, but I do think it’s mostly unwitting. I think it’s a symptom of an engineering mindset (object has name, and that name should be used for object) and a naive belief in transparency as an unalloyed “good.” But on an internet where your name can be searched and found in *any* context in which you have ever expressed yourself, what about those conversations you want to be able to have without everyone knowing? What about the parts of yourself you want to be able to explore and discover using other facets of your personality? (Sherry Turkle’s early work is great on this subject.)

I can’t help but think a Humanities & Social Sciences influence is so very lacking among the code-focused, engineering-cultured wizards behind these massive information environments. There’s a great article by Paul Adams, formerly of Google (and Google+), discussing the social psychology angle and how it influenced “Circles,” how Facebook got it somewhat wrong with “Groups,” and why he ended up at Facebook anyway. But voices like his seem to be in the minority among those who are actually making this stuff.

Seeing people as complex coalescences of stories, histories, desires, relationships and behaviors means giving up on a nice, clean, entity-relationship-diagram-friendly way of seeing the world. It means having to work harder on the soft, fuzzy, complicated stuff between people than on the buckets you want people to put themselves in. We’re a long way from a healthy, shared understanding of how to make these environments human enough.

I realize now that I neglected to mention the prevailing theory of why platforms are requiring real names: marketing purposes. That could very well be. But that, too, is just another cultural force in play. And I think there’s a valid topic to be addressed regarding the binary-minded approach to handling things like personal identity.

There’s an excellent post on the subject at The Atlantic. It highlights a site called My Name is Me, which describes itself as “Supporting your freedom to choose the name you use on social networks and other online services.”

Two Fixes for Twitter
Tue, 24 May 2011 14:31:27 +0000

There are two things in particular that everyone struggles with on Twitter. Here are my humble suggestions as to how Twitter can do something about them.

1. The Asymmetrical Direct-Message Conundrum

What it is: User A is following user B, but User B is not following User A. User B direct-messages User A, and when User A tries to reply to that direct message, they cannot, because User B is not following them.

Fix: Give User B a way to set a message that will DM User A with some contact info automatically. Something like “Unfortunately I can’t receive direct messages from you, but please contact me at blahblah@domain.blah.” A more complicated fix that might help would be to allow User B to set an optional exception for receiving direct messages for anyone User B has direct-messaged (but whom User B is not following), for a given amount of time or a number of messages. It’s not perfect, but it will handle the majority of these occurrences.
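That exception rule is easy to state precisely. Here’s a minimal sketch in Python, purely to illustrate the idea — the class names, the one-week window, and the five-reply limit are all my own hypothetical choices, not anything Twitter actually implements:

```python
from dataclasses import dataclass, field
import time

DM_WINDOW_SECONDS = 7 * 24 * 3600  # hypothetical: exception lasts one week
DM_REPLY_LIMIT = 5                 # hypothetical: or five replies, whichever comes first

@dataclass
class DMException:
    opened_at: float
    replies_used: int = 0

@dataclass
class User:
    name: str
    following: set = field(default_factory=set)          # names this user follows
    dm_exceptions: dict = field(default_factory=dict)    # sender name -> DMException

    def send_dm(self, recipient: "User") -> bool:
        """Return True if this user may DM `recipient` right now."""
        if self.name in recipient.following:
            return True  # normal rule: recipient follows the sender
        # Otherwise, check for a temporary exception the recipient opened
        exc = recipient.dm_exceptions.get(self.name)
        if exc and (time.time() - exc.opened_at) < DM_WINDOW_SECONDS \
                and exc.replies_used < DM_REPLY_LIMIT:
            exc.replies_used += 1
            return True
        return False

    def open_exception_for(self, other: "User") -> None:
        """Called when this user DMs someone they don't follow back:
        grant the other party a temporary window in which to reply."""
        self.dm_exceptions[other.name] = DMException(opened_at=time.time())
```

So when User B direct-messages User A (who can’t normally reply), B’s client would call `open_exception_for(A)`, and A’s replies would pass the check until the window or reply limit runs out.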

2. The “DM FAIL”

What it is: User A means to send a direct message to User B, but accidentally tweets it to the whole wide world.

There are a couple of variations:
a) The SMS Reflex Response: User A gets a text from Twitter with a direct message from User B; User A types a reply and hits “send” before realizing it’s from Twitter and should’ve had “d username” (or now “m username” ?!?) typed before it.

b) The Prefix Fumble: User A is in same situation as above, but does realize it’s a text from Twitter — however, since they’re so used to thinking of Twitter usernames in the form of “@username” they type that out, forgetting they should be using the other prefix instead.

Fix: allow me to turn *off* the ability to create a tweet via SMS; and reply to my SMS text with a “hey, you can’t do that” reminder if I forget I have it turned off and try doing it anyway. Let me turn it on and off via SMS commands, so if I’m stuck on a phone where I need to tweet that way, I can still do it. But since so many people have smartphones with Twitter apps, there’s no reason I shouldn’t be able to receive SMS from Twitter without also being able to create tweets via SMS.
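The routing logic for that fix is simple enough to sketch. This is a toy illustration of the proposal above, assuming hypothetical command words (“create on” / “create off”) and return values of my own invention — nothing here reflects Twitter’s real SMS gateway:

```python
def handle_inbound_sms(user_settings: dict, text: str) -> str:
    """Route one inbound SMS for a user whose settings include an
    'sms_create' flag. Returns a description of the action taken."""
    body = text.strip().lower()

    # Let the user flip the flag from any phone, via SMS commands
    if body == "create on":
        user_settings["sms_create"] = True
        return "SMS tweet creation enabled"
    if body == "create off":
        user_settings["sms_create"] = False
        return "SMS tweet creation disabled"

    # Creation turned off: never publish, just send the reminder
    if not user_settings.get("sms_create", True):
        return "reminder: SMS tweet creation is turned off for this account"

    # Creation on: honor the existing DM prefixes, else publish publicly
    if body.startswith(("d ", "m ")):
        return "direct message sent"
    return "tweet published"
```

With the flag off, the SMS Reflex Response and the Prefix Fumble both dead-end at the reminder instead of becoming a public tweet.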

There you go, Twitter! My gift to you :-)

(By the by, I have no illusions that I’m the only one thinking about how to solve for these problems, and the bright designers at Twitter probably already have better solutions. But … you know, I thought I’d share, just in case … )

Links, Maps and Habitats
Tue, 17 May 2011 21:29:00 +0000

To celebrate the recent publication of Resmini & Rosati’s “Pervasive Information Architecture,” I’m reprinting, here, my contribution to the book. Thank you, Andrea & Luca, for asking me to add my own small part to the work!

It’s strange how, over time, some things that were once rare and wondrous can become commonplace and practically unnoticed, even though they have as much or more power as they ever had. Consider things like these: fire; the lever; the wheel; antibiotics; irrigation; agriculture; the semiconductor; the book. Ironically, it’s their inestimable value that causes these inventions to be absorbed into culture so thoroughly that they become part of the fabric of societies adopting them, where their power is taken for granted.

Add to that list two more items, one very old and one very new: the map and the hyperlink.

Those of us who are surrounded by inexpensive maps tend to think of them as banal, everyday objects – a commoditized utility. And the popular conception of mapmaking is that of an antiquated, tedious craft, like book binding or working a letter-press – something one would only do as a hobby, since after all, the whole globe has been mapped by satellites at this point; and we can generate all manner of maps for free from the Internet.

But the ubiquity of maps also shows us how powerful they remain. And the ease with which we can take them for granted belies the depth of skill, talent and dedicated focus it takes for maps (and even mapping software and devices) to be designed and maintained. It’s easy to scoff at cartography as a has-been discipline – until you’re trying to get somewhere, or understand a new place, and the map is poorly made.

Consider as well the hyperlink. A much younger invention than the map, the hyperlink was invented in the mid-1960s. For years it was a rare creature living only in technology labs, until around 1987 when it was moderately popularized in Apple’s HyperCard application. Even then, it was something used mainly by hobbyists and educators and a few interactive-fiction authors; a niche technology. But when Tim Berners-Lee placed that tiny creature in the world-wide substrate of the Internet, it bloomed into the most powerful cultural engine in human history. 

And yet, within only a handful of years, people began taking the hyperlink for granted, as if it had always been around. Even now, among the digital classes, mention of “the web” is often met with a sniff of derision. “Oh that old thing — that’s so 1999.” And, “the web is obsolete – what matters now are mobile devices, augmented reality, apps and touch interfaces.” 

One has to ask, however, what good would any of the apps, mobile devices and augmented reality be without digital links? 

Where these well-meaning people go wrong is to assume the hyperlink is just a homely little clickable bit of text in a browser. The browser is an effective medium for hyperlinked experience, but it’s only one of many. The hyperlink is more than just a clicked bit of text in a browser window — it’s a core element for the digital dimension; it’s the mechanism that empowers regular people to point across time and space and suddenly be in a new place, and to create links that point the way for others as well. 

Once people have this ability, they absorb it into their lives. They assume it will be available to them like roads, or language, or air. They become so used to having it, they forget they’re using it — even when dazzled by their shiny new mobile devices, augmented reality software and touch-screen interfaces. They forget that the central, driving force that makes those technologies most meaningful is how they enable connections — to stories, knowledge, family, friends. And those connections are all, essentially, hyperlinks: pointers to other places in cyberspace. Links between conversations and those conversing — links anybody can create for anybody to use. 

This ability is now so ubiquitous, it’s virtually invisible. The interface is visible, the device is tangible, but the links and the teeming, semantic latticeworks they create are just short of corporeal. Like gravity, we can see its physical effects, but not the force itself.  And yet these systems of links — these architectures of information — are now central to daily life. Communities rely on them to constructively channel member activity. Businesses trust systems of links to connect their customers with products and their business partners with processes. People depend on them for the most mundane tasks — like checking the weather — to the most important, such as learning about a life-changing diagnosis. 

In fact, the hyperlink and the map have a lot in common. They both describe territories and point the way through them. They both present information that enables exploration and discovery. But there is a crucial difference: maps describe a separate reality, while hyperlinks create the very territory they describe. 

Each link is a new path — and a collection of paths is a new geography. The meaningful connections we create between ourselves and the things in our lives were once merely spoken words, static text or thoughts sloshing around in our heads. Now they’re structural — instantiated as part of a digital infrastructure that’s increasingly interwoven with our physical lives. When you add an old friend on a social network, you create a link unlike any link you would have made by merely sending a letter or calling them on the phone. It’s a new path from the place that represents your friend to the place that represents you. Two islands that were once related only in stories and memories, now connected by a bridge. 

Or think of how you use a photograph. Until recently, it was something you’d either frame and display on a shelf, carry in your wallet, or keep stored in a closet. But online you can upload that photo where it has its own unique location. By creating the place, you create the ability to link to it — and the links create paths, which add to the ever-expanding geography of cyberspace.

Another important difference between hyperlinks and traditional maps is that digital space allows us to create maps with conditional logic. We can create rules that cause a place to respond to, interact with, and be rearranged by its inhabitants. A blog can allow links to add comments or have them turned off; a store can allow product links to rearrange themselves on shelves in response to the shopper’s area of interest; a phone app can add a link to your physical location or not, at the flick of a settings switch. These are architectural structures for informational mediums; the machinery that enables everyday activity in the living web of the networked dimension.

The great challenge of information architecture is to design mechanisms that have deep implications for human experience, using a raw material no one can see except in its effects. It’s to create living, jointed, functioning frameworks out of something as disembodied as language, and yet create places suitable for very real, physical purposes.  Information architecture uses maps and paths to create livable habitats in the air around us, folded into our daily lives — a new geography somehow separate, yet inseparable, from what came before. 

More than a Metaphor: A few thoughts on IA & Architecture
Thu, 07 Apr 2011 15:57:24 +0000

I was lucky enough to be part of a panel at this year’s IA Summit that included Andrea Resmini and Jorge Arango (thanks to Jorge for suggesting the idea and including me!). We had at least 100 people show up to hear it, and it seemed to go over well. Eventually there will be a podcast, I believe. Please also read Andrea’s portion, and Jorge’s portion, because they are both excellent.

Update: There’s now an archive of podcasts from IA Summit 2011! And here’s a direct link to the podcast for this session (mp3). Or see them on iTunes.

On Cyberspace
Sat, 05 Mar 2011 20:19:31 +0000

A while back, I posted a rant about information architecture that invoked the term “cyberspace.” I, of course, received some flak for using that word. It’s played out, people say. It invokes dusty 80s-90s “virtual reality” ideas about a separate plane of existence … Tron-like cyber-city vistas, bulky goggles & body-suits, and dystopian worlds. Ok … yeah, whatever. For most people that’s probably true.

So let’s start from a different angle …

Over the last 20 years or so, we’ve managed to cause the emergence of a massive, global, networked dimension of human experience, enabled by digital technology.

It’s the dimension you visit when you’re sitting in the coffee shop catching up on your Twitter or Facebook feed. You’re “here” in the sense of sitting in the coffee shop. But you’re also “there” in the sense of “hanging out ‘on’ <Twitter/Facebook/Whatever>.”

It’s the dimension brave, unhappy citizens of Libya are “visiting” when they read, in real-time, the real words of regular people in Tunisia and Egypt, that inspire them to action just as powerfully as if those people were protesting right next to them. It may not be the dimension where these people physically march and bleed, but it’s definitely one dimension where the marching and bleeding matter.

I say “dimension” because for me that word doesn’t imply mutual exclusivity between “physical” and “virtual”: you can be in more than one “dimension” at once. It’s a facet of reality, but a facet that runs the length and breadth of that reality. The word “layer” doesn’t work, because “layer” implies a separate stratum. (Even though I’ve used “layer” off and on for a long time too…)

This dimension isn’t carbon-based, but information-based. It’s specifically human, because it’s made for, and bound together with, human semantics and cognition. It’s the place where “knowledge work” mostly happens. But it’s also the place where, more and more, our stories live, and where we look to make sense of our lives and our relationships.

What do we call this thing?

Back in 2006, Wired Magazine had a feature on how “Cyberspace is Dead.” They made the same points about the term that I mention above, and asked some well-known futurist-types to come up with a new term. But none of the terms they mentioned have seemed to stick. One person suggests “infosphere” … and I myself tried terms like “infospace” in the past. But I don’t hear anyone using those words now.

Even “ubiquitous computing” (Vint Cerf’s suggestion, but the late Mark Weiser’s coinage) has remained a specialized term of art within a relatively small community. Plus, honestly, it doesn’t capture the dimensionality I describe above … it’s fine as a term for the activity of  “computing” (hello, antiquated terminology) from anywhere, and for reminding us that computing technology is ubiquitously present, but doesn’t help us talk about the “where” that emerges from this activity.

There have been excellent books about this sort of dimension, with titles like Everyware, Here Comes Everybody, Linked, Ambient Findability, Smart Things … books with a lot of great ideas, but without a settled term for this thing we’ve made.

Of course, this raises the question: why do we need a term for it? As one of the people quoted in the Wired article says, aren’t we now just talking about “life”? Yeah, maybe that’s OK for most people. We used to say “e-business” because it was important to distinguish internet-based business from regular business … but in only a few years, that distinction has been effaced to meaninglessness. What business *isn’t* now networked in some way?

Still, for people like me who are tasked with designing the frameworks — the rule sets and semantic structures, the links and cross-experiential contexts — I think it’s helpful to have a term of art for this dimension … because it behaves differently from the legacy space we inherited.

It’s important to be able to point at this dimension as a distinct facet of the reality we’re creating, so we can talk about its nature and how best to design for it. Otherwise, we go about making things using assumptions hardwired into our brains from millions of years of physical evolution, and miss out on the particular power (and overlook the dangers) of this new dimension.

So, maybe let’s take a second look at “cyberspace” … could it be redeemed?

At the Institute for the Future, there’s a paper called “Blended Reality” (yet another phrase that hasn’t caught on). In the abstract, there’s a nicely phrased statement [emphasis mine]:

We are creating a new kind of reality, one in which physical and digital environments, media, and interactions are woven together throughout our daily lives. In this world, the virtual and the physical are seamlessly integrated. Cyberspace is not a destination; rather, it is a layer tightly integrated into the world around us.

The writer who coined the term, William Gibson, was quoted in the “Cyberspace is Dead” piece as saying, “I think cyberspace is past its sell-by, but the problem is that everything has become an aspect of, well, cyberspace.” This strikes me, frankly, as a polite way of saying “yeah, I get your point, but I don’t think you get what I mean these days by the term.” Or, another paraphrase: “I agree the way people generally understand the term is dated and feels, well, spoiled like milk … but maybe you need to understand that’s not cyberspace.”

Personally, I think Gibson sees the neon-cyberpunk-cityscape, virtual-reality conception of cyberspace as pretty far off the mark. In articles and interviews I’ve read over the years, he’s referenced it on and off … but seems conscious of the fact that people will misunderstand it, and finds himself explaining his points with other language.

Frankly, though, we haven’t listened closely enough. In the same magazine as the “Cyberspace is Dead” article, seven years prior, Gibson posted what I posit to be one of the foundational texts for understanding this… whatever … we’ve wrought. It’s an essay about his experience with purchasing antique watches on eBay, called “My Obsession.”  I challenge anyone to read this piece and then come up with a better term for what he describes.

It’s beautiful … so read the whole thing. But I’m going to quote the last portion here in full:

In Istanbul, one chill misty morning in 1970, I stood in Kapali Carsi, the grand bazaar, under a Sony sign bristling with alien futurity, and stared deep into a cube of plate glass filled with tiny, ancient, fascinating things.

Hanging in that ancient venue, a place whose on-site café, I was told, had been open, 24 hours a day, 365 days a year, literally for centuries, the Sony sign – very large, very proto-Blade Runner, illuminated in some way I hadn’t seen before – made a deep impression. I’d been living on a Greek island, an archaeological protectorate where cars were prohibited, vacationing in the past.

The glass cube was one man’s shop. He was a dealer in curios, and from within it he would reluctantly fetch, like the human equivalent of those robotic cranes in amusement arcades, objects I indicated that I wished to examine. He used a long pair of spring-loaded faux-ivory chopsticks, antiques themselves, their warped tips lent traction by wrappings of rubber bands.

And with these he plucked up, and I purchased, a single stone bead of great beauty, the color of apricot, with bright mineral blood at its core, to make a necklace for the girl I’d later marry, and an excessively mechanical Swiss cigarette lighter, circa 1911 or so, broken, its hallmarked silver case crudely soldered with strange, Eastern, aftermarket sigils.

And in that moment, I think, were all the elements of a real futurity: all the elements of the world toward which we were heading – an emerging technology, a map that was about to evert, to swallow the territory it represented. The technology that sign foreshadowed would become the venue, the city itself. And the bazaar within it.

But I’m glad we still have a place for things to change hands. Even here, in this territory the map became.

I’ve written before about how the map has become the territory. But I’d completely forgotten, until today, this piece I read over 10 years ago. Fitting, I suppose, that I should rediscover it now by typing a few words into Google, trying to find an article I vaguely remembered reading once about Gibson and eBay. As he says earlier in the piece quoted above, “We are mapping literally everything, from the human genome to Jaeger two-register chronographs, and our search engines grind increasingly fine.”

Names are important, powerful things. We need a name for this dimension that is the map turned out from itself, to be its own territorial reality. I’m not married to “cyberspace” — I’ll gladly call it something else.

What’s important to me is that we have a way to talk about it, so we can get better at the work of designing and making for it, and within it.


Note: Thanks to Andrea Resmini & Luca Rosati for involving me in their work on the upcoming book, Pervasive IA, from which I gleaned the reference to the Institute for the Future article I mentioned above.

Context Management at Plaxo
Thu, 24 Feb 2011 17:03:18 +0000

Earlier I shared a post about designing context management, and wanted to add an example I’d seen. I knew I’d made this screenshot, but then couldn’t remember where; luckily I found it today hiding in a folder.

This little widget from Plaxo is the only example I’ve noticed where an online platform allows you to view information from different contextual points of view (other than very simple examples like “your public profile” and “preview before publish”).

Plaxo’s function actually allows you to see what you’re sharing with various categories of users with a basic drop-down menu. It’s not rocket science, but it goes miles further than most platforms for this kind of functionality.

If anybody knows of others, let me know?

This is Your Brain on Design
Thu, 10 Feb 2011 21:46:32 +0000

Almost a year later, I’m finally posting this presentation to Slideshare. I have no idea what took me so long … but I’m sure that brain science has an answer :-)

I think there’s a lot of potential for design training & evolving methods to incorporate a better understanding of how our brains function when we’re doing all the work of design.

See the program description on the conference site, and download the podcast or read the transcript at Boxes & Arrows.

Also, thanks to Luke W for the excellent summary of my talk.

Designing the Engagement – About our Workshop for IA Summit
Thu, 03 Feb 2011 22:52:44 +0000

I’m happy to announce I’m collaborating with my Macquarium colleague, Patrick Quattlebaum, and Happy Cog Philadelphia’s inimitable Kevin Hoffman on presenting an all-day pre-conference workshop for this year’s Information Architecture Summit in Denver, CO. See more about it (and register to attend!) on the IA Summit site.

One of the things I’ve been fascinated with lately is how important it is to have an explicit understanding of the organizational and personal context not only of your users but of your own corporate environment, whether it’s your client’s or your own as an internal employee. When engaging over a project, having an understanding of motivations, power structures, systemic incentives and the rest of the mechanisms that make an organization run is immeasurably helpful to knowing how to go about planning and executing that engagement.

It turns out, we have excellent tools at our disposal for understanding the client: UX design methods like contextual inquiry, interviews, collaborative analysis interpretation, personas/scenarios, and the like; all these methods are just as useful for getting the context of the engagement as they are for getting the context of the user base.

Additionally, there are general rules of thumb that tend to be true in most organizations, such as how process starts out as a tool, but calcifies into unnecessary constraint, or how middle management tends to work in a reactive mode, afraid to clarify or question the often-vague direction of their superiors. Not to mention tips on how to introduce UX practice into traditional company hierarchies and workflows.

It’s also fascinating to me how understanding individuals is so interdependent with understanding the organization itself, and vice-versa. The ongoing explosion of new knowledge in social psychology and neuroscience  is giving us a lot of insight into what really motivates people, how and why they make their decisions, and the rest. These are among the topics Patrick & I will be covering during our portion of the workshop.

As the glue between the individual, the organization and the work, there are meetings. So half the workshop, led by Kevin Hoffman, will focus specifically on designing the meeting experience. It’s in meetings, after all, where all parties have to come to terms with their context in the organizational dynamics — so Kevin’s techniques for increasing not just the efficiency of meetings but the human & interpersonal growth that can happen in them will be invaluable. Kevin’s been honing this material for a while now, to rave reviews, and it will be a treat.

I’m really looking forward to the workshop; partly because, as in the past, I’m sure to learn as much or more from the attendees as they learn from the workshop presenters.

Context Management
Tue, 25 Jan 2011 21:50:47 +0000

Note: a while back, Christian Crumlish & Erin Malone asked me to write a sidebar for a book they were working on … an ambitious tome of design patterns for social software. The book (Designing Social Interfaces) was published last year, and it’s excellent. I’m proud to be part of it. Christian encouraged contributors to publish their portions online … I’m finally getting around to doing so.

In addition to what I’ve posted below, I’ll point out that there have been several infamous screw-ups with context management since I wrote this … including Google Buzz and Facebook’s Groups, Places and other services.

Also to add: I don’t think we need a new discipline for context management. To my mind, it’s just good information architecture.


There was a time when we could be fairly certain where we were at any given time. Just looking at one’s surroundings would let us know if we were in a public park or a quiet library, a dance hall or a funeral parlor. And our actions and conversations could easily adapt to these contexts: in a library, we’d know not to yell “heads up” and toss a football, and we’d know to avoid doing the hustle during someone’s eulogy.

But as more and more of our lives are lived via the web, and the contexts we inhabit are increasingly made of digits rather than atoms, our long-held assumptions about reality are dissolving under our typing-and-texting fingertips.

A pre-web example of this problem is something most people have experienced: accidentally emailing with “reply all” rather than “reply.”  Most email applications make it brutally easy to click Reply All by accident. In the physical world in which we evolved, the difference between a private conversation and a public one required more physical effort and provided more sensory clues. But in an email application, there’s almost no difference:  the buttons are usually identical and only a few pixels apart.

You’d think we would have learned something from our embarrassments with email, but newer applications aren’t much of an improvement. Twitter, for example, allows basically the same mistake if you use “@” instead of “d.” Not only that, but you have to put a space after the “d.”

Twitter users, as of this writing, are used to seeing at least a few of these errors made by their friends every week, usually followed by another tweet explaining that it was a “mis-tweet” or cursing the d vs @ convention.

At least with those applications, it’s basically a binary choice for a single piece of data: a message goes either to one recipient or to many; the contexts are straightforward, and relatively transparent. But on many popular social network platforms, the problem becomes exponentially more complicated.

Because of its history, Facebook is an especially good example. Facebook started as a social web application with a built-in context: undergraduates at Harvard. Soon it expanded to other colleges and universities, but its contextual architecture continued to be based on school affiliation. The power of designing for a shared real-world context allowed Facebook’s structure to assume a lot about its users: they would have a lot in common, including their ages, their college culture, and circles of friends.

Facebook’s context provided a safe haven for college students to express themselves with their peers in all their immature, formative glory; for the first time a generation of late-teens unwittingly documented their transition to adulthood in a published format. But it was OK, because anybody on Facebook with them was “there” only because they were already “there” at their college, at that time.

But then, in 2006 when Facebook opened its virtual doors to anyone 13 or over with an email address, everything changed.  Graduates who were now starting their careers found their middle-aged coworkers asking to be friends on Facebook. I recall some of my younger office friends reeling at the thought that their cube-mates and managers might see their photos or read their embarrassing teenage rants “out of context.”

The Facebook example serves a discussion of context well because it’s probably the largest virtual place to have ever so suddenly unhinged itself from its physical place. Its inhabitants, who could previously afford an assumed mental model of “this web place corresponds to the physical place where I spent my college years,” found themselves in a radically different place. A contextual shift that would have required massive physical effort in the physical world was accomplished with a few lines of code and the flip of a switch.

Not that there wasn’t warning. The folks who run Facebook had announced the change was coming. So why weren’t more people ready? In part because such a reality shift doesn’t have much precedent; few people were used to thinking about the implications of such a change. But also because the platform didn’t provide any tools for managing the context conversion.

This lack of tools for managing multiple contexts is behind some of the biggest complaints about Facebook and other social network platforms (such as MySpace and LinkedIn). For Facebook, long-time residents realized they still wanted to keep their immature and embarrassing memories from college to share just with their college friends, just like before — they wanted to preserve that context in its own space. But Facebook provided no capability for segmenting the experience. It was all or nothing, for every “friend” you added. And then, when Facebook launched its News Feed — showing all your activities to your friends, and those of your friends to you — users rebelled, in part because they hadn’t been given adequate tools for managing the contexts where their information might appear. This is to say nothing of the disastrous launch of Facebook’s “Beacon” service, which opted all users in by default to share information about their purchases on affiliated sites.

On MySpace, the early bugbear was the threat of predator activity and the lack of privacy. Again, the platform was built with the assumption that users were fine with collapsing their contexts into one space, where everything was viewable by every “friend” added. And on LinkedIn, users have often complained the platform doesn’t allow them to keep legitimate peer connections separate from others such as recruiters.

Not all platforms have made these mistakes. The Flickr photo site has long distinguished between Family and Friends, Private and Public. LiveJournal, a pioneering social platform, has provided robust permissions controls to its users for years, allowing creation of many different user-and-group combinations.

However, there’s still an important missing feature, one that should be considered for all social platforms even as they add new context-creation abilities: it’s either impossible or difficult for users to review their profiles and posts from another person’s point of view.

Giving users the ability to create new contexts is a great step, but they also need the ability to easily simulate each user-category’s experience of their space. If a user creates a “co-workers” group and tries to carefully expose only their professional information, there’s no straightforward way to view their own space using that filter. With the Reply All problem described earlier, we at least get a chance to proofread our message before hitting the button. But most social platforms don’t even give us that ability.

This function — perhaps call it “View as Different User Type” — is just one example of a whole class of design patterns we still need for managing the mind-bending complexity we’ve created for ourselves on the web. There are certainly others waiting to be explored. For example, what if we had more than just one way to say “no thank you” to an invitation or request, depending on type of person requesting? Or a way to send a friendly explanatory note with your refusal, thereby adding context to an otherwise cold interaction? Or what about the option to simply turn off whole portions of site functionality for some groups and not others? Maybe I’d love to get zombie-throwing-game invitations from my relatives, but not from people I haven’t seen since middle school?
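A “View as Different User Type” function is straightforward to sketch, which suggests the barrier has mostly been design priorities rather than technical difficulty. Here is a minimal, hypothetical model (the `Post` and `Profile` classes and the group names are invented for illustration, not any real platform’s API) in which every post carries an audience list and the owner can preview their own space through any group’s filter:

```python
# A minimal sketch of per-audience content filtering with a "view as"
# preview. All names here are hypothetical, invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Post:
    text: str
    audiences: set  # groups allowed to see this post, e.g. {"college-friends"}


@dataclass
class Profile:
    owner: str
    posts: list = field(default_factory=list)

    def share(self, text, audiences):
        """Publish a post visible only to the named groups."""
        self.posts.append(Post(text, set(audiences)))

    def view_as(self, group):
        """Simulate what a member of `group` would see: the missing
        'View as Different User Type' feature."""
        return [p.text for p in self.posts if group in p.audiences]


profile = Profile("alex")
profile.share("Embarrassing teenage rant", {"college-friends"})
profile.share("Professional portfolio update", {"co-workers", "college-friends"})

# Preview the profile through each audience's eyes before anything is exposed:
print(profile.view_as("co-workers"))       # only the professional post
print(profile.view_as("college-friends"))  # both posts
```

LiveJournal-style permission groups work on essentially this model; the `view_as` preview is the piece most platforms never exposed to their users.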

In the rush to allow everyone to do everything online, designers often forget that some of the limitations of physical life are actually helpful, comforting, and even necessary. We’re a social species, but we’re also a nesting species, given to having our little nook in the tribal cave. Maybe we should take a step back and think of these patterns not unlike their originator, Mr. Alexander, did — how have people lived and interacted successfully over many generations? What can we learn from the best of those structures, even in the structureless clouds of cyberspace? Ideally, the result would be the best of both worlds: architectures that fit our ingrained assumptions about the world, while giving us the magical ability to link across divides that were impossible to cross before.

Message isn’t just message anymore Wed, 17 Nov 2010 20:33:30 +0000 I remember back in 1999, working in a web shop that was a sibling company to a traditional ad firm, and thinking “do they realize that digital means more than just packaging copy & images for a new medium?”

Then over the years since, I’ve continually been amazed that most advertising & marketing pros still don’t seem to get the difference between “attention” and actual “engagement” — between momentary desire and actual usefulness.

Then I read this quote from a veteran advertising creative officer:

Instead of building digital things that had utility, we approached it from a messaging mind-set and put messaging into the space. It took us a while to realize … the digital space is completely different.

via The Future of Advertising | Page 4 | Fast Company.

I guess better late than never …

I actually love advertising at its best. Products and brands need to be able to tell great stories about themselves, and engage people’s emotions & aspirations. It’s easy to dump on advertising & marketing as out of touch and wrong-headed — but that’s lazy, it seems to me.

I appreciated the point Bill Buxton made in a talk I saw online a while back about how important the advertising for the iPod was … that it wasn’t just an added-on way to talk about the product; it was part of the whole product experience, driving much of how people felt about purchasing, using and especially *wearing* an iPod and its distinctive white earphones.

But this distinction between utility and pure message is an important one to understand, partly so we can understand how blurred the line has become between them. Back when the only way to interact with a brand was either to receive its advertising message passively, or to purchase and touch/experience its product or service — and there was precious little between — the lines were pretty clear between the message-maker and the product-creator.

These days, however, there are so many opportunities for engagement through interaction, conversation, utility and actual *use* between the initial message and the product itself.

Look at automobiles, for example: once upon a time, there were ads about cars, and then there were the actual cars … and that was pretty much it. But now we get a chance to build the car online, read about it, imagine ourselves in it with various options, look for reviews about it, research prices … all of that before we actually touch the car itself. By the time you touch the car, so much interactive engagement has happened on your way to the actual object that your experience is largely shaped already — the car is going to feel different to you if that experience was positive than if it was negative (assuming a negative experience didn’t dissuade you from going for a test drive at all).

Granted, to some degree that’s always been the case. The advertising acts like the label on a bottle of wine — shaping the expectation of the experience inside the bottle, which we know can make a huge difference. But the utility experience brings a whole new, physical dimension that affects perception even more: the ability to engage the car interactively rather than passively receiving “messaging” alone. Now it’s even harder to answer the question “where does the messaging end and the car begin?”

Let’s get something straight about IA Wed, 10 Nov 2010 04:36:32 +0000 I’ve written a lot of stuff over the last few years about information architecture. And I’m working on writing more. But recently I’ve realized there are some things I’ve not actually posted publicly in a straightforward, condensed manner. (And yes, the post below is, for me, condensed.)

WTF is IA?

1. Information architecture is not just about organizing content.

  • In practice, it has never been limited to merely putting content into categories, even though some very old definitions are still floating around the web that define it as such. (And some long-time practitioners are still explaining it this way, even though their actual work goes beyond those bounds.)
  • Every competent information architecture practitioner I’ve ever known has designed to help people make decisions, persuade customers, or encourage sharing and conversation where relevant. There’s no need to coin new terms like “decision architecture” and “persuasion architecture.”
  • This is not to diminish the importance and complexities involved with designing storage and access of content, which is actually pretty damn hard to do well.

2. IA determines the frameworks, pathways and contexts that people (and information) are able to traverse and inhabit in digitally-enabled spaces.

  • Saying information architecture is limited to how people interact with information is like saying traditional architecture is limited to how people interact with wood, stone, concrete and plastic.
  • That is: Information architecture uses information as its raw material the same way building architecture uses physical materials.
  • All of this stuff is essentially made of language, which makes semantic structure centrally important to its design.
  • In cyberspace, where people can go and where information can go are essentially the same thing; where and how people can access information and where and how people can access one another is, again, essentially the same thing. To ignore this is to be doing IA all wrong.

3. The increase of things like ubiquitous computing, augmented reality, emergent/collective organization and “beyond-the-browser” experiences make information architecture even more relevant, not less.

  • The physical world is increasingly on the grid, networked, and online. The distinction between digital and “real” is officially meaningless. This only makes IA more necessary. The digital layer is made of language, and that language shapes our experience of the physical.
  • The more information contexts and pathways are distributed, fragmented, user-generated and decentralized, the more essential it is to design helpful, evolving frameworks, and conditional/responsive semantic structures that enable people to communicate, share, store, retrieve and find “information” (aka not just “content” but services, places, conversations, people and more).
  • Interaction design is essential to all of this, as is graphical design, content strategy and the rest. But those things require useful, relevant contexts and connections, semantic scaffolding and … architecture! … to ensure their success. (And vice versa.)

Why does this need to be explained? Why isn’t this more clear? Several reasons:

1. IA as described above is still pretty new, highly interstitial, and very complex; its materials are invisible, and its effects are, almost by definition, back-stage where nobody notices them (until they suck). We’re still learning how to talk about it. (We need more patience with this — if artists, judges, philosophers and even traditional architects can still disagree among one another about the nature of their fields, there’s no shame in IA following suit.)

2. Information architecture is a phrase claimed by several different camps of people, from Wurmanites (who see it as a sort of hybrid information-design-meets-philosophy-of-life) to the polar-bear-book-is-all-I-need folks, to the information-technology systems architects and others … all of whom would do better to start understanding themselves as points on a spectrum rather than mutually exclusive identities.

3. There are too many legacy definitions of IA hanging around that need to be updated past the “web 1.0” mentality of circa 2000. The official explanations need to catch up with the frontiers the practice has been working in for years now. (I had an opportunity to fix this with the IA Institute and dropped the ball; glad to help the new board & others in any way I can, though.)

4. Leaders in the community have the responsibility to push the practice’s understanding of itself forward: in any field, the majority of members will follow such a lead, but will otherwise remain in stasis. We need to be better boosters of IA, calling it what it is rather than skirting the charge of “defining the damn thing.”

5. Some leaders (and/or loud voices) in the broader design community have, for whatever reason, decided to reject information architecture or, worse, continue stoking some kind of grudge against IA and people who identify as information architects. They need to get over their drama, occasionally give people the benefit of the freakin’ doubt, and move on.


This has generated a lot of excellent conversation, thanks!

A couple of things to add:

After some prodding on Twitter, I managed to boil down a single-statement explanation of what information architecture is, and a few folks said they liked it, so I’m tacking it on here at the bottom: “IA determines what the information should be, where you and it can go, and why.” Of course, the real juice is in the wide-ranging implications of that statement.

Also Jorge Arango was awesome enough to translate it into Spanish. Thanks, Jorge!

Scaffolding and messy truth Mon, 08 Nov 2010 15:13:48 +0000 I liked this bit from Peter Hacker, the Wittgenstein scholar, in a recent interview. He’s talking about how any way of seeing the world can take over and put blinders on you, if you become too enamored of it:

The danger, of course, is that you over do it. You overplay your hand – you make things clearer than they actually are. I constantly try to keep aware of, and beware of, that. I think it’s correct to compare our conceptual scheme to a scaffolding from which we describe things, but by George it’s a pretty messy scaffolding. If it starts looking too tidy and neat that’s a sure sign you’re misdescribing things.

via TPM: The Philosophers’ Magazine | Hacker’s challenge. (emphasis mine)

It strikes me this is true of design as well. There’s no one way to see it, because it’s just as organic and messy as the world in which we do it.

I mean this both in the larger sense of “what is design?” and the smaller sense of “what design is best for this particular situation?”

Over the years, I’ve come to realize that most things are “messy” — and that while any one solution or model might be helpful, I have to ward against letting it take over all my thinking (which is awfully easy to do … it’s pleasant, and much less work, to just dismiss everything that doesn’t fit a given perspective, right?).

The actual subject of the interview is pretty great too … case in point, for me, it warns against buying into the assumptions behind so much recent neuroscience thinking, especially how it’s being translated in the mainstream (though Hacker goes after some hard-core neuroscience as well).

A few thoughts on the IA Institute Thu, 09 Sep 2010 16:15:48 +0000 When I ran for the IA Institute board a couple of years ago, I’d never been on a board of anything before. I didn’t run because I wanted to be on a board at all, really. I ran because I had been telling board members stuff I thought they should focus on, and making pronouncements about what I thought the IA Institute should be, and realized I should either join in and help or shut up about it.

So I ran, and thanks to the membership of the Institute that voted for me, I was voted into a slot on the board.

It didn’t take long to realize that the organization I’d helped (in a very small way) get started back in 2002 had grown into a real non-profit with actual responsibilities, programs, infrastructure and staff. What had been an amorphous abstraction soon came into focus as a collection of real, concrete moving parts, powered mainly by volunteers, that were trying to get things done or keep things going.

Now, two years later, I’m rolling off of my term on the board. I chose not to run again this year for a second term only because of personal circumstance, not because I don’t want to be involved (in fact, I want to continue being involved as a volunteer, just in a more focused way).  I’m a big believer in the Institute — both what it’s doing and what it can accomplish in the future.

I keep turning over in my head: what sort of advice should I give to whoever is voted into the board this year? Then I realized: why wait to bring these things up … maybe this would be helpful input for those who are running and voting for the board? So here goes… in no particular order.

Perception rules

The Institute has been around for 8 years now. In “web time” that’s an eternity. That gives the organization a certain patina of permanence and … well, institution-ness … that would lead folks to believe it’s a completely established, solidly funded, fully staffed organization with departments and stuff. But it’s actually still a very shoestring-like operation. The Institute is still driven 99% by volunteers, with only 2 half-time staff, paid on contract, who live in different cities, and who are very smart, capable people who could probably be making more money doing something else. (Shout-out to Tim & Noreen — and to Staff Emeritus Melissa… you guys all rock). But I don’t know that we did the best job of making that clear to the community. That has led at times to some misunderstandings about what people should expect from the org.

Less “Can-Do” and more “Jiu Jitsu”

Good intentions and a willingness to work hard and make things happen aren’t enough. In fact, they may be too much. A “can-do” attitude sounds great! But it results in creating things that can’t be sustained, or chasing ideals that people say they believe in but don’t actually have the motivation to support over time.

Jiu jitsu, on the other hand, takes the energy that’s available and channels it. It’s disciplined in its focus. Overall, I think the org needs to keep heading in that direction — picking the handful of things it can stand for and accomplish very well.

The Institute has a history of having very inventive, imaginative people involved in its board and volunteer efforts, and in its community at large. These are folks who think of great ideas all the time. But not every idea is one that should be followed up on and considered as an initiative. Here’s the thing: even most of the *good* ideas cannot be followed up on and considered an actual initiative. There just isn’t bandwidth.

I’d bet any organization that has a leadership team that changes out every 1-2 years probably has this challenge. Add the motivation to “make a mark” as a board member to the motivation to make members & community voices happy who are asking for (or demanding) things, and before you know it, you have a huge list of stuff going on that may or may not actually still have relevance or value commensurate with the effort it requires.

It’s easy in the heat of the moment of a new idea to say “yeah, we love that, let’s make that happen” … but it’s an illusion created by the focus of novelty. I urge the community (members, board, volunteers, everyone) to keep this in mind when thinking “why doesn’t the Institute do X or Y? It seems so obvious!” The response I’ve taken to having to those requests is: that sounds like a great idea … how’d you like to investigate making that happen for the Institute?

Anything that doesn’t have people interested enough to make it happen *outside the board* probably shouldn’t happen to begin with. The Board is there to sponsor things, make decisions about how money should be spent and what to support — but not do the legwork and heavy lifting. It’s just impossible to do that plus run the organization, for people who have paying, full-time jobs already.

Money & Membership

This is not a wealthy organization. The budget is pretty small. It only charges members $40 a year (still, after 8 years), and other than membership fees, makes a big chunk of its budget from its annual conference (IDEA — go register!). Where does the money go? Lots of it goes to the community — helping to fund conferences, events, grants, and initiatives aimed at helping grow the knowledge & skills of the whole community. It also goes to paying the part-time staff to keep the lights on, fix stuff & enable most of the work that goes on. The benefits are not just for paying members, by the way. Most of what the Institute does is pretty open-source type stuff. Frankly I’ve thought for a while now that we should move away from “membership” and call people “contributors” instead. Because that’s what you’re doing … you’re contributing a small amount of cash in support of the community, and you get access to a closed, relatively quiet mailing list of helpful colleagues as a “thank you” gift.

Whenever I hear somebody complaining about the Institute and “what I get for my forty dollars,” I get a little miffed. But then I realize to some degree the organization sets that expectation. It may be helpful for the next board to think about the membership model — which really may be more about semantics & expectations-setting than policy, who knows.

One thing the Institute has historically been afraid to do is spend money on itself. But then it tries to handle some tasks that would honestly be much better to pay others to handle. (Again, that can-do attitude getting us in trouble.) Historically, the board tried to handle a lot of the financial tasks through a treasurer (banking, recordkeeping, etc). It took a long line of dedicated people who gave a lot of their personal time to handling those tedious tasks. We finally hit a wall where we realized we just weren’t handling the tasks as well as we should as amateurs — we needed help. So we found an excellent 3rd party service provider (recommended by our excellent Board of Advisors) to take care of a lot of that stuff. (And it’s very cost-efficient — I won’t go into why and how here.)

One thing that comes up year after year is that the board should have an annual retreat to ramp up new board members and spend concentrated face-to-face time bonding as a team, deciding on priorities & getting a shared vision. But there’s a lot of fear about spending the money (especially to fly international folks around), and about the perception issue (see above) that the Board is blowing money on junkets or something.

But face time, especially if it’s moderated & structured, could go a long way toward building rapport & accountability and setting things up for success. This should be mandatory and written into the bylaws, with an explanation published on the site of why it’s necessary. IMHO this may be the single biggest pitfall that’s gotten in the way of having a fully effective board, at least in my term.

Roads & Bridges

The infrastructure? It’s a hodgepodge of code & 3rd party services strung together through heroic efforts & ingenuity, over 8 years. A lot of it is pretty old & rickety. But honestly, it’s the 3rd party services that seem to be the biggest problem at times — for example the 3rd party membership system is messy and inflexible (though some excellent volunteer work is going on to switch systems to something that will integrate better with other web services).

I can’t tell you how many times over the last 3 years (1 as an advisor, 2 as a director) I’ve heard it said “we could totally do X better if we had the infrastructure” and just didn’t have the bandwidth or funding to move forward with that.

Progress is being made on several fronts, but the Institute needs an organized, passionate & well-led effort to deal with the infrastructure issue from the ground up. I do not mean that the Institute needs some kind of Moon Landing project. It needs to use a few easy-to-maintain mechanisms that take the least effort for maximum effect. One problem is that the infrastructure is supporting a lot of initiatives that have accrued over the last 8 years, some of which are still relevant, some of which may not be, and many of which should be reorganized or combined to better focus efforts (see the Can-do vs Jiu-jitsu bit above).

People will be people

This org, like any non-profit, volunteer-driven organization, is made up of people. And one constant among people is that we all have our flaws, and we all have complicated lives. We all have personalities that some folks like and some folks don’t. We all say things we wish we could take back, and we all do stuff that other people look at and say “WTF?”

While any organization like this is, indeed, made up of people … it’s a mistake to judge the organization as a whole by any handful of individuals involved in it. But it happens anyway.

So, since that’s inevitable, anyone running for a leadership position in an organization like this should be aware: being on the board is going to put you in a spotlight in a way that will probably surprise you. There are a lot of people who pay attention to who’s on such a list — and they look to you with a lot of expectations you wouldn’t dream other people would have of you. Just be aware of that.

At the same time, remember to have some humility and openness about the people who came before you in your role, and their decisions and the hard work they did. Much of what I tried to do in the last 2 years turned out to be misplaced effort, or just the wrong idea … and some of the stuff that I think is valuable may end up being irrelevant in another year or two. That’s just how it goes. It’s tempting to go into a new role with the attitude of “I’m gonna clean this mess up” and “why the hell did they decide to do it like this?”  Just remember that somebody will likely be thinking that about some of your work & ideas a couple years from now, and give others the break you hope they may give you.

Signing Off

Speaking of people — it’s been an honor & privilege to serve with the folks I worked with over the last two years, and to have been entrusted with a board role by the Institute members. I hope I left the place at least a little better off than when I got there.

I had the privilege of hanging out with Allen Ginsberg for a few days back in a previous life when I wanted to be a full-time poet. At dinner one night, as he was working his way through some fresh fruit he’d had warmed for digestion (he was going macrobiotic because of his “diyabeetus”), he was talking about people he’d known in his past. He said something that stuck with me about his teachers & mentors through the years … I paraphrase: “You know, one thing I’ve learned … you don’t kick the people who came before you in the teeth.”  I think it’s important to keep that rule about the people who come after you as well.

I make this pledge to the incoming leaders & other volunteers: if I have an issue with the Institute, something it’s done or some decision it’s made, something that isn’t working right, or something a person said or did, I’ll strive to avoid blurting an outburst or even grousing in private, and instead communicate with you and ask “how can I help?” Otherwise, I have no room to complain.

A final note (finally!) … any good I and the other board members did was only building on the excellent efforts of the community members who went before … the previous boards, volunteers & staff. Thanks to all of you for the hard work you put in thus far … and thanks to those of you stepping up to offer your time, passion and ingenuity in the future.

Ch ch ch changes Fri, 11 Jun 2010 19:12:56 +0000 Today it’s official that I’m leaving my current role at Vanguard as of June 25, and starting in a new role as Principal User Experience Architect at Macquarium.

I know everybody says “it’s with mixed feelings” that they do such a thing, but for me it’s definitely not a cliche. Vanguard has been an excellent employer, and for the last 6 1/2 years I’ve been there, I’ve always been able to say I worked there with a great deal of pride. It has some of the smartest, most dedicated user-experience design professionals I’ve ever met, and I’ll miss all of them, as well as the business and technology people I’ve worked closely with over the years.

I’m excited, however, to be starting work with Macquarium on June 28. On a personal level, it’s a great opportunity to work in the region where I live (Charlotte and environs) as well as Atlanta, where the company is headquartered, where I grew up, and where I have a lot of family I haven’t gotten to see as often as I’d like. On the professional side, Macquarium is tackling some fascinating design challenges that fit my interests and ambitions very well at this point in my life. I can’t wait to sink my teeth into that juicy work.

I’ve been pretty quiet on the blog for quite a while, partly because leading up to this (very recently emerging) development, I also spoke at a couple of conferences, and got married … it’s been a busy 2010 so far. I’m hoping to be more active here at Inkblurt in the near future … but no promises… I don’t want to jinx it.

Italian IA Summit 2010 Tue, 01 Jun 2010 14:44:38 +0000 In 2010 I was fortunate to be invited to be a keynote speaker at the Italian IA Summit in Pisa, Italy.

I presented on “Why Information Architecture Matters (To Me)” — a foray into how I see IA as being about creating places for habitation, and some personal background on why I see it that way.

What am I? Fri, 26 Mar 2010 14:46:08 +0000 So… here we are a year after the 2009 IA Summit in glorious Memphis. At the end of that conference, Jesse James Garrett, one of the more prominent and long-standing members of the community, (and, ironically, a co-founder of the IA Institute ;-), made a pronouncement in his closing plenary that “there are no Information Architects” and “there are no Interaction Designers” … “there are only User Experience Designers.”

There has since been some vocal expression of discontent with Jesse’s pronouncement.*

I held off — mostly because I was tired of the conversation about what to call people, and I’ve come to realize it doesn’t get anyone very far. More on that in a minute.

First I want to say: I am an information architect.

I say that for a couple of reasons:

1. My interests and skills in the universe that is Design tack heavily toward using information to create structured systems for human experience. I’m obsessed with the design challenges that come from linking things that couldn’t be linked before the Internet — creating habitats out of digital raw material. That, to me, is the heart of information architecture.

2. I use the term Information Architect because that’s the term that emerged in the community I discovered over 10 years ago, where people were discussing the concerns I mention in (1) above. That’s the community where I forged my identity as a practitioner. In the same way that, if I ever moved to another country, I would always be “American,” there’s a part of my history I can’t shake. Nobody “decided” to call it that — it just happened. And that, after all, is how language works.

Now that I’ve gotten that out of the way, back to Jesse’s talk. I appreciated his attempt to sort of cut the Gordian knot. I can see how, from a left-brain analytical sort of impulse, it looks like a nice, neat solution to the complications and tensions we’ve seen in the UX space — by which I mean the general field in which various related communities and disciplines seem to overlap & intersect. Although, frankly, I think the tensions and political intrigue he mentioned were pretty well contained and already starting to die off by attrition on their own … 99.9% of the people in the room and those who read/heard his talk later had no idea what he was talking about. (Later that year I met some terrific practitioners in Brazil who call themselves information architects and were genuinely concerned, because the term had already become accepted among government and professional organizations — and that if the Americans decide to stop using the term, what will they be called? I told them not to sweat it.)

So like I said — I get the desire to just cut the Gordian knot and say “these differences are illusions! let’s band together and be more formidable as one!” But unfortunately, this particular knot just won’t cut. It probably won’t untangle either. And that’s not necessarily a bad thing.

When I heard Jesse’s pronouncement about “there are no” and “there are only,” I thought it was too bad it would probably end up muddying the effect of his talk … people would hyper-fixate on those statements and miss a lot of the other equally provocative (but probably more useful) comments he made that afternoon.

Why would I say that? Because over the years I’ve come to realize that telling someone what or who they are is counterproductive. Telling people who call themselves X that they should actually call themselves Y — and that a role named X doesn’t actually exist — is like telling someone named Sally that her name is Maude. Or telling a citizen of a country (e.g. USA, Germany, Australia) he’s not a “real” American, German or Australian.

Saying such a thing pushes deep emotional buttons about our identities. Buttons we aren’t even fully aware we have.

There are some kinds of language that our brains treat as special. If you show me a fork and tell me it’s a spoon, my brain will just say “you’re confused, really just look this up in the dictionary, you’ll see you’re wrong.” No sense of being threatened there, little emotional reaction other than amusement and slight concern for your mental health.

But language about our identities is different. That sort of language often reaches right past our frontal cortex and heads straight for the more ancient parts of our brains. The parts that felt fear when our parents left the room when we were infants, or the parts that make us eat whatever is in front of us if we’ve skipped a meal or two, even if we’re really trying to eat healthier that day. It’s the part that translates sensory data into basic emotions about our very existence and survival. Telling someone they aren’t something that they really think they are is like threatening to chop off a limb — or better, a portion of their face, so they won’t quite recognize themselves in the mirror.

Like I said — counterproductive.

Why would I go into such a dissertation on our brains and identity? Because it helps us understand why practitioner communities can get into such a bind over the semantics of their work.

A couple of years ago, I did the closing talk at the IA Summit in Miami. The last section of that talk covered professional identity, and explains it better than I could here. I also posted later about the Title vs Role issue in particular. So I won’t repeat all that here.

In particular — my own analytical side wanted to believe it was possible to separate the “role/practice” of information architecture from the need we have to call ourselves something. But I should’ve added another layer between “Title” and “Role” and called it something like “what we call ourselves to our friends.” It turns out that’s an important layer, and the one that causes us the most grief.

Since I did that talk, I’ve learned it’s a messier issue than I was making of it at the time. It’s helpful, I think, to have some models and shared language for helping us more dispassionately discuss the distinctions between various communities, roles and names. But they only go so far — most of this is going to happen under the surface, in the organic, emergent fog that roils beneath the parts of our professional culture that we can see and rationalize about.

It’s also worth noting that no professional practice that is still living and thriving has finally, completely sorted these issues out. Sure, there are some professions that have definitions for the purpose of licensure or certification — but those are only good for licensure and certification. Just listen to architects arguing over what it means to be an architect (form vs function, etc) or medical practitioners arguing over what it means to be a doctor (holistic vs western, or Nurse Practitioner vs MD).

I’m looking forward to the 2010 IA Summit in Phoenix, and the conversations that we’ll undoubtedly have on these issues. I realize these topics frustrate some (though I suspect the frustration comes mainly from the discomfort I explained above). But these are important, relevant conversations, even if people don’t realize it at the time. They mark the vibrancy of a field of practice, and they’re the natural vehicle for keeping that field on its toes, evolving and doing great work.

* Note: Thanks to Andrea Resmini, Dan Saffer and Dan Klyn for “going there” in their earlier posts, and making me think. If there are any other reactions that I missed, kindly add links in the comments below? Also, thanks Jesse for saying something that’s making us think, talk and debate.

Explain IA and win a thousand bucks! Wed, 13 Jan 2010 15:46:28 +0000 Peter Morville and the IA Institute have joined forces with some excellent sponsors to host a contest. To wit:

In this contest, you are invited to explain information architecture. What is it? Why is it important? What does it mean to you? Some folks may offer a definition in 140 characters or less, while others will use this opportunity to tell a story (using text, pictures, audio, and/or video) about their relationship to IA.

Be sure to note the fine print lower on the Flickr page (where there’s also a link to a free prize!):

Our goals are to engage the information architecture community (by fostering creativity and discussion) and advance the field (by evolving our definitions and sharing our stories). We believe this can be a positive, productive community activity, and a whole lot of fun. We hope you do too!

I’m glad to see most of the chatter around this has been positive. But there are, of course, some nay-sayers — and the nays tend to ask a question along the lines of this: “Why is the IA Institute having to pay people to tell it what Information Architecture is?”

I suspect the contest would come across that way only if you’re already predisposed to think negatively of IA as a practice or the Institute as an organization — or people who self-identify as “Information Architects” in general. This post isn’t addressed to those folks, because I’m not interested in trying to sway their opinions — they’re going to think what they want to think.

But just in case others may be wondering what’s up, here’s the deal.

Information architecture is a relatively new community of practice. As technology and the community evolve, so does the understanding of this term of art.

For some people, IA is old hat — a relic of the days when websites were mere collections of linked text files. For others, IA represents an archaic, even oppressive worldview, where only experts are allowed to organize and label human knowledge. Again, I think these views of IA say more about the people who hold them than the practice of IA itself.

But for the rest of us, this contest is just an opportunity to celebrate the energetic conversations that are already happening anyway — and that happen within any vibrant, growing community of practice. It’s a way to spotlight how much IA has evolved, and bring others into those conversations as well.

Of course, the Institute is interested in these expressions as raw material for how it continues to evolve itself. But why wouldn’t any practice-based organization be interested in what the community has to say about the central concern of the practice?

I’m looking forward to what everyone comes up with. I’m especially excited to learn things I don’t know yet, and discover people I hadn’t met before.

So, go for it! Explain that sucker!

EBAI was awesome Mon, 19 Oct 2009 14:30:25 +0000

EBAI, The Brazilian Information Architecture Congress (basically the IA Summit or EuroIA of Brazil) was kind and generous enough to invite me to Sao Paulo as a keynote speaker, closing their first day. They gave me a huge chunk of time, so I presented a long version of my Linkosophy talk, expanded with more about designing for Context. It was a terrific experience. Here’s just a smattering of what I discovered:

  • Brazilian user-experience designers tend to use the term Information Architecture (and Architect) for their community of practice — which I think is a fine thing. (I explained we still need to agree what “IA” means in the context of a given design, but who am I to tell them “there are no information architects”?)
  • These people are brilliant. They’re doing and inventing UX design research and methods that really should be shared with the larger, non-Portuguese-speaking world.
  • I wish I knew Portuguese so I could’ve understood even more of what they were presenting about. (Hence my wish it could all be translated to English!)
  • Brazilians have the best methods of drinking beer and eating steak ever invented: small portions that keep on coming through the meal means your beer is never warm, and your steak is always fresh off the grill. Genius!

Thank you, EBAI (and in particular my gracious host, Guilhermo Reis) for an enlightening, delightful experience.

Quick Tue, 13 Oct 2009 04:20:23 +0000 mother always called it the quick
so that was always still is its name
that bit of fingerflesh around the nail sewn
by magic into the wrapped fingerprint we all have
embossed on our extremities unique index thick
opposing thumb she might catch me gnawing and when she did

she’d say be careful or you’ll be done
chewed it all down into the quick

my teeth pulling at the splinter of skin going thicker
into the sensitive crease it should have a name
that crease but I’ve not heard should have
taught me something the way it hurt like a sewing

needle pressed and wriggled so
it reddens swells maybe bleeds she did
warn me I should have listened I should have
that moment back but watch it skitter away so quickly
what if every moment had a name
we could never remember them no matter how thick

the books where we kept them and no matter how thick
the shelves to keep the books no matter how stiff the spines are sewn
our very lives would burn them history’s fuel a comet trail of names
of moments and minutes hours and days what’s done
and undone it was years before I learned that quick
means alive versus dead and dead the part I’d chew until I had

hit nerve that bleeding cuticle sting that has
a lesson someplace about blood that nothing is thicker
and what we know about moments that nothing is quicker
see how simple children could sing it on a seesaw
tick tock up down until the shrill bathtime mothercall but do
you leave no you play until snatched awake by your full name

hurled from the kitchen door a great net woven of your name
and you’re waving goodbye to the neighbor boy who has
that same blue jacket from last year and in just a minute you don’t
quite see him in the dusk that descended so soon so thick
just the glow of clouds stretched pink raw and sewn
with veins of yellowing light and suddenly your steps are quicker

until you find yourself under the thick blanket with the soft-sewn
edge tucked under your chin quick quick tick tock sleep has
taken you alive even though it doesn’t know your name

Courageous Redirection Fri, 25 Sep 2009 15:43:25 +0000

I’ve recently run across some stories involving Pixar, Apple and game design company Blizzard Entertainment that serve as great examples of courageous redirection.

What I mean by that phrase is an instance where a design team or company was courageous enough to change direction even after huge investment of time, money and vision.

Changing direction isn’t inherently beneficial, of course. And sometimes it goes awry. But these instances are pretty inspirational, because they resulted in awesomely successful user-experience products.

Work colleague Anne Gibson recently shared an article at work quoting Steve Jobs talking about Toy Story and the iPhone. While I realize we’re all getting tired of comparing ourselves to Apple and Pixar, it’s still worth a listen:

At Pixar when we were making Toy Story, there came a time when we were forced to admit that the story wasn’t great. It just wasn’t great. We stopped production for five months…. We paid them all to twiddle their thumbs while the team perfected the story into what became Toy Story. And if they hadn’t had the courage to stop, there would have never been a Toy Story the way it is, and there probably would have never been a Pixar.

(Odd how Jobs doesn’t mention John Lasseter, who I suspect was the driving force behind this particular redirection.)

Jobs goes on to explain how they never expected to run into one of those defining moments again, but that instead they tend to run into such a moment on every film at Pixar. They’ve gotten better at it, but “there always seems to come a moment where it’s just not working, and it’s so easy to fool yourself – to convince yourself that it is when you know in your heart that it isn’t.”

That’s a weird, sinking feeling, but it’s hard to catch. Any designer (or writer or other craftsperson) has these moments, where you know something is wrong, but even if you can put your finger on what it is, the momentum of the group and the work already done creates a kind of inertia that pushes you into compromise.

Design is always full of compromise, of course. Real life work has constraints. But sometimes there’s a particular decision that feels ultimately defining in some way, and you have to decide if you want to take the road less traveled.

Jobs continues with a similar situation involving the now-iconic iPhone:

We had a different enclosure design for this iPhone until way too close to the introduction to ever change it. And I came in one Monday morning, I said, ‘I just don’t love this. I can’t convince myself to fall in love with this. And this is the most important product we’ve ever done.’ And we pushed the reset button.

Rather than everyone on the team whining and complaining, they volunteered to put in extra time and effort to change the design while still staying on schedule.

Of course, this is Jobs talking — he’s a master promoter. I’m sure it wasn’t as utopian as he makes out. Plus, from everything we hear, he’s not a boss you want to whine or complain to. If a mid-level manager had come in one day saying “I’m not in love with this” I have to wonder how likely this turnaround would’ve been. Still, an impressive moment.

You might think it’s necessary to have a Steve Jobs around in order to achieve such redirection. But, it’s not.

Another of the most successful products on the planet is Blizzard’s World of Warcraft — the massively multiplayer universe with over 10 million subscribers and growing. This brand has an incredibly loyal following, much of that due to the way Blizzard interacts socially with the fans of their games (including the Starcraft and Diablo franchises).

Gaming news site IGN recently ran a thorough history of Warcraft, a franchise that started about fifteen years ago with an innovative real-time-strategy computer game, “Warcraft: Orcs & Humans.”

A few years after that release, Blizzard tried developing an adventure-style game using the Warcraft concept called Warcraft Adventures. From the article:

Originally slated to release in time for the 1997 holidays, Warcraft Adventures ran late, like so many other Blizzard projects. During its development, Lucas released Curse of Monkey Island – considered by many to be the pinnacle of classic 2D adventures – and announced Grim Fandango, their ambitious first step into 3D. Blizzard’s competition had no intention of waiting up. Their confidence waned as the project neared completion …

As E3 approached, they took a hard look at their product, but their confidence had already been shattered. Curse of Monkey Island’s perfectly executed hand-drawn animation trumped Warcraft Adventures before it was even in beta, and Grim Fandango looked to make it downright obsolete. Days before the show, they made the difficult decision to can the project altogether. It wasn’t that they weren’t proud of the work they had done, but the moment had simply passed, and their chance to wow their fans had gone. It would have been easier and more profitable to simply finish the game up, but their commitment was just that strong. If they didn’t think it was the best, it wouldn’t see the light of day.

Sounds like a total loss, right?

But here’s what they won: Blizzard is now known for providing only the best experiences. People who know the brand do not hesitate to drop $50-60 for a new title as soon as it’s available, reviews unseen.

In addition, the story and art development for Warcraft Adventures later became raw material for World of Warcraft.

I’m aware of some other stories like this, such as how Flickr came from a redirection away from making a computer game … what are some others?

Why We Just Don’t Get It Fri, 04 Sep 2009 14:59:09 +0000 In an article called “The Neuroscience of Leadership” (free registration required*), from Strategy + Business a few years ago, the writers explain how new understanding about how the brain works helps us see why it’s so hard for us to fully comprehend new ideas. I keep cycling back to this article since I read it just a few months ago, because it helps me put a lot of things that have perpetually bedeviled me in a better perspective.

One particularly salient bit:

Attention continually reshapes the patterns of the brain. Among the implications: People who practice a specialty every day literally think differently, through different sets of connections, than do people who don’t practice the specialty. In business, professionals in different functions — finance, operations, legal, research and development, marketing, design, and human resources — have physiological differences that prevent them from seeing the world the same way.

Note the word “physiological.” We tend to assume that people’s differences of opinion or perspective are more like software — something with a switch that the person could just flip to the other side, if they simply weren’t so stubborn. The problem is, the brain grows hardware based on repeated patterns of experience. So, while stubbornness may be a factor, it’s not so simple as we might hope to get another person to understand a different perspective.

Recently I’ve had a number of conversations with colleagues about why certain industries or professions seem stuck in a particular mode, unable to see the world changing so drastically around them. For example, why don’t most advertising and marketing professionals get that a website isn’t about getting eyeballs, it’s about creating useful, usable, delightful interactive experiences? And even if they nod along with that sentiment in the beginning, why do they seem clueless once the work starts?

Or why do some of your coworkers just not seem to get a point you’re making about a project? Why is it so hard to collaborate on strategy with an engineer or code developer? Why is it so hard for managers to get those they manage to understand the priorities of the organization?

And in these conversations, it’s tempting — and fun! — to somewhat demonize the other crowd, and get pretty negative about our complaints.

While that may feel good (and while my typing this will probably not keep me from sometimes indulging in such a bitch-and-moan session), it doesn’t help us solve the problem. Because what’s at work here is a fundamental difference in how our brains process the world around us. Doing a certain kind of work, in a particular culture of others doing that work, creates a particular architecture in our brains, and continually reinforces it. If your brain grows a hammer, everything looks like a nail; if it grows a set of jumper cables, everything looks like a car battery.

Now … add this understanding to the work Jonathan Haidt and others have done showing that we’re already predisposed toward deep assumptions about fundamental morals and values. Suddenly it’s pretty clear why some of our biggest problems in politics, religion, bigotry and the rest are so damned intractable.

But even if we’re not trying to solve world hunger and political turmoil, even if we’re just trying to get a coworker or client to understand a different way of seeing something, it’s evident that bridging the gap in understanding is not just a peripheral challenge for doing great design work — it may be the most important design problem we face.

I don’t have a ready remedy, by the way. But I do know that one way to start building bridges over these chasms of understanding is to look at ourselves, and be brutally honest about our own limitations.

I almost titled this post “Why Some People Just Don’t Get It” — but I realized that sets the wrong tone right away. “Some People” becomes an easy way to turn others into objects of ridicule, which I’ve done myself even on this blog. It’s easy, and it feels good for a while, but it doesn’t help the situation get better.

As a designer, have you imagined what it’s like to see the world from the other person’s experience? Isn’t that what we mean when we say the “experience” part of “user experience design” — that we design based on an understanding of the experience of the other? What if we treated these differences in point of view as design problems? Are we up to the challenge?

Later Edit:

There have been some excellent comments, some of which have helped me see I could’ve been more clear on a couple of points.

I perhaps overstated the “hardware” point above. I neglected to mention the importance of “neuroplasticity” — and that the very fact we inadvertently carve grooves into the silly-putty of our brains also means we can make new grooves. This is something about the brain that we’ve only come to understand in the last 20-30 years (I grew up learning the brain was frozen at adulthood). The science speaks for itself much better than I can poorly summarize it here.

The concept has become very important to me lately, in my personal life, doing some hard psychological work to undo some of the “wiring” that’s been in my way for too long.

But in our role as designers, we don’t often get to do psychotherapy with clients and coworkers. So we have to design our way to a meeting of minds — and that means 1) fully understanding where the other is coming from, and 2) being sure we challenge our own presuppositions and blind spots. This is always better than just retreating to “those people don’t get it” and checking out on the challenge altogether, which happens a lot.

Thanks for the comments!

* Yet another note: the article is excellent; a shame registration is required, but it only takes a moment, and in this case I think it’s worth the trouble.

Favorites are FAIL for web security Wed, 02 Sep 2009 01:10:39 +0000 I don’t usually get into nitty-gritty interaction design issues like this on my blog. But I recently moved to a new address, and started new web accounts with various services like phone and utilities. And almost all of them are adding new layers of security asking me additional personal questions that they will use later to verify who I am. And entirely too many are asking questions like these, asked by AT&T on their wireless site:


I can’t believe how many of them are using “favorites” questions for security. Why? Because it’s so variable over time, and because it’s not a fully discrete category. Now, I know I’m especially deficient in “favorite” aptitude — if you ask me my favorite band, favorite food, favorite city, I’ll mumble something about “well, I like a lot of them, and there are things about some I like more than others, but I really can’t think of just one favorite…” Most people probably have at least something they can name as a favorite. But because it’s such a fuzzy category, it’s still risky and confusing.

It’s especially risky because we change over time. You might say Italian food is your favorite, but you’ve never had Thai. And when you do, you realize it blows Italian food away — and by the next time you try logging into an account a year later, you can’t remember which cuisine you specified.

Even the question about “who was your best friend as a kid” or “what’s the name of your favorite pet, when you were growing up” — our attitudes toward these things are highly variable. In fact, we hardly ever explicitly decide our favorite friend or pet — unless a computer asks us to. Then we find ourselves, in the moment, deciding “ok, I’ll name Rover as my favorite pet” — but a week later you see a picture in a photo album of your childhood cat “Peaches” and on your next login, it’s error-city.
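If sites insist on these questions anyway, the least they can do is stop punishing users for formatting drift (capitalization, stray punctuation, extra spaces). Here’s a minimal sketch of that kind of forgiving comparison; the helper names are hypothetical, not any real provider’s code:

```python
import hashlib
import hmac
import re

# Hypothetical helpers -- a sketch, not any vendor's implementation.

def normalize_answer(answer: str) -> str:
    """Lowercase, trim, drop punctuation, and collapse whitespace,
    so '  Rover! ' and 'rover' compare equal."""
    answer = answer.lower().strip()
    answer = re.sub(r"[^\w\s]", "", answer)
    return re.sub(r"\s+", " ", answer)

def store_answer(answer: str, key: bytes) -> bytes:
    """Keep only a keyed digest of the normalized answer, never the raw text."""
    return hmac.new(key, normalize_answer(answer).encode(), hashlib.sha256).digest()

def answers_match(stored: bytes, attempt: str, key: bytes) -> bool:
    """Constant-time comparison of a normalized attempt against the stored digest."""
    digest = hmac.new(key, normalize_answer(attempt).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(stored, digest)
```

Even with normalization, though, “Rover” versus “Peaches” is unrecoverable, which is the real argument against “favorites” questions in the first place.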

I suspect one reason this bugs me so much is that it’s an indicator of how a binary mentality behind software can do uncomfortable things to us as non-binary human beings. It’s the same problem as Facebook presents when it asks you to select which category your relationship falls into. What if none of them quite fit? Or even if one of them technically fits, it reduces your relationship to that data point, without all the rich context that makes that category matter in your own life.

Probably I’m making too much of it, but at least, PLEASE, can we get the word out in the digital design community that these security questions simply do not work?

Awesome link-trove of game-studies research Fri, 10 Jul 2009 17:37:46 +0000 The excellent Neuroanthropology blog offers up a terrific list of links to recent research & articles covering topics like Design, Research, Addiction and Art Criticism. Check it out!

Human culture sparked by “new kind of connectedness” Tue, 07 Jul 2009 20:02:41 +0000 Jonah Lehrer explains the import of a study described in Science.

The larger implication is that the birth of human culture was triggered by a new kind of connectedness. For the first time, humans lived in dense clusters, and occasionally interacted with other clusters, which allowed their fragile innovations to persist and propagate. The end result was a positive feedback loop of new ideas.

Sounds an awful lot like what the Internet is doing, no?

Data vs Insight for UX Design Mon, 06 Jul 2009 16:51:15 +0000 [diagram: UX Insight Elements]

Funny how things can pop into your head when you’re not thinking about them. I can’t remember why this occurred to me last week … but it was one of those thoughts I realized I should write down so I could use it later. So I tweeted it. Lots of people kindly “re-tweeted” the thought, which immediately made me self-conscious that it may not explain itself very well. So now I’m blogging about it. Because that’s what we kids do nowadays.

My tweet: User Experience Design is not data-driven, it’s insight-driven. Data is just raw material for insight.

I whipped up a little model to illustrate the larger point: insight comes from a synthesis between talent, expertise, and the fresh understanding we gain through research. It’s a set of ingredients that, when added to our brains and allowed to stew, often over a meal or after a few good nights’ sleep, can bring a designer to those moments of clarity where a direction finally makes sense.

I’ve seen a lot of talk lately about how we shouldn’t be letting data drive our design decisions — that we’re designers, so we should be designing based on best practices, ideas, expertise, and even “taste.” (I have issues with the word “taste” as many people use it, but I don’t have a problem with the idea of “expert intuition,” which is, I think, closer to what a lot of my colleagues mean. In fact, that Ira Glass video that made the rounds on tweets and blogs a few weeks ago puts a better spin on “taste”: an aspiration that may be, for now, beyond one’s actual abilities without work and practice.)

As for the word “data” — I’m referring to empirical data as well as the recorded results of something less numbers-based, like contextual research. Data is an input to our understanding, but nothing more. Data cannot tell us, directly, how to design anything.

But it’s also ludicrous to ask a client or employer to spend their money based solely on your expertise or … “taste.” Famous interior or clothing designers or architects can perhaps get away with this — because their names carry inherent value, whether their designs are actually useful or not. So far, User Experience design practitioners don’t have this (dubious) luxury. I would argue that we shouldn’t, otherwise we’re not paying much attention to “user experience” to begin with.

Data is valuable, useful, and often essential. Data can be an excellent input for design insight. I’d wager that you should have as much background data as you can get your hands on, unless you have a compelling reason to exclude it. In addition, our clients tend to speak the language of data, so we need to be able to translate our approach into that language.

It’s just that data doesn’t do the job alone. We still need to do the work of interpretation, which requires challenging our presuppositions, blind spots and various biases.

The propensity for the human brain to completely screw stuff up with cognitive bias is, alone, reason enough to put our design ideas through a bit of rigor. Reading through the oft-linked list of cognitive biases on Wikipedia is hopefully enough to caution any of us against the hubris of our own expertise. We need to do the work of seeing the design problem anew, with fresh understanding, putting our assumptions on the table and making sure they’re still viable. To me, at least, that’s a central tenet behind the cultural history of “user experience” design approaches.

But analysis paralysis can also be a serious problem; and data is only as good as its interpretation. Eventually, actual design has to happen. Otherwise you end up with a disjointed palimpsest, a Frankenstein’s Monster of point-of-pain fixes and market-tested features.

We have to be able to do both: use data to inform the fullest possible understanding of the behavior and context of potential users, as well as bring our own experience and talent to the challenge. And that’s hard to do, in the midst of managing client expectations, creating deliverables, and endless meetings and readouts. But who said it was easy?

Legitimate Pseudonymity Tue, 09 Jun 2009 17:31:01 +0000 There’s been a recent brouhaha in the political blogosphere about whether or not it’s ethical to publish under a pseudonym. And a lot of the debate seems to me to have missed an important point.

There’s a difference between random, anonymous pot-shot behavior and creating a secondary persona. It could very well be that a writer has good reason to create a second self to be the vehicle for expression. The key to this facet of identity is reputation.

In order to gain any traction in the marketplace of ideas, one must cultivate a consistent persona, over time. In effect, the writer has to create a separate identity — but it’s an identity just the same. Its reputation stands on its behavior and its words. If the author is invested at all in that identity, then its reputation is very important to the author, just like their “real” identity and reputation.

The Internet is full of examples where regular people have joined a discussion board, or started an anonymous blog or Live Journal and, before they know it, they have friendships and connections that are important to them in that parallel world of writing, sharing and discussion. Whether those people know the writer’s real name or not becomes beside the point. (Sherry Turkle and others have been exploring these ideas about identity online for many years now.)

What publishing has provided us, definitely since the printing press and especially since the Internet, is the ability to express ideas as *ideas* with very little worry about real-life baggage, anxieties, expectations and relationships getting in the way. It’s a marketplace where the ideas and their articulation can stand on their own.

Of course, history shows a long tradition of pseudonyms. Benjamin Franklin and Alexander Hamilton both wrote under pseudonyms in order to make their points: Franklin as “Mrs Silence Dogood” and later, as an open secret, “Poor Richard”; Hamilton as “Publius” (which happens to be the pseudonym adopted by the blogger at the center of the disagreement mentioned above). Other writers modified or changed their names to improve their chances of publication or of being taken seriously. Marian Evans was able to publish brutally, psychologically frank fiction partly because she published under the name George Eliot. And Samuel Clemens famously wrote under the name Mark Twain as a way to reinvent his whole identity. While these examples don’t all fit one precise pattern, the point is that publishing has always made room for generally accepted inventions involving the writer’s identity.

None of this is to say that anonymity doesn’t come with a downside. It certainly does. But lumping all anonymous or pseudonym-written writers into the same category doesn’t help.

The Deep Dive, 10 Years Later Tue, 02 Jun 2009 19:53:43 +0000

It appears someone has posted the now-classic episode of Nightline about Ideo (called “The Deep Dive”) to YouTube. I hope it’s legit and that Disney/ABC isn’t going to make somebody take it down. But here’s the link, in hopes that doesn’t happen.

About 10 years ago, I started a job as an “Internet Copywriter” at a small web consultancy in North Carolina. By then, I’d already been steeped in the ’net for seven or eight years, but mainly as a side interest. My day jobs had been web-involved but not centrally, and my most meaningful learning experiences designing for the web had been side projects for fun. When I started at the new web company job, I knew there would need to be more to my role than just “concepting” and writing copy next to an art director, advertising-style. Our job was to make things people could *use*, not just look at or be inspired to action by. But to be frank, I had little background in paid design work.

I’d been designing software of one kind or another off and on for a while, in part-time jobs while in graduate school. For example, creating a client database application to make my life easier in an office manager job (and then having to make it easy enough for the computer-phobic clerical staff to use as well). But I’d approached it as a tinkerer and co-user — making things I myself would be using, and iterating on them over time. (I’d taken a 3-dimensional design class in college, but it was more artistically focused — I had yet to learn much at all about industrial design, and had not yet discovered the nascent IA community, usability crowd, etc.)

Then I happened upon a Nightline broadcast (which, oddly, I never used to watch — who knows why I had it on at this point) in which the show engaged the design company Ideo. And I was blown away. It made perfect sense… here was a company that had codified an approach to design that I had been groping for intuitively, but had not fully grasped and articulated. It brought into sharp clarity a number of crucial principles, such as behavioral observation and structured creative anarchy.

I immediately asked my new employer to let me order the video and share it with them. It served as a catalyst for finding out more about such approaches to design.

Since then, I’ve of course become less fully enamored of these videos… after a while you start to see the sleight-of-hand that an edited, idealized profile creates, and how it was probably the best PR event Ideo ever had. And ten years gives us the hindsight to see that Ideo’s supposedly genius shopping cart didn’t exactly catch on — in retrospect it was a fairly flawed design in many ways (in a busy grocery store, how many carts can reasonably be left at the end-caps while shoppers walk around with hand-baskets?).

But for anyone who isn’t familiar with the essence of what many people I know call “user experience design,” this show is still an excellent teaching tool. You can see people viscerally react to it — sudden realization about how messy design is, by nature, how interdependent it is with physically experiencing your potential users, how the culture needed for creative collaboration has to be cultivated, protected from the Cartesian efficiencies and expectations of the traditional business world, and how important it is to have effective liaisons between those cultures, as well as a wise approach to structuring the necessary turbulence that creative work brings.

Then again, maybe everybody doesn’t see all that … but I’ve seen it happen.

What I find amazing, however, is this: even back then, they were saying this was the most-requested video order from ABC. This movie has been shown countless times in meetings and management retreats. And yet, the basic approach is still so rare to find. The Cartesian efficiencies and expectations form a powerful presence. What it comes down to is this: making room for this kind of work to be done well is hard work itself.

And that’s why Ideo is still in business.

Some great reading about brains. Mon, 18 May 2009 18:44:15 +0000 Brain (from Wikipedia)

Lately, I can’t seem to get enough of learning about brain science — neurological stuff, psychological stuff, whatever. Bring it on. There’s an amazing explosion of learning going on about our brains, and our minds (and how our brains give rise to our minds, and vice-versa).

I can’t help but think this is all great news for designers of all stripes. How could it do anything but help us to better understand how cognition works, how we make decisions, how our identities are formed and change over time, or even what it means for us to be happy? There are a few really excellent articles I’ve run across recently (and seen lots of folks linking to from Twitter as well).

First, this beautifully written, deeply human piece in The Atlantic on “What Makes Us Happy?” It follows a unique, longitudinal study of a generation of men who were first measured and tracked at Harvard in the 30s, as mere teenagers. It’s deftly honest about the inherent limitations in such work, but shows how valuable the results have been anyway. Mainly it’s worth reading because of its poignancy and introspection.

Another article: “Don’t! The Secret of Self Control” is from Jonah Lehrer, who’s fast becoming the Carl Sagan of Neuroscience. (And I mean that in nothing but a good way.) It looks at discoveries regarding delayed gratification, and how it’s connected to intelligence, maturity and general life success over time.

And another from the New Yorker: “Brain Games” about behavioral neurologist Vilayanur S. Ramachandran, who has figured in a number of things I’ve heard and read lately about brain science. (Reading the article requires free registration, but do read it!) I found myself wishing I could quit my job and go to UC San Diego for a completely unnecessary degree, just so I could have regular conversations with this guy and his colleagues. Among the coolest stuff discussed: how deeply social we are without even knowing it, how we construct our identities, and the possibility that we might measurably discover how human consciousness emerged, and how it works.

I can’t get over how exciting all this subject matter is to me. I suppose it’s because it combines all my favorite stuff … it’s answering questions that philosophy, theology and the creative arts have been gnawing at for generations.

The Machineries of Context & the Journal of IA Tue, 05 May 2009 17:07:14 +0000 Congratulations to Andrea Resmini and all the hardworking, brilliant people who just launched the Journal of Information Architecture.

I’m not saying this just because I’m fortunate enough to have an article in it, either. In fact, I hope my tortured prose can live up to the standard set by the other writers.

Link to contents page for Journal of IA Volume 1, Issue 1

About my article, “The Machineries of Context” [PDF]. In the article, I try explaining why I think Information Architecture is kind of a big deal — how linking and creating semantic structures in digital space is an increasingly challenging, important practice in the greater world of design. In essence, re-framing IA to help us see what it has been all along.

Update: The Information Architecture Institute was kind enough to publish an Italian translation of the article.

The Return of Imagery: Mixpression Wed, 01 Apr 2009 16:05:02 +0000 I’ve been puzzling over what I was getting at last year when I was writing about “flourishing.” For a while now I’ve been clearer about what I meant… and I’ve realized it wasn’t the right term. Now I’m trying “mixpression” on for size.

What I meant by “flourishing” is the act of extemporaneously mixing other media besides verbal or written-text language in our communication. That is: people using things like video clips or still images with the same facility and immediacy that they now use verbal/written vocabulary. “Mixpression” is an ungainly portmanteau, I’ll admit. But it’s more accurate.

(Earlier, I think I had this concept overlapping too much with something called “taste performance” — more about which, see bottom of the post.)

Victor Lombardi quotes an insightful bit from Adam Gopnik on his blog today: Noise Between Stations » Images That Sum Up Our Desires.

We are, by turn — and a writer says it with sadness — essentially a society of images: a viral YouTube video, an advertising image, proliferates and sums up our desires; anyone who can’t play the image game has a hard time playing any game at all.
– Adam Gopnik, Angels and Ages: A Short Book About Darwin, Lincoln, and Modern Life, p 33

When I heard Michael Wesch (whom I’ve written about before) at IA Summit earlier this month, he explained how his ethnographic work with YouTube showed people having whole conversations with video clips — either ones they made themselves, or clips from mainstream media, or remixes of them. Conversations, where imagery was the primary currency and text or talk were more like supporting players.

Here’s the thing — I’ve been hearing people bemoan this development for a while now. How people are becoming less literate, or less “literary” anyway, and how humanity is somehow regressing. I felt that way for a bit too. But I’m not so sure now.

If you think about it, this is something we’ve always had the natural propensity to do. Even written language evolved from pictographic expression. We just didn’t have the technology to immediately, cheaply reproduce media and distribute it within our conversations (or to create that media to begin with in such a way that we could then share it so immediately).

The means of production and distribution simply haven’t allowed most regular people to mix imagery into their communication with any ease. We resort to “remember that scene in that movie?” and we describe it with words. But when we have A/V equipment and the media in reach (for something like a class or presentation), we don’t hesitate to just show the clip — this was true even when all we had was VHS. What we’re seeing is just more of the same — but much much more, because the media is so much easier to grab and display in the midst of our communication.

The difference, then, is ease of access and distribution, which always results in dramatic increases in something that we were naturally doing anyway, just with more inertia. It’s the same dynamic behind the explosion of blogs (the Web) and self-published magazines and newsletters (desktop publishing software + photocopiers).

People have always grabbed whatever was available to them to help them express themselves to one another. But for the first time in human history, we have an explosion of available chunks of expression to choose from, and a communication platform on which to use those media chunks.

So I think we can maybe think of this as a Return of Imagery — using literal images in ways that we haven’t been able to do “on the street” since we first started abstracting written language from literal pictures.

It’s interesting that Gopnik, in the quote above, uses the word “game.” I haven’t read the quote in context, but it sounds like he means it with a slightly disparaging tone. Yet I’m also sure Gopnik is familiar with Wittgenstein’s concept of “language game” — and I kind of hope that’s what he means. Language has always been, in a sense, a “played” activity. It’s just that now many of us have more game pieces to play with.

I realize, too, that this is a phenomenon limited to those of us with access to such technology. But that’s increasingly becoming a commodity. Today’s cutting-edge video-phones are the every-day technology of developing societies within a couple of years. We didn’t think, a few years ago, that we’d see blogs, digital photography or video coming out of less developed areas, and yet they’re increasingly apparent. I have to wonder what my kid’s conversation with her global peers will look like 10 years from now, when she’s 23?


Footnote: About “taste performance,” which is related but not quite what I was getting at. (Thanks to Austin for sending me that article a while back.) Taste performance is about ornamentation & accessorizing. Just as we might now wear clothes or carry bags with particular logos, messages, brands or attitudes represented (everything from Threadless T-shirts to Louis Vuitton handbags), we can imagine a near future when activated fabrics allow us to have clips from favorite movies or music videos, or a series of images by favorite artists, playing in a loop on our garments & the things we carry.

Of course, taste-performance is communication as well — everything we do in public involves some kind of communication, whether conscious or not, including body language, tone of voice, even pheromones. And that fits better with the term “flourishing,” which sounds like what a peacock does with its feathers. What I had in mind was more what a magician does with her cards. It’s a visual, supportive expression, adding meaning to what’s already being said (so that it changes the meaning in some way — shapes it explicitly or obliquely).

You Are (Mostly) Here: Digital Space & the Context Problem Thu, 26 Mar 2009 23:58:57 +0000 Here’s the presentation I did for IA Summit 2009 in Memphis, TN. It’s an update of what I did for IDEA 2008; it’s not hugely different, but I think it pulls the ideas together a little better. The PDF is downloadable from SlideShare. The notes are legible only at full-screen or in the PDF.

IAI Workshop for IASummit 2009: Beyond Findability Fri, 13 Feb 2009 18:32:04 +0000 Beyond Findability: Context

View more presentations from andrewhinton.

Now that the workshop has come and gone, I’m here to say that it went swimmingly, if I do blog so myself.

My colleagues did some great work — hopefully it’ll all be up on Slideshare at some point. But here are the slides I contributed. Alas, there are no “speaker notes” with these — but most of the points are pretty clear. I would love to blog about some of the slides sometime soon — but whenever I promise to blog about something, I almost guarantee I won’t get around to it. So I’ll just say “it would be cool if I blogged about this…” :-)

Just one more blog plug for the workshop some of us are doing before the IA Summit in Memphis this year.

See the Pre-Con Page at the Conference Site.
Register Here

For those of you who may be attending the IA Summit in Memphis this year, let me encourage you to look into the IA Institute’s pre-conference workshop called “Beyond Findability: Reframing IA Practice & Strategy for Turbulent Times.”

A few things I want to make clear about the session:

– We’re making it relevant for any UX design people, not just those who self-identify as “Information Architects.” In fact, part of the workshop is about how different practitioner communities can better collaborate & complement other approaches.
– By “Turbulent times” we don’t just mean the economy, but the turbulence of technological change — the incredibly rapid evolution of how people use the stuff we make.
– It’s not a how-to/tutorial-style workshop, but meant to spark some challenging conversation and push the evolution of our professions ahead a little faster.
– There will, however, be some practical take-away content that you should be able to stick on a cube wall and make use of immediately.
– It’s not “anti-Findability” — but looks at what IA in particular brings to design *beyond* the conventional understanding of the practice.
– We’re hoping experienced design professionals will attend, not just newer folks; the content is meant to be somewhat high-level and advanced, but you should be able to get value from it no matter where you are in your career.

Here’s the quickie blurb:

This workshop aims to take your IA practice to a higher level of understanding, performance and impact. Learn about contextual models and scalable frameworks, design collaboration tactics, and how to wield more influence at the “strategy table.”

If you have any specific questions about it, please feel free to hit me up with an email!

*Note: the IA Summit itself is produced by ASIS&T, not the IA Institute.

A mature approach to maturing IA Thu, 12 Feb 2009 16:11:43 +0000 Here’s an excellent article written up at the ASIS&T Bulletin, by some talented and thoughtful folks in Europe (namely Andrea Resmini, Katriina Byström and Dorte Madsen). I’ll quote the end of the piece at length.

IA Growing Roots – Concerning the Journal of IA

Even if someone’s ideas about information architecture are mind-boggling, if they do not discuss them in public, embody them in some communicable artifact and get them to be influential, they are moot. This reality is the main reason behind the upcoming peer-reviewed scientific Journal of Information Architecture, due in Spring 2009. For the discipline to mature, the community needs a corpus, a defining body of knowledge, not a definition.

No doubt this approach may be seen as fuzzy, uncertain and highly controversial in places. Political, even biased. But again, some overlapping and uncertainty and controversy will always be there: Is the Eiffel Tower architecture or engineering? The answer is that it depends on whom you ask, and why you ask. And did the people who built it consider themselves doing architecture, engineering or what? The elephant is a mighty complex animal, as the blind men in the old Indian story can tell you, and when we look closer, things usually get complex.

The IA community does not need to agree on a “definition” because there is more to do. An analytical approach must be taken on the way the community sees itself, with some critical thinking and some historical perspective. The community needs to grow roots. We hope the Journal will help along the way.

I especially like the Eiffel Tower example. And I like the stake in the ground that says: let’s not worry about a definition; we have more work to do. This is the sort of mature thinking we need at the “discipline” level, where people can focus on the academic, theoretical framework that helps evolve what the bulk of IA folk do at the “practice” level. (Of course, that flow works in the other direction too!)

The UX Tribe Wed, 11 Feb 2009 21:52:53 +0000 UX Meta-community of practice

I don’t have much to say about this; I just want to see if I can inject a meme into the bloodstream, so to speak.

Just an expanded thought I had recently about the nature of all the design practices in the User Experience space. From the tweets and posts and other chatter that drifted my way from the IxDA conference in Vancouver last week, I heard a few comments around whether or not Interaction Designers and Information Architects are the same, or different, or what. Not to mention Usability professionals, Researchers, Engineers, Interface Programmers, or whatever other labels are involved in the sort of work all these people do.

Here’s what I think is happening. I believe we’re all part of the same tribe, living in the same village — but we happen to gather and tell our stories around different campfires.

And I think that is OK. As long as we don’t mistake the campfires for separate tribes and villages.

The User Experience (UX) space is big enough, complex enough and evolving quickly enough that there are many folds, areas of focus, and centers of gravity for people’s talents and interests. We are all still sorting these things out — and will continue to do so.

Find me a single profession, no matter how old, that doesn’t have these same variations, tensions and spectrums of interest or philosophical approach. If it’s a living, thriving profession, it’ll have all these things. It’s just that some have been around long enough to have a reified image of stasis.

We need different campfires, different stories and circles of lore. It’s good and healthy. But this is a fairly recently converged family of practices that needs to understand what unifies us first, so that our conversations about what separates us can be more constructive.

The IAI is one campfire. IxDA is another. CHI yet another, and so on. Over time, some of these may burn down to mere embers and others will turn into bonfires. That’s OK too. As long as, when it comes time to hunt antelope, we all eat the BBQ together.

And now I’m hungry for BBQ. So I’ll leave it at that.

PS: a couple of presentations where I’ve gone into some of these issues, if you haven’t seen them before: UX As Communities of Practice; Linkosophy.

The Challenge of Taste in Design Mon, 09 Feb 2009 22:22:44 +0000 There are a lot of cultural swirls in the user-experience design tribe. I’ve delved into some of them now and then with my Communities of Practice writing/presentations. But one point that I haven’t gotten into much is the importance of “taste” in the history of contemporary design.

Several of my Twitter acquaintances recently pointed to a post by the excellent Michael Bierut over on Design Observer. It’s a great read — I recommend it for the wisdom about process, creativity and how design actually doesn’t fit the necessary-fiction-prop of process maps. But I’m going to be petty and pick on just one throwaway bit of his essay.**

In the part where he gets into the designer’s subconscious, expressing the actual messy stuff happening in a creative professional’s head when working with a client, this bit pops out:

Now, if it’s a good idea, I try to figure out some strategic justification for the solution so I can explain it to you without relying on good taste you may or may not have.

Taste. That’s right — he’s sizing up his audience with regard to “taste.”

Now, you might think I’m going to whine that nobody should be so full of himself as to think of a client this way… that they have “better taste” than someone else. But I won’t. Because I believe some people have a talent for “taste” and some don’t. Some people have a knack: to some degree it’s part of their DNA, like having an ear for harmony or incredibly nimble musculature for sports; and to some degree it comes from training, taking that raw talent and immersing it in a culture of other talents and mentors over time.

These people end up with highly sharpened skills and a sort of cultural radar for understanding what will evoke just the right powerful social signals for an audience. They can even push the envelope, introducing expressions that feel alien at first, but feel inevitable only a year later. They’re artists, but their art is in service of commerce, persuasion and social capital, rather than the more rarefied goals of “pure art.” (And can we just bracket the “what is art” discussion? That way lies madness.)

So, I am in no way denigrating the importance of the sort of designer for whom “taste” is a big deal. They bring powerful, useful skills to the marketplace, whether used for good or ill. “Taste” is at the heart of the “Desirable” leg in the three-legged stool of “Useful, Usable and Desirable.” It’s what makes cultural artifacts about more than mere, brute utility. Clothes, cars, houses, devices, advertisements — all of these things have much of their cultural power thanks to someone’s understanding of what forms and messages are most effective and aspirational for the intended audience. It’s why Apple became a cultural force — because it became more like Jobs than Woz. Taste is OK by me.

However, I do think that it’s a key ingredient in an unfortunate divide between a lot of people in the User Experience community. What do I mean by this?

The word “design” — and the very cultural idea of “designer” — is very bound up in the belief in a special Priesthood of Taste. And many designers who were educated among or in the orbit of this priesthood tend to take their association pretty seriously. Their very identities and personalities, their self-image, depends in part on this association.

Again, I have no problem with that — all of us have such things that we depend on to form how we present ourselves to the world, and how we think of ourselves. As someone who has jumped from one professional sub-culture to another a few times in my careers (ministry, academia, poetry, technologist, user-experience designer) I’ve seen that it’s inevitable and healthy for people to need, metaphorically speaking, vestments with which to robe themselves to signal not just their expertise but their tribal identities. This is deep human stuff, and it’s part of being people.

What I do have a problem with is that perfectly sane, reasonable people can’t seem to be self-aware enough at times to get the hell over it. There’s a new world, with radically new media at hand. And there are many important design decisions that have nothing at all to do with taste. The invisible parts are essential — the interstitial stuff that nobody ever sees. It’s not even like the clockwork exposed in high-end watches, or the elegantly engineered girder structures exposed in modernist architecture. Some of the most influential and culturally powerful designs of the last few years are websites that completely eschewed or offended “taste” of all sorts (craigslist; google; myspace; etc).

The idea of taste is powerful, and perfectly valid, but it’s very much about class-based cultural pecking orders. It’s fun to engage in, but we shouldn’t take it too seriously, or we end up blinded by our bigotry. Designing for taste is about understanding those pecking orders well enough to play them, manipulate them. But taking them too seriously means you’ve gone native and lost perspective.

What I would hope is that, at least among people who collaborate to create products for “user experiences” we could all be a little more self aware about this issue, and not look down our noses at someone who doesn’t seem to have the right “designer breeding.” We live in an age where genius work can come from anywhere and anyone, because the materials and possibilities are so explosively new.

So can we please stop taking the words “design” and “designer” hostage? Can we at least admit that “taste” is a specialized design problem, but is not an essential element of all design? And the converse is necessary as well: can UX folks who normally eschew all aesthetics admit the power of stylistic choice in design, and understand it has a place at the table too? At some point, it would be great for people to get over these silly orthodoxies and prejudices, because there is so much stuff that still needs to be designed well. Let’s get over ourselves, and just focus on making shit that works.

Does it function? Does it work well for the people who use it? Is it an elegant solution, in the mathematical sense of elegance? Does it fit the contours of human engagement and use?

“Taste” will always be with us. There will always be a pecking order of those who have the knack or the background and those who don’t. I’d just like to see more of us understand and admit that it’s only one (sometimes optional) factor in what makes a great design or designer.

**Disclaimer: don’t get me wrong; this is not a rant against Michael Bierut; his comment just reminded me that I’ve run across this thought among a *lot* of designers from the (for lack of better label) AIGA / Comm Arts cultural strand. I think sizing up someone’s “taste” is a perfectly valid concept in its place.

Publishers vs Reporters Mon, 09 Feb 2009 18:24:31 +0000 Just had to point out this quote from Clay Shirky’s post on the inherent FAIL of the micropayments model for publishing (and, well, much of anything).

Why Small Payments Won’t Save Publishers « Clay Shirky

We should be talking about new models for employing reporters rather than resuscitating old models for employing publishers.

But it’s amazing how hard it is to shift the point of view away from the lens of Institutions and toward the talents of the actual content producers. The same problem vexes the music industry.

Beyond Findability Seminar at IA Summit 2009 Wed, 14 Jan 2009 18:48:36 +0000 In case you were wondering which of the fabulous IA Summit 2009 pre-conference seminars to spend your hard-earned (or hard-begged) money on, look no further than “Beyond Findability: Reframing IA Practice & Strategy for Turbulent Times.”

It’s being presented by the IA Institute, and includes some very smart, experienced people in the UX/IA world: Livia Labate, Joe Lamantia and Matthew Milan. In addition, it includes little old me.

Hit that link to learn more about the workshop. I’m excited about it — we’ll be digging into some meaty subjects, and stretching our brains about IA. Yet we’ll manage to have lots of practical take-aways and fascinating conversations too.

Nussbaum Rants on the death of “Innovation” Fri, 02 Jan 2009 23:44:47 +0000 I’m peeking my head up from the last few bits of holiday time to point out that this is a great rant from Bruce Nussbaum. The first paragraph is terrific enough that I have to quote it in full.

“Innovation” died in 2008, killed off by overuse, misuse, narrowness, incrementalism and failure to evolve. It was done in by CEOs, consultants, marketeers, advertisers and business journalists who degraded and devalued the idea by conflating it with change, technology, design, globalization, trendiness, and anything “new.” It was done in by an obsession with measurement, metrics and math and a demand for predictability in an unpredictable world. The concept was also done in, strangely enough, by a male-dominated economic leadership that rejected the extraordinary progress in “uncertainty planning and strategy” being done at key schools of design that could have given new life to “innovation.” To them, “design” is something their wives do with curtains, not a methodology or philosophy to deal with life in constant beta — life in 2009.

That said, I’m not sure I’m that thrilled with “Transformation” either. Because the same philistines who bastardized “innovation” and “design” will turn “Transformation” into something just as awful. Like some tassel-loafered Pygmalion sculpting a sad excuse for a girlfriend out of pie charts and paperclips.

“Transformation” sounds way too much like the self-help books these people (mostly guys) read when they want to improve their memory, pectoral muscles or golf swings.

I’m convinced there will always be a minority who “get it.” And a majority who take whatever “it” is and turn it into a hollow, dry husk of what “it” could be.

Turns out I wasn’t dead, I was just in Charlotte. Fri, 19 Dec 2008 23:45:51 +0000 I’ve been very unbloggy lately. But it’s been a crazy-busy fall of 2008.

Ever since I finished up at IDEA, it’s been a mad sprint to get myself moved to North Carolina. You’d think 6-8 weeks would be plenty of time, but not the way I work. I bounce around between tasks like a six-year-old in a chocolate-flavored-meth lab. Only, you know, less illegal than that. Mostly.

Also, I’ve been preoccupied with my new position as a Director (on the Board of Directors) for the IA Institute. But even that has been mostly on hold for a few weeks as I scrambled to get things arranged for the move. But now we’re in the thick of planning an IA Summit pre-conference session, which is shaping up to be chock full of awesome.

Moving is hard — especially if you’ve been stewing in your own juices in one apartment for four years. And especially if you’re as old as I am with all the legacy crap in tow. Boxes of books, memories, grad school papers, orphaned journals, toys from when you were seven and the rest. (I still have my Mattel Electronic Football game, and a worse-for-wear Evel Knievel doll — the one that rode that wind-up motorcycle that I could never get to work right). The logistics are crazy too — finding a moving company, getting all that squared away, then dumping enough junk so that you’re not paying someone to move things that you’re just going to end up dumping later anyway.

Finding an apartment is a trick — I finally had to stop worrying about finding something “interesting.” I’ve lived in “interesting” for the last 4 years — a historic building overlooking an up-and-coming entertainment district — and I’m kind of over it. My new apartment doesn’t have any historic charm or line-of-sight to awesome bars, but it has four whole drawers in the kitchen. FOUR drawers, my friends. Not the one I had before but FOUR. I still haven’t figured out what to do with this obscene amount of horizontal storage space. Not to mention I have two bathrooms, so I can leave one of them a mess and still have a decent one for any guests who may happen to be over.

So what does any of this have to do with the main thrust of this blog? Not much, other than making some public record here of my existence so that the content gap doesn’t reach an entire two whole months. (“Thrust of My Blog” would be a pretty nice band name, btw)

Oh, my binary heart Thu, 23 Oct 2008 16:29:16 +0000 David Weinberger’s most recent JOHO post shows us some thinking he’s doing about the history (and nature) of “information” as a concept.

The whole thing is great reading, so go and read it.

Some of it explores a point that I touched on in my presentation for IDEA earlier this month: that computers are very literal machines that take the organic, nuanced ambiguities of our lived experience and (by necessity) chop them up into binary “is or is not” data.

Bits have this symbolic quality because, while the universe is made of differences, those differences are not abstract. They are differences in a taste, or a smell, or an extent, or a color, or some property that only registers on a billion dollar piece of equipment. The world’s differences are exactly not abstract: Green, not red. Five kilograms, not ten. There are no differences that are only differences.

The example I gave at IDEA was how on Facebook, you have about six choices to describe the current romantic relationship you’re in: something that normally is described to others through contextual cues (a ring on your finger, the tone of voice and phrasing you use when mentioning the significant other in conversation, how you treat other people of your sig-other’s gender, etc). These cues give us incredibly rich textures for understanding the contours of another person’s romantic life; but Facebook (again, out of necessity) has to limit your choices to a handful of terms in a drop-down menu — terms that the system renders as mutually exclusive, by the fact that you can only select one.

More and more of the substance of our lives is being housed, communicated & experienced (by ourselves and others) in the Network. And the Network is made of computers that render everything into binary choices. Granted, we’re making things more fine-grained in many systems, and giving people a chance to add more context, but that can only go so far.

Weinberger uses photography as an example:

We turn a visual scene into bits in our camera because we care about the visual differences at that moment, for some human motive. We bit-ify the scene by attending to one set of differences — visible differences — because of some personal motivation. The bits that we capture depend entirely on what level of precision we care about, which we can adjust on a camera by setting the resolution. To do the bit-ifying abstraction, we need analog equipment that stores the bits in a particular and very real medium. Bits are a construction, an abstraction, a tool, in a way that, say, atoms are not. They exist because they stand for something that is not made of bits.

All this speaks to the implications of Simulation, something I’m obsessing about lately as it relates especially to Context. (And which I won’t go into here… not another tangent!)

Dave’s example reminds me of something I remember Neil Young complaining about years ago (in Guitar Player magazine) in terms of what we lose when we put music into a digital medium. He likened it to looking out a screen door at the richly contoured world outside — but each tiny square in the screen turns what is seen through its confines into an estimated average “pixel” of visible information. In all that averaging, something vital is inevitably lost. (I couldn’t find the magazine interview, but I did find him saying something similar in the New York Times in 1997: “When you do an analog recording, and you take it to digital, you lose everything. You turn a universe of sounds into an average. The music becomes more abrupt and more agitating, and all of the subtleties are gone.”)

Of course, since that interview (probably 15 years ago) digital music has become much more advanced — reconstructing incredibly dense, high-resolution information about an analog original. Is that the answer, for the same thing that’s happening to our analog lives as they’re gradually soaked up by the great digital Network sponge? Higher and higher resolution until it’s almost real? Maybe. But in every case where we’re supposed to decide on an input to that system (such as which label describes our relationship), we’re being asked to turn something ineffable into language — not only our own, expressively ambiguous language, but the predefined language of a binary system.

Given that so much of our lives is increasingly experienced and mediated via the digital layer, the question arises: to what degree will it change the way we think about identity, humanity, even love?

Flickr and Market Governance Tue, 30 Sep 2008 23:13:47 +0000 Erin Malone points to an article on the challenges of managing the Flickr community in the SF Chronicle:

"People bring their human relationships to Flickr, and we end up having to police them," Champ says. …

Lest your inner libertarian objects to such interventions, Champ is quick to correct the idea that the community would ultimately find its own balance.

"The amount of time it would take for the community to self-regulate — I don't think it could sustain itself in the meantime," she says. "Anyway, I can't think of any successful online community where the nice, quiet, reasonable voices defeat the loud, angry ones on their own."

This struck me as uncannily relevant to what’s going on right now in the US economy.

Once social platforms like Flickr reach a certain size, they really do become a weird amalgam of City & Economy, and they require governance. Heather Champ (Flickr’s estimable community manager) points out that, even if you truly believe a collective crowd like this will self-regulate, much damage will be done on the way to finding that balance.

Isn’t that precisely the perennial tension we have in terms of free-market economics?

It seems to me that User Experience design increasingly needs to learn from Economics and Political Science — and it may even have a thing or two to teach them, as well.

I have lots of thoughts on this, but too many to get down here … just wanted to bring it up because I think it’s so damned fascinating.

Pew Internet: Teens, Video Games and Civics Tue, 30 Sep 2008 19:27:19 +0000 This excellent report came out a couple of weeks ago. It shows that the ubiquity and importance of video games, and game culture, are even greater than many of us imagined. I explored some of this in a presentation a few years ago: Clues to the Future. I’m itching to keep running with some of those ideas, especially now that they’re being taken more seriously in business & technology circles (not by my doing, of course, but just from increased exposure in mainstream publications and the like).

Pew Internet: Teens, Video Games and Civics

The first national survey of its kind finds that virtually all American teens play computer, console, or cell phone games and that the gaming experience is rich and varied, with a significant amount of social interaction and potential for civic engagement….

The primary findings in the survey of 1,102 youth ages 12-17 include —

* Game playing is universal, with almost all teens playing games and at least half playing games on a given day.
* Game playing experiences are diverse, with the most popular games falling into the racing, puzzle, sports, action and adventure categories.
* Game playing is also social: most teens play games with others at least some of the time, and play can incorporate many aspects of civic and political life.

I’m especially interested in the universality of game playing. It reinforces more than ever the idea that the language of games is going to be an increasingly universal language. The design patterns, goal-based behaviors, playfulness — these are things that have to be considered over the next 5-10 years as software design accommodates these kids as they grow up.

The social aspect is also key: we have an upcoming generation that expects their online & software-based experiences to integrate into their larger lives. Unlike older users, they don’t assume that various applications and contexts are separate, so they won’t feel pleasantly surprised (or disturbed) to discover they’re connected. They’ll have a different set of assumptions about connectedness.

The Cultivation Equation Part 2: Motivation Mon, 22 Sep 2008 22:01:18 +0000 Motivation

Months ago, I posted the first part of something I’d been presenting on for over a year: a simple way of thinking about social design choices. I called it the “Cultivation Equation for Social Design.” I should’ve known better, but I said at the end of that post that I’d be posting the rest soon … then proceeded to put it off for a very long time. At any rate, here’s the second part, about Motivation. The third part (about Moderation) will be forthcoming, eventually, but I make no promises on timing.

social equation

Motivation is essentially an answer to the question: Why would anybody bother using this platform?

As previously explained, there has to be some social behavior already in existence that the platform supports as a medium. As with the community itself, you can’t create motivation by fiat either. But you can certainly create a medium that has the right nutrients and structures in place for users to be motivated to engage within it.

Self-Interest over Altruism

Like it or not, people are self-interested. It would be nice to think that people will do things “for the good of the community,” but people don’t actually function that way on a continual basis.

I don’t mean this to sound cynical. People definitely act in altruistic ways all the time, but you can’t design social platforms assuming everyone will do so regularly enough to keep things going.

There’s a difference between selfishness, which is more like greed, and benign self-interest, which is constructive and fuels many of the best things about civilization. Self-interest is just the impulse to improve one’s own life and fulfillment, and healthy self-interest recognizes that being a contributing member of the social context is important to one’s own well-being.

Identity and well-being are inextricably twined with social context, and that drives a great deal of how we behave socially — so much so that we are rarely conscious of it. Contributing socially is a way of defining oneself, and bringing attention, support, companionship and other great stuff to oneself. It’s more about Social Capital than cold hard cash.

If your platform is designed with this in mind, it stands a much better chance of success. For example, Netflix motivates you to bother rating movies by promising that your ratings will make its recommendations to you better over time. And not only that, it makes sure that when you visit your queue, you see the most recent movies you watched at the top as a helpful reminder to rate those as well.

Of course, Netflix’s recommendation engine (and therefore its business) is enhanced by your efforts, as are the experiences of all other users at Netflix benefitting from the massive collective intelligence behind the movies recommended to them as well … but Netflix doesn’t push this as the reason for you to rate movies. They stick to the main message: this will help YOU. Of course, nobody would believe them if they didn’t deliver on that promise; so they do.

Remixability & Presence

No community is an island. Each exists as a cluster of relationships, but every member of it has other external relationships. If you pull back just a bit, you see that every community is just a slightly more dense collection of relationships in a giant, interconnected tapestry. It’s nearly impossible to find borders or distinctions between them.

People have multivariate lives, and they’re increasingly expecting to be able to grab bits of one thing and have it mix into other things. If your community infrastructure doesn’t lend itself to syndication, mobile interaction, and the like — it’s risking irrelevance. People want it to come to them.

Because of the Internet, if people want to be part of a conversation, they can engage when they have the time to consume others’ expressions and respond in kind; just as important, they can also engage from where they are rather than having to wait until they’re sitting at a home computer.

Making it possible to catch up and respond through other channels, such as mobile and email, makes it much more likely people will integrate the platform into their daily rituals. Even though asynchronous communication allows a lot of flexibility, there’s still usually a window of opportunity during a given conversation where someone feels his or her input will be relevant. If someone doesn’t have a chance to shoot off an email or put 2 cents into the conversation — or even respond to a “poke” or comment on a photo — when they’re first aware of it, it’s far less likely they’ll take the time to do it later.

Another important part of Presence is making sure the platform reflects presence-related attributes of its inhabitants. For example, on LiveJournal, when you search for people with a particular interest, LiveJournal provides results that show, essentially, users’ icons and a text caption giving the journal name and the time elapsed since their last post. But, importantly, it *orders* the results by how long it’s been since their last post activity. It’s such a simple set of choices, but they have powerful consequences: the system implicitly rewards more active users with greater attention. Over time, with millions of searches and users, these design choices result in more links being created between some users rather than others, and help determine the ultimate shape of the community.

But which attributes are important? Like so many other structural choices in social spaces, there’s no universally successful list. For some it might be most recent post; for others it might be most highly rated content. The important point is to understand that these choices have powerful influence on the ultimate shape of the site — these structural, rules-based choices are the DNA of the platform. Which means it’s also important to have the ability to adjust those rules and structures as you go (more on that later).
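That recency-ordering rule is simple enough to sketch in a few lines of Python. The user records and field names here are invented for illustration — LiveJournal’s actual implementation is unknown to me — but the sketch shows how one small sorting choice systematically steers attention toward active members:

```python
from datetime import datetime

# Hypothetical search results for users matching an interest.
users = [
    {"name": "ana", "last_post": datetime(2008, 9, 1)},
    {"name": "ben", "last_post": datetime(2008, 9, 20)},
    {"name": "cho", "last_post": datetime(2008, 7, 15)},
]

# The structural rule: order by most recent activity, descending.
# Active users float to the top and so receive more attention.
results = sorted(users, key=lambda u: u["last_post"], reverse=True)

print([u["name"] for u in results])  # ['ben', 'ana', 'cho']
```

Swapping the sort key — say, to highest-rated content instead of recency — would reward a different kind of member, which is exactly why these rules-based choices act like the DNA of the platform.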

Of course, there’s plenty more about Presence that I’m not even touching on here, such as “Ambient Intimacy” and the like. I think Presence and Remixability have a lot to do with each other; the channels and levels of granularity we have available to us now boggle the mind. Figuring out how to have a presence in all of them, and how they connect together (or, just as importantly, how they don’t) is one of the big, interesting challenges of the coming years.

Shared Artifacts & Objects

Many social-web gurus have highlighted the importance of Social Objects, as championed by Jyri Engestrom, creator of social platform Jaiku. The idea has also been articulated very well by Hugh MacLeod (of gapingvoid fame).

Rather than go into detail on Social Objects here, I encourage you to click the links and bone up at the source. In a nutshell: Social Objects are the things that work as the social attractors and “glue” for social interactions. On Flickr it’s photographs; on LiveJournal it’s journals; on Twitter it’s tweets. We don’t socialize just for socialization’s sake — we connect around hubs of common interest, from the most casual to the most serious.

I have a particular interest in how this insight helps us understand Communities of Practice. It sounds silly to say it out loud, but practitioners are practical. That is, when a person is in “practitioner” mode — they’re putting on that hat, or living that role in a given moment — they tend to focus on practical matters having to do with the concrete work at hand.

Objects become even more important in this context. Teaching and learning are central to community-of-practice communication and social connection, so having shared artifacts as products of this continual conversation is of major importance to any community focused in this way. Evidently, research bears this out. It’s no accident that the wiki concept was born as a practical solution to the needs of a practitioner community. The platform has to provide the right kind of soil for any particular kind of growth. For example, a blog provides a very different kind of soil than a wiki; one is driven by individual expression from individual identities; the other is driven by collaborative effort that obscures particular identities. The kind of artifact each of these systems offers determines the sort of work that can be done there; for some groups, a blog is a highly motivating system, while for others a wiki might work much better. It depends on the group and the sort of work they’re trying to do.

Denis Wood: A Narrative Atlas of Boylan Heights « Making Maps: DIY Cartography Sat, 20 Sep 2008 17:25:58 +0000 Detail from Halloween Pumpkins Map of Wood's Neighborhood

In my talk for IDEA Conference, I’ll be referencing the work of Denis Wood.

I’m so utterly intrigued by a particular long-term project he did back in the 80s, which ended up in an episode of This American Life.

In a blog post at the “Making Maps” blog, Wood goes into some detail about this project and how it managed to go from being a bunch of stuff in a box under his desk to one of the most popular episodes of TAL: Denis Wood: A Narrative Atlas of Boylan Heights « Making Maps: DIY Cartography

There was a lot we wanted to do. Certainly we wanted to use the mapping to help us figure out what a neighborhood was, but we also wanted to use the mapping as a kind of organizational tool, as a way of bringing the neighborhood together and helping it to see itself. This meant we wanted to be able to get copies of the atlas into the hands of the residents and so we planned a black and white atlas that could be cheaply reproduced on a copy machine. At the same time we wanted to make something beautiful, almost a livre d’artiste. I in particular was impatient with distinctions between art and science – it was an important part of my teaching that these distinctions were arbitrary and obfuscatory – and I wanted the atlas to read almost like a novel.

One thing I love about this story is how it breaks assumptions about what maps are, and what they’re for. How a map of a neighborhood shapes the identity of the neighborhood. Wood uses the term “mask” for what a map can do — it creates a mask that the place wears on its own face, for anyone who looks at the map. It constructs & reveals, at the same time, a neighborhood’s crime patterns, underground tunnels & cisterns, or the spirit of its residents through where the jack-o-lanterns are on Halloween.

The map is a narrative. So what does that mean, when we create virtual places that are their own maps? It means our assumptions of what the map shows actually shape the neighborhood itself. Not just in the “social construction” sense of printed maps and physical neighborhoods, but quite literally in the sense of architecture.

What does it mean to be a resident of a neighborhood mapped as X vs one mapped as Y? What if, with the flip of a switch, the neighborhood could change between X and Y? Isn’t that what we’re making with our virtual spaces?

Moving South Fri, 19 Sep 2008 17:55:47 +0000 A bit of personal news …

It looks like it’s official: I’ll be moving to Charlotte, NC sometime around December. I’ll still be working for Vanguard’s User Experience Group, but from that location.

Why? Well, for one thing, it’s an opportunity for the company to have more UXG presence at that site, an opportunity that has recently started to make more sense. But my main reason for requesting the move is that my daughter lives with her mom in Winston-Salem, NC, which is pretty close to there. Plus, my aging parents are just south of there in GA. One might wonder why I didn’t do this sooner, but there are practical reasons why I haven’t until now (happy to explain off-line if anyone’s interested). I’m glad I can get closer to family, while still continuing to work with Vanguard’s UX group, which is honestly the best job I’ve ever had.

I hate to leave the Philly area — it’s been my home for nearly 5 years now. While I’ll be coming back up for work reasons now and then, it won’t be the same as living here. And while living here, I’ve grown some very close ties that I hope to keep tied even from a distance.

It’d be great if any Philly locals could work me into their schedules for a meetup of some kind, before I ship out.

If anyone has recommendations on moving companies, places to live, etc, I’m all ears! Hit me at inkblurt via gmail.

Changes in how we perceive place & time Mon, 15 Sep 2008 18:33:15 +0000 From Adam Greenfield:

If, as so many have pointed out, the ongoing process of digital ephemeralization has taken previously place-bound functions like communication, banking and commerce, and exploded them – “smearing them across urban space,” in Bill Mitchell’s words – it’s without question also doing interesting and significant things to how we perceive the nexus of place and time.

Adam says he’s going to focus on these changes in his new book — I’m looking forward to it.

Ideas vs Ideology and the “Strategy Table” Thu, 11 Sep 2008 16:15:24 +0000 If you’ve ever seen Stanley Kubrick’s movie “Paths of Glory,” it’s a brutal illustration of the distinction between “ideas” and “ideology.”

Kirk Douglas’s character (Colonel Dax) is coming to the “strategy table” after leading his men in the first-hand experience of the trenches. Based on his observations from open-minded, first-hand experience of his troops on the ground, he has ideas about what should and shouldn’t be done strategically. But the strategists, basing their decisions on ideology, force him to lead his soldiers to make a completely suicidal attack: an attack that makes no sense based on what one can plainly see “on the ground.” In this movie, the Strategy Table is ideologically driven; Dax is driven by ideas shaped, and changed, by first-hand experience.

In my last post, Austin Govella commented with some terrific questions that made me think a lot harder about what I was getting at. Austin asked: “Is ‘design doing’ the practice of all design practitioners? Can you be a design practitioner whose practice consists of ideology and abstractions?” And it made me realize I hadn’t fully thought through the distinction. But it’s a powerful distinction to make.

In design practice, ideas are the imaginative constructs we generate as we try to solve concrete problems. Ideas are fluid, malleable, and affected by dialectic. They’re raw material for making into newer, better ideas.

Ideology is nearly the opposite. Ideology already has the questions answered. Ideology is orthodoxy, dogma, received doctrine. It comes from “the gods” — and it’s generally a cop-out. We see it in business all the time, where people make decisions based on assumed doctrine, partly because doing so means that if something goes wrong, you can always say “but that’s what the doctrine said I should do.” It kills innovation, because it plays to our fears of risking failure. And it plays to our tendency to believe in hierarchies, and that the top dog knows what’s best just because he’s the top dog.

Let me be clear: I don’t want to paint designers as saints and business leaders as soulless ideologues. That would, ironically, be making the mistake I’m saying we have to avoid! We are all human, and we’ve all made decisions based on dogma and personal ambition at some point. So, we have to be careful of seeing ourselves as the “in the trenches hero” fighting “the man.” There are plenty of business leaders who strive to shake their ideologies, and plenty of designers who ignore what’s in front of them to charge ahead based on ideology and pure stubbornness.

I also realize that ideology and ideas overlap a good deal — that strategy isn’t always based in dogma, and ideas aren’t always grounded in immediate experience. So, when I say “Strategy Table” I only mean that there’s a strong tendency for people to think as ideologues at that level — it’s a cultural issue. But designers are far from immune to ideology. Very far.

In fact, designers have a track record of inventing ideologies and designing from them. But nearly every example of a terribly designed product can be traced to some ideology. Stewart Brand nicely eviscerates design ideology in “How Buildings Learn” — famous architecture based on aesthetic ideologies, but divorced from the grounded experience of the buildings’ inhabitants, results in edifices that people hate to use, living rooms where you can’t relax, atriums everyone avoids. Falling Water is beautiful, and helped architecture re-think a lot of assumptions about how buildings co-exist with landscapes. But Wright’s own assumptions undermined the building’s full potential: for example, it leaks like a sieve (falling water, indeed). Ideology is the enemy of successful design.

Paradoxically, the only thing close to an ideology that really helps design be better is one that forces us to question our ideological assumptions. But that’s not ideology, it’s method, which is more practical. Methods are ways to trick ourselves into getting to better answers than our assumptions would’ve led us to create. (Note, I’m not saying “methodology” — as soon as you put “ology” on something, you’re carving it in marble.)

Jared Spool’s keynote at the IA Summit this year made this very point: ideology leads to things like a TSA employee insisting that you put a single 3oz bottle of shampoo in a plastic bag, because that’s the rule, even though it makes no practical sense.

But the methods and techniques we use when we design for users should never rise to that level of rules & orthodoxy. They’re tools we use when we need them. They’re techniques & tricks we use to shake ourselves out of our assumptions, and see the design problem at hand more objectively. They live at the level of “patterns” rather than “standards.” As Jared illustrated with his stone soup analogy: putting the stone in the soup doesn’t make the soup — it’s a trick to get people to re-frame what they’re doing and get the soup made with real ingredients.

That distinction is at the heart of this “design thinking” stuff people are talking about. But design thinking can’t be codified and made into dogma — then it’s not design thinking anymore. It has to be grounded in *doing* design, which is itself grounded in the messy, trench-level experience of those who use the stuff we make.

Coming to the “Strategy Table,” a big part of our job is to re-frame the problem for the Lords of the Table, and provoke them to see it from a different point of view. And that is a major challenge.

In Paths of Glory, one of the members of the Strategy Table, Paul Mireau, actually comes to the trenches himself. One of the real dramatic tensions of the film is this moment when we can see the situation through Dax’s eyes, but we can tell from Mireau’s whole bearing that he simply does not see the same thing we do. He’s wearing Strategy Goggles (with personal-ambition-tinted lenses!), and ignores what’s in front of his face.

At the “Strategy Table” one of our biggest challenges is somehow getting underneath the assumptions of the strategy-minded, and helping them re-think their strategy based on ideas grounded in the real, messy experience of our users. If we try to be strategists who think and work exclusively at a strategic level, we stop being practitioners with our hands in the soil of our work.

But what if we approach this challenge as a design problem? Then we can see the people at the strategy table as “users,” and our message to them as our design. We can observe them, understand their behaviors and mental models, and design a way of collaborating with them that meets their expectations but undoes their assumptions. At the same time, it will help us understand them as well as we try to understand our users, which will allow us to communicate and collaborate better at the table.

Sitting at the Strategy Table Wed, 10 Sep 2008 18:30:22 +0000 Catching up on the AP blog, I saw Kate Rutter’s excellent post: Build your very own seat at the strategy table, complete with a papercraft “table” with helpful reminders! It’s about designers gaining a place at the “strategy table” — where the people who run things tend to dwell.

I had written something about this a while back, about Strategy & Innovation being “Strange Bedfellows.” But Kate’s post brought up something I hadn’t really focused on yet.

So I commented there, and now I’m repeating here: practitioners’ best work is at the level of practice.

They make things, and they make things better, based on the concrete experience of the things themselves. The strategy table, however, has traditionally been populated by those who are pretty far removed from the street-level effects of their decisions, working from the level of ideology. (Not that it’s a bad thing — most ideology is the result of learned wisdom over time, it just gets too calcified and/or used in the wrong context at times.) This is one reason why so many strategists love data rather than first-hand experience: they can (too often) see the data however they need to, based on whatever ideological glasses they’re wearing.

When designers leave the context of hands-on, concrete problem solving and try to mix it up with the abstraction/ideology crowd, they’re no longer in their element. So they have to *bring* their element along with them.

Take that concrete, messy, human design problem, and drop it on the table with a *thud* — just have some “data” and business-speak ready to translate for the audience. And then dive in and get to work on the thing itself, right in front of them. That’s bringing “design thinking” into the strategy room — because “design thinking” is “design doing.”

Running for the Board & the future of the IAI Fri, 05 Sep 2008 17:35:29 +0000 In the midst of all the other things keeping me busy and away from blogging, some very nice people nominated me to serve on the Board of Advisors for the IA Institute. I’m flattered and honored, and a bit intimidated. But if elected, I’ll give it my best shot.

They asked for a bio and position statement. Here’s the position bit I sent them:

This [Information Architecture] community has excelled at creating a “shared history of learning” over the last 10 years. We’ve seen it bring essential elements to the emergence of User Experience Design, in the form of methods, tools, knowledge, and especially people. I think the IAI has been essential to how the community has developed, thanks to the hard work of its volunteers and staff creating excellent initiatives for mentorship, careers and other important needs.

The next big challenge is for the IAI to become more than a sum of its parts. How can it become a more influential, vital presence in the UX community? How can it serve as an amplifier for the amazing knowledge and insight we have among our members and colleagues? How can it evolve understanding of IA among business and design peers? And how can we better coexist and collaborate with those peers and practices?

From the beginning, IA has grappled with one of the most important challenges designers now face: how to define and link contexts usefully, usably and ethically in a digital hyper-linked world. I don’t see that challenge becoming any easier in the years ahead. In fact, the digital world is only becoming more pervasive, strange and exciting.

As a board member, my focus will be to help the IA Institute grow as a valued, authoritative resource for that future.


Already, I feel the urge to further explain.

My position statement doesn’t get into a lot of nitty-gritty details, partly because those aren’t things I feel I can promise. The details still need to be sorted out to meet the stated goal. Plus, I wanted to be relatively brief (compared to how I usually write, anyway). Plus, honestly, all the stuff below is kind of a mess, and not nearly refined enough to be stated as an official “position” on anything. But since some of my colleagues have been more specific in their positions, I figure I should at least take a stab at it. Here we go, with some thoughts on how to achieve what I mention above.

1. Re-think what it means to be an Institute (and a professional organization) in the Web Era.

We started the IAI at an interesting moment: almost exactly at the transition from 20th-century top-down thinking to 21st-century emergence thinking. Many of the old models and templates for organizations like this are becoming quickly, brutally obsolete. There’s a wave of crisis happening in professional organizations globally — they’re having trouble recruiting younger members, because those people are used to organizing and networking on Facebook and other platforms. We are not immune to this sea change.

Example: Why do we need an IAI-curated library, when there are amazing collectively-driven platforms out there already doing that work for us much better? Why not involve ourselves in *those* platforms? Or, why do we have to keep up our clunky membership directory that doesn’t integrate with anything else on the Web, when there are platforms like LinkedIn? (And if LinkedIn doesn’t have the features we need, why don’t we PARTNER with them to create them?!?)

None of these points is meant as a complaint — some of them are questions I’m only just now fully realizing for myself. The only fault is if we see the light and don’t do anything about it. We shouldn’t keep trying to improve and prop up what is now an antiquated approach to community cultivation.

2. Evolve our description of Information Architecture and its Importance & Value

I think that IA as a field of study and practice needs to be articulated better than we were able to state it back in 2002, when the current IAI definition of IA was written. That definition was, as it itself states, a *provisional* statement, allowing for further evolution. It’s time to evolve it. That evolution needs to clarify what we mean by “information environment” and “structure.” IA is not merely about organizing content and metadata. That’s like saying architecture is about deciding where the bricks and girders go.

My own take on it is that IA has all along been about designing *context* in digital, hyperlinked environments, where semantics are the only material we have to work with. It sounds simple, but it’s not, and the design implications are enormous. With the increase of ubiquitous & location-aware networks, it only gets weirder and more important. In addition, we have to understand that IA is more and more about creating sets of rules-as-structure, systems, for users to create their own experiences. (For my kickoff of this line of thought, see my Linkosophy talk.) I don’t think everyone has to agree with me on this particular take on IA — but the point is that we need to flesh out what the hell IA really is *at that level.* (My talk at IDEA Conference will touch on some of these design challenges.) A few years ago, I wasn’t quite so hot to do this … but I think it’s time.

To some degree, I think this lack of a mature description of IA is at the heart of some other problems with the IAI. It’s hard for people to be motivated to identify themselves professionally with an organization whose own identity is so amorphous. We have some of the smartest people in the UX community in our midst — we should be able to do this.

IA is a deep, complex area of design that’s only getting more complex. And it’s an area with implications that too many designers don’t fully grasp (even IAs don’t fully grasp it, methinks). In fact, we’re only starting to really get how big these implications are. Some people who were more active in our community have gone on to write about these things — Adam Greenfield, for example — and have looked back at the IA conversation and asked “why aren’t you dealing with this?” I wish these friends would have stuck around and *pushed* us harder to have those conversations rather than moving on. Because that’s what the community needs — stubborn, vocal people pushing the others to grapple with new, bigger challenges.

“Big IA” and “Little IA” are not separate things, just as engineering and architecture are not separate things, or micro-economics and macro-economics are not separate things. They’re part of a continuum of interdependent knowledge and skills. But if you’re merely laying bricks without concern or understanding of the whole building, you’re a bricklayer, not an architect. If you’re doing a taxonomy without concern for the “Big IA” end of things — the strategic implications, the social interconnections, the emotional dimension — you’re not doing IA, you’re just doing taxonomy. (There’s definitely a place for professional taxonomists, btw… nothing wrong with that… but it’s a specialty that existed before IA.)

3. Be a thought-leading Institute, not just a professional service organization.

There’s nothing wrong with service organizations for professions. They do important, grass-roots work helping people in their careers, networking, etc. I don’t mean the word “just” as a pejorative in any sense — I mean that it can (and must) be more. Let me explain.

Once the IAI has a better articulation of its area of focus, it can claim a much more solid identity. And it can become a center for exploring, discussing and solving problems around these very complex issues. It can also be a sounding board or amplifier for the many gifted, innovative voices we have in our community. There are many ways to do this: formal publications (a journal is already in the works…but we need to be sure we re-think what ‘journal’ means in this day and age); whitepapers; informal publications, like blogging or occasional symposia; and especially making use of the incredible knowledge-generation capacity of our community. (See point 1 above) We have a great opportunity here to figure out how to be an authoritative resource in an age when everybody has the technology to be an authority on their own.

For example … Every week I see a half dozen answers to questions on our discussion list that should be published somewhere for people *anywhere* to reference and learn from. I see the benefits of having a closed discussion forum, but there should be a way to make some of the content available to the wider world. That’s how authority and relevance happen in the new Web universe. And in that universe, people will stop coming to a professional service organization that doesn’t have a strong voice and authority in the idea marketplace, because that’s the coin of the realm on the Web. Without this, the IAI becomes a relic.

Conclusion (wherein I pretend the above had enough cohesion and clarity to warrant a concluding section)

There are, of course, many other wonderful things the IAI can do … but I think it’s already doing a lot of those to one degree or another. Mentoring, creating workshops, etc. I believe that making the three things above happen will help the IAI do all the other service-related stuff even better.

Of course, there are also things we could improve in terms of how well the IAI leadership communicates with its members, involves them in major decisions or expenditures, and the like. Any lack of this, to date, hasn’t been because anyone was running a secret cabal. It’s more a result of beleaguered volunteers being heads-down in operational work to keep stuff going, with minimal support from the larger community. But, again, accomplishing the things above will (I believe) help energize and motivate that support much more than begging for assistance. A successful “Institute” has people offering their assistance just for the chance to be a part of it, and to associate themselves with something great.

Let me be clear — I believe everyone who’s given of their time for the IAI to date has done great work!! It’s come leaps and bounds from the coffee-fueled brainfarts we had in Asilomar six years ago. This is just a new stage in our story.

Context Collapse Fri, 22 Aug 2008 15:24:38 +0000 First of all, I didn’t realize that Michael Wesch had a blog. Now that I’ve found it, I have a lot of back-reading to do.

But here’s a recent post on the subject of Context, as it relates to web-cams and YouTube-like expression. Digital Ethnography — Context Collapse

The problem is not lack of context. It is context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a blackhole sucking all of time and space – virtually all possible contexts – in upon itself.

By the way, I’m working on a talk on context for IDEA Conference. Are you registered yet?

You don’t want to miss IDEA Conference 2008! Wed, 13 Aug 2008 15:15:47 +0000

IDEA 2008 is shaping up to be quite a conference, in spite of my involvement. In terms of speakers, it’s looking great: David Armano writes the influential Logic + Emotion blog — one of the few voices online that understands the complexity of merging Marketing & Advertising with User Experience Design; Bill DeRouchey is one of the smartest people writing and thinking about the future of Interaction Design; visual-thinking sensei Dave Gray is the founder of XPLANE, a company whose blog I started reading religiously back in 1999, when my career as a Web Design person was really taking off; and Jason Fried, author of the provocative design manifesto Getting Real.

That’s just scratching the surface. They’re going to have in-the-trenches expert speakers from places like Maya, IDEO, The New York Times, 37 Signals, and more. In addition, I hear Jesse James Garrett will be doing an in-person presentation of Aurora.

It’s happening October 7-8, 2008; early registration is still in effect for another week or so, until August 17!

All your selves collide Wed, 06 Aug 2008 22:49:51 +0000 There’s been a lot of writing here and there about social networks and privacy, but I especially like how this professor from (one of my alma-maters) UNCG puts it in this article from the Washington Post:

“It’s the postmodern nightmare — to have all of your selves collide,” says Rebecca G. Adams, a sociologist at the University of North Carolina at Greensboro who edits Personal Relationships, the journal of the International Association for Relationship Research. … “If you really welcome all of your friends from all of the different aspects of your life and they interact with each other and communicate in ways that everyone can read,” Adams says, “you get held accountable for the person you are in all of these groups, instead of just one of them.”

It’s a pretty smart article. I also liked the phrase “participatory surveillance” for describing what happens socially online.

Words we use for what we make Tue, 05 Aug 2008 20:58:27 +0000 I just saw that the BBC TV documentary series based on Stewart Brand’s “How Buildings Learn” has been posted on Google Video. Huzzah!

It’s been a while since I read the book, so I watched a bit of the first episode, and it kicked up a thought or two about the language we use for design. Brand makes a sharp distinction between architecture that’s all about making a “statement” — a stylistic gesture — and architecture that serves the needs of a building’s inhabitants. (Arguably a somewhat artificial distinction, but a useful one nonetheless. For the record, Joshua Prince-Ramus made a similar distinction at IASummit07.)

The modernist “statements” Brand shows us are certainly experiences — and were designed to be ‘experienced’ in the sense of any hermetic work of ‘difficult’ art. But it’s harder to say they were designed to be inhabited. On the other hand, he’s talking about something more than mere “use” as well. Maybe, for me at least, the word “use” has a temporary or disposable shade of meaning?

It struck me that saying a design is to be “inhabited” makes me think about different values & priorities than if I say a design is to be “used” or “experienced.”

I’m not arguing for or against any of these words in general. I just found the thought intriguing… and I wonder just how much difference it makes how we talk about what we’re making, not only to our clients but to one another and ourselves.

Has anyone else found that how you talk about your work affects the work? The way you see it? The way others respond to it?

Even “useless” categories can help Thu, 31 Jul 2008 15:20:06 +0000 Ok… before the flames start… nobody’s saying that categories are useless, quite the opposite. And the study isn’t claiming that “useful” categories are no better. But it’s still a fascinating insight… From the Frontal Cortex blog:

“Now Iyengar has published a new study showing that one way to combat the effects of excessive choice is to group items into categories. It turns out that even useless categories make people happier with their choices.”

Jyri Engestrom on “Nodal Points” Fri, 27 Jun 2008 17:12:58 +0000 Dave Weinberger is blogging bits of the valuably fecund “Reboot” conference this week. Included is a nice summary of Jaiku-founder Jyri Engestrom’s talk. In the past, he’s been very influential among social design folk for pushing the idea of “social objects” — a powerful notion that helps clarify why people do what they do socially (usually it’s around some artifact, subject or object).

This time, Jyri pulls the frame out a bit to look at the bigger picture of social patterns and talks about “nodal points” — there’s more explanation on the post, but here’s a taste:

“Social peripheral vision” lets you see what’s next. If you are unaware of other people’s intentions, you can’t make plans. “Imagine a physical world where we have as much peripheral information at our disposal as in WoW.” Not just “boring update feeds.” Innovate, especially on mobiles. We will see this stuff in the next 24 months. Some examples: Maps: Where my friends are. Phonebook: what are people up to. Email: prioritized. Photos: Face recognition.

Light Children Mon, 23 Jun 2008 23:42:47 +0000 The terrifically talented Kyle T. Webster and cohort Andy Horner have completed the first full edition of their graphic novel Light Children. It looks to be gorgeous and enthralling. Go check it out!

From the site:

On the eve of graduation, six friends struggle with the fact that two of their oldest are about to graduate and leave the others behind. But just as they devise an exciting plan for one last memorable adventure, a bizarre secret is uncovered. Fascination turns to fear when they realize this discovery may mean that Eli, a sick child new to the orphanage, may be in great danger.

The girls rush to warn Eli and re-gather their friends, but have yet to realize the worst day of their lives has just begun.

Context and “Choice Architectures” Mon, 23 Jun 2008 21:17:46 +0000 Within a larger, and more political, point in his column, George Will explains something about structuring systems so as to “nudge” people toward a particular behavior pattern, without mandating anything: George F. Will: Nudge Against the Fudge

Such is the power of inertia in human behavior, and the tendency of individuals to emulate others’ behavior, that there can be huge social consequences from the clever framing of the choices that nudgeable people—almost all of us—make. Choice architects understand that every choice is made in a context, and that contexts are not “neutral”—they inevitably encourage certain outcomes. Organizing the context can promote outcomes beneficial to choosers and, cumulatively, to society.

It’s describing a thesis behind the book “Nudge: Improving Decisions about Health, Wealth and Happiness” from a couple of people who just happen to also be advising Obama.

Will’s examples are things like automatic-yet-optional enrollment in an employer’s 401k, or automatic-yet-optional defaulting organ-donor checkboxes on drivers’ licenses.

But, beyond the implications for government (which I think are fascinating, but don’t have time to get into right now), I think this is an excellent way of articulating something I’ve been trying to explain for quite a while about digital environments. Basically, that even in digital environments, there are ways to ‘nudge’ people’s decisions — both explicit and tacit — with the way you shape the focus of an interface, the default choices, the recommended paths. But you still give them plenty of freedom.
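To make the default-effect concrete, here’s a toy simulation — my own illustrative sketch, not anything from the book, and the probabilities are entirely made up — of how flipping a single default can swing enrollment even when everyone’s underlying preferences stay the same:

```python
import random

def enrollment_rate(default_enrolled, inaction_prob=0.7, trials=10_000, seed=42):
    """Simulate a population choosing whether to enroll in a 401(k).

    Each person either sticks with the default out of inertia
    (with probability inaction_prob) or makes an active, unbiased
    50/50 choice. All numbers here are illustrative assumptions.
    """
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(trials):
        if rng.random() < inaction_prob:
            choice = default_enrolled        # inertia: keep the default
        else:
            choice = rng.random() < 0.5      # active, unbiased choice
        enrolled += choice
    return enrolled / trials

# Same population, same freedom to choose -- only the default differs.
opt_out = enrollment_rate(default_enrolled=True)   # enrolled unless they act
opt_in = enrollment_rate(default_enrolled=False)   # unenrolled unless they act
```

With these made-up numbers, the opt-out design enrolls roughly 85% of people and the opt-in design roughly 15%. Nobody is coerced; the context just tilts the outcome.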

To the more libertarian or paranoid folks, this might sound horribly big-brother. But that worry only applies if you actually have a choice between a system and no system at all. The assumption is that — as with government — anarchy isn’t an option and you have to build *something*. Once you acknowledge that you have to build it, then you have to make these decisions anyway. Why not make them with some coherent, whole understanding of the healthiest, most beneficial outcomes?

The question then becomes, what is “beneficial” and to whom? That’ll be driven by a given organization’s goals and values. But the technique is neutral — and should be considered in the design of any system.

Edward Tufte on the iPhone Fri, 20 Jun 2008 19:38:14 +0000 I don’t know how I missed this before, but I’m glad I ran across it.

If you haven’t seen this very brief clip of Edward Tufte critiquing the iPhone interface, check it out.

A couple of salient quotes:

“The idea is that the content is the interface, the information is the interface, not computer-administrative debris.”

“Here’s the general theory: To clarify, add detail. Imagine that. To clarify, add detail. And … clutter and overload are not an attribute of information, they are failures of design. If the information is in chaos, don’t start throwing out information, instead fix the design.”

“Personal Branding” vs. Inkblurt Wed, 18 Jun 2008 21:05:59 +0000 Chris Brogan has a great post about 100 Personal Branding Tactics Using Social Media, with some helpful tips on creating that thing we keep hearing about: “the Personal Brand.”

I’ve always struggled with this, though. I’ve been doing this “blogging” thing a long time. In fact, my first “home page” was a text-only index file. Why? Because there weren’t any graphical Web browsers yet. And even once there were, the only people who were online to look at any such thing were net-heads like myself. There was already a sense of informality and mutual understanding, and “netizens” seemed to prize a level of authenticity above almost anything else. Anything that looked like a personal “brand” was suspect.


So, something about the DNA of my initial forays into personal expression on the ‘net has stuck with me. Namely, that it’s my little corner of the world, where I say what’s on my mind, take it or leave it, with very little concern about my brand or what-not. I am not saying this is a good thing. It just is.

Over the years, though, I’ve become more conscious of the shift in context. It’s like I had a little corner lot in a small town, with a ramshackle house and flotsam in the yard, and ten years later I look out to see somebody developed a new subdivision around me, with McMansions, chemically enhanced lawns, and joggers wearing those special clothes that you only wear if you’re really *into* jogging. You know what I mean.

And now I’m just not sure where my blog stands in all this. I don’t keep up with it often, but if I do it’s not because I’ve set a goal for myself, it’s just because my brainfartery is more active (and long-form) than usual. I feel the need to have a more polished, disciplined blog-presence, with all the right trimmings … but then I’d miss having this thing here. And I know for a fact that if I had both, I’d be so short-circuited about which I should post on, I’d end up doing nothing with either of them.

Or maybe I’m just lazy?

Note: One of Brogan’s awesome tips is to add some visual interest with each post; hence a CC licensed image from mharrsch.

The Hyperlink Wed, 18 Jun 2008 15:07:51 +0000 Whenever I say that the Hyperlink changed the world, people look at me like “huh?” The lowly hyperlink is often overlooked as just a ‘feature’ of the Internet or the Web in particular. But I’ve always thought that was a bit backwards. The hyperlink is what made the web possible — it is for the Web what carbon is for carbon-based life-forms.

So I was tickled to find that Alex Wright’s excellent article on The Mundaneum Museum has this gem of a quotation from Kevin Kelly:

“The hyperlink is one of the most underappreciated inventions of the last century,” Mr. Kelly said. “It will go down with radio in the pantheon of great inventions.”

Public vs Published Mon, 16 Jun 2008 16:04:08 +0000 When I first heard about the Kozinski story (some mature content in the story), it was on NPR’s All Things Considered. The interviewer spoke with the LA Times reporter, who went on about how the judge had “published” offensive material on a “public website.”

I won’t go into detail on the story itself. But I urge anyone to take the LA Times article with a grain or two of salt. Evidently, the thing got started when someone who had an ax to grind with the judge sent links and info to the media, and said media went on to make it all look as horrible as possible. However, the more we learn about the details in the case, the more it sounds like the LA Times is twisting the truth a great deal. **

To me, though, the content issue isn’t as interesting (or challenging) as the “public website” idea.

Basically, this was a web server with an IP and URL on the Internet that was intended for family to share files on, and whatever else (possibly email server too? I don’t know). It’s the sort of thing that many thousands of people run — I lease one of my own that hosts this blog. But the difference is that Kozinski (or, evidently, his grown son) set it up to be private for just their use. Or at least he thought he had — he didn’t count on a disgruntled individual looking beyond the “index” page (that clearly signaled it as a private site) and discovering other directories where images and what-not were listed.
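For what it’s worth, the technical slip here is a common one. A stock Apache-style web server will happily auto-generate a browsable file listing for any directory that lacks its own index file, even when the site’s front page looks private. A sketch of the fix (assuming an Apache setup, which the story doesn’t confirm; the directory path is hypothetical):

```apache
# mod_autoindex auto-generates a file listing for any directory that
# has no index file of its own. A private-looking index page at the
# site root hides nothing: subdirectories are still browsable by URL
# unless listings are explicitly disabled.
<Directory "/var/www/family">
    Options -Indexes
</Directory>
```

In other words, the “lock” on the front door says nothing about the side doors — which is exactly the faulty-lock situation Lessig describes.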

Lawrence Lessig has a great post here: The Kozinski mess (Lessig Blog). He makes the case that this wasn’t a ‘public’ site at all, since it wasn’t intended to be public. You could only see this content if you typed various additional directories onto the base URL. Lessig likens it to having a faulty lock on your front door, and someone snooping in your private stuff and then telling about it. (Saying it was an improperly installed lock would be more accurate, IMHO.)

The comments on the page go on and on — much debate about the content and the context, private and public and what those things mean in this situation.

One point I don’t see being made (possibly because I didn’t read it all) is that there’s now a difference between “public” and “published.”

It used to be that anything extremely public — that is, able to be seen by more than just a handful of people — could only be there if it was published that way on purpose. It was impossible for more than just the people in physical proximity to hear you, see you or look at your stuff unless you put a lot of time and money into making it that way: publishing a book, setting up a radio or TV station and broadcasting, or (on the low end) using something like a CB radio to purposely send out a public signal (and even then, laws limited the power and reach of such a device).

But the Internet has obliterated that assumption. Now, we can do all kinds of things that are intended for a private context that unwittingly end up more public than we intended. By now almost everyone online has sent an email to more people than they meant to, or accidentally sent a private note to everyone on Twitter. Or perhaps you’ve published a blog article that you only thought a few regular readers would see, but find out that others have read it who were offended because they didn’t get the context?

We need to distinguish between “public” and “published.” We may even need to distinguish between various shades of “published” — the same way we legally distinguish between shades of personal injury — by determining intent.

There’s an informative thread over at Groklaw as well.

**About the supposedly pornographic content, I’ll only say that it sounds like there was no “pornography” as typically understood on the judge’s server, but only content that had accumulated from the many “bad-taste jokes” that get passed around the net all the time. That is, nothing more offensive than you’d see on an episode of Jackass or South Park. Whether or not that sort of thing is your cup of tea, and whether or not you think it is harmfully degrading to any segment of society, is certainly your right. Some of the items described are things that I roll my eyes at as silly, vulgar humor, and then forget about. But describing a video (which is currently on YouTube) where an amorously confused donkey tries to mount a guy who was (inadvisedly) trying to relieve himself in a field as “bestiality” is pretty absurd. Monty Python it ain’t; but Caligula it ain’t either.

Birth of the Internet Wed, 04 Jun 2008 18:24:06 +0000 Everybody’s linking to this article today, but I had to share a chunk of it that gave me goosebumps. It’s this bit from Leonard Kleinrock:

September 2, 1969, is when the first I.M.P. was connected to the first host, and that happened at U.C.L.A. We didn’t even have a camera or a tape recorder or a written record of that event. I mean, who noticed? Nobody did. . . . on October 29, 1969, at 10:30 in the evening, you will find in a log, a notebook log that I have in my office at U.C.L.A., an entry which says, “Talked to SRI host to host.” If you want to be, shall I say, poetic about it, the September event was when the infant Internet took its first breath.

IDEA 2008 Tue, 03 Jun 2008 17:04:04 +0000

I’d like to encourage everyone to attend IDEA 2008, a conference (organized by the IA Institute) that’s been getting rave reviews from attendees since it started in 2006. It’s described as “A conference on designing complex information spaces of all kinds” — and it’s happening in grand old Chicago, October 7-8, 2008.

Speakers on the roster include people from game design, interaction design and new-generation advertising/marketing, and the list is growing, including (for some reason) my own self. I think I’m going to be talking about how context works in digital spaces … but I have until October, so who knows what it’ll turn into?

IDEA is less about the speakers, though, than the topics they spark, and the intimate setting of a few hundred folks all seeing the same presentations and having plenty of excuses to converse, dialog and generally brou some haha.

Strategy and Innovation: Strange Bedfellows Wed, 28 May 2008 14:25:40 +0000 This is based on a slide I’ve been slipping into decks for over a year now as a “quick aside” comment; but it’s been bugging me enough that I need to get it out into a real blog post. So here goes.

We hear the words Strategy and Innovation thrown around a lot, and often we hear them said together. “We need an innovation strategy.” Or perhaps “We need a more innovative strategy” which, of course, is a different animal. But I don’t hear people questioning much exactly what we mean when we say these things. It’s as if we all agree already on what we mean by strategy and innovation, and that they just fit together automatically.

There’s a problem with this assumption. The more I’ve learned about Communities of Practice, the more I’ve come to understand about how innovation happens. And I’ve come to the conclusion that strategy and innovation aren’t made of the same cloth.


1. Strategy is top-down; Innovation is bottom-up

Strategy is a top-down approach. In every context I can think of, strategy is about someone at the top of a hierarchy planning what will happen, or what patterns will be invoked to respond to changes on the ground. Strategy is programmed, the way a computer is programmed. Strategy is authoritative and standardized.

Innovation is an emergent event; it happens when practitioners “on the ground” have worked on something enough to discover a new approach in the messy variety of practitioner effort and conversation. Innovation only happens when there is sufficient variety of thought and action; it works more like natural selection, which requires lots of mutation. Innovation is, by its nature, unorthodox.

2. Strategy is defined in advance; Innovation is recognized after the fact

While a strategy is defined ahead of time, nobody can seem to plan what an innovation will be. In fact, many (or most?) innovations are serendipitous accidents, or emerge from a side-project that wasn’t part of the top-down-defined work load to begin with. This is because the string of events that led to the innovation is never truly a rational, logical or linear process. In fact, we don’t even recognize the result as an innovation until after it’s already happened, because whether something is an innovation or not depends on its usefulness after it’s been experienced in context.

We fill in the narrative afterwards — looking back on what happened, we create a story that explains it for us, because our brains need patterns and stories to make sense of things. We “reify” the outcome and assume there’s a process behind it that can be repeated. (Just think of Hollywood, and how it tries to reproduce the success of surprise-hit films that nobody thought would succeed until they became successful.) I discuss this more in a post here.

3. Strategy plans for success in known circumstances; Innovation emerges from failure in unknown circumstances.

One explicit aim of a strategy is to plan ahead of time to limit the chance of failure. Strategy is great for things that have to be carried out with great precision according to known circumstances, or at least predicted circumstances. Of course strategy is more complex than just paint-by-numbers, but a full-fledged strategy has to have all predictable circumstances accounted for with the equivalent of if-then-else statements. Otherwise, it would be a half-baked strategy. In addition, strategy usually aims for the highest level of efficiency, because carrying something off with the least amount of friction and “wasted” energy often makes the difference between winning and losing.

However, if you dig underneath the veneer of the story behind most innovations, you find that there was trial and error going on behind the scenes, and lots of variety happening before the (often accidental) eureka moment. And even after that eureka moment, the only reason we think of the outcome as an innovation is because it found traction and really worked. For every product or idea that worked, there were many that didn’t. Innovation sprouts from the messy, trial-and-error efforts of practitioners in the trenches. Bell Labs, Xerox PARC and other legendary fonts of innovation were crucibles of this dynamic: whether by design or accident, they had the right conditions for letting their people try and fail often enough and quickly enough to stumble upon the great stuff. And there are few things less efficient than trial and error; innovation, or the activity that results in innovation, is inherently inefficient.

So Innovation and Strategy are incompatible?

Does this mean that all managers can do is cross their fingers and hope innovation happens? No. What it does mean is that having an innovation strategy has nothing to do with planning or strategizing the innovation itself. To misappropriate a quotation from Ecclesiastes, such efforts are all in vain and like “striving after wind.”

Managing for innovation requires a more oblique approach, one which works more directly on creating the right conditions for innovation to occur. And that means setting up mechanisms where practitioners can thrive as a community of practice, and where they can try and fail often enough and quickly enough that great stuff emerges. It also means setting up mechanisms that allow the right people to recognize which outcomes have the best chance of being successes — and therefore, end up being truly innovative.

I’m as tired of hearing about Apple as anyone, but when discussing innovation it always comes up. We tend to think of Apple as linear, controlled and very top-down. The popular imagination seems to buy into a mythic understanding of Apple — that Steve Jobs has some kind of preternatural design compass embedded in his brain stem.

Why? Because Jobs treats Apple like theater, and keeps all the messiness behind the curtain. This is one reason why Apple’s legal team is so zealous about tracking down leaks. For people to see the trial and error that happens inside the walls would not only threaten Apple’s intellectual property, it would sully its image. But inside Apple, the strategy for innovation demands that design ideas be generated in multitudes like fish eggs, because they’re all run through a sort of artificial natural-selection mechanism that kills off the weak and lets only the strongest ideas rise to the top. (See the Business Week article describing Apple’s “10 to 3 to 1” approach.)
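That “10 to 3 to 1” funnel is easy to picture as staged selection over a pool of candidates. Here’s a toy sketch — the idea names, stage quotas, and random scoring are all made up for illustration, not a description of Apple’s actual process:

```python
import random

def winnow(ideas, stages=(10, 3, 1), score=None):
    """Toy model of staged selection: start with many candidate ideas,
    then cut the pool down to each successive stage's quota, ending
    with a single survivor."""
    # Stand-in for real-world vetting: a noisy, arbitrary fitness score.
    score = score or (lambda idea: random.random())
    survivors = list(ideas)
    for keep in stages[1:]:
        # Rank by (noisy) fitness and keep only the stage's quota.
        survivors = sorted(survivors, key=score, reverse=True)[:keep]
    return survivors

candidates = [f"mockup-{i}" for i in range(10)]
print(winnow(candidates))  # one idea survives the 10 -> 3 -> 1 funnel
```

The point of the sketch is that the “strategy” lives in the funnel’s structure (how many candidates, how many stages, how ruthlessly each stage cuts), not in predicting which candidate will win.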

Google does the same thing, but they turn the theater part inside-out. They do a modicum of concept-vetting inside the walls, but as soon as possible they push new ideas out into the marketplace (their “Labs” area) and leverage the collective interest and energy of their user base to determine if the idea will work or not, or how it should be refined. (See accounts of this philosophy in a recent Fast Company article.) People don’t mind using something at Google that seems to be only half-successful as a design, because they know it’ll be tweaked and matured quickly. Part of the payoff of using a Google product is the fun of seeing it improved under your very fingertips.

One thing I wonder: to what extent do any of these places treat “strategy” as another design problem to be worked out in the bottom-up, emergent way that they generate their products? I haven’t run across anything that describes such an approach.

At any rate, it’s possible to have an innovation strategy. It’s just that the innovation and the strategy work from different corners of the room. Strategy sets the right conditions, oversees and cultivates the organic mass of activity happening on the floor. It enables, facilitates, and strives to recognize which ideas might fit the market best — or strives to find low-impact ways for ideas to fail in the marketplace in order to winnow down to the ones that succeed. And it’s those ideas that we look back upon and think … wow, that’s innovation.

A Model for Understanding Professional Identity and Practice Wed, 07 May 2008 14:12:35 +0000 In the closing talk for this year’s IA Summit, I had a slide that explains the various layers that make up what we use the term “Information Architect” (or “Information Architecture”) to denote. I think it’s important to be self-aware about it, because it helps us avoid a lot of wasted breath and miscommunication.

But I also stressed that I don’t think this model is only true of IA. So please, feel free to replace “IA” in the diagram with the name of any practice, profession or domain of work.

To understand this diagram, especially the part about Practice, it helps to have a basic understanding of what “practice” is and how it emerges from a community that coalesces around a shared concern. The Linkosophy deck gets into that, and my UX as Communities of Practice deck does as well, while getting into more detail about the participation/reification dynamic Wenger describes in his work.

Here’s the model: I’ll do a bit of explanation after the jump.

[Diagram: the title and role stack (small version)]

Starting from the bottom:

1. IA as a thing: the object we work on, the material we work with. We might say “hey could you look at the IA in these wireframes and see if it makes sense?”
2. IA as an activity: the literal act of working on the ‘thing’ … “doing” IA.
3. IA as a role: the “hat” you wear that says “I’m a person working on this at the moment” … like in baseball, for a while you’re a pitcher, then later you’re a batter. These are just temporary roles used to designate what activity you’re performing.
4. IA as a practice: the shared history of learning among people who affiliate strongly with the role over time.
5. IA is sometimes a title: but titles are really different … they’re not necessarily based on the actual work you do or the practice you affiliate with; they’re arbitrary labels assigned to you by some authority.

A title is something you give yourself, or that your employer gives you. But it very often has little to do with your actual job; and it especially rarely has anything to do with the practice with which you affiliate most closely. How many restaurant waiters are really actors? Waiting tables isn’t their preferred practice, but it may be the most official title they have.

Now, the waiting-tables example is an extreme one, but many UX practitioners find themselves in jobs where much of their work isn’t even centered on UX … in fact, most of us put up with 50% bureaucratic junk just so we can spend the other 50% doing real User Experience work, which is what truly interests us. Even within the UX family of practices, most of us wear multiple “hats” — play multiple roles — within our daily work.

Among UX practitioners, some of us gravitate more toward the IxD community and its obsessions, while some of us tend to focus more on the IA side of things, or the Usability/Evaluative sort of work. This is perfectly natural, because it’s a normal (and somewhat unavoidable) pattern of human behavior to form one’s identity in part based on a chosen practice. Etienne Wenger has said that practices are natural “homes for identities.”

We get bent out of shape and very emotional about debates around practice definition mainly because of this very human dynamic where our practice-of-choice becomes a big part of our identities.

But some perspective is in order, especially now that so many practices can’t work in ivory silos anymore — the new, accelerated culture that’s been catalyzed and enabled by the Internet demands that we get over ourselves and learn how to work with other tribes.

Vint Cerf on Al Gore’s Internet Contribution Fri, 02 May 2008 14:38:24 +0000 The granddaddy of the Internet clarifies a popular misconception.

Print What I’ve Learned: Vint Cerf
Al Gore had seen what happened with the National Interstate and Defense Highways Act of 1956, which his father introduced as a military bill. It was very powerful. Housing went up, suburban boom happened, everybody became mobile. Al was attuned to the power of networking much more than any of his elective colleagues. His initiatives led directly to the commercialization of the Internet. So he really does deserve credit.

Something tells me you won’t hear this quoted on Fox News. (Or from hardly anyone else, probably.)

Simulation: the catalyst for IA & IxD? Wed, 16 Apr 2008 17:11:40 +0000 In the “Linkosophy” talk I gave on Monday, I suggested that a helpful distinction between the practices of IxD & IA might be that IxD’s central concern is within a given context (a screen, device, room, etc) while IA’s central concern is how to connect contexts, and even which contexts are necessary to begin with (though that last bit is likely more a research/meta concern that all UX practices deal with).

But one nagging question on a lot of people’s minds seems to be “where did these come from? haven’t we been doing all this already but with older technology?”

I think we have, and we haven’t.

Both of these practices build on earlier knowledge & techniques that emerged from practices that came before. Card sorting & mental models were around before the IA community coalesced around the challenges of infospace, and people were designing devices & industrial products with their users’ interactions in mind long before anybody was in a community that called itself “Interaction Designers.” That is, there were many techniques, methods, tools and principles already in the world from earlier practice … but what happened that sparked the emergence of these newer practice identities?

The key catalyst for both, it seems to me, was the advent of digital simulation.

For IA, the digital simulation is networked “spaces” … infospace that’s made of bits and not atoms, where people cognitively experience one context’s connection to another as moving through space, even though it’s not physical. We had information, and we had physical architecture, but they weren’t the same thing … the Web (and all web-like things) changed that.

For IxD, the digital simulation is with devices. Before digital simulation, devices were just devices — everything from a deck chair to an umbrella, from a power drill to a jackhammer, was a real, three-dimensional, industrially made product with real switches, real handles, real feedback. We didn’t think of them as “interactive” or as having “interfaces” — because three-dimensional reality is *always* interactive, and it needs no “interface” to translate human action into non-physical effects. Designing these things is “Industrial Design” — and it’s been around for quite a while (though, frankly, only a couple of generations).

The original folks who quite consciously organized around the collective banner of “interaction designer” are digital-technology-centric designers. Not to say that they’ve never worked on anything else … but they’re leaders in that practitioner community.

Now, this is just a comment on origins … I’m not saying they’re necessarily stuck there.

But, with the digital-simulation layer soaking into everything around us, is it really so limiting to say that’s the origin and the primary milieu for these practices?

Of course, I’m not trying to build silos here — only clarify for collective self-awareness purposes. It’s helpful, I believe, to have shared understanding of the stories that make up the “history of learning and making” that forms our practices. It helps us have healthier conversations as we go forward.

Linkosophy Wed, 16 Apr 2008 00:06:04 +0000 In 2008 I had the distinct honor to present the closing plenary for the IA Summit in Miami, FL. Here’s the talk in its entirety. Unfortunately the podcast version was lost, so there’s no audio version, but 99% of what I had to say is in the notes.

NOTE: To make sense of this, you’ll need to read the notes in full-screen mode. (Or download the 6 MB PDF version.)

(Thanks to David Fiorito for compressing it down from its formerly gigantic size!)

Giving this talk at the IA Summit was humbling and a blast; I’m so grateful for the positive response, and the patience with these still-forming ideas.

If you’re after some resources on Communities of Practice and the like, see the post about the previous year’s presentation which has lots of meaty links and references.

Twitter Info for … Me! Sun, 13 Apr 2008 20:59:08 +0000 Hey, I’m Andrew! You can read more about who I am on my About page.

If I had a “Follow” button on my forehead, and you met me in person and pushed that button, I’d likely give you a card that had the following text written upon it:

Here’s some explanation about how I use Twitter. It’s probably more than you want to read, and that’s ok. This is more a personal experiment in exploring network etiquette than anything else. If you’re curious about it and read it, let me know what you think?


  • I use Twitter for personal expression & connection; self-promotion & “personal brand” not so much (that’s more my blog’s job, but even there not so much).
  • I hate not being able to follow everyone I want to, but it’s just too overwhelming. There’s little rhyme/reason to whom I follow or not. Please don’t be offended if I don’t follow you back, or if I stop following for a while and then start again, or whatever. I’d expect you to do the same to me. All of you are terribly interesting and awesome people, but I have limited attention.
  • Please don’t assume I’ll notice an @ mention within any time span. I sometimes go days without looking.
  • Direct-messages are fine, but emails are even better and more reliable for most things (imho).
  • If you’re twittering more than 10 tweets a day, I may have to stop following just so I can keep up with other folks.
  • If you add my feed, I will certainly check to see who you are, but if there’s zero identifying information on your profile, why would I add you back?

A Few Guidelines for Myself (that I humbly consider useful for everybody else too ;-)

  • I’ll try to keep tweets to about 10 or less a day, to avoid clogging my friends’ feeds.
  • I’ll avoid doing scads of “@” replies, since Twitter isn’t a great conversation mechanism, but is pretty ok as an occasional comment-on-a-tweet mechanism.
  • I won’t use any automated mechanism to track who “unfollows” me. And if I notice you dropped me, I won’t think about it much. Not that I don’t care; just seems a waste of time worrying about it.
  • I won’t try to game Twitter, or work around my followers’ settings (such as defeating their @mentions filter by putting something before the @, forcing them to see replies they’d otherwise not have to skip.)
  • I’ll avoid doing long-form commentary or “live-blogging” using Twitter, since it’s not a great platform for that (RSS feed readers give the user the choice to read each poster’s feed separately; Twitter feed readers do not, and allow over-tweeting to crowd out other voices on my friends’ feeds.)
  • I’ll post links to things only now and then, since I know Twitter is very often used in (and was intended for) mobile contexts that often don’t have access to useful web browsers; and when I do, I’ll give some context, rather than just “this is cool …”
  • I will avoid using anything that automatically Tweets or direct-messages through my account; these things simply offend me (e.g. if I point to a blog post of mine, I’ll actually type a freaking tweet about it).
  • In spite of my best intentions, I’ll probably break these guidelines now and then, but hopefully not too much, whatever “too much” is.

Thanks for indulging my curmudgeonly Twitter diatribe. Good day!

More on Flourishing Wed, 19 Mar 2008 16:56:19 +0000 Since so much of our culture is digitized now, we can grab clippings of it and spread it all over our identities the way we used to decorate our notebooks with stickers in grade school. Movies, music, books, periodicals, friends, and everything else. Everything that has a digital referent or avatar in the pervasive digital layer of our lives is game for this appropriation.

I just ran across a short post on honesty in playlists.

The what-I’m-listening-to thing always strikes me as aspirational rather than documentary. It’s really not “what I’m listening to” but rather “what I would be listening to if I were actually as cool as I want you to think I am.”

And my first thought was: but where, in any other part of our lives, are we that “honest”?

Don’t we all tweak our appearances in many ways — both conscious and unconscious — to improve the image we present to the world? Granted, some of us do it more than others. But everybody does it. Even people who say they’re *not* like this actually are … to choose to be style-free is a statement just as strong as being style-conscious, because it’s done in a social context too, either to impress your other style-free, logo-hating friends, or to define yourself over-against the pop-culture mainstream.

Now, of course it would be dishonest to list favorite movies and books and music that you neither consume nor even really like. But my guess is a very small minority do that.

Our decorations have always been aspirational. Always. From idealizing the hunt in cave wall drawings, to the still-life paintings of unaffordable bounty hung in middle-class Renaissance homes, all the way to choosing which books to put on the eye-level shelves in your apartment, or making a cool playlist of music for a party. We never expose *everything* in our lives; we always select subsets that tell others particular things about us.

The digital world isn’t going to be any different.

(See earlier post on Flourishing.)

Gygax attenuates the mortal process (via xkcd) Tue, 18 Mar 2008 05:05:01 +0000 [Comic: xkcd, “gygax calls in a paladin”]

IASummit 2008 Mon, 10 Mar 2008 17:04:25 +0000 Meet me at the IA Summit
Some very nice and well-meaning people have asked me to speak as the closing plenary at the IASummit conference this year, in Miami.

This is, as anyone who has been asked to do such a thing will tell you, a mixed blessing.

But I’m slogging through my insanely huge bucket of random thoughts from the last twelve months to surface the stuff that will, I dearly hope, be of interest and value to the crowd. Or, at the very least, keep their hungover cranial contents entertained long enough to stick around for Five-Minute Madness.

“Linkosophy” is a homely title. But it’s a hell of a lot catchier than “Information Architecture’s Role in the UX Context: What Got It Here, What It’s About, and Where It Might Be Headed.” Or some such claptrap.

Here’s the description and a link:

Closing Plenary: Linkosophy
Monday April 14 2008, 3:00 – 4:00PM

At times, especially in comparison to the industrial and academic disciplines of previous generations, the User Experience family of practices can feel terribly disorganized: so little clarity on roles and responsibilities, so much dithering over semantics and orthodoxy. And in the midst of all this, IA has struggled to explain itself as a practice and a domain of expertise.

But guess what? It turns out all of this is perfectly natural.

To explain why, we’ll use IA as an example to learn about how communities of practice work and why they come to be. Then we’ll dig deeper into describing the “domain” of Information Architecture, and explore the exciting implications for the future of this practice and its role within the bigger picture of User Experience Design.

In addition, I’ve been dragooned (but in a nice way … I just like saying “dragooned”) to participate in a panel about “Presence, identity, and attention in social web architecture” along with Christian Crumlish, Christina Wodtke, and Gene Smith, three people who know a heck of a lot more about this than I do. Normally when people ask me to talk about this topic, I crib stuff from slides those three have already written! Now I have to come up with my own junk. (Leisa Reichelt is another excellent thinker on this “presence” stuff, btw. And since she’s not going to be there, maybe I’ll just crib *her* stuff? heh… just kidding, Leisa. Really.)

Seriously, it should be a fascinating panel — we’ve been discussing it on a mailing list Christian set up, so there should be some sense that we actually prepared for it.

Social Architectures Compared Thu, 28 Feb 2008 21:47:07 +0000 There are some insightful comments on how moderation architectures affect the emergent character of social platforms in Chris Wilson’s article on Slate:
Digg, Wikipedia, and the myth of Web 2.0 democracy.

He explains how the rules structures of Wikipedia and Digg have resulted (ironically) in highly centralized power structures and territorialism. A quote:

While both sites effectively function as oligarchies, they are still democratic in one important sense. Digg and Wikipedia’s elite users aren’t chosen by a corporate board of directors or by divine right. They’re the people who participate the most. Despite the fairy tales about the participatory culture of Web 2.0, direct democracy isn’t feasible at the scale on which these sites operate. Still, it’s curious to note that these sites seem to have the hierarchical structure of the old-guard institutions they’ve sought to supplant.

He goes on to explain how Slashdot’s moderator-selection rules help to keep this top-heavy effect from happening, by making moderator status a bit easier to acquire, at more levels of involvement, while still keeping enough top-down oversight to keep consistent quality levels high.

Personas and the Role of Design Docs up at B&A Wed, 27 Feb 2008 06:13:55 +0000 So, my article is up… thanks to all the excellent editors who pushed me to finish the dang thing over the last seven months. Procrastination is a fine art, my friends.

Personas and the Role of Design Documentation – Boxes and Arrows

Here’s a nugget:

A persona document can be very useful for design—and for some teams even essential. But it’s only an explicit, surface record of a shared understanding based on primary experience. It’s not the persona itself, and doesn’t come close to taking the place of the original experience that spawned it.

Without that understanding, the deliverables are just documents, empty husks. Taken alone, they may fulfill a deadline, but they don’t feed the imagination.

Edited to Add:

Already I’m getting some great feedback, and I’m realizing that I may not have made things quite clear enough in the article.

The article is meant as a corrective statement, to a degree. I focus so strongly on what I see as the *first* priority of methods and documentation in design work—shared artifacts for the design process, because I think this has gotten lost in the conventional wisdom of “documents for stakeholders.” So, I amped up my point in the other direction, trying to drag the pendulum more toward the center.

I was careful to point out that stakeholder communication is also, of course, a very important goal. But it is a SEPARATE goal. It may even require creating separate deliverables to achieve!

We too often get caught up in using documentation as a tool for convincing other people, rather than tools for collaborative design among the practitioners. I may have overstated my case, though, and, alas, obscured these caveats I scattered throughout.

In short: I wanted to emphasize that personas are first and foremost the act of empathetic imagination for design; and I wanted to emphasize that all design documentation is first and foremost an artifact/tool for collaborative reflection, shared understanding and iteration. As long as we remember these things, we can then go on to make all the persona descriptions and slick stakeholder deliverables we want and need to get the rest of the job done.

Maybe I should’ve used that “in short” statement in the article? But, I guess if I’d kept revising, it’d have taken me another six months!

Please do keep the feedback coming, though. Mostly, I’m wanting to spark conversations like these!

The Cultivation Equation for Social Design (Part 1) Mon, 25 Feb 2008 21:43:22 +0000 Note: This is something I had embedded in a few very long presentations from last year, and I’m realizing it would probably be useful (to me if nobody else) to elaborate on it as its own topic. Here’s the first part.

[Image: the cultivation equation, Cultivation = Motivation / Moderation]

There’s a lot of writing and thinking happening around the best approaches to designing platforms for social activity. I certainly haven’t read it all, and it keeps being added to every day. But from what I have read, and from the experiences I’ve had with social design factors, I distilled the basics down to a simple equation. “Cultivation equals Motivation divided by Moderation.” It sounds like a no-brainer, to those of us who’ve been thinking about this stuff for a while. For me, though, it helps keep focus on the three most important elements to consider with any social design undertaking.


Cultivation requires that we recalibrate the approaches we’ve inherited from traditional top-down ideas of social management & design. In other words, it’s cultivation rather than dictation. To ‘cultivate’ something implies that there is an existing culture — some organic, emergent, collective entity — that exists regardless of our intrusion, with its own natural rhythms and patterns.

Communities Happen

How do we help a community maintain its health, value and effectiveness for the individuals involved in it? We certainly don’t start re-defining it and prescribing (or pre-scripting) every process and action. Rather than dictating the content of the culture’s behavior, we create and manage the right conditions for the community to improve itself on its own terms. This is much more like gardening than managing in the traditional sense.

You can’t create a community by fiat. You can’t legislate or force participation — then all you get is a process, not social interaction. Social interaction may take place under the surface, but that’s in spite of your central planning, not because of it. Communities happen in an emergent way, on their own.

Mistaking the Ant-Hill for the Colony

It’s easy to make the mistake of thinking that the software for an online community actually is, in some way, the community itself — that the intentionally designed technology “network” is the social network. But these technological tools are a medium for the thing, not the thing itself. It’s like mistaking the ant hill for the ant colony. We often point at ant hills and say “there’s an ant colony” but the social behaviors of the ants exist whether they happen in that pile of earth or another.

Social software platforms tap into conversations that already exist in some form or another. At best they can enable and amplify those conversations and help them broaden outside of their original confines, even redefine themselves in some way. Of course, many of the connections people make on these platforms may never have happened without the software, but there had to be the propensity for those connections to happen to begin with.

Designing for social activity, then, is about creating infrastructure that helps communities and social patterns behave according to their own natures. Even the social character of the network isn’t created by the software. Rather, the platform’s architecture encourages only certain kinds of extant networking behaviors to thrive.

Take, for instance, LinkedIn vs MySpace. LinkedIn didn’t create the behavior of calm, professional networking interactions, introductions and linking between peers. That kind of behavior was going on long before LinkedIn launched. But its architecture is such that it allows and encourages only that kind of social interaction to take root. MySpace, on the other hand, is much more open architecturally; linking is much more informal, and self-expression is almost completely unfettered. The nature of the MySpace platform, however, essentially guarantees that few will want to use it for the sober, corporate-style networking that happens on LinkedIn. (Lots of professional work goes on in MySpace, of course, but mainly in the creative & performing arts space, where self-expression and unique identity cues are de facto requirements.)

So the character of the platform’s architecture — its rules and structures — determines the character of social behavior that your platform is most likely to attract and support. But once you’ve done that, then how do you cultivate it?


One important factor is something you can’t create artificially: the cultivators have to be invested in the community they’re cultivating. This cannot be faked. There are too many levels of tacit understanding — gut-level feel — involved in grasping the nuances of a particular culture. You have to be willing to get your hands dirty, just like in a garden. Communities are fine with having decisions and rule-creation happening from some top-down component (which we’ll talk about in a minute) but only if they perceive the authority as having an authentic identity within the community, and if any design changes or “improvements” to the platform come from shared values.

Example: one reason for Facebook’s public-relations troubles of late is that a number of the design decisions its creator has made have come across as being less about cultivating the community than about lining the pockets of investors. Privacy advocates and regular users revolted, and forced Facebook to adjust its course.

Another example: MySpace managed to give new users the impression that the people running the site were just “one of them” by creating the ubiquitous persona of “Tom.” Tom is a real person, one of the co-founders of the platform, who has a profile, and who “welcomes” you to the network when you join. He’s the voice for announcements and such that come from those who created and maintain MySpace. Tom is real, to a point — recently, it was discovered that Tom’s age and information have been tweaked to make him seem more in line with the service’s target demographic. It’s arguable that by the time this disillusioning revelation occurred, MySpace had grown to enough critical mass that it didn’t matter. I suspect, though, that the Tom avatar still serves its purpose for millions of users who either don’t know about the news, or think of him more as the Ronald McDonald of the brand — a friendly face that gives the brand some personality, even if they don’t care if it’s a real person.

If you’re cultivating from an authentic stance, and you understand that your role isn’t dictator, then it’s a matter of executing cultivation by striking the right balance between Motivation and Moderation.

Next up … Motivation & Moderation. Stay tuned.

WordPress Comment Notification Fix Fri, 22 Feb 2008 01:35:06 +0000 I have no idea why, over the last few versions, WordPress hasn’t been able to notify me of comments, etc, but I’m very glad I found this post:
How to fix WordPress when it doesn’t notify you of comments | MeAndMyDrum

Evidently just commenting out a line in a root file of WordPress will do the trick.
