Justin Childress
Designer & Creative Director

Information

Essays

May 8th, 2017

Designing for Integrated Experience

I had the pleasure of speaking at HackDFW’s EARTHACK 2017 event this year, and below is a transcript of my presentation. I’ve been thinking a lot about integrated technological experience, and not just in a speculative sense of “what will we build” that integrates with human biology in a cyborgian sense of the word, but also as it relates to building less obtrusive products and interfaces for the devices we have now.

Download my full slide deck here.

(Most of these photos were sourced from The Wonderful World Wide Web, so just assume that I don’t own them.)

…..

This is an exciting time to be in design and tech, and this particular event is a manifestation of some of the best impulses of our field. Here at EarthHACK you all have come together to critically consider how we can approach both design and responsibility, how we can collaborate on tools and products and services that are not only innovative, but also helpful.

However, as we surf this bleeding edge of the future, I wonder: how is what we’re building, the software and the hardware and the physical environments, fundamentally affecting the human psyche? On some level, humans are biological hardware and software, our cultures and built environments are systems, and the question becomes, “are the products that we are creating compatible with the social systems that already exist?” Are we adequately considering the psychological and social contexts of our work?

As professor Peter Hancock notes, “Either technology works for you or you work for technology. It shapes the human race just as much as we shape it.”

It’s important to keep reminding ourselves how easy it is to become so micro-focused on the miraculous that we forget to zoom out from time to time to evaluate the complicated social ecosystems in which these tiny miracles are taking place.

Speaking of tiny miracles, for example, here we have a chameleon:

The other day I was watching the new Planet Earth series with my kids, and if you ever want to be re-invigorated by the impossibility of the natural world, just watch Planet Earth with a bunch of 6-year-olds. This creature is capable of things we can barely understand; its skin seems to be a sentient being within itself, able to adapt to its environment seemingly effortlessly, functional for both camouflage and communication. If humans were able to recreate this ability technologically, perhaps as a fabric, or even by messing around with our own DNA, it would be considered a peak achievement. Yet, it already exists right here. This chameleon has some amazing functionality at its disposal.

However, let’s zoom out a little bit from this little fella.

This is what I call “contextual dissonance”. What good is the chameleon’s ability now? It isn’t any less cool, but in this context it enters the realm of novelty.

That’s what I want to talk about today. It’s easy to focus on the miracle and the future of specific technologies without pulling back a bit to focus on their intent, and on how they integrate into the biologically based contexts into which we thrust them.

As Arthur Schlesinger put it, “Science and technology revolutionize our lives, but memory, tradition and myth frame our response.” We are inherently biological, our sense of self is linear, and we are fundamentally human-centered in our thinking.

The problem with talking about futurism on some level is that it’s hard to come up with a roadmap, or pull examples, because, well, it’s in the future. Therefore, when it comes to these speculations, our task is to define problems. Technological progress is often initiated via negativa; it is a process of filling the void.

However, it can be useful on some level to look to the past to anticipate the future, and to analyze the present. Personally, I’ve been thinking a lot about something called psychogeography, a concept created by Guy Debord in the late 1950s. He described psychogeography as “the study of the precise laws and specific effects of the geographical environment, consciously organized or not, on the emotions and behavior of individuals.”

A less academic definition describes psychogeography as “a whole toy box full of playful, inventive strategies for exploring cities… just about anything that takes pedestrians off their predictable paths and jolts them into a new awareness of the urban landscape.”

What provoked this particular social movement? What drew its evangelists to this wandering method of urban exploration? Even in the 50s there were rumblings of concern about the lack of attention or connection that people had with their environment, as the advent of radio and mass media started drawing people’s attention inward instead of outward, and so the erratic, experimental mapping methods of psychogeography were born.

From Wikipedia, the primary concern of Debord and his crew was “the progressively increasing tendency towards the expression and mediation of social relations through objects.”

The technology and methods change, but the challenge remains the same: how do we facilitate people’s cognitive presence in their environment instead of alienating them from it?

So, today I want to present you with a problem, a challenge, and a series of questions that I hope will encourage you to constantly reframe your own process as you seek to create technologies that more naturally integrate with existing human contexts.

I’ll be speaking specifically about smartphones and other mobile devices, since that’s my particular purview, but I think the principles, in general, are more widely applicable.

THE PROBLEM: Our devices are barriers.

We’ll talk about this from two perspectives, one of which leads into the other; the first is from the more popular media perspective, which I’m sure you’re all quite familiar with.

People are becoming addicted to their devices.

The number of smartphone users in the US is staggering, considering that they’ve only really been around for a decade. 77% of Americans own one today, up from 46% in 2012, and an incredible 92% of Americans 18-29 own a smartphone. Half of Americans own a tablet computer, which not too long ago was a true novelty.

Almost 70% of Americans of all ages use social media.

However, according to a recent study, 67% of smartphone owners admitted to checking their phone for calls or messages when their phone didn’t vibrate or ring.

(I’m going to go out on a limb here and bet that the other 33% of smartphone users simply did not want to admit that they, too, participate in this awkward phantom behavior.)

A recent survey also notes that 90% of the people surveyed said that 90% of the people they see walking around are glued to their phones, and that 64% of those walkers seem totally disconnected from life; yet only 38% of those surveyed admitted to ever zoning out themselves while walking.

Hmm.

A more objective survey estimates that around 60% of pedestrians are occupied by their smartphones at any given time. That means that only 40% of walkers are fully engaged with their environment.

The fact is, the mechanism that drives a lot of this is well-known.

These are not the same as Froot Loops, though I guess they both have to do with the pleasure centers of your brain. Who here is familiar with dopamine loops?

Dopamine is the chemical in our brains that gives us a little buzz when something exciting happens.

In the context of smartphones, a dopamine loop is initiated when a user sends something into the void that could potentially provoke a response. This could be a text message, a Twitter post, a Facebook update, an Instagram image, whatever. All the normal stuff that the youths do. However, dopamine loops are tied to anticipation, as in “gee whiz, I sure hope people like-fav-star-heart my thing,” and anticipation in turn is inherently tied to distraction. When you’re hungry, you want pizza, and therefore can’t concentrate on your code. Right?

Desire and anticipation provoke disengagement.

As Tom Kite put it, “you can always find a distraction if you’re looking for one.”

So, this is the first aspect of our devices as barriers, in that we’ve created a digital ecosystem that is foundationally built on seeking these “micro-pleasures”. This is why something like “gamification” works so well, and is therefore becoming pretty ubiquitous.

Is this in itself a problem? I’m unprepared to answer that definitively. I don’t think that pleasure is inherently bad. I’m not a Luddite who thinks we should abandon our technologies altogether. However, once again, it’s all about context, so let’s zoom out of the phone screen for a moment to the wider environment.

Here we have a situation in which individuals have a magical power, the power to access information at all times, to do all kinds of magic tricks and discover all kinds of things, and it’s all tied to this object in their pocket.

This leads us to the second way that our devices are barriers, which comes down to this:

Many times the way in which a software tool is built (I’m going to just call them apps for clarity) necessitates direct visual and physical interaction with hardware for use. The device is a barrier in that it literally, physically stands between the user and the environment. The device lives outside of the environment, and yet is supposed to facilitate interaction with the environment. This tends to cause problems.

Neville Brody, an iconic graphic designer, said this: “I want to make people more aware, not less aware.”

This is the crux of my interest in this issue. As designers, how do we approach our work both progressively and ethically? How do we balance innovation and awareness? What are our responsibilities as the architects of not only the future world, but the present one?

Every day we make active decisions about what we make and how it works, and all too often we can get so enamored with the details and possibilities of the techno-landscape that we forget about how what we make has to interact with the rest of a user’s rather complicated life in the present.

And as Abraham Maslow put it, “The ability to be in the present moment is a major component of mental wellness.”

This context-aware prioritization of the present should be one of our major focuses as product designers, because it influences our users on a very intimate, cognitive level.

We are all quite familiar with the danger of texting and driving, right? Over 2.5 million Americans are involved in traffic accidents every year, and at this point at least 1 in 4 accidents is related to texting and driving.

Texting and driving is approximately 6 times more likely to result in an accident than drunk driving.

It only takes about 3 seconds of distracted driving for an accident to occur. Most text messages take a minimum of 5 seconds to read and absorb.

So yeah, texting and driving is bad, can we all agree? Texting was not meant to be done while driving, and when you break the contextual rules, it can be bad.

However, the problem is not the micro-issue of texting itself; the problem is the macro-issue of distracted driving. So why is texting bad, but a navigation app ok?

They both require interaction. They both necessitate some amount of visual attention. They are both primarily graphic modes of information presentation. Both Google Maps and Apple Maps can include verbal directions, but those can be turned off, whereas the GUI cannot be.

Then there’s Waze.

I use this example a lot when I talk about dissonance of context, because Waze is an app that literally enables distracted driving. The context it is designed for is inherently incompatible with how human brains function. We cannot focus on more than one thing at once, and yet this app would not exist if humans were not expected to try.

In fact, Waze is coming under criticism right now for this issue, especially related to the fact that it has partnered with Spotify to enable cross-functionality between the two apps.

This is conceivably to simplify the process of both keeping track of traffic and listening to music in your car, but what is effectively happening is not a simplification, but a secondary complication. The attention becomes further fragmented, and the user is drawn further out of the primary environment (the car) into the secondary environment (the device) and then into 2 sub-environments, which are the individual apps themselves.

I repeat: context. Environment. In this case, the designer probably needed to start with “should I” before “can I.”

Paola Antonelli put it as such: “In an ideal world, social responsibility would be a prerequisite for design, and designers would vow to produce beautiful, useful, positive, responsible, functional, and economic things and concepts that are meaningful additions to—or sometimes subtractions from—the world we live in. Indeed, design deserves such thoughtful consideration.”

This kind of considered thinking not only carries the broader ethical burdens that we just nodded to, but is also relevant in light of smaller user experience decisions.

When do you all listen to podcasts?

Personally I listen to them when I’m wandering around my neighborhood pretending to exercise, or walking to lunch or something. Generally it’s some time when I’m moving through space, trying not to bump into other people or get hit by a car. And yet, check out this tiny tiny button:

Just try hitting that the first time while jogging.

Even something as simple as a button needs to anticipate context. That is, essentially, the core responsibility of User Experience designers.

So, these are just a couple of examples of how we, as designers and technologists, are still struggling to create adaptive interfaces that enhance a user’s engagement with their environment,

instead of pulling them out of it.

I’m not the first to recognize this problem by any means, obviously, and there are endless nuances to the ways that people are trying to address this very issue (whether it be through larger hardware such as self-driving cars, through voice-activated technology, through prostheses such as Google Glass, etc.). This field is young and evolving, and new things are happening every day.

However, here is my challenge to you, in the form of a question: How do we think beyond the paradigms of interactive interfaces that have already been created without feeling like we need to invent new hardware? How are we under-utilizing our existing platforms by falling back on “traditional” interaction design patterns? Patterns often based on overly-optimistic, potentially outmoded, maybe dangerous usage scenarios instead of contextual research?

For the sake of time I want to move from these isolated examples to some questions that I encourage you to consider as you work on your own projects. Just like design approaches are often considered via negativa, processes are often defined through frameworks built of questions instead of statements.

In my own practice I use what I call “frameworks of responsibility” to help inform my inquiry. These are the questions that help keep me user-focused instead of functionality-focused as I work through my product design process. This is an extremely fluid framework, and not all of these questions are relevant 100% of the time, but my goal here is really just to give you something to consider moving forward.

Ask yourself:

1. How often do people have to look at the device for the app to function properly? (i.e. how often are they pulled out of their environment?)

Think back to the wayfinding apps we discussed earlier. What research and testing are you doing to understand the effect that your product has on people’s attention? The more direct attention an app requires, the more cognitive competition it creates within a user’s overall environment.

2. Is the platform adaptive across multiple context-appropriate devices?

We haven’t even talked about this yet, but for a brief moment there “the device” almost universally meant “the smartphone.” With the advent of other items like smartwatches, tablets, etc., you are now able to consider the same piece of software across multiple pieces of hardware, all of which have their own physical contexts. Are you considering which functionalities flow across which devices? How are you making these decisions?

3. Does the device/ app create unnecessary dependency?

My wife talked to me the other day about how she sometimes feels like her devices are allowing her to “outsource her memory.” The question of what qualifies as “necessary dependency” is of course a gray one, but I think it’s important to critically consider this within the product design process. To what extent is your app outsourcing awareness and memory? Is it complementing or replacing a person’s ability to adapt to their situation, whatever that may be? That’s obviously a huge question.

4. In the primary usage scenario, what should the user be paying attention to?

If your app is a wayfinding app, like the ones we’ve been discussing, should the user be paying attention to their phone or to their surroundings? How can your app’s functionality and interface reinforce and support this priority?

5. What safeguards need to be in place to protect a user?

If you’re building a messaging app, should it have motion tracking functionality that locks it when a user is going over a certain speed? How do you balance flexibility and safety? How are you securing and anonymizing the data? Remember, the app you build not only affects the user, but it affects the people around the user (whether they want it to or not).
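To make that concrete, here is a minimal sketch of what a speed-based lockout might look like on iOS using CoreLocation. This is my own illustration, not something from the talk’s examples; the threshold, and the decision to lock at all, are assumptions a real team would need to validate.

```swift
import CoreLocation

/// Hypothetical safeguard: ask the UI to lock message composition when the
/// user appears to be moving at driving speed. The threshold and the lock
/// behavior are illustrative assumptions, not a prescription.
final class MotionLockGuard: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let drivingSpeedThreshold: CLLocationSpeed = 6.7 // ~15 mph in m/s (assumed cutoff)

    /// Called with `true` when the UI should lock, `false` when it can unlock.
    var onLockStateChange: ((Bool) -> Void)?

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // `speed` is in meters per second; a negative value means the reading is invalid.
        guard let speed = locations.last?.speed, speed >= 0 else { return }
        onLockStateChange?(speed > drivingSpeedThreshold)
    }
}
```

Even this tiny sketch surfaces the flexibility-versus-safety tradeoff the question points at: the same lock that protects a driver frustrates a passenger on a bus.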

6. We have 5 senses: how are they all being used?

How are you using sound? How are you using haptics? Is a visual interface always necessary?

7. Future thinking: how could they all be used?

Perhaps ambient audio cues are underutilized in wayfinding. Perhaps we should look to Morse code for our haptic indicators: two buzzes for “take the next left,” one buzz for “take the next right.” When are we getting smell-o-vision for our phones? Don’t get hung up on how people have done things in the past; keep pushing, responsibly, towards the future.
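As a thought experiment (my own sketch, not an established pattern), a haptic vocabulary like that is only a few lines of Swift using UIKit’s feedback generators; the pulse counts and spacing are assumptions that would need testing in real ambient conditions.

```swift
import UIKit

/// Hypothetical haptic "vocabulary" for turn-by-turn cues:
/// one buzz = take the next right, two buzzes = take the next left.
enum TurnCue {
    case nextRight
    case nextLeft

    var pulseCount: Int {
        switch self {
        case .nextRight: return 1
        case .nextLeft:  return 2
        }
    }
}

func playHapticCue(_ cue: TurnCue) {
    let generator = UIImpactFeedbackGenerator(style: .heavy)
    generator.prepare()
    for pulse in 0..<cue.pulseCount {
        // Space the pulses (~0.35s apart, an assumed interval) so that two
        // buzzes read as a distinct pattern rather than one long rumble.
        DispatchQueue.main.asyncAfter(deadline: .now() + Double(pulse) * 0.35) {
            generator.impactOccurred()
        }
    }
}
```

The particular mapping matters less than the point: a cue the user can feel without looking is already within reach of the hardware we carry today.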

8. And lastly, this is not a question, but an optimistic reminder: the platform should serve the user; the user is not in service to the platform.

Also remember that our responsibility as designers and builders is to create products that are appropriate not only for our users,

but also for their contexts.

We must research, validate, and take a critical approach to our ideas.

And so, in conclusion, I believe that we can invent new devices, new platforms, new software & hardware, but we cannot expect the future to continuously solve the problems of now. To quote Peter Hancock one more time, “Whatever we are to become is bound up not only in our biology but critically in our technology also.”

Thank you.