The “Don’t make me think” UX advice got us where we are

(Quoting Mia on Instagram; hashtags: #ux #userworkflow #consistency #userexperiencedesign)

Consistency…
Be consistent in your design. After the first action, users expect the same process.
If it is not there, users will have to use cognitive capacity to find the next step.

The Krug dogma of “Don’t make me think” may have been relevant in the past. No more. If a UX discourages the “use” of “cognitive capacity,” then perhaps it encourages cognitive laziness, so it’s no wonder there are so many followers on social platforms. Some designers will resist this with passion, as they have been drilled to “make it easier for the user” for the past two decades. Consider what happens to cognition when it is deprived of opportunities to exercise.

9 Likes

Thanks, @Naro.

You’ve articulated one of the principles of Maryanne Wolf’s Reader, Come Home: that “difficult” reading is good for the human brain and allows it to develop the deeper thinking that leads to change, wisdom, and insight.

3 Likes

Amen to this. Human cognition and our thoughts are even more fundamental than freedom of speech…

Please consider that humans are not that bright or quick. That’s why we act like predators (cannibals?) against our fellow man. We need simple, predictable interfaces so that our slow brains can better focus on what’s important. And our brains will always be too slow to read all the terms; that is why we unknowingly agree to give companies unlimited rights to spy on us. Maybe the only solution is to create new human rights which better protect our privacy, because we first-generation sapiens are too feeble to protect ourselves.

3 Likes

And let’s not forget that enforcement of terms of agreement depends in large part on interpretation. That’s why we need good lawyers.

I have considered it. Here is a little unpacking:

  1. Humans are not bright or quick.
  2. Humans are predators.
  3. Humans are naive.
  4. Humans are feeble.
  5. Humans must rely on someone else for protection.

Fodder for thought:

  1. How does it BENEFIT me to keep this outlook on humanity?
  2. If I keep this POV of humanity, is it possible that whatever experience I create (for myself and the user) will reflect it?

“The simple choice to take responsibility for ourselves and our own values allows us to feel in control of everything that happens to us. It allows us to transform our negative experiences into empowering experiences.” And by extension Empowering User Experiences.

I wouldn’t say we’re feeble, but rather that we are not as quick as we think we are. That simply helps us see where our level of progress stands and helps explain our shortcomings. Our predatory behavior is documented in hundreds of millions of years of our ancestors’ history and evolutionary biology, rooted in the law of the jungle. People are social animals, much like our cousins the other hominids; we work in groups ranging from corporations to nations to prison gangs. Each of us adds (or subtracts) just a sliver to a massive collective effort, while at the same time we compete for resources and mates, oftentimes to the death. We have wiped out 80 percent of the other animals on Earth, not to mention many of our own, either directly or through neglect. The same animal dynamics drive information technology and user experience.

Empowerment indeed, but greater still is our strength in numbers. All of our current technological and social advancements stem from our earlier ones, from when we first learned to make rough tools from rocks (technology often used to kill) all the way to current information technology, which is built upon the collective knowledge of millions. There is little that one of us can accomplish alone, and if we do feel empowered, it is because billions of years of evolutionary advancement, millions of years of knowledge, and the rest of society and everything we have built together are there to support us.

To that end, it’s up to us to evolve our systems, products, services, and laws. As history shows, we must work together to create ethical technology and to expand human rights laws regarding privacy and technology.

This morning I saw Mozilla’s assertion “With great code comes great responsibility,” the teaser for its announcement of a competition focusing on ethics in computer science.

Here is a quote from the announcement:

Says Paula Goldman, Global Lead of the Tech and Society Solutions Lab at Omidyar Network: “To ensure technology fulfills its potential as a positive force in the world, we are supporting the growth of a tech movement that is guided by the emerging mantra to move purposefully and fix things. Treating ethical reflection and discernment as an opt-in sends the wrong message to computer science students: that ethical thinking can be an ancillary exploration or an afterthought, that it’s not part and parcel of making code in the first place. Our hope is that this effort helps ensure that the next generation of tech leaders is deeply connected to the societal implications of the products they build.”

and here is the link to apply: https://foundation.mozilla.org/en/initiatives/responsible-cs/.

That’s what we’re doing: evolving our systems by recognizing that “Don’t make me think” does not work, and by saying that cognitive capacity must be encouraged and exercised in day-to-day life in the relationship of U&I (User and Interface).

As a user experience designer, I’ve always thought the idea should be that users shouldn’t need to think much to find what they’re looking for or to have to try to understand confusing interfaces or read excessive documentation. That way they have more energy left to think about the actual content!

Of course, the trouble is when that’s exploited to trick people into doing things they don’t want to do. We can’t expect people to be able to protect themselves from bloodthirsty predators like Facebook, Google, and the rest. We either need to spur competition, or introduce new rights - and I’m not talking about more warning popups or legalese, I mean real human rights of interaction.

3 Likes

You, and I, and a gazillion other user experience designers, including those at Facebook, Google, Twitter, and IG, have “thought the idea should be that users shouldn’t need to think much to find what they’re looking for”. They should not try to understand confusing interfaces. They should not read excessive documentation. Why? Because that’s hard, that makes them think, and they don’t want to think. I get it: people are lazy, and they are all screaming “Don’t make me think, it hurts. Just give me what I want”.

This mentality of prioritizing content comes with some traps. It prioritizes the response to gratification and sets a pattern through repetition, creating a solid stream of demand for content. At some point you may ask: who benefits from teaching a generation of designers to prioritize content, and what is the role of that teacher in design education? Something valuable is generated in the interaction between User and content, and I am guessing you already know what that is and how it is used to fine-tune content.

The idea that users “should not try to understand confusing interface” or “read excessive documentation” leads to a design catered to quick reward (the content/product); it enables and encourages the consumer and conditions the user to seek quick reward. For further understanding of the role of reward in conditioning, refer to Pavlov, B. F. Skinner, and some work by T. Harris. In effect, the UX idea of “make-it-snappy-for-the-user” becomes a simple but potent design strategy for that conditioning, delivered to the unsuspecting user by the unsuspecting designer.

You may notice that the context of this analysis expands beyond the scope of the discipline of design. A designer is taught to think a certain way and is exposed to ideas within their own planet of confirmation bias; a planet which floats in a robust universe of Darwinian theory and human history. Questioning one means questioning all of it, and in that questioning we exercise cognition and expand capacity. We become faster and sharper and develop the ability to protect ourselves, mostly from ourselves, which includes those at Facebook, Google, and the rest. So those are not bloodthirsty predators separate from you and me. They are counterparts of you and me, operating mostly with the best of intentions but, for many reasons, missing the unintended effects of good intentions. We must encourage thinking capacity through design because doing so exercises the ability to ask questions, especially when looking in the mirror. And that’s hard. That requires thinking. The easy way would be to let Social Darwinism continue laying the pattern for tech, design, social media, social anything.

1 Like

Interfaces are made to be as transparent as possible. The ideal interface is one that the user can barely notice, because the interface is only the simplest way to let the user access the service offered, or, in a broader sense, the technology the user wants to use. We can consider, e.g., the handle of a knife as the interface to easily use the technology, i.e. the blade of the knife (example taken from ‘The Fourth Revolution’ by Luciano Floridi).
So there’s no reason why the interface should be something difficult to understand, because it’s only a means to reach the technology/service behind it.

Speaking about social media, the addictive nature of those services (e.g. YouTube, Facebook, …) is not in the easy-to-use interfaces, but in the goal of whoever economically benefits from the technology/service offered, which currently is to maximize the time the user spends on it. To reach this goal, the service is designed according to certain rules and with specific features. The interface is simply the best and easiest way to let the user reach the service behind it, as it is conceived right now.
The problem is in the service (i.e. the algorithm that recommends new content), not in the UX, which is designed to be as good and smooth as possible to accomplish the objectives of the service.
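
To make that distinction concrete, here is a minimal, hypothetical sketch; the names, fields, and the predicted-watch-time model are illustrative assumptions, not any real platform’s code. It shows how the same transparent interface can sit on top of two very different ranking objectives:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Item:
    title: str
    published: datetime
    predicted_watch_minutes: float  # assumed output of an engagement-prediction model


def rank_chronological(items: List[Item]) -> List[Item]:
    """A 'neutral' service: newest content first, no persuasion involved."""
    return sorted(items, key=lambda i: i.published, reverse=True)


def rank_for_engagement(items: List[Item]) -> List[Item]:
    """An engagement-maximizing service: whatever is predicted to keep the user watching longest."""
    return sorted(items, key=lambda i: i.predicted_watch_minutes, reverse=True)


# The page that renders either list can be identical and perfectly "transparent";
# what the user experiences differently comes from the objective encoded in the
# ranking function, not from the UI itself.
```

Swapping one ranking function for the other changes nothing about the interface, which is exactly the point: the UX can stay smooth while the service behind it pursues a very different goal.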

5 Likes

Yes, I agree with @micheleminno. How easy the UI makes it to navigate the content is not the cause of the problems. It is the persuasiveness aspect. Intuitive UI is good, persuasive UI not so much.

UIs have changed in recent years from interfaces where people could find their way with ease towards interfaces where people are persuaded to take the actions that the designers want them to take. Persuasive techniques employ deliberate deceptions (dark patterns) and tricks that are built in to manipulate you.

4 Likes

Yes, I agree with the previous two commentators, and I also see what @Naro is getting at.

However, what is the alternative? Making user interfaces confusing or inconsistent would have the negative effect of making it hard for people to find things when they really are trying to find things.

If the service has a predatory intention, like social media, then of course a bad user interface would be in the interest of humane tech. Anything to destroy a bad business. But I doubt that bad companies would willingly destroy their own interfaces.

Also, I see there is a mistake in your quote of what I said, which makes me sound like I said something very different from what I actually said. What I said was “users shouldn’t… have to try to understand confusing interfaces or read excessive documentation”, and that means something completely different from what you quoted. “Should not have to try” was replaced with “should not try”. I was saying that users should not need to (“have to”) make an unnecessary effort (“try”) to understand things which should be simple; that’s very different from saying that users should not try to understand or read things. Granted, this is partially my fault, as I should have written it more clearly.

2 Likes

An easy-to-use UI is a construct of persuasive design. I am saying that “Don’t make me think. Make it easy for me to get to what I want”, delivered as a message, as content, as product, as service, is a persuasive technique with an element of deception, though not necessarily a deliberate one.

1 Like

Thanks for pointing out the misquote: “should not try” vs. “should not have to try”. I am sure that if this discussion took place face to face, you’d probably communicate the semantic importance of “have to” through intonation.

We’re going to spend the next 10-20 years discovering the alternative, and hopefully not (re)discovering the same.

I think you might be taking the phrase too literally.

Our most harmful popular websites are problematic because they thrive on distracting people with endless novelty or sensory stimulation. That’s the likely enemy of introspection, and of examining things more deeply.

This is combined with certain social formats that encourage one-liners, attention-whoring and outrage. That’s the likely enemy of genuine, careful and nuanced thinking.

The fact of the matter is also that some sites are good at encouraging “cognition” (e.g., some highly technical subreddits) because they are easy and obvious to use. But that doesn’t keep a site from becoming predatory or from distracting people from life activities.

Like people have already mentioned, “don’t make me think” is just a way of phrasing the principle of interface transparency. This is a good thing for tools like Microsoft Word or Adobe Photoshop, or like… chairs, so it’s obvious it’s a chair and obvious where your butt should go. It’s not a very good chair if it perpetually sends you navel-gazing and philosophizing instead of actually letting you rest your butt and do stuff that requires said butt position.

There are also video games that maximize interface transparency to deliver a thought-provoking story, or highly cerebral and strategic but non-addictive gameplay.

Interface transparency is only bad where it has been combined with persuasive design and co-opted to deceive people into doing stupid things like staring at a screen and at ads for hours at a time, or buying things they don’t need.

3 Likes

I recently bumped into some articles that support the statements of the original poster @Naro, and they are interesting reads:

Why We Need More Online Friction

Need for Online Friction

1 Like

Don’t we get it by now? The whole purpose of technology companies is to milk people, as fast as possible, without them even thinking about it, with shiny addictive junk in their faces so they may not even notice it. Buy this overpriced, unneeded junk faster, give me your location and your entire life, click faster on my fraudulent ads, for I am Google, I am Facebook, I am Xiaomi, I am any and every tech company, small and large.

Anything for them to get rich, all while the 0.1% tech workers doing this exploitative work complain about how expensive it is to buy a small mansion in San Francisco and that tuition at Stanford has gone up to US $51,500 per year. Rich idiots! On drugs! And the statistics say there are many more of these rich slobs today than ever before in all of history.

Yes, but this probably does not apply to everyone, and hopefully it will not last forever (that is why we are here :slight_smile: ).