I’m new to UX and design research, but for the past year I’ve had an idea for what I’m now calling a Satiated Design Heuristics Checklist. Currently, the Nielsen Norman Group publishes 10 Usability Heuristics for User Interface Design, a set of basic standards for quality website design. We also know, though without a standard comparable to the 10 NNG heuristics, that bright colors, endless scrolling, and other subtle cues beget addictive design.
I would like to collaborate with folks on developing research- and metrics-backed heuristics for this kind of satiated design. I’m based in Brooklyn, and I can be reached here or at john.fallot@gmail.com.
However, I’m not sure it’s entirely true that there’s no comparable standard for what begets addictive design. It’s a reasonably well-understood area, but one that visual and UX designers aren’t great at talking about except in jargon.
Are you just looking to scientifically codify UX practices and their outcomes?
Reading material on the topic includes anything that discusses leveraging behaviorism research (by Pavlov, Skinner, and many others) to influence behavior, including operant conditioning methodologies. There’s already considerable research in the general area; you might team up with a cognitive psych researcher or grad student to put together a grant proposal and study UX-specific applied behaviorism, gathering scientific data (as opposed to the user metrics the various platforms already collect).
For a guide that’s specific to digital product UX, I suggest reading “Hooked” by Nir Eyal, which amounts to a pick-up artist’s guide to manipulating users via behaviorism techniques.
Also, material online discussing “Dark Patterns” in UX is worth reading, as it’s an analysis of mental tricks played by UX designers on users for nefarious ends. Unfortunately, most people writing about Dark Patterns overlook the idea that any behaviorist technique that works for good can also work for harm.
Last week, in a LinkedIn thread about marketing, habits, and Hooked, I asked Nir Eyal whether he presented a well-balanced view that also highlighted the dark side, and he pointed me to this extensive list of articles on his site: Nir Eyal: Best of Top Picks
I have mixed feelings about a lot of this content. Some of it I’ve read already, some is new to me, and there are a few gems, but much of it, to one degree or another, elides various underlying issues in order to posit that either a) things are basically OK, or b) things are basically OK unless they’re personally not OK for you, in which case here are some steps you can take to disconnect from the basically OK thing.
The behaviorism-based model of UX design is an ethical swamp, and given that Eyal is still promoting his book and referencing it heavily, it’s hard to take his protestations too seriously. It’s nice that he at least gives airtime to concerns about the impact of this type of thing on people, I suppose.
A side note:
I’d caution you on being too attached to “well-balanced” as a metric for virtue or fairness. A well-balanced examination of the pros and cons of recreational murder would be horrific, and nothing about it being well-balanced would redeem the monstrous subject matter. I walked away from “well-balanced” as a useful metric some time ago, after concluding that it’s mainly leveraged by people who can’t find a way to appeal to actual goodness, fairness, or ethical merit when making their case.
It’s at best a non-stance, an active abdication of the need to take a position on something. I don’t have time these days for people who are sitting out of debates; they’ve sidelined themselves voluntarily and I can respect that by ignoring them. Careful consideration of all factors is one thing; presenting multiple sides as if moral or ethical neutrality were possible is another.
Hello John, do you mean “satiated” as a counterpoint to “addictive”? A checklist, or tree of checklists, would be excellent.
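To make the tree-of-checklists idea concrete, here’s a minimal sketch of one possible structure; the type names, fields, and sample criteria below are all hypothetical illustrations, not an existing standard or tool:

```typescript
// Illustrative sketch only: these types and criteria are made up,
// not drawn from NNG or any published heuristic set.

interface HeuristicItem {
  id: string;
  criterion: string;   // the heuristic under audit
  evidence?: string;   // citation or metric backing it
  passed?: boolean;    // filled in during a design review
}

interface ChecklistNode {
  title: string;               // e.g. "Attention", "Notifications"
  items: HeuristicItem[];
  children: ChecklistNode[];   // sub-checklists form the tree
}

// Walk the tree and list failed criteria with their section path.
function auditFailures(node: ChecklistNode, path: string[] = []): string[] {
  const here = [...path, node.title];
  const failed = node.items
    .filter((i) => i.passed === false)
    .map((i) => `${here.join(" > ")}: ${i.criterion}`);
  return failed.concat(...node.children.map((c) => auditFailures(c, here)));
}

// Example audit with made-up criteria:
const root: ChecklistNode = {
  title: "Satiated Design",
  items: [],
  children: [
    {
      title: "Attention",
      items: [
        { id: "a1", criterion: "Feeds have a natural stopping point", passed: false },
        { id: "a2", criterion: "Non-critical alerts use a muted palette", passed: true },
      ],
      children: [],
    },
  ],
};

console.log(auditFailures(root));
// -> ["Satiated Design > Attention: Feeds have a natural stopping point"]
```

Pairing each criterion with an evidence field is deliberate: it would force every heuristic in the tree to trace back to the research or metrics John is asking for.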
I’m not working in that space at the moment, but this will hopefully be a good place to find others to collaborate with on it. And please add links to this and related studies or syntheses to this page: https://tws.miraheze.org/wiki/Research
Thank you for framing the challenge this way; what needs to be done is to advance UX design in light of updated cognitive models. It’s not our tech itself that is inhumane; it’s the assumptions about human psychology behind it that are inhumane.
Are you aware of any research on UX design based on more recent cognitive models like connectionism or enactivism? I think that’s the direction this community is really trying to move in.