Principles of Humane Tech Design

I wanted to throw out some principles for humane tech design that I have been mulling over, in the hopes that they would spark a useful conversation.

The first thought is that actions have consequences, but technology companies rarely show us those consequences. It is easy to traumatize people, call it trolling, and feel nothing but positive emotions about it. So one thing humane tech might do is force people to grapple with the consequences of their actions before letting them use the technology in the most fun way possible. For instance, if you say something hateful about a minority group in a public post on social media, maybe you get a strange sort of time-out where you can’t see new messages unless they come from members of that minority group. You could then learn how the thing you posted upset them, in whatever words they choose to use, and the platform would teach you a lesson in a firm but undeniably fair way. Obviously no tech company would do this voluntarily, because it would decrease engagement metrics immensely. But it seems like it would be straightforward to build a copy of Twitter that uses similar rules.
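As a rough sketch of what such a time-out rule might look like (all names, durations, and the flagging mechanism here are hypothetical assumptions, not anything an existing platform implements):

```python
from dataclasses import dataclass, field
import time


@dataclass
class Message:
    author: str
    text: str
    author_groups: frozenset = frozenset()  # groups the author belongs to


@dataclass
class FeedPolicy:
    """Consequence time-out: after a post flagged as hateful toward a
    group, the poster's feed only shows messages from members of that
    group until the time-out expires."""
    timeouts: dict = field(default_factory=dict)  # user -> (group, expires_at)

    def flag_hateful_post(self, user: str, target_group: str,
                          duration_s: float = 86_400.0) -> None:
        # How a post gets flagged (moderators, classifiers, reports)
        # is left open; this just records the resulting time-out.
        self.timeouts[user] = (target_group, time.time() + duration_s)

    def visible(self, viewer: str, message: Message) -> bool:
        timeout = self.timeouts.get(viewer)
        if timeout is None:
            return True
        group, expires = timeout
        if time.time() >= expires:
            del self.timeouts[viewer]  # time-out served; feed restored
            return True
        return group in message.author_groups
```

The interesting design question this makes concrete is where `flag_hateful_post` gets called from, since that single decision point carries all the moral weight.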

The second thought is that humans are not particularly good at reasoning about highly non-local effects, like the effects of sending messages over a global communications network. So maybe there is some way to limit the reach that messages have? Like a sort of volume knob, where you can speak softly (and the platform doesn’t spread your messages quite as virally) or loudly (the platform spreads your messages as virally as the structure of your social network allows). This is approximately what the Twitter like/retweet distinction seems to be capturing, but it seems like designing more volume knobs into social media products could only ever be a good thing. To prevent people from simply speaking at maximum volume at all times, maybe there should be a limited number of credits that people are allowed to spend. Figuring out how to administer the credits seems difficult, because people will obviously come up with all sorts of ways to game the system, and tech companies will have massive incentives to do the same. But fundamentally such a thing seems possible.
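A minimal sketch of the volume-knob-with-credits idea (the volume levels, costs, and budget here are made-up numbers chosen for illustration, not a proposed pricing scheme):

```python
from dataclasses import dataclass


@dataclass
class VolumeBudget:
    """A hypothetical 'volume knob' backed by a credit budget.
    Speaking louder (wider algorithmic spread) costs more credits,
    so maximum volume can't be the permanent default."""
    credits: float = 100.0

    # Volume 0..3: 0 = followers only, 3 = maximum viral spread.
    # Superlinear costs make loudness a deliberate, scarce choice.
    COST = {0: 0.0, 1: 1.0, 2: 5.0, 3: 25.0}

    def post(self, requested_volume: int) -> int:
        """Return the spread level actually granted, degrading
        gracefully to the loudest level the budget still affords."""
        for volume in range(requested_volume, -1, -1):
            if self.COST[volume] <= self.credits:
                self.credits -= self.COST[volume]
                return volume
        return 0
```

One design choice worth noting: degrading to a quieter volume instead of rejecting the post means running out of credits never silences anyone, it only localizes them.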

The third thought is that privacy is both an immensely valuable right and somewhat morally problematic. People should have the right to be anonymous online, and yet anonymity breeds shameful behavior. I haven’t really figured out how to reconcile these pressures in a way that makes sense to me.


Yes, you are thinking through some principles to consider for a healthy design.
Reading your post reminds me of ‘The 10 principles of humane technology’ that I posted.

You reinforce the idea of “consequences”, and I think this is a very important ethical issue.

From the 10 principles, I think the ones below create synergy with yours:

1- Deep understanding of social relationship issues
2- Focused comprehension of vulnerable behavior (*)
3- No behavior prediction and social control
4- No invasive nets into private life

Keep on thinking and creating together
Have a nice day