Design Principle for consideration: Introducing the Active Audience

Thank you @NSaikiwiki,

A lot of interesting things to ponder from your post.

Well said, and of course - though you do not mention it explicitly - this also relates to the name of this community. We have discussed the meaning of ‘humane’ a number of times in the past, though never very explicitly. Not much thought originally went into the community name; we simply derived it from ‘Center for Humane Technology’ to mark our grassroots movement as a separate entity from the Center ‘think tank’.

Tristan Harris et al. probably adopted the name because - in comparison to Time Well Spent, which was more about personal tech use - they are now aiming to address really big issues: issues that threaten humanity, such as social breakdown, the collapse of democracies, and even wars.

But it is a strange notion for “the good tech” to be humane. What, then, is the not-so-good tech? Inhumane? Or slightly less humane?

Maybe you didn’t refer to the ‘Humane’ in HTC, but it is interesting to think about the concept, and we are just before a big reorganization. Making a change to the name would be a big decision, however - not one to be made easily.

“The Active Audience” is an appealing perspective, and it would be great to hold our Pyramids of Humane Technology and Small Tech (or even Philosophy of Smallness) engagement strategies against that lens.

Just writing off the top of my head…

Your idea about verification of content is a nice one too. But as @Bozon says, it will be very hard to qualify something as misinformation or hate speech without undue bias coming into the picture. One way to mitigate this a bit is to algorithmically determine the demographically most relevant audience to do the reviews (while still picking at random from a large set of people).

But it will still be very hard to implement right. Suppose the algorithm picked the perfect group, taking diversity into account and with full inclusion of everyone: outlier opinions would then have less chance to come through. You could mitigate this by having the algorithm pick 50% (or some other percentage) of the reviewers and selecting the other half fully at random, as in the sketch below, but the issue remains.
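To make that idea concrete, here is a minimal sketch of such a reviewer-selection step. Everything in it is hypothetical: the reviewer records, the `relevance` scoring, and the 50/50 split are just placeholders to illustrate mixing a “most relevant” subset with a fully random subset.

```python
import random

def select_review_panel(reviewers, relevance, panel_size=10, random_share=0.5):
    """Pick a review panel mixing demographically relevant and random reviewers.

    reviewers    -- list of reviewer ids (hypothetical)
    relevance    -- dict mapping reviewer id -> relevance score for this item
    panel_size   -- total number of reviewers to select
    random_share -- fraction of the panel drawn fully at random
    """
    n_random = int(panel_size * random_share)
    n_relevant = panel_size - n_random

    # Take the most demographically relevant reviewers first.
    ranked = sorted(reviewers, key=lambda r: relevance.get(r, 0), reverse=True)
    relevant_part = ranked[:n_relevant]

    # Fill the rest with a fully random draw from everyone not yet selected.
    remaining = [r for r in reviewers if r not in relevant_part]
    random_part = random.sample(remaining, min(n_random, len(remaining)))

    return relevant_part + random_part

# Example usage with made-up data:
reviewers = [f"user{i}" for i in range(100)]
relevance = {r: random.random() for r in reviewers}
print(select_review_panel(reviewers, relevance))
```

Even in this toy form you can see the tension: the larger you make `random_share`, the more room outlier opinions get, but the less “relevant” the panel becomes.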

Or you could put explicit bias in the selection process. After all, freedom of speech is a broad right. It does not help if, say, you put an outright atheist opinion piece about the non-existence of God up for review by a review group that is 80% religious (a bad example, I know, but there is a threat of things becoming effectively censored by majority vote, with outlier opinions getting filtered out).

Personally, I think we do not need such a system. The most egregious forms of hate speech are already clearly identifiable and can be addressed by normal law and regulation (okay, a prerequisite is a well-functioning democracy or similar). Much of the hate speech, I think, stems from the fact that people are anonymous, or semi-anonymous (e.g. they operate in a forum that has a strict filter bubble and not much visibility outside).

These things could be addressed by de-anonymizing and also by ‘calling people out’, so it is clear they now have ‘reputation damage’ that will grow if they continue their hate speech. You could have reputation systems for this. But all this gets dystopian really quickly.

Another thing - and maybe the best way - is to ignore people involved in hate speech, and - bridging to disinformation - to ignore, or at least recognize and act accordingly on, unverifiable sources of information. This can be done with the help of IT systems that provide identity and verifiable claims (see “Fighting fake news” in Investigating privacy-respecting online identity, data ownership & control solutions - #10 by aschrijver).
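As a rough illustration of what “recognize and act accordingly” could look like, here is a minimal sketch that checks whether a content item carries a valid signature from a known publisher key, and labels it accordingly. It uses Ed25519 signatures from the Python `cryptography` package; the registry, content format, and labels are hypothetical and much simplified compared to real verifiable-claims standards.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical registry of known publishers and their public keys.
trusted_keys = {}

def register_publisher(name):
    """Create a key pair for a publisher; store the public key in the registry."""
    private_key = Ed25519PrivateKey.generate()
    trusted_keys[name] = private_key.public_key()
    return private_key

def label_content(publisher, content, signature):
    """Return a label describing how verifiable the source of the content is."""
    public_key = trusted_keys.get(publisher)
    if public_key is None:
        return "unknown source"
    try:
        public_key.verify(signature, content)
        return "verified source"
    except InvalidSignature:
        return f"claims to be {publisher}, but the signature does not match"

# Example usage with made-up data:
key = register_publisher("example-news")
article = b"Some article text"
signature = key.sign(article)
print(label_content("example-news", article, signature))          # verified source
print(label_content("example-news", b"tampered text", signature)) # mismatch
```

The point is not the cryptography itself, but that the reader (or their client software) can see at a glance which information comes from a verifiable source and treat the rest with appropriate caution.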
