Design Principle for consideration: Introducing the Active Audience

“If you have come here to help me you are wasting your time, but if you have come because your liberation is bound up with mine, then let us work together.” Lilla Watson

Confession: I have a concern with the word “humane” as the framing for how we talk about solving some of the more serious global problems in technology.

Primarily, because “humane” is a word that reflects awareness and empathy towards how we treat others, meaning it is almost transactional, but going in one direction: “do unto others”, etc. The word humane doesn’t seem to represent the complete picture from a design viewpoint, perhaps only half of the equation. It inherently suggests there is a passive user, a victim who is being abused and needs rescuing.

I think we may need a word that reflects collaboration as a design principle, which may produce, naturally, a more humane environment on the internet.

I would love to hear some thoughts about this. I believe it is fully in alignment with humane technology, but its focus is narrower: a design principle that requires the direct participation of an active internet user, in some manner they find rewarding, as part of the solution.

I call this principle “The Active Audience”.

The active audience as a design principle for humanizing technology suggests that we look at the long list of problems facing technology and society, and design technology or environments that can empower any internet user in the world with a role in solving those problems. The web has already produced some examples of this kind of “spontaneous collaboration” (including this forum!).

The active audience turns the user from a passive consumer into an active participant, ideally as a partner, not a product.

The active audience design principle already has a few examples in the wild; all sharing economy models are perfect examples of this. Even if some of those larger companies are exploiting their participants, there is still something they are getting right that the rest of technology isn’t: actually connecting people in real life, making a real-life connection, or at least the opportunity for one, without a device between them.

I believe much of the next wave of design along humane principles will give users tools to collaborate in ways that deliver surprising results if we activate internet users as the solution.

Example Problem

Instagram, Youtube and Facebook could face fines (and jail time) under new Australian laws


“Freedom of Speech is not Freedom of Reach”, Aza Raskin.

Facebook, Google’s Jigsaw team, and all the big platforms only appear to be investing in machine learning to flag content or abuse; the user is totally passive in the process.

How can we engage an “active audience” to filter hate speech, fake news, misinformation?

I think all big platforms might do well to adopt the principle of the active audience in solving these more difficult problems, and I would love to hear thoughts on this.

Imagine that content, when it goes viral, would carry a tag that said “Verified: MISINFORMATION” or “Verified: Hate Speech”.

Any content verified in this manner could trigger the algorithms to “unfollow” it.

For example, Google could downrank a site that was verified to be misinformation or harassment, making it harder to discover. YouTube could programmatically remove videos from its discovery or recommended lists.
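To illustrate (this is only a sketch, no platform works this way today, and the tag names and weights below are assumptions of mine), downranking on the basis of such verification tags could be as simple as:

```python
# Hypothetical penalties applied to a ranking score once content carries a
# "Verified: ..." tag produced by active audience verification.
TAG_PENALTIES = {
    "Verified: MISINFORMATION": 0.1,  # keep only 10% of its normal reach
    "Verified: Hate Speech": 0.0,     # remove from discovery entirely
}

def adjusted_rank(base_score, tags):
    """Scale a piece of content's ranking score by its verification tags."""
    factor = min((TAG_PENALTIES.get(tag, 1.0) for tag in tags), default=1.0)
    return base_score * factor
```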

I believe online verification is going to be the next wave of innovation, because there is so much need for it, for many different reasons: financial, legal, social, etc.

To activate this, imagine that every content submission pays a fee, not for posting, but for verification. Imagine, for example, a technology that distributes each piece of submitted content at random to ten internet users to verify. Each task would take each of them only seconds, and the users would split a revenue share from the cost of the submission.

Only verified content is distributed through the algorithm of each platform.

Content that doesn’t have consensus does not get distributed, and internet users can earn additional income for doing the verification.
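None of this exists yet, but here is a minimal sketch of the flow described above, assuming a panel of ten verifiers, a simple consensus threshold, and a platform cut of the fee (all names and numbers are hypothetical):

```python
import random
from dataclasses import dataclass, field

NUM_VERIFIERS = 10         # assumed panel size; could be any number
CONSENSUS_THRESHOLD = 0.8  # assumed: at least 8 of 10 must agree

@dataclass
class Submission:
    content_id: str
    fee: float                                  # paid by the submitter for verification
    votes: dict = field(default_factory=dict)   # verifier_id -> True (approve) / False (flag)

def assign_verifiers(all_users, num=NUM_VERIFIERS):
    """Distribute one submission to a random panel of internet users."""
    return random.sample(all_users, num)

def has_consensus_to_distribute(submission):
    """Only content with clear consensus approval enters the platform's algorithm."""
    if len(submission.votes) < NUM_VERIFIERS:
        return False                            # not enough votes yet
    approve_share = sum(submission.votes.values()) / len(submission.votes)
    return approve_share >= CONSENSUS_THRESHOLD

def payout_per_verifier(submission, platform_cut=0.2):
    """Verifiers split the submission fee, minus an assumed platform share."""
    return submission.fee * (1 - platform_cut) / len(submission.votes)
```

A submission that never reaches consensus simply stays out of distribution, while its panel still gets paid for the verification work.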

This can be applied to fake news, any claim of harassment, identity verification, and any form of advertisement. Facebook, YouTube, and Google could naturally create hundreds of thousands, perhaps millions, of “gigs” for internet users to earn money scaling the solution.

I believe the active audience principle has gazillions of applications yet to be discovered.

For my work, I use the principle of the active audience in Native Smart to replace third-party ad networks, turning internet users into partners responsible for distributing and verifying sponsored content on the web and receiving revenue for each transaction, which is performed programmatically on their devices when they put them down, not when they pick them up.

I use the principle of the active audience even more dramatically in another platform (albeit at a much earlier stage than Native Smart) called aiki wiki. This platform allows internet users with different viewpoints and passionate disagreements to build a trusted, vetted, collaborative, and purely rational consensus as a published article.

Thanks for letting me share, cheers!
Rome


Hi @NSaikiwiki, it sounds interesting to me. However, what is not clear to me is who is, or will be, the moral authority to judge what is misinformation, hate speech, etc. The internet users themselves? Google or FB (frankly, I do not trust them at all)? Without clear definitions and rules it will be the same mess, only moved to another playground. What if I want to publish some video or content, and let’s say it is controversial and somebody marks it as hate speech? What if somebody else pays certain internet users a little bit more just to mark some content as hate speech, misinformation, harassment, whatever, purely to make sure that certain opinions from certain sources are banned on purpose? The next level of trolling? Internet users have their own preferences and biases, and some of them might just be ignorant and lazy and mark everything as validated just to earn a penny without thinking.
thanks!


Thank you @NSaikiwiki,

A lot of interesting things to ponder from your post.

Well said, and of course - though you do not mention it explicitly - this also relates to the name of this community. We have discussed this meaning of ‘humane’ a number of times in the past, though not very explicitly. Originally, not much thinking went into the community name; we just derived it from ‘Center for Humane Technology’ to mark our grassroots movement as a separate entity from the Center ‘thinktank’.

Tristan Harris et al. probably adopted the name because - in comparison to Time Well Spent, which was more about personal tech use - they are now aiming at addressing really big issues: issues that threaten humanity, such as social breakdown, the collapse of democracies, and even wars.

But it is a strange notion for “the good tech” to be humane. What, then, is the not-so-good tech? Inhumane? Or slightly less humane?

Maybe you didn’t refer to the ‘Humane’ in HTC, but it is interesting to think about the concept, and we are just before a big reorganization. Making a change to the name would be a big decision, however. Not easily made.

“The Active Audience” is an appealing perspective, and one it would be great to hold our Pyramids of Humane Technology and Small Tech (or even Philosophy of Smallness) engagement strategies against.

Just writing off the top of my head…

Your idea about verification of content is a nice one too. But as @Bozon says, it will be very hard to qualify something as misinformation or hate speech without undue bias coming into the picture. One way to mitigate this a bit is to algorithmically determine the demographically most relevant audience to do the reviews (while still picking at random from a large set of people).

But it will still be very hard to implement right. Suppose the algorithm picked the perfect group, taking diversity into account and with full inclusion of everyone; then outliers of opinion would have less chance of coming through. You could mitigate this by having the algorithm pick 50% (or some other percentage) and selecting the other half fully at random, but the issue remains.
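If it helps, here is a rough sketch of that half-and-half selection, assuming a hypothetical relevance_score function that rates how demographically relevant a candidate is to the content under review (everything here is made up for illustration):

```python
import random

def select_review_panel(candidates, relevance_score, size=10, random_share=0.5):
    """Pick a review panel: part demographically relevant, part fully random."""
    n_random = int(size * random_share)
    n_relevant = size - n_random

    # Draw the "relevant" half from the top of the relevance ranking,
    # still sampling at random within that larger pool.
    by_relevance = sorted(candidates, key=relevance_score, reverse=True)
    relevant_pool = by_relevance[: size * 5]
    relevant_part = random.sample(relevant_pool, n_relevant)

    # Fill the remaining slots fully at random from everyone else.
    remaining = [c for c in candidates if c not in relevant_part]
    random_part = random.sample(remaining, n_random)

    return relevant_part + random_part
```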

Or you could put explicit bias in the selection process. After all, freedom of speech is a broad right. It does not help if, say, you put an outright atheist’s opinion piece about the non-existence of God up for review before a review group that is 80% religious (bad example, I know), but there is a threat of things becoming effectively censored by majority vote, with outliers of opinion getting filtered out.

Personally I think we do not need such a system. The most egregious forms of hate speech are already clearly identifiable and can be addressed by normal law and regulation (okay, the prerequisite is a well-functioning democracy or similar). Much of the hate speech, I think, stems from the fact that people are anonymous, or semi-anonymous (e.g. they operate in a forum that has a strict filter bubble and not much visibility outside).

These things could be addressed by de-anonymizing and also by ‘calling people out’, so it is clear they now have ‘reputation damage’ that will grow if they continue their hate speech. You could have reputation systems for this. But all this gets dystopian really quickly.

Another thing - and maybe the best way - is to ignore people involved in hate speech and, bridging to disinformation, to recognize unverifiable sources of information and act accordingly (or simply ignore them). This you can do with the help of IT systems that provide identity and verifiable claims (see “Fighting fake news” in Investigating privacy-respecting online identity, data ownership & control solutions - #10 by aschrijver).


Hey Bozon, thanks for replying. I only meant to point to that problem and to a possible pathway to a solution, using such a design principle as a way to conceptualize the actual solution. I myself have not yet designed such a solution for what I mentioned; it was more for consideration and creative flow. So perhaps I picked too complex a problem to use as an example. :open_mouth:

But that doesn’t mean I don’t already have a few ideas that address what you bring up. Before I get to those, however, let me address a simpler example of online verification on social networks that may be easier to see.

  • Each platform would be assumed to have its own network of users for verification. Or perhaps all platforms would use a trusted third party that employs active audience verification - it could work either way.

  • At a simple level, let’s not give random users too complex a task, such as determining what is actually hate speech, at this stage. It’s more of a task like “can you verify that this web page contains this information”, with a simple yes or no response.

  • Let’s look at online verification using the simple task of proving that a social identity actually is the identity it claims to be (taking online impersonation as one of many problems), say on a chat forum.

Using this simple example, a very easy set of protocols for human eyes could be developed. For example, if a user claims to be an identity that is already established somewhere else, the protocol is to have an internet user request that they make a post on the established channel to verify it, and 10 other users, all randomly selected, must show consensus on the verification.

For Facebook, let’s say it adopts a policy of “no political ads”, no ads for 420 companies, no ads from certain countries, etc. All ad submissions are sent through the “active audience” for verification; determining whether an ad is slanted with political views (left or right) is an easier task for random users to manage. We can start with just binary options for them.

Trolling (meaning that a troll signs up to be part of the verification active audience) would be simple to filter, because the content is sent to 10 random users (I’m using “10” arbitrarily; it could be any number necessary). If one of them is a troll, their response would read differently and fall outside the consensus. If an active audience user contradicts the consensus, say, two or three times in a row, they are removed from the active audience verification team. I’m not a mathematician, but I do know there is a lot of fun math for queuing and re-queuing such a selection process.
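For what it’s worth, here is a back-of-the-napkin sketch of that filtering rule, where the majority answer of a closed panel is treated as the consensus and three strikes in a row removes a verifier (all names and thresholds are my own assumptions):

```python
from collections import defaultdict

MAX_CONSECUTIVE_DEVIATIONS = 3  # assumed "strikes" before removal from the pool

# verifier_id -> how many panels in a row this user has contradicted
consecutive_deviations = defaultdict(int)

def update_verifier_standing(panel_votes, active_verifiers):
    """After a panel closes, track who voted against the consensus.

    panel_votes maps verifier_id -> bool (their yes/no answer);
    active_verifiers is the set of users eligible for future panels.
    """
    votes = list(panel_votes.values())
    consensus = votes.count(True) >= votes.count(False)   # simple majority

    for verifier_id, vote in panel_votes.items():
        if vote == consensus:
            consecutive_deviations[verifier_id] = 0        # back in line
        else:
            consecutive_deviations[verifier_id] += 1
            if consecutive_deviations[verifier_id] >= MAX_CONSECUTIVE_DEVIATIONS:
                active_verifiers.discard(verifier_id)      # re-queued out of the team
```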

Now, for the bigger question of “How do we determine what is hate speech or divisive speech?”: all of these types of communication have a consistent identifier, non-resolving communication.

I actually do have a methodology for that sort of consensus building, but within aiki wiki. That is probably more time-consuming than this type of verification would require, but I can give you a quicker, similar example of how certain types of online social issues could get resolved through an active audience.

Social Contracts

Give users the ability to generate “social contracts” between each other. This could work for claims of harassment.

For example, say I claim that someone is harassing me on Twitter. Because the context of harassment is often intimate, between two parties, and difficult for a third party to verify, I could make this claim against a user for honest reasons or for deceptive ones (say, to try to get a user I don’t like banned).

For online harassment, the key identifier is any type of “non-resolving speech”. Sure, two users can have an argument, and we all have bad days, so one user says “FU, die”. If the other user believes they are being harassed, they send them a social contract. The social contract requires both users to agree to “resolving” speech, either by walking away from the conversation, apologizing, or some other form of “social redemption”.

If that user declines a social contract that only binds them to resolution, they could be flagged, and other users could then verify whether their participation was “non-resolving”.

If that user continues to decline social contracts… well, there is an algorithm for that type of user. They become ranked lower and lower, and more and more isolated.

EDIT: The converse is also true: users who build up a body of social contracts would be ranked higher.
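To make the ranking side concrete, here is a minimal sketch of how declined and honored social contracts could feed a reach signal (the names and weights are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    honored_contracts: int = 0    # resolutions agreed to and kept
    declined_contracts: int = 0   # refusals to resolve

def record_contract_outcome(user, accepted):
    """Record whether a user accepted or declined a social contract."""
    if accepted:
        user.honored_contracts += 1
    else:
        user.declined_contracts += 1

def reach_multiplier(user, bonus=0.05, penalty=0.25):
    """Hypothetical ranking signal: a body of honored contracts grows a
    user's reach, while repeated refusals shrink it toward isolation."""
    score = 1.0 + bonus * user.honored_contracts - penalty * user.declined_contracts
    return max(score, 0.0)   # zero means effectively isolated from distribution
```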

While this idea is still in the formative stage, I think verification could be kept simple by looking at whether content is resolving or non-resolving.

If you made it this far, thanks for reading! I’m trying to make my points simpler and more concise, and I still have lots of work to do there, so forgive me for taking more time than necessary.

Cheers
Rome

Correct, I didn’t mean the branding of HTC at all; I’m fine with all of this flying under Humane Technology. I meant it more as a subdivision component, something to identify as a design principle.

Also, it could probably use a better term than “Active Audience”. Since my background is more in content, that term makes sense in that environment, but perhaps not in another.

I think the key words are “active” and “collaborative”, meaning a willingness to participate in the solution.

Wow! Honored!

aiki wiki would be a process to remove all forms of bias from a published consensus, but again, that is too complex and time-consuming for this type of active audience verification process. I responded above with some simpler examples of how this could work.

Ignoring hate speech is one thing, but how speech is distributed at scale on the internet is a different matter from preventing free speech. This type of verification would just instruct the algorithm on whether such content should be distributed and at which ranking, not on whether it should be published.

All platforms have the “reach” problem, meaning no content will reach all users even under the best circumstances, so this would assist in downranking certain types of content from influence, not from speech.


A timely example of where a good “active audience” using “social contracts” could come into play.
