A Technological & Economic solution to our Social Media "nightmare"

Thank you, and welcome to HTC. Your quest is an admirable one. I have some feedback on your ideas…

First, some things that also appear in the comments on the video (I watched it using an alternative YouTube client):

  • As one commenter states: this is effectively a social credit system. At scale it will be very complex, and it will be incredibly hard to prevent gaming. For the most popular and respected sources you might find a balance of reviewers that yields a representative reputation rating, but for less popular content the rating quality will deteriorate. You might get voting rings, and a group of people can purposely drag down a source’s reputation. A troll farm can destroy individuals’ reputations the same way.
  • Only a subset of data on the internet is verifiable information to which you can attach the notion of objective truth and hence reputation. A lot of the rest is opinion-based, like political or religious content.
  • We live in a “post-truth” world, and we’ve all seen how pervasive this phenomenon is. Scientific fact is outright rejected, and ‘alternative facts’ take their place. It might well be that ‘earth is flat’ gets a higher credibility score in the future.
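The voting-ring concern above can be illustrated with a toy sketch. The mean-based score and all the numbers here are my own hypothetical assumptions, not part of the proposal; the point is only that a small coordinated minority shifts any naive aggregate:

```python
# Hypothetical sketch: a naive mean-based reputation score,
# and how a small voting ring can drag it down.

def reputation(ratings):
    """Average of ratings in [0, 1]; a stand-in for any naive scheme."""
    return sum(ratings) / len(ratings)

honest = [0.9] * 40   # 40 genuine reviewers rate a source highly
ring   = [0.0] * 10   # 10 coordinated accounts rate it zero

print(round(reputation(honest), 2))          # 0.9
print(round(reputation(honest + ring), 2))   # 0.72
```

Here 20% of the voters cut the score by a fifth; for a less popular source with only a handful of genuine reviewers, the same ring would dominate the score entirely.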

Other observations:

  • Fact-checking is a lot of work, and you make it accessible for everyone to do. Another crowd-sourced system comes to mind: Wikipedia. It works, and it has no (public) credit scores.
  • Everyone in your solution would need a verifiable identity. This is not only very complex, but also problematic in itself. There’s the self-sovereign identity (SSI) standard under development, and its designers themselves recognize that it may lead to dystopia. (Note: I’ve written a bit about privacy-respecting online identity in the past and think we need anonymous, pseudonymous and validated identities alike.)
  • Many inventions in the past were made by scientists who stubbornly went against the flow. In your system the majority wins: divergent opinions, out-of-the-box thinking, anything non-mainstream will be suppressed.
  • What happens to people who are less proficient or intelligent, or who are simply not willing or able to do the fact-checking? Or to people who are curious, on a learning path, and making mistakes along the way, as we all do? They will not be able to earn high reputation scores. Are these people now valued less in a society where reputation is such an important metric?

All this said, I think this is a valuable discussion to have. And there are things I like in your concept.

As for going forward, you could test-drive the concepts on your own social network. It would be prudent to target specific fields of interest for this, e.g. particular scientific disciplines. If it works for hard science, you can scale up to more difficult areas. To give this the best chance, I would go with FOSS and Open Science as well.

Regulating social media algorithms

Consider, for instance, transparency of algorithms. Right now much activist effort is focused on breaking up Big Tech monopolies, but this offers no solution if the attention / data-harvesting mechanisms stay the same. A more interesting approach, imho, is to have regulators approve or reject the algorithms that are allowed to run on these platforms. That would require the algorithms to be transparent and publicly accessible, so people can review and improve them.

Btw, this will be very hard to achieve, as the competitive advantage lies in these algorithms and they are closely guarded. Furthermore, with black-box AI there are no algorithms to review.
