A Technological & Economic solution to our Social Media "nightmare"

tl;dr - I developed a technological and economic solution that I believe can be a complete game-changer when it comes to social media. I’d like to get your input on my solution and collaborate to make this a reality.

Hi everyone,
I’m new to this forum and I’m hoping we can have a fruitful discussion/collaboration about solving the problems with social media (disinformation, polarization, hate, etc.).

Over the past few years I came to conclusions very similar to those of The Social Dilemma, and I have been thinking about a possible solution to these problems. I outlined this technological & economic solution in the following video: https://www.youtube.com/watch?v=fWbqsDp1OoI (sorry for the poor audio quality).
I’m now developing the tool (which would work as a browser plugin) to make this a reality, and the prototype is in the late stages of production.

I’d like to get feedback from the community about the solution I came up with and how we can collaborate to make it a reality.

Thank you,
Mike Natanzon

Thank you, and welcome to HTC. Your quest is an admirable one. I have some feedback on your ideas…

First, some things that are also in the comments on the video (I watched using an alternative YouTube client):

  • As one commenter states: this is effectively a social credit system. At scale it will be very complex, and it will be incredibly hard to prevent it from being gamed. For the most popular and respected sources you might find a balance of reviewers that yields a representative reputation rating; for less popular content the rating quality will deteriorate. You might get voting rings, or a group of people purposely degrading a source’s reputation, and a troll farm can destroy individuals’ reputations the same way.
  • Only a subset of data on the internet is verifiable information to which you can attach the notion of objective truth and hence reputation. A lot of the rest is opinion-based, like political or religious content.
  • We live in a “post-truth” world, and we’ve all seen how pervasive this phenomenon is. Scientific fact is outright rejected, and ‘alternative facts’ take their place. It might well be that ‘earth is flat’ gets a higher credibility score in the future.

Other observations:

  • Fact-checking is a lot of work, and you make it accessible for everyone to do. One comparable system comes to mind: Wikipedia. It works, and it has no (public) credit scores.
  • Everyone in your solution would need a verifiable identity. This is not only very complex, but it has problems too. There’s the self-sovereign identity (SSI) standard under development, and its own authors recognize that it may lead to dystopia. (Note: I’ve written a bit about privacy-respecting online identity in the past and think we need anonymous, pseudonymous, and validated identities.)
  • Many inventions in the past were made by scientists who stubbornly went against the flow. In your system the majority wins, so divergent opinions, out-of-the-box thinking, and anything non-mainstream will be suppressed.
  • What happens to people who are less proficient or intelligent, or who are simply not willing or able to do the fact-checking? What about people who are curious and on learning paths, making mistakes along the way? We all make them. They will not be able to earn high reputation scores. Are these people now valued less in society, with reputation being such an important metric?

All this said, I think this is a valuable discussion to have. And there are things I like in your concept.

As for going forward, you could test-drive the concepts on your own social network. It would be prudent to target specific fields of interest for this, e.g. specific scientific fields. If it works for hard science you can scale up to more difficult areas. To give this the best chance I would go with FOSS and Open Science as well.

Regulating social media algorithms

For instance, transparency of algorithms. Right now a lot of activist effort is focused on breaking up Big Tech monopolies, but this will offer no solution if the attention / data harvesting mechanisms stay the same. A more interesting approach, imho, is to have regulators approve or reject the algorithms that are allowed to run on these platforms. That would require the algorithms to be transparent and publicly accessible, so people can review and improve them.

Btw, it will be very hard to achieve this, as the competitive advantage lies in these algorithms and they are closely guarded. Furthermore, with black-box AI there is no legible algorithm to review.


Thank you @aschrijver for your very thoughtful feedback!

I’ll split my response into multiple posts to avoid confusion… So here are just a few clarifications about the platform I’m developing:

Regarding divergent opinions – what really matters on the platform is not majority opinion but facts. Every person on the platform is essentially an investor in truth. If a new concept or theory appears, it would not benefit your credibility to follow whatever the majority thinks; it would benefit you to follow the facts and review the concept/theory as objectively and accurately as possible. Over time, as more data becomes available to strengthen (or undermine) it, your credibility would grow for having reviewed the claim correctly. If you simply follow majority opinion, however, your credibility (along with that of everyone else who followed along) would suffer, so there is no benefit in doing so.
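
To make this concrete, here is a rough sketch in Python of that incentive. The names, numbers, and update rule are hypothetical placeholders, not the platform’s actual scoring logic; the point is only that credibility is tied to eventual evidence rather than to agreeing with the majority:

```python
# Rough sketch (hypothetical names and constants, not the platform's actual
# scoring rule): credibility is updated only when evidence later settles a
# claim, so siding with the majority carries no built-in reward.
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    credibility: float = 1.0
    verdicts: dict = field(default_factory=dict)  # claim_id -> True/False

def resolve_claim(claim_id, outcome, reviewers, reward=0.1, penalty=0.1):
    """Adjust credibility once a claim is settled by the evidence."""
    for r in reviewers:
        verdict = r.verdicts.get(claim_id)
        if verdict is None:
            continue                      # took no position: no change
        if verdict == outcome:
            r.credibility += reward       # correct call, majority or not
        else:
            r.credibility = max(0.0, r.credibility - penalty)

# A contrarian who correctly backed a fringe theory gains credibility;
# a reviewer who merely echoed the (wrong) majority loses it.
alice, bob = Reviewer("alice"), Reviewer("bob")
alice.verdicts["new-theory"] = True
bob.verdicts["new-theory"] = False
resolve_claim("new-theory", outcome=True, reviewers=[alice, bob])
print(alice.credibility, bob.credibility)  # 1.1 0.9
```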

Regarding people who are less proficient – the purpose of the platform is not to arbitrarily punish mistakes but to promote truth and honesty. For that reason, such people have two remedies. First, when writing an article or review, the platform lets you self-review and state your confidence level in your content. If you state that you have low confidence in your claims (you’re basically saying “I think X, but I’m only 20% certain about that”), you will not be penalized by people who review your content. The second remedy is to correct mistakes once others point them out – here again the purpose is to get to the truth and promote honesty, not to arbitrarily punish people for mistakes. (On the other hand, if a person repeatedly makes outrageous claims and then “corrects” them once exposed, he or she will be penalized for trying to game the system.)
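
Here is an equally rough sketch of how stated confidence could scale the penalty. All of the constants are invented for illustration; the real scoring model is still being designed:

```python
# Rough sketch of confidence-weighted penalties (all constants invented for
# illustration). Stating low confidence up front shrinks the cost of being
# wrong, while a pattern of bold claims "corrected" only after exposure is
# penalized more heavily.
def review_penalty(stated_confidence, was_wrong,
                   corrections_after_exposure=0, base_penalty=0.2):
    # stated_confidence: 0.0 (pure speculation) .. 1.0 (asserted as fact)
    # corrections_after_exposure: prior walk-backs made only once caught
    if not was_wrong:
        return 0.0
    gaming_multiplier = 1.0 + 0.5 * corrections_after_exposure
    return base_penalty * stated_confidence * gaming_multiplier

print(round(review_penalty(0.2, True), 2))     # 0.04 -> "I'm only 20% certain"
print(round(review_penalty(1.0, True), 2))     # 0.2  -> asserted as fact
print(round(review_penalty(1.0, True, 3), 2))  # 0.5  -> repeat offender pays more
```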

Because the platform allows people to easily determine the credibility of online content, and incentivizes people to freely share content and mentor others, we can essentially create an incredible resource for learning and spreading knowledge.

Also, I’d just like to say that I wouldn’t call this platform a “social credit system”, because that implies a top-down, centralized system that represses dissenting views. The platform does the exact opposite – it’s a bottom-up, meritocratic, decentralized digital marketplace for content that empowers people and promotes informed debate and fact-based content.


A few more points:

I started with a business model that incentivizes high-quality content and factuality. Everyone on the platform has a stake in its success: the higher the quality of reviews, and the better individuals’ scores reflect their knowledge and expertise, the more valuable the “digital currency” (versus the dollar, for example) that each member holds. You also benefit from the success of other people on the platform, and from everyone having access to as much knowledge as possible, since that grows the platform and makes it more influential, increasing the value of the digital currency.

I agree that not all content is verifiable, and that mere opinions are very different from statements of fact – that is OK. Obviously the platform will be most effective on articles dealing with the hard sciences (for example), but it will still be quite effective in many areas where current social media is failing: determining basic facts about news and current events, reviewing the track record of politicians, financial analysts, scientists, and other experts, and so on. The platform will also be able to determine whether an opinion piece is fact-based – these are all areas where the current systems are not only failing but actively undermining our sense-making capabilities.

The platform will also be able to transform how research is done – researchers will no longer depend on government/corporate grants, or on paywalled science journals. Instead they will be able to conduct truly independent work (with investment from other platform members) and publish directly to the platform (making money from the credit/importance of their discoveries).

Regarding “voting” on the platform & trying to game the system:

I agree that content viewed as more important or popular would have many more reviews on the platform, so its score would more accurately reflect its credibility, while less important content would have fewer reviews. I also agree that people may try to game the reviews of more peripheral content; however, the platform still provides the incentive structure and tools to minimize such fraud.

This is done in three ways. First, people don’t “vote” on the credibility of content on the platform. Instead, each person who writes a review has to provide sources to support his or her claims. People with more expertise in a subject have more weight in determining the overall score for the content – a physics Nobel laureate’s review would carry more weight on the subject than those of 100 people with no background in physics, for example.
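
As a toy illustration of this weighting (the specific weights are invented for the example, not taken from the platform’s design):

```python
# Toy illustration of expertise-weighted scoring (the weights are invented
# for the example, not taken from the platform's design).
def content_score(reviews):
    """reviews: list of (score in [0, 1], reviewer's expertise weight)."""
    total_weight = sum(w for _, w in reviews)
    if total_weight == 0:
        return 0.5                    # no weighted signal yet
    return sum(s * w for s, w in reviews) / total_weight

# One laureate-level reviewer (weight 500) vs. 100 laypeople (weight 1 each):
laureate = [(0.9, 500.0)]
laypeople = [(0.2, 1.0)] * 100
print(round(content_score(laureate + laypeople), 3))  # 0.783 - expertise dominates
```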

At the same time, each reviewer has an incentive to be truthful and accurate, since his or her own score can be affected by attempts to game the system or by writing a misleading or fraudulent review (in such an event, others would give the review a negative score, backed by evidence, which would lower the score/expertise level of the reviewer).
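
A similarly rough sketch of that feedback loop on reviewers themselves, with hypothetical constants, where evidence-backed negative assessments of a review reduce its author’s weight:

```python
# Rough sketch of the feedback loop on reviewers (constants hypothetical):
# evidence-backed assessments of a review raise or lower the weight of the
# reviewer who wrote it, so misleading reviews are costly to their author.
def update_reviewer_weight(weight, assessments, step=0.05):
    """assessments: list of (review judged sound?, assessor's own weight)."""
    for sound, assessor_weight in assessments:
        direction = 1.0 if sound else -1.0
        weight += direction * step * assessor_weight
    return max(0.0, weight)

# Two well-weighted members flag a misleading review, with evidence:
print(round(update_reviewer_weight(2.0, [(False, 3.0), (False, 4.0)]), 2))  # 1.65
```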

The critical point here is that what truly matters on this platform is facts. Trying to bring someone down just because you don’t “like” the person or the person’s claims is counterproductive, as it will only end up hurting your own credibility.

The second point is that to write reviews on the platform one would need a verified individual account – not much different from having a brokerage account today, since by writing reviews you are essentially investing in this digital platform (you don’t need an account to view content or reviews, however). This means troll farms would not be able to materialize in such an environment.

People who try to coordinate malicious reputation attacks on individuals on the platform would also end up losing credibility in the process once they are exposed – which means their efforts too would be counterproductive, as their influence diminishes.

(I do believe that there is a need to protect individuals’ privacy, as well as allow a way to make anonymous reviews at times, and I’m now working out the details of how to make that happen)

Third, I can imagine that as the platform grows there will be some people who will build their reputation on exposing scammers, coordinated groups, or others who try to game the system (they may even build AI tools to systematically detect such attempts). Therefore, the incentive to try and game the system will be minimized even further.

Thank you for your elaboration! You have obviously given this a lot of thought.

There are a lot of interesting concepts in the mechanisms you lay out, but the proof is in the pudding. It is really important that your prototype clearly and intuitively demonstrates how all this works and removes any doubt. My feedback relates to what most people will think or say when first coming upon your project/product website.

You mentioned a “business model”, so maybe you intend to create a startup around this. I would strongly encourage you to create a separate documentation website where you outline in detail exactly how things work, with diagrams, specifications, and the like. I would also give the content an open license, maybe CC-BY-SA or even CC0. You can then build a community around it and crowdsource refinement of the concepts, close loopholes, and - importantly - get tons of feedback and then user testing on your prototype. If you want this to get broader tech adoption, I think opening up like this is vital.

An example, in a different area of tech, is how @AndrewMackie documented his Offerbots (to bypass aggregators of attention); the Problem analysis in particular is splendid (I don’t know if Andrew is still working on the solution side, though).

Regarding identity, like I said, there are a variety of open standards in the works, and I am curious about your thoughts in this direction. For working software based on current standards (PGP), the FOSS project Keyoxide is very interesting (very similar to Keybase, but better).

This article discussion might be interesting to you @Miken: https://news.ycombinator.com/item?id=25246733