A Technological & Economic Solution to Our Social Media "Nightmare"

tl;dr - I developed a technological and economic solution that I believe can be a complete game-changer when it comes to social media. I’d like to get your input on my solution and collaborate to make this a reality.

Hi everyone,
I’m new to this forum and I’m hoping we can have a fruitful discussion/collaboration about solving the problems with social media (disinformation, polarization, hate, and so on).

Over the past few years I came to conclusions very similar to those of The Social Dilemma, and I have been thinking about a possible solution to these problems. I outlined this technological & economic solution in the following video: The Internet of the Future is Here - YouTube (sorry for the poor audio quality).
I’m now working on developing the tool (which would work as a browser plugin) to make this a reality and the prototype is now in late stages of production.

I’d like to get feedback from the community about the solution I came up with and how we can collaborate to make it a reality.

Thank you,
Mike Natanzon

(Photo by Brett Jordan from Pexels)

Thank you, and welcome to HTC. Your quest is an admirable one. I have some feedback on your ideas…

First some things that are also in the comments to the video (I watched using an alternative Youtube client):

  • As one commenter states: this is effectively a social credit system. At scale it will be very complex, and it will be incredibly hard to prevent it from being gamed. For the most popular and respectable sources you might find a good balance of reviewers that yields a representative reputation rating, but for less popular content the rating quality will deteriorate. You might see voting rings, or a group of people purposely degrading a source’s reputation, and a troll farm can destroy individuals’ reputations the same way.
  • Only a subset of data on the internet is verifiable information to which you can attach the notion of objective truth and hence reputation. A lot of the rest is opinion-based, like political or religious content.
  • We live in a “post-truth” world, and we’ve all seen how pervasive this phenomenon is. Scientific fact is outright rejected, and ‘alternative facts’ take their place. It might well be that ‘earth is flat’ gets a higher credibility score in the future.

Other observations:

  • Fact-checking is a lot of work, and you make it possible for everyone to do it. One comparable system comes to mind: Wikipedia. It works, and has no (public) credit scores.
  • Everyone in your solution should have a verifiable identity. This is not only very complex, but has problems too. There’s the self-sovereign identity (SSI) standard under development, whose developers themselves recognize that it may lead to dystopia. (Note: I’ve written a bit about privacy-respecting online identity in the past and think we need anonymous, pseudonymous, and validated identities.)
  • Many inventions in the past have been made by scientists who stubbornly went against the flow. In your system the majority wins, so divergent opinions, out-of-the-box thinking, and anything non-mainstream will be suppressed.
  • What happens to people who are less proficient or intelligent, or who are just not willing or able to do the fact-checking? What about people who are curious and on learning paths, making mistakes along the way? We all make them. These people will not be able to gain high reputation scores. Are they now valued less in society, with reputation being such an important metric?

All this said, I think this is a valuable discussion to have. And there are things I like in your concept.

As for going forward, you could test-drive the concepts on your own social network. It would be prudent to target specific fields of interest for this, e.g. specific scientific fields. If it works for hard science you can scale up to more difficult areas. To give this the best chance I would go with FOSS and Open Science as well.

Regulating social media algorithms

For instance, transparency of algorithms. Right now a lot of the effort of activism groups is focused on breaking up Big Tech monopolies. But this will offer no solution if the attention/data-harvesting mechanisms stay the same. A more interesting approach, imho, is for regulators to approve or reject the algorithms that are allowed to run on these platforms, which would require them to be transparent and publicly accessible so people can review and improve them.

Btw, it will be very hard to achieve this, as the competitive advantage is in these algorithms and they are closely guarded. Furthermore with black-box AI there are no algorithms to review.


Thank you @aschrijver for your very thoughtful feedback!

I’ll split my response into multiple posts to avoid confusion… So here are just a few clarifications about the platform I’m developing:

Regarding divergent opinions – what really matters on the platform is not majority opinion but facts. Every person on the platform is essentially an investor in truth. If there is a new concept or theory out there, it would not benefit your credibility to follow whatever the majority of people think. It would benefit you to follow the facts and review the concept/theory (or whatever it is) as objectively and accurately as possible. Over time, as more data becomes available to strengthen (or undermine) the concept/theory, your credibility would grow for correctly reviewing the claim. If you simply follow majority opinion, however, your credibility (as well as the credibility of everyone else who followed along) would suffer, so there is no benefit in doing so.

Regarding people who are less proficient – the purpose of the platform is not to arbitrarily punish mistakes but rather promote truth and honesty. For that reason, such people have two remedies: when writing an article or review the platform allows you to self-review and state your confidence level in your content. If you state that you have low confidence in your claims (you’re basically saying “I think X, but I’m only 20% certain about that”) you will not be penalized by people who review your content. The other remedy is to correct mistakes once those are pointed out by others – here again the purpose is to get to the truth and promote honesty, not arbitrarily punish people for mistakes (on the other hand, if a person repeatedly makes outrageous claims and then “corrects” them once he/she is exposed, the person will be penalized for trying to game the system).
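To make the confidence mechanism concrete, here is a minimal sketch. The linear scaling, the function name, and the numbers are my own hypothetical placeholders; the platform's actual formula isn't specified here:

```python
def credibility_penalty(base_penalty, stated_confidence):
    """Scale the penalty for a claim that turns out wrong by the
    author's self-declared confidence (0.0 to 1.0). A hedged claim
    ("I'm only 20% certain") takes a proportionally smaller hit.
    Linear scaling is a hypothetical placeholder, not a spec."""
    return base_penalty * stated_confidence

# A fully confident wrong claim takes the whole 10-point hit...
print(credibility_penalty(10.0, 1.0))  # 10.0
# ...while the same claim hedged at 20% confidence loses only 2 points.
print(credibility_penalty(10.0, 0.2))  # 2.0
```

Any monotonic scaling would preserve the incentive: honest hedging is cheap, confident bluffing is expensive.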

Because the platform allows people to easily determine the credibility of online content, and incentivizes people to freely share content and mentor others, we can essentially create an incredible resource for learning and spreading knowledge.

Also, I’d just like to say that I wouldn’t call this platform a “social credit system” because that implies a top-down, centralized system that represses dissenting views. The platform actually does the exact opposite – it’s a bottom-up, meritocratic, decentralized digital marketplace for content that empowers people and promotes informed debate and fact-based content.


A few more points:

I started with a business model that incentivizes high-quality content and factuality. Everyone on the platform has a stake in its success: the higher the quality of reviews, and the better individuals’ scores reflect their knowledge and expertise, the more valuable the “digital currency” each member holds becomes (vs. the dollar, for example). You also benefit from the success of other people on the platform, and from everyone having access to as much knowledge as possible, since that grows the platform and makes it more influential, increasing the value of the digital currency.

I agree that not all content can be verifiable, and that mere opinions are very different from statements of fact – that is ok. Obviously the platform will be more effective on articles dealing with the hard sciences (for example), but it will still be quite effective in many areas where current social media is failing – determining basic facts about news and current events, reviewing the track record of politicians, financial analysts, scientists and experts, and so on. The platform will also be able to determine if an opinion piece is fact-based or not – these are all areas where the current systems are not only failing but actually undermining our sense-making capabilities.

The platform will be able to transform how research is done – researchers will not be dependent on government/corporate grants to do their research, or on science journals with paywalls. Instead researchers will be able to conduct truly independent work (with investment from other platform members), and publish directly to the platform (making money from the credit/importance of the discoveries).

Regarding “voting” on the platform & trying to game the system:

I agree that content that is viewed as more important/popular would have a lot more reviews on the platform, so its score would more accurately reflect its credibility, while less important content would have fewer reviews. I also agree that more people may try to game the reviews of more peripheral content; however, the platform still provides the incentive structure and tools to minimize such fraud.

This is done in three ways: first, people don’t “vote” on the credibility of content on the platform. Instead, each person who writes a review has to provide sources to support his/her claim. People who have more expertise in a subject would have more weight in determining the overall score for the content – a physics Nobel laureate’s review would have more weight on the subject than that of 100 people with no background in physics, for example.
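The expertise weighting just described can be sketched as a weighted average. The 0-to-1 expertise scale and the numbers below are purely illustrative assumptions, not the platform's actual formula:

```python
def content_score(reviews):
    """Aggregate review scores weighted by each reviewer's expertise
    in the subject (hypothetical 0.0-1.0 scale), so one recognized
    expert can outweigh many reviewers with no track record."""
    total_weight = sum(r["expertise"] for r in reviews)
    if total_weight == 0:
        return None  # no weighted reviews yet
    return sum(r["score"] * r["expertise"] for r in reviews) / total_weight

reviews = [
    {"score": 0.9, "expertise": 0.9},  # e.g. a domain expert's review
    {"score": 0.2, "expertise": 0.1},  # a reviewer with no track record
]
print(round(content_score(reviews), 2))  # dominated by the expert: 0.83
```

With a plain (unweighted) average the same two reviews would land at 0.55, so the weighting is what lets a Nobel laureate's review outweigh a crowd of non-experts.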

At the same time, each reviewer has an incentive to be truthful and accurate, since his/her score can also be affected if the person tries to game the system or provide a misleading or fraudulent review (in such an event others would give a negative score to the review, backed by evidence, which would lower the score/expertise level of the reviewer).

The critical point here is that what truly matters on this platform is facts. Trying to bring someone down just because you don’t “like” the person or the person’s claims is counterproductive, as it will only end up hurting your own credibility.

The second point is that to write reviews on the platform one would need a verified individual account – this is not much different from having a brokerage account today, since by making reviews on the platform you’re essentially investing in this digital platform (you don’t need an account to view content or reviews, however). This means that troll farms would not be able to materialize in such an environment.

People who try to coordinate malicious reputation attacks on individuals on the platform would also end up losing credibility in the process once they are exposed – which means their efforts too would be counterproductive, as their influence diminishes.

(I do believe that there is a need to protect individuals’ privacy, as well as allow a way to make anonymous reviews at times, and I’m now working out the details of how to make that happen)

Third, I can imagine that as the platform grows there will be some people who will build their reputation on exposing scammers, coordinated groups, or others who try to game the system (they may even build AI tools to systematically detect such attempts). Therefore, the incentive to try and game the system will be minimized even further.


Thank you for your elaboration! You have obviously given this a lot of thought.

There are a lot of interesting concepts in the mechanisms you lay out. But the proof is in the pudding. It is really important that your prototype clearly and intuitively demonstrates how all this works and removes any doubt. My feedback relates to what most people will think or say when first coming upon your project/product website.

You mentioned a “business model”, so maybe you intend to create a startup around this. I would strongly encourage you to create a separate documentation website where you outline in detail exactly how things work, with diagrams and specifications and the like. And I would give the content an open license, maybe CC-BY-SA or even CC0. You can then create a community around this and crowdsource refinement of the concepts, remove loopholes, and - importantly - get tons of feedback and then user testing on your prototype. If you want this to get broader tech adoption I think opening up like this is vital.

An example, on a different area of tech, is how @AndrewMackie documented his Offerbots (to bypass aggregators of attention), and especially the Problem analysis is splendid (I don’t know if Andrew is still working on the solution side, though).

Regarding identity, like I said, there are a variety of open standards in the works. I am curious about your thoughts in this direction. For working software based on current standards (PGP), the FOSS project Keyoxide is very interesting (very similar to Keybase, but better).

This article discussion might be interesting to you @Miken: https://news.ycombinator.com/item?id=25246733

As the model seems to be science, let me throw in my two cents as a scientist (who got a minor – grade A – in philosophy on the philosophy of science, specialty conventionalism, from an established senior expert on conventionalism; he also taught me logic, including Medieval Logic).

Science is usually presented as this perfect mechanism to advance human knowledge, and this is to a large degree true. Mostly, because it has its roots in logic being cultivated in “the dark ages,” such as the European Medieval Period, and logic, as a mathematical theory, being a way to “disagree with perfection,” as I would like to frame it. 2+2=4, no matter what the contingent facts are, i.e., whether we talk about apples, cars, or angels. If two people stick to the rules of logic they can debate about the most offensive, outrageous, and unthinkable conjectures without offense (or any other emotion). In this way, people in the Medieval Period, monks at that (!), could dispute about the existence of God, the most offensive and dangerous question possible at that time.

Due to some advancements and then some open disputes about logic, we are currently in an age where logic has a somewhat weakened position. The result is that one may now even feel fear and danger at my mentioning a topic of dispute of the Middle Ages (!), which means one feels like a monk in the “pre-truth” age or a Soviet mentioning Marx without praise. In the same vein, people find GPT-3 generated text convincing, although it usually violates the rules of logic in ways that are quite simple to detect. This is pre-logical, associative thinking.

How does this relate to the proposal: the logical validity of an argument is independent of facts or expertise and can be checked (and independently of the uncertainty of the facts). Universities (used to) teach this at the entry undergraduate level. In my reading seminars, for instance, I usually had one homework assignment in which I gave an autogenerated text to students, which they had to criticize based solely on its logical structure. This is not as easy as it sounds. They cannot just say “this is bogus”; they have to make a proper logical argument. And this is what I grade. My students used to love this: they felt free like in no other seminar, protected by the framework of logic. They knew that if the defence was logically watertight they would get a good grade from me, even if they held an entirely different opinion than mine. Likewise they knew that personal sympathy (or trying to gain it) would not earn a favorable evaluation from me if the argument was, e.g., authority-based or associative. The result is civil and constructive discussions that can address any topic.
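To make the point concrete: propositional validity really can be checked mechanically, with no reference to facts at all. A minimal truth-table sketch (the argument encodings are my own illustration):

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid iff every truth assignment that makes all
    premises true also makes the conclusion true -- no facts needed."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

def implies(a, b):
    return (not a) or b

# Modus ponens: P, P -> Q, therefore Q. Valid whatever P and Q mean.
print(is_valid([lambda e: e["P"], lambda e: implies(e["P"], e["Q"])],
               lambda e: e["Q"], ["P", "Q"]))   # True

# Affirming the consequent: Q, P -> Q, therefore P. Invalid.
print(is_valid([lambda e: e["Q"], lambda e: implies(e["P"], e["Q"])],
               lambda e: e["P"], ["P", "Q"]))   # False
```

The checker never asks whether P or Q is actually true in the world; it only inspects the argument's structure, which is exactly what my students were graded on.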

Thank you for all the feedback! I’ve been finalizing the Proof of Concept version of the platform and would like to share a demo of it here: https://youtu.be/UsQNLJvO-VE

Will be posting the source code on Github soon (will keep you updated…)

Hi @Miken, I am blown away by the amount of progress you made on this PoC version of your idea.

There’s certainly great stuff in all of this, and I will help spread the message, so that you get the feedback and - hopefully - additional contributors, that this solution deserves.

I’d like to contribute some additional ideas for you to consider:

  • I’ve been highly active at SocialHub and as a Fediverse advocate in general, and feel that making your project decentralized (federated) would be an additional strength. I don’t know about your architecture, but if review metrics were centralized that would be a bad thing, imho.

  • Additionally, the fediverse may be a very good staging ground: first, to get proper feedback from people whose incentives and culture are most likely to lead to a better web, and second, to test-drive the solution at scale.

  • Fediverse uses Linked Data standards, allowing extensions with any type of semantic vocabulary definitions. The metadata you track on an information source might also be kept in this format, which may yield very interesting use cases. There’s also the Solid project, a companion Linked Data technology that provides additional control over personal data.

  • Just like you crowdsource content curation, I feel you should do the same for the algorithms and the model that drives the entire solution. I.e. provide algorithmic transparency and means to discuss and adopt improvements. See also Algo Transparency.

  • I don’t know what open source license you have chosen. This may need some care and attention, and you may choose a copyleft license (e.g. AGPL) to ensure that the solution will continue to be for the people and by the people.

Note that I added an image to the first post, so that it is displayed when sharing the topic link. I just posted to the Fediverse:

(PS. I have an account at Tilvids, a PeerTube instance, and with your permission I can make the video available on fediverse)

Thank you @aschrijver !

First, the source code for the project is now available here: GitHub - TheNewInternet/TheNewInternet: The New Internet Toolbar

Lots of good points here. Let me just address a few (in no particular order)…

Algorithmic transparency is super important; it is in large part the foundation of the project - people need to have full confidence that the system is unbiased and evaluates each individual’s score in a clear, objective, and transparent manner. Further development of these algorithms should be done openly and with public input.

Regarding the license - I published it under the MIT License. I’m certainly no expert in this field and have little experience with licensing, but from what I understand such a license is pretty commonsensical and non-restrictive. The whole idea of the New Internet is to give people direct control over their creative work online, and allow them to make money by creating high-quality content. It would therefore make little sense for anyone to give up any share of their creativity to some for-profit organization when the system can work perfectly well without that. But if some org finds a use for this product, I’m not opposed to them using it.

I will look more into the Fediverse since I’m not that familiar with it, and will check on Mastodon also. One of the most important aspects of the New Internet is integrity of data (of the reviews people make, and how these are affected by other people’s reviews). If this issue can be addressed within the Fediverse framework it may be something to consider. This is really a technical part of the project that I haven’t yet fully worked out due to my (very) limited expertise - I pretty much just started coding from (almost) scratch about 1.5 years ago in my spare time (that would also explain why my code looks like it was done by a total newbie… it was :smiley: )


I addressed some ideas very briefly in the post (toot) thread mentioned above (note that threads become forked on the fediverse, and you need to click the root post to get the entire overview). But I invite you to post to Fediverse Futures on SocialHub for more technical ideation, or use Fediverse Futures on Lemmy (a federated Reddit alternative) for more non-technical brainstorming.

The strongest feedback I got, @Miken, and which I agree with myself, is to remove the monetization or at least make it entirely optional.


Other feedback (via LinkedIn, from a highly experienced architect) is that it is well-executed as a PoC, but naive, and that if it picks up steam at all it will only be on the margins, in specific niches. That the economics aren’t there (regularly paid moderators work best), that it can still be gamed (bubbles and echo chambers will appear), and that it lacks sociological and human-psychology depth.

That feedback is a bit harsh, but may be fair. In any case, calling the org/project “The New Internet” invites it. Going forward, I think you should reposition there: take a modest stance and leave out the visionary but overly ambitious slogans. You should test-drive in smaller environments, where you are most likely to find people to help you get further with the idea.

Like I said above, the free software movement and the Fediverse are interesting, both for culture and technology. You’ll also get highly critical reactions there, especially about the scoring and the algorithms, but that’s only good for honing the solution further.

And you should try to reach out to @metasj, see if he has time, and ask for feedback. He’s a board member at Underlay, part of Knowledge Futures Group, and an expert in this field. Here on Twitter: https://twitter.com/metasj
