A few more points:
I started with a business model that incentivizes high-quality content and factuality. Everyone on the platform has a stake in its success: the higher the quality of reviews, and the better individuals’ scores reflect their knowledge and expertise, the more valuable the “digital currency” (relative to the dollar, for example) that each member holds becomes. You also benefit from the success of other people on the platform and from everyone having access to as much knowledge as possible, since that grows the platform and makes it more influential, which in turn increases the value of the digital currency.
I agree that not all content can be verified, and that mere opinions are very different from statements of fact – that is fine. Obviously the platform will be most effective on articles dealing with the hard sciences (for example), but it will still be quite effective in many areas where current social media is failing: determining basic facts about news and current events, reviewing the track record of politicians, financial analysts, scientists, and other experts, and so on. The platform will also be able to determine whether an opinion piece is fact-based. These are all areas where the current systems are not only failing but actively undermining our sense-making capabilities.
The platform will also be able to transform how research is done: researchers will no longer depend on government or corporate grants to fund their work, or on science journals with paywalls. Instead, they will be able to conduct truly independent research (with investment from other platform members) and publish directly to the platform, earning money based on the credit and importance of their discoveries.
Regarding “voting” on the platform & trying to game the system:
I agree that content viewed as more important or popular would attract far more reviews on the platform, and its score would therefore more accurately reflect its credibility, while less important content would have fewer reviews. I also agree that more people may try to game the reviews of such peripheral content; however, the platform still provides the incentive structure and tools to minimize that kind of fraud.
This is done in three ways: first, people don’t “vote” on the credibility of content on the platform. Instead, each person who writes a review has to provide sources to support his/her claim. People who have more expertise in a subject would have more weight in determining the overall score for the content – a physics Nobel laureate’s review would have more weight on the subject than that of 100 people with no background in physics, for example.
At the same time, each reviewer has an incentive to be truthful and accurate, since his/her own score can be affected by attempts to game the system or by misleading or fraudulent reviews: in such a case, others would give the review a negative score, backed by evidence, which would lower the reviewer’s score and expertise level.
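To make the two mechanisms above concrete, here is a minimal sketch of expertise-weighted scoring combined with a reviewer penalty. Everything here is illustrative, not part of the proposal: the class names, the 0–1 expertise scale, the ±1 verdicts, and the fixed penalty size are all assumptions of mine.

```python
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    expertise: float  # hypothetical 0..1 expertise score in the content's subject

@dataclass
class Review:
    reviewer: Reviewer
    verdict: float    # +1 supports the content's claims, -1 refutes them
    sources: list = field(default_factory=list)  # citations backing the verdict

def content_score(reviews):
    """Expertise-weighted average of review verdicts, in the range -1..+1."""
    total_weight = sum(r.reviewer.expertise for r in reviews)
    if total_weight == 0:
        return 0.0
    return sum(r.verdict * r.reviewer.expertise for r in reviews) / total_weight

def penalize(reviewer, penalty=0.2):
    """Lower a reviewer's expertise when a fraudulent review is exposed."""
    reviewer.expertise = max(0.0, reviewer.expertise - penalty)

# One expert outweighs many non-experts, as in the Nobel laureate example:
laureate = Reviewer("laureate", expertise=0.99)
crowd = [Reviewer(f"user{i}", expertise=0.005) for i in range(100)]
reviews = [Review(laureate, +1.0, ["peer-reviewed paper"])]
reviews += [Review(u, -1.0) for u in crowd]
print(round(content_score(reviews), 2))  # positive: the expert's weight dominates
```

The design choice worth noting is that the penalty feeds back into the weighting: once a reviewer is penalized, every future review they write counts for less, which is what makes fraudulent reviewing self-defeating.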
The critical point here is that what truly matters on this platform is facts. Trying to bring someone down just because you don’t “like” the person or the person’s claims is counterproductive, as it will only end up hurting your own credibility.
The second point is that writing reviews on the platform would require a verified individual account – not much different from the brokerage accounts people hold today, since by writing reviews you are essentially investing in this digital platform (no account is needed merely to view content or reviews). This means that troll farms would not be able to take root in such an environment.
People who try to coordinate malicious reputation attacks on individuals would also lose credibility once they are exposed – which means their efforts, too, would be counterproductive, as their influence would diminish.
(I do believe that there is a need to protect individuals’ privacy, as well as allow a way to make anonymous reviews at times, and I’m now working out the details of how to make that happen)
Third, I can imagine that as the platform grows, some people will build their reputations on exposing scammers, coordinated groups, and others who try to game the system (they may even build AI tools to detect such attempts systematically). This would minimize the incentive to game the system even further.