Had a chat yesterday with a random person, with whom I discussed technology addictions and humane tech principles. At some point he asked me how he could find out if a specific app was compliant with such principles, and whether the CHT, for instance, gave such “ratings”.
I believe that could be a way to alert users of the most popular apps to all the “bad practices” that are not compliant, such as dark patterns, confusing privacy policies, etc. Is it something the CHT is considering doing at some point?
Yes, I proposed an idea like this some time ago, and it is waiting on this forum to be picked up:
I see the forum as a first-stage archive, a body of raw information that is waiting to be processed. In this regard old threads are not forgotten posts far down the topic list, but rather inspirations and TODOs.
You can filter on the ‘idea’ and ‘collaborate’ tags to find more of them, and of course, there are many, many posts that contain the fruits of ideas that we can pick up on as soon as there is enough interest. If there is interest, but no follow-up (like with the book club idea) then the idea remains idle in the archive.
PS. I should make this part of the FAQ, i.e. explaining how projects can be initiated on people’s own initiative.
On this idea, I believe the rating criteria should be established by the CHT core team, which is best qualified, though of course welcome to take ideas from the community.
As to coming up with ratings (which could involve scores per criterion and a list of blatant violations of humane tech principles), whose job would that be? The community? Or a group of industry experts?
I love this. I really do believe that my company, reallyread.it, would get the highest-level, LEED Platinum, “100% Humane” certification.
We should do this, and I’d love to participate.
It’d be cool if it were decentralized. Could we do this as a wiki? Or just some Yelp-like 5-star voting thing?
Just thinking about this topic again. Very interestingly, this person I talked to was very much appreciative of all that technology had to offer. And despite my best attempts to tell him more about its negative sides, he was not receptive. “Don’t you realize that tech companies document your entire life?” No reaction, save perhaps “I have nothing to hide.”
He, however, was curious about whether ratings existed. I take this experience as very enlightening. People don’t like a lecture, but they love to check on things they do. I am sure that if we had such a rating system in place, it would take 5 seconds to convince anyone to check whether their favorite app is compliant or not. They could see alternatives that rate better. This would ultimately cause the leading tech companies to consider changing their business models to catch up. Very powerful!
Again, I need to solicit our severely over-busy, but beloved and respected @aschrijver, to tell us your views on how we could make this work. The way I see it:
Come up with a list of criteria, such as privacy, security, dark patterns, etc., as well as lists of the darkest aspects of each app.
Build a website (or sub-domain here) where people could quickly see the ratings of their favorite apps as well as alternatives.
Big, very important question: how do we enable users to rate, when most are not equipped to assess objectively, and when score manipulation may take place? I personally believe ratings should come from this community, following a vetting process by a designated member here.
Rating an app would likely involve some research (lots of well-researched rankings and discussions are widely available on the web) as well as personal experience with the specific app. If anyone would be interested in becoming a rater, please post here.
If we go for this project (which I trust would be easy to set up and contribute to), PLEASE, let’s not park it as another initiative on Github, but discuss it here.
Your bullets (which I’ve edited to be numbers) are spot on, @anon51879794.
Before I address each of them in turn, first this: The forum is always the first place to discuss/elaborate ideas, and Github second when it comes to going into the more technical details, and even then the high-level stuff should go to the forum (reporting back).
Point 1: This is something that is very valuable whether or not this becomes a project. It aligns with the 4th strategic pillar “Apply Humane Design”, which requires us to know what actually constitutes humane design (an oft-discussed topic, with its own category even). So this can go ahead no matter what.
Point 2: A static website (displayed as a subdomain of the community website, e.g. at appratings.humanetech.community) would indeed be relatively easy. It would be somewhat similar to the ‘endorsed’ apps list on humanetech.com / The way forward, but with more detail. It would necessarily be limited in scope, because it would be maintained by HTC members who are ‘in the know’. So we should focus exclusively on the most popular apps (at least at the start).
Point 3: The most powerful and interesting option, which requires the project to be not a static website but a dynamic web application that lets anyone in the public crowdsource the information. There would be no limit to the apps that can be rated, and there are many ways to make it a high-value place to visit frequently.
It is also the most complex, and - as you point out - care must be taken that the ratings cannot be ‘gamed’. This requires a well-thought-out ‘reputation system’ for ratings. I have some experience with this. While reputation systems may look simple, they most often are not.
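To give a flavour of why: a naive “count the upvotes” score is trivially gamed by spamming low-effort ratings or by sock-puppet accounts. Purely as an illustration (the formula and all constants below are invented, not a proposal), a reputation score might damp raw counts and decay stale activity:

```python
import math

def reputation(upvotes_received: int, ratings_submitted: int,
               days_since_last_activity: int) -> float:
    """Illustrative reputation formula (invented for this sketch):
    - log damping, so 1000 upvotes is not worth 1000x one upvote
    - dividing by the (damped) submission count discourages spamming
      many low-effort ratings
    - exponential decay, so stale reputation fades over roughly 90 days
    """
    if ratings_submitted == 0:
        return 0.0
    quality = math.log1p(upvotes_received) / math.log1p(ratings_submitted)
    decay = math.exp(-days_since_last_activity / 90)
    return quality * decay
```

Even this toy version already has tunable knobs (the damping, the decay window) that determine how easily it can be gamed, which is exactly where the complexity comes from.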
When making this a project, the best way forward IMHO would be: 1 → 2 → 3
So a project that can evolve, and is initially handled by the community and then gradually opened up to a broader audience.
Here’s how I think I can help, and how my own research overlaps. The issue (and sought-after design) is a methodical system that rates apps in accordance with CHT (and HTC) criteria. Its objective would be to rate unfavorable apps so that their competition can gain an edge and drive them out of business. The system could be an HTC forum, a Github project turned into an app, or, as @aschrijver mentioned, it could be…
This requires a “1 → 2 → 3” approach because of two things: a. the initial size (and thus the political power) of the centralized or networked rating authority, and b. the fact that the internet itself (through which all apps are connected) remains unregulated. Tim Cook recently called for “sweeping U.S. regulations” without mentioning who the violators were. The mainstream news believed he was primarily referring to FB (an app) and Google (a search engine). The U.N. now has an “Internet Governance Forum,” but not everyone globally agrees on how to regulate the entire web (which would be de facto regulation of every app connected to it).
Here, Georges hits the nail on the head. Industry experts are favored because they are experts, but nobody works for free, and when money enters the equation then so do conflicts of interest. On the other hand, lay people shouldn’t have to congregate on this forum to solve the abuses Tech Giants are causing. (Google “a-blueprint-for-a-better-digital-society” or PM me for a copy.) Silicon Valley guru Jaron Lanier gives 10 reasons to quit FB and calls it a “bummer machine.” Both he and I believe the problem is the “gap between big tech platforms and the individuals they harvest data from.”
Too many conflicts of interest exist to ask the Tech Giants to address the problem they profit off of. Governments have little power because network effects have given platforms disproportionate power, and [Jaron believes] the complexity of the digital economy makes it impossible to regulate in detail. I agree with Jaron only to the extent that such regulation is restricted to being derived by centralized experts alone.
The salient question seems to be: “how fully automated can an ultimate crowdsourced solution be?” My offered strategy would be to empower lay people to rate all apps “subconsciously” by allowing them to leverage their own facts in their own private epistemic bubbles. That “processed data” would give the crucial “feedback and votes” which the experts (who would build the rating system) would then crunch (probably with AI) to provide statistics to be publicly expressed on the web at some centralized location.
I agree. In order to be fully trusted and thus free of any conflicts, the system that collects these votes should be evenly accessible in a decentralized way. Lay people could live their lives without fear of corruption, and without having to consciously debate each app that may be irritating them. Note that this is the same reason why people elect representatives in government. If they fully understood all governmental functions, they wouldn’t elect anyone to represent them; they’d run for office themselves.
Can we create a decentralized app powered by popular votes? A personal dashboard on their own internal LAN of networked personal devices could lessen the psychological effect of centralized experts / politicians (even if, in fact, the dashboard connects to a live database for them). As a global system it would have a chance to overpower the Tech Giants (who’d be driven out of business if not obedient). It would become a unifying global power, unlike the divided power of all nation States whose Constitutions were drafted before the internet was invented.
But my research also reveals (and @aschrijver has already experienced this directly with the management of this forum) that the bigger an organization gets, the more unwieldy and difficult to manage it becomes. So as we initiate this project through the narrow first level (but with our ultimate aim on the broad third level approach) we should consider the balance between required human power versus the automation of as many factors in our framework as we can incorporate. For automation I’m focusing on AI and the modular early stage development of quantum computers. Hope this helps.
The problem with allowing anyone to rate is that, without doing some meaningful research, one is rarely equipped with the right knowledge to rate objectively. For instance, how do I know that a specific app is a likely violator of my privacy? Based on my casual experience of this app, it might be impossible to tell. However, reading educated articles about the app will help me come up with a more accurate rating.
I recently read a very good article that ranked various browsers against similar criteria, which contributed to an overall score for each. This is something authorized raters could use.
Also, @Hex, while you made some very relevant points, the objective would not be to drive the incumbents out of business, but rather motivate them to adopt humane tech designs, lest they rank poorly compared to other apps.
Yes, exactly. That is why I think it should start with your bullet point 2 and an ‘expert’ group of dedicated community members doing the ratings at first.
But when you extend it to allow anyone to rate, then you should make it an easy task, with maybe a drill-down to more specific / specialist areas, and afterwards let others ‘rate the rating’ (via e.g. upvotes) if they agree. So you get a crowdsourced consensus.
Could be something like this (just brainstorming):
A top-level checklist with Humane Tech criteria: “Privacy Policy present?”, “About page present?”, “Monetization method clearly explained?”, etcetera.
When marking a top-level entry, you can drill down into a sub-list / sub-page: “Privacy policy mentions what data is collected?”, “PP is easy to understand, written for normal people?”, “PP specifies what data is shared to 3rd parties?”, etc.
Each validation criterion should have an option to skip it, e.g. “Don’t know”, “Didn’t evaluate”
After people submit their validation checklist, the aggregate rating for the app is re-calculated.
Other people should still be able to see individual ratings, and give them an Upvote or Downvote that contributes to the weighting in the aggregate rating.
Also, frequent submitters of ratings that get many upvotes gain “reputation”. Their ratings are weighted a bit more heavily in the aggregate rating.
Finally each individual rating should have a free text area to leave a comment where the rater can state their biggest Humane Tech focus or concern.
The comments are similar to reviews in the app store and displayed as a list below the aggregate rating, ordered by the Likes they receive.
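The checklist / skip / upvote / reputation mechanics above could be prototyped in a few lines. A minimal sketch, where all names, defaults, and the weighting formula are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    # criterion name -> True / False, or None for "Don't know / Didn't evaluate"
    answers: dict
    upvotes: int = 0
    downvotes: int = 0
    rater_reputation: float = 1.0  # earned from previously upvoted ratings

def rating_score(r: Rating) -> float:
    """Fraction of evaluated criteria that passed; skipped ones don't count."""
    evaluated = [v for v in r.answers.values() if v is not None]
    return sum(evaluated) / len(evaluated) if evaluated else 0.0

def rating_weight(r: Rating) -> float:
    """Hypothetical weight: rater reputation, nudged by community votes."""
    community = 1.0 + 0.1 * (r.upvotes - r.downvotes)
    return max(0.1, r.rater_reputation * community)

def aggregate(ratings: list) -> float:
    """Weighted average of all individual ratings for one app (0.0 to 1.0)."""
    total_weight = sum(rating_weight(r) for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(rating_score(r) * rating_weight(r)
               for r in ratings) / total_weight
```

So two checklists of {privacy policy: yes, about page: yes} and {privacy policy: no, about page: yes} from equally reputable raters would aggregate to 0.75, and every new submission or vote simply triggers a recalculation.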
Thanks @aschrijver for your thoughts on how ratings would go. Any idea, technically, on how to build the rating platform? I was thinking maybe ready-made templates/hosts are available, so did a bit of research, but came back empty-handed.
I found a few startups that analyze customer reviews and are powered by bots and AI, but my concern is that they are all for-profit algorithms and platforms. I think open source components and platforms are crucial. So we’d need an open source platform with 1. natural language processing (“NLP”) connected to 2. artificial general intelligence (“AGI”). All run by a centralized, clickable, web-based graphical user interface (“GUI”) in order to:
Increase ease of use through a GUI-based web page (as opposed to the more specialized Github, for example), and thus increase user engagement (which would grow the data set of that central web page) and increase the total number of raters through the network effect;
Allow anyone to rate apps without first having to perform any meta-conscious meaningful research. Experts may decide the criteria, but AGI connects user data to those criteria both before and after use of the subject app being rated; e.g., a person’s productivity in any specialized field (social and professional) both before and after the subject app use, a person’s moodiness before and after the subject app use, etc. The app user is then given a chance to consciously review and change those automated vote results before posting any of them;
Identify genuine “frequent submitters of ratings” from the biased frequent submitters (such as individuals employed by app makers to unfairly tilt the results) by processing all data through behavioral biometrics;
Maximize the fruition of the “open ended” text comment by processing it through 1. the NLP (to first understand the basic notion or premise of the feedback), and then 2. Process those NLP results through AGI to compare to the data of all other text box comments in order to categorize and organize the results (or delete them altogether if the processed data equals unusable gibberish);
Link the text comment box to the settings of the subjective app ratings GUI device to continuously improve subjective app rating performance (not just linked to the results of the rated apps).
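As a counterpoint to the NLP/AGI pipeline above, the comment-triage step could start far simpler. The sketch below uses nothing but keyword matching plus a crude length/vowel heuristic for gibberish; the categories, keywords, and thresholds are all invented for illustration:

```python
# Hypothetical first-pass triage of free-text rating comments: no NLP/AI,
# just keyword matching and a crude gibberish check.
CATEGORIES = {
    "privacy": ["privacy", "data", "tracking", "gdpr"],
    "dark patterns": ["dark pattern", "nagging", "manipulative", "addictive"],
    "transparency": ["policy", "terms", "monetization", "ads"],
}

def looks_like_gibberish(comment: str) -> bool:
    """Crude heuristic: too short, or too few vowels to be real words."""
    text = comment.strip()
    if len(text) < 10:
        return True
    vowels = sum(c in "aeiou" for c in text.lower())
    return vowels / len(text) < 0.15

def categorize(comment: str) -> list:
    """Return all categories whose keywords appear in the comment."""
    if looks_like_gibberish(comment):
        return []
    lower = comment.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(w in lower for w in words)]
```

For example, `categorize("The privacy policy hides what tracking data is shared.")` would file the comment under both “privacy” and “transparency”, while unusable gibberish is dropped, all without any machine learning.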
And yes Georges, I agree, the idea is not for the app ratings GUI device to drive any app out of business, but rather to add favorable reviews to their competitors in an exponential way. The more favorable reviews are added to their competitors, the more those competitors put pressure on the problematic app makers to conform to the criteria set by the experts, even if those criteria cost more and are thus less profitable to those app makers.
Let’s not get too deep into possible technical solutions. That is too far beyond the topics of this forum, and is where things are better taken to Github. Going further means elaborating in more detail what the functionality should be now, and how it is likely to evolve; investigating the components and services that are available, programming needs, language, etc.
Also I am quite sure that NLP and AI are not needed here.
This needs a separate project (though its development could be tracked by an awareness campaign). As part of Idea - Humane Technology Logo Program I already created a repository at the time, which I just moved to humanetech-community:
Note: I have no problems at all with elaborating on this project further (any member can start initiatives), but - as I have stated in other places - my main focus and priority will be on Aware Prepare for the foreseeable future.