I’ve been collecting ideas about potential policy positions to regulate tech and reduce screen addiction. People on this forum are already thoughtful about designing humane platforms, but I’m looking for systemic solutions, and for feedback on them. (I’ve written them up in prior comments and this Medium post.)
Social media platforms have invested in algorithms that tell them the optimal way to order content in a person’s news feed. They do this by mathematically defining metrics like “relevancy,” which is the back-end name for one of the direct contributors to what people experience as screen addiction.
It occurs to me that users may not want the best recommendations all the time, for example when they’re browsing YouTube after midnight and find they can’t bring themselves to quit.
Most recommendation systems work by, at some point, ranking the items being recommended according to some definition of “best.” Take dating apps: some rank recommendations by “profile similarity” or “predicted compatibility,” while others use “similarity along dimensions of preferences.” And provided things haven’t changed too much since this paper was published, the YouTube recommender uses two neural networks: one to generate recommendation candidates for each user, and another to rank those candidates.
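To make the two-stage idea concrete, here’s a toy sketch, not YouTube’s actual system: a cheap candidate generator narrows a large corpus, then a heavier ranker orders the survivors. All function names, item fields, and scoring formulas below are invented for illustration.

```python
def coarse_score(user, item):
    # Stand-in for a cheap retrieval model (in practice, something
    # like a dot product of user and item embeddings).
    return item["topic_match"]

def fine_score(user, item):
    # Stand-in for a heavier ranking model that weighs more signals.
    return 0.7 * item["topic_match"] + 0.3 * item["freshness"]

def generate_candidates(corpus, user, k=100):
    # Stage 1: score everything cheaply, keep only the top k.
    return sorted(corpus, key=lambda it: coarse_score(user, it), reverse=True)[:k]

def rank(candidates, user):
    # Stage 2: order the surviving candidates with the finer model.
    return sorted(candidates, key=lambda it: fine_score(user, it), reverse=True)

# Fake corpus with deterministic made-up feature values.
corpus = [{"id": i,
           "topic_match": (i * 37 % 100) / 100,
           "freshness": (i * 17 % 100) / 100}
          for i in range(500)]
user = {"id": "u1"}
recs = rank(generate_candidates(corpus, user, k=50), user)
```

The point of the split is cost: the coarse pass can afford to touch millions of items, while the expensive model only ever sees the shortlist.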
A regulation could require that users be allowed to choose whether they see content from the top, middle, or bottom third of the ranked list of recommendations, for whatever definition of “best” is being used to rank it. Or users could be given a slider controlling how “relevant” they want their feed to be.
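Either control is a thin layer on top of the ranking the platform already computes. A minimal sketch, assuming the platform exposes its ranked list (the function names and the linear slider mapping are my own invention):

```python
def feed_band(ranked_items, band):
    # Return the top, middle, or bottom third of an already-ranked list,
    # for whatever definition of "best" produced the ranking.
    n = len(ranked_items)
    cut1, cut2 = n // 3, 2 * n // 3
    return {"top": ranked_items[:cut1],
            "middle": ranked_items[cut1:cut2],
            "bottom": ranked_items[cut2:]}[band]

def feed_slider(ranked_items, relevance, feed_size=10):
    # relevance = 1.0 shows the very top of the ranking (today's default);
    # relevance = 0.0 shows the very bottom; values in between slide
    # a fixed-size window down the list.
    n = len(ranked_items)
    start = round((1.0 - relevance) * max(n - feed_size, 0))
    return ranked_items[start:start + feed_size]

# Example: items already ranked best-first by the platform.
items = list(range(30))
```

The key design point is that neither control requires the platform to change its ranking model at all; it only changes which slice of the ranking the user is served.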
What do you think?