Potential Tech Regulation: Let users control the effectiveness / addictiveness of recommendations

Hi,

I’ve been collecting ideas for potential policy positions to regulate tech and reduce screen addiction. Even though people on this forum are already enlightened about designing humane platforms, I’m looking for systemic solutions, and for feedback on them. (I’ve written them up in prior comments and this Medium post.)

Social media platforms have invested in creating algorithms that tell them the optimal way to order content on a person’s news feed. They do this by mathematically defining things like “relevancy,” which is the back-end name for one of the direct contributors to what people experience as screen addiction.

It occurs to me that users may not want the best recommendations all the time; for example, when they are browsing YouTube after midnight and find that they can’t bring themselves to stop.

Most recommendation systems work by, at some point, ranking the things being recommended according to some definition of “best.” Take dating apps, for example: some may rank recommendations by “profile similarity” or “predicted compatibility,” while for others it might be “similarity along dimensions of preferences.” And provided things haven’t changed too much since this paper was published, the YouTube recommender uses two neural networks: one to generate recommendation candidates for each user, and another to rank those candidates.
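To make that two-stage structure concrete, here is a minimal sketch in Python. The scoring logic is a made-up stand-in; it only illustrates the candidate-generation-then-ranking pattern described in the paper, not any platform’s actual models.

```python
# Illustrative sketch of the two-stage "candidate generation -> ranking"
# pattern. The scoring below is a toy stand-in, not anyone's real algorithm.

def generate_candidates(user_profile, corpus, n=500):
    """Stage 1: narrow a huge corpus down to a few hundred candidates."""
    # In a real system this is a learned model; here we just take a slice.
    return corpus[:n]

def relevance_score(user_profile, item):
    """Stage 2a: assign each candidate a 'relevance' score for this user."""
    # Toy stand-in for the proprietary scoring network.
    return len(set(user_profile["interests"]) & set(item["tags"]))

def rank_recommendations(user_profile, corpus):
    """Stage 2b: order the candidates from most to least 'relevant'."""
    candidates = generate_candidates(user_profile, corpus)
    return sorted(candidates,
                  key=lambda item: relevance_score(user_profile, item),
                  reverse=True)
```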

A regulation could require that users be allowed to control whether they see things from the top, middle, or bottom third of the list of recommendations or pieces of content, for whatever definition of “best” is being used to rank them. Or users could be given a slider for how “relevant” they want their feed to be.
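Here is a rough sketch of what that user control could look like, layered on top of whatever ranked list the platform already produces. The tier names and the 0–1 slider scale are my own placeholders, not anything from an existing API.

```python
def apply_relevance_tier(ranked_items, tier="top"):
    """Serve only the third of the ranked list the user has chosen.

    `ranked_items` is assumed to already be sorted best-first by the
    platform's own definition of "best"; this control only governs the
    final selection step.
    """
    third = max(1, len(ranked_items) // 3)
    tiers = {
        "top": ranked_items[:third],
        "middle": ranked_items[third:2 * third],
        "bottom": ranked_items[2 * third:],
    }
    return tiers[tier]

def apply_relevance_slider(ranked_items, relevance=1.0):
    """Alternative: a 0.0-1.0 slider that skips past the most 'relevant' items."""
    start = int((1.0 - relevance) * len(ranked_items))
    return ranked_items[start:]
```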

What do you think?

2 Likes

That seems like a reasonable proposal, and one that platforms could even choose to adopt on their own before any regulation.

I think an added regulation that might make sense is a media-native “warning” suited to whatever platform is being used. For example, if YouTube is feeding you, say, 5 videos in a row, the 6th video is a :30 PSA that warns the user of the effects of the algorithm (just like the warning labels on cigarettes, certain chemicals, gas stations, etc.). It could be applied to all devices, apps, games, etc.
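On the platform side, the mechanism could be as simple as the sketch below; the 5-video interval and the `psa` item are just placeholders for the idea.

```python
def interleave_warnings(feed, psa, videos_between=5):
    """After every `videos_between` videos, make the next slot a warning PSA."""
    out = []
    for i, item in enumerate(feed, start=1):
        out.append(item)
        if i % videos_between == 0:
            out.append(psa)  # e.g. the 6th, 12th, ... slot becomes the :30 PSA
    return out
```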

1 Like

I like where you are going with this. Though I don’t have any firm input for you on this, I’d like to respond to:

What I often see among long-time (the older) internet users on Hacker News and Mastodon is a wistful reminiscing about how wonderful the ‘Web of Old’ was in those early days of the internet. The strangest websites, the most interesting subjects, amateurishly set up but still cool: that was the norm for what one stumbled onto when browsing the web.

There was not much in the way of internet search engines - AltaVista, mostly - and web surfing consisted of following your nose, i.e. clicking through to the most interesting-looking links.

A lot of that has now changed. The internet has ‘grown up’. It is now in the corporate domain, and professional high-tech search engines - with full-blown surveillance capitalism built in at the back-end - lead you along the most desired paths. The algorithms - almost by definition, I think - put you in filter bubbles.

The fun thing is that - when yet another grizzled internet oldie is grumbling about the old days - there are others who point out that that same cool internet still exists. It is just harder to find, and polluted with the serious, business-like stuff that gets preference.

Btw, the reason that many people find Mastodon (or, more broadly, the Fediverse) so attractive is that it feels as anarchistic and fresh as the early web did. Also - even though almost killed off by FB and Google - RSS feeds are still all around, and they let you get the good feeds with no algorithms.


How would the regulation of algorithms work? How do you check which algorithms are at play when a) they constitute a company’s deepest secrets - their Coca-Cola formula - and b) the deep learning behind them has made the algorithm a black box, where even its owners do not know the formula?

Yes, that is a good idea. The text could also warn about any bias that may be at play, and there could be a place where you can report bias. With YouTube, researchers have uncovered deliberate efforts to tweak the algorithm and lead people to conspiracy theories, because that keeps them online the longest. See e.g. (copied from here)

Agreed; it’d be great if the platforms chose to adopt this, and a whole suite of “humane tech policies,” voluntarily. I could certainly imagine a “humane tech campaign” whose conceit is that the companies ‘must just not know’ about specific humane tech principles like the one mentioned above, and thus the campaign is one of education and advocacy. That framing might allow them to save face.

These are great questions. If the tech companies adopt it voluntarily, it will be much harder to enforce, unless a third party like the CHT is invited to be the enforcer. But regardless: while the recommender systems and deep-learning networks may be un-auditable black boxes, all of these companies use some sort of quantified indicator – for video recommendations, social media content, dating profiles, photos, tweets, you name it – to put the recommended items in an order.

To use a food/menu metaphor, let’s say that you are only interested in buying the cheapest thing on the menu at a restaurant that makes up dishes on the fly. Most of the fancy algorithms a) create customized menu items and b) price them. Once that’s done, a different, generic piece of software orders the food by price and presents you with a menu where the thing you are most likely to buy is at the top of the list.

The “proprietary” secrets have a lot more to do with steps (a) and (b) than with the sorting and ranking.
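In code terms the split looks roughly like this; it is a toy version of the menu metaphor, not any company’s pipeline. Steps (a) and (b) are the secret sauce, while the final step is a plain sort that a regulation, or a user setting, could reasonably reach.

```python
# Toy version of the menu metaphor. Inventing and pricing the dishes is the
# "proprietary" part; the final ordering is a generic sort anyone could audit.

def create_menu_items(diner):
    # (a) proprietary: generate customized items for this diner (stubbed here)
    return ["soup", "pasta", "steak"]

def price_item(diner, dish):
    # (b) proprietary: price (score) each item for this diner (stubbed here)
    return {"soup": 4.0, "pasta": 9.0, "steak": 22.0}[dish]

def present_menu(diner):
    dishes = create_menu_items(diner)
    # Generic step: sort by price, so the item this diner is most likely to
    # buy (the cheapest) sits at the top of the menu.
    return sorted(dishes, key=lambda dish: price_item(diner, dish))
```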

One way to enforce this would be for a government agency to effectively say, “If you don’t allow users the option of setting how ‘relevant’ the content they see is, or you don’t do it faithfully, then we will step in and impose limits on the relevancy ourselves.”

And @NSaikiwiki, I like your warning idea; it’s like a food label or a movie rating. Except this is a public health issue, which brings me right 'round to the cigarette warnings. Maybe we can brainstorm what kinds of after-effects the warning would have to include? For YouTube, maybe: “Will increase your physiological stress and cause trouble sleeping at night”?

1 Like

Yes!

I’m so glad you brought this up, because as soon as I get the time I’m going to present a simple campaign to the community, and your ideas have influenced it before you even posted this. I hope to get it posted soon; I already have the concept, I just need to put it to pen. Swamped with other stuff, but stay tuned.

tldr: exactly what you said

4 Likes

I’m looking forward to seeing what you are talking about!

1 Like

posted here a few hours ago

make sense?