[Feedback Requested!] Proposals for Specific Tech / Algorithmic Regulations and Enforcement Mechanisms (Re: Ads and Notifications)



Hi Everyone,

I’m very interested in helping develop, and collaborating on, a list of policy positions that could be used to a) help sharpen the definition of “Humane Design,” b) inspire legislation, and c) further CHT’s efforts.

I’ve written up three ideas below. They are proposals, and I’m thinking it’d be nice to get a discussion going, as well as discuss other problems and potential policy solutions that I haven’t typed up.

Also, please let me know if there is a better format for this, as I’m new here.

1. Displaying Ads:

Description of Issue: Sometimes, people can’t remove digital ads, even if they want to.

Potential policy solution: Make it a legal requirement that any website or app that monetizes via ads give users the option to not see ads by paying for the product.

Enforcement Mechanism:
The obvious first step in the enforcement chain could be personal reporting. Additionally, apps and websites could be programmatically evaluated for whether they are
a) likely running ads, and
b) offering a paid option that does, in fact, shut off the ad displays.
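The programmatic evaluation in (a) and (b) could start as simply as scanning a page’s HTML for known ad-network domains and for “remove ads” upsell language. A minimal Python sketch, where the domain list is illustrative (not exhaustive) and the function names are my own invention:

```python
import re

# Illustrative, non-exhaustive list of ad-network domains.
AD_NETWORK_PATTERNS = [
    r"doubleclick\.net",
    r"googlesyndication\.com",
    r"adsystem\.amazon",
    r"taboola\.com",
]

def likely_running_ads(html: str) -> bool:
    """Return True if the page HTML references a known ad network."""
    return any(re.search(p, html, re.IGNORECASE) for p in AD_NETWORK_PATTERNS)

def offers_ad_free_tier(html: str) -> bool:
    """Very rough signal: does the page advertise an ad-free paid option?"""
    return bool(re.search(r"(ad[- ]free|remove ads|go premium)", html, re.IGNORECASE))
```

This only establishes that an ad-free tier is *advertised*; verifying that paying actually shuts off the ads would still require the personal-reporting step above.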

The next question is to whom are these things reported?

For apps, assuming the aforementioned idea is eventually signed into law, one enforcement mechanism would be for Attorneys General to be able to hold App Stores and their parent companies liable for the apps they distribute. Another would be a fine levied on the app maker. Maybe both are needed?

For websites / web-apps / web-platforms, the company that owns the website should be on the legal hook. It’s interesting to think about including cloud providers, but I don’t know whether that a) fits with legal precedent, or b) is always technologically feasible.

Then again, maybe the best case for enforcement would involve the creation of something like a Consumer Financial Protection Bureau for technology. (Or perhaps the Federal Reserve should be in charge of regulating tech apps, and every time the economy needs a boost it can change the interest rate as well as parametric legal requirements related to how “addictive” certain apps and websites can be…)

Further details and considerations:

A possible extension to this policy: require companies that primarily make money via ad monetization to cap what they can charge for the ad-free option at X times the median per-user value of their ad sales, if they aren’t adding any customer value beyond “not displaying ads” (i.e., no premium features, etc.). To enforce this, some sort of log of monthly ad revenue and monthly user counts could be filed with a government agency. (The model could be taken from the SEC’s playbook, wherein all hedge funds over a certain size must report their quarterly stock positions via a 13F filing.) One tricky spot would be defining who counts as a user, so companies have more trouble fudging the median-value estimate.
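The cap above can be sketched numerically. Assuming the filing contains parallel monthly series of ad revenue and user counts, the maximum ad-free price would be X times the median monthly ad revenue per user. A hypothetical Python sketch (the multiplier `x` and the filing format are my assumptions, not anything proposed by an agency):

```python
from statistics import median

def max_ad_free_price(monthly_ad_revenue, monthly_users, x=3.0):
    """Hypothetical cap on an ad-free subscription price: x times the
    median monthly ad revenue per user over the reporting period.
    `monthly_ad_revenue` and `monthly_users` are parallel lists,
    one entry per filed month."""
    per_user = [rev / users for rev, users in zip(monthly_ad_revenue, monthly_users)]
    return x * median(per_user)
```

For example, a company earning a median of $10 per user per month, with x = 2, could charge at most $20/month for the ad-free tier. The whole scheme hinges on the user-count definition noted above, since inflating the denominator deflates the cap’s basis.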

2. Timing of Notifications

Description of Issue: Machine Learning algorithms that power social media newsfeeds may learn that “timing” the delivery of notifications correctly makes the app “more addictive.” Thus notifications may be “gamed” to keep users’ attention from waning.

Potential policy solution: Legally require that users be able to choose how notifications that transmit information about human actions are delivered:
a) in a business as usual fashion, wherein it’s up to the platform provider to determine when to deliver the notifications
b) in near-real-time (within, say, 5-10 seconds of when they happen)
c) during a user-determined time window.

For example, when she logs on, Sasha Smithsonian has the option of snoozing all notifications until 5-7pm each day for a given social media site.
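A minimal sketch of what the three choices could look like as a user preference object. The mode names and field names here are my own invention for illustration:

```python
from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class NotificationPreference:
    # "platform" -> (a) business as usual
    # "realtime" -> (b) near-real-time delivery
    # "window"   -> (c) user-chosen daily window
    mode: str
    window_start: Optional[time] = None  # only used when mode == "window"
    window_end: Optional[time] = None

    def deliverable_now(self, now: time) -> bool:
        """May a notification be shown at wall-clock time `now`?"""
        if self.mode in ("platform", "realtime"):
            # (a) and (b) place no user-set restriction on *when*;
            # (b)'s 5-10 second latency bound would be checked elsewhere.
            return True
        return self.window_start <= now <= self.window_end
```

Sasha’s 5–7pm window would be `NotificationPreference("window", time(17), time(19))`, which allows delivery at 6pm but not at noon.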

Coming up with an enforcement mechanism for this is harder, especially for (b). That users have the option to choose among (a), (b), and (c) is easy enough to enforce.

For (b), perhaps a log of the distribution of discrepancies between the time of the initiating action and the time of notification delivery could be made public, or registered with a government agency. Does anyone have thoughts on how best to audit such a log? Maybe a browser extension could let users (who agree beforehand) log or screen-capture when they invite friend A to an event, and friend A could log or screen-capture when they receive the invitation. This isn’t really a satisfactory answer; please respond if you have thoughts.
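As a starting point on the audit question: the log could be reduced to a delay distribution, given parallel timestamps for the initiating action and the delivery. A platform honestly offering (b) should show delays of a few seconds with no heavy right tail. A rough Python sketch (the summary statistics chosen are my own):

```python
from statistics import median

def delay_profile(action_times, delivery_times):
    """Summarize notification delays, given parallel lists of Unix
    timestamps: when the triggering action happened and when the
    notification was delivered. A heavy right tail (p90 or max far
    above the median) suggests deliveries are being held back."""
    delays = sorted(d - a for a, d in zip(action_times, delivery_times))
    p90 = delays[int(0.9 * (len(delays) - 1))]  # crude 90th percentile
    return {"median_s": median(delays), "p90_s": p90, "max_s": delays[-1]}
```

In the browser-extension scheme above, each (invite sent, invite received) screen-capture pair contributes one sample; the open problem is trusting the platform-side timestamps without such out-of-band measurement.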

3. Content of Notifications

Description of Issue:
Some platforms don’t allow you to turn off notifications about other people’s behavior that doesn’t directly affect you. Most of the time, people want to be notified when they are involved in a conversation or someone interacts with content they create. Certain platforms have co-opted this desire and mix in notifications of “happenings you might be interested in” that have nothing directly to do with you.

Proposed Solution:
Legally define Direct and Indirect notifications, and then require that users be able to have fine-grained control over both.
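One hypothetical way to draw the Direct/Indirect line in code, assuming two signals suffice: whether the recipient is referenced, and whether the triggering action targets content the recipient created. All names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    recipient: str
    mentioned_users: frozenset      # users referenced in the notification text
    about_recipients_content: bool  # e.g. a reply to, or like of, their post

def is_direct(n: Notification) -> bool:
    """Direct: the recipient is involved in the underlying action.
    Indirect: mere "happenings you might be interested in"."""
    return n.recipient in n.mentioned_users or n.about_recipients_content
```

The fine-grained control requirement would then amount to separate, user-settable toggles for the two classes.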

Enforcement Mechanism:
A fine seems to be a good idea here.


I have not yet analysed what you are proposing here in much detail, but there is potential for a great project in what you are saying and things we have addressed previously in various topics on this forum.

I’ll cross-reference to your other post: [Feedback Requested!] Proposals for Specific Tech / Algorithmic Regulations and Enforcement Mechanisms (Re: Ads and Notifications)

And some related ones: “Keeping Private Data Private” and “How can we best help the CHT Staff? / Fleshing out the Ledger of Harms”.

And there is more. But currently we are on the verge of reorganising the community (and this forum) and there are many things cooking that need to be addressed first. I have added the #idea label, and we should definitely elaborate how we can create an exciting project around this theme.


Great. I’ve been thinking about these kinds of policy positions for a while. I’d love to push them further, do more research on them, and generally collaborate.

Setting aside the pending reorganization of the community and forum that you mention, the broad strokes of “creating a project around this theme,” as you put it, seem to consist of the following:

0. Identification
In my mind, the initial step involves identifying the problematic effects of digital technology. Lucky for us, this has already been set in motion via the Ledger of Harms and this forum.

1. Finding Examples
Once a harm in the Ledger, or some other issue, has been identified, the next step would be finding two or more examples of where it occurs within a piece of digital technology.

2. Identifying Mechanisms
Using the examples and description of the harm, identify the mechanism involved. This might only be possible by talking with people who work in tech, or who can, for example, speak to the details of notification systems.

3. Generate Counter-mechanisms
One tricky part here is that the counter-mechanisms can’t impinge on the basic tenets of capitalism too much. It’s not “greed” we are trying to solve; rather, we want to introduce more nuance and control into, for example, notification delivery. Another tricky part is that the counter-mechanisms have to be enforceable, too.

4. Research
Once the basic solution has been identified, it needs to be researched. How would companies try to skirt the solution if it became law? What are the edge cases? Through this type of research, the details of the solution can be nailed down and written up.

I don’t mean to imply that everything can be reduced to these four steps, only that this might be a good starting point.

I for one am down to use the Ledger of Harms to identify examples and mechanisms. Maybe we could get a new tag or subcategory to house the various threads that will inevitably stem from doing this?