[Feedback Requested!] Proposals for Specific Tech / Algorithmic Regulations and Enforcement Mechanisms (Re: Ads and Notifications)

Hi Everyone,

I’m very interested in helping develop and collaborating on a list of policy positions that could be used to a) help further the definition of what “Humane Design” means, b) inspire legislation, and c) further CHT’s efforts.

I’ve written up three ideas below. They are proposals, and I’d like to get a discussion going about them, as well as about other problems and potential policy solutions that I haven’t typed up.

Also, please let me know if there is a better format for this, as I’m new here.


1. Displaying Ads:

Description of Issue: Sometimes, people can’t remove digital ads, even if they want to.

Potential policy solution: Make it a legal requirement that any website or app that monetizes via ads must give users the option to pay for the product instead of seeing ads.

Enforcement Mechanism:
The obvious first step in the enforcement chain is personal reporting. Additionally, apps and websites could be programmatically evaluated (see the sketch below) for whether they are
a) likely running ads, and
b) offering a paid option that does, in fact, shut off the ad displays.
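
As a rough illustration of the programmatic half of (a), here’s a minimal Python sketch that scans a page’s HTML for well-known ad-network domains. Everything here (the short domain list, the function name, the idea that an HTML scan is sufficient) is an assumption for illustration; a real auditor would need headless rendering, app-store metadata, and paid-tier testing:

```python
# Minimal sketch: guess whether a page is likely running ads by scanning
# its HTML for well-known ad-network domains. Illustrative only; a real
# auditor would render the page and inspect its network requests.
import requests

# A few real ad-network domains; a production list would be much longer.
AD_NETWORK_DOMAINS = [
    "doubleclick.net",
    "googlesyndication.com",
    "adnxs.com",
    "criteo.com",
]

def likely_running_ads(url: str) -> bool:
    """Return True if the page's HTML references a known ad network."""
    html = requests.get(url, timeout=10).text
    return any(domain in html for domain in AD_NETWORK_DOMAINS)
```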

The next question is to whom are these things reported?

For apps, assuming the aforementioned idea is eventually signed into law, one enforcement mechanism would be for Attorneys General to be able to hold App Stores and their parent companies liable for the apps they distribute. Another would be a fine levied against the app maker. Maybe both are needed?

For websites / web-apps / web-platforms, the company that owns the website should be on the legal hook. It’s interesting to think about including cloud providers, but I don’t know if that a) fits with legal precedent, or b) is always technologically feasible.

Then again, maybe the best case for enforcement would involve the creation of something like the Consumer Financial Protection Bureau for technology. (Or perhaps the Federal Reserve should be in charge of regulating tech apps, and every time the economy needs a boost they can change the interest rate as well as parametric legal requirements related to how “addictive” certain apps and websites can be…)

Further details and considerations:

A possible extension to this policy would be to require companies that primarily make money via ad-monetization to cap the amount they could charge at X times the median per-user value in ad sales, if they aren’t adding any customer value other than “not displaying ads” (i.e. no premium features, etc.). To enforce this, perhaps some sort of log, consisting of monthly ad revenue and monthly number of users, could be filed with a government agency. (The model could be taken out of the SEC’s playbook, wherein all hedge funds over a certain size are required to report their quarterly stock positions via a 13F filing.) One tricky spot in this approach would be figuring out how to define who counts as a user, so companies have more trouble fudging the median-value estimate.
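
To make the cap concrete, here’s a toy version of the calculation, assuming the filing contains monthly ad revenue and monthly user counts; the multiplier X and the field names are placeholders, not anything from actual law:

```python
# Toy version of the proposed cap: ad-free price <= X times the median
# monthly ad revenue per user, computed from a 13F-style filing.
# The multiplier X = 3 is an arbitrary placeholder.
from statistics import median

def max_ad_free_price(monthly_ad_revenue, monthly_users, multiplier=3.0):
    """Cap the ad-free subscription at X * median per-user ad revenue."""
    per_user = [rev / users for rev, users in zip(monthly_ad_revenue, monthly_users)]
    return multiplier * median(per_user)

# e.g. $12M, $10M, $11M in revenue over 50M, 48M, 52M users
print(max_ad_free_price([12e6, 10e6, 11e6],
                        [50e6, 48e6, 52e6]))  # ~ $0.63/month
```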


2. Timing of Notifications

Description of Issue: Machine Learning algorithms that power social media newsfeeds may learn that if they “time” the delivery of notifications correctly, they end up making the app “more addictive.” Thus the notifications may be “gamed” to keep users’ attention from waning.

Potential policy solution: Legally require that users be able to choose among some of the following options for how notifications that transmit information about human actions are delivered:
a) in a business as usual fashion, wherein it’s up to the platform provider to determine when to deliver the notifications
b) in near-real-time (within, say, 5-10 seconds of when they happen)
c) during a user-determined time window.

For example, when she logs on, Sasha Smithsonian has the option of snoozing all notifications until 5-7pm each day for a given social media site.
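
To show how little machinery options (a)-(c) actually require, here’s a sketch of the three delivery modes as a per-user setting; the names and the 5-7pm window are illustrative, not any platform’s real API:

```python
# Sketch of the three delivery modes as a user preference. Illustrative
# names only; not any platform's actual notification API.
from datetime import datetime, time
from enum import Enum
from typing import Optional, Tuple

class DeliveryMode(Enum):
    PLATFORM_CHOICE = "a"  # business as usual: the platform decides
    NEAR_REAL_TIME = "b"   # within seconds of the triggering action
    USER_WINDOW = "c"      # only inside a user-chosen daily window

def should_deliver_now(mode: DeliveryMode, now: datetime,
                       window: Optional[Tuple[time, time]] = None) -> bool:
    if mode is DeliveryMode.NEAR_REAL_TIME:
        return True
    if mode is DeliveryMode.USER_WINDOW and window is not None:
        start, end = window
        return start <= now.time() <= end
    return False  # PLATFORM_CHOICE: defer to the platform's own scheduler

# Sasha's setting: hold everything until the 5-7pm window
print(should_deliver_now(DeliveryMode.USER_WINDOW,
                         datetime(2020, 1, 6, 17, 30),
                         (time(17), time(19))))  # True
```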

Coming up with an enforcement mechanism for this is harder, especially for (b). That users have the option to choose among (a), (b), and (c) is easy enough to enforce.

For (b), perhaps a log consisting of the distribution of the discrepancy between the time of the initiating action and the time of notification could be made public, or registered with a government agency. Does anyone have thoughts on how best to audit such a log? Maybe a browser extension could enable users to (agree to do this beforehand and then) log or screen-capture when they invite friend A to an event, and then friend A can log or screen-capture when they receive the invitation, for example. This is not really a satisfactory answer, so please respond if you have thoughts.
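
For what it’s worth, summarizing such a log is straightforward once you have (action time, notification time) pairs; the hard part, as noted, is trusting how the pairs were collected. A sketch, with an assumed 10-second budget taken from the example figure in (b):

```python
# Sketch of auditing a timing log: summarize the delay distribution and
# count deliveries that exceed a near-real-time budget. The 10-second
# budget is the example figure from (b) above, not a real standard.
from statistics import median, quantiles

def audit_delays(pairs, budget_seconds=10.0):
    """pairs: (action_ts, notify_ts) tuples as Unix timestamps."""
    delays = [notify - action for action, notify in pairs]
    return {
        "median_delay_s": median(delays),
        "p95_delay_s": quantiles(delays, n=20)[18],  # 95th percentile
        "violations": sum(d > budget_seconds for d in delays),
    }

# Two prompt deliveries and one suspiciously "timed" one
print(audit_delays([(0, 2.0), (100, 103.5), (200, 4125.0)]))
```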


3. Content of Notifications

Description of Issue:
Some platforms don’t allow you to turn off notifications about other people’s behavior that doesn’t directly affect you. Most of the time, people want to be notified if they are involved in a conversation, or if someone interacts with the content they create. Certain platforms have co-opted this desire and mix in notifications of “happenings you might be interested in,” which don’t directly have anything to do with you.

Proposed Solution:
Legally define Direct and Indirect notifications, and then require that users be able to have fine-grained control over both.
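
A sketch of what that fine-grained control might reduce to in code, with the category definitions as illustrative placeholders rather than legal text:

```python
# Sketch of the Direct/Indirect split as a per-user toggle. The category
# definitions are illustrative placeholders, not legal language.
from enum import Enum

class NotificationKind(Enum):
    DIRECT = "direct"      # you are mentioned, replied to, or your content is acted on
    INDIRECT = "indirect"  # "happenings you might be interested in"

# Fine-grained per-user preferences over both kinds
preferences = {NotificationKind.DIRECT: True, NotificationKind.INDIRECT: False}

def may_deliver(kind: NotificationKind) -> bool:
    return preferences[kind]

print(may_deliver(NotificationKind.INDIRECT))  # False: the user opted out
```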

Enforcement Mechanism:
A fine seems to be a good idea here.

1 Like

I have not yet analysed what you are proposing here in much detail, but there is potential for a great project in what you are saying, and in things we have addressed previously in various topics on this forum.

I’ll cross-reference to your other post: [Feedback Requested!] Proposals for Specific Tech / Algorithmic Regulations and Enforcement Mechanisms (Re: Ads and Notifications)

And some related ones:

And there is more. But currently we are on the verge of reorganising the community (and this forum) and there are many things cooking that need to be addressed first. I have added the #idea label, and we should definitely elaborate how we can create an exciting project around this theme.

1 Like

Great. I’ve been thinking about these kinds of policy positions for a while. I’d love to push them further, do more research on them, and generally collaborate.

Setting aside the pending reorganization of the community and forum you mention, the broad strokes of creating a project around this theme, as you put it, seem to consist of the following:

0. Identification
In my mind, the initial step involves identifying the problematic effects of digital technology. Lucky for us, this has already been set in motion via the Ledger of Harms and this forum.

1. Finding Examples
Once a harm in the ledger or an issue has been identified, the next step would involve finding 2 or more examples of where it happens within a piece of digital technology.

2. Identifying Mechanisms
Using the examples and description of the harm, identify the mechanism involved. This might only be possible by talking with people who work in tech, or who can, for example, speak to the details of notification systems.

3. Generate Counter-mechanisms
One tricky part here is that the counter-mechanisms can’t impinge on the basic tenets of capitalism too much. It’s not “greed” we are trying to solve; instead, we want to introduce more nuance and control into, for example, notification delivery. Another tricky part is that the counter-mechanisms have to be enforceable, too.

4. Research
Once the basic solution has been identified, it needs to be researched. How would companies try to skirt the solution if it were a law? What are the edge cases? Through this type of research, the details of the solution can be nailed down and written up.

I don’t mean to imply that everything can be reduced to these four steps, only that this might be a good starting point.

I for one am down to go use the Ledger of Harms to identify examples and mechanisms. Maybe we could get a new tag, or a subdirectory, to house the various threads that will inevitably stem from doing this?

2 Likes

Hiya @hmswaffles,

As a designer of a “humane” sponsored media technology platform, I’m hoping that any rules and regulations can be derived from what a healthy ecosystem would look like to all parties involved (brands, publishers/creators, and internet users).

I’m a big fan of Bucky Fuller, his design principles deeply influence most if not all of what I work on. His core principle is “use forces, don’t resist them”.

I have years of experience in ad tech, so I can say that some of the core problems arise from much simpler issues: things like channel control (brands don’t know where their ads land, and that is bad!), transparency between publisher and advertiser (communication between pubs and brands is NOT direct but goes through a third party), and of course data collection.

The solution you propose for removing digital ads because people don’t like them also creates an “unintended consequence” for publishers, who on many occasions are just as much victims in this as the users. Internet users, all of us, have been trained to assume that content is free, with or without advertising, and that of course isn’t the reality.

I think users SHOULD be able to opt out of ads, but should not be able to access content if they do, unless they offer some other fair exchange.

Enforcement Mechanism

I like this idea, and I think overall, what you’re leaning towards is ultimately “verification” via humans, not algorithms.

I do believe that is part of the new era of humane technology, employing internet users to verify, report, etc.

I think a simple approach to this would be creating a minimal cost to upload content (say $1), with that $1 going to internet users who verify the content before it reaches its audience. Enrolling online users in the verification process instead of an algorithm naturally resolves much.

If we have to pay to upload content, that will probably filter out plenty of content, and the content that does get uploaded creates a new economy of internet users who can verify it.

For example, say Facebook has a policy of “NO DIVISIVE POLITICAL ADS” with clear demarcations of what that means. Each post gets sent to, say, 9 internet users who receive a small portion of revenue for the verification, and if a consensus is not achieved, the content is not distributed through the algorithm.
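
A sketch of that consensus gate, treating the 9-person panel and a two-thirds threshold as the assumptions they are:

```python
# Sketch of the 9-reviewer consensus gate. Panel size and the two-thirds
# threshold are assumptions for illustration, not any platform's policy.
def passes_review(votes, panel_size=9, threshold=2 / 3):
    """Distribute content only if a supermajority of the panel approves."""
    if len(votes) != panel_size:
        raise ValueError("need exactly one vote from each panelist")
    return sum(votes) / panel_size >= threshold

# 7 of 9 reviewers say the post complies with the stated policy
print(passes_review([True] * 7 + [False] * 2))  # True
```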

The next question is to whom are these things reported?

They would get reported to the party responsible for broadcasting the content. So regulations on internet platforms and publishers would address that, but many publishers will be compliant without regulation.

I hope we can solve these problems without regulation because, in principle, these problems truly offer enormous opportunity to internet users, and regulation also has unintended consequences that the big players can usually get around and the small players can’t.

Let’s keep this convo going!

cheers
Rome

4 Likes

Hi @NSaikiwiki,

Great points! I like the idea of deriving regulations from a picture of what a healthy landscape would look like.

That’s a fantastic idea about charging people to post. To take your idea and run with it… If Reddit charged a penny to vote on content, 5 cents to post a link, and 10 cents to comment, at the time I’m writing this they’d make ~$10k off of the first eleven posts on the front page alone. The fee would likely deter some users, but it would incentivize higher-quality content, especially if, as you say, the poster got a royalty, or a fraction of the income generated from their content.
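
Roughly, the arithmetic looks like this; the per-post vote and comment counts below are my illustrative guesses, not measured Reddit data:

```python
# Back-of-envelope check of the ~$10k figure. The per-post averages are
# illustrative guesses, not measured Reddit data.
VOTE_FEE, LINK_FEE, COMMENT_FEE = 0.01, 0.05, 0.10

def post_revenue(votes, comments):
    return LINK_FEE + votes * VOTE_FEE + comments * COMMENT_FEE

# e.g. 11 front-page posts averaging ~80k votes and ~1k comments each
print(11 * post_revenue(80_000, 1_000))  # ~ $9,900
```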

Metaphorically, internet content creators have been getting “free postage” on everything they’ve sent through the cloud. A fractional fee could apply to email; that would deter spam. Youtube, Instagram, Twitter, Facebook posts, Wikipedia edits… It works in every case I can think of. I don’t know if it would be enough to supplant ads entirely, but I like the idea a lot.

Going back to the reddit example, you could arrange some form of remuneration to people who generate lots of up or downvotes, or lots of other comments that themselves generate lots of other comments. Maybe you’d distribute the remuneration in real currency, or maybe you’d do it in terms of “reddit bucks” which could also be spent on posting and commenting. If done correctly, you’d simultaneously reward and invest in site-veterans with meaningful contributions to the discussion.

Also, I’m still not convinced that it harms publishers to require social media companies to charge users who are willing to pay for an ad-free experience a justifiable amount, one that covers the costs of giving users that experience and recoups a profit. I think money would be a fair exchange. Maybe I’m not following what you mean when you say publishers would be harmed – can you clarify?

As for enforcement mechanisms: I think we would need something centralized in certain cases, like auditing a log of notification timings. Some things are likely going to require a team of forensic engineers and data scientists. At the same time, there is definitely room for social media platforms to pay some of their users to verify ad content. Maybe they don’t do it because they don’t want the legal hassle of paying non-employees for work, under current labor-law definitions?

Looking forward to your thoughts!

1 Like

Great points! I like the idea of deriving regulations from a picture of what a healthy landscape would look like.

:open_mouth: Designing regulations from a system that would not require regulations if the system existed in the first place. I think we’re on to something :sweat_smile:

Metaphorically, internet content creators have been getting “free postage” on everything they’ve sent through the cloud. A fractional fee could apply to email; that would deter spam. Youtube, Instagram, Twitter, Facebook posts, Wikipedia edits… It works in every case I can think of. I don’t know if it would be enough to supplant ads entirely, but I like the idea a lot.

Lots of fun stuff to play around with too, especially adopting new fintech novelties. Within the platform that we’ve been working on, ads would be essential to what you describe above; they could actually fund much of it, but not in the way that is obvious.

Going back to the reddit example, you could arrange some form of remuneration to people who generate lots of up or downvotes, or lots of other comments that themselves generate lots of other comments.

You probably already know about Reddit “Gold”, which is actually considered a digital currency. Some of what you’re describing is somewhat close to how subscription models work, but that will likely also stand in the way of adoption, so it’s a tricky slope to go down.

Maybe you’d distribute the remuneration in real currency, or maybe you’d do it in terms of “reddit bucks” which could also be spent on posting and commenting. If done correctly, you’d simultaneously reward and invest in site-veterans with meaningful contributions to the discussion.

Our platform has created a measurable and distributable “attention” digital asset. (Legally, I’m unsure when something becomes a “currency”, but for all intents and purposes it treats attention as currency, and provides a digital asset that can store, sell, or trade that value, and that can be traded into USD or spent with a VISA debit card.) The design purpose is to hopefully stimulate new internet economies along the lines of this discussion. Having an easy-to-earn and easy-to-spend true digital asset on the web has many interesting applications :smirk:

Also, I’m still not convinced that it harms publishers to require social media companies to charge users who are willing to pay for an ad-free experience a justifiable amount, one that covers the costs of giving users that experience and recoups a profit. I think money would be a fair exchange. Maybe I’m not following what you mean when you say publishers would be harmed – can you clarify?

Hmm, I’m not sure of the context; I looked at my post and am not sure where I mentioned harm coming to publishers? Perhaps you’re referring to the harm that could come to publishers from regulations?

I think the free model is here to stay. I believe the holistic system looks at publishers/creators (who realistically need revenue), internet users (whose attention CAN be valued and traded), and agents/brands.

I think paying nominal fees to distribute (not post, different behavior) for the purposes of verification is much easier to implement.

As for enforcement mechanisms: I think we would need something centralized in certain cases, like auditing a log of notification timings. Some things are likely going to require a team of forensic engineers and data scientists.

I can see how regulations could limit the number of notifications, OR better, enforce a pay-per-notification model :slight_smile: But I’m concerned the idea of applying machine learning at scale to complex social media interactions or postings will prove futile; it will just lead to censorship, and certain things require human interaction and decision-making.

At the same time, there is definitely room for social media platforms to pay some of their users to verify ad content. Maybe they don’t do it because they don’t want the legal hassle of paying non-employees for work, under current labor-law definitions?

I think they don’t do it because they do not have a design model for it that makes economic sense. I do believe that this sort of decentralized verification will be a future wave we will see emerging within the next few years.

Why is your focus on regulations?

1 Like

Yes, I agree. My focus on regulations stems from my belief that the big tech players are not going to regulate themselves, even if it is in their self-interest to do so over the long term. Maybe they will self-regulate under the threat of regulations; maybe they won’t. Regardless, it seems fruitful to me to try to make any theoretical regulation as concrete and well-thought-out as possible. Even if it doesn’t lead to policy, my hope is that such thinking would lead to more discussion.

I’ve written up the policy proposals in the original post, as well as added a few more in this Medium post.

And yes, I agree – I do think the free model is here to stay. As a theoretical mechanism for “solving Reddit” however, your idea is very fun.

1 Like