Nice, @ben! It doesn’t have to be CSNY (though a younger generation might think that’s a state college in upstate New York). The challenge here is that sometimes people might prefer to eat from the bottomless popcorn bag … even if it ultimately does them harm. So how do you handle that?
One school of thought espouses user choice. But we all know that’s more complicated than it appears on the surface. Default choices create a slippery slope, whether it’s privacy settings or an opt-in culture that, in practice, behaves more like an opt-out one.
Another school: do we put the equivalent of tobacco warning labels on digital products to warn consumers about excessive use and where it might lead? It sounds a little comical, but is it really?
I think the issue that challenges me most is how to address harm that comes without malicious intent. Deliberate, malicious intent is actually the easier case to handle. But algorithms deployed to optimize consumption for seemingly innocuous primary goals tend to produce rather horrendous secondary, or unconsidered, effects.
It’s the paperclip maximizer in AI. What kind of guardrails do we need so that well-intended machine automations don’t start disassembling our infrastructure and strip-mining our homes, transportation, electronics, bridges, and buildings for wire?
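To make that failure mode concrete, here’s a toy Python sketch (every name and number in it is hypothetical, and it’s nothing like a real recommender): a feed greedily maximizes engagement minutes, a metric that never sees user wellbeing at all, and a crude session cap stands in for one possible guardrail.

```python
# Toy illustration, hypothetical names and numbers throughout: a feed that
# greedily maximizes "minutes watched" keeps serving the stickiest items,
# even past the point where the user is worse off. A blunt session cap
# stands in for one possible guardrail.

ITEMS = {                       # item -> (engagement_minutes, wellbeing_delta)
    "rage_bait":   (30, -5),    # very sticky, harmful
    "clickbait":   (20, -2),
    "how_to":      (10, +3),    # less sticky, genuinely useful
    "documentary": (15, +4),
}

def recommend(session_minutes, cap=None):
    """Greedy pick by engagement alone; the objective ignores wellbeing."""
    if cap is not None and session_minutes >= cap:
        return None             # guardrail: stop recommending past the cap
    return max(ITEMS, key=lambda item: ITEMS[item][0])

def run_session(cap=None, rounds=6):
    minutes, wellbeing = 0, 0
    for _ in range(rounds):
        pick = recommend(minutes, cap)
        if pick is None:
            break
        m, w = ITEMS[pick]
        minutes += m
        wellbeing += w
    return minutes, wellbeing

print("uncapped:", run_session())        # (180, -30): minutes up, wellbeing down
print("capped:  ", run_session(cap=60))  # (60, -10): same objective, blunt brake
```

The point isn’t the cap itself (caps are blunt instruments) but that the objective function literally contains no wellbeing term; the harm is a secondary effect the optimizer was never asked to consider.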