Steve, thanks for your well-thought-out reply. I'll go along with your time-exchange idea and try to give an equally considered comment explaining my point of view on this. This post took me about 20 minutes to write, but it has been brewing ever since I made my account on here.
I think there are many people in this community who question the utility of smartphone apps/software projects - of any size - to solve the issues that humane tech is concerned with. To explain why I think personal projects should be corralled into a separate space on this site, I feel I should lay out those questions as best I can. Hopefully others can chime in where I come up short.
One of the core concerns of humane tech is that software designers' ability to influence human behavior has become far too advanced - to the point where it isn't really a fair fight anymore. The ecosystem of the mobile internet is fundamentally built on apps that purposefully cultivate compulsive behavior in their users - who at this point constitute a majority of humans living on earth - in order to extract personal information and build datasets to sell to advertisers. The more granular the data, the better insight it gives into our habits of consumption, and thus the more valuable it is. This is what (I am pretty sure) we all agree on.
Here’s where I see the split. Many of us would like to take the question a step further than simply asking why we tolerate addiction by design. Why should we allow computer software that is designed to influence our behavior at all, even if that influence is meant to direct us away from compulsive behavior? In my thinking, ceding control to such a piece of software is ultimately not much better than submitting to Facebook, Twitter, Snapchat, etc.
For example, ironically enough, I have a browser add-on called StayFocusd installed - it blocks websites of my choosing and doesn’t let me change the settings without passing an annoying “challenge”. Has it kept me from wasting hours following Wikipedia rabbit holes? Absolutely. But has it fundamentally changed my relationship with technology? I would say not. It controls behavior, but it doesn’t encourage any contemplation or real engagement with the issues that drive compulsive internet use. Ditto for apps like Forest, which try to turn the gamification techniques of social media into a pro-attention weapon. Well, what if I don’t want to be “gamified” at all? Such apps mean well, but they are still, as @joseadna would say, based on inhumane behaviorist models of human psychology.
If this movement engages with people by telling them that they can stop wasting time on Facebook by downloading an app in 5 seconds, or (even worse) simply turning their screen grey, we will be promising something unrealistic and setting people up for failure. How will we ask people to challenge themselves? Confronting one’s own bad habits requires some messy, vulnerable, deep contemplation - doesn’t yet another piece of software provide an all-too-easy way around that? It’s like putting out a fire with a flamethrower, putting a band-aid on a bullet wound, et cetera.
For this reason, I don’t think personal projects should have such high visibility to newcomers to the site. If this place becomes some sort of free incubator for humane tech projects, then that sounds great to me. But we also need to make room for the tougher personal discussions about our own relationships with technology - discussions which can be far more transformative than any slight variation on an attention-saving app could ever be.
By extending the principle of rejecting addictive software to software that controls behavior of any kind, I know I am setting myself up for accusations of idealism. So be it; I’ll embrace that. There is a kind of categorical imperative to be found in all this: if it’s unethical for apps to control our behavior for the purposes of data harvesting, then it is unethical for them to control our behavior for any reason whatsoever, even if we give them permission to do so. That is the underlying principle here.
Like almost every aspect of human society since the 17th century, this movement’s cohesiveness will be threatened by a split between idealists - whose views I have hopefully just explained fairly - and realists, who believe that rabbits do not get put back into hats, that we need to tolerate the existence of behaviorally unethical apps, and that we should fight fire with fire, principles be damned. If you have read this far, I assume you know which camp you are in. If we can recognize this difference in opinion and continue to work on the same problems in different ways, then humane tech can become a movement with real impact. Thanks for your time!