I’d like to introduce another way of thinking about humane design. It is not a goal of humane design per se, as it encompasses quite a few of the points already mentioned, but it can help to identify other goals as well.
First, some context. If you just want to get to the conclusion, you can skip most of what I’m writing next, but if you want to understand how I get there, carry on. You can rejoin me further down if you decide not to.
What made me critical of the tech world as we live in it today occurred to me while I was learning about moral philosophy. I’d read Kant, whose work I found intuitive, but not expansive enough. Then I read Hegel, who, to me, did not get around Hume’s fork. Hume’s fork is perhaps the most pervasive problem in all of ethics - simply put, Hume asks how we can get from purely descriptive statements to prescriptive statements without introducing our subjective values into the equation at some point. Hume famously drew the conclusion that this is impossible. For example, if we find that smoking is harmful, how can we say that it is morally wrong (and not just bad for our health) and therefore should not be done? Hegel doesn’t answer this question, but simply introduces something he calls the “good life” without further elaboration.
You may be wondering how this relates to humane design, and I’ll get to that in a moment, so bear with me. When I first encountered the problem Hume poses, the large gap between “is” and “ought”, I felt lost. After a few weeks, however, I had a revelation: ultimately, we must base our moral code on something that unites all human beings and that still has positive consequences. What is it that all humans share? In one word: dignity. Dignity is first and foremost a feeling, something we are not actively aware of when it is there, but very aware of when it is absent. Humans who feel they have been deprived of their dignity can be neither happy nor fulfilled. But what exactly is dignity? Dignity is the human experience. That means it is composed primarily of three parts:
The social - social contact is inherently human. It exists for every human being, beginning even before we are born. The embryo forms a social bond with the mother, and, after birth, with countless other individuals.
The will - all animals have a will. Simple single-celled organisms will to balance their inner chemistry, bees will to protect their hive, birds will to build a nest and gather food for their young, and so on. Like all other animals, humans will too. But there is something that sets humans apart from more basic lifeforms.
The ability to reflect - this is perhaps the most human thing there is. Granted, some animals may also possess this ability, and then the concept of dignity may extend to them as well. But that is for another time. As humans, we do not simply will blindly. We may want to eat our co-worker’s sandwich, for example, but we reflect upon the costs and benefits of acting on that will and adjust our actions accordingly. The ability to reflect is what brings us from a simple will to a free will.
These three elements compose dignity, and on this basis we can build a scheme for determining the moral value of things. First, we must recognize that every human being has dignity, since the definition is grounded in the human experience itself. Then, because every human being has dignity, we can say that to deprive another person of their dignity would be to deprive them of their humanness. And given what composes dignity, depriving somebody of their dignity also deprives them of their moral capacities.
Here, we can rejoin Kant’s philosophy, for those who are familiar with it. Simply put, it must be immoral to deprive another person of their dignity: if this were applied universally, dignity in general would cease to exist, and with it all morality. As this produces a logical contradiction, it is in conflict with the categorical imperative and cannot be right.
If depriving someone of their dignity is wrong (i.e. of “negative” moral value), we must equally ask what is right. Here, we can distinguish between not-depriving-someone-of-their-dignity and showing appreciation for another’s dignity, i.e. increasing it. What is “good” is thus that which enables us to live out our human experience to the fullest - to increase the social bonds we have and to appreciate the free will of every individual.
For those who left me earlier, here you can rejoin. Let me summarize the paragraphs above: we’ve established a concept of dignity that builds on the social and the free will, and shown that good and bad can be defined in terms of “this increases my dignity” and “this deprives me of my dignity”.
When we apply this concept to the tech of today, we can easily see how technology has incredible potential to be a source of good in the world, but that its current forms are leading to the opposite. Social networks, for example, can be an amazing tool for bringing people together, but only as long as they are not isolating. The endless feed, for example, is bad in this sense. While the early form of Facebook was used to share pictures and stories of family and friends, it now shows everything remotely entertaining. Its purpose does not lie in connecting people, but in keeping people on the platform as long as possible. And in this sense it also does not appreciate our free will - it manipulates the simple will into wanting to stay on the platform, and uses every psychological trick it can to prevent the reflective will from gaining control.
In the same sense, massive data collection through mandatory third-party cookies does not appreciate the fact that we have free wills of our own and may not want to be tracked wherever we go. The requirement to give up personal data in order to use services that have become crucial to social life today therefore falls into the realm of coercion.
Given this concept of dignity as a guiding principle for all actions, what is humane design? Humane design, if it is to be true to its name, must show appreciation towards the dignity of every individual. It must thus create products that:
- exist in an environment that promotes dignity. This means that, as others have mentioned, users must have the ability to choose between products without major negative consequences (as is currently the case with not using Facebook or WhatsApp) and be able to easily port their data from one platform to another
- see the existing list above
As I mentioned before, many of the points others gave in this discussion fall under the umbrella of increasing dignity rather than depriving individuals of it. The main contribution I see in the framework described above is that it provides a foundation for arguing why tech should be this way and why humane tech is fundamentally more valuable (from an ethical perspective) than the tech we find today.
EDIT: I accidentally posted this as a reply to a post in this discussion and not as a post in itself.