Goals of Humane Design

I am compiling a list of everything that Humane Design strives to do. I am just throwing in everything that comes to my mind with regard to Humane Design goals. Please add to the list!

The goal of Humane Design is to create products that:

The list is compiled from the items given in the replies to this post.

  • bring long-term happiness into human lives
  • enable digital sovereignty (ultimate control over digital assets and end-to-end encryption)
  • offer more privacy
  • send fewer notifications
  • drive less unnecessary engagement
  • do not addict their users
  • do not create envy spirals
  • view their users as humans, not as sources of revenue
  • support connections in real life
  • when possible promote face-to-face communication
  • increase community cohesion
  • add transparency to the workings of communities
  • do not create echo chambers
  • enhance and promote the best aspects of humanity and minimize the worst
  • expose and correct negative side effects that technologies/products may be having on individuals and societies
  • inform public of unknown risks/dangers of in-humane tech
  • teach us how to craft a life as well as make a living
  • support satisfaction of basic psychological needs of autonomy, competence, relatedness as per Self-Determination Theory of Motivation
  • do not exploit attention of their users
  • do not abuse power when they become leading or monopoly product in their niche
  • allow fair competition from other products in their niche
  • respect users’ freedom to use and modify the product as they see fit
  • do not rely on a business strategy of planned obsolescence
  • respect users’ right to use or reject the product
  • be open to criticism that seeks to improve the product
  • exist in an environment that promotes dignity (explained in this post)
  • support and promote dignity of its users
  • make users’ data freely portable and ensure that this portability is frictionless
  • provide services that are modular and make sure their providers are easily interchangeable
  • have code, data, and content that are open source / creative commons / public domain

Please, reply to the post and I will add your ideas for goals to the list or we can make this a wiki-post.

  • enhance and promote the best aspects of humanity and minimize the worst
  • expose and correct negative side effects that technologies/products may be having on individuals and societies
  • inform public of unknown risks/dangers of in-humane tech
  • teach us how to craft a life as well as make a living

BTW, I notice the word sovereignty being used a lot lately. It seems to me that individual sovereignty might be a misnomer, unless individuals live as independent kingdoms. Perhaps autonomy is a better word in some cases.

@patm I see what you mean, regarding the usage of sovereignty vs autonomy, sovereign vs autonomous.

The usage of sovereign, sovereignty is correct when we talk about the digital technology that enables ultimate control over digital assets and end-to-end encryption. This is a great name for this technology - it underscores that it provides ultimate freedom and control.

You are right, until now the word “sovereign” was reserved for nations and kings. Only the king was a sovereign; others were subjects. Now Sovereignty Enabling Technologies enable anyone to be a sovereign, a peer in relationships with anyone (even with a government), at least in the digital world. This is very important as it opens a way to create an “Organization of United Humans” just as we have an “Organization of United Nations”.

When we talk about psychology related stuff, then autonomy is a correct word. For example, as per Self-Determination Theory of motivation, there are 3 basic psychological needs: autonomy, competence, relatedness.

One of the goals of Humane Design is to create products that support autonomy.


Thank you for this explanation, @Drabiv :slight_smile:

I recognize the word’s popular currency and am questioning it for the sake of advocating some restraint in its use.

In the Hawaiian Islands, sovereignty has a specific meaning associated with the Native Hawaiians as a people and nation (lahui). Hawaii, as you may know, was annexed to the U.S. by force in the late nineteenth century. Prior to that, it was an independent kingdom recognized as such by other countries. For decades, the Native Hawaiians have sought to recover sovereignty as a nation. It’s with this knowledge in the back of my mind that I respond to the word.

I see your points and appreciate your differentiation from the word autonomy.


Food sovereignty is popular in my field. It’s meant the way @Drabiv is using it for digital assets, but we are seeing misuse now, as some countries have put it in their constitutions and are starting to use it in terms of the State.

And we do need sovereignty of states as well as commitment to the global community. The problem is the balance, and getting the balance right is very hard because of the imbalance of power. Think of trade rules on agriculture and how limited the policy space is for developing countries to protect nascent industries the way developed countries did…

On what to add to the list, I would like to see something about prevention of monopoly. We heard Facebook and other social media were going to democratize the world. Now they are monopolies and we see what happened in the 2016 elections. Blockchain is being touted the same way, but just because it is distributed does not mean it cannot be owned and monopolized. @Drabiv, I’m not sure how you reflect this…


@bragdonsh Added your point about monopoly as follows:

  • do not abuse power when they become leading or monopoly product in their niche
  • allow fair competition from other products in their niche

I did not understand your point about agriculture, sovereignty of food. Can you clarify, please?

This sounds good, but to me these points are simply about not breaking the law. Monopolies like Facebook and Google are possibly already illegal. Humane tech is about going much further, in my mind.

Ideally, systems should be designed so that all modular parts are easily interchangeable at any time, all data is freely portable, and this portability is frictionless. A good principle is to have at least 3 different providers that users can choose from for each modular part of the service. All data that doesn’t belong to users should be open source / creative commons / public domain. Users should be able to keep their data (for example all posts, photos, messages) on their own “clouds” and backup services of their choosing, as well as on their own devices. Services should be completely modular and interchangeable; for example, users could easily switch between messaging apps or sites and have all of their messages and contacts ported with them, since each user holds her own data on her own chosen cloud provider(s). Services should not have access to any user data unless it is absolutely necessary; user data is served directly from each user’s private cloud(s).
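As a rough illustration of the modular, user-owned-data architecture described above, here is one hypothetical sketch in Python. All class and function names are invented for this example; this is not any real provider’s API.

```python
# Hypothetical sketch: users hold their own data; service providers are
# interchangeable modules that read from the user's cloud, never copy it.
from dataclasses import dataclass, field


@dataclass
class UserCloud:
    """The user's own storage: posts, messages, contacts live here, not on app servers."""
    owner: str
    data: dict = field(default_factory=dict)

    def export_all(self) -> dict:
        # Frictionless portability: the full dataset can be handed to any provider.
        return dict(self.data)


class MessagingProvider:
    """One interchangeable module; it renders data served from the user's cloud."""
    def __init__(self, name: str):
        self.name = name

    def render_inbox(self, cloud: UserCloud) -> list:
        # Data is read directly from the user's private cloud.
        return cloud.data.get("messages", [])


def switch_provider(cloud: UserCloud, new: MessagingProvider) -> list:
    # Because the user holds the data, switching providers loses nothing.
    return new.render_inbox(cloud)


cloud = UserCloud(owner="alice")
cloud.data["messages"] = ["hi", "lunch?"]

provider_a = MessagingProvider("AppA")
provider_b = MessagingProvider("AppB")

assert provider_a.render_inbox(cloud) == ["hi", "lunch?"]
# Switching to a different provider keeps every message intact.
assert switch_provider(cloud, provider_b) == ["hi", "lunch?"]
```

The point of the sketch is the inversion of ownership: the provider classes hold no user state, so any of the “at least 3 providers” can be swapped in at any time without migration.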

This goes above the basic goals of humane tech, and for a reason. If the goals of humane tech are just not to exploit users, then we’re still going to be stuck with the same providers of technology always playing the same ethical game: “It’s ok to do evil, but make it look like we don’t do evil. - Google, Inc.”. So we need to change the system. There have been successes in the past, such as Wikipedia killing off Microsoft Encarta. The next steps include Tim Berners-Lee’s Solid and other new technologies, protocols, and standards for a better future Web.

Building on your first point, I would add:

  • respect users’ right to use or reject the product
  • be open to criticism that seeks to improve the product

I’d like to introduce another way of thinking about humane design. It is not a goal of humane design per se, as it encompasses quite a few of the points already mentioned, but it can help to identify other goals as well.

First, some context. If you just want to get to the conclusion, you can skip most of what I’m writing next, but if you want to understand how I get there, carry on. You can rejoin me further down if you decide not to.

What got me thinking about tech today, or rather what made me critical of the tech world as we live in it today, occurred to me while I was learning about moral philosophy. I’d read Kant, whom I found intuitive, but not expansive enough. Then I read Hegel, who, to me, did not get around Hume’s fork. Hume’s fork is perhaps the most pervasive problem of all ethics - simply put, Hume asks how we can get from purely descriptive statements to prescriptive statements without introducing our subjective values into the equation at some point. Hume famously drew the conclusion that this is impossible. For example, if we find that smoking is harmful, how can we say that it is morally wrong (and not just bad for our health) and therefore should not be done? Hegel doesn’t answer this question, but simply introduces something he calls the “good life” without further elaboration.

You may be wondering how this is related to humane design, and I’ll get to that in a moment, so bear with me. When I first encountered the problem Hume poses, the large gap between “is” and “ought”, I felt lost. After a few weeks, however, I had a revelation: Ultimately, we must base our moral code on something that unites all human beings and that still has positive consequences. What is it that all humans share? In one word: Dignity. Dignity is first and foremost a feeling, something that we are not actively aware of when it is there, but very aware of when it is absent. Humans who feel like they have been deprived of their dignity cannot be happy nor fulfilled. But what exactly is dignity? Dignity is the human experience. That means it is composed primarily of three parts:

  1. The social - social contact is inherently human. It exists for every human, beginning even before we are born. The embryo forms a social bond with the mother, and, after birth, with countless other individuals.

  2. The will - all animals have a will. Simple single-celled organisms will to balance their inner chemistry, bees will to protect their hive, birds will to build a nest and gather food for their young, and so on. Like all other animals, humans will too. But there is something that sets humans apart from more basic lifeforms.

  3. The ability to reflect - this is perhaps the most human thing there is. Granted, some animals may also possess this ability, and then the concept of dignity may also extend to them. But that is for another time. As humans, we do not simply will blindly. We may want to eat our co-worker’s sandwich, for example, but we reflect upon the costs and benefits of acting on our will and adjust our actions accordingly. The ability to reflect is what brings us from a simple will to a free will.

These three elements compose dignity, and we can build a scheme for determining the moral value of things based on it. First, we must recognize that every human being has dignity, as the definition is based on it in the first place. Then, because every human being has dignity, we can say that to deprive another person of their dignity would be to deprive them of their humanness. And if you look at what composes dignity, depriving somebody of their dignity also deprives them of their moral capacities.
Here, we can rejoin Kant’s philosophy, for those who are familiar with it. Simply put, it must be immoral to deprive another person of their dignity, as if this was to be applied universally, dignity in general would cease to exist and thus also all morality. As this produces a logical contradiction, it is in conflict with the categorical imperative and cannot be right.

If depriving someone of their dignity is wrong (i.e. of “negative” moral value), we must equally ask what is right. Here, we can distinguish between not-depriving-someone-of-their-dignity and showing appreciation for another’s dignity, i.e. increasing it. What is “good” is thus that which enables us to live out our human experience to the fullest - to increase the social bonds we have and to appreciate the free will of every individual.

For those who left me earlier, here you can rejoin. Let me summarize the paragraphs above: We’ve established a concept of dignity which builds on the social and the free will, and shown that good and bad can be defined in terms of “this increases my dignity” and “this deprives me of my dignity”.

When applying this concept to the tech of today, we can easily see how technology has incredible potential to be a source of good in the world, but that its current forms are leading to the opposite. Social networks, for example, can be an amazing tool for bringing people together, but only as long as they are not isolating. The endless feed, for example, is bad in this sense. While the early form of Facebook was used to share pictures and stories of family and friends, now it shows everything remotely entertaining. Its purpose does not lie in connecting people, but in keeping people on the platform as long as possible. And in this sense it also does not appreciate our free will - it manipulates the simple will into wanting to stay on the platform, and uses all the psychological tricks it can to prevent the reflective will from gaining control.

In the same sense, massive data collection with third-party cookies that are mandatory does not appreciate the fact that we have our own free wills and may not want to be tracked wherever we go. The requirement to give up personal data for using services which have become crucial to social life today therefore falls into the realm of coercion.

Given this concept of dignity as a guiding principle for all actions, what is humane design? Humane design, if it is to be true to its name, must show appreciation towards the dignity of every individual. It must thus create products that:

  • exist in an environment that promotes dignity. This means that, as others have mentioned, users must have the possibility to choose between products without major negative consequences (such as is the case with not using Facebook or WhatsApp) and be able to easily port their data from one platform to another

  • see the existing list above

As I mentioned before, many of the points other gave in this discussion fall under the umbrella of increasing dignity instead of depriving individuals of it. The main contribution I see in the framework I described above is that it provides the foundation for arguments why tech should be this way and why humane tech is fundamentally more valuable (from an ethical perspective) than the tech we find existing today.

EDIT: I accidentally posted this as a reply to a post in this discussion and not as a post in itself.


@RollingCompass added 2 items to the list about dignity, hopefully they convey your main point.

  • data is freely portable and that this portability is frictionless
  • services are modular and providers are easily interchangeable
  • code, data and content are open source / creative commons / public domain

@Free a few comments to the topics touched by you and others

  • Regarding decentralization
    IMO, decentralization in itself is a red herring*. What new or better functionality does it bring to users in itself?

There have been successes in the past such as Wikipedia killing off Microsoft Encarta.

Wikipedia overtook MS Encarta not because it is built on decentralized technology, but because it is a better (more comprehensive, more dynamic) encyclopedia.

By the same logic, Validbook Social is IMHO a great alternative to Facebook (one that might very gradually replace it), not because it is going to be decentralized, but because of its better functionality: compartmentalized social profiles and a compartmentalized inverted following model. Compartmentalized social profiles >> a more comprehensive view of personality >> dampened-down envy spirals. A compartmentalized inverted following model >> more real-world-like mutual exposure to each other’s lives and personalities.
BTW, these were not implemented by Facebook because they would lessen engagement, which is ok with Validbook as it is not based on an Attention Based Business Model.

  • Regarding, privacy (end-to-end encryption)
    I think E2EE will become standard with major cloud storage providers (e.g. Google Drive) and email providers (e.g. Gmail) not because of “new decentralizing” technology, but because of competition pressure from startup services that provide E2EE (NextCloud, Keybase, ProtonMail).
    I think WhatsApp introduced E2EE because of pressure from Telegram.
    As for privacy on social networks, it is in a sense an oxymoron - users share data from one to many (publicly or semi-publicly), so there is not much privacy to begin with. The aggregation of that data and the selling of users’ attention to advertisers can be fixed by replacing the Attention Based Business Model with something new (see Self-Sovereign Identity based Business Model), not by decentralization in itself.
    Also, IMO the real beef (the root cause of issues) with social media is envy spirals (Envy vs. Notifications? Issues with social networking services – is it all about envy?) and has little to do with privacy loss and attention depletion.

  • Regarding, portability of data and contacts.
    I think this would be a great thing, mainly because of the increase in competition between service providers. I do not agree with how a lot of people think this will be achieved, i.e. via intermediary locally hosted applications.
    The major barriers to this are:
    – the mental load it causes users. People minimize their mental load (in other words: people have habits, people do not like to change their habits, people do not like to do unnecessary work, people want to do stuff and go places)
    – incumbents will block or obstruct such portability
    These barriers can be overcome by standardizing the transfer of data and contacts between existing services and by enforcing providers to do so through new competition laws. Here are my musings on how portability of contacts can be done in the real world by using Decentralized Identifiers and what I call Base Identity.

  • Regarding incentives
    Tim Berners-Lee’s creation of the WWW standards had so much impact not only because they were open source and free (although this was necessary for their wide adoption). Most importantly, they created opportunities and incentives for thousands and millions of talented people to create services and products. Practically all Internet products and services that we use now were created because people had opportunities and great incentives to work on those opportunities.
    How many more opportunities and incentives will decentralization technology like SOLID add*, beyond what already exists with conventional web technologies? I am not sure. I do not see how it adds a lot. I might be wrong. I’d like to see some explanation of the opportunities and incentives that this technology will provide.
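The contact-portability point above (bullet on portability of data and contacts) rests on one mechanism: contacts keyed by a stable, provider-independent identifier rather than a provider-specific handle. Here is a minimal sketch, loosely inspired by Decentralized Identifiers (DIDs); the registry structure and DID strings are illustrative assumptions, not the DID specification itself.

```python
# Sketch: contacts stored by a stable identifier survive provider switches,
# because only a registry entry changes when the user migrates.

# A registry mapping a stable identifier to the service endpoint currently in use.
# (Illustrative stand-in for real DID resolution infrastructure.)
did_registry = {
    "did:example:alice": {"service": "https://provider-a.example/alice"},
}

# Contacts are stored by stable DID, not by provider-specific handle.
contacts = ["did:example:alice"]


def resolve(did: str) -> str:
    """Look up where a contact can currently be reached."""
    return did_registry[did]["service"]


def migrate(did: str, new_endpoint: str) -> None:
    # When a user switches providers, only the registry entry changes;
    # everyone else's contact lists keep working untouched.
    did_registry[did]["service"] = new_endpoint


assert resolve("did:example:alice") == "https://provider-a.example/alice"
migrate("did:example:alice", "https://provider-b.example/alice")
assert resolve("did:example:alice") == "https://provider-b.example/alice"
```

This also shows why incumbents obstructing portability matters: the scheme only works if the registry sits outside any single provider’s control.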

As for the Validbook roadmap - we plan to gradually decentralize all its services to make them more secure (“nuclear-war-proofed”) and more performant (relevant when decentralization technology matures). But before decentralizing, it needs to be built. As put in the article Centralize, then Decentralize: “Let’s stop playing pretend and start building.”

*Note - I am not talking about blockchain here, where the value of making sovereign money and sovereign identity possible is clear.

Just to clarify, decentralisation is just one option for some services. We used to have more decentralisation in the past; for example, when personal computers were more popular than smartphones, users had many more options regarding programs, data was more portable, and so on. Decentralisation provides many potential benefits to users, such as privacy and lower costs due to higher competition, and even some new functionality, such as interoperability and openness.

Open source / creative commons / public domain just can’t be overlooked. People should be working together and not competing, it’s the only way to defeat the big corporations.

Regarding portability of data and contacts, I think we need to do away with the incumbents and start over. Users could still get all their services from one chosen company (so easy to sign up), but the difference would be that each part is modular so they can easily switch out any part they want to change at any time (or try multiple providers at the same time).

If all data is held in and served from the user’s own cloud and is legally private, then apps have no right to even pass this information through their servers, and their operators could be arrested for digital trespassing / hacking (very serious crimes) if they passed even a single piece of user data through their servers (this would be visible on users’ devices). Note that the user’s private cloud and apps could all be provided by the same company or by different companies; apps would run on the user’s device or web browser but get data directly from private clouds. App servers would never see any private user data.

One benefit of decentralisation is that developers and service providers will be getting a fairer and larger share of a smaller pie. In the current system, almost all money is going to a few mega corporations, which are literally the richest companies in the world and in all of human history. It’s bad for power to be concentrated. But yes, the IT pie will be smaller, and that leads to another great benefit: services and goods will cost significantly less for consumers, which again helps to shift the economic balance away from the rich toward humanity.


Interesting proposal here that increasing dignity can be the central tenet of humane design. I really like the idea, but I haven’t been able to translate it to concrete design decision making.

You have defined dignity here as a combination of the social, the will, and free will. To help me understand, would you mind taking a design decision (maybe the introduction of new emojis, or adding a “best friends” option to Facebook), and explaining how to consider whether it benefits each of these three points?


I don’t think that dignity can necessarily improve every function in tech to an unlimited degree. Emojis, for example, don’t need to be improved in this regard. Central to the concept of dignity is that all humans possess the ability to decide what they want and do not want, and that this decision should be able to translate into action in every aspect of our lives. That means that, for example, we have the option to use emojis or not use emojis (which we already have) without being excluded from social life. So I don’t think there are really any necessary changes we have to make to emojis. You can compare this to the option of disabling notifications, for example. There could theoretically be phones that don’t have this function, which would be fine as long as there were other viable options to choose from when deciding which phone to buy or which app to download. This would only be a problem if there weren’t other options. If there was only one phone with a functionality necessary for participation in social life, for example, and this phone didn’t allow turning off notifications, then that would be a problem.

I think we can use the concept to identify the battles we still need to fight. For example, in Germany, where WhatsApp is used by 65% of the population (in certain subsets of the population likely higher), you don’t really have the option not to use the messenger. That’s why it is a problem that one cannot choose which data to share with Facebook. As one doesn’t realistically have the choice to use a different messenger, in the interest of increasing dignity, there should be a function to choose which data to share with Facebook.
As we can’t force Facebook to implement such a function, in terms of design decisions, therefore, we should try to create messengers that allow complete portability of data and that work across messengers. So, the optimal design for an alternative messenger would be one that still allows users to communicate with people who don’t use the same messenger, while offering the function not to share any data. I of course understand that this is difficult if not impossible to implement, but it is the conclusion I draw from applying the concept of dignity to messenger apps. I use Signal a lot with the few people I know who also have the app, but it would be best if I could also use Signal to communicate with my friends and work partners who don’t have the app - as I still have to communicate with them, though, I am in some ways forced to use WhatsApp.


Thank you for the response, you make some great points. I’m also facing a situation where Whatsapp and Facebook Messenger are the only ways my friends and family communicate. I wish something like the Blackberry Hub had caught on for more phones, as it centralized all your messages into one spot. So while not making the data portable, it at least centralized the sending and receiving of messages so you were not so committed to a single platform.

Please look at my book: https://www.amazon.in/Grassroots-Innovation-Minds-Margin-Marginal/dp/8184005873

Not all innovations are good, and not all communities are cohesive. Technology is like a word, institutions are grammar, and culture is like a thesaurus.

Will be happy to hear back

Let us not make the tech-social interface laden with smooth motherhood statements.

Don’t know if anyone noticed the Ethical OS (https://ethicalos.org/) released recently with support from the Institute for the Future and Omidyar Network. It’s a very relevant framework being proposed for how to ethically approach technology and its social impact. Mostly questions - and they even cite Tristan and the CHT - but it looks like a promising start. Definitely worth getting on everyone’s radar, IMO.

For prime examples, from the Ethical OS Toolkit (version 2) at https://ethicalos.org/wp-content/uploads/2018/08/Ethical-OS-Toolkit-2.pdf :

Risk Zone 2
Addiction & the Dopamine Economy

Research by Common Sense Media found that the average teenager spends 9 hours a day using some form of media. 9 hours!

The time we spend with our devices is of growing concern. Tristan Harris, founder of the Center for Humane Technology (CHT), has called for tech companies to encourage “time well spent,” suggesting designers optimize the time they spend on platforms, in a way that makes their time beneficial to their overall happiness and well-being.

Studies show people achieve maximal intended use of apps like Instagram and Snapchat after 11 minutes — beyond which overall happiness decreases. How might tools be designed to advocate for time well spent? How can we design software that prioritizes user happiness — offline and online—over keeping eyes glued to the screen?

  • Does the business model behind your chosen technology benefit from maximizing user attention and engagement — i.e., the more, the better? If so, is that good for the mental, physical, or social health of the people who use it? What might not be good about it?

  • What does “extreme” use of, addiction to, or unhealthy engagement with your tech look like? What does “moderate” use of, or healthy engagement with, it look like?

  • How could you design a system that encourages moderate use? Can you imagine a business model where promoting moderate use is more sustainable or profitable than always seeking to increase or maximize engagement?

  • If there is potential for toxic materials like conspiracy theories and propaganda to drive high levels of engagement, what steps are being taken to reduce the prevalence of that content? Is it enough?


Thanks for the link, @greg. We need such a framework. I especially liked their checklist. I need to go through the Validbook idea with it. Would anyone like to help?