What it means to be human

A recent article, “The Tech Backlash We Really Need,” in The New Atlantis made the following statement about the Humane Tech project: “Its vision, however, presupposes broad public agreement about what exactly constitutes more humane technology. It trades, in other words, on a shared vision of human flourishing that may not actually exist. It speaks of ‘our humanity’ and ‘how we want to live,’ but it is not altogether clear that it offers a meaningful answer to the questions of what constitutes humanity, of how we want to live, of what we means.” Reading through the comments here, it seems to me that there is an implied sense of what “human” means, but I have to agree with the author of the article that perhaps there is no shared vision of human flourishing, though many obvious ways technology diminishes our humanity have been clearly identified.

Perhaps Jaron Lanier in his book You Are Not a Gadget can provide a basic starting point: “Emphasizing the crowd means deemphasizing individual humans in the design of society, and when you ask people not to be people, they revert to bad moblike behaviors. This leads not only to empowered trolls, but to a generally unfriendly and unconstructive online world.” More broadly, he says, “The deep meaning of personhood is being reduced by illusions of bits.”

Here are some of his specific suggestions for reclaiming humanity on the web:

1. Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out.
2. If you are twittering, innovate in order to find a way to describe your internal state instead of trivial external events, to avoid the creeping danger of believing that objectively described events define you, as they would define a machine.

The “inner voice” is something that has to be steadily worked at in order to be defined as human. Where does our unique voice come from? What makes it the work of one and only one person rather than a mashup of half-digested opinions gathered online?

In the second example, our internal state is what makes the event something human rather than some literal event that has a built-in definition which would be the same for a machine.

Breaking through the software’s definition of a person is what seems to be called for, because in comparison to the built-in richness of the human being, that definition can only be reductive. Perhaps we need to investigate the human qualities that social media serves as a substitute for, such as a direct perception of another’s ego. Then we might be able to understand why the definitions built into the software don’t allow those qualities to emerge. Perhaps trying to fit our human capacities into a technological box is inherently impossible and would only be attempted in a society where powerful interests realize significant gains from the type of human beings formed by such software. Perhaps it’s time to search a little deeper.


Software is created by human beings who supposedly have empathy. If their empathic powers are limited, the software they create will probably reflect that.

The hardware created to run that software serves the purposes of its manufacturers, whose intentions may be narrowly focused on profit.

Is there any hardware serving a philanthropic manufacturer? Is there any software designed to make us better human beings, to improve our societies, to help us eliminate our defects?


I think that the implicit and explicit intentions and motivations behind the question of “what is a human” have significant implications for how that question is answered. For example, if a software designer asks the question because she wants insight into humanity that will help her make money off of the product she hopes to design, then that motivation will be reflected in any answer she reaches. It would be rare indeed if her question about humanity led her to conclude that the software and the company she worked for had little to contribute to humanity, or might be making things worse. So I agree with boydcster that we need deeper thinking into the question of what a human being is, and also deeper thinking into our intentions in asking such a question.

I’m reminded of my work as a therapist with couples, when I ask people what their marriage is for. Most people in couples counseling are in crisis, obviously, so their marriage is something they need to save, or a problem they need to solve. When people pivot away from “what’s the immediate problem that needs to go away” toward “what do I want to have left over when the problem is gone,” their approach to the marriage and to couples counseling changes dramatically. When people define marriage as “a place where affection is shared,” then they can act from that intention and figure out whether that goal is best achieved through staying together or splitting up. Either way, both people are usually much better off than they would be stuck in “a marriage is a crisis we need to fix.”
So, let’s think deeply about our intentions/motivations and the assumptions and biases that may be hidden within them. I think this will help us be better humans, as well as create more nuanced and useful descriptions of what a human is.


Yes, this is my experience as a conversationalist and extrovert. I process information through real human interaction, and that experience is gone for me now; social media will never replace it. You can’t meet people in a cafe anymore; those natural social chats with strangers are gone. This is a public health problem: we can no longer go out of our houses and expect social interaction. It’s an isolated world we live in.


Hi Dan,

I really like your analogy of working with couples. Moving from the specific problem that brought them to the therapist to the benefits of the whole relationship is exactly what we need to do in software, and it shows why solutions so often go wrong. They go wrong because we focus too exclusively on the immediate requirement and lose the big picture of what the user’s goals really are. Worse still, we are often encouraged to try to define those goals for them.

But from a humane technology perspective, I’m trying to delve into how technology so often substitutes for human capacities that we could be developing. The most obvious example is social media, where the purpose of human relationships seems to get lost in the technical paraphernalia, turning what could be friendships into a sophisticated video game that ends up reducing our ability to build meaningful relationships. I would like to identify the human capacities that build friendships and ask whether, and how, software can make a real contribution to this process. Obviously that doesn’t seem to be what Facebook is designed to do, but are there examples of other social media platforms that actually encourage these behaviors?


Software may be created by those who have empathy in their personal lives, but empathy seems rarely to be a goal for the software they create, which is dictated by impersonal corporate goals. My focus on the human is meant to ask, “Do we really have such a deep understanding of the nature of the human being that we can define software that supports our humanity, or has technology drained away so much of our humanity that we are no longer capable of creating such software, whatever the goals of the companies we work for might be?”

I’m totally with you on this observation. As Lanier would probably agree, a lot of social media seems designed to drain away whatever humanity we might have been able to preserve. But to develop alternatives we first have to define humanity in a non-reductive way. That was what I was getting at with my two examples: 1) Don’t write a blog post until you’ve found a true inner voice that reflects your individuality; 2) Communicate your insights in a way that defines a unique relationship to the realities around you rather than echoing what the crowd wants to hear. Maybe creating software that enables courage is too much to ask for, but at least we can identify those human characteristics that define us at our best and see if we can stop encouraging their opposites. But even so, nothing replaces that spontaneous look on someone’s face, the tone of voice, and the clasp on the shoulder. Maybe it’s time to get back to those things.

As long as people are creating the software, there is hope for technology that takes into account the flaws, frailties, and vulnerabilities of human beings. Once the software is created by machines, that possibility is lost, and the likelihood of some catastrophic ending for humanity is increased.

As someone from the humanities, I would add that “technology” is not the be-all and end-all of human existence. It is just one aspect, and other things, including the arts and literature, are necessary too. A wise poet and doctor once said that you won’t find the news in poetry, but every day, men die for lack of what is found there.

I’m re-recommending the article by L. M. Sacasas, director of the Greystone Theological Institute Center for the Study of Ethics and Technology. What it is to be human is a complex and misunderstood issue, which is why we need experts to the rescue.

Technology is created by humans and therefore is human. The unethical and unsophisticated parts of technology reflect being human. It is human as humans are, not as we would like them to be.

Many people went into tech because it’s what they can do: maths, logic, design, accounting. On the other hand, language, ethics, culture, philosophy, history, and literature are just not considered on an application for a tech job. Tech workers and business people are often uneducated in the latter. What we have now is the unenlightened, uncivil, and unsophisticated.

So we have these out-of-control human calculators designing our world, who don’t understand our world. Many companies would rather have human calculators, because people with ethics or cultural sensitivity would put a damper on their cold, unethical profits.

All of this is human. But “human” is not the right word to describe whether something is good or bad (whatever good or bad means). “What does it mean to be human?” is not the right question at all. It’s an unrelated question. The question should be something like how can we create technology that’s cultured, civilised, ethical, enlightened, sophisticated.

Civil / Civilised / Sophisticated / Enlightened Technology

civilise : to bring out of a savage, uneducated, or rude state; make civil; elevate in social and private life; enlighten; refine

sophisticated : altered by education, experience, etc., so as to be worldly-wise; not naive

enlightened : to give intellectual or spiritual light to; instruct; impart knowledge to

(source: dictionary.com)


Good thoughts, @free.

Did you see this article from the Guardian? It’s an excellent study of the friction and potential for change that result when technologists and artists meet.


Thanks. I suppose artists are just as self-centered as business people.

I know of this article. What strikes me is the naivety of these “tech libertarians”, looking to move to the moon or Mars and thinking they can make the world all for themselves and recreate the Middle Ages. Apparently people like Peter Thiel know little of history.

It’s silly: move to New Zealand for safety? Clean air? If war or chaos or economic apocalypse breaks out (it won’t), wouldn’t it be better to be in a rich, large country with a strong military? I travel extensively; there are endless remote places in the world, many of them in these guys’ home country, the US. And New Zealand is hardly the only place in the world with clean air.

If these guys knew more about being human, they would understand they need other people to survive. They are applying the laws of the things they know, like computers, to people and the world. That is causing them to create these bizarre libertarian ideas, because they do not understand people or history that well.

Dear Silicon Valley, sorry, but your grasp over the world and humanity will not multiply the way computer processing power grows. And your machines are not going to make people useless; look at the history of technological progress. Your money will not grow infinitely; look at history again. You are dependent on other people.


This is what I’m talking about with education and technology: it displaces critical time needed in childhood for developing relationship skills. We learn how to relate to each other by trial and error at a young age, with guidance. What I’m seeing today in school communities does not support a holistic effort toward this.


Thanks Boyd. Yes, this is the conversation I want to have, and I think mental health professionals need a seat at the table in tech companies so we can provide our perspective and expertise on relationships and wellbeing.
One thought is that relationships are always corrupted or distorted when a third party manipulates them for its own benefit. It’s difficult to have a meaningful connection when someone else is using that connection to make money or gain power.

Regarding how relationships are built, the two main factors seem to be proximity and repetition. Just being around someone makes it more likely you’ll become friends. Social media can sometimes provide this, which is clear to the many people who have made friends online. It can also “weaponize” proximity and repetition by, like you said, turning relationships into video games. The ways social media can mimic relationships in shallow and addictive ways are clearly very toxic.

As long as clicks on ads are the business model, people will be manipulated to engage and click on ads. I think FB should become a subscription service. People will behave better, since people tend to respect what they pay for, and FB would have less incentive to let third parties manipulate or exploit users. The problem is that social media users are the resource being mined by social media companies, and these companies treat users as resources to exploit rather than humans to respect and earn repeat business from. People are easy to hack, and we’ve been hacked. And any hacked person in a hacked relationship is not healthy.
There’s so much to talk about, but I’m going to sign off for now, it’s late and I’m getting tired. Thanks for the conversation : )


That’s a fantastic article. Thanks for posting!

I think you misunderstand the author’s argument in the quote you posted, though. He’s most likely making an argument similar to Patrick Deneen’s in Why Liberalism Failed: that the emphasis on liberal individualism has made society less cohesive in ways that ultimately undermine it. Focusing on your unique inner voice and internal state, as Jaron Lanier suggests, is a good vision of human flourishing, but ironically it will probably not get people on the same page when defining human flourishing for themselves. This, according to the author, is the problem.

When it comes to what these competing visions of human flourishing are and how they deal with technology, the author and Jaron Lanier fall into two very different camps:

Jaron Lanier: Technology is good when it allows us to find our authentic inner voice, but should be avoided when it coerces our humanity to serve a mechanical system.

Wendell Berry (and the author): Technology is good when it helps us maintain deep networks of belonging, but should be avoided when it atomizes people or uproots them from their communities.

It may sound like I’m putting words into the author’s mouth, which I may well be. But I make this distinction because I know a lot of people who define flourishing this way, and the author has all the affiliations I would expect from someone in that camp (not least of which being that he quotes Wendell Berry).

I, however, do not subscribe to either definition. I think that both have an incomplete understanding of the human experience, and neither one adequately accounts for social power dynamics. Perhaps a better vision is in tension somewhere between these two, but if that’s the case, it probably won’t get many adherents and we’re back to the same problem we started with. What is the vision of flourishing that we’re after? And is there any way we can get enough people to agree that we can make meaningful change?


The feeling of belonging “deeply” to a community provided by technology is both artificial and superficial. Some who are very active on social media want to please and show their best sides; others wish to shock and provoke in order to appear original, even if that makes them original jerks.

This can be easily contrasted with real-world, prolonged exposure to individuals, which over time reveals their complex nature and…humanity. We love even the dark sides of those we love and respect for their many other qualities, we hate the near-perfect image of those who seek to please all, we have compassion for those who hurt because they themselves were hurt in the past, we despise the hypocrites who over time cannot hide that they go against what they preach.

All these subtleties are lost in the ether, and our rich personalities are diluted in a digital persona that is a piece of code, a lie aimed at projecting who we want to be, what we hope the community wants to see in us, and not our truly human nature, which can only be captured in situations where our facial expressions say it all.


Reading through everyone’s posts, I’m noticing the theme of how the emotional immaturity and poor social skills of some elite technologists have shaped so much of social media and our conversations about it. How can we influence these guys to grow up, to overcome their blind spots and become more well-rounded, empathetic leaders? Also, how do we challenge ourselves to grow up and do the same?

Let’s be even more incendiary and direct on your point: many elite tech entrepreneurs are on the spectrum. IMO, treating someone with ASD is no small matter to be handed over to crowdsourcing.

The only reasonable thing we, as the ultimate users and potential victims of these systems, can do is our best to look into the hearts or intentions of their creators and decide for ourselves whether they are something we want to invite into our lives or not.

But I absolutely love your challenge about pointing this back at ourselves. We spend little time knowing ourselves, learning about ourselves, and developing what’s truly inside us. Many of these diversionary tools fill a subconscious need for distraction from being alone with ourselves and our thoughts. And given that technology is a magnifier, an amplifier, it merely takes all those flaws and missing parts within ourselves and unleashes them on the broader world. Our impact on the world is wholly a reflection of what’s inside us and how well we know ourselves first.

This is a particularly crucial task as Yuval Noah Harari points out: that we are all at risk of losing our free will when we don’t take the time to know ourselves and yet billionaire platforms know so much more about our conscious and subconscious. They know how to trigger what they know about us, and they leave us believing we are in control of ourselves. It’s already happening.


Living near Silicon Valley, I find this to be true as well. Social Thinking was developed near Silicon Valley, where many of these people have kids…

See: Social Thinking Product List

It’s worth looking through these resources, which are widely used in school systems in the SF Bay Area.

Hi wolverdude,

Thanks much for clarifying Sacasas’ argument in making the distinction between Jaron Lanier’s argument and Wendell Berry’s. Your distinction seems correct to me and helps clear my thinking about the central issue - as Sacasas put it - “What sort of discourse do they [social media platforms] encourage or discourage? What kind of political subjectivity emerges from the habitual use of social media?” I would add wider questions to these such as “What types of human beings are we being molded into by technologically-based social media?”

Sacasas, like Berry, focuses on the “limits appropriate to the human condition.” According to Berry, these limits can enhance the “fullness of relationship and meaning.” In this context, the point of the Lanier examples was to provide concrete instances of what that fullness of relationship might look like, such as the discovery of an inner voice. The example suggests respect for our own uniqueness as a starting point for real relationships. The limitations Berry refers to concern those human characteristics, such as moral insight, imaginative range, and creative discovery, that flourish best when our human limits are respected, when we are in our least “machine-like” state, if I might put it that way.

One of Sacasas’ major points is that we refuse to question our submission to management by digital tools. The type of tech backlash he questions could be satisfied with a “humane” facelift over fundamentally inhuman tools. The existence of this backlash, along with the current remedial efforts, may in fact reveal the depth of our implicit commitment to growing inhumanity.

The current critiques and reform efforts stand completely within the technological paradigm that has dominated human progress for the past several centuries. Such efforts at reform may strengthen the underlying inhumanity of technology by providing assurance that the industry is truly striving to make it more human, a word which necessarily is left in a vague and undefined state. My contention, along with Sacasas’, is that we need to think more deeply about what humanity means whether it is through Lanier’s examples of truly personal engagement or Berry’s invocation of the great artistic traditions, which he characterizes in his inimitable way as follows, “We must learn again to ask how we can make the most of what we are, what we have, what we have been given. If we always have a theoretically better substitute available from somebody or someplace else, we will never make the most of anything.” (Faustian Economics). Sometimes that may involve technology, but often not.

While I understand the difference between the viewpoints of Berry and Lanier, I tend to see both as encompassed in a larger viewpoint that questions technology in terms of its acceptable contribution to human flourishing. Lanier represents a more individualist method of extending our humanity, while Berry emphasizes the social dimension. Sacasas is not simply endorsing Berry but using him as an example of the type of critique that could be made, but that he does not see being made by organizations such as HumaneTech, about which he says, “The critique emanates from within the system, assumes the overall beneficence of the system, and serves only to maximize the system’s power and efficiency by working out its bugs.” While this may be the only practical way to proceed, it should be recognized as such.


I’m not so sure whether we can, or wish to, have the same impact when we either have face-to-face interactions or type something on a device.

Many people have resorted to spending much more time texting than speaking, because it gives them more time to think and edit their thoughts, and control their emotions.

As to social media, I admit I am not an expert, lacking sufficient exposure, but it seems to me from what I’ve seen that many people create a voice for themselves and tend to post following a style that they define as truly representing them.

A great and wise writer who knows himself/herself well will be more likely to properly and accurately define that voice; for most of us, that voice can only be identified through feedback from close friends, teachers, mentors, psychologists, family members, etc., who are better able to see contrasts and contradictions.

As you pointed out, knowing ourselves is key, failing which the image of our subconscious formed by social media engines - and our “followers” - will be biased, or appear filled with contradictions.

This reminds me of one type of personality test, where by asking many seemingly independent questions, the test is able to determine that the subject has been trying hard to portray himself/herself as “ideal” in his/her own eyes.

The question, therefore, before we seek to determine the social good of social media, is: “Are we able to be true to ourselves in the digital realm, to communicate and share without second thoughts about our audience?” Or is the pressure of knowing we will be read by many forcing us to control and fabricate?

Here are two related texts on the subject:

The second text posits that there’s nothing more “real” in face-to-face interactions than in digital representations of ourselves; both are equally valid, the author says. I find this thinking distasteful, since the author attributes the contradictions to attention to different audiences, and appears to condone lying: “We sometimes lie when we call in sick while we just don’t want to get to work.” Is that the best justification there is for having a “dual identity”?

I tend to believe the pressure to “perform” is high for most, and that we fear our inadequacies. Maybe I’m wrong.