What it means to be human

Are you saying that the technological paradigm began with the Industrial Revolution? Just want to clarify. If so, I agree. The predominant metaphor at that time was that of the watchmaker, i.e., a being who uses his skill and knowledge to create a perfectly working object.

Agree here. This brings up the old argument of changing the system from within: can it be done, or are we fooling ourselves? The more complex and ubiquitous, the more difficult to change. At some point, our minds become part of the operating system and help it run.

There is a wonderful TED talk by Brené Brown on vulnerability in which she defines courage as the ability to tell the true story of ourselves.


Another aspect of social media that is well worth considering: any “respectable” (in the sense of being business-savvy, not necessarily human-friendly) website or app leverages recommender systems, whereby user preferences, inclinations, or biases are mined to deliver the content most likely to capture their attention.
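
To make this concrete, here is a minimal, hypothetical sketch of how such a system might rank a feed: a “taste” vector is inferred from a user’s past engagements, and candidate posts are scored by similarity to it. The names (`rank_feed`, `cosine`) and toy vectors are my own assumptions for illustration, not any platform’s actual code.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_feed(engaged_item_vectors, candidate_vectors):
    """Rank candidate posts by similarity to the user's inferred preferences."""
    taste = np.mean(engaged_item_vectors, axis=0)            # the inferred "digital persona"
    scores = [cosine(taste, c) for c in candidate_vectors]   # appeal to that persona
    return np.argsort(scores)[::-1]                          # most appealing first

# Example: two liked items near (1, 0) pull the feed toward similar content.
liked = np.array([[0.9, 0.1], [1.0, 0.0]])
candidates = np.array([[0.95, 0.05],    # very similar     -> shown first
                       [0.0, 1.0]])     # novel/challenging -> pushed down
print(rank_feed(liked, candidates))     # -> [0 1]
```

Note that nothing in this sketch asks whether the content is good for you; the only optimization target is expected appeal.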

Let’s assume for a moment that, for conscious or unconscious reasons, your digital persona is somehow distinct from your original persona, i.e., who you would be if social media did not exist. My previous post argued that this may well be the case for most people, but that’s not the point here.

Then you will be served the content that most appeals to your digital persona, thereby reinforcing its “values”. Taken to the extreme, if you are interpreted as having far-right values, the system will push content that corroborates such views or even radicalizes you further. Your attention may be drawn to communities whose goals align with far-right values. All of this makes you feel good about your views and gives you plenty of opportunities to “belong”, not to society as a whole, but to communities that fully embrace those views.
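
Here is an equally toy simulation of that reinforcement loop, with made-up numbers: the system serves whatever currently appeals most, and the persona drifts a little toward what it is served. It is a sketch of the dynamic, not a claim about any real platform’s parameters.

```python
import numpy as np

# Toy reinforcement loop: the platform serves the item closest to the current
# persona, and the persona drifts a little toward whatever it is served.
np.random.seed(0)
items = np.random.randn(200, 2)        # made-up content embeddings
persona = np.array([0.2, 0.0])         # a mild initial leaning

for _ in range(50):
    served = items[np.argmax(items @ persona)]   # the most "appealing" item wins
    persona = 0.95 * persona + 0.05 * served     # exposure nudges the persona further

print(np.round(persona, 2))  # the persona has drifted toward an extreme of item space
```

After a few dozen rounds the persona sits far from where it started, purely because appeal was the only criterion at each step.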

Sorry that I took the far-right as my example and described it as an evil: while it is for most, objectively it is just one model of thought.

In the real world, exposing your thoughts exposes you to feedback from random interactions that may be so well informed that you change your viewpoint, or at least moderate it.

Influencers on social media have not gone through any thorough process by which their rationality and reliance on objectivity could be demonstrated. Hence a social media society is turning into a democratization of influence, where the key criterion is the popularity, not the erudition, of the influencer. In a way, we tend to follow influencers because they are popular, not because they are right. This is a drastic change in how opinions are formed: from the informed, educated, and structured to the most pleasing, popular, and demagogic.

If you had a choice, financial considerations aside, between sending your kid to Harvard or letting him understand the world through the lens of social media influencers for a couple of years, what would you do?

This long aside on education brings me to this: from a world where opinions were discussed and dissected based on facts and reason, we have come to a world where anyone’s crudest opinions are massively corroborated by social media, whose purpose (so that it can derive higher profits) is to validate your thoughts by delivering content that pleases you.

Consider, for instance, that Zuckerberg is not very keen on established media, whose profession is (or should be, at least) to portray facts and provide rational analyses.

Facebook (not to mention others) started out as an innocent project. Now that it has grown powerful and sophisticated, it is trying to justify its existence as “bringing people together”. The question is: does it bring similarly flawed people into like-minded communities where their flaws are now extolled as virtues? Shouldn’t these flaws instead be corrected, or at least contrasted with content that goes against their preferences?

Social media recommender systems can be compared to e-commerce recommender systems. Why would you recommend a yellow dress to someone who obviously hates yellow? Or should you try to change her tastes? This is something social media should do, but again, it works better for them to push content that goes along with your preferred dimensions.

Not that they manually define these dimensions in an evil way: to follow our earlier example, far-right tendencies may be learnt as a latent dimension, rather than as “I’m interested in discussions on whether far-right or more left-leaning views are justified.” This is just an evil side of the automated nature of machine learning: individuals may be evaluated on their views rather than their concerns.
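
To illustrate the “latent dimension” point, here is a small, assumed example of matrix factorization on a toy engagement matrix. The learned factors carry no human-assigned labels; if one of them happens to line up with a political leaning, users are placed along it anyway. The data and hyperparameters are invented for the sketch.

```python
import numpy as np

# Toy engagement matrix: rows = users, columns = posts, 1 = engaged, 0 = ignored.
R = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

np.random.seed(0)
k = 2                                     # number of latent dimensions; nobody names them
U = 0.1 * np.random.rand(R.shape[0], k)   # user factors
V = 0.1 * np.random.rand(R.shape[1], k)   # item factors
lr, reg = 0.05, 0.01

# Plain gradient descent on the squared reconstruction error.
for _ in range(2000):
    err = R - U @ V.T
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

# Each user now sits somewhere along k unlabeled axes. If one axis happens to
# separate, say, far-right content from the rest, users are scored on it
# without anyone ever telling the system what that axis "means".
print(np.round(U, 2))
```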


@anon51879794 nothing like being challenged with things we might not normally reach out to. Like being trapped in a car with the only radio station that comes in tune, or a chatty airline passenger we can’t turn off… we are not given these opportunities anymore, and those experiences are the buds of diversity and tolerance.

I honestly think we should just force ourselves to listen to or hang out with people we totally disagree with once in a while to create those chance meetings social media displaces.


@healthyswimmer: fully agree, and technology, as an addition to (not a replacement for) real-world interactions, also affords us tremendous means to achieve that - provided it is not in control of which content we are mostly exposed to.


Hi patm,

Thanks for your replies. The technological paradigm I’m talking about is the broad attitude that emerged over the last 200 years that every problem has a technical solution. In the context of CHT, and as a professional web developer for 25 years, I’m trying to define those features of my users’ humanity that my designs can impact positively or negatively. Talking vaguely about preventing attention erosion or avoiding hijacking user minds doesn’t give me much to go by. So I was hoping we could look at issues such as those raised by Dan Rubin: how actual human relationships can be built online, what technologies could enhance that process, or whether they must remain superficial, and that’s OK because that’s all the technology is capable of given the profit objectives of the social media companies.

I’m basically agreeing with the critique made by Sacasas in his New Atlantis article referenced above, The Tech Backlash We Really Need. The argument he makes is that, while interesting and praiseworthy in terms of its stated goals, the CHT initiative doesn’t really address the root issues: “It speaks of ‘our humanity’ and ‘how we want to live,’ but it is not altogether clear that it offers a meaningful answer to the questions of what constitutes humanity, of how we want to live, of what we means” (Sacasas). The assumption is that we have a basic understanding of these without having to spell it out in detail. I started this section to see if these questions could be addressed.

As to changing the system from within, we could start by identifying the actual assumptions behind the project, one of which is that the technology behind social media is basically “good”, meaning not inherently anti-human, but it is often being used wrongly. What I’m trying to do is draw a line between human and anti-human based on my understanding of the human good. But there is a problem with defining “human” in meaningful terms, “Liberal democracy professes a fundamental neutrality regarding competing visions of the good life, offering instead to protect basic human rights while creating the context for individuals to flourish with maximal freedom. In the space created by this professed neutrality, modern technology has flourished, unchecked by a robust and thick understanding of human flourishing.”

That “robust and thick understanding” seems lacking in the tech community. So I’m asking what criteria we can define in order to make real progress toward a more human experience, or whether the inherent momentum of technology makes it anti-human in certain fundamental ways that can only be compensated for by temporarily moving away from it and enjoying other things, such as the sky, the ocean, and a few close friends. As you say, we are part of the operating system, so our minds can only function within a tight set of restrictions when working with the technology. What are these restrictions doing to our humanity? And is it worthwhile exploring that?

By the way, I’m a big fan of Brené Brown and I loved her TED talk. That’s my idea of humanity.


I think you make a number of excellent points, particularly, “relationship are always corrupted or distorted when a third party manipulates for that third party’s benefit.” So what if the social media platform was owned by the users? I see two immediate consequences: 1) some users might try to exploit other users just as they do in the real world, but this would be hard to protect against because there would be no central source of responsibility; 2) without strong corporate backing, the platform would languish because no one would take responsibility for it.

Therefore the owners of the platform need to make enough profit to maintain its usability. They could follow a subscription model, and this would be most likely to succeed if it could appeal to particular interest groups willing to pay for quality, non-exploitative interactions with others who share the same interests. Many would likely pay just to get rid of the trolls, if there were a way to escape them.

Just as we try to form a circle of quality around ourselves in the real world, could we find a way to do the same online? If the selection process were driven by human decisions rather than algorithms, the quality could be constantly enhanced, because it would grow through experience and interaction rather than keywords. The real problem is that we have been very effectively trained to treat all online interactions as resources to be exploited for a number of motivations, not all of which are monetary.

It seems like it’s the same problem we face in the offline world. What are the social tools we use to build up our humanity, to govern ourselves so that we build the solidarity we inherently long for?


@boydcster

My take on the question you raise, about starting by defining what makes us human and what “we” means in the first place, is that, unless you want to go all philosophical and attempt to spell it out in absolute terms, it’s much more approachable to do so by way of contrasts and yes/no questions.

We would all agree that even if there were a universally accepted definition of what makes us human in absolute terms, we would still have to accept that technology cannot entirely reflect it and help it prosper, as technology is both a reduction of complex aspects to a finite number of features and an endeavor that relies on profit-driven interests (which makes for necessary compromises).

If instead we go by way of contrasts (as many in the humane tech community do), we can easily contrast products offered by technology (not necessarily “solutions”, as that would imply there was a problem to be addressed in the first place) with their equivalents in the non-digital world.

Take three examples:

Netflix: through recommender systems, Netflix offers users personalized advice on movies they will likely enjoy, be they blockbusters or indie movies. In the real world, you would need to know people with an incredible knowledge of tens of thousands of movies to get a similar experience. On the other hand, Netflix is unlikely to suggest movies outside your perceived preferences, so it doesn’t help you grow and diversify your interests; but that’s still something you can do through real-world interactions and recommendations.

Netflix follows an approach that emulates the real world, but goes beyond its limitations. No real conflict here as far as our humanity is concerned.

Uber: we all know the business model and its advantages. A positive one is that it encourages drivers to be courteous, since their behavior affects their ratings. A negative one is that in some countries, regular taxi drivers resent the competition from Uber drivers who, unlike them, did not have to go through a thorough registration process or foot high upfront costs. Such issues should be resolved by regulators, so there is a due process in place to arbitrate. Another possible negative aspect is the exploitation of drivers, who only take home limited pay despite many, many hours at the wheel. But again, that’s just another aspect of a capitalistic society, and something regulators can scrutinize.

Does Uber change/negatively affect our humanity? Not really. It’s just a product that introduces more - if not unfair - competition.

Finally, our favorite subject of discussion: Facebook. I wrote enough on this thread about the social media giant. The contrast here is very, very significant. One may argue that nothing precludes our humanity from evolving - albeit at an unprecedented and frenetic pace - and “integrating” our new digital habits (I won’t say addictions). Maybe I’m wrong, but the way I see it, we are at a unique point in history where our humanity (again, without defining it in absolute terms) is undergoing TREMENDOUS changes on a global scale, and all of this has happened within a mere decade or so.

The whole point of the CHT initiative - again, that’s my view only - is to rein in this rapid evolution, or at least to make it clear to anyone what we’re going through and take responsibility for the consequences.

I know it’s cliché, but doesn’t it freak you out to watch people on the street and notice that at least 80% are holding their phones? Is that “human”? Or has humanity over the past centuries craved a device it could hold on to 16 hours a day?

Taking another, admittedly very stupid, example: let’s say 51% of the population gets on crack and crack is legalized. Would you then say “access to crack should be a human right”?

The point of this stupid example is that it’s not good enough to just state that technology is what it is, that what we make of it reflects our human nature, so let’s just live with it. There’s an urgent need for awareness, and from my experience discussing the subject with many people outside this community, they don’t have a care in the world.
