3 Criteria of Humane Technology


#1

On May 19th we had the first Bay Area Humane Tech Meetup. Fifteen people attended – most of whom are members of this forum. The topic of our discussion at the event was “what is humane technology?”

I told the attendees I’d post the definition we came up with at the meeting here. We agreed that a tech product/service should meet at least 3 criteria to be considered humane:

  1. Humane technology is transparent. People know what the technology does when they use it because it conforms to their expectations. There are no hidden ulterior motives or dark patterns that trick users.

  2. Humane technology lets people opt out. People are not forced to sign up for services or needlessly give up data when they use humane tech. This includes the right to remove sensitive personal information from products like search engines.

  3. Humane technology has legal language that is easy to understand. People can comprehend the terms of service for tech they use so they can make informed decisions. Contracts and privacy policies are written or explained in the simplest legally-permissible language.

This is by no means an exhaustive list, but we felt like these were the ‘minimum required elements’ for humane technology. I’ve seen several more ideas mentioned in other threads – is there anything you would add to this list? What do you think of these three?

(By the way, if you want to discuss this in person we are having our second meet up on June 30th. I posted the details for that event in this thread.)


#2

I like the list here (and sorry I could not make it – I moved halfway across the world :).

The key seems to be in the first point: transparency. That actually informs the second and third points. If you understand and can surface the tradeoffs of what you give up in order to opt-out, for example, that might be a reasonable bargain or trade that many would still be willing to make. For others, maybe not.

The fact that these technology products are designed for addiction isn’t necessarily an inherently bad thing. Ask any dedicated videogamer: addiction is a desired good in many contexts. It’s how products engage their audiences. The transparency factor is being more aware of what could be lost through that willful addiction, and whether that is a tradeoff people are willing to make. Not everybody should be forced to develop in-person human friendships and a personal connection with the real world, as much as that might be a priority for me.

A complement to transparency includes developing an ecosystem understanding of the economics that go into making and supporting technologies. A challenge is that transparency risks divulging competitive “secrets” of some companies who naturally deserve some form of protections for their intellectual investment.


#3

You hit the nail on the head – the core of humane technology is transparency. I love the way you elaborated on this concept. Some of the points you made were also brought up by people at the meeting.

Even though I’ve devoted myself to fighting internet addiction, I agree that addictive technology isn’t bad ipso facto. I only think it is problematic in three circumstances:

  1. When people have no idea the tech they’re using is built to addict them. This is particularly true when tech companies try to hide this fact from the public.

  2. When tech services that are essential for living and working are needlessly designed to be addictive. A large proportion of the population must use email, text messaging, search engines, and workplace chat programs for their jobs. If these products are addictive, they violate the right to opt out I outlined in my original post.

  3. When addictive tech is targeted at groups that don’t have the capacity to make informed decisions (e.g. young children).


Designed for Addiction
#4

Thanks for the thoughtful reply.

  1. Is tricky to address. One person’s addiction is another person’s casual hobby. The risk is that when everything comes with a trigger warning, the warnings no longer serve any purpose other than to be ignored and to give legal teams a rubber-stamp out. I’m not sure how to proceed on this one.

  2. You’re really on to something with this one. When opting out isn’t a realistic option – when it creates debilitating impediments to participating fully in the shared benefits of society – we’re in trouble. These types of services should be held to a higher standard of transparency and scrutiny.

  3. Also a key point. Especially as I think of all the Silicon Valley execs who refuse to expose their own children to the fruits of their careers.


#5

Great points – I’ve put a lot of thought into the ‘trigger warning’ conundrum. One possible solution would be to host the information on a centralized, easily accessible website instead of plastering warnings on every tech product. People who want to be informed could look up the info on the website without becoming numbed by repeated exposure to warnings.

However, only a small fraction of the population would see the centralized website. If you wanted to inform everyone then distributed warnings are probably necessary. These messages could be effective if they were only used on the worst offenders.


#6

I like your trigger warning solution. Again, it goes back to the first point: transparency. We don’t want a nanny state where people are nagged with warnings for behaviors that some people are happy and willing to adopt regardless. But we can support greater transparency in the tradeoffs involved.

Yes, that resource would probably be neglected for the most part. Think of how ridiculous cancer warnings have become on cigarette packages in some countries, and they still don’t deter some people… or even get noticed anymore. But if it can remain an independent reference with balanced standards of review and research data – akin to a Wikipedia, or at least a Snopes.com – it might be useful to some. And it might inspire some self-reflection, and some shame in those who act in ways that seem objectionable.

Because the attention game being played now is unsustainable and is creating an online world that nobody really wants in the end – including the advertisers and the oligopolies like Facebook and Google.


#7

Great discussion you’re having here! Thank you @willmattei for summarizing the Meetup!

I very much agree with you here. Besides games, I would like to specifically mention gamification: adding game mechanics in pursuit of serious goals. Of course, it can be used for good and bad purposes, but when used for good (and transparently) it is one of the most powerful tools for effecting changes in people’s behaviour, e.g. helping them adopt healthy habits.

Agree. This would be a good application for gamification. Instead of fear-mongering images that make smokers feel guilty about their addiction, gamification could motivate them to stop and find alternatives to smoking.

Great idea. This would be like the movie rating systems, where you have a number of informative icons that are easy to interpret, and clicking them brings you to the centralized website for further explanation and recommendations. This could be combined with Idea - Humane Technology Logo Program


#8

> This would be like the movie rating systems, where you have a number of informative icons that are easy to interpret, and clicking them brings you to the centralized website for further explanation and recommendations. This could be combined with Idea - Humane Technology Logo Program

I like this idea a lot and would like to make it happen. A similar project I’m familiar with is a browser add-on called Terms of Service; Didn’t Read. It is supposed to show ratings for websites you visit based on how ethical their terms of service are. However, I’ve downloaded the extension and never been able to see the ratings. I’m not sure how active the program is. Edit: many of the ratings are listed here. I don’t know why they don’t show up in the browser extension.

Maybe we could try to revitalize this project or start our own service. An easy way to do it would be to focus on rating the most popular websites people use (Facebook, YouTube, etc.) first. We could worry about rating less-used sites later.

What do you think of this idea? Does anyone know any similar services that I’m not aware of?


#9

I love this conversation. Thanks for starting it @willmattei!

This makes me think of the “organic” food movement (which then led me to think about the “slow food” movement, which led me to discover this “slow tech” manifesto! Yay for productive wikipedia link-following :slight_smile:).

After reading this thread, I’m beginning to imagine a crowdsourced humane tech / app directory that we could collectively create.

We could start by defining a framework / criteria for evaluating the “humaneness” of technology (which this thread is starting to clarify – is transparent, isn’t designed for addiction, enables the user to opt out, ???). Then we could evaluate apps and tools against those criteria and compile the results into a crowdsourced directory!
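To make the idea concrete, here is a minimal sketch of how such a crowdsourced directory might work. The criteria names and the example apps and ratings are purely illustrative assumptions, not real evaluations or an agreed-upon framework:

```python
# Hypothetical sketch of a "humaneness" directory: evaluate each app
# against a shared checklist of criteria, score it, and sort the results.
# Criteria names and example data below are made up for illustration.

CRITERIA = [
    "transparent",                 # no dark patterns or hidden motives
    "allows_opt_out",              # no forced sign-ups or data collection
    "plain_language_terms",        # understandable ToS / privacy policy
    "not_designed_for_addiction",  # no gratuitous engagement hooks
]

def humaneness_score(evaluation):
    """Fraction of criteria an app satisfies, from 0.0 to 1.0."""
    return sum(evaluation.get(c, False) for c in CRITERIA) / len(CRITERIA)

def build_directory(evaluations):
    """Return (app, score) pairs sorted from most to least humane."""
    scored = [(name, humaneness_score(ev)) for name, ev in evaluations.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Made-up example entries, for illustration only
evaluations = {
    "ExampleApp A": {"transparent": True, "allows_opt_out": True,
                     "plain_language_terms": True,
                     "not_designed_for_addiction": False},
    "ExampleApp B": {"transparent": False, "allows_opt_out": False,
                     "plain_language_terms": True,
                     "not_designed_for_addiction": False},
}

for name, score in build_directory(evaluations):
    print(f"{name}: {score:.2f}")
```

In a real crowdsourced version, each boolean would presumably be replaced by aggregated community votes, but the shape of the data would be similar.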


#10

Thanks for sharing the “slow tech” manifesto! I’d never seen it before.

I like the idea to crowdsource a humane tech directory. It would be difficult for one organization to compile all this information alone. There could also be questions about its credibility/biases. Crowdsourcing avoids both of these issues.

Terms of Service; Didn’t Read is a good start but I think its sole focus on ToS is too narrow. It’d be nice to have a directory that includes info about the design and user experience elements of tech products in addition to the ToS details.

> We could start by defining a framework / criteria for evaluating the “humaneness” of technology (which this thread is starting to clarify – is transparent, isn’t designed for addiction, enables the user to opt out, ???). Then we could evaluate apps and tools against those criteria and compile the results into a crowdsourced directory!

This is a fantastic first step! I’d love to help define “humaneness” criteria for technology – we can start by discussing it on this forum. Should we continue the conversation on this thread or make a new one?


#11

Don’t you dream of some indication of how that new app you’re installing on your phone is going to eat up your time and attention over the coming days and months?

I wish app stores could provide some kind of ‘Attention Footprint Score’ so that we could make a clear choice about which apps to install and use.

That score could be produced by testing apps against an Attention Protection General Policy (defined by CHT?) and/or by users giving feedback and evaluation scores for the apps they use.

App designers and developers would then be more careful about what they do with our attention and time.
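As a rough sketch of the two-part idea above – an automated policy audit combined with crowdsourced user ratings – something like the following could compute a single score. The 50/50 weighting, the example checks, and the 0–10 scale are my own assumptions, not a CHT standard:

```python
# Illustrative sketch of an 'Attention Footprint Score' that blends an
# automated policy audit with crowdsourced user ratings. All weights and
# field names are assumptions for the sake of example.

def attention_footprint_score(policy_checks, user_ratings, policy_weight=0.5):
    """
    policy_checks: list of booleans, True = the app passes a check
                   (e.g. no infinite scroll, no streaks, batched notifications)
    user_ratings:  list of 0-10 scores from users ("how well did this app
                   respect your time and attention?")
    Returns a 0-10 score; higher = smaller attention footprint.
    """
    policy_score = 10 * sum(policy_checks) / len(policy_checks)
    user_score = sum(user_ratings) / len(user_ratings)
    return policy_weight * policy_score + (1 - policy_weight) * user_score

# Made-up example: passes 2 of 4 automated checks, users average 6/10
score = attention_footprint_score([True, False, True, False], [7, 5, 6])
print(round(score, 1))  # 5.5
```

An app store could then display this one number next to the download button, much like an age rating.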


#12

I just came across @jci’s post about an ‘Attention Footprint Score’ and it seemed related to this conversation, so I thought I’d experiment with merging them. :slight_smile: Hope that’s okay!

Woohoo! Wonderful. I’d love for you to lead that! I think you can decide if it’s best to start a fresh thread, or rename this one, or just build off the conversation here. Whatever you think will lead to the most fruitful conversation.


#13

I’m glad you merged the threads – I didn’t see @jci’s original post. I’ll start a new thread to define humane tech criteria sometime in the next couple of days.

In the meantime, a couple of collaborators and I turned this topic into a blog post. :slight_smile:


#14

There’s another interesting & thoughtful approach to thinking about these sorts of criteria here, which lays out its own guidelines for designing and building ethical technology that respects human rights, effort, and experience… I find a lot to like in it, even if it seems a bit broad.


#15

Wouldn’t it be better if, instead of just being able to opt out of things, users had to opt in specifically for each and every action in the first place?

We have come to expect surveillance and tracking: apps spying on our locations and contacts, messaging services telling all of our contacts when we’re online and even when we’re typing, public posting of our profile photos and names, and so on.

But what if instead nothing was shared unless the user specifically chooses to share each and every piece of information? I think that’s not going too far, and I think that’s what users really want.

I think this could turn software as we know it upside down. Is there a name for this?


#16

> Wouldn’t it be better if, instead of just being able to opt out of things, users had to opt in specifically for each and every action in the first place?

> I think this could turn software as we know it upside down. Is there a name for this?

I’m not sure if there’s a name for making everything opt-in, but I’ve heard other people discuss this before.

I’d be interested to see companies try this concept, but I do have some reservations. On one hand, many people want to opt in explicitly whenever a business uses their information. On the other, the opt-in system could be cumbersome and alienate users who value convenience.

In other words, I think the success or failure of this practice would hinge a lot on execution.


#17

5 posts were merged into an existing topic: Technologies, protocols and standards for a better future Web


#18

You have a very good discussion going on here, which was unfortunately dragged off-topic by my own doing. So I have split the resulting posts to a new topic…


#19

Excellent blog!
Thanks for sharing your strong work.
Open-minded proposals.
Great!


#20

I like the concept of transparency, but I’m not sure it’s enough. Many people knew that social media was addictive, but that didn’t stop them from being addicted. Instead, another framing – which still fits your gaming example – is that the product should be designed in service of the user’s goals; features designed solely to meet a business goal that don’t match, or are antithetical to, the user’s goals should be avoided.