Humane Technology reading lists

Harassment on Instagram is getting so bad that it can destroy not only your digital life, but your physical one too…

Discussion on Hacker News: https://news.ycombinator.com/item?id=18219633

If you want to read more about me and why I’m interested in making an iconic product (see: The Resistor Case), check out these articles:

I’m not a Luddite fanatic. These articles are supposed to be critical AND funny.
Cheers.
m.


As a Luddite of low-intensity fanaticism, I can confirm that your writing is indeed critical and funny. Thank you :relaxed:

For those who haven’t read the second article @Marcel lists, here is an excerpt:

If Silicon Valley is going to face its digital sins, it makes perfect sense for these confessions to come from the men who have shaped, dominated and protected the culture that has provoked the invention of such a concept as the attention economy. However, I am less enthusiastic about the prospect of having these men serve as both confessant and confessor, designing apps to promote digital abstinence, giving mindfulness lectures that command heavy speaker’s fees and signing lucrative deals for self-help books inspired by an appropriative philosophy… It may well be that what Mr. Harris and other high-profile digital sinners are experiencing is not a Sunday morning epiphany, but a post-Saturday-night hangover.

Just found this beauty:

switching.social

ethical, easy-to-use and privacy-conscious alternatives to many of the apps and softwares you use


This article about identifying faked images was shared on CHT’s Facebook page.

Here is an excerpt:

There are two main ways to deal with the challenge of verifying images, explains Farid. The first is to look for modifications in an image. Image forensics experts use computational techniques to pick out whether any pixels or metadata seem altered. They can look for shadows or reflections that don’t follow the laws of physics, for example, or check how many times an image file has been compressed to determine whether it has been saved multiple times.

The second and newer method is to verify an image’s integrity the moment it is taken. This involves performing dozens of checks to make sure the photographer isn’t trying to spoof the device’s location data and time stamp. Do the camera’s coordinates, time zone, and altitude and nearby Wi-Fi networks all corroborate each other? Does the light in the image refract as it would for a three-dimensional scene? Or is someone taking a picture of another two-dimensional photo?
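The second method boils down to cross-checking metadata claims against each other. As a toy illustration of one such corroboration check (the function name and tolerance are my own, not from the article): the UTC offset in a photo's timestamp should roughly match its GPS longitude, since solar time shifts about one hour per 15 degrees of longitude. Political time zones deviate, so a real system would need much more than this sketch:

```python
# Hedged sketch: does a photo's recorded UTC offset roughly corroborate
# its GPS longitude? Solar time shifts ~1 hour per 15 degrees of longitude;
# political time zones deviate, hence the generous tolerance.
def offset_matches_longitude(utc_offset_hours, longitude_deg, tolerance_hours=2.0):
    expected = longitude_deg / 15.0  # naive solar-time offset for this longitude
    return abs(utc_offset_hours - expected) <= tolerance_hours

# Berlin in winter: UTC+1 at ~13.4 degrees East -> plausible
print(offset_matches_longitude(1, 13.4))    # True
# A photo claiming UTC+1 while geotagged near San Francisco (-122 degrees) -> suspicious
print(offset_matches_longitude(1, -122.4))  # False
```

A real verifier would run dozens of such checks (altitude, nearby Wi-Fi networks, light refraction) and flag an image when they fail to corroborate each other.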


Yes, @patm, nice find. I saw that on Hacker News, but it didn’t make it to the front page.

I also found this startup, which uses AI to protect you against identity hijacking and also detects deepfakes:

Not Evil

Note: It is not clear how they intend to earn their money (probably by offering professional services to companies), and though they have a clear Privacy Policy, it allows some sharing of personal information with third parties.

Two very insightful TED talks for you to watch that help raise your awareness:


Here is an article with a surprising assertion: that because of their education’s emphasis on using digital devices, students are not learning the manual skills required in the medical field.


Security Awareness

Please, please, please! Don’t be like these people!!


A nice 2016 article featuring Tristan Harris about the addictiveness of social media platforms and apps, the advent of Behavior Design, and the impact this has when a small group of people decides what billions of others get to see of the (rest of the) online world:

Harris talks fast and with an edgy intensity. One of his mantras is, “Whoever controls the menu controls the choices.” The news we see, the friends we hear from, the jobs we hear about, the restaurants we consider, even our potential romantic partners – all of them are, increasingly, filtered through a few widespread apps, each of which comes with a menu of options. That gives the menu designer enormous power. As any restaurateur, croupier or marketer can tell you, options can be tilted to influence choices. Pick one of these three prices, says the retailer, knowing that at least 70% of us will pick the middle one.

Harris’s peers have, he says, become absurdly powerful, albeit by accident. Menus used by billions of people are designed by a small group of men, aged between 25 and 35, who studied computer science and live in San Francisco. “What’s the moral operating system running in their head?” Harris asks. “Are they thinking about their ethical responsibility? Do they even have the time to think about it?”

https://www.1843magazine.com/features/the-scientists-who-make-apps-addictive

(featured again today on Hacker News here, and mentioned before by @soregan here)

A research project – in six parts – designed to shed light on state department of education and select school district website security and privacy practices.

Tracking: EDU - Education Agency Website Security and Privacy Practices


The project consists of the following parts:

Press Release / Select News Coverage
Part I: Introduction
Part II: Secure Browsing
Part III: Ad Tracking & Surveillance
Part IV: Privacy Policies 
Part V: State of the States
Part VI: Implications & Action Steps

A very informative resource!

@patm I just posted a prediction about this today!!! I’ll find it and link-

The summary: no it’s not a farce. Tech companies operate with an emphasis on profit above all else. Therefore real ethical tech only exists in nonprofits.

“Rather than building products that satisfy animalistic behavior, from screen addiction to fear mongering, tech nonprofits are building technology to fill gaps in basic human needs — education, human rights, healthcare.”

I like the use of the word “animalistic” because it puts us in our place among other animals. Almost all animals, humans included, fight for survival against others; that mirrors tech capitalism and harmful or selfish activities pursued in the self-interest of a person or a group of people, such as a country. But animals, humans among them, also need to cooperate to survive; that side is represented by nonprofits, communities and other benign organisations.


RAND Corporation’s project on Truth Decay:

Nate Hagens’ Reality 101 lecture: a synthesis of energy, economy, ecology and psychology

Forward-thinking people might find the Nate Hagens lecture enlightening. He helps us understand energy, economy, ecology and psychology, and how they all fit together in the modern world. Of particular interest to this community is the neurochemistry that is hijacked to keep us clicking.

An Overview of Artificial Intelligence Ethics and Regulations

This LinkedIn article by Prof. Christian Guttmann (UNSW) provides a number of good resources related to AI ethics, classified as follows (taken from the article):

What is AI ethics, AI regulation, AI sustainability?

For the sake of simplicity, I have used the umbrella term “AI ethics and regulation”, and under this umbrella you find many topics. Below are seven key notions associated with AI ethics, regulation and sustainability.

Algorithmic Bias and Fairness. When an AI makes decisions and takes actions that reflect the implicit values of the humans who are involved in coding, collecting, selecting, or using data to train the algorithm.

AI Safety. An example here is adversarial attacks: neural networks can be fooled. How can we manage such vulnerabilities in AI?

AI Security. Hacking a self-driving car or a fleet of delivery drones poses a serious risk. Whole electricity grids and transport systems benefit from autonomous decision-making and optimisation, but they need to be secured at the same time. How can we secure AI systems?

AI Accountability. Who is accountable when an entire process is automated? For example, when a self-driving car is involved in an accident, who can be held accountable? Is it the manufacturer of the car, the government, the driver of the car, or the car itself?

AI Quality Standardisation. Can we ensure that AI behaves in the same way for all AI services and products?

AI Explainability. Can or should an AI be able to explain the exact reasons of its actions and decisions?

AI Transparency. Do we understand why an AI has taken certain actions and decisions? Should there be a requirement for automated decisions to be publicly available?

There are other related topics such as responsible AI, sustainable AI, and AI product liabilities.


Nicholas Carr, visiting professor at Williams College, has a great review of the book “The Age of Surveillance Capitalism” by Shoshana Zuboff:

Surveillance Capitalism is described as (emphasis mine):

“Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material.”

Read the article; it is great:

There is also a good Hacker News thread about the article with interesting comments: Thieves of Experience: How Google and Facebook Corrupted Capitalism | Hacker News


Thank you, borja. Enjoyed your video, and I have shared the link on MeWe. You should also read World Without Mind: The Existential Threat of Big Tech by Franklin Foer. Another brilliant book.


“your data — the abstract portrait of who you are, and, more importantly, of who you are compared to other people — is your real vulnerability”

“Infinite scroll at the very minimum wastes 200,000 human lifetimes per day.” — Aza Raskin (co-founder of the Center for Humane Technology and inventor of the infinite scroll)
