Human at the Heart - Humane Machine Learning

I think a humane approach to using machine learning to solve business problems is to start with the people already working on the problem. Focus on front-line staff: ask what they feel are their most valuable and meaningful contributions and what gets in their way, then build tools to support them in those tasks. This could be called Human at the Heart, a play on the existing term Human in the Loop.

Human in the Loop describes an approach to applying machine learning algorithms that considers how human input should be included in the algorithm. This applies in domains like self-driving cars, where how a person can influence and override the algorithm is a natural part of the problem, but I am referring to business more generally. As an example, consider content moderation at an image hosting company. A standard approach would be to use a large number of previous examples to train an algorithm to detect certain classes of images. Once the accuracy is sufficient, the algorithm would be phased in to replace some manual process, perhaps at first with human verification of the results but eventually to replace staff. A Human in the Loop approach might be built to learn from the person checking images, and that person may be asked to describe their decisions in a way that makes sense to the algorithm. In both cases, however, the goal is eventually to remove most or all people from the process.
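For concreteness, here is a minimal sketch of how that kind of pipeline is often wired up: confidence-thresholded routing, with the reviewer’s decisions fed back as training data. All names, thresholds, and the stub classifier are illustrative assumptions, not any particular company’s system.

```python
import random
from dataclasses import dataclass

@dataclass
class ModerationResult:
    image_id: str
    score: float      # model's estimated probability of a policy violation
    decision: str     # "removed", "approved", or "needs_review"

def classify(image_id: str) -> float:
    """Stand-in for the trained image classifier; a real system would run
    a model here. Deterministic pseudo-score, for the sketch only."""
    return random.Random(image_id).random()

def route(image_id: str,
          remove_above: float = 0.95,
          approve_below: float = 0.05) -> ModerationResult:
    """Confidence-thresholded routing: the model acts alone only when it
    is very sure; everything ambiguous goes to a human reviewer."""
    score = classify(image_id)
    if score >= remove_above:
        decision = "removed"
    elif score <= approve_below:
        decision = "approved"
    else:
        decision = "needs_review"   # the human-in-the-loop middle ground
    return ModerationResult(image_id, score, decision)

# Human decisions on the ambiguous cases become labelled examples, which
# is how such a system "learns from the person checking images".
training_examples: list[tuple[str, bool]] = []

def record_human_decision(result: ModerationResult, violates: bool) -> None:
    training_examples.append((result.image_id, violates))
```

Note how the “eventually replace staff” endpoint falls out of the same design: it is just the two thresholds drifting toward 0.5 until the human review queue is empty.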

A Human at the Heart approach would start by asking the people involved in moderation when they feel most productive. They may say the most valuable work they do is where they need to understand the context and community in which a particular image was posted, so the focus would be on building tools that provide that information.

The problem with this idea is that it wouldn’t sell. The types of problems where use of machine learning is considered are generally ones that are well suited to automation. Technology is getting to the point where it can help individuals in complex interactions at scale, but business drivers tend toward reducing cost and headcount. To make this work, it would be necessary to focus less on standardisation and improving existing business models and more on incentive design and defining acceptable behaviour for both people and algorithms. I believe that organisations that get this right could benefit greatly from the creativity of people, which I think will exceed that of computers for a long time to come.

So what do you think so far? Any suggested reading material, or people already thinking along these lines? Any risks or pitfalls? One I see already, from the example above: many would prefer imperfect computer moderation to having individuals police their content.

2 Likes

A lot of what can be automated should be, IMO. That said, I believe we humans have thus far been poor at this, defining what could be automated far too broadly. If it’s routine, repeatable, and follows the classic ML model of extrapolating along a curve, great.

But on the other hand, code becomes law. If there isn’t some form of human monitoring or intervention, we become slaves to the machine. There is probably some scope for applying AI to make other AIs more explainable, and we could apply statistical methods to root out biases in automated decision-making. But a major problem with ML is that it’s all based on hindsight. Is it ethical to let it harmfully manipulate lives in a biased way as long as you can detect, and hopefully correct, it later down the line?
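As one concrete shape those “statistical methods” could take, here is a minimal sketch of a disparate-impact check (the four-fifths rule used in US employment-discrimination analysis). The log data and the 0.8 threshold are illustrative assumptions; a real fairness audit would be far more involved.

```python
from collections import defaultdict

# Hypothetical audit log of an automated decision system:
# (group, approved) pairs. Illustrative data only.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(log):
    """Approval rate per group from a log of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in log:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)

# Disparate impact: ratio of the lowest group's approval rate to the
# highest. The "four-fifths rule" flags ratios below 0.8 for scrutiny.
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio, "flag for review" if ratio < 0.8 else "ok")
```

Even this toy check only works in hindsight, on decisions already made, which is exactly the ethical problem raised above.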

There are too many human things we are outsourcing to machines. Brené Brown rightly jokes that if empathy were a desired skill, there would be people in Silicon Valley building machines to automate it for us. Which is totally beside the point, of course. We have machines now telling us when to walk more, what to watch and read, when we’re sad, etc., completely outsourcing the most personal of human decisions to machines and turning us into the dependent, self-ignorant grey goo of capitalism.

There are also major limits to what AI/ML can effectively do. When Mark Zuckerberg spoke to Congress about how AI was going to be the solution to content moderation, it reflected how he completely confused the means of scaling a solution with the solution itself. So for the content moderation piece, which your post seemed most focused on, I don’t think you can simply outsource ethics to a machine. Nor should we, even if I think there are many techno-utopians who believe society would be better, more efficient, and cleaner if we just got rid of all the messy humans.

Time and time again, technology – and technological solutionism as a whole – has proven problematic at best and disastrous at worst when applied as a solution to social problems. Hence, in the content moderation space, I see AI/ML-assisted humans as our best bet. Call it Human at the Heart or Man in the Middle; a past-facing AI/ML will always lack the foresight to deal with new human issues and new social interpretations as they emerge.

1 Like

The first question that comes to my mind is: what do we want to achieve with machine learning?

“The types of problems where use of machine learning is considered are generally ones that are well suited to automation.”

The automation we are used to has, as I observe, undergone two main stages. As the industrial revolution and the invention of electricity came along, humans started to use machines to automate physical activities, such as manufacturing products on factory lines. Then, as the computing age unfolded and the Internet boomed, humans started to use machines to automate intellectual activities, such as generating reports and performing calculations over enormous amounts of data. Right now, as machine learning advances, humans are starting to use machines, or more specifically algorithms, to automate mental activities that are far more complicated and directly tied to consciousness and the workings of the human brain, which we haven’t yet come to fully understand.

Greg has made many good points here. This transition supports what Greg has said, and also what the historian Yuval Noah Harari argues in his book 21 Lessons for the 21st Century: that we humans are outsourcing more of ourselves to machines. I recently finished the book, in which Yuval dedicates many pages to his views on machine learning; I find them very insightful and would like to share some here. The central point is that we shouldn’t outsource our minds to algorithms, as there are huge differences between intelligence and consciousness, and it would be too dangerous to cross that line.

“Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love, and anger. We tend to confuse the two because in humans and other mammals intelligence goes hand in hand with consciousness. Mammals solve most problems by feeling things. Computers, however, solve problems in a very different way.”

“We are now creating tame humans that produce enormous amounts of data and function as very efficient chips in a huge data-processing mechanism, but these data-cows hardly maximize the human potential. Indeed, we have no idea what our full human potential is, because we know so little about the human mind. And yet we don’t invest much in exploring the human mind, instead focusing on increasing the speed of our internet connections and the efficiency of our Big Data algorithms. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.”

“The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.”

The issue is indeed complicated. Let’s continue with content moderation as an example. On one hand, I see the challenge of computers effectively learning from the person checking images, as every individual is unique in their cultural, social, and economic background, as well as their ethical standards. On the other hand, I agree that human-computer cooperation can be extremely powerful if we combine the human factors (empathy, the mind, etc.) with the hyper-fast processing speed and nearly unlimited data storage of computers. Imagine computers using all the available data to provide context for the content moderator that is simply too broad for a single person to gather, with the moderator making more sensible decisions supported by that analysis (see the sketch below). This reminds me of the book Principles by Ray Dalio, in which he describes the computer system Bridgewater has used for decades to make investment decisions. The system stores comprehensive historical data on market changes and previous investment decisions, and simulates the potential results of hypothetical investments with built-in statistical models. Ray attributes part of the firm’s success to this human-computer cooperation, and it is worth noting that humans remain the final decision-makers.
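To make that “context for the moderator” idea concrete, here is a minimal sketch of the decision-support pattern: the machine aggregates signals it is good at collecting, and the person keeps the final call. Every function, field, and value here is a hypothetical stand-in, not a real moderation API.

```python
from dataclasses import dataclass

@dataclass
class ModerationContext:
    """Everything the machine can gather quickly; the human decides."""
    image_id: str
    community: str                  # where the image was posted
    community_norms: str            # short summary of that community's rules
    poster_history: str             # e.g. prior violations, account age
    similar_past_cases: list[str]   # earlier, human-decided cases

def gather_context(image_id: str) -> ModerationContext:
    """Hypothetical aggregation step: a real system would query internal
    services here; hard-coded stub values keep the sketch runnable."""
    return ModerationContext(
        image_id=image_id,
        community="example-forum",
        community_norms="Satire allowed; targeted harassment is not.",
        poster_history="no prior violations in 3 years",
        similar_past_cases=["case-1041 (kept)", "case-0988 (removed)"],
    )

def review(image_id: str) -> str:
    """The machine presents context; the moderator makes the decision."""
    ctx = gather_context(image_id)
    print(f"Image {ctx.image_id} posted in {ctx.community}")
    print(f"Norms: {ctx.community_norms}")
    print(f"Poster: {ctx.poster_history}")
    print("Similar past cases:", ", ".join(ctx.similar_past_cases))
    return input("keep/remove? ")   # the final call stays with the human
```

The design choice is the inverse of the thresholded pipeline earlier in the thread: instead of the human handling the machine’s leftovers, the machine does the legwork for the human’s judgement.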

Regarding content moderation, I previously read an article describing the mental health issues content moderators deal with in their jobs, and how many of them work under inhumane conditions as contractors. Machine learning has huge potential and I am optimistic about its future, but I do worry about what may happen if we develop this technology too fast without putting humans at the centre of every discussion.

3 Likes

I definitely agree we have been bad at deciding what should be automated. One of my main interests at the moment is in how these decisions get made.

Unrecognised bias is one of the big risks of automated decision making, and human monitoring only really helps if the decision is structured in a way that allows the people monitoring it to properly understand it. I’m often doubtful of legislation proposed to mitigate risks like this, but I think this is an area where businesses should be required to demonstrate that they understand their decision-making processes and can prove that they are not discriminating unfairly.

I think content moderation came to mind as an example because I had just listened to the CHT podcast with Guillaume talking about YouTube’s recommendation algorithm.

Yes, machine learning is being applied to automate mental activities in the same way that machinery has been used to automate physical activities since the industrial revolution. Our organisational structures and how we make decisions are often based on industrial patterns. I think that appropriate use of the technology we have today could support huge diversity in how people make a living, but I don’t think the incentives and structures of business today will lead in that direction.

I found Yuval’s writing valuable too; it has been a big influence on my thinking about this. It can be risky making statements about consciousness, since it means so many different things to different people, but I think what you have quoted here makes a lot of sense. I’ve also recently read Jaron Lanier’s “You Are Not a Gadget”, which argues strongly against the tendency to treat people as data processors.

I’m not entirely happy with content moderation as an example scenario, but in a way it is informative because it shows how little thought current businesses give to the people actually involved in the process today.

1 Like