What if There Are No Conscious Decisions? Using Tech for Better Health & Wellness.

Neuroscience research suggests that decisions are made by the automatic mind that we share with our evolutionary ancestors. Our current ad-driven tech environment teams up with our automatic mind to overrule our conscious intentions and drive decisions that make us less healthy and happy. It’s two against one, and they’re winning. In this article I discuss an alternative strategy: how to team up our technology with our conscious mind to drive better decisions for health, wellbeing and happiness.

3 Likes

Hi Jesse,

Really liked your article!

It triggered some thinking which (me being a noob, the thoughts entirely unscientific and maybe nonsense) went like this (skip if you want to):


I could reverse your analogy and make “You’re Not the Boss of You” untrue. I have always wondered why, once the automatic mind takes over processing, this would suddenly mean that there is no longer free will at play. I see the conscious and automatic mind not as two separate entities but as a continuum where there is no clear boundary between one and the other.

Both the automatic and the conscious minds are models that are trained / reconfigured throughout life, and their cooperation is more or less seamless. There is an underlying topology / architecture that is fixed (compare: based on our DNA). If you go upwards from the basic level, adding layers of complexity, at a certain level consciousness emerges. In the upward direction the flexibility of the model increases, but this flexibility results from parametrization of the lower levels.

This is comparable to a computer architecture, where a developer programs in a high-level language (the conscious mind), which ultimately results in assembly code being executed (the automatic mind), which in turn has to fit into caches, registers, byte formats, etc. (determined by our DNA).
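As a toy illustration of that layering (only a sketch; the function `decide` and what it computes are made up for the example), Python even lets you peek at the hidden lower layer by disassembling a “high-level intention”:

```python
# A minimal sketch of the layering: the function body is the "conscious-level"
# intent; the bytecode printed below it is the lower layer its author never
# reasons about directly.
import dis

def decide(options):
    # the high-level statement of intent
    return max(options, key=len)

# Show the interpreter-level instructions this intent compiles down to.
dis.dis(decide)
```

The developer wrote one line of intent; the stack-and-register level it actually runs on was never part of the conscious decision.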

But with regard to your analogy of the Employee (junior analyst, the conscious mind) and the Boss (the automatic mind), I’d take the analogy of a software company.

In my analogy the Boss is the conscious mind. The Workfloor with many Employees is the automatic mind. Over the lifetime of the company the Boss has shaped it to fit the environment in which it operates, in a way that makes it likely to be successful. This involved tweaking the organizational structure, fostering the proper culture on the workfloor, and shaping how employees interact with partners and customers.

The Boss operates by analysing the environment and basing business strategies upon that. Suppose one strategy is “We need to develop our own software product” (intention). After the Boss formulates the strategy, she puts it into action. With the strategy now formal, the entire workfloor springs into action. The Boss has delegated responsibility. Was a decision made? Let’s suppose it is still an intention. With the workfloor now in action, did the Boss give up free will? I argue not.

The conscious mind does not need to know about the intricate complexity of the automatic mind. Does the Boss have to know the hashing algorithm used for a password? No. Does she need to know about the Continuous Integration pipeline being configured to release the product? No. But the CI pipeline exists because she made an earlier ‘decision’ / formulated a strategy that determined the company needed to have one. The Boss (conscious mind) does not want to be overwhelmed with implementation details.

After a while the development team has finished the code, and the Release Manager at the end of the CI pipeline shouts “Eureka, we have created our own software product” (the automatic mind decision as it lights up in the MRI scanner). About half a second later the Boss receives confirmation and immediately presses ‘send’ on the Press Release email titled “We have launched our own software product” (the conscious mind decision that comes later).


That’s my naive argument, not bothered by scientific facts, which gives me the comfortable feeling that I’m the boss of me :slight_smile:

Turn the tables?

I haven’t given this much thought, tbh, and I’m a bit tired and not at my best while typing this, but I wonder whether, while not scientifically accurate, it would still be useful to turn the analogy around, as described above.

In your example there would be a very unpredictable Boss, driven by unpredictable moods, emotions, hidden instincts, etc. Analysing how, with such a Boss, you’d still end up with a successful company (compare: a well-balanced person) leaves a much smaller body of knowledge available to you.

Whereas when you turn the tables and switch the analogy, you’d have a rational Boss who is struggling to steer a growing company with a diverse, complex workforce through the business environment (life). The entire body of knowledge of business development is at your disposal.

Moods and emotions have an analogy in company culture (productivity loss if the culture is too festive), and instincts in work habits and personality traits (e.g. the tendency to procrastinate if not properly engaged).

The technology angle

Imho the philosophical first part of the article (which I found very interesting) is only lightly drawn upon in the part that addresses technological solutions.

We Need To Build An Ally

Agreed. You state “The current crop of behavior change strategies are not working well enough.” I would like to know more about what these strategies are, before diving into why they do not work.

Our good intentions are getting swamped by 11 hours a day of counter-messaging from ad-supported media.

Translation to my analogy: Slack was hastily introduced on the workfloor, and now everyone is getting distracted by continuous notifications of cute cat pictures the employees share amongst themselves. Productivity (attention, habit-forming) suffers. I’d broaden the problem space to much more than advertising-driven technology. Or rather, identify the underlying forces that influence our automatic mind in ways that hamper our decision-making.

The principles you provide, summarized:

  • You should be the customer, not the product (you should be in control, not controlled)
  • You should be shielded from distraction

These are very high-level, generic statements.

What we need is technology that teams up with our conscious goals and intentions to help us make the right decisions.

Decision-support software for a rational boss?

Summary

A fascinating subject area you’ve touched on. It would be interesting to see if we can deepen this, and make it more specific and actionable.

Thanks so much for your thoughts and reply! It shows the support and feedback this community is able to provide. I agree that most people believe their conscious mind is the boss in the relationship. The problem for those of us in behavior change / health & wellness is that we can get people to consciously intend changes… and then not follow through. It gets very convoluted to say they decided one thing in our coaching meeting, then decided something else when they walked out the door.

It reminds me of pre-Copernican astronomy: if you assume the Earth is at the center, you come up with very complex calculations to explain why Mercury goes backwards at times. With the brain studies (look under neuroscience of free will on Wikipedia) I feel we’re at a Copernican moment where the conscious mind is no longer the center of the universe, and then you look around and everything gets simpler to understand. I don’t expect everyone to agree on the first try, but I’m developing my thinking by sharing with all of you.

1 Like

I will be publishing a story called “One Change to Fix Everything That’s Wrong with Tech” later this month. In that story I spend a lot more time explaining the proposed solution. I look forward to more discussion when I’m finished with the story and can post it here!

1 Like

Hello,

The article is a good read. Coincidentally, I am doing my final year project at university researching how to design AI to trigger the reflective mind, which is conceptually similar to the conscious mind mentioned in your article.

It’s indeed a fascinating area to look at. Your article reminds me of dual-process theories (which talk about two kinds of reasoning/thinking) and the book Thinking, Fast and Slow. A paper I would recommend is Dual-Process Theories of Higher Cognition: Advancing the Debate. There’s still debate going on about how humans reason exactly, but scholars Jonathan Evans and Keith Stanovich have given a comprehensive account of the theory’s current state.

They use Type 1 processing to refer to the autonomous mind, whose defining characteristic is being autonomous and not requiring working memory. Its correlated features are thus that it’s fast, intuitive, and contextualised. In contrast, Type 2 processing requires much more working memory, and people call it slow, conscious and controlled thinking. Interestingly, Keith Stanovich further divides Type 2 thinking into two levels: one is the algorithmic level, which is related to a person’s cognitive ability (i.e., IQ); the other is the reflective level, which is related to a person’s thinking dispositions, personal values and goals.

I agree with your point that many technologies are designed to exploit our autonomous mind’s limited capabilities. A lot of biases and logical fallacies find their roots in the autonomous mind, and the concept of “nudge” is a good example of how dark patterns, online ads, social media, etc. trick us into getting instant gratification with the least effort. For example, just recall how automatically we click “I agree” when we browse a new website… The digital environment we are in is leading us down a slippery slope toward what the authors of Re-Engineering Humanity call “Engineered Determinism”, where we are deprived of our practical agency without even being aware of it.

…I feel we’re at a Copernican moment where the conscious mind is no longer the center of the universe, and then you look around and everything gets simpler to understand

I think you make a really good point here. One thing that kind of bothers me is that humans are lazy in many situations (because the autonomous mind dominates, and even though the reflective mind can override it, it’s influenced by input from the autonomous mind that often tends to be biased), and we are not good at engaging in long-term thinking. So we keep getting fed cheap bliss and accept the convenience AI has brought us. There are many angles we can talk about in terms of solutions to the problem, and the one I am looking into right now is boosts: interventions that, unlike nudges, aim to establish and foster capabilities that can persist.

Looking forward to your further discussion!

1 Like

I’m not sure about the focus on decision making in order to promote health and wellness. People don’t necessarily stick with what they decide and end up changing their behavior along the way if the behavior is not well designed. In other words, one may decide to quit smoking or start eating healthier food, but that alone won’t be enough to accomplish these goals.

I believe the focus should be on behavior, not decisions. BJ Fogg has a clear behavior design model one could use to create technologies that promote well-being.
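To make that concrete, here is a minimal sketch of the well-known B = MAP idea behind Fogg’s model (a behavior happens when Motivation, Ability and a Prompt converge at the same moment). This is my own toy formulation, not Fogg’s code; the `Moment` type, the threshold and the numbers are invented purely for illustration.

```python
# Toy sketch of B = MAP: a behavior occurs when a prompt arrives while
# motivation and ability are jointly high enough. Threshold and scoring
# are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Moment:
    motivation: float   # 0.0 .. 1.0: how much the person wants to act right now
    ability: float      # 0.0 .. 1.0: how easy the behavior is at this moment
    prompted: bool      # did a cue / reminder / trigger occur?

def behavior_occurs(m: Moment, activation_threshold: float = 0.5) -> bool:
    if not m.prompted:
        return False                      # no prompt, no behavior
    # If motivation and ability together clear the (assumed) threshold,
    # the prompt succeeds; otherwise it is ignored or just annoying.
    return m.motivation * m.ability >= activation_threshold

# e.g. a reminder to floss right after brushing (very easy, so high ability)
# can work even when motivation is only moderate:
print(behavior_occurs(Moment(motivation=0.6, ability=0.9, prompted=True)))  # True
```

Seen this way, well-being technology is less about pushing motivation up (which, as noted below, is biologically limited) and more about raising ability by making the behavior tiny and delivering the prompt at the right moment.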

One thing that holds us back from persisting with positive behaviors is that our motivation is biologically limited. “Celebration” is also an important behavior design tool. In this sense, @Saiyu’s work of using AI to give people external boosts they’re not able to generate themselves sounds promising.

1 Like