Critical Discussion of Tristan Harris' Event

Respected people have criticised the Center for Humane Technology’s “A New Agenda for Tech” event, notably Rumman Chowdhury on Twitter:

I agree with the first two points, but not the last two.

I’m really tired of super privileged rich people in the most expensive city in the entire world telling us how they are going to solve the problems of … inequality and manipulation. Oh wait they forgot to mention inequality. Oops. To me this is a rich club. Also by the way the $1000 a head conference is coming next year. Millionaires only of course.

Video of the presentation:

Yes, there is something to be said in that regard. But it is also harsh, easy criticism that is itself biased and comes across as little more than anger.

On bias

If you look at the new landing page, there is not a single word that is somehow not inclusive. It is directed at humanity in its entirety. The page provides a generalized, high-level summary of the problems, as a landing page should.

On the problem page there could be a mention of ‘bias’ (though it is implicitly present in ‘social isolation’ and ‘polarisation’). But algorithmic bias is a technical phenomenon, and exists at a more detailed level that need not be covered in a high-level overview. In the presentation Tristan mentioned “Overwhelming AI” and voodoo puppets of ourselves as among the big problems. Algorithmic bias is included in that.

In the worksheet that is now online there is mention of Group Dynamics:

We are inhibited when Excluded, divided and mobilized through fear.

Inhibiting factors:

  • Suppressing views and nuance
  • Enabling ad hominem or hate speech
  • Enabling viral outrage
  • Lack of agreed-upon norms

There is no need to explicitly mention bias in this.

On using tech as the solution

So Rumman says:

"HEY GUYS so the answer to these problems is a (I kid you not) “full stack approach to human centered design. Bc engineering frameworks is what we need right now.”

The question is: ‘What is CHT all about?’, and then second: ‘How do you address an audience of high-level tech representatives that are dominating the world, and convince them to change their ways?’

We have many, many groups out there that try to improve how tech affects us. CHT is one of them, and we - HTC - are another. We can ditch the big tech players and go ‘Small Tech’. With - in a couple of years - 5 billion people in the grip of Big Tech, will that be the solution? No! But it is at least a small part of the solution. For the foreseeable future only a fraction of people will go ‘small tech’, and these are mostly activist, more technical kinds of people. That is good, because they have a relatively louder voice to influence the Big Tech world.

But Big Tech has already won dominance. They will continue to grow. Creating an alternative tech world, without others also tackling tech monopolies and the products and services they create, is doomed to fail.

Here we come to CHT. The CHT is a strategic, top-down think tank. They have scoped themselves to this. So yes, they focus on making the tech better for us. And they have the high-value network to do so - to whisper in some Big Tech hotshot’s ear, and to make Big Tech boardrooms see the need for change. Of course they should use that network.

Tech itself, and engaging employees, is one strategic pillar of CHT. Should Tristan at that conference say: “Hi guys… we are going to regulate the hell out of you, and break your businesses apart”? No, of course not. But the second strategic pillar of CHT is to organize political pressure. They will do that for sure. How do you engage in politics in the most efficient way? By being really diplomatic. By being a strategic chess player on an intricate and complex board.

That is where CHT is, and I find that a good and logical positioning for them.

On privilege

Re: “Mostly with middle class kids”… Yes, there could be more diversity in the team, but at least the core team is five men and four women. Their privilege in terms of wealth stems from the fact that they are indeed insiders in the Silicon Valley scene, and that scene lacks diversity too. This harks back to the previous point: it makes them uniquely positioned to effect change from the inside. Most of Big Tech is still SV-based, after all. That they are rich and still working for the good cause, instead of retiring and taking it easy, is… noble?

Concluding: I don’t know Ms. Chowdhury, but I find these tweets easy rants. Imho she fell into the Twitter trap of all-too-easy communication. This is the negativity we try to fight with better solutions. We can all be part of the solution, each taking our own niche from where we operate. Cooperative spirits will prevail.

On Humane Tech Community position

The good thing is that - in comparison to CHT - we come from the other side: grassroots, bottom-up, and we forego the tricky diplomacy and politics that the CHT has to deal with. We work in the open, we lead in the open, and we operate with the fullest inclusion and diversity.

To that end - and for others reading - here is our position statement: Humane Tech Community Open Canvas


@Free, I’m sorry to hear you are feeling tired and frustrated.
@aschrijver, good points.

To me, critical discussion means two things: finding both what works and what doesn’t work. As interesting as it would be to write about the ideas contained in the tweets linked in the original post, I’m going to keep my contribution to this discussion focused on the event itself.

General Notes

  • I, for one, was glad to see something visible from the CHT. And I was moved by Aza Raskin’s intro, especially the part about cellos and pianos.

  • Every non-profit has to work its donors. Comes with the territory.

  • An hour is a short time to characterize a hard-to-define problem and chart a path towards a solution.

  • I thought the discussion about how tech has surpassed human weaknesses (but not human strengths) was spot on.

Re: Language and Metaphors

  • “Human Downgrading” is a tech-centric way of putting it. I suspect Harris et al. did so because they are trying to reach people in tech. The more I think about it, the more I like this idea, even if it didn’t sound 100% natural initially.

  • I thought that the “Voodoo dolls” bit was a great metaphor; hell, I wish I had thought of that myself. Same with the phrase “Race to the bottom of the brainstem.” Tristan Harris and the CHT et al. have coined several great terms / pushed several concepts into consensus reality. I’d love to see more of this vocabulary creation.

  • My experience suggests that saying the voodoo dolls can perfectly predict everything we are going to do or think is an overstatement – the models aren’t quite that sophisticated. One reason it appears your phone is listening to you is that your friends search for things before you do.

  • The new tagline for the website – Our mission is to reverse human downgrading by inspiring a new race to the top and realigning technology with humanity – reads as a bit too abstract to me. I imagine getting more feedback on this phrase from their target audience would cause them to refine it a bit. (There’s a joke about a lack of A/B testing somewhere in here.) That being said, I heartily agree with the ideas behind the words, and I imagine most of us feel the same.

  • “Fullstack approach to human-centered design” is also tech-oriented language. On one hand, I don’t know that this phrase works as well as the others I’ve just mentioned. On the other, at its best, I think this concept is trying to point to the idea that none of us have great mental models for a) the amount of variation in decently large groups of human beings, and b) we need more empathy, not less.

Re: What was and wasn’t there

  • The solution the CHT presented is one of self-regulation. The legal policy / rely on the government approach likely occurs to most everyone who is interested in systemic solutions. I’m sure it has occurred to the CHT staff. Thus I imagine they see their goal as less of “solving tech, once and for all” and more of “bring tech people along such that tech solves itself.”

  • I would have liked to have seen a deeper discussion of the “new fiduciary” Tristan touched on at the end of his talk, especially in light of the recent discussions that we’ve developed here on this forum.

  • Then again, I imagine that all of us on this forum would have liked to have seen more discussion of the action steps and solutions the CHT is taking… but at the same time, the reason we participate on this forum, I imagine, is because we are already convinced of the problem.

  • I think that the design guidelines worksheet isn’t specific enough. I’ve written before about several specific things that could fit naturally in a set of ‘humane design’ guidelines. For example, perhaps the biggest one – and one way to solve the Youtube problem that Tristan / the CHT highlighted – would be to allow users manual control over the quality of their recommendations. Every recommendation system knows how “addictive” each item will be, and everyone would win if users were able to set how relevant they wanted the next hour of recommendations to be.

  • Perhaps the CHT would benefit from clarifying that by “design” they mean something like “the total conception of an app, website, or business” rather than “how a piece of digital technology looks and is interacted with.” AI, ML Bias, incentives, recommender systems, etc. are all included in the former but not the latter.
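The “manual control over recommendation quality” idea from the design-guidelines bullet above can be sketched concretely. This is a hypothetical illustration, not any platform’s real system: the item names, the two scores, and the blending formula are all assumptions (real recommenders use far richer signals), but it shows how a single user-set dial could trade predicted “addictiveness” against stated relevance.

```python
# Hypothetical sketch: each candidate item carries two assumed scores --
# how engaging ("addictive") the system predicts it will be, and how
# relevant it is to the user's stated interests. A user-controlled dial
# in [0, 1] decides which score dominates the ranking.

from dataclasses import dataclass


@dataclass
class Candidate:
    title: str
    predicted_engagement: float  # 0..1, predicted "addictiveness"
    topical_relevance: float     # 0..1, match to stated interests


def rank(candidates, relevance_dial):
    """relevance_dial = 0.0 -> pure engagement feed;
    relevance_dial = 1.0 -> pure relevance feed."""
    def score(c):
        # Linear blend of the two scores, weighted by the user's dial.
        return (relevance_dial * c.topical_relevance
                + (1 - relevance_dial) * c.predicted_engagement)
    return sorted(candidates, key=score, reverse=True)


feed = [
    Candidate("Outrage clip", 0.95, 0.10),
    Candidate("Lecture on a topic you follow", 0.40, 0.90),
    Candidate("Celebrity gossip", 0.80, 0.05),
]

# With the dial at 1.0 the lecture ranks first; at 0.0 the outrage clip does.
print([c.title for c in rank(feed, relevance_dial=1.0)])
```

The point of the sketch is that the ranking signal already exists inside these systems; exposing one extra parameter to the user is a design decision, not a technical obstacle.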


This… is a great set of points. Really good analysis. Thank you, Evan!


Yes, I find this to be overstating an extended, complex reality. Who we are shifts over time and in response to many things, much like a plant responding to light.

I agree with @aschrijver that your analysis is really good; thank you for sharing it and for thinking deeply and well.


The Center for Humane Technology has chosen to focus on high-level political and business leaders. And that’s fine, we need someone to do that very important work even if it means that Tristan’s schedule is full of $1000 conferences that only millionaires and employees of billionaire companies attend. It is the world of the 0.1%, 0.01% and 0.001%, people who are very out of touch with reality, but also make the decisions in big bad tech.

What’s missing? All the other people of the planet who apparently are not the core target audience and also not being engaged by the Center for Humane Tech. That is a big omission, almost all of the Center’s potential audience is being unserved. Ordinary people are very concerned about the harms of tech. And many minds from the humanities to tech, from business to government are interested in solving the problem. Yet all these ready and willing people are not really addressed or given ways to take any action whatsoever.

Maybe that is intentional: the Center does not want to infuriate the rich tech vampires of the world by instigating revolution. And the Center itself wants money too; after all, look where they are and who they are, coming from the highest levels of privilege and wealth themselves.

The Center does not participate in this very community here, this forum. Where is Tristan in his own forum? He has posted maybe once. So this forum has split itself off into a separate organisation, the Humane Tech Community.

Tristan rose to fame with an energizing TED talk aimed at us, the people of the world. Since then his focus has shifted towards the super-wealthy and powerful.

However while some change may come from the 0.1%, I expect almost all change to happen from the 99.9%.


@hmswaffles articulate and clear as usual.

Here are my notes, likely long winded as usual :confused:

I loved the personal intro by Aza.

I’m concerned that introducing the meditation session may turn some people off the real message and indeed instigate backlash, perhaps even putting the movement in the “California hot tubber” category. Many will use that to establish the “flakiness” of the movement, and it could wind up as a scene straight out of the HBO show “Silicon Valley”.

I say that as a California hot tubber who also meditates.

Human downgrading

  • I would prefer calling this the amplification of our reactive natures. “Downgrading” implies a golden age of humanity prior to computers, but humanity has always been downgraded and suppressed. There is probably no other time in history when humanity has had so much opportunity to “upgrade” our potential, so it would have been nice to see that represented.

  • Technology has amplified the best and worst of us, and I think we’re just seeing human nature as it is because of it, and we don’t like what we see or what it says about us.

Our reactive natures have been exploited before technology came into play.

Are humans really this evil/bad/horrible? Is this what social media is showing us?

His answer to his own question was no. I admit I was a bit disappointed by that. Was he suggesting that it was social media that created the urge for genocide?

  • I wish his answer had been “yes”. Our reactive natures can definitely be that bad, and have always been this bad if not far worse; we’ve just never had the tools to amplify them as we do now. Love it or hate it, technology and social media are forcing us to reflect on ourselves, and my experience has been that most of us do not like to do that. We tend to just want to make the “other” wrong, and not look at the unintended consequences of our own individual choices. Tech is amplifying that too.

  • Everyone cheered social media in the early days when it was exposing corrupt governments, companies, “them”, etc. Yet we have to accept that social media has also exposed how crappy we can be to each other. Social media is bringing light to society more now than ever before; the problem is we don’t like what it is showing us about ourselves and our society.

  • The influence of Wall Street and Finance on the evolution of technology is what is also being revealed by the unintended consequences of technology, but was missing from this talk.

  • Facebook just posted HUGE profits this quarter. With all the mass awareness of this problem, none of it has changed Facebook’s profit potential. (It probably made their advertising engine more attractive to advertisers, making it more seductive.)

We have to create a better business model that can outcompete this problem by creating a more efficient ecosystem that will make the third party aggregator obsolete.

There needs to be a revolution against the tech giants, and that revolution can only come in the form of better business models and better technology, no?

Attention Extraction, Extraction Incentives

I’m not sure why Tristan is discouraging the “free” model, because the only other option is either a paid subscription model (which cuts off the internet to the poor) or a government utility that is paid for by tax dollars (which turns the internet into PBS).

  • These services he is criticizing for their advertising models are not traditional publishers; they are aggregators of content (Facebook, Twitter, etc.).

  • Meanwhile, the real, genuine publishing world, which includes news and education, is suffering in the “attention economy” that Tristan is criticizing, and it is the publishers of content that could face extinction; yet this was not mentioned. If you remove the attention economy, you lose news, sports broadcasts, niche bloggers, etc. What about them?

  • “Throwing pennies” to people for their data is meaningless when their attention has already been devalued at scale by the platforms, not by the publishers of genuine content. I think this was a poor phrase for him to use.

  • I would have liked to have seen Tristan show a bigger understanding of what the “attention economy” means outside of the social media platform aggregators.

  • The free model just doesn’t have adoption, it has complete and utter saturation. I believe we need to work within this framework if we are to address this as an urgent problem. It is too late to put the toothpaste back in the tube.

I believe that if we are truly going to make a change here, we have to challenge the big platforms, who are just aggregators of content, and offer an ecosystem that genuine content publishers can use without relying on those platforms.

  • I enjoyed his phraseology, “attention extraction”. He’s right: in this environment, we are truly being robbed of our natural value. However, the model Tristan employs is that of a passive user, a victim, a “voodoo doll” that can have pins stuck in it without having any power.

  • I believe 100% that the solution to this problem is turning the passive user into an “active user” and creating a true “attention exchange” that is owned and shared as the natural wealth of all citizens.

Attention is our natural currency. It is something that we own, and that others want. This is what the numbers and the data show; it is not controversial to even suggest this.

Tristan danced around that, yet didn’t seem to acknowledge the face-value reality of what he was saying: attention is hijacked because attention is valuable.

Attention is the gas that runs the entire engine. If attention is focused and not distracted, it is more valuable.

Want to talk about an upgrade? How about the upgrade to humanity where all people own the inherent value of their attention and have a stake in the ecosystem that requires it to survive?

Tristan declared an urgency to this problem (which I agree with), but what solutions did he offer?

What’s the solution? Ban Straws, blockchain, what are we gonna do?

I’m not sure why he shrugged off the blockchain community. While there are loads of issues there too, at least there is lots of innovation and a genuine intention to create better systems. Although I don’t believe blockchain can be applied to the attention economy, it can be applied to the startup funding economy - a way for startups to work outside of the influence of VCs and IPOs - so that would be a good thing to highlight.

He mentioned, vaguely, other “solutions” that people have thrown out there, but he didn’t really say “why” they won’t work, only that there is urgency so we need a solution to emerge now. To me, this seems like a conversation that may have been had in a “bubble” within Silicon Valley.


Yes, give internet users the “agency” to solve this problem, the audience is the solution to the problem. The third party has removed human agency.


Tristan mentioned the urgency, yet I had a hard time seeing what he was offering as an immediate solution rather than an abstraction, with only the hope of influencing others to make some change.

But I dig the abstractions. I dig that he is really encouraging social gamification. Yet these ideas need business models to launch… what are they?

Shared Language

This was wonderful to hear from him. Although I’ve spent most of my time on this forum focusing on the problems with ad tech, online consensus building is my true passion and creating a shared language is the key to all consensus building from my experience.

Collaborating on building a shared narrative, and distilling it down to users creating a shared language and set of truth values, is I believe the next wave of social media innovation. But the problem Tristan missed is: how do these new innovative platforms get funded? How will they get adopted without funding?

If there is urgency, what is the plan of action?

I understand he was giving the talk to all the tech CEOs, but is he expecting them to abandon their shareholders and switch to subscription models?

Okay, Ima go jump in my hot tub now :slight_smile:


Hey guys,
I was surprised by the low number of live viewers. It was around 10-15 viewers half an hour after the beginning?!

Great work on the language and metaphors!
The meditation part was kind of misplaced.

I look forward to seeing the design guidelines!

After all I am glad to see some news about CHT!


Yes, this was what I also noticed. I don’t know if those counts were measured from distributed video servers, or they were the aggregated total count.

The meditation was okay for that event, but less fitting in the concept of the Livestream, where all watchers are waiting for the action.

I think that on the whole the event could have done with a better PR campaign beforehand. We on the Community Team knew a week upfront that an event was upcoming, but weren’t informed of the details. All CHT social media channels were silent right up until the day before, when some Twitter messages were posted.


I support the whole message of Tristan Harris, and the form and words of his presentation too. The meditation phases were also important to follow and practice, in order to feel the new kind of agency over ourselves that we need to aspire to.

The lack of technology advice is due to the fact that technology comes after; before it comes a common, shared understanding of the problem, a vocabulary, and knowledge of how we are and what brought us to this point.

I don’t get the objection to him speaking to rich people. If they are rich, it means they got rich developing technology apps/services that go against humane technology; otherwise they would not be rich today, considering how the market has been till now (or, even worse, a year ago). So if even a few of them change their way of thinking and realize how much money they can now make with humane technology if this thing grows bigger, it’s already a great success.


@NSaikiwiki, wonderful points, as always. I don’t know if “California Hot Tubber” is a recognized phrase, but speaking as one myself, I’m going to start using it. And nice line about toothpaste, too.

Your point about the meditation session goes back to the question a lot of us have touched on: who is the intended audience? As much as all of us may discuss this on the forum (i.e. Is it / should it be the 99%? The 1%? Tech CSuite executives? People who have not yet done things like apply to YCombinator? People running for political office? etc.), we don’t really know.

Maybe it’s just me, but it’d be cool to see the CHT release design mocks / design case-studies for how Youtube, etc., could be re-envisioned to be more humane. He started to get there with the slides that organized contacts while displaying helpful things they may have said at some point in the past. That, and creating a shared vocabulary would seem to be his / the CHT’s visible strengths.


The Silicon Valley / Bay Area tech industry is very cut-throat with funding and ethics, even outside of inhumane technology practices.

CHT is very different - I can feel that they take a very different approach to business; they are the brave black sheep, in a good way… Tristan Harris took the high road and is building from the concept up. I can feel their business in general will be an ethics model. This is not a start-up but a revolution, unlike anything since the early dotcom days…


Not sure if I should be PMing this comment, but labelling a group of people in this manner is inappropriate. What is a “California hot tubber”? This does not appear to be a statement that contributes to the cohesive nature of a group. We need to be careful, because intentions can be misread without social cues on a forum.

Rumman is completely entitled to her opinion. I know her personal raison d’être primarily concerns the ethical governance of AI, which is wholly rife with algorithms and data that codify all sorts of forms of bias: racial, gender, etc. It’s a horrible problem and its roots are at the cultural level, not just the technical.

So that is likely the lens through which she is biased to view things, and this is probably more the flavor of what “humane tech” means to her. Hijacking the attention economy probably seems like a frivolous “First World Problem” compared to where she’s coming from, which is more on the inequality side.

But that’s more than a bit disingenuous. Ironically, she’s also demonstrating a perfect example of how social media usage can encourage human downgrading: encouraging rash judgment and posting for reactions and likes rather than slowing down and thinking deliberately and analytically in any constructive way.

I sense a bit of misplaced resentment in her reactions… a kind of holding people in contempt for trying to better things while applying a Silicon-Valley-tech-bro whitewash of the same demographic wholly responsible for the mess she sees everyone in. A major challenge in the inequality sphere is that the worst kinds of daggers are often directed at some of your most likely allies.

The subject of human downgrading is very much as valid as addressing inequalities, and there’s no reason any one initiative to better the world should have to attempt to rule them all for justification – as her response on Twitter illustrated. What good is solving algorithmic bias if society death-spirals into a human-disconnected, depressive, reactionary funk that still contributes to the erosion of democracies and encourages genocide?

I actually do agree with some of the points in the third tweet, however: primarily the race condition of looking at engineering solutions to engineering-generated problems. This is what Einstein meant about solving problems with the same thinking that created them. I would also challenge the concept of human-centered design as an absolute good, for that matter.

Human-centered design is what gave us the convenience of Uber while increasing urban congestion 2.6x by some studies. It’s what made Éric Favre, inventor of the Nespresso coffee capsule, resent his creation when he realized his design for human convenience led to environmental disaster. It’s what makes Amazon’s shopping convenience rely on warehouse employees living in parking lots and wearing diapers while filling orders. Design needs to be inclusive of entire ecosystems of roles, system players, and representation. Reductionist thinking is lazy and convenient, but will always get us into these messes of “externalities”.


The conversation was very Valley centric. The language doesn’t exactly speak to the everyman who uses and is affected by tech. However, I’m glad CHT is out there. The valley is very powerful and perhaps by speaking in a familiar language it will change the culture and the benefits will trickle down. I’ve had a humane technology product and god knows nobody in power cared about it, so maybe they will start to consider this an important element in scaling and funding companies.

I completely understand the frustration with business as usual. As an analogy, I met with an entrepreneur who was using text messages to disseminate health information in low-income minority communities. He was saying how disillusioned and disgusted those communities are with the powers that be… the universities and health philanthropies that come in again and again to do research in those communities, using the people as guinea pigs, supposedly in their best interests. And the universities leave, and nothing changes. We talked about how a major change would come if, when those universities and hospitals came in with their research designs, they actually paid people for their willingness to give their data and be test subjects. Then at least there might be some payoff. Or, even better, have actual community members on the research committees. If they had been, you can be sure that getting community members paid for their data would have been suggested much sooner.

So I hope that including other voices gets considered sooner rather than later…

BTW: when I lived in LA I actually attended Trudy’s meditation group. When the meditation first came on, I thought it could be seen as California woo-woo hokey. But I do think that remaining mindful is a good counterpoint to what happens when tech hijacks people’s brains.


You are using human as an equivalent for customer/consumer. That reveals how the word has evolved from its original meaning and why adding a simple e at the end makes all the difference.

I am in part, by intention. Perhaps I haven’t looked hard enough, but I have seen nothing from the CHT that reflects a multi-stakeholder perspective of human-centered design for all of its various stakeholders, from individuals to business entities to government. That requires dedicated platform ecosystem design.

The Uber example is about more than human impacts, as are the Nespresso capsule (more obvious) and the Amazon example. Take the environmental impact of all of these examples. They ultimately all have serious human impact, yet have no representation under typical human-centric design. It is swept under the rug unless someone can draw a UML diagram with a stick figure.

I’m not sure humane covers that sufficiently, unfortunately.


I definitely think the message should be to the 1% and the 99%, and it does make sense to have focused outreach to the CEOs as well as government (Roger McNamee, for example, works directly with the US Senate, in addition to working directly with George Soros at the global level).

So it’s good that they are speaking to rich people, because we need rich people to fund stuff :slight_smile:

I believe what you wrote here encapsulates the core problem. And this is where I also think we need to cut some of the tech giants some slack, because “unintended consequences” exist all the time in all industry and government, (not to mention my own personal life, oye!).

I believe that the success of technology and the web is expressed through the complete adoption and absorption of society in a relatively short amount of time, and much of the “unintended consequences” can be framed as a global crisis of consensus building, since never in history have so many discussions been instantly connected around the globe.

Most of the web supports emotional consensus building via the voting algorithm; it is just an extension of “most popular in high school” social urges, or at worst mob rule and riot. This even includes page ranking. These are all forces that can be manipulated by any individual or group for almost any purpose: commercial, political, even personal.

What Tristan calls our “palaeolithic urges” are, I believe, more evolved than that. I believe our deep-seated psychological urge is the urge for a consensual reality, at least with those within our social group. Social media amplified that urge. “Consensus building” can be viewed as a smorgasbord of various psychological forces converging around a set of discussions happening on the internet, all seeking a shared and consensual reality with some group of people.

The tech giants developed tools for emotional consensus building because that is how we all thought society worked: whatever is the most liked, most popular, or most similar to that which is most popular is what we all assume we want, because that is where consensual reality tends to establish itself first - whatever gets the most attention in some form.

What Tristan is suggesting, and I 100% agree, is that the web gave us tools for emotional consensus (voting up/down) and tools for targeted ad buys (propaganda), and ZERO tools for “rational consensus building” - the psychological forces that temper an emotional consensus and establish a consensus that is trusted, honest, mutually resolving, and collaborative, and that does not require a majority vote for the accuracy of its truth value.


Great points. I wholeheartedly agree. It’s easy to slag on the tech giants when they are the status quo giants. But in no way can I fault Mark Zuckerberg for not foreseeing how a group of people in Moldova would create a profitable blogging enterprise through abusive uses of Facebook to unduly influence democratic elections. I can, however, fault him for a) being slow to acknowledge the problem, and b) an insufficient response to address the surfaced problems.

A lot of the human downgrading also comes down to: how does one human actually survive an onslaught of ever-present, always-on information and commentary? It’s a deluge beyond any normal human comprehension, whose side effects include attention scarcity, the weaponization of brain hacks attempting to hijack your attention, superficiality (e.g., not reading past the headline and merely responding like an automaton), lack of emotional connection and investment on a lot of levels and various channels, etc.

There are absolutely no supportive tools for that. And what we have merely reduces human experience to ones and zeroes: you can’t even rate YouTube videos up to five stars anymore; we, and our interactions, are ever more reduced. And of course, all conventional software development reduces humans to ones and zeroes, and has no model for how humans actually operate: e.g., holding conflicting ideas at the same time; not acting deterministically as homo economicus, but responding in one of multiple ways depending on our whims and moods of the moment; valuing things in our lives that have no coherent value to optimized machine algorithms, such as love, friendship, doing the unnecessary through kind gestures, or appreciating art and music…

This is only a gut reaction response, but something tells me the answers need to lie in filtering out the clutter that makes us knee-jerk react and spend more time with fewer things that we can go deeper with.