Given the problem statement of this community, what types of Humane Tech are fitting to the community, and which are not?
CHT has a Mission, but what is its Vision?
Judging from humanetech.com, it would seem the mission is: Humane Design is the solution. But several members, and some media articles, suggest this may be too small in scope, given where technology is headed. And how do we go about making this vision a reality?
From my perspective, technology which makes good use of human attention, and helps focus and conserve our attention towards a good and healthy life, is humane. This tool for those with memory loss seems humane in a very positive sense – a nice change from countering the inhumanity of attention hacking and social addiction.
In my mind, the initial goal of Time Well Spent / Humane Tech was to reclaim our personal time and attention from systems which use it for the wrong reasons. This leads to a derivative goal: to support technology that specifically helps us reclaim time from these applications.
Prisma offers the opposite (but equally important) side of the spectrum. It improves the use of time that we already have, without focusing on “reclaiming.” Rather than just filtering negative uses of our attention by technology, CHT should equally encourage positive uses of our attention by technology. That is where Prisma fits in.
I think this aligns with what @metasj said as well.
Thanks @metasj, I have added it to the awesome list.
Now I intentionally broaden the subject much further, to something that may seem off-topic-ish: sustainability (defined here as a combination of economic, ecological, social and political factors, not just the environment).
Apple, Samsung, and Microsoft can help solve the problem, because keeping people hooked to the screen isn’t their business model.
True. But what is their business model? And does it make their tech into Humane Tech?
For instance, on Apple (don’t know if they changed their ways since June '17… doubting it, but who knows) there’s this: Life and death in Apple’s forbidden city.
These companies may provide paid services and not sell your personal data, yes. But even though they’re not in the ad-selling business, I wouldn’t call their tech humane, because their business models are unsustainable!
This is a global problem: we humans are the only species striving for unlimited growth, and we base most businesses solely on monetary incentives, profit, the bottom line.
The economist Kate Raworth explains this excellently in her vision of the Doughnut Economy. I would very much encourage you to watch this video (there are a few seconds of Dutch in it…):
The only long-term sustainable model is the one in which we don’t destroy our planet. IMHO, a root-cause analysis of the problems defined by The Center for Humane Technology leads to the flaws in our current economic models as described by Ms Raworth.
And this, I think, should also be a foundation for this community. This doesn’t mean it needs to be in-your-face, but more as an underlying philosophy. There are interesting discussions on this Discourse forum on face-to-face interactions, meeting in the flesh, Time Well Spent.
That all fits the bill. The sharing economy, gig economy, circular economy, however you name it, are the domains where the best solutions can be found.
Food for thought?
At least I am basing my startup, innerircles, on these ideas. It’s refreshing to think like this. Finally, to make it a bit more concrete and related to business models again, I’ll leave you with just one more link, to a presentation on the Flourishing Enterprise Innovation Toolkit and Business Canvas:
I find it hard to define Time Well Spent. What counts as “well”? For example, video games could probably fit the HT definition of a bad actor (a classification I disapprove of).
What I hate are those toxic products that hack users’ brains to monetise them, and that require ever more surveillance technology. For me it always comes back to privacy.
Maybe I am a cypherpunk?
What do you think of the cypherpunk movement? Any similarities with HT?
Thanks for the link to Kate’s TED talk! I took some economics courses in college (required for business degrees), and strangely enough, the first course I took was titled “Environmental Economics”. Of course, it’s been a while, but I remember discussing in class the effects of pollution (Lake Erie, etc.) and how it relates to the economy. The rest of the coursework (Macro, Micro) went as Kate discussed. I enjoyed her comment about economics professors and the need to rewrite the textbooks!
So what is humane technology? It certainly should include tech with positive outcomes for well-being. We need to step back and observe the evolution of technology and its impact on each generation to understand where it goes wrong and how we can get it right.
Re-reading Tristan’s words, and following the news day after day (most recently Cambridge Analytica shutting down), I can feel conditions for a cultural awakening improving at increasing speed.
As we know, changing a culture is neither an easy nor a quick process.
And I am sure we want a real and deep change, not a superficial one.
This kind of change demands process, and process demands time and organization.
But on the other hand, technology was and is a global revolution, and I feel CHT is the revolution of the revolution. In that sense, events calling for change are increasing in speed and depth.
It is probably still impossible for us to see the historical point on the timeline we occupy at this exact moment, and equally impossible to comprehend the global influence CHT is creating for the future.
It has always been that way throughout human history.
Fortunately, we are part of this movement, and so we can take part in this cultural awakening.
So, in a few words, I want to say that we must be patient, as the moderators ask, but meanwhile build a better organization of contributions and await the Founders’ developments that Tristan mentioned.
Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.
…when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.
In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics.
In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms.
This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.
The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.
We have no idea what Tristan and his team are working on, so it could be they have come to the same conclusions this Guardian journalist has. Until we know for sure, we should try to be patient and tend to the fires of goodwill and faith.
From The Nation’s list of ways to break up monopolies:
Google, Facebook, and Amazon pose such a fundamental threat to American democracy that the only rational solution is to break them up.
These companies command markets to a degree that makes agribusiness and airline giants seem meek by comparison. Google controls 92 percent of internet searches. Last year, Google and Facebook attracted an estimated 84 percent of global spending on digital advertising, excluding China. Amazon owns over 70 percent of the market for e-books and, through the spread of Alexa, 71 percent of the in-home voice market.
One reason these pervasive digital platforms constitute an existential danger is that they control the fates of so many other businesses. Retailers and manufacturers at once compete with Amazon and depend on it to reach the market. Media companies are beholden to the algorithms of Facebook and Google. Google has used its control of search to privilege its own travel and shopping services, while marginalizing those of rivals. In 2013, Facebook bought a tiny start-up, Onavo, and used its technology to spy on smartphone users as a way to detect popular new apps early and then either buy them or copy their functionality.
A second reason these firms have become dangerous is that they control the flow of ideas and information—an arrangement no democracy can tolerate. They mediate our interactions, and they decide which sites show up when we search; which news stories, real and fake, we encounter in our feeds; and which books we run across. Surveillance is a central strategy of all three of these tech monopolies. And they all use the extensive data they’ve gathered about us to hinder upstart competitors, keep us tuned into their services, and otherwise sustain their dominant positions.
What would breaking up these companies look like? It would essentially mean removing the conflicts of interest at the heart of these digital giants by compelling them to spin off their platforms, which provide market access for other firms, from the other parts of their business that compete with those same firms.
That process would likely begin with an antitrust investigation. Last November, Missouri Attorney General Josh Hawley, a Republican, launched an antitrust investigation of Google. His office is looking at, among other things, whether Google manipulates search results to favor its own products over those of its competitors, in violation of the state’s antitrust laws. It would be a promising development if other state attorneys general, who have broad authority to protect their citizens from monopolies, did the same. Many of the nation’s most pivotal antitrust cases, including those that led to the break-up of Standard Oil and the almost-break-up of Microsoft, were first initiated by state investigations and later taken up by federal antitrust enforcers.
The article makes some excellent and very valid points. I have moved the posts on this subject to this topic, and we should further discuss this as we are improving the forum.
Personally I agree with many of the points raised, and contributions from other members also indicate that the scope, mission and line of work should probably be broader than what is currently described on the CHT website.
We don’t know the strategy and plans of the core team yet, but together as a community we can help them define the rather new field of humane technology, and we can certainly investigate its full scope.
Should humane tech also be talking about tech that is produced, supported and disposed of humanely?
There are many facets to this: everything from tax-avoiding tech companies, to terrible labour conditions on low pay, to the use of conflict-related minerals and metals, the dumping of used phones (and the dangerous stripping out of valuable components, because they are not designed to be recycled), poor environmental performance, and planned obsolescence (and the wasting of resources it entails).
Should we be thinking about this when we talk about humane tech? Who made my product, who will suffer from it and its disposal?