What are your thoughts about prosocial / humane bots?

I read some research that looked into whether bots could be deployed on Twitter to promote prosocial/humane messages - that is, to push hashtags and posts that are positive and cause them to trend. The answer seemed to be yes. I’ve been thinking about this, and I can see arguments for this as a strategy, and also obvious concerns. I wonder if others have considered this tactic and what your thoughts are.

1 Like

I imagine there are two types of bots in general:

  1. Ones that post from a predefined source of text material.
  2. Ones that engage people and mine their responses via “machine learning” to generate ever more posts. (Microsoft ran an experiment letting its Tay bot roam the Twitter-verse for automated machine learning, and it ended up posting fascist and worse things in a short time.)

If the “prosocial/humane bots” fall under the first kind, I don’t see a problem, as long as they don’t ping random people directly (@-mention them on Twitter, for example); i.e., they should just post at regular intervals from a source of text their highly ethical creators decide on, and should not engage people directly.

For example, I have seen a world-war history bot on Twitter that posts frequently about what happened “on this day long ago” in World War II - so that people don’t forget the harm a global war can do.
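To make the first kind concrete, here is a minimal sketch of such a broadcast-only bot in Python. Everything in it is illustrative: `send_tweet` is a hypothetical stand-in for whatever posting API the platform provides (on Twitter, a library like tweepy would fill that role). The point is only that the bot draws from a fixed, human-curated list, posts on a schedule, and never mentions or replies to anyone.

```python
import random
import time

# Human-curated source material, written and vetted by the bot's creators.
CURATED_MESSAGES = [
    "On this day in 1945, the war in Europe ended. #history",
    "Small acts of kindness add up. #prosocial",
]

def send_tweet(text: str) -> None:
    # Hypothetical stand-in: wire this to the real platform API.
    print(f"Posting: {text}")

def run_bot(interval_seconds: int = 3600) -> None:
    """Broadcast-only loop: no replies, no @-mentions, fixed schedule."""
    while True:
        message = random.choice(CURATED_MESSAGES)
        # Guard against accidental mentions slipping into the source list.
        assert "@" not in message, "curated messages must not mention users"
        send_tweet(message)
        time.sleep(interval_seconds)
```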

2 Likes

I think the problem with this would be how to define “prosocial/humane” and how to restrict bots to acting on such messages - and, of course, how to choose the people who decide such things.

I can see money entering into this with businesses paying to employ such bots to further their “beneficial” messages.

2 Likes

Thanks to both of you for giving this question some thought and for posting your thoughtful comments. To explain a bit further: there is a great deal of research and analysis - and I’m guessing you’ve both read this - showing that Russian-affiliated botnets have been very active on Twitter for several years, and continue to be. These bots impersonate Americans and post highly inflammatory messages that are designed to increase polarization and drive divisions. They are retweeted by right-wing Americans, and their hashtags tend to trend. This has fueled the sense of animus in the U.S., and has also provoked greater animus among real people who read the tweets associated with right-wing media.

I realize that I may be criticized for mentioning that these posts are right wing, but I’m not speaking from a political-party perspective - I’m talking about hate speech, conspiracy theories, fascistic messages - points of view that, whatever your political persuasion, one can see as antisocial. These tweets are inflammatory and truly represent what Tristan called “the race to the bottom of the brain stem.” I point toward the work of botsentinel.com and the Alliance for Securing Democracy, as well as the Oxford Computational Propaganda project.

My point here is that this same strategy could be used to promote ideas and information that help advance a positive shared narrative rather than a hateful and divisive one. I believe the ethical concern is that bots - at least some of them - present themselves as people, which is deceptive. On the other hand, there may be a way to use bots that is ethical and serves as a powerful countervailing force on Twitter. That’s the notion I’m considering - and, as I said, there is at least one piece of research that suggests this can work on Twitter. If you wouldn’t mind commenting further now that I’ve explained myself more fully, I’d welcome your further thoughts.

2 Likes

Karen, does Gokul’s response answer your question? Not sure if you were looking for a more technical analysis, but I think he hits the main points here.

For myself, I worry about the proliferation of bots - good, bad, and neutral. I realize I am probably in the minority, and if so, I want to represent that POV. When @micheleminno and I were in the Mozilla open leaders program, we had to use Slack, a messaging app. Whenever I opened it, I got a greeting from the so-called aloha bot. Because I habitually dismissed these bot messages, I ended up not benefiting on the rare occasions when it had something to teach me, e.g., a shortcut for useful functions.

Trying to make bots “friendly” is something I worry about–mainly because, as Sherry Turkle has pointed out, we anthropomorphize so easily.


Just saw this and want to add it here: a report from the Mozilla Foundation about misinformation.

1 Like

The technical answer is appreciated but not what I meant. I share your dislike of bots and would 100% prefer not to even consider them as part of the solution. But I am deeply concerned that the race to the bottom of the brainstem is (a) being accelerated dramatically by AI, including bots and machine learning, and (b) a losing battle when human communicators try to push out as much humane content as the dark and disturbing content that’s coming out - content that, by the nature of how the brain works, will win out for attention in a 1:1 competition with positive messages.

I’m trying to imagine a set of strategies for ordinary people and for people with larger audiences - strategies that could reverse the race for attention back toward the front of the brain, where reason and ethics reside. I feel as if the only argument being made is to change the business incentives and business model. I absolutely 100% support that solution - and I put my money where my mouth is, by paying for ProtonMail and by getting off all platforms with the current business model - but I don’t see it happening any time soon, and I fear that the downward spiral in our society is happening faster than any reform efforts will possibly move, never mind regulation, which unfortunately at the moment is almost certainly a non-starter in the U.S.

1 Like

I’ll add one other comment to what I’ve said. I’ve presented twice on this subject in the past 3 days at a national conference, and even used about 8 minutes of Tristan’s TED Talk - to a group of public officials from across the country, many of them state-level officials (Cabinet Secretaries, commissioners), others county-level executives.

What I can tell you is that (a) they are very aware of the polarization happening across this country, and deeply concerned; (b) they are trying hard to communicate a more positive, unifying, well-framed set of messages; and (c) they had no prior familiarity with the issues with social media - the way that outrage and fear prolong engagement, the reality that engagement is the key business metric. Their first and most urgent question was: what should we do about this? How can we use social media and all communications methods to help shift the culture away from deeper and deeper divisions?

They are prepared to use their influence, in my opinion - but we need to offer them ways to do so other than telling them that the business model needs to change. I told them that - but they are communicating every day, and they want to do so in ways that are good for their communities and the people they serve. I have some hypotheses, but no answers - nor am I aware of anyone else having answers either. The time for us to come up with strategies to test is now - that’s why I asked the question about the notion of humane bots - not because I want to deploy them, but because I am searching for a multi-faceted approach that can be put into use today, not in some hoped-for future when FB is broken up or Twitter charges for membership. I would love to see more thinking about these questions on this forum - I appreciate you two, @patm and @gkrishnaks, for engaging with me on this topic - I wish others would join us.

2 Likes

@khkey, I think that Arnold’s response to @PatMc’s post may be apropos here.

@aschrijver can you update your answer with a view to addressing Karen’s particular concerns?

1 Like

Thanks to all of you for sharing your insights.
When I think about this, the first question that comes to my mind is: how is this going to be implemented? If such bots are here “to push hashtags and posts that are positive and cause them to trend”, what metrics are used to determine whether a post is positive/prosocial?
I imagine that there are two approaches:

  1. Let the users decide (kind of similar to “liking” a tweet, but the button shows as “promote this post as prosocial”?).
  2. Have a committee collectively decide the metrics and input them into the bots (as I understand it, in order for bots to discover and promote positive posts themselves, they need some models/data to learn from). In this case, the metrics and the whole decision-making process need to be transparent to the public. (A rough sketch of what either signal might look like follows below.)
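To make those two approaches concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the vote counts would come from a hypothetical “promote as prosocial” button, and the VADER sentiment model stands in for whatever metric a committee might actually choose - sentiment positivity is at best a crude proxy for “prosocial”.

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Requires: pip install nltk, then a one-time nltk.download("vader_lexicon")

# Approach 1: let users decide via an explicit "promote as prosocial" vote.
def prosocial_by_votes(prosocial_votes: int, total_views: int,
                       min_ratio: float = 0.05) -> bool:
    # Promote only when enough viewers have explicitly flagged the post.
    return total_views > 0 and prosocial_votes / total_views >= min_ratio

# Approach 2: a committee-chosen model scores the text itself.
_analyzer = SentimentIntensityAnalyzer()

def prosocial_by_model(post_text: str, threshold: float = 0.5) -> bool:
    # VADER's compound score runs from -1 (most negative) to +1 (most positive).
    return _analyzer.polarity_scores(post_text)["compound"] >= threshold
```

Either way, as you say, the thresholds and the model would need to be published for the process to be transparent.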

Although I know the aforementioned hypotheses are at a rather detailed implementation level, I think it’s important to envision how this would look. I can also imagine a fair chance that two kinds of bots standing at opposite ends of the ethical spectrum - malicious bots that want to manipulate people’s thoughts, and bots that promote positivity and humanity - would compete with each other for users’ attention through hashtags and trends. To me, this could become a new war involving multiple parties. Hence, my opinion is that if prosocial bots are to be implemented, more details need to be considered, like combating unethical bots at the same time, etc. However, I do agree that the issue is very urgent right now, and we need to make a collective effort to tackle it!

4 Likes

I appreciate your thoughtful comments.

I would suggest that your question about what constitutes “prosocial” is identical to the question that some have asked about what constitutes “humane” when it comes to technology. In other words, an excellent question, and one that deserves to be explored and codified - but not something that we can allow to stop us from trying to advance positive social norms.

If we were to try to advance this idea - and in so doing avoid the pitfalls you rightly flag - we would actually need to begin, and that is where I am still hoping for more engagement from others on the forum. Your response is encouraging - I’m concerned that so few people on the forum seem to be thinking about ways to actually use technology to reverse some of the dangerous polarization it has helped spread (like gasoline on a fire) - not only by changing the technology itself (e.g., developing new apps and platforms) or regulating tech companies, but by equipping those who need to communicate today and tomorrow and the next day with tools to do so. Now.

1 Like

Hey all! New user here, but I found this conversation interesting. Particularly the question of “prosocial”.

If I may, I’d like to take a step back and look at the frame of the question. Specifically: what are the benefits and drawbacks of a centralized social network in contributing to a polarized or unified society? The benefit asked about - spreading positive messages (let’s leave “positive” undefined for the moment) - is the other side of the coin of spreading negative messages. In thinking about how to develop pro-human technology, an ideal I would like to aspire to is building systems that are designed systemically to promote “good” and mitigate “bad” (because, as when designing secure software, we should assume any design will come under attack). I don’t believe current social networks are designed for this (and, to be fair, I don’t think most people foresaw this problem with them in the early days).

What I think would be a more direct approach to what you want to accomplish is to replace social networks with a solution that’s more human-friendly and that can mitigate the impact of bad bots. Specifically: break the single platform into many platforms focused on special topics (like the HTC forum!). People can connect on these platforms and participate in as many as they like (subject to the oversight of the individual community moderators). The definitions of “good” and “bad” are then left up to the moderators of the individual communities, but by keeping things small, if someone runs a toxic community then others can simply leave and join a different one. Similarly, you can still have bots jumping between communities, but if they are disruptive to a community then the moderator can ban them from it.

There might be concern that this would lead to more of an echo-chamber effect as people join only communities that mirror their beliefs, but the important thing to me is that this would no longer be the result of an algorithm invisibly deciding to show you only media like that which you already engage with, but would put the responsibility back on the individual user to join communities they felt added value to their life.

Because as @saiyu mentioned, if we build positive bots on the current framework then there will simply be a war for the messaging, and I don’t think positivity will win because of the “race to the bottom of the brain stem” you mentioned earlier.

3 Likes

Welcome @indigochill! Good point, but what about social media that people join because everyone is on them? Like all their friends and ‘cool’ people? Those would be more difficult to substitute with small niche topic communities…

While I appreciate the ideas about what might be done - better solutions in the long run, many of which I believe in and would endorse - I again urge you all to consider what advice and assistance we might offer to public officials who need to communicate with the public today, tomorrow, and the next day. If it’s not prosocial bots - and I said from the beginning that I see all sorts of downsides to that idea - then what can we offer them now, to try? I feel as if the question I am asking is getting lost in hypotheticals and the parsing of words. Accepted - most people don’t like the idea of prosocial bots (or want to debate what prosocial means). I don’t like the idea either, but I am looking for real-world ideas and methods to share with public officials that can be put to use today. Is there anyone who wants to talk about that?

To your question about what to offer public officials: if I were to talk to them about how to improve the relationship the public has with tech (particularly having a healthier relationship with the social media they consume on Twitter/Facebook/etc., which seems to be your interest here), I would say let’s look for ways we can educate the public about being discerning about the messages they take in. Don’t take things at face value. Do your research. That kind of thing.

Basically it boils down to a greater educational emphasis on critical thinking, which should hopefully reduce the innocent proliferation of misinformation created by bad actors, whether bots or otherwise.

This could take the form, for instance, of explicitly teaching logic courses in school, or maybe even some form of introductory computer science, which if we strip away all the tech trimmings is really just teaching systematic reasoning. Or for something a little less drastic perhaps, putting more of an emphasis on hands-on experimental process in existing science courses to teach students to combine observation with reasoning and to subject their hypotheses to experimentation.

On the topic of communities that grow merely because everyone else is in them (which is a great question), right now I don’t have a great answer. My mind goes in two directions:

  1. Let them be. If the niche approach is more rewarding, more people will move toward it eventually. Maybe there’s even room in the world for both approaches. Already there seems to be plenty of public sentiment against the major social media leaders, but people don’t see decentralized alternatives like Mastodon or Diaspora as easy for non-technical people to get into, so they feel they don’t have any other options.

  2. Although I’m a bit reluctant to suggest this, in the spirit of brainstorming… In theory, if bad actors are truly able to automate the trending of negative messages, then an arguably “grey hat” approach would be to get obvious gibberish trending, in order to drive home to the public how vulnerable the notion of “trending” is, undermine trust in the platform, and push more users toward seeking more robust communications solutions. I think this would be more effective in the long run than driving positive messages, because the problem goes deeper than the negativity: the platform itself is flawed. This could even encourage the platform to take measures to mitigate this sort of attack, resulting in a win-win for everyone, including the platform.

1 Like

Thank you, @indigochill.

If I understand @khkey correctly, she is requesting a practical approach that can be implemented now and achieve results soon. And it has to be one that can be explained to people who are not techies but who are in leadership roles.

One thought that occurs to me at this moment is that confusion results when there is no clear authority - or too many authorities. This has to be simplified; e.g., there needs to be a short list of authoritative sites one can trust for correct information and fair, unbiased interpretation. And these sites - let’s say The Guardian and the New York Times - need to be on guard against hacking so that they can continue to deliver the news in a way we can depend on.

Social media can’t be depended on to do anything socially useful because it has an entirely different function: to bind people with common values and interests. It does not serve the population as a whole.

I like the guerrilla approach you suggest. It’s anarchistic and disruptive, which means it would have to be implemented by sane, rational people.

I really appreciate the conversation here. @indigochill, thank you for sharing your thoughts, and thanks to @patm for making the concern clear. I can feel the urgency in @khkey’s posts.

In my opinion, we all want to achieve noticeable results soon, but I am afraid that, given the scale of this problem and the number of parties involved, patience and endurance are needed to see the fruits of any effective strategy.

I am thinking about the words above, and I wonder whether shifting the engagement offline would be a good approach. Since I am not in America, I cannot propose in detail how officials could make this work, but I feel that some emphasis could be put on offline community building and engagement. More programs and activities could be held in neighborhoods to connect people. People’s empathy and sense of responsibility for their social environment can grow through involvement in their community, and that may prompt them to think twice when they see a dehumanizing post on Facebook. If we agree that face-to-face humane interactions offline are more valuable than online engagement, this may have a more profound impact than implementing prosocial bots, because at the end of the day, how people react to information largely decides what follows (the trending of topics, etc.).

@indigochill raised a very good point about awareness. If we don’t expect the tech giants to make changes soon, we’d better educate ourselves. As a computing student, I am disappointed by the fact that there is no education on tech ethics for tech students at college, let alone for the public, who may not even be aware that they are being manipulated. Many computing students aim to work at these tech companies, and it’s worrying that they do not pay much attention to tech ethics. From my perspective, education for tech students and awareness programs for the public should be among the top priorities today.

I agree that expecting FB to be broken up is somewhat unrealistic at the moment, but given its enormous scale and user base (it controls WhatsApp and Instagram, with Snapchat potentially being next), any information can spread and go viral so fast that limiting its power, in terms of the scale it has, is in my opinion critical.

Hope my thoughts can help you in some way.

I couldn’t agree more about moving community engagement offline, and it’s part of what I am indeed advocating with public officials. I’d love to see a multi-pronged strategy that had both offline and online elements - and I appreciate having this conversation in this forum.