Overton – an app to recommend you news articles that you’ll probably disagree with

(TL;DR: Pocket/Feedly for aspiring rationalists. Purely hypothetical so far. Looking to collaborate!)

Hello! My name’s Zafaran. I’m a keen consumer of news and technology with a growing interest in humane tech, and I have very little programming or design experience. I had this idea for an app a few months ago, but the more I thought about it, the less sensible it seemed. Rather than let the idea languish in obscurity, I thought I’d post it up here. I’d love some input, feedback, encouragement, whatever – opposing views especially, of course!

What would it do?

Ideally, the app would help people become more understanding of opposing viewpoints by helping them learn to recognise their own biases. It could maybe even start a trend for hip new recommendation engines that don’t quite do what you expect.

Who would it be for?

Broadly, it would be for people looking to develop their critical thinking skills, especially if they’re new to the whole reasoning thing or find themselves falling back into old habits (e.g. not seeking out articles or outlets that they disagree with, whether through lack of time or motivation).

How would it work?

You’d have a list of saved articles (like in Pocket or Instapaper). For each article you saved, the app would also save a number (defined by you) of articles with opposing viewpoints.
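
In code, the core mechanic might look something like this rough Python sketch – the one-dimensional bias score, the class and field names, and the matching rule are all placeholder assumptions, not a design:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    url: str
    topic: str
    bias: float  # hypothetical one-dimensional score: -1 (liberal) .. +1 (conservative)

@dataclass
class Library:
    counter_count: int = 2                       # the user-defined number of opposing saves
    saved: list = field(default_factory=list)
    recommended: list = field(default_factory=list)

    def save(self, article: Article, corpus: list) -> None:
        """Save an article, then queue counter_count opposing articles on the same topic."""
        self.saved.append(article)
        opposing = [a for a in corpus
                    if a.topic == article.topic and a.bias * article.bias < 0]
        # Naive choice: furthest-away articles first. A real engine would need
        # a far richer notion of "opposing" than a single left-right axis.
        opposing.sort(key=lambda a: abs(a.bias - article.bias), reverse=True)
        self.recommended.extend(opposing[:self.counter_count])
```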

Where possible, the app would openly state why it had saved a given article for you (perhaps at the end of the article, to avoid influencing your opinion of it before you’d finished reading it). For example:

“We recommended you this article because you recently saved a lot of articles on [contentious issue] from websites with a strong [liberal/conservative/other] bias.”

(I know there are a lot of issues with this, and I’ll try to work through some of them in the next section.)
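
For what it’s worth, the explanation could probably be generated from the same data the recommendation used. A purely illustrative sketch, reusing the made-up (topic, bias) representation from above:

```python
def explain(topic: str, recent_saves: list) -> str:
    """Build the reason string from recent saving history (purely illustrative).

    recent_saves: (topic, bias) pairs, bias on the same hypothetical -1..+1 scale.
    """
    relevant = [bias for t, bias in recent_saves if t == topic]
    lean = sum(relevant) / max(len(relevant), 1)
    side = "liberal" if lean < 0 else "conservative"
    return (f"We recommended you this article because you recently saved "
            f"{len(relevant)} articles on {topic} from websites with a "
            f"strong {side} bias.")
```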

Nice-to-haves might include:

  • An "our favourite articles this week" newsletter, with a broad focus on "rationality" (e.g. posts from ChangeMyView, Wait But Why, LessWrong, &c.). This could hopefully inspire/nudge people to learn to reason more actively than they would necessarily have to while using the app.
  • A weekly/monthly email detailing your reading habits (to allow you to track any gradual changes more easily)?
  • (A built-in semi-automatic micropayment system for outlets/authors would be lovely, but I imagine that's for a much later date or even a separate project.)

What could go wrong?

Who would it be for?

Maybe no-one cares about this sort of thing, or maybe the people who do care have other, more convenient ways of doing this already, like following a load of different news outlets on Twitter (or buying the Telegraph and the Guardian when they go to the shops).

The app may also attract people who have a lot of uncertainties already. This could be good if they’re then introduced to all the great rationality resources, and even better if they take full advantage of them, but not as good if they don’t take those extra steps.

How would it work?

Maybe it’s difficult to do this properly. Can you really quantify someone’s biases in a useful way? I read a bit about IBM Watson’s Personality Insights a while ago, but I’m guessing it only works because it has so much data to work with?

It might be difficult to pigeon-hole more balanced or nuanced articles effectively, so it could be hard to work out what to recommend to someone who was reading a lot of them. Would people who read high-quality journalism be the sort of people who are more likely to use the app anyway, or might these people feel that they already have a balanced enough view of things?

Also, just because you’ve saved and read something doesn’t mean that you agree with it 100% (hopefully). Two people could read the same article and come away with completely different impressions of it. The app would be none the wiser unless it asked people how the article made them feel after they’d finished it, or something like that. Would asking straight away be too soon to tell?

If two people’s underlying assumptions and values are completely different (as is often the case with contentious issues), they might benefit less from reading articles with strongly opposing views: the opposing view might just seem like nonsense because of the background knowledge or inferential leaps needed to understand it fully. Would it be better to have an app that recommended explainers instead?

Rather than recommending completely opposing views, what if the app guided people towards more balanced articles and sources? Is it hard for people to find these themselves? (Would it perhaps be better to have menus in existing article-saving apps that let you see “bias gauges” for the articles you’d saved?)
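
To make the “bias gauge” idea concrete, here’s a minimal sketch. It assumes – and this is the big assumption – that you already have outlet-level bias ratings on a single left–right scale; the outlets and numbers below are invented placeholders:

```python
from statistics import mean, pstdev
from urllib.parse import urlparse

# Hypothetical outlet-level ratings on a -1 (left) .. +1 (right) scale;
# these outlets and numbers are placeholders, not real measurements.
OUTLET_BIAS = {"theguardian.com": -0.6, "telegraph.co.uk": 0.5, "reuters.com": 0.0}

def bias_gauge(saved_urls):
    """Summarise the lean (mean) and breadth (spread) of a saved-article list."""
    scores = []
    for url in saved_urls:
        host = urlparse(url).netloc.removeprefix("www.")
        if host in OUTLET_BIAS:
            scores.append(OUTLET_BIAS[host])
    if not scores:
        return None
    return {"lean": round(mean(scores), 2), "spread": round(pstdev(scores), 2)}
```

A genuinely balanced reading list would then show up as a lean near zero with a large spread, whereas a lean near zero with a small spread would just mean a diet of centrist sources.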

Avoiding bias

How do you deal with bias? Even if you assembled a team of moderators from across the political spectrum, how could you do this in a balanced way? Everyone in the team might think that their view’s pretty close to the norm, including the person assembling the team.

Is the app likely to attract certain groups of people anyway, e.g. Effective Altruism types? If the recommendation engine relied on other users’ data to create a more accurate picture of people’s views, this could cause some sort of skew.

Who would you get to label the data? Would you need subject area experts? Wouldn’t this be a massive waste of their time and talent? You could crowdsource the info, but that would probably create more problems with bias.

Critical thinking skills

Might people rely too much on the app for their news and not develop their research skills? Is this already an issue with Twitter, Feedly, or the lengthy list of newsletters I’m subscribed to?

Would people develop their critical thinking skills less if they could just parrot both sides of the argument? Would it be better to get people to actively respond to what they’ve read, or is this a step too far? I imagine most people wouldn’t answer.

Non-standard views

I imagine that people with very non-standard or “extreme” views probably wouldn’t use the app. Even if they did, what would it recommend to them? If a lot of what the app did depended on having a lot of data to work with, what would it do when it had less?

Conspiracy theories feel like a whole other kettle of fish. There are already websites and apps that analyse the accuracy of news sites (e.g. The Factual). Could we include this sort of thing in the app? What about things that can’t really be proven either way? Could we label them as “speculative”, or would this be an insult to people’s intelligence?

Should we moderate content? Should we allow extreme views? What counts as hate speech? Should we include warnings, or would that be too great a sign of our own bias? If anyone with extreme views used the app, could these warnings cause them to feel alienated? (This might sound like an odd thing to say.) In a way, I imagine we’d be quite keen to keep them. How could we do this safely and inclusively?

I was also wondering whether the backfire effect might just render the app useless, but I’ve heard mixed things about it…

Recommendation engine dynamics

What level would we want recommendations at: outlets, authors, articles? Where would these recommendations come from? Would we use people, machines or both?

How specific should the recommendations be? How specific must they be for the app to work? Given current technological constraints (which I imagine there are plenty of), how specific can they be?

People could already be trying to read news with opposing views, which could confuse the recommendation engine. (And if people were keen to try the app, it might even put them off saving news they disagree with for a while, which could break a newly forming habit – is this a slippery slope argument?)

How do you solve that?

  • You could let people flag individual saved articles as ones that oppose their views, but would people just use that as a workaround to avoid getting recommendations?
  • Could you take some sort of questionnaire to gauge your political leanings before you started using the app? There would definitely be bias issues here. If your views changed markedly, would this affect how effectively you could use the app? Could you choose to reset your preferences, or have the app prompt you to retake the questionnaire periodically? (Is this too complicated? There’s a rough sketch of how a questionnaire might feed into things just after this list.)
  • Compare the saved articles to your viewing history? Even if you had access to someone’s full viewing history, I imagine this wouldn’t be easy.
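
One way to square the questionnaire with ongoing behaviour might be to treat it as a prior that real saving history gradually overrides. A minimal sketch, with every name, scale and weight invented for illustration:

```python
def estimate_lean(questionnaire_lean: float, saved_biases: list,
                  prior_weight: float = 5.0) -> float:
    """Blend a questionnaire-derived prior with observed saving behaviour.

    The prior counts as prior_weight virtual articles, so early estimates
    lean on the questionnaire and later ones on actual history. Resetting
    your preferences would just mean retaking the questionnaire and
    clearing saved_biases. All scales hypothetical: -1 .. +1.
    """
    total = prior_weight * questionnaire_lean + sum(saved_biases)
    return total / (prior_weight + len(saved_biases))
```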

I imagine that people could end up converted to the opposite view if they consistently had access to better-quality information from the other side of the argument, but I don't think this is very likely.

If a person read a lot of the recommended articles, the algorithm could end up yo-yoing: reading the opposing side would make their history look as though it had flipped, triggering recommendations back the other way, and so on. In this situation, how would you weight recent activity against history?
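
One standard way to damp that yo-yoing would be an exponentially weighted moving average, where a single parameter sets the recent-vs-history trade-off. A minimal sketch on the same hypothetical scale:

```python
def update_lean(current_lean: float, new_bias: float, alpha: float = 0.1) -> float:
    """One EWMA step: alpha sets the recent-vs-history trade-off.

    A small alpha lets history dominate and damps the yo-yo; a large
    alpha chases whatever the person read last. Scale hypothetical: -1..+1.
    """
    return (1 - alpha) * current_lean + alpha * new_bias
```

With alpha = 0.1, one contrarian binge barely moves the estimate; it would take a sustained change in reading habits to flip it.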

How transparent would the recommendation engine be, and how transparent could we make it? (I’m sure we’d be accused of some sort of bias regardless of what we did.)

Might users or content creators try to game the recommendation engine or other aspects of the app? If so, why? Is there anything we can do to prevent this?

More general design issues

How do we avoid addictive elements? Endless scroll’s obviously a big one: now that it’s the norm, what do you do instead? What about getting addicted to the actual act of saving articles? (I’ve been there and it’s not fun.) Is there a way to limit the amount of content people consume without limiting the app’s potential? Could you ask people what they want to read about and give them an article they’ve saved alongside a recommended rebuttal? (I’m fairly sure a browser plugin called rbutr pretty much does this already.)

How do you stop people from just ignoring the recommended articles? Could you link to them at the bottom of the saved one, or maybe even scroll straight into the next? (This wouldn’t be infinite, just from the saved article to the recommendation.)

Would there be privacy concerns? Might you feel more averse to your data being used because you’d notice it more, given that you’d probably be doing a bit more conscious decision-making while using the app?

What other tools are people likely to use alongside the app? Emails, Twitter, Google Alerts?

Are there obstacles to people using it?

How will it continue? Does it need funding? If so, how?

Most importantly: Overton or another name entirely? (I nearly named it Contraflow, but that sounds a bit too much like a traffic app to me.) It’s currently named after the Overton window, which defines the range of views that are considered acceptable in political discourse at any given time – the app’s aim is basically to help people to “open the window”. (Is this maybe a bit too academic?)

How about Controversial.ly?

Conclusion

I don’t know enough about recommendation engines (or machine learning, or anything much to do with this) to know how much of this is feasible. I’m sure there are a lot of things I haven’t mentioned or even thought about. (Also, I completely acknowledge that a lot of these questions are very broad and don’t have very good answers yet, if any at all!)

I do know that there’s a browser plugin called rbutr that does something similar to what I’ve suggested here. I believe rbutr’s rebuttals are crowdsourced, which sounds less costly than anything I’ve suggested!

To actually conclude, thank you for your time! I’d love some input if you’d be willing to give some. Is this too naive, niche, derivative or just plain terrible? Are there things you’d do differently? A better name, perhaps? Let me know – I’d be very grateful for the feedback.

Also, let me know if you’d be interested in getting involved! I imagine that designers, philosophers, psychologists, political scientists, journalists and programmers would be good to have around. Investors might be nice too. Even if you’re not one of these, it’d still be lovely to hear from you!


I think it would make a lot of lazy people a lot more accurately informed, and it’s a great idea.

Doing this manually isn’t hard, IMO, but having the opposing views baked in would be wonderful.

It’s like that dumb TV show back in the day where James Carville and the bow-tie guy argued the left and right of things, like debate club in school.

There’s a guy whose work I really loved: he predicted the Arab Spring – the fall of Egypt and Tunisia – three years before it happened. He always said it’s not clairvoyance, and since he was a college professor he laid the technique out as a formula anyone can follow.

For war and things of that nature, first you listen to the PR, i.e. the rulers’ official statements. Then you hear from those rulers’ opposition. Then, whatever the threat or assumption, you look at the map and see what’s possible: what the land looks like, who is where, and what their goals seem to be.

And you read between the lines. The Muslim Brotherhood shot Anwar Sadat for making peace and was made illegal in Egypt; that changed in 2009, when Obama gave his Cairo speech and, once there, refused to take the stage until Muslim Brotherhood terrorists were given a seat on the dais in front of the media. A sign of things to come.

It’s reading between the lines. Now we see a world in which the Russian leader takes what he wants; the USA makes weaponry and software and nothing else; China, which has been running advanced persistent threats against every country on Earth, also supplies the entire electronics parts and manufacturing chain; and the billionaires in the USA who are friends with the president are all quietly hoarding crypto, in full knowledge that the hardware behind it lives in China.

If people don’t grab on to opposing views, they soon won’t be available anywhere.

@thefalsescotsman - I really enjoyed reading your idea. It sounds like a good one but I don’t think I would ever use it. Some other people surely might though! What makes consumer software development so hard is that you need users to really deeply need/want what you have to offer. Best of luck to you!

Also, I think that you (and @Broodwich - I love your comments on this forum!) might really enjoy participating in the reading community I’m creating at Readup. Check us out and let me know what you think!


It is an interesting idea indeed, @thefalsescotsman. As input and along similar lines you might want to check out:

https://www.talkingeurope.eu/

Talking Europe connects citizens who hold opposing political views – and who live in different countries! […]

“Opinion bubbles” shape social discourse throughout Europe. People have always been shaped by their social environment in their social views and political convictions, but media and technological developments are accelerating this trend. Social networks want to keep their users on the platform as long as possible, so algorithms are designed to avoid cognitive dissonance. As a result, only information that corresponds to one’s own ideas is displayed, and conflicts are usually reproduced loudly and unobjectively.

TalkingEurope aims to break this bubble for European citizens. It has been very successful so far. I can’t find the article that described this very well, but here is another one with some background:
