The public has unfortunately settled on the term “bots” to describe the social media manipulation activities of foreign actors, invoking an image of neat rows of metal automatons hunched over keyboards, when in reality live humans are methodically at work. While the 2016 election mythologized the power of these influence-actors, such work is slow, costly, and labor-intensive. Humans must manually create and manage accounts, hand-write posts and comments, and spend countless hours reading content online to signal-boost particular narratives. However, recent advances in artificial intelligence (AI) may soon enable the automation of much of this work, massively amplifying the disruptive potential of online influence operations.
Read the article to get a sense of how much worse misinformation and fake news online will get!
Further discussion of the article can be found here on Hacker News.
The article also addresses why we are especially vulnerable to this coming wave of propaganda:
There is a lack of public awareness of, and skepticism towards, the content users view online
The legal system has numerous blind spots that lawmakers should close
Online anonymity makes it easier for bad actors to spread their propaganda
On that third point: The internet should not be de-anonymized. Online anonymity is an internet freedom. But we should be especially critical of anonymous content and be aware of how it can be manipulated.
That will never work, and IMO trying to ID people online is a fool’s errand. Ask how it’s working in Russia or Pakistan, where you need to flash ID to purchase a SIM card or post comments under a news article. Registration of an internet ID is a dictator’s wet dream come to life: an awful, awful idea with tons of risk, zero benefit to anyone, and no protection against genocide. Could not disagree more.
Also, the amount of misinformation people ingest skyrocketed in the 1990s with the advent and cancer-like growth of cable news. The September 11 attacks happened, and a week later the world could not agree on anything about what happened there. But it’s not the first time: the same thing happened with Pearl Harbor, when most Americans and people in the Pacific had no idea what to believe and were being shoved into planes and boats and hurled at the other side.
I shouldn’t say I completely disagree: the assessment of how the manipulation machine is snowballing is, of course, something I agree with; it’s obvious to anyone. Maybe I am more cynical than most, but lists do not provide security, they take it away.
Anonymity, privacy, security: we need to either formally separate these words from electronic communication or defend them. Giving the people who build the prisons a list of everyone’s names and what they say is exactly what we are trying to avoid.
Well, that is not how such a system would work. First of all, everyone retains the right to stay fully anonymous at all times, but in addition has the opportunity to hold one or more identities. These can be pseudonymous (e.g. just a username) or fully validated. The identities could be provided by a government, a company, or any other institution, or you could create them on your own server or on a friend’s.
Depending on the nature of your identity (which can be aggregated from multiple sources and contains only the information you choose to provide), other people can decide how much credibility to give the content you publish. If you are anonymous, readers might not give much credit to your ‘important news article XYZ’, whereas if you are validated as a journalist by the US Government and the NY Times, your article can be deemed far more likely to be legitimate.
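To make that aggregation idea concrete, here is a minimal sketch in Python of how a reader’s client might weigh attestations attached to an identity. The issuer names, weights, and scoring rule are all hypothetical illustrations, not part of any real protocol or of the system described above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical issuers this particular reader chooses to trust, each with a weight.
TRUSTED_ISSUERS = {
    "us-government": 0.9,
    "ny-times": 0.8,
    "self-signed": 0.1,
}

@dataclass
class Attestation:
    issuer: str  # who vouches for the claim, e.g. "ny-times"
    claim: str   # what is vouched for, e.g. "journalist"

@dataclass
class Identity:
    handle: str  # pseudonym chosen by the author; empty string for full anonymity
    attestations: List[Attestation] = field(default_factory=list)

def credibility(identity: Identity) -> float:
    """Aggregate a crude credibility score from whichever attestations
    this reader's client recognises; unknown issuers contribute nothing."""
    score = sum(TRUSTED_ISSUERS.get(a.issuer, 0.0) for a in identity.attestations)
    return min(score, 1.0)

# An anonymous poster carries no attestations; a validated journalist carries several.
anonymous = Identity(handle="")
journalist = Identity(
    handle="jane_doe",
    attestations=[
        Attestation(issuer="us-government", claim="citizen"),
        Attestation(issuer="ny-times", claim="journalist"),
    ],
)

print(credibility(anonymous))   # 0.0
print(credibility(journalist))  # 1.0 (0.9 + 0.8, capped at 1.0)
```

The key design point, as the comment notes, is that trust decisions stay on the reader’s side: nothing forces anyone to register an identity, and each client can choose which issuers it recognises and how much weight to give them.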
Edit:
@Broodwich, this primer on functional identity also highlights the danger of abuse in identity systems. That is why good systems require careful thought and deliberation. But you should ask yourself why you use identity systems in real life yet accept having no proper identity system online.
It has been shown that one of the most effective ways to influence voter opinions is to manipulate search rankings on political search terms, and Google exercises this power freely. Centralized identity would do nothing to stop this type of influence, and Google/Facebook already tried to strong-arm the rest of the internet into a centralized identity system (OAuth2) and failed. As the author mentions, weak-identity systems like Reddit (“shockingly vulnerable”, as he puts it :D) and ephemeral platforms like Snapchat are increasingly favored by internet users.
Note: I am specifically not talking about centralized identity! I am a proponent of the Decentralized Web, and this also extends to (next-gen) decentralized identity systems.
It’s been tried many times and stopped being a viable idea long ago. OAuth2 is a great example (I was going to say OpenID, but that’s apples and oranges); they all get crushed into submission.