Related to the recent Orwell vs Huxley discussion, here is an example of what humankind now faces in the near future:
The public has unfortunately settled on the term “bots” to describe the social media manipulation activities of foreign actors, invoking an image of neat rows of metal automatons hunched over keyboards, when in reality live humans are methodically at work. While the 2016 election mythologized the power of these influence-actors, such work is slow, costly, and labor-intensive. Humans must manually create and manage accounts, hand-write posts and comments, and spend countless hours reading content online to signal-boost particular narratives. However, recent advances in artificial intelligence (AI) may soon enable the automation of much of this work, massively amplifying the disruptive potential of online influence operations.
Read the article to get a sense for how much worse misinformation and fake news online will get!
A further discussion of the article can be found here on Hacker News.
The article also addresses why we are especially vulnerable to this wave of propaganda that is coming:
- There is a lack of public awareness of, and skepticism towards, the content that users view online
- The legal system has numerous blind spots that lawmakers should close
- Online anonymity makes it easier for bad actors to spread their propaganda
On that third point: The internet should not be de-anonymized. Online anonymity is an internet freedom. But we should be especially critical of anonymous content and be aware of how it can be manipulated.
The article argues for anonymity, pseudonymity, and the need for (non-federal) online identity systems. This is interesting and aligns with what I wrote earlier in Investigating privacy-respecting online identity, data ownership & control solutions.