A little while back I wrote about how we’re forfeiting the first major global conflict between humans and AI frameworks: by failing to recognize that the conflict exists, by encouraging machine-learning models to measure success by their ability to change human behavior, and by broadcasting voluminous data about, and access to, human responses — data that helps those models succeed even when the results hurt or dismay the people involved.
On the second point: given that sharing granular data about preferences and responses makes it easy for others to exploit individual behavior, how can we institute better control over the use of personal data on major platforms — social networks, ad tracking, ISP tracking, and the like? And how can we ensure that the platforms we support and make powerful provide such protection to everyone by default?
The current public furore over Cambridge Analytica and SCL is relevant, but it partly conceals the universal leverage that all platforms hold over their users: leverage that will persist until they implement encryption and isolation that make such exploitation impossible without explicit user buy-in (opt-in).
Some attempts to answer these questions: