MOOC on Human Downgrading & Persuasive Tech

The MOOC “Big Data, Artificial Intelligence, and Ethics” has a new section on “Human Downgrading”, “The Attention Economy”, and “Persuasive Technology”, much inspired by, and in line with, the discussions here:
https://www.coursera.org/learn/big-data-ai-ethics?specialization=computational-social-science-ucdavis#syllabus (week 4 on Ethics)

Would love to get any feedback from this community!
You can also see some of the content here: https://www.youtube.com/watch?v=nWL2F9aW2V0&list=PLtjBSCvWCU3o_OMZn3RVUuraz7Ro9z66F&ab_channel=MartinHilbert

This is the 2nd of 5 courses in our Specialization on Computational Social Science, taught by professors from all 10 campuses of the University of California. With some 35,000 learners during its first year, the Specialization has already been voted among the best-of-all-time online courses on ClassCentral. Its courses on “Computational Social Science Methods” & “Computer Simulations” have also been selected for the Best Online Courses, 2021 edition.


I thoroughly enjoyed watching your lecture, thank you for sharing! You did an excellent job presenting a tremendous amount of research in a relatable, concise, and intuitive way.

I have not gone through your entire course, so I do not know whether you present more technology solutions, but I think it is imperative to include a discussion of some potential ethical design changes:

  • Batch delivery of notifications by default.
  • Disambiguation of notifications by default.
  • Platform fiduciary responsibility for users’ attention. (You mention this one in the lecture!)
  • Detect excessive Time on Device / recognize a user’s isolating negative emotional state, then promote genuinely pro-social content that aims to elevate the user’s emotional state, disengage them from the platform, and encourage face-to-face interactions within their local community.
  • User Trust Leveling: an indicator of how well users adhere to community values, which determines how widely their posts are promoted. Users level up over time with insightful, high-quality contributions and level down when posts do not align with community values. Higher-level users receive a wider audience on the platform.

That list is the tip of the iceberg. For many more suggestions from engineers and designers who work, or have worked, at the major tech companies, I recommend the Center for Humane Technology’s “Your Undivided Attention” podcast.

I also think it might be interesting to weigh in on the federal antitrust lawsuits against the major tech companies, since business ethics are fundamental to the problems presented by persuasive technology and human downgrading.

Thank you again for sharing your lectures! I will keep an eye on your MOOC; I hope to go through the full course!


Hi,

Thanks for that. Yes, I agree: these are all very important aspects, and the Center is doing a great job working on corporate responsibility, new tech adjustments, and regulation. I still think there’s one aspect missing: human evolution. It emerges as the lower-right quadrant here: Social Media Distancing: An Opportunity to Debug our Relationship with our Algorithms | by Martin Hilbert | Medium

So I decided to focus most of my active research attention there, i.e.:
Social Media Distancing 4: Human Innovation | by Martin Hilbert | The Startup | Medium

Let me know if anyone comes to mind to collaborate on these questions, Jeff! Thanks.


The syllabus looks great; I would take the course. The inequalities (x != y) in week 2 are just brilliant. I used to teach a course on the topic. Kudos!

This may be my European bias, but do you also talk about privacy and privacy by design?