I am a law student just beginning to study the effect of AI on society. Tristan Harris and Yuval Noah Harari brought me here.
Section 230 is widely attacked from across the political spectrum: some say it allows too much content moderation (infringing free speech), while others say the opposite, that it allows too little (letting companies escape their social responsibilities).
But no one I have found is focused on the common problem caused by Section 230: courts have interpreted it to mean that content algorithms are neutral, “passive conduits”, rather than the society-defining architecture they actually are. Because of this statute, courts dismiss lawsuits that seek to hold internet companies responsible for user-generated content; as a result, their algorithms stay in the dark. We don’t know how much moderation is being done or what criteria go into content algorithms. This seems to me the key legal obstacle to making real progress here.
I’m wondering what can be done about this, and I like the comparison to the impact litigation of the past: to make progress on civil rights, Thurgood Marshall designed a strategy of fighting particular legal battles over the course of several years, in hopes of pushing the courts toward more equality-minded law. I think CHT would be a perfect organization to lead something like this; they seem to be the ACLU of the 21st century.
Curious what everyone’s thoughts are on this. I’ve poked around the site and can’t find anyone focused on this issue, and very few lawyers or scholars are looking at the power of algorithms to shape society.
Cheers!