Sites aren't required to do anything to claim Section 230 protection; if the posts in question are from a user and not someone representing the site, they're covered. Catching some calls for violence and missing others is a function of the algorithms used to find posts inciting violence, not a matter of political perspective. Manually moderating every single tweet and post would be an impossible task and would put an end to live content.

Yet "in good faith" has proven too open to interpretation. It's not an easy problem to solve, but it doesn't seem insurmountable. One fix would be to tighten the requirements a site must meet before it can claim Section 230 protection: perhaps require a clear and unambiguous set of community standards, along with transparency and uniformity in how they're applied. That seems pretty straightforward to me.
If this were the case, Twitter would immediately lose its Section 230 protection for not removing things like calls for the murder of Jews living on the West Bank while banning others for similar calls for violence. Facebook would be in similar straits for similar reasons. I suspect media companies would secretly love such requirements, because they would free them of the millstone of endlessly having to appease the woke. They would have the freedom to simply point at the law and say, "This is clearly the law; we can't break it. Sorry."
But you're correct in saying media companies want some sort of regulation; they don't want the headache of trying to decide which speech is OK and which is not.