Facebook Russian ads influencing 2016 election

The thing we should be thinking about more is the advertising. Facebook has an obligation, just like any other business, not to proliferate advertising that hurts its customers. All advertising should be vetted to some degree for integrity. People rely on the platform they use being vetted so that they aren't fed nonsense. For example, there is likely a filter that checks any links in an ad to make sure they aren't sending users to websites with malware and viruses. Or at least I hope there is. If they weren't doing so and users were getting viruses, those users would be incredibly upset. Well, the same holds true for the advertising content itself: it needs to be vetted in some way.
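
To make the link-vetting idea concrete, here is a minimal sketch of what such a filter might look like. It assumes a hypothetical, hard-coded blocklist of bad domains; I have no idea what Facebook's actual system is, and a real platform would presumably use a constantly updated threat feed instead.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to serve malware.
# A real ad platform would use a continuously updated threat feed instead.
MALICIOUS_DOMAINS = {"malware-example.test", "phishing-example.test"}

def link_is_safe(url: str) -> bool:
    """Return True if the link's domain is not on the blocklist."""
    domain = urlparse(url).netloc.lower()
    return domain not in MALICIOUS_DOMAINS

def vet_ad_links(ad_links: list) -> bool:
    """Approve the ad only if every link it contains passes the check."""
    return all(link_is_safe(link) for link in ad_links)

print(vet_ad_links(["https://example.com/store"]))            # True
print(vet_ad_links(["http://malware-example.test/payload"]))  # False
```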

The problem Facebook has is that it is essentially its own internet. They have strategically created a platform that is much like the internet itself, only controlled by Facebook. In doing so, they have the control, and they are going to be held responsible for whatever is on that platform. And they should be. They are not the internet; they exist to be social and to connect real people. They need to control who is real and what is social versus what is just spam. They allow things like pages and groups, which opens them up to political content and paid propaganda. If they want a platform like that, they either need to allow their community to police itself, or they need to police the community. There is an inherent trust from users that they are being delivered vetted information, or that if they are not, there is something set up to allow that content to be taken down.

As for fake news, Facebook does have an obligation to control any content that is blatantly false. But from a legal standpoint, Facebook should not be held liable or responsible for such things, because they are not the police and they cannot be the police; that would put them above the law. Instead, we should be protesting Facebook by any means necessary to pressure them into creating better policies and into removing blatantly fake content that does not need to be reviewed by the police. Make a policy for the platform, and anyone who violates it is subject to removal. And anything where they are not quite sure of the legality, they need to hand over to the police.

This is indeed a slippery slope: where does their policing overlap with the actual police? There really is no clean line to be drawn, and there is no way around that. Either they help with their best judgement, or they have the public step in and act as community police to help them. Much like on Craigslist, users can report something they find suspect (Facebook may already have a report button that serves this purpose).

The only true policing that will ever work is when people speak up when they see something, and even have discussions about it. The people need to help police themselves rather than relying entirely on the controlling entity, and the controlling entity needs to build systems that allow those voices to be heard; otherwise people simply give up and leave everything to the controlling entity.

Now, who am I to be talking about a subject like this and giving my opinion? Well, as a matter of fact, I helped to “police” an online community of about 6,000 members, and we had to deal with trolls, attacks, fake users, and false information. We developed a system among the moderators: if something was reported by users as harmful or false, the users submitted those claims to us, we distributed the report to everyone in our policing group, and then we would each individually decide whether it was something to be taken down or discussed further with the user who posted it. If we all independently agreed, we suspended the post and contacted the user. If we did not all agree, we talked in a discussion section about why we thought it should or shouldn't be taken down, which left us with a written log of our decision. We usually came to agreement on what to take down, though it wasn't always easy. A lot of the time it was very obvious; a few times it took a lot of discussion. For those, we often ended up contacting the user to see if they could change the post slightly to make it acceptable. That usually worked well: most of those contacts went smoothly, and the user agreed to change their wording. We were nice about it and gave them good reasons why we thought it was inappropriate, and they often appreciated the level of thought behind it. On very rare occasions we did have to ban members, but we were okay with that because they were very obviously causing issues.
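
To give a rough idea of how that process worked, here is a small sketch of the unanimous-review workflow in Python. The names and data structures are hypothetical, not the actual tooling we used; it just mirrors the steps described above.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    post_id: int
    reason: str
    votes: dict = field(default_factory=dict)      # moderator -> "remove" or "keep"
    discussion: list = field(default_factory=list)  # written log of any debate

def record_vote(report: Report, moderator: str, decision: str) -> None:
    # Each moderator decides independently before the group compares notes.
    report.votes[moderator] = decision

def resolve(report: Report, moderators: list) -> str:
    if len(report.votes) < len(moderators):
        return "waiting for votes"
    decisions = set(report.votes.values())
    if decisions == {"remove"}:
        return "suspend the post and contact the user"
    if decisions == {"keep"}:
        return "leave the post up"
    # Split vote: move to discussion so there is a written record of the reasoning.
    report.discussion.append("Split vote on post %d: discuss before acting." % report.post_id)
    return "open a discussion"

mods = ["alice", "bob", "carol"]
report = Report(post_id=42, reason="reported as false information")
for mod in mods:
    record_vote(report, mod, "remove")
print(resolve(report, mods))  # -> "suspend the post and contact the user"
```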

It's likely not so easy for an entity like Facebook, especially with such a huge number of users and so much content. That is why it is probably in their best interest to create systems that allow people to police themselves in some capacity: have organizers within groups vet content first, and escalate any unresolved issues to Facebook employees.
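
Here is a minimal sketch of what that escalation flow could look like, assuming hypothetical group moderators who each return a verdict and a hand-off function for anything they can't agree on:

```python
def handle_report(report: dict, group_moderators: list, escalate) -> str:
    """First pass by the group's own organizers; ambiguous cases go to platform staff."""
    decisions = [moderate(report) for moderate in group_moderators]
    if all(d == "remove" for d in decisions):
        return "removed by community moderators"
    if all(d == "keep" for d in decisions):
        return "kept by community moderators"
    return escalate(report)  # no consensus, so hand it off

# Hypothetical moderators and escalation path, purely for illustration.
strict_mod = lambda r: "remove" if "spam" in r["text"] else "keep"
lenient_mod = lambda r: "keep"
to_staff = lambda r: "escalated to Facebook staff for review"

print(handle_report({"text": "buy spam now"}, [strict_mod, lenient_mod], to_staff))
# -> "escalated to Facebook staff for review"
```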

In the end, this is just something that comes with the territory. Facebook is not doing a good job of policing itself or the people on its platform; they need to step up and do something about it. They control the platform, the content, and the users to some degree, so they need to take responsibility for that control.