Facebook is using Artificial Intelligence to remove EXTREME posts!

Facebook announced Thursday that it plans to use artificial intelligence to help remove inappropriate content from the social media platform.

CEO Mark Zuckerberg wrote in a post that the efforts will be directed at removing terrorist content, but suggested that other “controversial posts” could be taken down too.

Noting that human reporting does not always catch terrorist posts in a timely fashion, he explained, “That’s why we’re also building artificial intelligence that lets us find potential terrorist content and accounts faster than people can.”

The software uses “natural language understanding” and “image matching” to find content. “And when we identify pages, groups, posts or profiles that support terrorism, we use algorithms to find related material across our platform,” Zuckerberg wrote.
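Facebook has not published the details of that system, but image matching of this kind is commonly built on perceptual hashing: images that have already been removed are reduced to compact fingerprints, and new uploads whose fingerprints are nearly identical get flagged for review. The short Python sketch below illustrates the general idea with a simple “average hash”; the hashing scheme, threshold, and file names are illustrative assumptions, not Facebook’s actual implementation.

    # Illustrative sketch only: a simple perceptual "average hash" approach to
    # image matching. This is NOT Facebook's published system; it just shows how
    # an upload can be compared against images that were already removed.
    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to an 8x8 grayscale thumbnail, then record which pixels are
        # brighter than the average as a 64-bit fingerprint.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return "".join("1" if p > mean else "0" for p in pixels)

    def hamming_distance(h1, h2):
        # Number of bit positions at which two fingerprints differ.
        return sum(a != b for a, b in zip(h1, h2))

    # Fingerprints of previously removed images (hypothetical database).
    known_hashes = [average_hash("previously_removed.jpg")]

    def should_flag(upload_path, threshold=10):
        # Flag the upload for human review if it is within `threshold` bits of
        # any known fingerprint, i.e. a likely re-upload of removed content.
        h = average_hash(upload_path)
        return any(hamming_distance(h, known) <= threshold for known in known_hashes)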

“There’s an area of real debate about how much we want AI filtering posts on Facebook,” Zuckerberg conceded. “It’s a debate we have all the time and won’t be decided for years to come. But in the case of terrorism, I think there’s a strong argument that AI can help keep our community safe and so we have a responsibility to pursue it.”

The CEO also announced the launch of Facebook’s “Hard Questions” blog, which will tackle such issues as:

  • How do we make sure social media is good for democracy?
  • How aggressively should social media companies monitor and remove
    controversial posts and images from their platforms?
  • Who gets to decide what’s controversial, especially in a global community
    with a multitude of cultural norms?
  • Who gets to define what’s false news — and what’s simply controversial
    political speech?

In a post Zuckerberg linked to when announcing the blog, Facebook’s vice president for public policy and communications, Elliot Schrage, elaborated that it will be a place for the social media giant to explain its editorial decisions.

“As we proceed, we certainly don’t expect everyone to agree with all the choices we make. We don’t always agree internally,” Schrage wrote. “We’re also learning over time, and sometimes we get it wrong. But even when you’re skeptical of our choices, we hope these posts give a better sense of how we approach them — and how seriously we take them.”

In May 2016, during the presidential campaign, the tech blog Gizmodo broke a story, based on accounts from former Facebook news curators, that the platform regularly suppressed conservative news and injected liberal topics into its “trending” news section.

Zuckerberg strongly denied that Facebook engaged in this practice.

“We have found no evidence that this report is true. If we find anything against our principles, you have my commitment that we will take additional steps to address it,” he responded at the time.

To further reassure conservatives that Facebook plays fair as an online arbiter, the company invited various conservative personalities to its headquarters in Menlo Park, California, that same month.

Read the rest at: Facebook Artificial Intelligence
