How reviews on Google Maps work


Moderating reviews with the help of machine learning

As soon as someone posts a review, we send it to our moderation system to make sure the review doesn’t violate any of our policies. You can think of our moderation system as a security guard that stops unauthorized people from getting into a building — but instead, our team is stopping bad content from being posted on Google.

Given the volume of reviews we regularly receive, we need both the nuanced understanding that humans offer and the scale that machines provide to help us moderate contributed content. Because each has different strengths, we continue to invest heavily in both.

Machines are our first line of defense because they’re good at identifying patterns. These patterns often let them determine immediately whether content is legitimate, and the vast majority of fake and fraudulent content is removed before anyone ever sees it.

Our machines look at reviews from multiple angles, such as:

  • The content of the review: Does it contain offensive or off-topic content?
  • The account that left the review: Does the Google account have any history of suspicious behavior?
  • The place itself: Has there been uncharacteristic activity — such as an abundance of reviews over a short period of time? Has it recently gotten attention in the news or on social media that would motivate people to leave fraudulent reviews?
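The multi-angle check described above can be sketched in a few lines. Everything in this snippet is hypothetical — the field names, thresholds, and the stand-in blocklist are illustrative inventions, not Google's actual signals or system:

```python
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    account_flag_count: int     # hypothetical: prior suspicious-behavior signals on the account
    place_recent_rate: float    # hypothetical: reviews/day at the place recently
    place_baseline_rate: float  # hypothetical: the place's historical reviews/day

# Stand-in for a trained content classifier; real systems do not use word lists.
OFFENSIVE_TERMS = {"spamword"}

def moderation_signals(review: Review) -> dict:
    """Evaluate a review from the three angles above: content, account, place."""
    words = review.text.lower().split()
    return {
        "content_flagged": any(w in OFFENSIVE_TERMS for w in words),
        "account_suspicious": review.account_flag_count > 2,
        "place_activity_spike":
            review.place_recent_rate > 5 * review.place_baseline_rate,
    }

def hold_for_closer_look(review: Review) -> bool:
    # Any firing signal routes the review for further checks instead of posting.
    return any(moderation_signals(review).values())
```

A clean review (`Review("great tacos", 0, 1.0, 1.0)`) passes every check and could post immediately, while a review from a flagged account during an activity spike would be held.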

Training a machine on the difference between acceptable and policy-violating content is a delicate balance. For example, sometimes the word “gay” is used as a derogatory term, and that’s not something we tolerate in Google reviews. But if we teach our machine learning models that it’s only used in hate speech, we might erroneously remove reviews that promote a gay business owner or an LGBTQ+ safe space. Our human operators regularly run quality tests and complete additional training to remove bias from the machine learning models. By thoroughly training our models on all the ways certain words or phrases are used, we improve our ability to catch policy-violating content and reduce the chance of inadvertently blocking legitimate reviews from going live.
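The failure mode described above is easy to demonstrate. This toy filter is purely illustrative — real moderation relies on trained language models that weigh context, not keyword blocklists like this one:

```python
def naive_keyword_filter(text: str, blocklist: set[str]) -> bool:
    """Return True if a keyword blocklist would remove this review."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return any(w in blocklist for w in words)

# A blocklist cannot tell a slur from a legitimate, descriptive use:
legitimate = "A welcoming cafe and a safe space for gay customers."
print(naive_keyword_filter(legitimate, {"gay"}))  # True: a false positive
```

The blocklist flags a review that violates no policy, which is exactly why the models are trained on all the ways a word is used rather than on the word alone.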

If our systems detect no policy violations, the review can post within a matter of seconds. But our job doesn’t stop once a review goes live. Our systems continue to analyze the contributed content and watch for questionable patterns. These patterns can be anything from a group of people leaving reviews on the same cluster of Business Profiles to a business or place receiving an unusually high number of 1- or 5-star reviews over a short period of time.
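One of those post-publication patterns — a burst of extreme ratings in a short window — can be sketched as a simple check. The thresholds and function name here are illustrative assumptions, not the actual detection logic:

```python
from collections import Counter

def looks_questionable(recent_ratings: list[int],
                       baseline_per_week: float,
                       spike_factor: float = 4.0,     # assumed threshold
                       polarized_share: float = 0.9) -> bool:
    """Flag a place whose recent reviews are far above its baseline volume
    and heavily polarized toward 1-star and 5-star ratings."""
    if len(recent_ratings) < spike_factor * baseline_per_week:
        return False  # volume is within normal range
    counts = Counter(recent_ratings)
    extreme = counts[1] + counts[5]  # 1-star and 5-star counts
    return extreme / len(recent_ratings) >= polarized_share

# A sudden burst of eighteen 5-star ratings against a 2-per-week baseline:
print(looks_questionable([5] * 18 + [4, 3], baseline_per_week=2))  # True
```

A real system would combine many such signals before acting, but the shape is the same: compare recent activity against the place's own history.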


Google

We’re announcing our first partnerships to scale biochar for CO2 removal.


Today Google is announcing two long-term purchase agreements to help scale biochar as a carbon removal solution. We’re partnering with Varaha and Charm to purchase 100,0…


Google

Google is supporting new solar projects in Oklahoma.


Google has entered into long-term agreements with Leeward Renewable Energy (LRE) to support over 700 megawatts (MW) of solar projects in Oklahoma. The projects are strat…


Google

An open call for the next Google.org Accelerator: Generative AI


Apply now for an opportunity to receive funding and to participate in the Google.org Accelerator: Generative AI, a $30 million global open call.



Copyright © 2021 Today's Digital.