When Kay Dean turned her keen detective’s eye on the reviews for a Toronto dental clinic, she quickly discovered something alarming. The reviewers behind the clinic’s glowing testimonials had also reviewed an array of obscure small businesses across North America and Europe. A handful of reviewers had recommended the same 14 companies, among them a dry cleaner in Florida and a locksmith in Texas. Some reviews had even been republished word for word on both Facebook and Google under different names.
Dean, a former federal criminal investigator, lays out her concerns about the dental practice in a video posted to her YouTube channel, Fake Review Watch. In it, she explains how she discovered that 195 of the clinic’s 202 positive reviews had been posted within a three-day span; before that, the clinic had accumulated only a handful of reviews over the years.
“Given what I’ve just shown you here,” Dean says in the video, “would you trust any reviews for this Toronto dental practice? I wouldn’t.”
The problem of fake positive reviews extends far beyond the businesses Dean flags on her YouTube channel.
As the COVID-19 pandemic continues to swell global e-commerce, fake reviews have become one of the biggest threats to online consumer trust. According to the World Economic Forum, fake reviews influenced $152 billion of global spending last year on low-quality products and services. The world of online reviews is so overrun with fakes that it’s become like the Wild West, Dean says. “There’s no repercussions,” she says. “There’s no penalties. Cheating is rewarded in the current environment.”
Dean started Fake Review Watch after her own bad experience five years ago, when she chose a psychiatrist based on Yelp and Google reviews. The negative experience that followed left her frustrated and upset at the manipulation of reviews. “We’re not just talking about five-star ratings,” she says. “We’re talking fake stories about people feigning various mental illnesses and claiming to be helped by the practice.”
Since then, she’s been devoted to exposing the various ways that businesses try to dupe consumers by manipulating reviews on their own or employing the services of shady organizations that sell fake reviews en masse (sometimes referred to as paid-review farms).
Dean, in her trademark glasses and dangly earrings, calmly explains to viewers how she determines whether a company is falsifying reviews. “My aim is not necessarily to dime on these businesses, but you have to show examples to make it real so people can see what’s going on,” she says. “I’m just trying to get the word out as a consumer advocate.”
Kay Dean, seen here in a recent video on her “Fake Review Watch” YouTube channel, is a former federal investigator.
Dean’s ongoing fight to publicize the growing threat of fake reviews coincides with renewed government efforts to tackle the issue. In April, the U.K. announced plans to combat fake reviews, and in May the U.S. Federal Trade Commission followed suit, proposing updated guidelines intended to better reflect today’s digital economy. Meanwhile, major review platforms like Amazon, Google, Meta, TripAdvisor, Trustpilot, and Yelp say they’re doing their best to keep deceitful reviews off their sites; on June 20, Meta announced a new policy under which it would move more aggressively against fake reviews in the U.S. These are steps in the right direction. But experts say the battle against fake reviews can’t be won without turning up the heat to full blast.
A growing reliance on reviews
Legions of shoppers forced online during the pandemic came to lean on reviews more heavily, says Adrian Palmer, head of the department of marketing and reputation at Henley Business School, University of Reading. “Many consumers had to learn what indicators they should use to make it easier to buy online, rather than going to a shop,” he says. “Instead of picking up a product, touching it, feeling it, smelling it, they were confined to a two-dimensional screen and had to use all the tools they could to make a decision. Reviews were one of those.”
Studies show just how much sway reviews hold over consumers. BrightLocal, an SEO platform, found that 77% of consumers read reviews before making purchases, and that 75% of those who read positive reviews are more likely to feel good about a business. Fake reviews corrupt the market by restricting access to “free, fair, and proper information,” says Palmer. “If you’ve got a market where information that you think is coming from a buyer is actually coming from a seller, the market doesn’t work.”
Paid-review farms have emerged around the globe, offering their services to companies looking to boost their ratings on review sites. “A lot of times businesses unwittingly will hire these firms because they try to position themselves as online marketing and reputation firms,” says Brian Hoyt, TripAdvisor’s head of global communications and industry affairs.
Organized fake-review networks, based largely offshore, often use social media sites such as Reddit, Craigslist, and Facebook to market and sell their deceptive services to businesses. Dean says there’s a “tremendous amount” of review fraud being facilitated on social media.
“Groups that solicit or encourage fake reviews violate our policies and are investigated and enforced upon appropriately,” a Meta spokesperson says.
Other businesses incentivize customer reviews or pressure their employees to post fake ones. Dean says review fraud can be found in every field. “I’ve personally seen doctors, lawyers, dentists, contractors, wedding DJs, piano teachers, lactation consultants,” she says. “And I’m a single investigator using no automated tools, just my eyeballs and spreadsheets.”
Fake reviews can be particularly dangerous when consumer safety is at stake, says Beibei Li, an associate professor of IT and management at Carnegie Mellon University. “If you go to a doctor with a lot of fake reviews and you don’t recognize that, it could affect your health,” she says. “That’s definitely a higher risk scenario compared to going to a restaurant for dinner where, worst case, you just get a bad meal.”
It’s an issue the FTC considers to be one of the principal problems with online marketing. “There are thousands and thousands of fake reviews being generated almost on a daily basis,” says Richard Cleland, assistant director of the FTC’s division of advertising practices. “This really infects the entire commercial space of the internet.”
How review platforms are fighting back
In February 2021, U.K. consumer advocacy group Which? published a report on fake Amazon Marketplace reviews being sold in bulk online. It found that companies set up for the sole purpose of flooding Amazon sellers’ product listings with phony praise were fueling a huge global industry of coordinated fake reviews. German company AMZTigers, for example, was advertising bulk packages of up to 1,000 reviews for prices as high as $10,000. Amazon moved to clamp down on offending sellers in the second half of that year, suspending several high-profile sellers that used banned methods to obtain reviews.
An Amazon spokesperson says that the company has “clear policies” that prohibit review abuse and that it suspends, bans, and takes action against those who violate those policies. “Amazon receives more than 30 million reviews each week, and uses a combination of machine learning technology and skilled investigators to analyze each review before publication,” the spokesperson says. “Each review provides us with dozens of pieces of information to detect suspicious activity. We can cross-reference this information to identify fraudulent patterns and potential abuse.”
TripAdvisor, a global review and booking platform, became the first to issue a transparency report, in 2019, detailing its efforts to fight fraud. In its most recent report, released in 2021, TripAdvisor said that of the roughly 26 million reviews submitted to the site, 943,205, or about 3.6%, were determined to be fraudulent. TripAdvisor said it blocked 67.1% of those fraudulent submissions before they ever reached the site.
Hoyt says TripAdvisor moderates user-generated content by pairing the expertise of its investigators with fraud-detection technology. The technology maps hundreds of individual pieces of online information to track things like where a review originated and whether the reviewer is connected to others. To meet TripAdvisor’s criteria, reviews must be recent, relevant to travel experiences, non-commercial, first-hand, respectful, and unbiased. “It’s a race towards perfection, because no one in this space will ever truly be perfect. But we’re constantly trying to stay one step ahead of the fraudsters,” he says.
TripAdvisor’s transparency report states that it has shut down the activity of over 120 different paid-review farms around the world in recent years. “We’ve been really successful in chasing down these companies that pop up like Whack-a-Mole, advertising, ‘Hey, pay us 20 bucks a month or whatever and we’ll give you x, y, or z number of reviews to boost your business,’” Hoyt says.
Trustpilot, an online review platform connecting consumers and businesses, said in its transparency report that it removed about 2.7 million, or 5.8%, of the nearly 50 million reviews on its site. Anoop Joshi, Trustpilot’s vice president of legal and platform integrity, says the platform’s automated systems analyze factors like text content, IP addresses, VPN use, email addresses, and patterns of behavior to determine the authenticity of reviews. “If we see a review that’s being submitted for a business that’s based in the U.K. and then 10 minutes later, we see one [from the same reviewer] for a business based in the U.S., that’s a signal that there may be some untoward behavior there,” he says.
Yelp’s automated software recommends the reviews it considers most trustworthy. Yelp says that about 4.3 million of the 19.6 million user reviews submitted to it in 2021 were not recommended by the software. Yelp still displays not-recommended reviews on its business pages.
“Yelp invests in both technology and human moderation to mitigate misinformation on our platform,” a Yelp spokesperson says. “Our approach is driven by our automated recommendation software, reporting by Yelp’s community of business owners and users, human moderation, and Consumer Alerts.”
The Consumer Alerts program places warnings on the pages of businesses that have been flagged for suspicious review behavior to inform consumers about the violation of Yelp’s policies.
Google said in a blog post that it placed protections on more than 100,000 businesses in 2021 after detecting suspicious reviews and attempted abuse, using a combination of machine learning and human operators. Google says its machine-learning models are trained to detect suspicious patterns in review activity, flag specific keywords and phrases, and examine a reviewer’s past content.
“Our policies clearly state reviews must be based on real experiences, and when we find policy violations, we take swift action ranging from content removal to account suspension and even litigation,” Google says. “We’ll continue to work closely with government officials to share more on how our technology and review teams work to help users find relevant and useful information on Google.”
What’s next in the fake review fight
Despite these measures, fake reviews have proven to be a particularly thorny issue—especially as rising inflation contributes to an increasingly fraught economic environment.
Dean contends her research shows that major review platforms often aren’t adequately looking out for consumers and honest businesses. They need to be more proactive in alerting consumers when fake reviews are removed. “Showing the public evidence of widespread fraud on their platforms is not in their business interest,” she says.
Legislators and regulators are now stepping in, seeking both to bring businesses in line and to push review platforms to step up their enforcement efforts.
In the U.K., government reforms put on the table in April would make it “clearly illegal” for businesses to pay someone to write or host a fake review for a product or service, while sites hosting consumer reviews would have to take reasonable steps to check they are genuine. Britain’s competition watchdog, the Competition and Markets Authority (CMA), would also have the power to fine businesses up to 10% of their global turnover for misleading consumers.
In May, the FTC announced that it was proposing changes to its Endorsement Guides to tighten rules around reviews and better reflect what’s going on in the market today. “The guidelines haven’t been updated since 2009,” Cleland says. “A lot has happened in the last 10 years. With the rise of influencer marketing and the dominance of digital retail sales, we felt it was time to address current marketing trends.”
The announcement came months after the FTC resolved a complaint in January against Fashion Nova, which had been accused of preventing negative reviews from appearing online. It was the FTC’s first case against a company over alleged review suppression.
The new guidelines will serve to warn businesses against breaking the rules. Meanwhile, some advocates are calling for reform of Section 230, which protects digital platforms against lawsuits over user-generated content. Dean believes such legislative change would push review platforms to police themselves more rigorously. “These platforms are getting a free pass under the current legislation,” she says.
Upping the ante
Given the scope of the criminal underworld of review fraud, scattershot countermeasures are not enough.
Cleland says the nature of the fake-review industry makes enforcement difficult. “The principal challenge is that much of this review brokering activity is being generated outside of the United States,” he says of paid-review farms. “Many of the targets that we’ve identified are either located somewhere in Europe or Asia. And those are difficult for us to address.”
He adds that it can be tough to locate even smaller scale fake review brokers because “you don’t need anything but a computer and a coffee shop.”
Li believes one way to solve this problem is to place more responsibility on review platforms. “Right now, there’s a very low cost of being caught,” she says. From a legislative perspective, she sees two levels of discouragement: one is to press the platforms directly; the other is to force platforms to adopt information-transparency policies that make life difficult for inauthentic reviewers. “Like, ‘If you do this, we’re going to publicly shame you to consumers.’”
Dean agrees that review platforms need to have “more skin in the game” to stop fake reviews from taking over. “If I can singlehandedly uncover the shocking amount of fraud I have using just publicly available information,” she says, “they should certainly be doing more.”