Consumers aren’t the only victims of fake reviews. Platforms can suffer too.
While bad actors who write fake reviews—and businesses that buy fake reviews—are the ones at fault, platforms are held more accountable than you may think.
81% of consumers are concerned about fake reviews. And when it comes to who should take action against review fraud, both emerging legislation and consumer expectations point in the same direction: many believe online platforms have a responsibility to step in.
Fake reviews are a dangerous virus for online platforms. The longer you leave them, the more damage they can cause to your reputation, your user experience and your bottom line. Here are 5 reasons why you should urgently take action against fake reviews.
Laws on fake reviews are tightening around the world, with hefty fines for non-compliance. The EU, UK and USA have now all taken decisive action to ban fake reviews.
In the US, the Federal Trade Commission (FTC) has announced a final rule prohibiting the purchase, creation and dissemination of fake reviews, fake celebrity testimonials, and AI-generated fake reviews. The rule carries civil penalties of up to $51,744 per violation. In the UK, businesses could be fined up to 10% of their global turnover (plus additional penalties) for non-compliance under the DMCC Act. And in the EU, businesses are obliged under the Omnibus Directive to provide total transparency on where and how they get reviews. Some of this legislation puts more responsibility on platforms and marketplaces.
→ Are fake reviews really illegal? Read more about the latest legislation.
There’s no sign of AI slowing down anytime soon. Unfortunately, in the wrong hands, this technology only makes it easier for criminals to mobilize scams and fabricate reviews.
Platforms like Tripadvisor have noticed the impact of this. In an interview with The Guardian, a representative commented that, for now, the platform has banned all AI-generated reviews, removing over 200,000 of them in the space of one year. And during an investigation, CNBC found several product reviews on Amazon that started with the phrase “As an AI language model”—a common opener in responses from ChatGPT.
As many as 65% of internet users flag content as suspicious because it is poorly written. But with the help of AI, bad actors can generate thousands of word-perfect reviews in a fraction of the time, eliminating this once-obvious telltale sign.
To keep up with the relentless onslaught of AI-generated reviews, platforms need to integrate AI with their detection systems (and fast).
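To see why simple filters aren't enough, here's a minimal, purely illustrative Python sketch (not any platform's actual system) that screens review text for the kind of giveaway boilerplate CNBC found. It only catches the clumsiest cases; anything more subtle needs behavioral and AI-driven detection.

```python
import re

# Illustrative only: a few boilerplate phrases that have appeared verbatim in
# AI-generated reviews, such as the "As an AI language model" opener CNBC
# spotted on Amazon. Real detection needs far more than string matching.
AI_BOILERPLATE_PATTERNS = [
    r"\bas an ai language model\b",
    r"\bas a large language model\b",
    r"\bi (?:do not|don't) have personal (?:experience|opinions)\b",
]

def flags_obvious_ai_boilerplate(review_text: str) -> bool:
    """Return True if the review contains a telltale AI boilerplate phrase."""
    text = review_text.lower()
    return any(re.search(pattern, text) for pattern in AI_BOILERPLATE_PATTERNS)

# Example: this would be caught, but a lightly edited version would not.
print(flags_obvious_ai_boilerplate(
    "As an AI language model, I cannot test this blender, but it looks great."
))  # True
```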
You care about your users. You want them to have the best experience possible. Otherwise, you wouldn’t have built a platform in the first place.
As many as 90% of people read online reviews before making a purchase. But unfortunately, research has shown that 30%–40% of those reviews are not genuine.
Bad actors and unscrupulous businesses exploit your platform to orchestrate their criminal activity and mislead your users. The sooner you act, the more harm you can prevent.
When bad actors post fake reviews on your platform, they not only mislead your users, they also cause damage to your reputation.
Take Facebook, for example. A study found that the number of users reading reviews on Facebook had dropped by 8% in the space of two years. This was the only major platform to see a downturn. It’s suspected this is due to Facebook’s declining trust factor—93% of respondents said they were “a little suspicious” of fake reviews on Facebook.
Once your platform gains a reputation for fake reviews, that reputation can be incredibly challenging to repair. In the meantime, your competitors could implement new measures to combat fake reviews and position themselves as a more trustworthy, authentic alternative. All of this can shrink your user base, stifle your revenue and close off opportunities for future growth and investment.
“Minimum Viable Product” is a common concept in the tech space. But there’s a newer, more preventative approach that can save your platform from the negative impact of fake reviews.
It’s called “Safety by Design.”
Safety by Design is the process of implementing safety measures at the build stage, instead of responding to safety breaches and threats as and when they happen. If you’re in the early stages of building your platform, you have this very fortunate advantage.
But is it too late for legacy platforms to apply these “Safety by Design” principles? Of course not! Even if you’ve dealt with fake reviews in the past, or you’re dealing with them right now, you don’t have to wait for the next big scam or the latest piece of legislation to spur you into action. And you don’t have to hurriedly build an interim solution that doesn’t really solve the problem. When you act now, you can take your time and do it right.
Using a combination of behavioral analytics, AI and machine learning, Pasabi helps you detect fake reviews on your platform. We look beyond the review text, focusing on the behavior of users. Our software analyzes your data against our repository of known bad actors created from multiple data sources. Our cluster technology identifies suspicious patterns in your data, so you can take action against the biggest risks before they become a problem.
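To make the behavioral angle concrete, here is a short, hypothetical Python sketch. The per-reviewer features, thresholds and use of scikit-learn’s DBSCAN are assumptions chosen purely for illustration; they do not describe Pasabi’s software, which works on far richer signals.

```python
# A minimal sketch of behavior-based clustering, assuming hypothetical
# per-reviewer features (reviews per day, share of 5-star ratings, account
# age in days). Illustrates grouping accounts by behavior rather than
# judging review text alone.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Each row: [reviews_per_day, share_of_5_star_ratings, account_age_days]
reviewer_features = np.array([
    [0.2, 0.60, 900],   # typical user
    [0.1, 0.40, 1500],  # typical user
    [12.0, 1.00, 3],    # burst of 5-star reviews from a brand-new account
    [11.5, 1.00, 2],    # near-identical behavior: likely the same campaign
    [13.0, 0.98, 4],
])

# Scale features so no single dimension dominates the distance metric.
scaled = StandardScaler().fit_transform(reviewer_features)

# DBSCAN groups accounts whose behavior is unusually similar; tight clusters
# of high-velocity, all-5-star, brand-new accounts are worth a closer look.
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(scaled)
print(labels)  # the last three accounts cluster together; typical users are labelled noise (-1)
```

The takeaway: even word-perfect AI-generated text can’t disguise a tight cluster of brand-new accounts posting a burst of 5-star reviews.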
→ Learn more about our fake review detection solution to quickly protect your platform from fraudulent reviews and maintain the integrity of your reputation.