FTC finalizes rule banning fake reviews, including those made with AI
The U.S. Federal Trade Commission (FTC) announced on Wednesday a final rule that targets several types of fake reviews and prohibits marketers from using deceptive practices, such as AI-generated reviews, suppressing honest negative reviews, and compensating third parties for positive reviews.

The decision was the result of a 5-to-0 vote. Enforcement of the new rule will begin 60 days after it is published in the Federal Register, the official government publication.

The FTC's new rule has been a long time coming. It aims to improve the often untrustworthy online review system and, hopefully, make it easier for people to find reliable reviews. Merchants, especially on Amazon, have relied on fake and paid reviews for far too long. Amazon said it blocked more than 200 million suspected fake reviews in 2020. In 2021, Yelp reported more than 950 "suspicious groups, posts, or individuals" engaging in "deceptive review practices" on online platforms. Now, the rise of generative AI has made it easier than ever for bad actors to write fake reviews.

The FTC initially proposed the rule on June 30, 2023, following an advance notice of proposed rulemaking issued in November 2022.

You can read the finalized rule here, but we've also included a summary of it below:

According to the final rule, the maximum civil penalty for fake reviews is $51,744 per violation. However, the courts could impose lower penalties depending on the specific case.

"Ultimately, courts will also decide how to calculate the number of violations in a given case," the Commission wrote.