Can you trust any Yelp review when some restaurants create fake online listings as fronts to deflect their own bad reviews, or pay for good ones?
And as organizations like the American Council on Science and Health quickly learned, when a cabal of anti-science groups like SourceWatch, US Right To Know, Natural News, and Joe Mercola teams up with Mother Jones to undermine your work, the public will understandably look only skin deep and never realize the negative press is being manufactured.
Those smear tactics may be harder to pull off in the future. Computer scientists in Australia have developed software that can detect false feedback and perhaps ensure the integrity of e-commerce trust management systems. Soon Keow Chong and Jemal Abawajy of the Parallel and Distributed Computing Lab at Deakin University show that feedback proffered by trading partners is fallible. There is always the potential for feedback to be manipulated strategically, damaging a site's reputation on a small scale; in the worst case, a site might undergo a "rating attack" that could seriously damage brand and company image.
The team developed an algorithm that identifies and blocks falsified feedback before it reaches a site's trust management system, making the system more robust against rating-manipulation attacks. The algorithm can detect both when an established, credible user who has built up trust on a system suddenly begins cheating and when a flood of new users pushes false feedback onto the site.
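The paper itself doesn't include source code, but the first case, a trusted account that suddenly "goes bad," can be illustrated with a minimal sketch. Everything below (the function name, the tolerance values, the data) is an assumption for illustration, not the authors' implementation:

```python
from statistics import mean

def sudden_deviation(history, recent, tolerance=1.0, trigger=2.0):
    """Flag an established rater whose recent ratings diverge sharply
    from the majority ratings they previously tracked.

    history, recent: lists of (user_rating, majority_rating) pairs.
    tolerance: average disagreement we accept in the established period.
    trigger: average recent disagreement that raises a flag.
    All thresholds here are illustrative, not from the paper.
    """
    past_error = mean(abs(u - m) for u, m in history)
    recent_error = mean(abs(u - m) for u, m in recent)
    # A long credible history followed by a sharp divergence suggests
    # an account that has started cheating or been taken over.
    return past_error <= tolerance and recent_error >= trigger

# A rater who agreed with the consensus for months, then starts
# dumping 1-star ratings against a 4-5 star majority:
history = [(4, 4), (5, 5), (3, 4), (5, 4)]
recent = [(1, 5), (1, 4), (1, 5)]
print(sudden_deviation(history, recent))  # True -> flag for review
```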
The team explains that the feedback verification scheme uses a clustering algorithm to group similar ratings together and define the majority rating. The trust value of each rater is based on past behavior and the frequency of rating submissions. To determine the quality of a rating, the team uses a trust threshold, which designates the minimum value required to establish a trust relationship. All ratings that fall within the majority cluster are then combined with the rater's trust value, the transaction frequency, and the transaction value to determine the credibility of the ratings.
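The paper's exact clustering method and threshold values aren't reproduced here, but the described flow, cluster the ratings, take the largest cluster as the majority, and gate raters on a trust threshold, might look something like this simple one-dimensional sketch (all names and numbers are assumptions):

```python
def majority_cluster(ratings, width=1.0):
    """Group numeric ratings into clusters of similar values and return
    the largest (majority) cluster. A simple single-pass 1-D scheme;
    the paper's actual clustering algorithm may differ.
    """
    clusters = []  # each cluster is a list of nearby ratings
    for r in sorted(ratings):
        if clusters and r - clusters[-1][-1] <= width:
            clusters[-1].append(r)  # close enough: extend last cluster
        else:
            clusters.append([r])    # gap too large: start a new cluster
    return max(clusters, key=len)

def trusted(rater_trust, rating, majority, trust_threshold=0.5):
    """Accept a rating if the rater's trust value (built from past
    behavior and submission frequency) clears the threshold, or if the
    rating already sits inside the majority cluster."""
    return rater_trust >= trust_threshold or rating in majority

ratings = [4.5, 4.0, 4.5, 5.0, 1.0, 1.0, 4.0]
majority = majority_cluster(ratings)
print(majority)                     # [4.0, 4.0, 4.5, 4.5, 5.0]
print(trusted(0.2, 1.0, majority))  # False -> outlier from a low-trust rater
```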
The algorithm then weights each rating's credibility based on several factors: rating frequency, total submissions, low-value versus high-value transactions, total feedback on a given product, and other parameters. Any feedback that falls below a set credibility threshold is flagged as false and excluded from the trust management system, and the rejection also counts against the user's individual trust value.
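A hedged sketch of that weighting step, using an assumed linear combination with made-up weights and normalization caps, since the paper's actual formula and parameter values are not given here:

```python
def credibility(rating, majority, rater_trust, txn_frequency, txn_value,
                weights=(0.4, 0.3, 0.2, 0.1)):
    """Fold the factors described above into one credibility score.
    The linear combination, the weights, and the caps are illustrative
    assumptions, not the authors' formula.
    """
    w_cluster, w_trust, w_freq, w_value = weights
    in_majority = 1.0 if rating in majority else 0.0
    return (w_cluster * in_majority
            + w_trust * rater_trust
            + w_freq * min(txn_frequency / 10.0, 1.0)   # cap at 10 transactions
            + w_value * min(txn_value / 100.0, 1.0))    # cap at a $100 transaction

def accept(score, threshold=0.5):
    """Feedback below the credibility threshold is treated as false and
    kept out of the trust management system; the rejection would also
    count against the rater's individual trust value."""
    return score >= threshold

# An outlier rating from a low-trust rater on a trivial transaction:
majority = [4.0, 4.0, 4.5, 4.5, 5.0]
score = credibility(1.0, majority, rater_trust=0.2,
                    txn_frequency=1, txn_value=5.0)
print(round(score, 3), accept(score))  # ~0.085 False -> rejected
```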
It isn't perfect, and groups that have mastered the dark underbelly of the Internet, like SourceWatch, will always find ways around systems built on ethics, but it is a good start.