Yesterday, I pointed out suspicious behavior at Good-sounds.org and questioned the need for “troll detection.” Thinking about the problem further, I believe good-sounds.org has it right, and its mechanism could be exploited in other contexts. Each sound uploaded to good-sounds.org is analysed by an objective algorithm that computes five features correlating with “good” sound production, and is also “upvoted” or “downvoted” by real humans. When the two scores conflict, a flag can be raised, meaning either that the algorithm has missed something important that the benevolent voters did not, or that the algorithm has acted objectively while the malevolent voters did not. In the first case, a moderator can investigate why the algorithm failed and improve it. In the second case, a moderator can investigate the reasons for the voters’ malevolence and “improve” them. Either way, it is a win-win for the system as an evolving, community-driven resource!
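The conflict-detection idea can be sketched in a few lines. This is purely illustrative: the function names, the way the five features are combined, and the disagreement threshold are my own assumptions, not details of good-sounds.org’s actual system.

```python
# Hypothetical sketch of the conflict-detection idea; names, scoring,
# and the threshold are illustrative assumptions, not good-sounds.org's code.

def algorithm_score(features):
    """Combine five [0, 1] feature scores into one objective score."""
    assert len(features) == 5
    return sum(features) / 5.0

def community_score(upvotes, downvotes):
    """Map human votes to a [0, 1] score (0.5 when there are no votes)."""
    total = upvotes + downvotes
    return 0.5 if total == 0 else upvotes / total

def conflict_flag(features, upvotes, downvotes, threshold=0.4):
    """Raise a flag when the objective and human judgements diverge."""
    gap = abs(algorithm_score(features) - community_score(upvotes, downvotes))
    return gap > threshold

# A sound the algorithm rates highly but the crowd downvotes gets flagged
# for a moderator to inspect.
print(conflict_flag([0.9, 0.8, 0.85, 0.9, 0.95], upvotes=1, downvotes=9))  # True
```

A flagged item tells the moderator only *that* the two judgements disagree, not *which* side is wrong; deciding that remains the human step described above.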