Keep out the bad apples: How to moderate a marketplace

2024-09-13 · Dominic Quirin
Marketplace moderation strategies and security measures for online platforms

In a marketplace, you’re bringing together thousands of people – often anonymous – and letting them interact with each other. You are facilitating money transfers, private messages, and sometimes even in-person meetings. Sounds like a recipe for disaster, doesn’t it?

At MentorCruise, we’ve dealt with troublesome users right from the start. Initially, we used manual methods to track and catch them, like providing report buttons and keeping an open ear on the support email. But as we grew, it became clear that manual moderation doesn’t scale. You can’t read every message (nor should you, for privacy reasons), and you can’t vet every sign-up personally.

These are the tricks and strategies we’ve employed to keep our community safe, trustworthy, and thriving.

The Challenge of Bad Actors in Marketplaces

Before diving into the solutions, it’s important to understand why bad actors pose such a significant challenge in online marketplaces. It’s not just about “mean” users; it’s about protecting the integrity of your platform. Scammers erode the trust that makes strangers willing to transact, spammers drown out legitimate activity, and fraudsters create chargebacks and legal exposure.

Recognizing these challenges early on can help you implement effective strategies to mitigate them.

Layer 1: Friction as a Filter (Form Validation)

The first line of defense is your interface. By adding the right amount of friction, you can deter low-effort bad actors without hurting good users.

Encouraging Meaningful Communication

For any structured communication – an inquiry, a review, booking instructions, or a report – you want to extract as much meaningful information as possible from the user. If you’re running a service marketplace and the instructions to the vendor amount to “thanks”, the chances of misunderstandings or disputes down the line are high.

One effective strategy is to implement minimum length requirements on forms. While not universally applicable, we’ve found that the average word count on mentorship applications increased by 24% once we added this feature. By encouraging users to provide more detailed information, we facilitate better interactions between mentors and mentees.
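
The check itself is trivial to add server-side. A minimal sketch in Python – the 30-word threshold and the error copy are illustrative, not the exact rule we run:

```python
def validate_min_words(text: str, min_words: int = 30) -> str | None:
    """Return an error message if a submission is too short, otherwise None."""
    if len(text.split()) < min_words:
        return (
            f"Please write at least {min_words} words so the other side "
            "understands your background and goals."
        )
    return None


# Usage: run this before accepting the form submission
error = validate_min_words("thanks")
if error:
    print(error)  # rejected: far too short to be useful
```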

Implementing Smart Form Fields

Beyond minimum lengths, consider adding smart form fields that guide users on what information to include – for example, replacing one blank “Message” box with separate, labelled prompts.
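
A rough sketch of what such a form definition could look like – the questions and thresholds are illustrative, not MentorCruise’s actual fields:

```python
# Hypothetical field prompts for a mentorship application form.
APPLICATION_FIELDS = [
    {
        "name": "goals",
        "label": "What do you want to achieve in the next 3 months?",
        "placeholder": "e.g. land a backend role, ship my first ML project",
        "min_words": 20,
    },
    {
        "name": "background",
        "label": "Where are you today?",
        "placeholder": "e.g. 2 years as a frontend dev, self-taught Python",
        "min_words": 10,
    },
    {
        "name": "expectations",
        "label": "How should your mentor help?",
        "placeholder": "e.g. weekly calls, code reviews, interview prep",
        "min_words": 10,
    },
]
```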

Layer 2: Automated Detection and Filtering

When friction isn’t enough, you need code. Automated systems can catch what humans miss and work 24/7.

Detecting Gibberish

When you enforce a 50-character minimum, lazy users (or bots) will type kjflkdfjkljkfsklfl or test test test test to bypass it.

Strategy: Use Gibberish Detection. We implemented a simple library that scores text based on the probability of character transitions. If a string looks like random key-mashing, we block the submission. This catches a surprising amount of low-effort spam and prevents “junk” data from entering your system.
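
The underlying idea is simple: learn character-transition probabilities from normal text, then flag strings whose transitions are wildly unlikely. A self-contained sketch of that approach – the threshold is illustrative and needs tuning on your own data:

```python
import math
import re
from collections import defaultdict

ALLOWED = "abcdefghijklmnopqrstuvwxyz "

def train_transitions(corpus: str) -> dict:
    """Learn log-probabilities of character bigrams from a sample of normal text."""
    counts = defaultdict(lambda: defaultdict(int))
    cleaned = re.sub(f"[^{ALLOWED}]", " ", corpus.lower())
    for a, b in zip(cleaned, cleaned[1:]):
        counts[a][b] += 1
    probs = {}
    for a, nexts in counts.items():
        total = sum(nexts.values())
        probs[a] = {b: math.log(c / total) for b, c in nexts.items()}
    return probs

def avg_log_prob(text: str, probs: dict) -> float:
    """Average log-probability of the character transitions in `text`."""
    cleaned = re.sub(f"[^{ALLOWED}]", " ", text.lower())
    pairs = list(zip(cleaned, cleaned[1:]))
    if not pairs:
        return float("-inf")
    floor = math.log(1e-6)  # unseen transitions get a tiny floor probability
    return sum(probs.get(a, {}).get(b, floor) for a, b in pairs) / len(pairs)

def looks_like_gibberish(text: str, probs: dict, threshold: float = -4.0) -> bool:
    """Flag text whose transitions are far less likely than in normal writing."""
    return avg_log_prob(text, probs) < threshold


# Usage: train once on any decent sample of real messages or prose
# probs = train_transitions(open("sample_text.txt").read())
# looks_like_gibberish("kjflkdfjkljkfsklfl", probs)            # likely True
# looks_like_gibberish("I want to improve my Python skills", probs)  # likely False
```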

Profanity and PII Filtering
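
A common first pass – a sketch, not necessarily what MentorCruise runs – combines a small blocklist for obvious profanity or scam phrases with regexes for emails and phone numbers, since contact details are the usual vehicle for taking a deal off-platform. Matches work better as “hold for review” signals than as hard blocks:

```python
import re

# Illustrative list and patterns. Maintain these per locale and have a human
# review matches instead of hard-blocking on them.
BLOCKLIST = {"guaranteed returns", "wire me", "message me on telegram"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def review_flags(text: str) -> list[str]:
    """Return reasons a message should be held for human review."""
    reasons = []
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        reasons.append("blocked term")
    if EMAIL_RE.search(text):
        reasons.append("contains an email address")
    if PHONE_RE.search(text):
        reasons.append("contains a phone number")
    return reasons
```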

Machine Learning Filters

For larger platforms, relying on simple lists isn’t enough. Tools like OpenAI’s Moderation API or AWS Rekognition (for images) can automatically flag inappropriate content, nudity, hate speech, or harassment with high accuracy. These models learn from context, making them far more effective than keyword blocking.
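
For text, a call to OpenAI’s Moderation API looks roughly like this with the current Python SDK – the model name and response shape are accurate as of writing, but check the docs before relying on them:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Ask OpenAI's moderation endpoint whether a piece of text should be blocked."""
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = resp.results[0]
    # `result.categories` also exposes per-category booleans
    # (harassment, hate, sexual, ...) for finer-grained handling.
    return result.flagged
```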

Layer 3: Effective Reporting Mechanisms

No matter how robust your preventive measures are, it’s crucial to be aware of issues as they arise. Turn your community into your moderation team.

Multiple Reporting Touchpoints

One area we’re focusing on is enhancing our reporting mechanisms so that users can report problems from various touchpoints within the platform, not just via a support email.

Key Insight: Ask for context. A simple “Report” button isn’t enough. Require the reporter to select a category (e.g., “Spam,” “Harassment,” “Off-platform”) and add a description. This helps your support team triage tickets faster.
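
In code, that can be as simple as an enum of categories plus a required free-text field. The categories below mirror the examples above and are otherwise illustrative:

```python
from enum import Enum

class ReportCategory(str, Enum):
    SPAM = "spam"
    HARASSMENT = "harassment"
    OFF_PLATFORM = "off_platform"  # trying to move the deal off the marketplace
    OTHER = "other"

def validate_report(category: str, description: str) -> list[str]:
    """Server-side checks before a report lands in the support queue."""
    errors = []
    if category not in {c.value for c in ReportCategory}:
        errors.append("Please pick a category.")
    if len(description.split()) < 10:
        errors.append("Please describe what happened in a few sentences.")
    return errors
```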

Acknowledging and Following Up

It’s important to acknowledge these reports promptly. Users need to know that their concerns are being taken seriously. A simple follow-up can go a long way in maintaining trust within your community. Consider implementing automated acknowledgment messages and setting up a ticketing system for efficient follow-up.

Layer 4: Establishing Clear Policies

Crafting a Code of Conduct

What constitutes acceptable behavior on your platform? As a marketplace, one of the first things you should establish is a comprehensive Code of Conduct. Even if it’s generic, it should clearly outline what is off-limits and the consequences for violating these rules.

Tailoring to Your Audience

It’s worth noting that the Code of Conduct for a dating site can be entirely different from that of a tutoring service. Cultural differences may also necessitate different policies for services operating in North America versus those that are global. Tailor your policies to reflect the values and expectations of your specific user base.

Educating Your Community

A policy only helps if people actually see it. Surface your Code of Conduct during onboarding and link to it from key flows, so users know the rules before they have a chance to break them.

Layer 5: The Banhammer

So, what happens when you still encounter a bad apple that slips through your reports, validations, and filters? Enter the banhammer.

When I first built MentorCruise, I was anxious about the potential for misuse and decided to implement a banhammer feature right from the start. It’s an internal admin tool – a simple page with a single form field. When I input a mentor’s username, the following actions are triggered automatically (a rough code sketch follows the list):

  1. Profile Deactivation: The user’s profile is immediately deactivated (404’d), removing their visibility from the platform.
  2. Data Removal: We remove or anonymize all their user data to comply with privacy regulations.
  3. Notification: The user receives a notification informing them that they are no longer part of the program.
  4. Connection Closure: All their ongoing connections are closed.
  5. Connection Notifications: Users who were connected with the banned mentor are notified about the ban (e.g., “Your booking was cancelled because the user was removed for violating our ToS”).
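
Here is what such a routine can look like, assuming a Django-style ORM. The model and helper names (Profile, anonymise_profile, send_email) are hypothetical stand-ins, not MentorCruise’s actual code:

```python
from django.db import transaction

@transaction.atomic
def ban_mentor(username: str) -> None:
    """Run every ban step in one transaction so a partial ban can't happen."""
    mentor = Profile.objects.select_for_update().get(username=username)

    # 1. Deactivate the profile so it 404s everywhere
    mentor.is_active = False
    mentor.save(update_fields=["is_active"])

    # 2. Remove or anonymise personal data (privacy regulations)
    anonymise_profile(mentor)

    # 3. Tell the banned user they are no longer part of the program
    send_email(mentor.email, template="removed_from_program")

    # 4 & 5. Close ongoing connections and notify the people on the other side
    for connection in mentor.connections.filter(status="active"):
        connection.status = "closed"
        connection.save(update_fields=["status"])
        send_email(
            connection.mentee.email,
            template="booking_cancelled_tos_violation",
        )
```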

The First Swing

It took over 1.5 years before I had to use the banhammer for the first time. While it was a bittersweet moment, the system worked flawlessly and resolved all outstanding issues related to that user. Having this tool ready made the process swift and efficient, minimizing disruption to the community.

Shadow Banning

Sometimes, for spammers, it’s better to “shadow ban” them. They think they are posting messages, but no one else can see them. This prevents them from immediately creating a new account to bypass the ban, as they don’t realize they’ve been banned yet.
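
Implementation-wise, a shadow ban is usually just a flag on the user plus a filter on reads. A minimal sketch, where is_shadow_banned is a hypothetical field:

```python
def visible_messages(thread, viewer):
    """Messages `viewer` should see: shadow-banned users still see their own
    posts, so nothing looks wrong to them, but nobody else does."""
    return [
        message for message in thread.messages
        if not message.sender.is_shadow_banned or message.sender == viewer
    ]
```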

Layer 6: Trust Signals and Social Proof

Moderation isn’t just about punishment; it’s about incentivizing good behavior.

Identity Verification Badges

Offer users a “Verified” badge if they connect their LinkedIn, verify their phone number, or upload a government ID.

Reputation Systems

Implement a robust review system, but go beyond simple star ratings: require a short written review alongside the score, and surface signals such as response rate or repeat bookings.

The Cost of False Positives

A word of caution: aggressive moderation can hurt your business if it’s inaccurate. If your automated filters block a legitimate user, you might lose them forever. When confidence is low, flag for human review instead of blocking outright, and give affected users a clear path to appeal.

Managing Disputes: When Things Go Wrong

Even with the best moderation, deals will go sour. A buyer will hate the product, or a seller will claim they never got paid. You need a standardized Dispute Resolution Process (sketched in code after the steps below).

  1. The “Cooldown” Phase: Encourage users to resolve it themselves first. “Have you messaged the seller?”
  2. The Evidence Phase: If they escalate to you, demand evidence. Screenshots, tracking numbers, code commits.
  3. The Decision: Be decisive. Based on your ToS, make a ruling (Refund / Partial Refund / No Refund).
  4. The Payout: Only release funds from escrow once the dispute is closed.
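
Those phases map naturally onto a small state machine, which also makes it hard to accidentally pay out before a dispute is closed. A sketch with hypothetical state names:

```python
from enum import Enum

class DisputeState(str, Enum):
    COOLDOWN = "cooldown"  # nudge the parties to resolve it themselves first
    EVIDENCE = "evidence"  # both sides submit screenshots, tracking numbers, etc.
    DECISION = "decision"  # support makes a ruling based on the ToS
    CLOSED = "closed"      # funds released from escrow

ALLOWED_TRANSITIONS = {
    DisputeState.COOLDOWN: {DisputeState.EVIDENCE, DisputeState.CLOSED},
    DisputeState.EVIDENCE: {DisputeState.DECISION},
    DisputeState.DECISION: {DisputeState.CLOSED},
    DisputeState.CLOSED: set(),
}

def advance(current: DisputeState, target: DisputeState) -> DisputeState:
    """Move a dispute to the next phase, refusing illegal jumps (e.g. paying out early)."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move dispute from {current.value} to {target.value}")
    return target
```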

Pro Tip: In your Terms of Service, include an arbitration clause to prevent small disputes from turning into lawsuits.

Financial Fraud: The Hidden Enemy

Bad actors aren’t just mean; some are thieves. Stolen credit cards, chargeback abuse, and attempts to move payments off-platform don’t just hurt individual users – they cost you money and your standing with payment providers.

Conclusion

Managing bad actors in a people-centric marketplace is an inevitable challenge, but with the right tools and strategies, it’s a manageable one. By implementing robust form validations, effective reporting mechanisms, clear policies, and decisive actions like the banhammer, you can maintain a healthy community that serves the best interests of all your users.

At MentorCruise, these measures transformed our support load from a chaotic fire-hose into a manageable stream, allowing us to focus on growth rather than policing. Build your walls high, but keep your gates open for the good guys.

How Twosided Can Help Keep Your Platform Safe

Moderation is often reactive – you wait for a report, then you act. But what if you could be proactive?

Twosided helps you spot anomalies before they turn into disasters. Our marketplace analytics surface suspicious patterns early, so you can step in while an issue is still small.

Safety is not just about filters; it’s about data. Secure your marketplace with Twosided today.