Improving quality assurance as a marketplace, or: how star ratings failed us
The core of retention and growth is good service. Easier said than done! While SaaS businesses can simply tweak their product, marketplaces have to educate and vet their vendors.
Earlier this year, we faced a new challenge over at MentorCruise. I had set our growth goals high, and somehow we landed slightly under target every single month. Time to go into debugging mode!
Thanks to my lovely marketing mentors (shoutout Rui and Daniel), we quickly nailed down “churn” as one of the things blocking our growth the most. Simply put – people on MentorCruise didn’t stick around long enough.
There was a lot of low-hanging fruit we managed to fix quickly: platform bugs that prevented folks from chatting, mismatched expectations during signup. What remained was harder to solve – service quality. Mentors who didn’t take their job seriously and, as a result, missed meetings, gave mediocre advice or simply failed to provide value.
Now, these didn’t make up the majority of mentors, but they did account for the majority of the churn that was left. By upgrading our quality assurance, we got rid of many of these cases and are back on track for our growth goals.
A rating system, duh?!
It seems so simple. Clearly, if a vendor isn’t doing satisfactory work, they get a bad rating and should get booted from the marketplace, right?
Well, we’ve had a rating system since day one. The reality of the very common 5-star rating looks like this.
A perfect five-star rating is the default. A mentor has to mess up a lot before a mentee rates anything else – and if it gets to that point, the next most common rating is one star. There’s really not a lot in between.
Plus, giving one star feels like revenge to many. It lowers a mentor’s public rating, costing them rankings, mentees and future opportunities. That’s too harsh for most, who opt for a generic 5-star rating instead. For others, it’s a welcome opportunity to give someone who wasted their time one last kick.
To get better insight into a mentor’s performance, we introduced a second step in our exit survey – a blind rating, shared only with us. We also moved away from a generic star rating here and went with a scale instead: “Not satisfied at all” scores a 1 in the background, “Very satisfied” scores a 5.
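Conceptually, that’s just a lookup from survey label to internal score. A minimal sketch in Python – the intermediate labels and the `blind_score` function are illustrative assumptions, since the original only names the two endpoints:

```python
# Hypothetical mapping from blind-rating survey answers to a 1-5 score.
# Only the endpoint labels come from the article; the middle three are assumed.
SATISFACTION_SCALE = {
    "Not satisfied at all": 1,
    "Somewhat dissatisfied": 2,
    "Neutral": 3,
    "Satisfied": 4,
    "Very satisfied": 5,
}

def blind_score(answer: str) -> int:
    """Convert a survey label to its internal numeric score."""
    if answer not in SATISFACTION_SCALE:
        raise ValueError(f"Unknown survey answer: {answer!r}")
    return SATISFACTION_SCALE[answer]
```

The point of the indirection is that the mentee never sees a number – they pick a sentiment, and the score stays internal.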
The results speak for themselves:
A lot of mentors do really great work – but it’s not as crystal clear as the public ratings suggest. Many mentees are extremely satisfied with their mentors. Many others are satisfied but offer one or two points of improvement. Some feel neutral about the experience.
Only a minority of mentorships end badly – around 7%, to be exact. In those cases, it’s up to us to take further action and push that number down even further.
Three strikes, you’re out
As a next step, we introduced a moderation system. Bad feedback, a report, a missed meeting – all of it earns a strike.
On the first strike, mentors are added to a watchlist. We check in on it regularly, look at what other feedback a mentor has received, and see whether we can help with anything. We sometimes pull the emergency brake here if we can’t trust a mentor to do better.
On the second strike, a mentor gets an email from us. We work together on a performance improvement plan, making sure they know where they are lacking and what mentees are saying about them.
Three strikes, you are out. No discussion.
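The escalation above is simple enough to sketch as code. This is a hypothetical illustration, not MentorCruise’s actual implementation – the `Mentor` class and status names are assumptions:

```python
# Hypothetical sketch of the three-strike escalation described above.
from dataclasses import dataclass

@dataclass
class Mentor:
    name: str
    strikes: int = 0
    status: str = "active"

def record_strike(mentor: Mentor) -> str:
    """Apply one strike and return the resulting moderation action."""
    mentor.strikes += 1
    if mentor.strikes == 1:
        mentor.status = "watchlist"         # monitor feedback, offer help
    elif mentor.strikes == 2:
        mentor.status = "improvement_plan"  # email + performance improvement plan
    else:
        mentor.status = "removed"           # three strikes, out - no discussion
    return mentor.status
```

The design choice worth noting is that the first two strikes are interventions, not punishments – only the third is terminal.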
Since introducing the new system, we’ve only had to pull the final trigger once. But strikes have been dished out, and mentors have realized they need to step up their game – which we gladly help them with.
Given these changes, we were able to bring our churn down by double digits and are back on track for all our goals for the end of this year 📈