Let’s face it, performance reviews are hardly anyone’s idea of a good time. Managers hate them. Team members hate them. And this is nothing new. Complaints go as far back as the Wei Dynasty in third-century China, where the rulers introduced an appraisal system to grade their household members. It quickly got the thumbs down. “The Imperial Rater of Nine Grades seldom rates men according to their merits but always according to his likes and dislikes,” Chinese philosopher Sin Yu wrote. The uncomfortable truth is that eighteen centuries later, we still struggle with bias and accuracy. A recent Gallup survey showed that only 29% of American employees consider their appraisals fair, and 26% think they correctly reflect their performance. So what can we do to turn around this sorry state of affairs? And is there a way to make the unpopular ritual less flawed and more useful?
That’s what we set out to do last year at Digital Natives, when we decided to scrap our previous system and roll out a new, competence-based one. The aim was to make the evaluation process more transparent and less subjective. Sounds ambitious? Here’s how we went about it and what we’ve learnt along the way.
Back to the drawing board
For starters, let’s see what was wrong with our previous assessment practices. We started experimenting with peer reviews a couple of years ago. We asked team members to rate each other on a scale of 1-5, based on expertise, experience and loyalty. In other words, how much they know, what they’ve done and achieved to date, and how long they’ve been with the company. The total scores determined salary scales (but with no possibility of a pay cut). As straightforward as it sounded, the system was riddled with flaws. Most people weren’t clear on what different scores actually meant, so they often dished them out randomly. There was also a general sense of discomfort in the team, as people felt uneasy about rating their colleagues. Plus, friends tended to give each other higher scores than deserved.
But we didn’t want to give up on peer reviews. We felt that involving a diverse group of people in the process could reduce individual bias and give a fuller picture of strengths and weaknesses. And as our company was growing rapidly, we needed that more than ever.
We decided to introduce a more comprehensive system than what we had before. And this time we paid more attention to making sure every rater was on the same page. Instead of the former, rather vague criteria, we broke roles down into 15-20 competences and defined what each one involves. Take a developer, for example. Besides a firm working knowledge of programming languages, they need to be able to write good documentation, simplify solutions and fix broken tools.
The role-specific requirements were also complemented with a set of general skills, in line with the core values of our company culture. For example, we expect every team member to have a desire for self-development and to be able to communicate assertively with others inside and outside the organisation. Not to mention that they need to take responsibility for their work instead of entering into a blame game.
People had to score themselves and their colleagues on a scale of 1-10, where 10 is “the best in the world”. With a team of 30-odd people (and counting), it would have been unreasonable for everyone to size up everyone else’s performance. So the general rule of thumb was this: people working closely together should all evaluate each other, while evaluating everybody else remained optional. And if one of your peers scored your work, you had no choice but to return the favour.
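To make the pairing rule concrete, here’s a minimal sketch of how it could be enforced. The function name and the example names are made up for illustration; this isn’t our actual tooling, just one way to model “whoever scores your work gets scored by you”.

```python
def complete_reciprocal_reviews(assignments):
    """Given a dict mapping each reviewer to the set of peers they
    chose to score, add the reverse pairs so that anyone who scores
    your work also gets scored by you in return."""
    completed = {person: set(peers) for person, peers in assignments.items()}
    for reviewer, peers in assignments.items():
        for peer in peers:
            # The peer must review the reviewer back.
            completed.setdefault(peer, set()).add(reviewer)
    return completed

# Bea opted in to review Adam, so Adam ends up reviewing Bea as well.
assignments = {"Adam": {"Cleo"}, "Bea": {"Adam"}}
print(complete_reciprocal_reviews(assignments))
```

The close-collaborator groups from the rule of thumb would simply be pre-filled into `assignments` as fully mutual sets before the optional picks are added.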
The good, the bad and the ugly
We launched the pilot system at the end of last year so it’s still too early to draw any major conclusions. However, feedback has been generally positive. As the biannual assessment is done from different aspects and different perspectives, the final picture is way more complex than before. Meaning that our team members have a better idea of what they need to work on and what they have every reason to be proud of. When it comes to goal-setting and self-improvement, it’s also much clearer to everyone where they are on the learning curve, where they want to get to and how to get there. By providing a sense of direction, the system seems to work wonders for intrinsic motivation.
It’s generally agreed that self-reflection is the key to growth on both a personal and organisational level. And that’s exactly what we like about our new evaluation system. Simply put, detailed peer reviews help people see themselves more realistically. One of our team members managed to get their self-scores to match the raters’ averages to the last digit. On the other extreme, we had people with a glaring gap between how they see themselves and how others see them. Whether you underestimate or overestimate your competences, an unrealistic self-image can cause all kinds of professional and personal issues and needs to be addressed. There’s no way around it.
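Spotting those gaps is just a matter of comparing each self-score with the peers’ average per competence. The sketch below shows one way to do it; the competence names, scores and the 2-point threshold are invented examples, not our real data or process.

```python
from statistics import mean

def self_vs_peers(self_scores, peer_scores, threshold=2.0):
    """Return the competences where the self-score differs from the
    peers' average by more than `threshold` points on the 1-10 scale.
    Positive gap = overestimate, negative = underestimate."""
    gaps = {}
    for competence, own in self_scores.items():
        avg = mean(peer_scores[competence])
        if abs(own - avg) > threshold:
            gaps[competence] = round(own - avg, 1)
    return gaps

self_scores = {"documentation": 9, "simplifying solutions": 6}
peer_scores = {"documentation": [5, 6, 6], "simplifying solutions": [6, 7, 5]}
print(self_vs_peers(self_scores, peer_scores))  # flags "documentation" only
```

A report like this is what turns the raw scores into a conversation starter: it tells you exactly which competences to discuss in the review meeting.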
Now the pitfalls. I don’t want anyone to think we’ve come up with a cure-all. Far from it. But we’re much closer to one than before. So what are the downsides? Subjectivity, before anything else. As long as people do the scoring, some bias will always enter the mix, so having a critical eye is a must. As for our lists of competences, we still have a long way to go, from refining our definitions to breaking some of them down further. Then comes the question: Should competences be weighted, and if so, how? Or are they all equally important? Are scores enough or should people also give reasons?
And we don’t kid ourselves. Evaluation will still remain a chore to most people and they will probably resent the time required to get the job done. So the most important lesson of all? Keep it short and sweet.
How do you evaluate each other in your company?