This is part 2 of a two-part series. You can check out part 1 here.
Last week, we looked at how to increase buy-in for psychometric testing when you’re meeting resistance from internal people.
We covered how to counter the arguments that:
• The tests don’t actually work (properly designed and validated tests do, and there is a plethora of evidence available)
• The tests don’t bring about tangible ROI (they do – in many different ways – and we can prove it)
• The tests are no better than ‘gut feel’ (they are much more objective, fair and impartial than we are).
This week, we’ll take a look at the next three reasons, which are:
• They don’t understand how the tests actually apply to real-world situations
• They think candidates will cheat or fake their responses
• They think it’s a turn-off for candidates.
Again, there are some very simple responses to each of these arguments. Let’s take a look at each one.
4. They don’t have any relevance to the real world
For some people, it’s difficult to see how an assessment that asks candidates to guess the next number in a sequence, or to decide whether a statement proves that Sally likes wearing orange on a Tuesday, relates to real-world, on-the-job performance.
This argument takes us back to the point I made in part 1 about methods of predicting performance at work: the evidence shows that thoroughly tested and reliable cognitive ability tests are among the best single predictors of future work performance.
They do this by assessing the very core information processing abilities of your candidates. By asking them to solve a series of different problems, we’re able to accurately and scientifically measure how well they can solve problems, reason and absorb new information.
When you apply this ability to the job, it has some very strong and important repercussions. For example, people who score well on a cognitive ability test will take less time to train and just ‘get’ the job more quickly.
Here are some other examples of the types of behaviour you can expect from candidates, based on their cognitive ability results.
So if Louisa is applying for an accounting role, and her cognitive ability results fall in the top 20 per cent when compared with other people in a similar role, you can be confident that she will learn the job more quickly, apply her past and current knowledge to solve problems, produce higher-quality output and be better able to think on her feet than someone in the bottom 20 per cent.
Here’s another example. The table below shows what kinds of behaviour you can expect from candidates, based on their Work Reliability Assessment results (which assesses a candidate’s level of reliability and integrity).
So if Jason is applying for an HR role, in which he’ll have access to a lot of confidential and sensitive employee information, and scores in the top 20 per cent, you can be quite certain that he’ll adhere to your rules and procedures and exhibit highly ethical behaviour.
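The comparisons above rest on norm-referenced scoring: a candidate’s raw score is converted into a percentile rank against a norm group of people in similar roles. Here is a minimal sketch of that idea in Python — the scores and norm group are made up for illustration and are not real assessment data:

```python
def percentile_rank(score, norm_group):
    """Percentage of the norm group scoring at or below this score."""
    at_or_below = sum(1 for s in norm_group if s <= score)
    return 100.0 * at_or_below / len(norm_group)

# Hypothetical raw test scores from people already in similar roles
norm = [12, 15, 18, 20, 22, 23, 25, 27, 29, 34]

print(percentile_rank(29, norm))  # 90.0 -> near the top of the group
print(percentile_rank(15, norm))  # 20.0 -> near the bottom of the group
```

The key design point is that the candidate is never judged against an absolute pass mark, but against how others in comparable roles actually perform.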
5. Candidates will tell us what we want to hear
Clearly, there’s no faking an ability assessment, such as a cognitive ability test or an ability-based emotional intelligence test, but what about the more subjective tests that assess a person’s attitudes towards safety or reliability?
Well, as a testing provider, we could just ask candidates to tell us how safe or reliable their behaviour is and take their word for it, but that wouldn’t be a very good test.
So, what we do instead is build a variety of checks into the assessment itself to detect suspicious response patterns. These include alerts that tell you when a candidate:
• Has provided what seem to be overly positive responses (which indicates they might be trying to present themselves as safer or more reliable than they really are)
• Has answered inconsistently (which indicates they might have answered the questions haphazardly or randomly).
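The two checks above can be sketched in code. This is a simplified illustration only — the Likert scale, thresholds and item pairings are assumptions I’ve made for the example, not any provider’s actual scoring rules:

```python
def overly_positive(responses, threshold=4.5):
    """Flag an implausibly favourable average rating (possible faking-good)."""
    return sum(responses) / len(responses) >= threshold

def inconsistent(responses, pairs, max_gap=2):
    """Flag very different answers to near-identical item pairs
    (possible random or careless responding)."""
    return any(abs(responses[i] - responses[j]) > max_gap for i, j in pairs)

answers = [5, 5, 5, 4, 5, 5]     # Likert responses, 1 (never) to 5 (always)
paired_items = [(0, 3), (1, 4)]  # indices of items that ask the same thing

if overly_positive(answers):
    print("alert: possible socially desirable responding")
if inconsistent(answers, paired_items):
    print("alert: possible random or careless responding")
```

Real assessments combine many such indicators, but the principle is the same: the pattern of responses, not just the responses themselves, is part of the measurement.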
Any reputable testing provider will have safeguards such as these in place, as well as offering verification testing and forensic monitoring to detect possible cheating and let you know when it occurs.
6. Candidates hate them and they reflect badly on us
I really have to beg to differ on this point. Okay, so in some cases, recruitment assessments are not a great deal of fun (more on this point later). But candidates get it. They understand that you, as an employer, care about having a fair and equitable recruitment process, in which each and every candidate is assessed on his or her own merits. They know that it takes a great deal of inadvertent bias out of the picture and actively encourages diversity.
Often, if candidates do complain about the testing process, it’s because of a lack of information. If you’re open and candid about what your recruitment process involves and what they can expect at every step, it goes a long way towards managing expectations and engaging with your candidates.
It’s also about being clear on why you’re using a particular set of assessments and how they relate to the job. For example, when you’re recruiting for engineers, you can explain that you use a cognitive ability assessment to measure their information processing ability and a work safety assessment to measure their attitudes towards safe behaviour at work. This is called face validity – the degree to which the assessment appears to be relevant to the job at hand.
During the assessment process, we explain to candidates what each assessment measures and why it’s important, but it’s always a good idea to reiterate these messages in your own words and help candidates understand how they’re relevant to the specific role.
Finally, back to the point about assessments not being much fun. We realised a few years back that we could improve the candidate experience by making assessments more enjoyable and engaging, and to prove it, we built Australia’s first psychometric assessment game, Theme Park Hero.
Candidates love it. Seven out of ten candidates said they enjoyed the experience and wished that more employers would use it. It’s more fun than traditional assessments, helps to assuage the nerves that come from applying for a job you really want, and uses the exact same scientifically validated approach as traditional assessments. Employers love it too, because they get robust results and it signals an innovative and desirable employer brand.
I hope this helps you counter some of the objections you might face when trying to introduce a more scientific and equitable recruitment process into your organisation. As I mentioned, all of the examples of Return on Investment and the kinds of improvements you can expect to see when implementing assessments are based on actual, real-life cases that we’d be happy to share with you. Just send us an email or give us a call if you’d like to hear more.
Want to refresh your memory about the first three objections? Check out Part 1 here.
By Cherie Curtis CEO @ Revelian