Revelian

WEBINAR: A look at Game-based Assessments | Theme Park Hero

Theme Park Hero

A look into game-based assessments and our assessment Theme Park Hero.

Game-based assessment is the first serious deviation from the traditional testing approach, forming the next evolution of psychometric testing and offering a vast array of advantages including improved brand perception, engaging experiences and advanced data streaming, capture and analysis.

Listen to Jason Blaik (Head of Psychology) in this insightful webinar as he takes you through:

  • The positive impacts that game-based assessments have had in the recruitment process
  • Advances in psychometric testing and the current state of assessments
  • The rise of gamification as an evolution of psychometric testing
  • Important concepts including design, motivation, flow, recognition, reward and play
  • The broader differences between traditional testing and game-based assessments, including a developmental review of Theme Park Hero, Australia’s first game-based assessment

6 simple ways to increase buy-in for psychometric testing – Part 2

This is part 2 of a 2 part series. You can check out part 1 here.

Last week, we looked at how to increase buy-in for psychometric testing when you’re meeting resistance from internal people.

We covered how to counter the arguments that:

• The tests don’t actually work (properly designed and validated tests do, and there is a plethora of evidence available)
• The tests don’t bring about tangible ROI (they do – in many different ways – and we can prove it)
• The tests are no better than ‘gut feel’ (they are much more objective, fair and impartial than we are).

This week, we’ll take a look at the next 3 reasons, which are:

• They don’t understand how the tests actually apply to real-world situations
• They think candidates will cheat or fake their responses
• They think it’s a turn-off for candidates.

Again, there are some very simple responses to each of these arguments.  Let’s take a look at each one.

4. They don’t have any relevance to the real world

For some people, it’s difficult to see how an assessment that asks candidates to guess the next number in a sequence, or to decide whether a statement proves that Sally likes wearing orange on a Tuesday, relates to real-world, on-the-job performance.

This argument takes us back to the research on predicting performance at work that I mentioned earlier, which shows, for example, that thoroughly tested and reliable cognitive ability tests are among the single best predictors of future work performance.

They do this by assessing the very core information processing abilities of your candidates. By asking them to solve a series of different problems, we’re able to accurately and scientifically measure how well they can solve problems, reason and absorb new information.

When you apply this ability to the job, it has some very strong and important repercussions. For example, people who score well on a cognitive ability test will take less time to train and just ‘get’ the job more quickly.

Here’s some other examples of the types of behaviour you can expect from candidates, based on their cognitive ability results.

So if Louisa is applying for an accounting role, and her cognitive ability results fall in the top 20 per cent when compared with other people in a similar role, you can be confident that she will learn the job more quickly, apply her past and current knowledge to solve problems, produce higher quality output and be more able to think on her feet than someone in the bottom 20 per cent.
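To make the comparison concrete: a candidate’s standing against a norm group is just a percentile rank. A minimal sketch (the names and scores are hypothetical, not Revelian data):

```python
# Illustrative sketch: a candidate's percentile rank against a norm group
# of scores from people in similar roles. All numbers are hypothetical.

def percentile_rank(candidate_score, norm_group_scores):
    """Percentage of the norm group scoring strictly below the candidate."""
    below = sum(1 for s in norm_group_scores if s < candidate_score)
    return 100.0 * below / len(norm_group_scores)

norm_group = [42, 55, 61, 48, 70, 66, 59, 73, 51, 64]
louisa = 68

rank = percentile_rank(louisa, norm_group)
print(f"Louisa's percentile rank: {rank:.0f}")  # 80 -> top 20 per cent
```

In practice the norm group would contain hundreds or thousands of scores from people in comparable roles, which is what makes the percentile meaningful.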

Here’s another example. The table below shows what kinds of behaviour you can expect from candidates, based on their Work Reliability Assessment results (which assesses a candidate’s level of reliability and integrity).

So if Jason is applying for an HR role, in which he’ll have access to a lot of confidential and sensitive employee information, and scores in the top 20 per cent, you can be quite certain that he’ll adhere to your rules and procedures and exhibit highly ethical behaviour.

5. Candidates will tell us what we want to hear

Clearly, there’s no faking an ability assessment, such as the cognitive ability test or an ability-based emotional intelligence test, but what about the more subjective tests, that assess a person’s attitudes towards safety or reliability?

Well, as a testing provider, we could just ask candidates to tell us how safe or reliable their behaviour is and take their word for it, but that wouldn’t be a very good test.

So, what we do instead is build a variety of different checks into the assessment itself, to make sure we detect any suspicious behaviour. This includes building in alerts that tell you when a candidate:

• Has provided what seem to be overly positive responses (which indicates they might be trying to present themselves as more safe or reliable than they really are)
• Has answered inconsistently (which indicates they might have answered the questions haphazardly or randomly).

Any reputable testing provider will have safeguards such as these in place, as well as offering verification testing and forensic monitoring to detect possible cheating and let you know when it occurs.
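The two alert types above can be sketched as simple response checks. This is only an illustration: the thresholds, rating scale and item pairings here are hypothetical, not Revelian’s actual rules.

```python
# Minimal sketch of two validity checks: an impression-management flag
# (overly positive answers) and a consistency flag (paired items that
# measure the same construct and should therefore agree).
# Cutoffs and item pairings are hypothetical.

def check_responses(answers, paired_items, positive_cutoff=4.5, max_disagreement=1):
    """answers: item id -> rating on a 1-5 scale.
    paired_items: (item_a, item_b) pairs expected to agree."""
    alerts = []
    mean_rating = sum(answers.values()) / len(answers)
    if mean_rating >= positive_cutoff:
        alerts.append("overly positive responding")
    for a, b in paired_items:
        if abs(answers[a] - answers[b]) > max_disagreement:
            alerts.append(f"inconsistent answers on {a}/{b}")
    return alerts

answers = {"q1": 5, "q2": 5, "q3": 5, "q4": 4, "q5": 5}
print(check_responses(answers, [("q1", "q3"), ("q2", "q5")]))
```

A real validity scale is built from dedicated items and normed cutoffs rather than a raw mean, but the logic of flagging rather than rejecting is the same: the alert tells the recruiter to look more closely.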

6. Candidates hate them and they reflect badly on us

I really have to beg to differ on this point. Okay, so in some cases, recruitment assessments are not a great deal of fun (more on this point later). But candidates get it. They understand that you, as an employer, care about having a fair and equitable recruitment process, in which each and every candidate is assessed on his or her own merits. They know that it takes a great deal of inadvertent bias out of the picture and actively encourages diversity.

Often, if candidates do complain about the testing process, it’s because of lack of information. If you’re open and candid about what your recruitment process involves and what they can expect at every step, it goes a long way towards managing expectations and engaging with your candidates.

It’s also about being clear on why you’re using a particular set of assessments and how they relate to the job. For example, when you’re recruiting for engineers, you can explain that you use a cognitive ability assessment to measure their information processing ability and a work safety assessment to measure their attitudes towards safe behaviour at work. This is called face validity – the degree to which the assessment appears to be relevant to the job at hand.

During the assessment process, we explain to candidates what each assessment measures and why it’s important, but it’s always a good idea to re-iterate these messages in your own words and help candidates understand how they’re relevant to the specific job at hand.

Finally, back to the point about assessments not being much fun. We realised a few years back that we could improve the candidate experience of assessments by making them more enjoyable and engaging and to prove it, we built Australia’s first psychometric assessment game, Theme Park Hero.

Candidates love it. Seven out of ten candidates said they enjoyed the experience and wished that more employers would use it. It’s more fun than traditional assessments, helps to assuage the nerves that come with applying for a job you really want, and uses the exact same scientifically validated approaches as traditional assessments. And employers love it too, because they get robust results while signalling an innovative and desirable employer brand.

I hope this helps you counter some of the objections you might face when trying to introduce a more scientific and equitable recruitment process into your organisation. As I mentioned, all of the examples of Return on Investment and the kinds of improvements you can expect to see when implementing assessments are based on actual, real-life cases that we’d be happy to share with you. Just send us an email or give us a call if you’d like to hear more.

Want to refresh your memory about the first three objections? Check out Part 1 here.


By Cherie Curtis, CEO @ Revelian

For more information on the Revelian Middle East inventory: http://www.hamiltonresourcing.com/revelian-introduction

6 simple ways to increase buy-in for psychometric testing – Part 1

This is part 1 of a 2 part series of posts. Make sure you come back for the second instalment next week!

What do you do when you’re trying to implement psychometric tests, but you’re meeting resistance from hiring managers or people who don’t believe they’re useful?

You’ve done your homework and decided on a psychometric testing provider.

You know that introducing assessments will increase the fairness and validity of your recruitment process. You’re confident that they’ll help you identify a better calibre of candidate and get a more robust understanding of each person’s capabilities before you make the risky decision of bringing them on board.

You know that assessments will actually help you to increase the diversity of your new hires, since they’ll remove bias from the equation and focus solely on the aptitude, abilities and characteristics of each candidate and not factors that lurk below our consciousness and influence our decision-making processes.

And you’re confident that painstakingly researched and validated psychometric assessments do what it says on the packet – they will actually help you accurately and reliably identify people who are more likely to be top performers, be engaged with your organisation and its values, have the right kind of personality and behavioural preferences for the role and team they’ll be working in, and the skills needed to perform well.

So, why is it sometimes so incredibly hard to convince the rest of your organisation that assessments are not only worthwhile, but will also lead to positive outcomes and a solid ROI?

Why people don’t buy in to psychometrics?

In my experience working with hundreds of organisations to introduce more rigour and science to their recruitment processes, I’ve come up against this issue many times. And it usually boils down to one of six reasons:

• They don’t believe that tests actually work – that is, do a better job of identifying the best people than they themselves can
• They don’t believe the tests will improve the quality of candidates they hire and bring about actual, tangible ROI
• They think gut instinct is as good – if not better – than a psychometric assessment
• They don’t understand how the tests actually apply to real-world situations
• They think candidates will cheat or fake their responses
• They think it’s a turn-off for candidates.

There are some very simple responses to all of these objections. Let’s take a look at each of the objections in turn and how you might address them to get the buy-in you need to improve your recruitment processes.

1. The tests don’t actually work

This one usually comes about when people have seen other poorly-developed tests online and think that the same sloppy methodology applies to all types of recruitment assessment.

The fact is, this couldn’t be further from the truth. A properly researched, robust and reliable psychometric testing tool takes years to develop and needs to pass through an extremely stringent validation process to prove that it actually predicts the kinds of outcomes it says it does.

As an example, the makers of a valid cognitive ability test must be able to demonstrate that it actually predicts a candidate’s future performance at work.

In 1998, researchers Frank Schmidt and John Hunter published a review of 85 years of research on personnel selection, covering over 32,000 employees across 500 different jobs.

They examined 19 different selection methods to see which ones were most accurate at predicting performance at work and found that work sample tests (actually having someone perform the job) were the best way to predict how a person would perform once they were hired. Obviously, this really isn’t practical: it takes a long time to clearly see how a person will perform and you can’t ask each and every applicant to work for free for a few weeks to accurately gauge their performance.

When they added cognitive ability tests (which are short, easy to administer, usually available online) to the mix, they found this:

Cognitive ability assessments combined with a structured interview is the simplest, fastest and most cost-effective way to predict how someone is actually going to perform once they’re hired.

This finding held true across all 85 years of research and still holds up today, even after strenuous investigation.

At Revelian, we’ve conducted our own research and found that cognitive ability tests have clearly predicted not only performance but also tenure and likelihood of being promoted.

And it’s not just cognitive ability tests that have a strong predictive validity (that is, they actually predict real-world outcomes), which brings me to the next point…

2. The tests don’t bring about actual, tangible ROI

We often hear this one from the C-suite, in particular the numbers people. And at the end of the day, this is the criterion any recruitment process needs to fulfil: is it actually going to benefit the business and bring about tangible improvements we can measure?

Our CFO wrote a great blog earlier this year about selling your HR initiatives to the C-suite, which talks about stepping into the shoes of the people you need to convince and seeing things from their perspective. In this case, it means clearly showing the kind of Return on Investment you’ll get from implementing psychometric assessments and how long it will take to pay back the investment. We’ve put together some great examples of how to do this for graduate recruitment (which applies to any recruitment exercise) and small business recruitment.

So, what kinds of ROI can you expect to see from implementing certain types of testing? Here’s a general guide to the kinds of results you can obtain by adding assessments for cognitive ability, values fit, safety, reliability and emotional intelligence to your recruitment process. All of these results are based on actual real-life implementations of Revelian assessments.

3. The tests are no better than ‘gut feel’

The judges on The Voice hold blind auditions for a very good reason. While they know that they’re all able to assess who has a great voice, they also know that unconscious bias will always creep in and influence their decisions.

No matter how objective and impartial we think we are, biases will always influence our decision-making. As humans, we instinctively try to save resources and make quick decisions with minimal effort, taking in as little information as possible to reach what feels like a valid decision.

One of the major culprits is confirmation bias – when we make an initial judgement about someone, we then look for evidence that confirms this judgement and overlook anything that doesn’t support it.

Wharton professor and author of Originals: How Non-Conformists Move the World, Adam Grant gives a nice example in this Huffington Post article. He tells the story of Ari, a maths major who built robots in his spare time and had applied for a sales role that Grant himself had filled the year before.

During the interview, Ari didn’t once make eye contact, which led Grant to conclude that he had poor social skills and wouldn’t be able to build effective relationships with clients. When Grant told his president about his observations, the president laughed and said ‘Who cares about eye contact? This is a phone sales job!’

Grant had fallen prey to confirmation bias. Because he believed early on that Ari would be no good at sales (or as Grant put it, he wasn’t Mini-me), he missed other clues, such as how well he built rapport, asked questions and thought creatively.

Looking at Ari from that angle, they re-assessed him and gave him the job. He performed brilliantly.

Google’s Laszlo Bock has come up with a solution. When making a new hire for Google, a large team conducts a strictly structured interview, which ensures every candidate answers the exact same questions. The interview team – made up of some people who could work with the candidate and people from other teams – take extensive notes about each candidate’s answers.

These answers are then reviewed by an impartial hiring committee, who will make the final hiring decision, without ever meeting the candidate in person. All in an effort to avoid any kind of bias from clouding their judgement and preventing them from hiring the very best people.

Psychometric assessments (well designed and valid ones, as we discussed above) also give you this completely impartial and unbiased view of each candidate. They make sure every single candidate is assessed fairly and equitably, using scientifically sound performance criteria.

Make sure you come back for Part 2 next week, when we address the next 3 objections!


By Cherie Curtis, CEO Revelian

4 Reasons to Trust (valid and reliable) Psychometric Assessments

It goes without saying that human behaviour is incredibly complicated. It’s determined by an intricate combination of factors, and – as you can imagine – trying to predict how a person is going to behave, or react, or perform is no easy task.

Enter psychometrics, whose goal is to get accurate and unbiased insight into people’s mental abilities, personality, and behaviour. But how on earth is this possible?

1. There’s a lot of evidence that they work

Organisational psychologists have spent over five decades researching, creating and rigorously testing psychometric assessments that are robust enough to predict when and why a given person will be successful or not in a given job. And as someone who is working towards becoming an organisational psychologist, let me tell you that these folks are an extremely hard-to-impress, detail-focused and highly sceptical bunch.

There’s now a large body of highly credible scientific evidence that demonstrates that a person’s results on a (valid and reliable) psychometric assessment can strongly predict a number of different work-related factors, including:

• Future job performance: how well they will learn new tasks, solve complex problems and perform on the job
• Organisational fit: whether they’re likely to share the organisation’s values and feel more committed and engaged in their job
• Safety behaviours: how likely they are to accept personal responsibility for safety at work and avoid risky behaviour
• Behaviour and personality: how someone naturally prefers to behave at work, the kinds of behaviours they have adopted, and how difficult it is to sustain behavioural changes
• Emotional intelligence: how well they can identify, understand, manage and use their own and other people’s emotions.

These kinds of assessments also give employers a standardised, fair and equitable way to compare candidates for a role, based on criteria that have been scientifically proven to predict success in a particular role.

2. They need to demonstrate reliability and validity

The next question has to be: how do we actually know that these assessments can really do what they say they will? It all comes down to two little words: reliability and validity.

These two properties are the foundation of psychometric assessments, and are the reason you can have confidence that psychometrics will help you identify and select the right people for a role.

So, what do we mean when we talk about reliability and validity? Let’s take a look at each concept on its own.

Reliability refers to the ability of an assessment tool to produce stable and consistent results. For example, a personality assessment should produce very similar results for the same candidate each time they complete it within a similar time period.

We can break reliability down a little further as well, into sub-categories that include test-retest reliability and internal consistency.

Test-retest reliability is demonstrated when we administer the same test to the same group of people several times and achieve similar results each time. So, if someone is assessed as being a top performer in their first test sitting, a reliable test will give us a similar result the second time they complete the same test.
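In practice, test-retest reliability is usually estimated as the correlation between the scores from the two sittings. A minimal sketch with hypothetical scores:

```python
# Test-retest reliability estimated as the Pearson correlation between
# two sittings of the same test. Scores below are hypothetical; a value
# close to 1.0 indicates stable, consistent measurement.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

first_sitting  = [72, 65, 80, 58, 90, 77]
second_sitting = [70, 68, 78, 60, 88, 79]
print(f"test-retest reliability: {pearson(first_sitting, second_sitting):.2f}")
```

With real candidate data the same calculation is run over hundreds of paired sittings, and published test manuals typically report the resulting coefficient.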

Internal consistency reliability examines the consistency between the different items within a test. This means that if there are two or more items in an assessment that measure the same construct – for example, in a safety assessment, there might be multiple items that assess a person’s locus of control – we would expect that the same person will answer all of the items in a similar way.
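Internal consistency is commonly summarised with Cronbach’s alpha, which compares the variance of individual items against the variance of the total score. A minimal sketch with hypothetical ratings (rows are respondents, columns are items measuring the same construct):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
# Higher alpha (closer to 1) means the items hang together. The ratings
# below are hypothetical; population variance is used consistently, so
# the ratio (and hence alpha) is unaffected by the variance convention.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    """responses: one list of item scores per respondent."""
    k = len(responses[0])
    items = list(zip(*responses))            # transpose: columns = items
    totals = [sum(row) for row in responses]
    item_var_sum = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

responses = [
    [4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [4, 4, 5],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Conventionally, an alpha of around 0.7 or above is taken as acceptable internal consistency for a scale, though the bar depends on how the scores will be used.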

Validity refers to the extent to which an assessment measures what it is intended to measure. For example, a measure of intelligence should measure intelligence, and not something else, such as memory. Like reliability, validity has a number of sub-categories which all need to be met for a test to be considered a legitimate psychometric measuring tool.

A particularly important sub-category is predictive validity. This concept is all about how well a test score can predict performance on a set of future criteria.

A nice example of predictive validity is the incredibly strong and rigorous scientific evidence that a person’s score on a cognitive ability test predicts their future performance at work. In other words, it is very likely that the higher a candidate’s score on a (valid and reliable) cognitive ability assessment, the better their job performance will be.

There are many other cases of strong and rigorous associations between people’s scores on a particular construct and their subsequent performance at work, including:

• A robust association between a candidate’s score on a measure of work reliability or integrity and their rate of absenteeism from work
• A clear association between a candidate’s score on a measure of safety and their likelihood of suffering a workplace injury or accident.

3. They go through an extremely stringent development process

Developing a psychometric test is not the kind of endeavour that can happen overnight. While anyone can pull together a quiz or questionnaire and deliver some results to people (certain magazines do this very well – and they’re fun to complete), constructing a proper, valid and reliable psychometric assessment is a whole other world of complexity.

Because they do have such stringent criteria to meet and need to prove that they can provide genuine information about a candidate’s suitability or ‘fit’ for a particular role, psychometric tests can take up to 10 years to develop.

To be taken seriously, the test developers have numerous hoops to jump through. One of these is making sure that the items in the test are measuring the construct they’re supposed to measure – and just that particular construct – as precisely as possible.

This involves conducting an intricate statistical analysis to determine which items should be eliminated from or retained in the item pool, and whether additional items need to be developed.

Yet another challenge is ensuring that psychometric assessments remain up to date and relevant. This usually means that tests need to be continually updated over time, based on feedback and new research in the field.

4. They have safeguards to prevent faking or response distortion

‘But wait!’ you may say. ‘This is all very well and good, but what about candidates giving the answers they think you want in an assessment?’ And that’s a really good question.

Obviously, when candidates are applying for a job, they’re motivated to show you their very best side. This also means that they’re likely to be tempted to give fake or distorted responses on an assessment, such as telling you they’re more reliable than they really are.

This is a question that psychologists have pondered for many years, and there’s a whole body of psychological literature dedicated to it. From all of this research, there are a number of different – and effective – ways we can reduce the opportunity for candidates to fake their responses, including:

• Verification testing: candidates complete the same assessment (with different questions) a second time under supervised conditions to verify their original results
• Validity scales: checks are built into the assessments (via certain questions or algorithms) to detect whether candidates are trying to present an overly positive image of themselves or their behaviour
• Making candidates aware of the consequences of faking: some psychometric assessment providers (Revelian included) also collect some fairly sophisticated forensic data behind the scenes, and are alerted when candidates exhibit suspicious behaviour. Alerting candidates to this before they begin the assessment, and warning them that their results may be deemed invalid if they do not respond honestly, is a useful and effective way of reducing faking.

So, as you can see, developing and delivering a valid, reliable and robust psychometric assessment is no mean feat, and there are some extremely stringent guidelines attached.

And while this is a burden that we – as psychometric assessment developers – must bear, the great news for employers is that these same stringent guidelines mean you can be confident that tests meeting these requirements will give you accurate, fair and reliable predictions of how candidates will behave and perform at work.


by Jarrah Watkinson