There are different strategies for the assessment of competence. The paper Strategies for the Assessment of Competence, published in The Vocational Aspect of Education, is a good starting point, as it examines three different strategies for the assessment of competence. Each strategy is illustrated by means of an analysis of an assessment programme in operation. The programmes are drawn from an investigation of competency-based training programmes in TAFE colleges, industry and various training institutions carried out in Australia and Great Britain. Even though the paper is based on an investigation carried out in the early 1990s, it is still very useful for developing an understanding of how sound competency-based assessments can be developed. The analyses are still relevant to most of today’s practices, bringing out the strengths and weaknesses of each strategy, as well as the particular problems and difficulties associated with the assessment of competence. They also outline the procedures that must be followed to ensure that the principles of sound assessment are adhered to.
As discussed in the paper, the success or failure of a competency-based assessment depends on the quality of the assessment methods employed and on the methods of monitoring student progress and recording the competence achieved.
There are three fundamental questions which must be answered before valid and reliable assessments can be developed:
- What forms or types of evidence should be sought?
- How much evidence is required?
- Where should the evidence be provided? (Or who has the responsibility for making the judgements?)
Answers to these questions determine the shape and size of the assessment.
The assessment strategies outlined in the paper are different because they answer these three fundamental questions differently. They base their judgements about competence on different forms and amounts of evidence and adopt different procedures for obtaining this evidence. They also reflect different views of competent performance and face different practical problems in obtaining the evidence required.
Assessment based on samples of performance
One frequently used strategy is assessment based on samples of performance. With this strategy, the major evidence of competence is derived from samples of performance in specially arranged assessment events such as practical tests, exercises and simulations. These are generally supported by what is referred to as “supplementary evidence”, derived from written and oral questions and multiple-choice tests. The practical assessments are designed to measure the technical or performance aspects of competence, while the written and oral tests are usually designed to measure underpinning knowledge and understanding.
This approach to assessment requires that the criteria on which judgements about competence are based are worked out for each assessment event. Assessments against these criteria are generally made on a pass/fail basis. Learners are generally assessed individually when they are ready and judged as ‘competent’ or ‘not competent’. Records of progressive achievement are kept as individuals reattempt assessment events (multiple attempts are usually allowed) until competence is achieved.
A common challenge with this assessment strategy is a tendency to fragment the assessment of competence and to assess many separate elements at the expense of more holistic assessment. That can be overcome by drawing together a number of elements into integrated tasks and assessments.
Another challenge can arise from the complex and time-consuming recording of learner progress, which is required to keep track of competence achieved and not achieved, and the number of successful and unsuccessful attempts, for all learners and all elements of competence. This is particularly challenging if the process of record keeping is manual. The implementation of learning management systems (LMSs) or computer-managed learning and assessment has made this process more efficient and cost-effective while providing adequate records.
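To make the record-keeping requirement concrete, here is a purely illustrative sketch of the kind of data structure an LMS might maintain: every attempt at every element of competence, per learner, until competence is achieved. The class, learner names and element names are all hypothetical, not drawn from any particular system.

```python
from collections import defaultdict


class CompetenceRecord:
    """Illustrative progress record: each attempt at each element of
    competence is stored per learner, so progressive achievement and
    the number of attempts can be reported."""

    def __init__(self):
        # learner -> element -> list of attempt outcomes (True = competent)
        self._attempts = defaultdict(lambda: defaultdict(list))

    def record_attempt(self, learner, element, competent):
        self._attempts[learner][element].append(competent)

    def is_competent(self, learner, element):
        # Competence is achieved once any attempt succeeds.
        return any(self._attempts[learner][element])

    def attempt_count(self, learner, element):
        return len(self._attempts[learner][element])


# Multiple attempts are usually allowed until competence is achieved.
record = CompetenceRecord()
record.record_attempt("lena", "wiring a circuit", False)
record.record_attempt("lena", "wiring a circuit", True)
```

Even this minimal sketch shows why manual record keeping becomes burdensome: the number of entries grows with learners, elements and attempts, which is exactly the bookkeeping that computer-managed systems automate.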
Performance evidence from natural observation in the workplace
A second strategy for the assessment of competence is based on performance evidence from natural observation in the workplace. The major source of evidence is the observation of natural or routine work performance on the job, carried out by supervising tradespersons each time a trainee does work in one of the competency areas as part of the ‘normal daily work routine’. Records of each experience are maintained. This approach appears to have validity, as the performance being assessed is real workplace performance under real conditions and is related to the units and elements of competence set out in the statement of standards. Assessment is holistic, as it is based on real work, which generally requires the integration of skills, knowledge and attitudes.
To be effective, this approach requires multiple observations of natural or routine work performance across all elements of competence. It usually involves multiple assessors, along with careful tracking and recording of the competence achieved. The extent to which this strategy is reliable, fair to all trainees, and generally practicable and cost-effective depends on:
- adequate quality control to ensure consistency across assessors;
- the quality and range of workplace experience and training that can be provided by tradespeople;
- sensible decisions about the number of observations of performance required to make a reliable judgement about competence, as the combinations of circumstances in range statements and performance criteria can lead to a multiplication of assessment events.
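The multiplication of assessment events mentioned in the last point can be sketched with a small calculation. The element of competence, its range-statement variables and its performance criteria below are entirely hypothetical; the point is only that observing every combination of circumstances against every criterion quickly becomes impractical.

```python
from itertools import product

# Hypothetical element of competence with three range-statement variables.
range_statement = {
    "material": ["timber", "metal"],
    "tooling": ["hand tools", "power tools"],
    "conditions": ["workshop", "on site"],
}
performance_criteria = ["accuracy", "safety", "speed"]

# One observation per combination of circumstances:
combinations = list(product(*range_statement.values()))

# Observing every combination against every criterion multiplies events:
events_needed = len(combinations) * len(performance_criteria)
# 2 x 2 x 2 = 8 combinations, x 3 criteria = 24 potential assessment events
```

Even with only three two-valued variables and three criteria, exhaustive observation would require 24 events per trainee for a single element, which is why sensible sampling decisions are essential.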
Evidence from prior achievements or learning
A third strategy of assessment is based on evidence from prior achievements, where judgements about competence are based on recognising, assessing and accrediting prior achievements and learning. This is often achieved by learners providing a portfolio of evidence documenting their achievements. The portfolio may be supported by evidence from supplementary sources such as interviews, oral and written questions and simulations. This approach is more likely to be used to assess higher-level competency standards, such as those in management or supervisory training, where natural observation of performance in the workplace may not be a viable option.
To ensure that the required standards are reached, steps need to be taken so that the process of evidence collection and documentation is more than a file-keeping exercise. To enhance reliability, candidates can be trained in documentation and reflection techniques, and assessors need expertise in facilitating and assessing these techniques.
Activity
Take some time to think about the three main strategies and reflect on your experience with competency-based assessment. Which of the three strategies have you come across or used yourself to assess competency? Did you face similar challenges? Have you found ways to overcome the practical problems that you can share with other course participants? If you have not used or experienced any of the strategies, which one do you think would work best in your current context, and why?
Discussion
Share your reflection, experience and ideas in the CTVSD2 forum.
Additional resources
Here are some additional resources on strategies for assessing competence:
Not just falling over the line? A snapshot of competency-based assessment – this report explores whether competency-based assessment is meeting the needs of users. It raises a number of issues, including grading, the quality of competency standards and their treatment of underpinning knowledge, who are appropriate assessors, and what resources are needed to support assessment. The report puts forward strategies to improve competency-based assessment, directed at policy-makers and registered training organisations, that you may want to consider and, if appropriate, put forward for consideration in your institution.
Evaluating on- and off-the-job approaches to learning and assessment in apprenticeships and traineeships – this report summarises the views of Australian apprentices and trainees on their learning and assessment experiences in on- and off-the-job settings. It also examines through some 20 case studies the strengths and weaknesses of the approaches to learning and assessment that are being used today.
Improving the validity of competency-based assessment – this study considers the status of validity in the context of the assessment of VET in Australia. The project has involved reviewing the literature, reporting the outcomes of case studies, presenting the key findings and developing a diagnostic tool to guide assessors.
Vocational voices is the official podcast of the National Centre for Vocational Education Research, where you can hear leading experts discuss current trends in vocational education and training.
Competency-Based Training (CBT): An Introductory Manual for Practitioners, produced by the International Labour Organization (ILO) in 2020, is a very useful resource that aims to provide TVET trainers and developers with a basic understanding of the steps involved in designing competency-based programmes. Step 12: Understanding and designing assessments, on page 58, provides a good overview of the common assessment strategies, with comparisons of theoretical/knowledge-based assessment items as well as of the most common practical assessment methods, listing their strengths and weaknesses.
A Guide to Writing Competency Based Training Materials is written to assist educators and trainers to design and deliver competency-based training programmes. It is worth exploring the entire guide, as it will expose you to a good deal of the content and help you refresh or develop your knowledge of CBT. For the purpose of designing competency-based assessment, refer to the Assessment of competence section (pages 18 to 24), as it provides an overview of forms or types of assessment, principles of assessment, evidence-gathering methods, types of assessment evidence, and examples of assessment methodologies.