The Theoretical Underpinnings of Reporting

Why Report on Learning

Accurate record-keeping and reporting on learning, through measurement of completion rates, is critical to maintaining compliance with regulatory standards. It allows approved providers to monitor employees' training records easily. During an audit, L&D teams will regularly be required to produce learning records as evidence of compliance with the relevant standards. Providers will also need to draw on evidence to show they:

  • Understand the requirements;
  • Can demonstrate they have applied the requirements;
  • Monitor how they are applying the requirements and the outcomes they achieve; and
  • Have an effective system and process to continually improve their ability to meet the requirements.

The Challenge of Reliable Reporting

Reliability, accuracy, and accessibility in generating and retrieving reports are crucial for organisations to maintain high learning and development standards. In practice, this often comes down to the quality of the training system being used by the approved provider.

Organisations that use manual reporting methods or underdeveloped learning management systems (LMS) often have difficulty accessing key reporting metrics. For example, the compliance rate of individual staff, the compliance rate across different facilities, and the total training completed by an entire team or organisation all need to be reported and kept up to date, often immediately during an audit. If this information cannot be reliably produced when required because a provider is using manual record-keeping or a poorly functioning LMS, compliance with the standards may be jeopardised.
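To make these rollups concrete, here is a minimal sketch in Python of how completion records might be aggregated into the compliance rates described above. The record structure and field names are illustrative assumptions, not any actual LMS schema.

```python
from collections import defaultdict

# Hypothetical training records: one row per (staff member, assigned activity).
# Field names are invented for illustration only.
records = [
    {"staff_id": "s1", "facility": "North", "activity": "Manual Handling",   "completed": True},
    {"staff_id": "s1", "facility": "North", "activity": "Infection Control", "completed": False},
    {"staff_id": "s2", "facility": "South", "activity": "Manual Handling",   "completed": True},
]

def compliance_rate(rows):
    """Share of assigned activities that have been completed."""
    return sum(r["completed"] for r in rows) / len(rows) if rows else 0.0

# Compliance rate of individual staff.
by_staff = defaultdict(list)
for r in records:
    by_staff[r["staff_id"]].append(r)
staff_rates = {sid: compliance_rate(rows) for sid, rows in by_staff.items()}

# Compliance rate across different facilities, and for the whole organisation.
by_facility = defaultdict(list)
for r in records:
    by_facility[r["facility"]].append(r)
facility_rates = {f: compliance_rate(rows) for f, rows in by_facility.items()}
org_rate = compliance_rate(records)

print(staff_rates, facility_rates, f"org: {org_rate:.0%}")
```

A well-built LMS produces exactly these views on demand; the point of the sketch is that each audit-ready figure is a simple aggregation over the same underlying completion records.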

Another common challenge is finding reliable ways to demonstrate links between organisations’ training reports and their relevance to the applicable training standard. Ausmed’s LMS combats this by enabling reporting of learning against built-in quality standards. This means providers can quickly show how much training has been done according to the applicable standards.

But does reporting stop there? Is this the extent to which we should report?

The answer is no. Why? Because the essence of best-practice organisational learning is demonstrating training and education outcomes. Learner completion and evaluation data, reflection, and documentation are critical to helping an L&D team determine whether the activities they assigned to staff achieved the desired learning outcomes.

More than Just Compliance

This article takes a function every L&D team performs routinely, reporting, and examines why we do it. We will review a simple, validated model that L&D teams can use to explain the theory behind the practice, and use concrete examples to link the primary evaluation data available to L&D teams and educators with that model.

With this foundational knowledge, L&D teams will be able to demonstrate their impact on broader organisational goals and be encouraged to look beyond reporting as a compliance function only.

The New World Kirkpatrick Model

The New World Kirkpatrick Model is simple. It is a four-level evaluation process that goes beyond the traditional method of recording participant attendance, with each level building on the previous one. It is used extensively to evaluate the effectiveness of education because it provides a framework for obtaining, organising and analysing data from learning activities. Many other evaluation models exist.

The model forms the basis of Ausmed’s evaluation policy. We use it because it sheds light on outcomes relevant to the individual participant (the learner), the organisation (for example, an aged care provider using the Ausmed LMS™ and the Ausmed Library™), and the provider of the educational activity (Ausmed).

In short, levels one (Reaction) and two (Learning) address the participants’ satisfaction with the learning experience and the knowledge and skills gained. Levels three (Behaviour) and four (Results) provide information for the organisation regarding the application of learning in the practice environment and how the training impacts the organisation at the business level.
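As a compact summary before we go level by level, the sketch below (Python; the wording paraphrases this article and is not an official Kirkpatrick artefact) records the question each level answers and whom it chiefly informs:

```python
# The four levels of the New World Kirkpatrick Model, summarised as data.
# Questions and audiences paraphrase the descriptions in this article.
KIRKPATRICK_LEVELS = {
    1: {"name": "Reaction",
        "question": "Did staff find the training engaging and relevant to their jobs?",
        "chiefly_informs": "the learner"},
    2: {"name": "Learning",
        "question": "Did staff acquire the intended knowledge, skills, attitude, "
                    "confidence and commitment?",
        "chiefly_informs": "the learner"},
    3: {"name": "Behaviour",
        "question": "Do staff apply what they learned on the job?",
        "chiefly_informs": "the organisation"},
    4: {"name": "Results",
        "question": "Did the targeted organisational outcomes occur as a result?",
        "chiefly_informs": "the organisation"},
}

for level, info in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({info['name']}): {info['question']}")
```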


Let’s look at the model in more detail.

Level 1: Reaction

This level describes the degree to which staff find their education and training engaging and relevant to their jobs. It also offers a measure of participant attendance and engagement and allows learners to assess how appropriate the training is to their real-life working experiences.

Level 2: Learning

At this level, the degree to which staff acquire the intended knowledge, skills, attitude, confidence and commitment is assessed based on their participation in the training.

In practice, levels one and two of the Kirkpatrick Model are easily achieved, but to adequately meet the strengthened aged care quality standards, levels three and four of the New World Kirkpatrick Model must also be embraced.

Level 3: Behaviour

This level assesses the degree to which participants apply what they learned during training once back in their role.

Level 4: Results

In the view of Kirkpatrick and Kirkpatrick (2020), the ultimate goal of training is to achieve targeted outcomes that result directly from training, support and accountability packages. However, many approved providers struggle to provide evidence that their training packages accomplish the goal of improved quality of care. This challenge needs to be addressed, as auditing standards relating to training are increasingly likely to require this sort of data. It may mean more time must be given to reflective learning and open discussion about implementing best practice.

Examples For L&D Teams

This section provides practical examples of data at each evaluation level that an L&D team could use to evaluate the effectiveness of their training and education initiatives. From basic compliance metrics to evaluating cost savings, we will step through an array of metrics an L&D team can use.

Level 1 Examples - Reaction

Level 1 data is the lowest level in the model. Data here only yields information about the learner’s reaction or satisfaction with the activity.

For example, a survey sent to participants or a basic evaluation form may include questions on attitude toward the material, perceived subject matter expertise, and overall satisfaction with the training.

This type of evaluation is often delivered immediately after training has taken place.

While level one data is limited in scope, it is valuable as it lays the foundation for evaluating other aspects of the training or learning activity. Examples of Level 1 data include:

  • Attendance numbers
  • Completion rate
  • Time spent learning (for payroll purposes)

Most compliance reporting requires Level 1 data at a minimum.
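To illustrate, here is a minimal sketch (Python; the survey fields, rating scale and figures are all hypothetical) of how Level 1 data might be summarised immediately after an activity:

```python
# Hypothetical post-activity evaluation responses (Level 1: Reaction).
# 'satisfaction' and 'relevance' are 1-5 ratings; all names are illustrative.
responses = [
    {"staff_id": "s1", "completed": True,  "minutes": 32, "satisfaction": 4, "relevance": 5},
    {"staff_id": "s2", "completed": True,  "minutes": 28, "satisfaction": 3, "relevance": 4},
    {"staff_id": "s3", "completed": False, "minutes": 10, "satisfaction": None, "relevance": None},
]

completed = [r for r in responses if r["completed"]]
rated = [r for r in completed if r["satisfaction"] is not None]

completion_rate = len(completed) / len(responses)
avg_satisfaction = sum(r["satisfaction"] for r in rated) / len(rated)
total_minutes = sum(r["minutes"] for r in responses)  # e.g. for payroll purposes

print(f"completion: {completion_rate:.0%}, "
      f"mean satisfaction: {avg_satisfaction:.1f}/5, "
      f"time spent: {total_minutes} min")
```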

Level 2 Examples - Learning

Level 2 data aims to evaluate the degree to which learners acquire the intended knowledge (K), skills (S), attitude (A) and confidence (C).

It is often collected during or immediately after learning, but can also be evaluated longitudinally post-training.

It also assesses how well learners can apply their new knowledge and skills, e.g. with a pretest-posttest design. Commitment or intention to change can also be evaluated at this level.
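For example, pretest-posttest results can be summarised with a normalised gain score, as in the minimal sketch below (Python; the scores are invented, and the gain formula is the widely used Hake measure rather than anything prescribed by the Kirkpatrick model):

```python
# Hypothetical pretest/posttest scores (percent correct) for one activity.
scores = [
    {"staff_id": "s1", "pre": 55, "post": 80},
    {"staff_id": "s2", "pre": 70, "post": 85},
    {"staff_id": "s3", "pre": 40, "post": 75},
]

def normalised_gain(pre, post):
    """Hake's normalised gain: share of the possible improvement achieved."""
    return (post - pre) / (100 - pre) if pre < 100 else 0.0

for s in scores:
    s["gain"] = normalised_gain(s["pre"], s["post"])

mean_gain = sum(s["gain"] for s in scores) / len(scores)
print(f"mean normalised gain: {mean_gain:.2f}")  # 1.0 = full possible improvement
```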

Self-reported intent to change (a positive precursor to actual behaviour change) can be assessed at this level, and it is possible to do so via the Ausmed App.

Ajzen’s Theory of Planned Behaviour holds that intention, which takes into account attitude, subjective norm, and perceived behavioural control, predicts behaviour: participants who express a higher level of commitment are more inclined to change their practice.

  • Report via Standards
  • Assessment responses to quizzes/tests, e.g. MCQ, true/false and open-ended questions; these demonstrate the acquisition of new knowledge or the reinforcement of existing knowledge (K)
  • Perceptions derived from reflection on learning demonstrate attitude towards learning (A) and confidence (C)
  • Responses to a case scenario or a problem-based exercise could demonstrate subjective skill acquisition (S)
  • Validation/verification of skill acquisition, most soundly assessed by return skill demonstration (S)
  • Skill matrices could be developed and linked to the training plan to allow correlation of learning with skill acquisition

Organisations with well-structured L&D systems and processes could enhance reporting to collect Level 2 data.

Level 3 Examples - Behaviour

This level of evaluation aims to connect learning and behaviour by measuring the transfer of knowledge, skills, and attitudes from the learning setting to the practice setting.

It asks: what happens on the job post-training?

Standard methods for assessing and collecting level three data include self-reported behaviour change via survey and direct observation (if feasible).

Many other approaches exist for measuring behaviour change.

Traditionally, this type of evaluation occurred 3-6 months post-learning.

However, given rapid learning cycles, data collected as early as 30 days post-training is now considered good quality.

Generating level 3 data is much more complex and time-consuming. Still, it is valuable because it uncovers whether participants were able to apply what they learned in practice.

  • Participant reflection on the feasibility, practicality and sustainability of practice change in the practice setting
  • Could they do it, or were there barriers to implementation?
  • Did they do it? Could they overcome barriers?
  • Are they still doing it? Was the change sustained?
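One lightweight way to operationalise these questions is a repeated follow-up survey. The sketch below (Python; field names and responses are invented) tallies self-reported application at 30, 60 and 90 days and surfaces the reported barriers:

```python
from collections import Counter

# Hypothetical follow-up survey responses collected 30/60/90 days post-training.
# 'applied' answers "Did they do it?"; repeat responses answer "Are they still doing it?".
followups = [
    {"staff_id": "s1", "day": 30, "applied": True,  "barrier": None},
    {"staff_id": "s1", "day": 90, "applied": True,  "barrier": None},
    {"staff_id": "s2", "day": 30, "applied": False, "barrier": "no equipment"},
    {"staff_id": "s2", "day": 90, "applied": True,  "barrier": None},
]

for day in (30, 60, 90):
    cohort = [r for r in followups if r["day"] == day]
    if cohort:
        rate = sum(r["applied"] for r in cohort) / len(cohort)
        print(f"day {day}: {rate:.0%} report applying the training")

# Barriers to implementation, collected for targeted support.
barriers = Counter(r["barrier"] for r in followups if r["barrier"])
print("reported barriers:", barriers.most_common())
```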

Auditing standards will increasingly require this sort of data.

Level 4 Examples - Results

Level 4 data measures the ultimate goal of training - not just the outcomes of training but the impact of those outcomes.

This data is often evaluated 30-90 days after training.

Ongoing support and accountability packages are the gold standard for facilitating level 4 evaluation. This recognises the presence of barriers to implementing and sustaining change, including knowledge decay.

Due to the time lag, other factors may intervene during that period and affect the outcome.

The context of health, aged, or disability care also adds to the challenge of isolating the effects of a specific educational intervention. Typically, an organisation has concurrent efforts in addition to education and training in place.

Thus, best practice is to interpret Level 4 data in the context of, and link it with, other organisational initiatives.

  • Any change in critical quality outcomes pre and post-training?
  • Return on investment (ROI) calculation of a training plan (a simple calculation is sketched after this list)
  • Impact of education programs on staff retention
  • Other cost savings
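For the ROI bullet above, the standard calculation is ROI% = (benefits - costs) / costs x 100. A minimal sketch in Python, with every figure invented for illustration, might look like this:

```python
# Hypothetical training-plan figures; replace with your organisation's data.
costs = {
    "licence_fees": 12_000,
    "backfill_wages": 18_000,   # covering shifts while staff train
    "facilitator_time": 5_000,
}
benefits = {
    "reduced_agency_staffing": 25_000,  # e.g. from improved retention
    "fewer_incident_costs": 15_000,     # e.g. from fewer falls/injuries
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
roi_pct = (total_benefit - total_cost) / total_cost * 100

print(f"cost: ${total_cost:,}, benefit: ${total_benefit:,}, ROI: {roi_pct:.0f}%")
```

The hard part is not the arithmetic but attributing the benefit figures to training, which is why the previous paragraphs stress interpreting Level 4 data alongside other organisational initiatives.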

Level 4 data is critical to retaining staff and securing education budgets over the long term.

What Next? Practical Tips

  • Keep it simple - Not all training activities need to be evaluated at Level 4
  • Small steps - Pick an activity or program and identify which level of data best evaluates the effort
  • Conserve effort - Align the evaluation method with a training program's effort, duration or cost
  • Set a goal - If you are not currently measuring beyond Level 1 data, consider setting a goal to collect one instance of Level 2 data in the next six months

Moving in the Right Direction

When done well, Level 4 data is a robust and sound way to connect the impact and value of training with organisational outcomes. However, resourcing and time constraints make this difficult. The beauty of this simple model is that it offers different evaluation options to suit the activity type. The main message is that L&D teams should work towards moving from simply monitoring basic compliance rates to using a simple model like the New World Kirkpatrick Model to evaluate the effectiveness of various training initiatives, including annual mandatory sessions and other targeted educational efforts.

Linking Quality and Education

A more rigorous approach to evaluating training and education is critical, now more than ever. Action 2.9.5 mandates that providers align and measure their training systems and strategies. Moreover, with the educational demands of the strengthened aged care standards being high, there is an opportunity for L&D teams to work closely with their Quality counterparts to better link quality outcomes with education. Since quality indicator data must already be collected and reported, it provides a valuable opportunity for these teams to collaboratively assess how well-integrated education and quality initiatives can enhance care, reduce risks, and yield measurable cost savings for an organisation.
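As a sketch of what that collaboration could look like in data terms (Python; the facility names, indicator and figures are all invented), a first step is simply to lay training compliance alongside a quality indicator per facility and look for patterns worth investigating. A correlation here is suggestive only, not proof of causation, since concurrent initiatives also affect outcomes.

```python
# Hypothetical per-facility data: training compliance vs a quality indicator.
# Figures are invented; a real analysis would control for other initiatives.
facilities = [
    {"name": "North", "training_compliance": 0.95, "falls_per_1000_bed_days": 6.1},
    {"name": "South", "training_compliance": 0.72, "falls_per_1000_bed_days": 9.4},
    {"name": "East",  "training_compliance": 0.88, "falls_per_1000_bed_days": 7.0},
]

# Pearson correlation, computed by hand to keep the sketch dependency-free.
xs = [f["training_compliance"] for f in facilities]
ys = [f["falls_per_1000_bed_days"] for f in facilities]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = sum((x - mx) ** 2 for x in xs) ** 0.5
sy = sum((y - my) ** 2 for y in ys) ** 0.5
r = cov / (sx * sy)

print(f"correlation between compliance and falls: {r:.2f}")  # negative = fewer falls
```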


Author
Zoe Youl - Head of Community at Ausmed

Zoe Youl is a Critical Care Registered Nurse with over ten years of experience at Ausmed, currently as Head of Community. With expertise in critical care nursing, clinical governance, education and nursing professional development, she has built an in-depth understanding of the educational and regulatory needs of the Australian healthcare sector.

As the Accredited Provider Program Director (AP-PD) of the Ausmed Education Learning Centre, she maintains and applies accreditation frameworks in software and education. In 2024, Zoe led the Ausmed Education Learning Centre to achieve Accreditation with Distinction for the fourth consecutive cycle with the American Nurses Credentialing Center’s (ANCC) Commission on Accreditation. The AELC is the only Australian provider of nursing continuing professional development to receive this prestigious recognition.

Zoe holds a Master's in Nursing Management and Leadership, and her professional interests focus on evaluating the translation of continuing professional development into practice to improve learner and healthcare consumer outcomes. From 2019 to 2022, Zoe provided an international perspective to the workgroup established to publish the fourth edition of Nursing Professional Development: Scope and Standards of Practice. Zoe was invited to be a peer reviewer for the sixth edition of the Core Curriculum for Nursing Professional Development.
