Chair of Work and Environmental Psychology

TOOLBOX for evaluating the training of rescue workers

This toolbox was created as part of the FIRE cooperation projects between the NRW Fire Service Institute and the University of Münster under the project management of Prof Dr Meinald Thielsch. Further information can be found on the FIRE project website at https://www.uni-muenster.de/OWMS/bfo/projekte/fire/index.html.

 

A systematic evaluation does not rest on the subjective judgements of individuals but provides reliable findings about the quality of courses. Its standardised procedure makes the results highly comparable.

 

At FIRE, we understand evaluation to mean the description and assessment of rescue worker training. To this end, data is collected, analysed and interpreted.

 

An evaluation shows what is already going well and where there is still room for improvement. If several evaluations are carried out, it is possible to compare different courses or to analyse developments over time.

Steps

Preparation

A solid data basis ensures meaningful results. It is therefore important to have good measuring instruments (usually questionnaires) to hand. In FIRE, we have already developed several evaluation questionnaires specifically for the fire service context. These provide information on various dimensions of training quality, e.g. instructor behaviour or the level of requirements. The first step is to select the appropriate questionnaire.

To ensure that everything runs smoothly, the data collection should be well prepared. You can find a checklist here:

Preparation & survey checklist

Data collection

The survey takes place at the end of a course. If an examination is planned, it is conducted before the examination; an examination evaluation (FIRE-P) takes place after the examination but before the results are announced.

To motivate participation in the evaluation, its purpose should be explained. Participants should recognise that their feedback is valued and has a positive effect (e.g. improved teaching quality, a learning opportunity for the teaching team, recognition for good teaching).

For more information, see the checklist in step 1.

Analysis

There are fixed rules for analysing the questionnaires, e.g. the order in which the individual values are added up or how missing values are dealt with. This ensures that the results are always the same, regardless of who analyses the questionnaires; this is referred to as evaluation objectivity. For the FIRE questionnaires, there are evaluation aids to support the statistical analysis and result sheets for a clear presentation of the results.

Evaluation guide
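As an illustration, the following minimal sketch in Python scores a scale the way such fixed rules typically prescribe: the individual item values are averaged and missing values are skipped. The item data, the 1-to-5 rating range and the at-least-half-answered rule are assumptions for this example, not the official FIRE evaluation rules.

# Minimal scoring sketch. Assumptions: 1-5 rating scale, a scale value is
# the mean of its items, and it is only computed if at least half of the
# items were answered. Not the official FIRE evaluation rules.

def scale_mean(ratings, min_answered=0.5):
    """Average the answered items of one participant; None marks a missing value."""
    answered = [r for r in ratings if r is not None]
    if len(answered) < min_answered * len(ratings):
        return None  # too many missing values: no scale value for this person
    return sum(answered) / len(answered)

def course_mean(per_participant):
    """Average the valid scale values across all participants."""
    valid = [v for v in per_participant if v is not None]
    return sum(valid) / len(valid) if valid else None

# Example: three participants answering a four-item scale
participants = [
    [4, 5, 4, 4],
    [3, None, 4, 3],        # one missing answer: still scored
    [None, None, None, 2],  # too many missing answers: excluded
]
print(course_mean([scale_mean(p) for p in participants]))  # about 3.79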

 

Interpretation

Interpretation and categorisation should only begin once all the results are available.

The first step is to look at the calculated mean values. The scale descriptions help to interpret them. The values can be categorised using the following key questions (the first of which is illustrated in the sketch after this list):

  • Are the absolute values in the lower, middle or upper range of the scale?
  • How do the values relate to each other? What are the strengths and weaknesses?
  • How do the values compare to previous or other courses?
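The first key question can be illustrated with a small sketch. The 1-to-5 scale range and the division of the range into thirds are assumptions chosen for this example; in practice, the actual FIRE scale descriptions should guide the interpretation.

# Sketch: place a mean value in the lower, middle or upper range of the scale.
# Assumptions: 1-5 rating scale, thirds of the range as cut-offs.

SCALE_MIN, SCALE_MAX = 1.0, 5.0

def scale_region(mean_value):
    """Return which third of the scale range a mean value falls into."""
    third = (SCALE_MAX - SCALE_MIN) / 3
    if mean_value < SCALE_MIN + third:
        return "lower"
    if mean_value < SCALE_MIN + 2 * third:
        return "middle"
    return "upper"

print(scale_region(4.3))  # -> upper
print(scale_region(2.1))  # -> lower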


This is usually followed by the question: Why did the evaluation turn out the way it did? In order to answer this question, the results should always be discussed with the participants if possible. Open comments can also provide important clues here.

Utilising the results

The evaluation process does not end with the analysis and interpretation of the results. It is now a matter of deriving practical benefits from the results.

  • If the evaluation results are very good, those responsible (usually the teaching team) should be recognised accordingly. If the results are less good, there should be no search for someone to blame. Instead, the focus should be on how better results can be achieved in future courses and what support may be necessary to achieve this.
  • Specific goals for future courses should be formulated on the basis of the evaluation results. What measures can be used to build on strengths and reduce weaknesses? Which suggestions from participants can be implemented?
  • The evaluation should be repeated in future courses to check whether the measures taken are having the desired effect; a minimal sketch of such a comparison follows below.
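As a minimal illustration of such a check, the following sketch compares the mean ratings of two course runs. The data are invented, and the use of Welch's t-test from SciPy is an assumption for this example rather than a prescribed FIRE procedure.

# Sketch: did a measure have an effect? Compare the ratings of the course
# before the measure with those of the following run (invented data).
from statistics import mean
from scipy.stats import ttest_ind  # Welch's t-test

before = [3.2, 3.5, 3.0, 2.8, 3.4, 3.1]  # scale means from the earlier course
after  = [3.8, 4.1, 3.6, 3.9, 4.0, 3.7]  # same scale, later course

t, p = ttest_ind(after, before, equal_var=False)
print(f"mean before: {mean(before):.2f}, after: {mean(after):.2f}")
print(f"Welch t = {t:.2f}, p = {p:.3f}")  # a small p suggests a real difference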

All FIRE questionnaires

FIRE core questionnaire

Field of application:

Group leader, platoon leader, unit leader training

Scope:

21 mandatory questions
3 optional questions
1 open question

+ possible additional modules (see below)

Downloads & Links

FIRE core questionnaire

FIRE: Detailed documentation [German]

FIRE: Journal article

 

FIRE-B

Field of application:

Basic training at municipal level

Scope:

29 mandatory questions
3 optional questions
1 open question

Downloads & Links

FIRE-B

FIRE-B: Detailed documentation [German]

FIRE-B: Journal article

FIRE-CU

Field of application:

Staff training

Scope:

22 mandatory questions
1 optional question
1 open question

+ one possible additional module (CPX, see below)

Downloads & Links

FIRE-CU

FIRE-CU: Detailed documentation

FIRE-CU: Journal article

Field of application:

Evaluation of examinations

Scope:

12 mandatory questions
11 optional questions
1 open question

Downloads & Links

FIRE-P

FIRE-P: Detailed documentation

FIRE-ST

Field of application:

Evaluation of one-day events

Scope:

20 mandatory questions
2 optional questions
1 open question

Downloads & Links

FIRE-ST

FIRE-ST: Detailed documentation