2.9 Feedback
Computerized assessment allows for a variety of feedback forms. Feedback can relate to the answering process as well as to the answers themselves.
2.9.1 Feedback during the Assessment
Different forms of feedback during the assessment can be distinguished by whether they are always displayed or retrieved optionally. Some of the basic options are briefly described in this section.
Real-Time Feedback while Responding: Interactive items can be designed so that the cognitive operations for completing a task change as part of the response format. This type of feedback during test-taking is illustrated by the example matrices task in Figure 2.23. In this example, once an answer is chosen, it is immediately presented in the context of the stimulus, supporting test-takers in verifying the selected answer without explicitly providing feedback about its correctness.
Real-Time Feedback about Results: Closed response formats or automatically scored items allow explicit feedback about results, provided either immediately or with a delay (Shute 2008). As discussed in van der Kleij et al. (2012), the operationalization of immediate (e.g., feedback given during the completion of an item) and delayed (e.g., feedback given directly after completion of all items in the assessment) differs across research. Different types of feedback (e.g., scaffolding feedback, Finn and Metcalfe 2010) are investigated and used in practice, and real-time performance feedback is considered particularly attractive for formative assessments (DiCerbo, Lai, and Matthew 2020).
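As a minimal illustration of this idea, the following sketch scores a single-choice response automatically and returns an immediate feedback message; the item key and the feedback texts are hypothetical placeholders.

```r
# Minimal sketch: immediate feedback for an automatically scored
# single-choice item (key and feedback texts are hypothetical placeholders).
score_and_feedback <- function(response, key = "C") {
  correct <- identical(response, key)
  list(
    score = as.integer(correct),
    feedback = if (correct) {
      "Correct. You can continue with the next task."
    } else {
      "Not correct. Please review the stimulus before continuing."
    }
  )
}

score_and_feedback("B")   # immediate feedback after an incorrect answer
score_and_feedback("C")   # immediate feedback after the correct answer
```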
Feedback about (Remaining) Time: Especially when moderate time limits are used for larger test sections, it is helpful to integrate feedback about the remaining time into the items to make the information available to all test-takers in a comparable (standardized) form.
The feedback can be implemented by displaying the remaining time numerically or by using a graphical visualization that represents the remaining time only roughly (so as not to create unnecessary time pressure).
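Assuming an assessment implemented with R and Shiny, displaying the remaining time could be sketched roughly as follows; the ten-minute section limit and the update interval of one second are assumed values.

```r
library(shiny)

ui <- fluidPage(textOutput("time_left"))

server <- function(input, output, session) {
  # Assumed section time limit of 10 minutes, started when the session begins.
  end_time <- Sys.time() + 10 * 60
  output$time_left <- renderText({
    invalidateLater(1000, session)   # refresh roughly once per second
    remaining <- as.integer(difftime(end_time, Sys.time(), units = "secs"))
    if (remaining <= 0) return("Time is up.")
    sprintf("Remaining time: %02d:%02d", remaining %/% 60, remaining %% 60)
  })
}

shinyApp(ui, server)
```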
Alternatively, as illustrated by the item in Figure 6.37, feedback on the remaining time can be provided at the item level, for example, by displaying a hint if an item has not yet been completed after a predefined time threshold.
Feedback about Task Progress: In addition to the remaining time, it is also helpful in computer-based assessments, where progress cannot be inferred from a printed test booklet, to provide feedback on the overall completion of the test, as shown in Figure 2.25.
Feedback on Task Completion: Feedback about the processing status can also distinguish between visited pages and answered items. In that case, the number of tasks that still need to be completed can also be read from a progress display. A simple example is included in Figure 2.25. The graphical design can be more elaborate if multiple items are combined on individual pages.
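The underlying bookkeeping for such a progress display can be kept simple, as in the following base R sketch; the state vectors for visited pages and answered items are hypothetical examples.

```r
# Sketch: progress feedback distinguishing visited pages and answered items
# (the state vectors are hypothetical examples).
visited  <- c(p1 = TRUE, p2 = TRUE, p3 = FALSE, p4 = FALSE)
answered <- c(i1 = TRUE, i2 = FALSE, i3 = FALSE, i4 = FALSE)

progress_message <- function(visited, answered) {
  sprintf("Pages visited: %d of %d | Items answered: %d of %d",
          sum(visited), length(visited),
          sum(answered), length(answered))
}

progress_message(visited, answered)
#> [1] "Pages visited: 2 of 4 | Items answered: 1 of 4"
```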
Feedback about Navigation: As shown already in Figure 2.13, as soon as test-takers cannot navigate back after leaving a section, unit, or page, a feedback dialogue or popup message is often used to inform them about the consequences of continuing.
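If the assessment is implemented with Shiny, such a confirmation dialogue could be sketched as follows; the button labels and the message text are illustrative assumptions.

```r
library(shiny)

ui <- fluidPage(actionButton("next_unit", "Next unit"))

server <- function(input, output, session) {
  # Show a warning dialogue before the test-taker leaves the unit.
  observeEvent(input$next_unit, {
    showModal(modalDialog(
      title = "Leaving this unit",
      "After continuing, you will not be able to return to the previous items.",
      footer = tagList(
        modalButton("Stay on this page"),
        actionButton("confirm_next", "Continue")
      )
    ))
  })
  observeEvent(input$confirm_next, {
    removeModal()
    # hypothetical: trigger the actual navigation to the next unit here
  })
}

shinyApp(ui, server)
```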
Missing Value Feedback: The number of unfinished tasks can be displayed continuously or when leaving a unit. If test-takers cannot navigate back to a previous section, it makes sense to display a warning that can be designed differently depending on whether all items have been answered (feedback about navigation) or items still need to be answered (missing value feedback).
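A simple way to derive such a warning is to count the unanswered items before a unit is left, as in the following base R sketch, in which NA marks items that have not yet been answered and the response vector is a hypothetical example.

```r
# Sketch: missing value feedback before leaving a unit
# (NA marks items that have not been answered yet).
responses <- c(i1 = "B", i2 = NA, i3 = "D", i4 = NA)

missing_value_feedback <- function(responses) {
  n_missing <- sum(is.na(responses))
  if (n_missing == 0) {
    "All items answered. You will not be able to return after continuing."
  } else {
    sprintf("%d item(s) are still unanswered. You will not be able to return after continuing.",
            n_missing)
  }
}

missing_value_feedback(responses)
```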
Rapid Guessing Feedback: Feedback about test-taking behavior, such as rapid guessing, can be provided to test-takers, either automatically [see, e.g., N.N.] or by test administrators (see, e.g., Wise, Kuhfeld, and Soland 2019).
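A common operationalization of rapid guessing relies on item-level response time thresholds. The following sketch flags responses below an assumed threshold of three seconds; in practice, thresholds are typically determined per item, and the response times shown are hypothetical.

```r
# Sketch: flag potential rapid guessing from response times
# (the 3-second threshold and the response times are illustrative assumptions;
#  in practice, thresholds are often determined per item).
response_times <- c(i1 = 2.1, i2 = 14.8, i3 = 1.4, i4 = 22.3)  # seconds
threshold <- 3

rapid_guess <- response_times < threshold
if (any(rapid_guess)) {
  message(sprintf("Please take your time. %d response(s) were given very quickly.",
                  sum(rapid_guess)))
}
```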
Feedback on Consistency: Feedback about responding (too) quickly or unexpectedly fast is just one of many possibilities. If the software allows it, person-fit measures, for instance, can also be used to give feedback about inconsistent answer patterns.
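As an illustration only, the standardized log-likelihood person-fit statistic lz can be computed for the Rasch model from a response vector, an ability estimate, and calibrated item difficulties; all values in the following sketch are hypothetical, and clearly negative values indicate unexpected response patterns.

```r
# Sketch: person-fit statistic lz for the Rasch model
# (theta, item difficulties, and responses are hypothetical values).
lz_rasch <- function(x, theta, b) {
  p  <- plogis(theta - b)                          # expected success probabilities
  l0 <- sum(x * log(p) + (1 - x) * log(1 - p))     # observed log-likelihood
  e  <- sum(p * log(p) + (1 - p) * log(1 - p))     # expected log-likelihood
  v  <- sum(p * (1 - p) * (log(p / (1 - p)))^2)    # variance of the log-likelihood
  (l0 - e) / sqrt(v)
}

x     <- c(0, 1, 0, 1, 1, 0)             # scored responses (1 = correct)
b     <- c(-1.5, -0.5, 0, 0.5, 1, 1.5)   # calibrated item difficulties
theta <- 0.2                             # ability estimate
lz_rasch(x, theta, b)                    # clearly negative values suggest inconsistency
```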
2.9.2 Feedback after the Assessment
Once data collection is completed for a single test-taker, there are multiple opportunities for the further use of the data for feedback purposes. Steps that can be relevant for creating feedback after an assessment include the scoring of responses (see section 2.5.1) and the estimation of ability (using calibrated items, see sections 2.5.4 and 2.5.5).
Technical Platform for IRT Methods: When an IRT model is used to model the measured construct, appropriate psychometric algorithms must be used for ability estimation with pre-calibrated item parameters (i.e., counting the number of correct responses is not enough if the raw score is not a sufficient statistic). The necessary functions for ability estimation with various IRT models are freely available within software packages for data analysis (for instance, in the form of the R package TAM, Robitzsch, Kiefer, and Wu 2022). In order to use these functions also for operational computer-based assessments, a corresponding platform must be available (e.g., by using R with the help of environments such as ShinyProxy or OpenCPU, or the R package plumber), or the delivery environment must provide the required IRT functions.
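Independent of the concrete platform, the core operation can be sketched in base R as maximum likelihood estimation of theta for the Rasch model with fixed, pre-calibrated item difficulties; packages such as TAM provide more general and more robust implementations (e.g., weighted likelihood estimation), and all values below are hypothetical.

```r
# Sketch: ML ability estimation for the Rasch model with
# pre-calibrated item difficulties (all values are hypothetical).
estimate_theta <- function(x, b) {
  loglik <- function(theta) {
    p <- plogis(theta - b)
    sum(x * log(p) + (1 - x) * log(1 - p))
  }
  optimize(loglik, interval = c(-6, 6), maximum = TRUE)$maximum
}

b <- c(-1.2, -0.4, 0.3, 0.9, 1.6)   # item difficulties from calibration
x <- c(1, 1, 0, 1, 0)               # scored responses of one test-taker
estimate_theta(x, b)                # ML estimate of theta
```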
Technical Platform for Report Generation: A technical solution for the automatic creation of feedback is also necessary to create texts and, if necessary, graphics, and to combine them into HTML, PDF, or other documents [e.g., R and the package knitr, Xie (2015); see section 7.3.5 for an example].
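A minimal sketch of such report generation, assuming a parameterized R Markdown template (the file name report_template.Rmd and its parameters are hypothetical), could look as follows.

```r
library(rmarkdown)

# Sketch: render one feedback report per test-taker from a parameterized
# R Markdown template (file name, parameters, and result data are hypothetical).
results <- data.frame(id = c("P001", "P002"), theta = c(-0.3, 0.8))

for (i in seq_len(nrow(results))) {
  render(
    input       = "report_template.Rmd",
    params      = list(id = results$id[i], theta = results$theta[i]),
    output_file = paste0("feedback_", results$id[i], ".pdf")
  )
}
```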
Technical Inclusion of Process Indicators: In addition to outcome variables, process indicators, representing aggregations of Low-Level Features that can be extracted from log data using algorithms (see section 2.8), can be helpful in generating feedback. A technical platform is also required for this analysis of the log data, into which, for example, the R package LogFSM (see section 2.8.5) can be integrated.
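Independent of a specific tool, a simple process indicator such as time on task can be derived from timestamped log events, as in the following base R sketch with hypothetical log data.

```r
# Sketch: time on task per item aggregated from timestamped log events
# (the log data frame is a hypothetical example).
log_data <- data.frame(
  item      = c("i1", "i1", "i2", "i2"),
  event     = c("enter", "leave", "enter", "leave"),
  timestamp = as.POSIXct(c("2024-05-01 10:00:00", "2024-05-01 10:01:30",
                           "2024-05-01 10:01:30", "2024-05-01 10:03:10"))
)

time_on_task <- sapply(
  split(log_data$timestamp, log_data$item),
  function(t) as.numeric(difftime(max(t), min(t), units = "secs"))
)
time_on_task   # time on task per item in seconds
```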