The Assessment Research presentations offered below aim to share IB assessment research outcomes with the academic community and the IB community as the impact of these outcomes on IB programmes and processes unfolds.
Towards the validation of an IB-Cito taxonomy for digital assessment items and objectives – a call for participation
Rebecca Hamer & Caroline Jongkamp
While models exist to classify digital items by their design characteristics, these models do not seem to provide sufficient guidance on the link between item type and the assessment of complex thinking skills. Since 2018, IB and Cito have been collaborating to develop a new taxonomy for digital assessment items that links item types to assessment objectives, with the aim of developing guidelines on how to choose between different assessment item types. This presentation includes a call for item developers and assessment experts to contribute to a validation study taking place in 2021 and 2022.
Developing command terms for performance and creating in the arts
AEA-Europe 2019 – Lisbon, Portugal
Rebecca Hamer & Christina Collazo-Haaf
Expected standards of student achievement can be communicated using action verbs or command terms, especially when these are clearly defined. However, many commonly used command terms focus on cognitive learning and may not be appropriate for describing achievement in the performing arts. This presentation reports ongoing work on developing command terms linked to specific, recognisable levels of student attainment in performing and creating in music.
Marking by Question-Item-Group (QIG): A survey of the examiner experience
AEA-Europe 2018 – Arnhem, Netherlands
Katie Schultz & Rebecca Hamer
Since 2016, International Baccalaureate (IB) examiners have been marking some IB exams by subsection, or Question-Item-Group (QIG). However, achieving the benefits of QIG marking has been less straightforward than anticipated, and there are complex trade-offs between its benefits and costs. This presentation reports survey results on the examiner experience of marking by QIG. For examiners, a major perceived benefit was that marking by QIG allowed them to become better acquainted with QIG-specific marking standards and criteria.