Dear Delegate, If you would like to register for any of the workshops presented below, please indicate the name and date of the workshop and send your request to email@example.com with the subject line "IAEA Conference Workshop". You will then receive a confirmation email, and your name will appear in the registration list of the workshop you have selected.
Why IAEA members should attend this workshop:
The workshop will offer an introduction to a training course that helps secondary-education teachers innovate their practice towards teaching and assessing functional literacy in context, across different educational domains: reading, mathematics and science.
Participants will gain insight into how to support students in developing their learning in relation to:
- participation in assessments using functional contexts related to their own field of expertise / subject;
- understanding why, when and how to assess higher-order thinking skills;
- dealing with contextual tasks containing closed and open-ended questions that assess functional literacy, related to skills such as those defined by the PISA frameworks for functional literacy.
In the comprehensive training program, developed by Cito experts and spanning several working days, participating teachers are expected to become:
- able to integrate functional literacy into their own teaching;
- able to prepare items or tasks for the assessment of functional literacy;
- able to explain to others how to solve items that assess functional literacy.
In this preconference workshop we will illustrate some of these elements by showing best-practice examples from PISA surveys and from national assessments in the Netherlands.
We will encourage participants to do practical work in small groups, as this also stimulates learning through the exchange of experiences, views and opinions with other experts.
Who this Workshop is for:
The workshop is aimed at those who want to learn more about developing good test items that assess higher-order thinking skills in high-stakes tests. Participants may be novice or more experienced users. No prior knowledge is required to attend the workshop, although we aim specifically to invite practitioners who are actively involved in and/or responsible for test and item development projects in their own professional environment.
Participants are invited to bring their own laptops for practicing (Windows and Chrome browser).
The workshop starts with an introduction to the general framework for assessing functional literacy. The concept of functional literacy will be explained: what makes functional literacy different from rote learning? We will show how assessment of functional literacy can be organized and operationalized through assessment frameworks, using the PISA 2015 main survey frameworks as models.
Next, we will learn how to select good contexts before developing test items in functional literacy. Criteria will be shared and a checklist introduced through a practical exercise.
Participants will then develop relevant and correct test items that assess the application of knowledge in real-life contexts. Released (con)texts from previous PISA surveys will be used, but this time without the original PISA items.
Finally, we will conclude with some general findings, and the trainer will give recommendations for further learning (or for use in your own educational context).
Preparation for the workshop:
No special preparation is required; the workshop format will be interactive, allowing participants to discuss their own experiences and/or problems.
“Absolute fairness to every examinee is impossible to attain, if for no other reasons than the facts that tests have imperfect reliability and that validity in any particular context is a matter of degree. But neither is any alternative selection or evaluation mechanism perfectly fair.” (Standards for Educational and Psychological Testing, 1999, p.73)
The workshop will focus on findings that have emerged from our research on assessment fairness, which has drawn on material from a variety of sources, disciplines and jurisdictions in order to illustrate concepts with concrete examples and case studies.
The workshop will consist of an introductory overview followed by four sessions, each separated by group discussion work.
Part One opens with a conceptual preface and distinguishes six different uses of ‘fair’ that are relevant to assessment. We also question several common assumptions: fairness to whom, for example, and whether fairness applies to groups rather than individuals. Debate about fairness in assessment can involve a wide range of people, who bring their own expectations, conceptual apparatus and assumptions. We have found the metaphor of “lenses” useful for describing and distinguishing different approaches, and we apply a common structure of questions to each lens, which helps marshal the structure of the workshop.
In Part Two we consider fairness through the lens of educational measurement and assessment. Fairness viewed through this lens has variants, such as the psychometric paradigms found in authoritative US texts such as the Standards, and the approach to public, award-based qualifications offered by UK awarding bodies, which is grounded in a curriculum-embedded paradigm. First we explore the history of, and consensus on, fairness through a number of key publications, focusing in particular on the Standards and Educational Measurement; we then critique some aspects of that consensus.
In Parts Three, Four and Five we extend the list to lenses that bring in concepts and assumptions from three other disciplines or traditions:
- Legal approaches
- Fairness as a contributor to social reform
- Philosophical approaches
In Part Three we shall explore international case studies in which a mix of statute and common law reflects a range of legal traditions – rights-based, process-based, outcomes-based – increasingly influenced by legislation defining prohibited grounds for discrimination.
In Part Four we explore the links between assessment fairness and social justice. It is precisely because assessment has the potential for important, life-changing impact on students’ current and future well-being, shaping their educational experiences in a multitude of ways and informing their future directions and careers, that assessment fairness is a social justice issue.
In Part Five, we view fairness through the lens of philosophical approaches. Philosophers from Aristotle to John Rawls and beyond have linked fairness with concepts of justice, typically seeing “fairness” as a narrower concept, linked to, but not the same as, the wider concept of justice. We suggest that a closer look at philosophers’ treatment of fairness reveals some common ground with the accounts of assessment theorists.
The proposed structure and content of the workshop bring together a wide range of intellectual disciplines and experiences. Our experience with groups of students, teachers and assessment practitioners suggests considerable interest in educational topics which bridge disciplinary divides and explicitly raise wider questions about social justice and public policy.
The workshop is envisaged as a resource for postgraduate students in educational measurement and assessment; for practitioners in assessment agencies who wish to gain a deeper understanding of the implications of (un)fair assessment; for those with an academic interest in fairness; and for teachers and novices, who should equally be able to benefit from attending.
Why you should attend this workshop
The workshop will offer an introduction to the RCEC review system and the corresponding five-step audit process. You will gain insight into:
- the quality standards to be used in constructing high-quality exam products;
- evidence-based criteria for reviewing the quality of existing exam products;
- the substantive, organisational and psychometric aspects to be evaluated when selecting high-quality exam products.
The workshop starts with an explanation of the nature of the RCEC review system and its elaboration into criteria and underlying sub-questions. Examples will be given to elucidate the function and method of the RCEC review system in practice.
Then you will experience the different aspects of the RCEC review system. In small groups, you will evaluate the quality of the English testing materials of a real-life exam product. Templates and instructions will be handed out. The exercise is divided into two separate assignments: the first focuses on reviewing the quality of the substantive and organisational aspects of the exam product; the second assesses the quality of its psychometric aspects. The assignments will be complemented with an overall, joint recap. Finally, we will give recommendations for further learning and suggestions for implementing the review system in practice.
Who this workshop is for
This workshop is aimed at those who want to learn more about constructing, reviewing and selecting high-quality exam products. Those involved in examinations can use the RCEC review system to construct their tests and exams according to the evidence-based RCEC quality standards. It can also help users of tests and exams, such as teachers, schools, commercial/in-service training providers and examination committees, to assess and select the best-quality tests and exams.
No special preparation or prior knowledge is required to attend our workshop. The specific psychometric criteria to be used will be explained during the workshop. Thank you in advance for your participation, experiences and suggestions. We look forward to meeting you in Baku.
As Bachman & Palmer (1996, 2010) point out, it is important for language teachers to develop a certain level of competence in language assessment. One area in which a lack of such competence can often be seen is the construction of multiple-choice test questions. For all their flaws (see, e.g., Hughes, 1989), this task type is sometimes the best choice available, particularly where the rapid scoring of comprehension items is necessary. Little has changed since Oller (1979) noted that creating good multiple-choice tests is generally not practical at the classroom level, partly because of the difficulty of constructing them appropriately. Nevertheless, when language tests are to be written for use by an entire language program, as with placement or graduation tests, most or all of the actual writing of the test is often delegated to teachers in that program. These teachers may be given specifications for writing items, perhaps with lists of common item-writing errors to avoid (see, e.g., Brown, 2005; Brown & Hudson, 2002; Carr, 2010); nevertheless, the items they produce tend to require extensive editing and revision.
This workshop will begin by outlining errors commonly found in teacher-created test questions, particularly multiple-choice items, along with a number of commonly accepted rules for item writing. Participants will then practice editing problematic items assessing reading, listening, and grammar. They will also be given opportunities to revise reading and listening passages to better support the item types described in the test specifications (e.g., ensuring that a reading passage contains something implicit in order to provide the basis for an inference item). The items and passages used for application activities will target the A2 and B1 levels on the Common European Framework of Reference for Languages. At the conclusion of the workshop, participants will: (1) have a clearer understanding of common item-writing mistakes; (2) have a firmer grasp of how to ensure that the items and passages they create follow test specifications; and (3) be more confident in their ability to edit items and passages as needed.
Higher-order skills such as creativity, critical thinking, scientific inquiry, and computational thinking transform lives and drive economies. However, measuring these skills with traditional assessment methods is challenging. Recent advancements in technology, learning science, cognitive psychology, and educational assessment enable the development of innovative measurement methods for higher-order skills. This workshop offers hands-on learning experiences with concepts and techniques essential for the design and development of technology-enhanced assessments of higher-order skills at scale. We will explore the most recent conceptual frameworks and assessment designs in the Programme for International Student Assessment (PISA) and other leading large-scale innovative assessment programs. The workshop will be structured in three phases. The first phase will emphasize a critical review of research on higher-order skills and technology-enhanced assessments. In the second phase we will apply assessment theories and techniques to design prototype assessment tasks based on pre-defined specifications. In the third phase participants will share their assessment prototypes and discuss challenges and opportunities in applying assessment design principles in the context of their own work.
This workshop is led by the PISA 2021 Creative Thinking assessment design team at ACTNext/ACT.
This workshop centers on the assessment of creativity, but it will also offer valuable information for anyone interested in forms of assessment where the standard set of instruments is simply not sufficient to arrive at valid conclusions. We will also explore and discuss approaches to assessing other characteristics that are difficult to measure with standardized instruments: competencies that can often only be demonstrated and are sometimes at best observable. The workshop will include theory and methodological issues as well as some interactive exercises. Participation requires no special preparation from the attendees, just the willingness to think outside the box every now and then.
In the first part of this workshop we will explore different views on creativity and their implications for assessment. Normally, we see creative behavior and its results as doing or making something ‘original’, deviating from the standard. But the paradox is that in order to assess the creative dimension of someone’s behavior, products and processes (e.g. ‘risk taking’), we first need to agree on what counts as ‘standard’. A very important issue related to standards has its origin in assessment in the context of professional arts training. Many assessments in the arts, formative as well as summative, are in fact ‘negotiated assessments’, based on a negotiated ‘contract’ between student and examiner about criteria, results and standards.
In the second part of the workshop we will explore examples of different designs of assessments and assignments for a number of ‘non-standard’ settings, with a special focus on validity and the quality of the assessment procedure. Participants will be invited to use their own creativity as we discuss and develop design rules and requirements for suitable and sustainable instruments and assignments, based on the input and questions of other participants.