Session Information
09 SES 11 C, Evaluation Policies, Monitoring and Assessments
Paper Session
Contribution
Addressing the question, “What is the purpose of the assessment?”, this paper outlines some of the challenges that arise when students, teachers, school leaders and policymakers do not share the same understanding of assessment practices. First, the paper explores the role that national assessment tools, and the data they provide, play in school governing processes at the local level. Second, it investigates how students’ performance data and educational standards are used as part of teachers’ practices at classroom level. Third, it aims to gain new theoretical insight into assessment practices.
The paper draws on theories about school governing and about assessment practices in classrooms. This combination is necessary in order to understand how complex school governing processes integrate with assessment practices in the classroom. School governing processes are analysed from the angle of the tools in use to provide data about educational quality. A tool such as a national standardized test is defined as “a device that is both technical and social, that organizes specific social relations between the state and those it is addressed to, according to the representations and meanings it carries” (Lascoumes and Le Gales 2007). This means that instruments are not neutral devices or methods put in place to accomplish aims. They may seem neutral, but instead they carry underlying assumptions, in terms of values, interpretations and meanings, which influence their modes of regulation and possible effects (Hood 2007; Lascoumes and Le Gales 2007).
At the national level, the new governing modes are characterized by the use of data about student and school performances to inform and legitimize decision making. At the local levels, the municipal level and the school level, these data are important for quality management: on the one hand, to monitor educational progress, and on the other hand, to develop and support schools and students’ learning (Allerup, Velibor et al. 2009). The development of national assessment criteria and educational standards can, on the one hand, be seen as part of monitoring educational quality. On the other hand, it can be used formatively, to support students’ learning, if teachers use the knowledge from such tests to move students’ learning forward (cf. Black and Wiliam 1998; Black and Wiliam 2007; Hattie and Timperley 2007; Black and Wiliam 2009; Hattie 2009). So far, research in Norway has shown that teachers struggle with knowing how to implement such assessments in the classroom. Assessment criteria are often written in a language which makes them challenging both for students to understand and for teachers to interpret and use in practice. Different traditions and interpretations of ipsative assessment are in use (Stokke, Throndsen et al. 2008). Such findings mirror research from the US showing that teachers do not apply consistent standards from school to school, and therefore need support in developing consistency of judgment (Wiliam 2010). One concern is that if external testing becomes the overriding concern of the system, teachers’ professional engagement with what learning means in their subject dwindles unless it is actively fostered (Baird 2010).
Method
Expected Outcomes
References
Allerup, P., K. Velibor, et al. (2009). Evaluering av det nasjonale kvalitetsvurderingssystemet for grunnopplæringen (Evaluation of the National System for Quality Assessment in Comprehensive Education). Kristiansand, Agderforskning: 8.
Baird, J.-A. (2010). "The theory-practice gap." Assessment in Education: Principles, Policy and Practice 17(2): 113-116.
Black, P. and D. Wiliam (1998). "Assessment and classroom learning." Assessment in Education: Principles, Policy and Practice 5(1): 7-75.
Black, P. and D. Wiliam (2007). "Large-scale assessment systems: Design principles drawn from international comparisons." Measurement 5(1): 1-53.
Black, P. and D. Wiliam (2009). "Developing the theory of formative assessment." Educational Assessment, Evaluation and Accountability 21(1): 5-31.
Hattie, J. (1999, June). "Influences on student learning." Retrieved 12th of May, 2009, from www.arts.auckland.ac.nz/staff/index.cfm?P=8650.
Hattie, J. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Abingdon, Oxon, Routledge.
Hattie, J. and H. Timperley (2007). "The power of feedback." Review of Educational Research 77(1): 81-112.
Hood, C. (2007). "Intellectual obsolescence and intellectual makeovers: Reflections on the tools of government after two decades." Governance: An International Journal of Policy, Administration, and Institutions 20(1): 127-144.
Lascoumes, P. and P. Le Gales (2007). "Introduction: Understanding public policy through its instruments - From the nature of instruments to the sociology of public policy instrumentation." Governance: An International Journal of Policy, Administration, and Institutions 20(1): 1-21.
Stokke, K., I. Throndsen, et al. (2008). Evaluation of Assessment for Learning. First Evaluation Report. Oslo, Institute of Teacher Education and School Development, University of Oslo: 68.
Wiliam, D. (2010). "Standardized testing and school accountability." Educational Psychologist 45(2): 107-122.
Yin, R. K. (2009). Case Study Research: Design and Methods. London, Sage Publications Inc.