The Design and Implementation of an Assessment Method Combining Formative and Summative Assessment
Author(s):
Jens Dolin (presenting / submitting), Sofie Birch Jensen (presenting), Jesper Bruun, Sanne Schnell Nielsen
Conference:
ECER 2016
Format:
Paper

Session Information

09 SES 12 B, Formative and Summative Assessments

Paper Session

Time:
2016-08-26
09:00-10:30
Room:
NM-F103a
Chair:
Josef Basl

Contribution

The two key purposes of assessment, formative and summative, are often in contradiction. Summative assessment of learning will normally prevent formative assessment for learning from being realised (Butler, 1988), meaning that the learning potential of the assessment is often minimal. It is therefore a central challenge to find ways to combine the dual use of assessment.

It is a central aim of the European research project Assess Inquiry in Science, Technology and Mathematics Education (ASSIST-ME) to design and implement various assessment methods (Dolin, 2012), and to research their validity and reliability when used formatively and summatively. ASSIST-ME involves 10 partners in 8 European countries and runs from 2012 to 2016.

One of these assessment methods is the so-called Structured Assessment Dialogue (SAD), developed by the Danish research team. The method has been tested in classes in three European countries and the results are currently being analyzed.

 

Objectives and research questions

There is a strong tradition of classroom dialogue in the Danish school system, so it felt natural to design an assessment method that structures and formalises dialogue in the classroom. Most formative assessment in the typical classroom is quite informal in nature and is used differently by different teachers (Shinn, 2013). But in order to be effective, and to make summative use possible, the assessment method must provide a standardized approach to how it is administered. A SAD is such a structured assessment format, following three well-defined phases with clear instructions for the teacher and the students about their roles. The three phases – 5 minutes of dialogue, 5 minutes of peer feedback, and 3 minutes of student self-assessment – each have their specific formative and summative function. The SAD concept will be described in detail in the presentation.

One of the points of the SAD is that it ritualizes the whole assessment process, so that the different phases, each with their specific purpose, can be conducted as a routine and with no risk involved – once the format is introduced and accepted. Another central point is that the formalized approach to formative assessment makes it possible to establish reliability and validity measures – a prerequisite for sound formative assessment, and especially for using the method for summative purposes.

 

The ASSIST-ME project has some research questions in common for all the implemented assessment methods. In this presentation we will focus on the following question:

What are the main challenges related to the uptake of SAD in the daily practices in science, technology and mathematics in primary and secondary schools in different European educational systems?

 

Theoretical framework

Paul Black and Wynne Harlen are both partners in the ASSIST-ME project, and both have shaped the project’s theoretical conceptualization of formative assessment (Black and Wiliam, 1998; Harlen, 2012). We see summative and formative assessment as part of the same cycle, but with formative assessment involving the students, judging their performance on both subject-specific and personal criteria, and aiming to identify the next learning step (Harlen, 2013).

Our design of a dialogue-based assessment draws upon the work of the Norwegian researcher Olga Dysthe (1996), who sees dialogue as a central route to learning. Dysthe is inspired by the Russian linguist Bakhtin (1981), and the key point is to open up a space for student reflection in a non-authoritative environment.

The feedback formats are theoretically based on Hattie and Timperley (2007). In order to give and receive formative feedback, the ability to establish a learning progression within a specific domain becomes an important competence for teachers as well as students (Alonzo and Gotwals, 2012).

Method

The research is conducted in close collaboration with teachers as action research (Zeichner and Noffke, 2001). The assessment method was implemented by teams of teachers and researchers in Denmark, Finland and England, working closely together within the national teams and across countries. In Denmark, teachers of technology, mathematics, biology and physics from lower and upper secondary school (grades 7–12) participated. The teachers and researchers met four times during the semester in which the implementations took place. These meetings served to adjust the assessment design to fit the teachers’ classrooms better, to plan the inquiry teaching units in which the assessment would take place, and to reflect continuously on the possibilities and challenges of using the assessment method as the implementations progressed. These teacher-researcher meetings thus provided the teachers with an environment for working on their assessment practice in a structured and reflective way. A total of 20 SAD sessions were subject to research. Data were collected through written teacher self-reflections via an online questionnaire, group discussions led by researchers, observations of SADs, video recordings of selected sessions, and the teachers’ own notes from planning and reflecting on the dialogues using a structured format. These data were systematically analyzed using SurveyMonkey’s Text Analysis tool, and the resulting categories were validated by the teachers.

Expected Outcomes

We present here our preliminary findings on the challenges teachers perceived in using SAD as an assessment method. The teachers reflected differently on each of the three phases of the ritual. The strict five-minute limit on the dialogue posed a challenge to most teachers. They found it difficult to choose learning objectives that they could assess formatively within five minutes. By conducting SADs more than once, the teachers learned to focus on smaller parts of the overall learning objectives. Furthermore, they found it hard to construct and implement learning progressions. It became clear that most teachers did not find that the students’ learning paths followed the steps of a standard generic taxonomy (such as Bloom or SOLO). Rather, the students’ performance and understanding were spread across all the standard learning progression steps. The teachers therefore formulated concrete, domain-specific taxonomies, inspired by Alonzo and Gotwals (2012). Most teachers experienced that in the first peer feedback sessions students tended to praise the focus student rather than challenge each other’s understanding. Renegotiating the purpose of the feedback session made students’ discussions more productive, while teachers who did not renegotiate saw no improvement in the quality of student feedback. Thus, it seems that discussing the nature and purpose of feedback with students improved its quality. The teachers could see the value of the self-assessment phase at the end of the SAD as a strong formative tool, but many had reservations about using the self-assessment as a basis for summative assessment. Many teachers noted that their students awarded themselves higher grades than the teachers thought they deserved. Again, working with the students’ self-assessment improved the reliability of their judgements.

References

Alonzo, A. C. and Gotwals, A. W. (eds.) (2012). Learning Progressions in Science: Current Challenges and Future Directions. Rotterdam: Sense Publishers.
Bakhtin, M. M. (1981). The Dialogic Imagination. Austin: University of Texas Press.
Black, P. and Wiliam, D. (1998). Developing a theory of formative assessment. In: Gardner, J. (ed.), Assessment and Learning. London: Sage, pp. 81-100.
Butler, R. (1988). Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and involvement. British Journal of Educational Psychology, 58, 1-14.
Chappuis, S. and Chappuis, J. (2008). The Best Value in Formative Assessment. Informative Assessment, 65(4), 14-19.
Dolin, J. (2012). Assess Inquiry in Science, Technology and Mathematics Education: ASSIST-ME proposal. Copenhagen: University of Copenhagen. http://assistme.ku.dk (retrieved 12.01.2016).
Dysthe, O. (1996). The Multivoiced Classroom: Interactions of Writing and Classroom Discourse. Written Communication, 13(3), 385-425.
Harlen, W. (2012). On the relationship between assessment for formative and summative purposes. In: Gardner, J. (ed.), Assessment and Learning. London: Sage, pp. 87-102.
Harlen, W. (2013). Assessment & inquiry-based science education: issues in policy and practice. Global Network of Science Academies (IAP) Science Education Programme (SEP). http://www.interacademies.net/File.aspx?id=21245 (retrieved 12.01.2016).
Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112.
Looney, J. W. (2011). Integrating Formative and Summative Assessment: Progress Toward a Seamless System? OECD Education Working Papers, No. 58. OECD Publishing. http://dx.doi.org/10.1787/5kghx3kbl734-en (retrieved 12.01.2016).
Shinn, M. R. (2013). Measuring General Outcomes: A critical component in scientific and practical progress monitoring practices. Pearson: Aimsweb. http://www.aimsweb.com/Wp-content/uploads/Mark-Shinn-gom_Master-Monitoring-White-paper.pdf (retrieved 12.01.2016).
Zeichner, K. M. and Noffke, S. E. (2001). Practitioner research. In: Richardson, V. (ed.), Handbook of Research on Teaching (4th edition). Washington DC: AERA.

Author Information

Jens Dolin (presenting / submitting)
University of Copenhagen
Department of Science Education
Copenhagen
Sofie Birch Jensen (presenting)
University of Copenhagen
Department of Science Education
Copenhagen
