Author(s): Rita Nadas, Irenka Suto (presenting), Rebecca Grayson

Conference: ECER 2012, The Need for Educational Research to Champion Freedom, Education and Development for All

Network: 13. Philosophy of Education - Standard submissions

Format: Paper

Session Information

13 SES 10 C, Parallel Paper Session

Joint Session with NW 09

Time: 2012-09-20, 15:30-17:00

Room: FCT - Aula 15

Chair: Birgit Eickelmann

Contribution

Analyse, Evaluate, Review… How Do Teachers with Differing Subject Specialisms Interpret Common Assessment Vocabulary?


Many teachers in the UK teach and assess subjects outside their fields of expertise, for a range of reasons including teacher shortages; the growing popularity of interdisciplinary courses; and the emergence of qualifications not rooted in any particular subject discipline, for example the Project Qualifications developed recently in England and Wales. Interdisciplinary courses and Project Qualifications are frequently designed to grant students the invaluable freedoms of choosing and exploring their personal fields of interest and of developing personal styles of expression. However, teachers with different subject backgrounds may interpret assessment terms differently, leading to inconsistencies in judgements during assessment (Sadler, 1989). This is an international phenomenon, characteristic of a range of subjects and independent of the local vernacular. Recently, a series of research studies in various European countries has confirmed that the background discipline of teachers and assessors affects their conceptualisation of good performance. Analysing students’ academic writing in the history of science in the United Kingdom, North (2005) found that ‘arts’ students received higher marks than ‘science’ students because the assessors placed greater value on the typical features of writing required by the arts (e.g. careful expression and re-drafting, dealing with interpretations, balancing different opinions). In Norway, Dysthe, Engelsen and Lima (2007) also found significant, discipline-related differences between teacher-assessors in ‘soft’ disciplines (e.g. the arts) and ‘hard’ disciplines (e.g. maths, sciences and engineering). Working in the Netherlands, Joosten-ten Brinke, Sluijsmans and Jochems (2010, 71) found that the decision-making process is “identical for assessors in the same domain, but differs from those in different domains”, leading to differences in assessors’ understanding of the assessment criteria. These differences in interpretation may have implications for both the reliability and the validity of assessment outcomes, and can therefore critically affect the career prospects and identities of many individuals.

Stemming from the international literature, two research questions were addressed in this study:

1. Do teachers from different disciplines, such as the humanities and the sciences, have different understandings of common generic assessment terms such as ‘analyse’ and ‘evaluate’?

2. Do individual differences arise in interpreting the assessment terms?

The aims of the present study were to answer the above research questions by elucidating tacit, semantic understandings of educational assessment terms, and by collecting and analysing evidence of any differences among experts with different subject specialisms.


Method

The methodology initially entailed identifying nine key assessment terms from assessment materials and the academic literature. The terms were: analyse/analysis; evaluate/evaluation; review; synthesise/synthesis; argue/argument; critical; creative; perceptive; and reflective. Next, written definitions of the assessment terms were gathered from eight published texts: two well-known English dictionaries, Bloom et al.’s taxonomy (1971), the English national qualifications regulator’s glossary, and four subject-related glossaries. Definitions were also collected from nine subject experts with extensive teaching and examining experience in the fields of geography, physics, biology, psychology, history, English literature and English language. In total, 111 definitions were then analysed for semantic content using MAXQDA software and, where possible, compared across individual sources, across subjects, and between the humanities and the sciences, in order to address the two research questions.


Expected Outcomes

The methodology provided evidence to answer both research questions, and our results corroborate recent research findings. Sadler’s (1989) warning that the meaning of an assessment term in one context does not transfer directly to another resonates with our findings. We found great individual variation in the definitions given by the subject experts, even among experts in the same subject. Two assessment terms (analyse and evaluate) yielded marked differences between the humanities and the sciences, corroborating previous research. For the remaining seven terms, differences in the definitions were shaped by experts’ individual experiences. These findings suggest that assessment criteria may have multiple interpretations, probably because of differences in individual experts’ backgrounds, personal values and past experiences. This may explain the difficulties faced by some teachers in interdisciplinary contexts, or where they lack adequate training. Furthermore, differing interpretations threaten the reliability and validity of any assessment, and thus its fairness. Possible solutions include standardisation; discussions among different subject experts; and the preparation of more explicit, context-specific assessment documents. However, the trade-off between the restricted student experience typical of constrained examinations and the freedom of students’ choice and style of expression needs to be weighed carefully.


References

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1971). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain (16th ed.). London: Longman.

Dysthe, O., Engelsen, K. S., & Lima, I. (2007). Variations in portfolio assessment in higher education: Discussion of quality issues based on a Norwegian survey across institutions and disciplines. Assessing Writing, 12(2), 129-148.

Joosten-ten Brinke, D., Sluijsmans, D. M. A., & Jochems, W. M. G. (2010). Assessors' approaches to portfolio assessment in assessment of prior learning procedures. Assessment & Evaluation in Higher Education, 35(1), 59-74.

North, S. (2005). Different values, different skills? A comparison of essay writing by students from arts and science backgrounds. Studies in Higher Education, 30(5), 517-533.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.


Author Information

Rita Nadas
Cambridge Assessment, United Kingdom
Irenka Suto (presenting)
Cambridge Assessment, United Kingdom
Rebecca Grayson
Cambridge Assessment, United Kingdom