Session Information
09 SES 11 B, Language and Literacy Assessments (Part 1)
Paper Session
Contribution
Malta is a heavily populated island with one of the highest population densities in Europe and is effectively one large conurbation. It is technically a bilingual country, and English is usually the language of instruction and assessment. The educational psychologist community has traditionally used UK-standardised ability and achievement tests, in part a postcolonial legacy. Historically, the educated strata of Maltese society have prided themselves on their linguistic flexibility and have been reluctant to recognise that the competence of the average Maltese learner of English falls short of UK national standards. In this far from ideal situation, a local examination body commissioned basic reading tests in English and Maltese with a view to establishing clear criteria for the identification of significantly below-average readers as part of a wider access arrangement reform. For English, the Suffolk Reading Scale was adopted unmodified for the Maltese context, but with Maltese norms. The test presents a sentence in which one key word has been omitted, and test takers choose the best-fitting word from the five alternatives supplied.
The objective of this study is to identify and elaborate on the difficulties and shortcomings of adopting published tests wholesale from one context to another, even when no language or translation issues are involved; cultural issues certainly may be.
This study compares reading ability under the published UK norms with the Maltese norms by age, gender and type of Maltese educational provision (state, church and independent schooling). The second part of the study analyses the test items for their suitability for the Maltese school population (item analysis) and suggests which items should be removed if a Maltese version were commissioned; possible cultural bias is also described. In the third part, children's results on the Suffolk Reading Scale across the whole primary and secondary school spectrum are compared with their results in national mid-yearly examinations and with a single-word reading test, in order to examine the scale's concurrent validity. Given the stratified nature of the national examination language papers, this part of the study also makes it possible to conduct a factor analysis examining whether, as in the Nation and Snowling (1997) study, the Suffolk Reading Scale loads more heavily on word recognition than on reading comprehension; issues of face and construct validity will therefore be considered.
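The item analysis mentioned above can be illustrated in classical test theory terms: each item's difficulty (proportion of test takers answering correctly) and discrimination (point-biserial correlation between the item score and the total test score) are computed, and items that discriminate poorly in the target population become candidates for removal. The following is a minimal sketch only; the data are invented, and the study's actual item-analysis procedure is not specified in the abstract.

```python
def item_difficulty(responses):
    """Item difficulty: proportion of test takers answering correctly (p-value)."""
    return sum(responses) / len(responses)

def point_biserial(item, totals):
    """Point-biserial correlation between a dichotomous item (0/1) and
    total test scores. Low or negative values flag items that fail to
    separate strong from weak readers."""
    n = len(item)
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    p = sum(item) / n
    q = 1 - p
    if sd_t == 0 or p in (0, 1):
        return 0.0
    mean_correct = sum(t for i, t in zip(item, totals) if i == 1) / sum(item)
    return (mean_correct - mean_t) / sd_t * (p / q) ** 0.5

# Invented example: one item and total scores for six test takers.
item = [1, 1, 1, 0, 0, 0]
totals = [30, 28, 25, 15, 12, 10]
print(round(item_difficulty(item), 2))        # 0.5
print(round(point_biserial(item, totals), 2))  # 0.97
```

In practice an analysis of this kind would be run on every Suffolk Reading Scale item across the Maltese sample, alongside checks for differential item functioning where cultural bias is suspected.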
Method
Expected Outcomes
References
Adlof, S.M., Catts, H. & Little, T.D. (2006). Should the simple view of reading include a fluency component? Reading and Writing: An Interdisciplinary Journal, 19, 933-958.
Alderson, J.C. (2000). Assessing Reading. Cambridge: Cambridge University Press.
Alonzo, J., Basaraba, D., Tindal, G. & Carriveau, R.S. (2009). They read but how well do they understand?: An empirical look at nuances of measuring reading comprehension. Assessment for Effective Intervention, 35, 34-44.
Arffman, I. (2007). The problem of equivalence in translating texts in international reading literacy studies. Institute for Educational Research, University of Jyväskylä, Finland.
Buck, G. (2001). Assessing Listening. Cambridge: Cambridge University Press.
Cain, K. & Oakhill, J. (2006). Assessment matters: Issues in the measurement of reading comprehension. British Journal of Educational Psychology, 76, 697-708.
Georgiou, G.K., Das, J.P. & Hayward, D. (2009). Revisiting the “Simple View of Reading” in a group of children with poor reading comprehension. Journal of Learning Disabilities, 42(1), 76-84.
Gough, P.B. & Tunmer, W.E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7, 6-10.
Kobayashi, M. (2002). Method effects on reading comprehension test performance: Text organization and response format. Language Testing, 19, 193-220.
Nation, K. & Snowling, M. (1997). Assessing reading difficulties: the validity and utility of current measures of reading skill. British Journal of Educational Psychology, 67, 359-370.
Perfetti, C.A. (1997). Sentences, individual differences, and multiple texts: three issues in text comprehension. Discourse Processes, 23, 337-355.
Qualifications and Curriculum Authority. (2009). Assessing pupils’ eligibility for additional time. Accessed 19 November 2010 online at http://www.swindon.gov.uk/key_stage_2_access_arrangements.pdf
Rupp, A.A., Ferne, T. & Choi, H. (2006). How assessing reading comprehension with multiple-choice questions shapes the construct: a cognitive processing perspective. Language Testing, 23, 441-474.