Session Information
09 SES 10 C, Findings from International Comparative Achievement Studies and their National Extensions: Evidence from, and in relation to, PISA
Paper Session
Contribution
PISA is one of the largest international comparative surveys, aiming to perform a cross-national assessment of the reading, mathematical and scientific literacy of 15-year-old students. Most studies on the PISA survey use PISA-generated datasets to explore contextual factors that might be linked to students’ achievement. However, some studies have recently focused on the PISA test items themselves, aiming to explore students’ achievement in relation to specific characteristics of PISA science test items (e.g. Dossey, McCrone, Turner, & Lindquist, 2008; Nentwig, Roennebeck, Schoeps, Rumann, & Carstensen, 2009; Pinto & El Boudamoussi, 2009).
The core element of the PISA survey is the concept of literacy (reading, mathematical, scientific), i.e. the capacity of students to extrapolate from what they have learned and apply their knowledge in novel contexts.
In order to assess scientific literacy, PISA uses test units comprising stimulus material that consists of text and visual images (tables, graphs, photographs, etc.) followed by questions (test items) related to the stimulus material. This unit structure is considered to facilitate the simulation of a context that is as realistic as possible and reflects the complexity of everyday situations (OECD, 2006).
Visual images seem to play a significant role in assessing students’ ability to transfer the knowledge and skills acquired at school to novel settings. Graphs and diagrams can bridge the gap between everyday knowledge - based on verbal description - and scientific formalism - conveyed by mathematical formulas describing the central laws of the content area (e.g. Osborne, 2002; Stern, Aprea, & Ebner, 2003).
Moreover, scientific literacy requires that students are proficient in the language of science, since understanding its content and using it appropriately is an essential component of scientific literacy (Osborne, 2002). The natural language of science is an integration of texts, visual images and mathematical expressions. Text, mathematics and visual images are all needed to represent abstract and complex scientific concepts and explanations, because visual representations facilitate the depiction of abstract concepts in concrete form (e.g. Lemke, 1998; Yeh & McTigue, 2009). Several studies have suggested that students’ diagrammatic literacy, i.e. the ability to interpret and produce graphical representations, is important for success in standardised science tests (e.g. Schnotz, Picard, & Hron, 1993; Yeh & McTigue, 2009). In addition, other studies have shown that visual representations - their type, mode and function - affect students’ understanding in science as well as their performance (e.g. Mayer, 1997; Schnotz & Bannert, 2003; Schnotz & Kürschner, 2008; Stern, Aprea, & Ebner, 2003).
Therefore, it is of special interest to explore possible convergences and divergences between the visual material included in PISA test items and in assessment tasks within the national context, in order to shed light on the factors that contribute to Greek students’ consistently poor performance in PISA. The aim of the present study is to identify any discrepancies between PISA test items and assessment tasks in the national context with respect to the frequency with which visual images are included, their type, and their functional role.
Method
Expected Outcomes
References
Dossey, J., McCrone, S., Turner, R., & Lindquist, M. (2008). PISA 2003 – Mathematical literacy and learning in the Americas. Canadian Journal of Science, Mathematics, and Technology Education, 8(2), 140-152.
Lemke, J. (1998). Teaching all the languages of science: Words, symbols, images and actions. Available at: http://academic.brooklyn.cuny.edu/education/jlemke/papers/barcelon.htm
Mayer, R. (1997). Multimedia learning: Are we asking the right questions? Educational Psychologist, 32(1), 1-19.
Moline, S. (1995). I see what you mean. York, ME: Stenhouse.
Nentwig, P., Roennebeck, S., Schoeps, K., Rumann, S., & Carstensen, C. (2009). Performance and levels of contextualization in a selection of OECD countries in PISA 2006. Journal of Research in Science Teaching, 46(8), 897-908.
OECD (2006). Assessing scientific, reading and mathematical literacy: A framework for PISA 2006. Paris: OECD.
Osborne, J. (2002). Science without literacy: A ship without a sail? Cambridge Journal of Education, 32(2), 203-218.
Pinto, R., & El Boudamoussi, S. (2009). Scientific processes in PISA tests observed for science teachers. International Journal of Science Education, 31(16), 2137-2159.
Schnotz, W., Picard, E., & Hron, A. (1993). How do successful and unsuccessful learners use texts and graphics? Learning and Instruction, 3, 181-199.
Schnotz, W., & Bannert, M. (2003). Construction and interference in learning from multiple representation. Learning and Instruction, 13, 141-156.
Schnotz, W., & Kürschner, C. (2008). External and internal representations in the acquisition and use of knowledge: Visualization effects on mental model construction. Instructional Science, 36, 175-190.
Stern, E., Aprea, C., & Ebner, H. (2003). Improving cross-content transfer in text processing by means of active graphical representation. Learning and Instruction, 13, 191-203.
Yeh, Y., & McTigue, E. (2009). The frequency, variation, and function of graphical representations within standardized state science tests. School Science and Mathematics, 109(8), 435-449.