Author(s): Tobias Rolfes (presenting), Jürgen Roth, Wolfgang Schnotz

Conference: ECER 2016, Leading Education: The Distinct Contributions of Educational Research and Researchers

Network: 16. ICT in Education and Training


Session Information

16 SES 05 JS, ICT and Mathematics Education

Joint Paper Session NW 16 and NW 24


Room: OB-H1.49 (ALE 2)

Chair: Ed Smeets


Learning Mathematics With Computer-Generated Dynamic Visualizations

Dynamic visualizations are fascinating for many people. With the rise of computer technology, the cost of generating them has decreased considerably. This development has raised the hope that dynamic visualizations could facilitate learning. Over the last two decades, several interactive computer programs have been developed in the domain of mathematics (e.g., interactive geometry software, computer algebra systems) that enable users to create animated or interactive representations of mathematical phenomena and are intended to support mathematics teaching and learning.

According to empirical studies in different domains, dynamic visualizations are rarely more beneficial for learning than static representations (e.g., Hegarty, Kriz, & Cate, 2003; Mayer, Hegarty, Mayer, & Campbell, 2005). The reason for these results remains under discussion. Van Gog, Paas, Marcus, Ayres, and Sweller (2009) assume that dynamic visualizations impose more load on working memory. This additional load could cause a negative learning effect. Mayer et al. (2005) conjecture that mentally simulating a dynamic process on the basis of a static representation could lead to a higher learning outcome than passively observing a dynamic visualization.

Some researchers, however, reason that dynamic visualizations can be beneficial for learning under certain circumstances. Schnotz and Rasch (2008) argue that dynamic visualizations can foster learning if they release cognitive resources: either by enabling students to perform a mental process that would otherwise not be executable, or by substantially facilitating mental processing. This argument is consistent with Hattie's (2009) finding that computer-aided learning is most effective in demanding situations. To avoid purely passive observation of a visualization, De Koning and Tabbers (2011) suggest asking students to manipulate representations interactively. In this way, students would connect the internal processing of the dynamic representation with an embodied action.

In the domain of mathematics, quantitative studies on the effect of dynamic visualizations are scarce. To address this deficiency, we conducted a laboratory study with secondary students who learned aspects of the concept of function with or without dynamic visualizations. The learning setting was constructed in line with the theoretical assumption mentioned above that dynamic visualizations can be advantageous if they enable or substantially facilitate the learning process.

We used a three-group posttest-only design. In the learning setting, the exercises were accompanied by two different forms of dynamic visualizations. In the animated representation, the students could only play an animation and observe the movement of a point on the triangle line and its effect on the length of a chord. In the interactive representation, the students had to drag the point along the triangle line with the mouse and could observe the effect of this manual manipulation. In the control condition, students had to solve the same exercises with a static representation and had to mentally simulate the point's movement.

The instructions for the exercises differed only where necessary. The students using an interactive representation were instructed to "drag point G with the mouse on the triangle line." The participants working with an animated representation were requested to "press the play button in order to move point G on the triangle line", whereas the control group was prompted to "move point G in your mind on the triangle line". The students were randomly assigned to one of the three experimental conditions (interactive, animated, or static representation).


One hundred and fifty-seven students (88 eighth-graders, 69 ninth-graders) participated in the study. Gender was almost equally distributed (55% female). The mean age was 14.2 years (SD = 0.66). Due to missing data, 11 students were excluded from the analysis.
Several variables were collected on participants' attitudes and abilities. From the PISA survey (Ramm et al., 2006), we selected the scales for mathematics self-efficacy (MathEff), mathematics anxiety (AnxMat), and intrinsic motivation to learn mathematics (IntMat) because they had substantial predictive power for mathematics performance in the German PISA 2003 sample. The PISA scale on attitudes toward computers (AttComp) and the scale for computer-related locus of control (ContComp) were chosen because of our computer-based learning setting. Cognitive ability (CogAbil) was measured with the matrices subtest of the German adaptation of the Cognitive Abilities Test (Heller & Perleth, 2000). Spatial-visual ability was assessed with three different scales: a test on dice rotation and one on compounding two-dimensional figures, both selected from the German intelligence test I-S-T 2000 R (Amthauer, Brocke, Liepmann, & Beauducel, 2001), and the paper folding test of the ETS (Ekstrom, French, Harman, & Derman, 1976). To assess the students' ability to deal with graphs, we developed a specific test consisting of 22 items that required a qualitative analysis of graphs (QualGraph, α = .73).
The computer-based learning setting consisted of 19 exercises. The students had to investigate the relationship between the variation of a point on the line of an equilateral triangle and the length of the corresponding chord from one particular corner to this moving point. The computer-based posttest (α = .71) comprised 14 items that required a transfer. Students had to apply their acquired knowledge to different figures (right triangles, pentagons, etc.).
The attitude scales from PISA and the test on the qualitative analysis of graphs were administered as a paper-and-pencil test in the first lesson (45 min). In the second lesson, the students worked individually with the computer-based learning environment for 25 minutes. Immediately after the instruction, they completed the posttest. In the third and last session, the cognitive ability and spatial-visual ability scales were administered as a paper-and-pencil test.
We applied an ANCOVA with orthogonal contrasts to evaluate the experimental effect. The total score for spatial-visual ability (SpatAbil) was generated by averaging the standardized values of the three scales.
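The composite score described above can be sketched in a few lines; the following is a minimal Python illustration assuming each subscale is available as a numeric array (the variable names and simulated values are placeholders, not the study's data):

```python
import numpy as np

def composite_score(*subscales):
    """Mean of z-standardized subscale scores (one array per subscale).

    Hypothetical helper mirroring how the SpatAbil total score is
    described: standardize each scale, then average across scales.
    """
    z_scores = [(s - s.mean()) / s.std(ddof=1) for s in subscales]
    return np.mean(z_scores, axis=0)

# Three illustrative subscales (dice rotation, figure compounding,
# paper folding) for 146 simulated students.
rng = np.random.default_rng(0)
dice = rng.normal(10, 2, size=146)
figures = rng.normal(20, 5, size=146)
folding = rng.normal(6, 1.5, size=146)

spat_abil = composite_score(dice, figures, folding)
```

Standardizing before averaging puts the three scales on a common metric, so no single subscale dominates the composite because of its raw-score range.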

Expected Outcomes

The preconditions for conducting an ANCOVA were met: the covariates and the posttest score were independent, and the regression slopes were homogeneous. The covariates QualGraph, F(1, 135) = 14.38, p < .001, partial η² = .10, IntMat, F(1, 135) = 12.94, p < .001, partial η² = .09, and SpatAbil, F(1, 135) = 5.50, p < .05, partial η² = .04, were significantly related to students' posttest score. The covariates AttComp, F(1, 135) = 3.56, p < .10, and CogAbil, F(1, 135) = 3.03, p < .10, had only a marginally significant influence on students' posttest performance. The three covariates ContComp, F(1, 135) = 2.37, p = .13, AnxMat, F(1, 135) = 1.61, p = .20, and MathEff, F(1, 135) = 1.35, p = .25, were not significantly related to the posttest score.
After controlling for the covariates, a significant effect of the form of representation on posttest performance remained, F(2, 135) = 3.59, p < .05, partial η² = .05. Planned contrasts revealed that learning with animated or interactive representations was significantly more beneficial than learning with a static representation, t(135) = 2.67, p < .01, r = .22. However, there was no significant difference between learning with an animated and an interactive representation, t(135) = 0.318, p = .75.
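The logic of this analysis can be illustrated with a small numerical sketch. The following Python code (NumPy only, on simulated data) fits the full and reduced regression models underlying an ANCOVA, forms the F-test for the condition factor, and computes the planned contrast of the two dynamic conditions against the static one. All values are hypothetical, and only three of the study's eight covariates are included, so the residual degrees of freedom differ from the reported 135:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 146

# Simulated stand-ins for the study's variables.
condition = rng.integers(0, 3, size=n)    # 0 static, 1 animated, 2 interactive
qual_graph = rng.normal(0, 1, n)          # covariate: graph-reading test
int_mat = rng.normal(0, 1, n)             # covariate: intrinsic motivation
spat_abil = rng.normal(0, 1, n)           # covariate: spatial-visual ability
posttest = (0.6 * (condition > 0) + 0.5 * qual_graph
            + 0.4 * int_mat + 0.3 * spat_abil + rng.normal(0, 1, n))

# Full design: intercept, two condition dummies (static = baseline),
# three covariates. Reduced design drops the condition dummies.
d_anim = (condition == 1).astype(float)
d_inter = (condition == 2).astype(float)
X_full = np.column_stack([np.ones(n), d_anim, d_inter,
                          qual_graph, int_mat, spat_abil])
X_red = np.column_stack([np.ones(n), qual_graph, int_mat, spat_abil])

def fit(X, y):
    """Least-squares fit; returns residual sum of squares and coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), beta

rss_full, beta = fit(X_full, posttest)
rss_red, _ = fit(X_red, posttest)

# F-test for the condition factor (2 numerator df), as in the ANCOVA.
df_resid = n - X_full.shape[1]            # 146 - 6 = 140 in this sketch
F = ((rss_red - rss_full) / 2) / (rss_full / df_resid)

# Planned contrast: average of animated and interactive vs. static,
# i.e. (beta_anim + beta_inter) / 2 against the static baseline.
c = np.array([0.0, 0.5, 0.5, 0.0, 0.0, 0.0])
sigma2 = rss_full / df_resid
cov_beta = sigma2 * np.linalg.inv(X_full.T @ X_full)
t = (c @ beta) / np.sqrt(c @ cov_beta @ c)
```

Comparing the full model against the covariates-only model isolates the variance uniquely attributable to the representation factor, which is exactly what the reported F(2, 135) expresses; the contrast vector `c` encodes the "dynamic vs. static" comparison of the planned-contrast analysis.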
The experiment showed that students can benefit from dynamic visualizations when learning mathematics. However, the supposed effect of embodied interaction was not confirmed. We had selected a learning setting in which the use of dynamic representations could overcome a learning hurdle. Our empirical results therefore seem to support the thesis that dynamic visualizations can be beneficial if they enable or substantially facilitate the learning process. Future studies should focus on other content areas in mathematics in order to broaden the evidence on the learning effectiveness of dynamic visualizations.


Amthauer, R., Brocke, B., Liepmann, D., & Beauducel, A. (2001). I-S-T 2000 R - Intelligenz-Struktur-Test 2000 R [Intelligence Structure Test 2000 (revised)]. Göttingen, Germany: Hogrefe.
Ekstrom, R. B., French, J. W., Harman, H. H., & Derman, D. (1976). Manual for kit of factor-referenced cognitive tests. Princeton, NJ: ETS.
Van Gog, T., Paas, F., Marcus, N., Ayres, P., & Sweller, J. (2009). The mirror neuron system and observational learning: Implications for the effectiveness of dynamic visualizations. Educational Psychology Review, 21 (1), 21–30.
Hattie, J. (2009). Visible learning. A synthesis of over 800 meta-analyses relating to achievement. London, England: Routledge.
Heller, K. A., & Perleth, C. (2000). KFT 4-12+R - Kognitiver Fähigkeits-Test für 4. bis 12. Klassen. Revision [Cognitive Abilities Test (CogAT; Thorndike, L. & Hagen, E., 1954-1986) - German adapted version]. Göttingen, Germany: Beltz.
Hegarty, M., Kriz, S., & Cate, C. (2003). The roles of mental animations and external animations in understanding mechanical systems. Cognition and Instruction, 21 (4), 325–360.
De Koning, B. B., & Tabbers, H. K. (2011). Facilitating understanding of movements in dynamic visualizations: An embodied perspective. Educational Psychology Review, 23 (4), 501–521.
Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11 (4), 256–265.
Ramm, G., Prenzel, M., Baumert, J., Blum, W., Lehmann, R., Leutner, D., . . . Schiefele, U. (2006). PISA 2003: Dokumentation der Erhebungsinstrumente [PISA 2003: Documentation of survey scales]. Münster, Germany: Waxmann.
Schnotz, W., & Rasch, T. (2008). Functions of animation in comprehension and learning. In R. Lowe & W. Schnotz (Eds.), Learning with animation. Research implications for design (pp. 92–113). Cambridge, England: Cambridge University Press.

Author Information

Tobias Rolfes (presenting)
University of Koblenz-Landau, Germany
Jürgen Roth
University of Koblenz-Landau, Germany
Wolfgang Schnotz
University of Koblenz-Landau, Germany