How To Teach Statistics In Higher Education - Results From A Video-Based Experimental Study
Author(s):
Maximilian Sailer (presenting / submitting)
Conference:
ECER 2015
Format:
Paper

Session Information

22 SES 12 C, How to Teach….in HE?

Paper Session

Time:
2015-09-11
09:00-10:30
Room:
338. [Main]
Chair:
Serap Emil

Contribution

According to Ruggeri/Dempster/Hanna (2011), statistics “is one of the most common topics across disciplines and levels of study” (p. 35). The discipline therefore has to be adaptable to a wide range of fields (Navarrete-Alvarez/Rosales-Moreno/Huete-Morales 2010). From a student perspective, however, statistics is met with fear, aversion to its topics, and wrong expectations (Ruggeri/Dempster/Hanna 2011). As Gruber & Renkl (1996) have shown, students of the social sciences in particular often have aversions to learning statistics and frequently lack a solid background in basic mathematical procedures, which complicates the didactical task. To master these challenges, both researchers and practitioners have dealt with the teaching and learning of statistics for several decades (Claine et al. 1978; Gelman & Nolan 2002; Navarrete-Alvarez/Rosales-Moreno/Huete-Morales 2010; Batanero/Burrill/Reading 2011). Since statistics is seen as a “unique subject in any curriculum and requires a distinct way of thinking” (Ruggeri/Dempster/Hanna 2011, p. 35), statistical literacy is not restricted to school education (Watson & Callingham 2003; Wild & Pfannkuch 1999) but is also an essential competency in higher education (Kirsch 1997). How to teach statistics, and with what effect, is a central question for statistics lectures at university level. Research indicates that videos and video-podcasts are a valuable instrument for self-directed learning (e.g. Berk 2009; Bassill 2008; Francom/Ryan/Kariuki 2011). As a consequence of rapid technical development and new findings in media psychology and pedagogy (Halverson & Smith 2009), videos are used and discussed in teaching contexts more frequently (Brophy 2004; Chenail 2011; O’Donoghue 2014).
According to Vural (2013), the “...combination of images and sound creates a powerful medium for explanation of concepts while instructing learners with content that provides multiple senses” (p. 1315). Videos and video-podcasts enable learning anytime and anywhere and allow for individual learning speeds (Kay & Edwards 2012).

Since video-based learning already plays an important role in mathematics (Fößl 2014; Kay & Edwards 2012), the question arises how videos can be used to foster learning and learning outcomes in university lectures on statistics. Research shows that learning can take place without face-to-face instruction (Chang 2004), but it remains unclear what role the visibility of the lecturer plays in a video. Is it important that the lecturer is seen, or is it sufficient that he or she is heard? And in either case, how does this affect the evaluation process as well as the learning outcome? There is also a lack of research on the connection between students’ perceptions of the lecture (evaluation) and learning achievement with regard to statistics. This leads to the question of how strongly the evaluation factors rapport (Benson et al. 2005; Marsh 1982; Murphy & Rodriguez-Manzanares 2012), teacher clarity (Chesebro 2003; Hines/Cruickshank/Kennedy 1985; Rodger/Murray/Cummings 2007; Hattie 2013) and student interest (Krapp 1999; Müller 2006; Schiefele/Krapp/Winteler 1992) affect the learning outcome. The major goal of the study is to examine the role and use of learning videos in the context of teaching statistics. Research Question (1) examines to what extent the visibility of the lecturer affects the learning outcome. Research Question (2) examines to what extent the ratings of clarity, structure, enthusiasm and rapport affect the learning outcome. Research Question (3) aims to identify the factors that predict a high overall rating.

Method

A video-based experimental study was conducted to answer the research questions. Two consecutive statistical topics (correlation and regression) were chosen for the study. Each topic was presented by two different instructors with different didactical approaches, resulting in four videos in total. In the first approach, the instructor introduced the content on a more theoretical basis, was both seen and heard, and visualized the content (formulas, concepts) on a whiteboard in the video. The second approach focused more on the application of the theoretical concepts: the instructor was not visible to the audience but could be heard while demonstrating the statistical procedures in SPSS. Each instructor maintained his original didactical style. Videos A and D introduced the topics theoretically with the instructor seen and heard; Videos B and C introduced the topics in an applied way with the instructor heard but not seen. The four videos were evaluated by 86 participants, students of educational science at the University of Augsburg at both Bachelor and Master level. Two groups of Master students (n=27) and two groups of Bachelor students (n=59) evaluated the videos and took a knowledge test immediately after each evaluation. The Master groups evaluated Videos A and C, the Bachelor groups Videos B and D, so all groups evaluated both didactical concepts, starting with a video on correlation. Every second group (Master II and Bachelor II) was informed about the knowledge test in advance; the other groups were not. The videos were 8-12 minutes long: rather short but comprehensive visual and auditory summaries of the topics. The evaluation instrument consisted of 38 items combined in six scales (structure, clarity, enthusiasm, rapport, overall rating and personal interest).
The instrument was literature-based and included items from standardized evaluation instruments such as the SEEQ (Marsh 1982; Coffey & Gibbs 2001), TRIL (Gollwitzer & Schlotz 2003; Gläßer et al. 2002), MFE-V (Hirschfeld & Thielsch 2014), HILVE-I and HILVE-II (Rindermann 2001), KIEL (Gediga et al. 2000) and FEVOR (Staufenbiel 2001). A 5-point Likert scale (strongly disagree – strongly agree) was used for each item. In addition, a separate knowledge test was prepared for each video; each test consisted of 6-9 multiple-choice questions (four choices per question) with a maximum of 11 correct answers.
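The internal-consistency check later reported for the rating scales (Cronbach's alpha) can be sketched as follows. This is a minimal illustration on synthetic Likert ratings, not the study's data; the respondent count (86) and the 1-5 rating range mirror the design, while the item count and the data-generating model are assumptions made only for the example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: 86 respondents, a 6-item scale on a 1-5 Likert range,
# built so all items share a common component (hence high internal consistency).
rng = np.random.default_rng(42)
trait = rng.normal(3, 1, size=(86, 1))
ratings = np.clip(np.round(trait + rng.normal(0, 0.7, size=(86, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

With items that load on a shared component, alpha comes out well above the .79 threshold the study reports as acceptable; with independent items it would fall toward zero.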

Expected Outcomes

As assumed, the participants were very homogeneous with regard to age, interests and statistical abilities. Gender differences could not be analyzed because only 7 males participated. The internal consistency of the scales rapport, clarity, enthusiasm and structure was acceptable (Cronbach's alpha > .79); since the scales had been tested in previous research, this reliability was expected. The overall instrument has not yet been tested for the intraclass correlation coefficient (ICC), so interrater reliability remains an open question. The knowledge test yielded a decent overall average of M = 6.83 (SD = 1.82). Research Question (1), how the visibility of the lecturer affects the learning outcome, was analyzed with an independent-samples t-test. Videos showing the lecturer (M = 7.45, SD = 1.89) led to significantly higher test results than videos without the lecturer visible (M = 6.20, SD = 1.53), t(170) = 4.80, p < .05. Research Question (2), to what extent the ratings of clarity, structure, enthusiasm and rapport affect the learning outcome, was analyzed with a multiple linear regression. Although the test result correlated significantly with all four scales, r = .29-.31, p < .05, none of the individual variables predicted the test results in the regression (R² = .11, F(4,160) = 5.01, p < .01). Thus participants' test results and participants' ratings are not linearly connected. A multiple linear regression was also used to answer Research Question (3), testing which variables significantly predicted participants' overall rating. The predictors explained 77.4% of the variance (R² = .77, F(9,156) = 55.86, p < .01): Clarity (β = .18, p < .01), Enthusiasm (β = .21, p < .01), Structure (β = .24, p < .01) and Rapport (β = .33, p < .01) all significantly predicted the overall rating.
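The two analyses reported above (an independent-samples t-test for Research Question 1, ordinary-least-squares multiple regression for Research Questions 2 and 3) can be sketched in outline. The data below are synthetic, generated only to resemble the shape of the reported descriptives; every number in the simulation is an illustrative assumption, not the study's raw data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# --- RQ1: independent-samples t-test on knowledge-test scores ---
# Hypothetical scores drawn to match the reported means/SDs (max score 11).
with_lecturer = np.clip(rng.normal(7.45, 1.89, 86), 0, 11)
without_lecturer = np.clip(rng.normal(6.20, 1.53, 86), 0, 11)
t, p = stats.ttest_ind(with_lecturer, without_lecturer)
print(f"t = {t:.2f}, p = {p:.4f}")

# --- RQ3: multiple linear regression of the overall rating on four scales ---
n = 165
X = rng.normal(3.5, 0.8, size=(n, 4))  # clarity, enthusiasm, structure, rapport
y = X @ np.array([0.18, 0.21, 0.24, 0.33]) + rng.normal(0, 0.4, n)  # overall rating
X1 = np.column_stack([np.ones(n), X])  # prepend an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ beta
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.2f}")
```

With group differences of the reported size, the t-test is clearly significant; the regression recovers an R² well above zero because the synthetic rating was constructed from the four predictors.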

References

Bassill, John N. (2008): Motivation and Cognitive Strategies in the Choice to Attend Lectures or Watch them Online. Journal of Distance Education, 22(3), pp. 129-148.
Batanero, Carmen; Burrill, Gail; Reading, Chris (2011): Teaching Statistics in School Mathematics - Challenges for Teaching and Teacher Education: A Joint ICMI/IASE Study: The 18th ICMI Study. Dordrecht: Springer.
Berk, Ronald A. (2009): Multimedia Teaching with Video Clips: TV, Movies, YouTube and mtvU in the College Classroom. International Journal of Technology in Teaching and Learning, 5(1), pp. 1-21.
Brophy, J. (Ed.) (2004): Using Video in Teacher Education. Oxford: Elsevier.
Chenail, Ronald J. (2011): YouTube as a Qualitative Research Asset: Reviewing User Generated Videos as Learning Resources. The Qualitative Report, 16(1), p. 299.
Claine, Robert et al. (1978): Statistics from Whom? Teaching Sociology, 6(1), pp. 37-46.
Francom, Jeff; Ryan, Thomas G.; Kariuki, Mumbi (2011): The Effects of Podcasting on College Student Achievement and Attitude. Journal of the Research Center for Educational Technology, 7(1), pp. 39-53.
Gelman, Andrew; Nolan, Deborah Ann (2002): Teaching Statistics: A Bag of Tricks. Oxford and New York: Oxford University Press.
Gruber, Hans; Renkl, Alexander (1996): Alpträume sozialwissenschaftlicher Studierender: Empirische Methoden und Statistik. In: Lompscher, Joachim; Mandl, Heinz (Eds.): Lehr- und Lernprobleme im Studium. Bern: Huber, pp. 118-130.
Halverson, Richard; Smith, Anette (2009): How New Technologies Have (and Have Not) Changed Teaching and Learning in Schools. Journal of Computing in Teacher Education, 26(2), pp. 49-54.
Kirsch, Irwin S. (1997): Literacy Performance on Three Scales: Definitions and Results. In: McLennan, W. (Ed.): Aspects of Literacy. Canberra: Australian Bureau of Statistics, pp. 98-124.
Navarrete-Alvarez, Esteban; Rosales-Moreno, Maria J.; Huete-Morales, Maria D. (2010): Teaching Statistics in Labor, Social, Juridical or Economic Studies. US-China Education Review, 7(10), pp. 36-41.
O'Donoghue, Michael (2014): Producing Video for Teaching and Learning: Planning and Collaboration. New York: Taylor & Francis.
Ruggeri, Kai; Dempster; Hanna, Donncha (2011): The Impact of Misunderstanding the Nature of Statistics. Psychology Teaching Review, 17(1), pp. 35-40.
Watson, Jane; Callingham, Rosemary (2003): Statistical Literacy: A Complex Hierarchical Construct. Statistics Education Research Journal, 2(2), pp. 3-46.
Wild, Chris J.; Pfannkuch, Maxine (1999): Statistical Thinking in Empirical Enquiry. International Statistical Review, 67(3), pp. 223-265.
Due to the 400-word limit, not all references can be listed.

Author Information

Maximilian Sailer (presenting / submitting)
University of Augsburg
Chair of Education and Further Education
Augsburg
