Means and her colleagues’ (2010) optimistic perspective may stem from their meta-analysis’s focus on web-based instruction, a relatively new format for distance learning. Hattie (2009) reported an average effect size of 0.18 for web-based learning (equivalent to a 7 percentile point gain). A meta-analysis by Traci Sitzmann, Kurt Kraiger, David Stewart, and Robert Wisher (2006) found that web-based learning was most effective for declarative knowledge (understanding of facts, details, principles, and generalizations), as opposed to procedural knowledge (strategies and processes).
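Throughout this discussion, effect sizes are translated into percentile gains; a brief sketch of that conversion, assuming the conventional interpretation of an effect size d as a shift on the standard normal distribution (with Φ denoting the standard normal cumulative distribution function):

\[
\text{percentile gain} \approx 100\,\bigl[\Phi(d) - 0.5\bigr], \qquad \Phi(0.18) \approx 0.571 \;\Rightarrow\; 100\,(0.571 - 0.5) \approx 7 \text{ points}
\]

In other words, an effect size of 0.18 places the average student in the experimental group at roughly the 57th percentile of the control group, a gain of about 7 percentile points over the expected 50th percentile.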
Blended Learning
Instruction that combines online and face-to-face elements is known as blended learning, or hybrid learning (Means et al., 2010; Schulte, 2011). The aforementioned meta-analysis by Means and her colleagues (2010) compared blended learning to face-to-face instruction and reported an average effect size of 0.35 in favor of the blended approach. Similarly, Sitzmann and her colleagues (2006) examined the effects of web-based instruction as a blended supplement to classroom instruction and reported a mean effect size of 0.34 for declarative knowledge and 0.52 for procedural knowledge. David Pearson, Richard Ferdig, Robert Blomeyer, and Juan Moran (2005) examined the impact of digital literacy tools on middle school students through a synthesis of studies published between 1988 and 2005. Most of the digital literacy tools they examined were used in a blended approach. They found an overall effect size of 0.49, indicating a 19 percentile point gain. Finally, Hattie (2009) reported a number of specific uses of digital media that indicated a blended approach. For example, he reported an average effect size of 0.52 for interactive video, an average effect size of 0.22 for audiovisual methods (including television, film, video, and slides), and an average effect size of 0.33 for simulations. As with the research on computer use, this research seems to indicate that using technology in tandem with effective instruction may provide the most benefits.
Interactive Whiteboards
The basic distinction between interactive and noninteractive technology tools is that interactive technologies are two-way systems: they provide output in response to a user’s specific input. To illustrate, a film is noninteractive because the response from the audience does not change the trajectory of the plot. An audience member cannot click on the screen, for instance, to elicit different reactions from the characters. On the other hand, a video game is interactive because what happens on the screen is a direct result of the way in which a player manipulates the controls.
In his meta-analysis of the effectiveness of audiovisual methods in general, Hattie (2009) reported a relatively small effect size of 0.22, indicating a percentile gain of 9 points. However, in a synthesis of research on the effectiveness of interactive video methods, Hattie found a medium to large effect size of 0.52, indicating a percentile gain of 20 points. From this difference, one might reasonably infer that interactive technologies are more likely than noninteractive technologies to lead to positive gains in student achievement.
In schools, interactive whiteboards (IWBs) are a common form of interactive technology. IWBs use projectors to display a computer desktop onto a large, wall-mounted surface. Using a stylus or a fingertip, users can write, select, move, and interact with objects on the screen. Two studies conducted by Robert Marzano and Mark Haystead (2009, 2010) focused specifically on IWBs. They found an average effect size of 0.44 for 85 independent quasi-experimental studies involving 3,338 subjects. This means that on average, teachers who used IWBs saw student achievement gains of 15 percentile points over what was expected when teachers did not use IWBs. Analysis of video recordings of teachers in the studies provided further detail about the effects of IWBs. Marzano and Haystead (2010) found that teachers whose students exhibited higher achievement integrated the technology better with research-based instructional strategies. Their findings suggested that “substantial increases in student achievement would be predicted with improvements in teacher behavior” (p. 70), particularly with respect to chunking, scaffolding, pacing, progress monitoring, clarity of content depicted on the IWB, and student response rates. All six of these instructional variables were found to correlate with the size of the IWB effect. In summary, when teachers used these strategies more effectively, student achievement gains were greater.
Other studies on IWBs include Omar López’s (2010) analysis of English learners’ (ELs) academic achievement with and without IWB technology. His results strongly suggested that ELs taught in IWB-enhanced classrooms achieved more than ELs in traditional classrooms without the technology. He also found that IWBs narrowed the achievement gap between ELs in IWB-enhanced classrooms and non-ELs in traditional classrooms. IWB technology has also been found to increase student engagement (Beeland, 2002; Smith, 2000), improve student retention of content (M. L. Zirkle, 2003), and enhance teacher planning and organization (Latham, 2002).
Mobile Devices
Any pocket-sized, handheld computing device, such as a smartphone, tablet, or e-reader, can be classified as a mobile device. In 2013, Grunwald Associates published the results of a U.S. survey on student use of mobile devices. Of K–12 parents surveyed, 56 percent said they would “be willing to purchase a mobile device for their child to use in the classroom if the school required it” (p. 3). However, the authors found that only 16 percent of schools allow students to use personal mobile devices in the classroom. Nevertheless, more than a quarter of all middle school students (28 percent) and about half of all high school students (51 percent) carry a smartphone with them to school every day.
In a review of trends in research, Wen-Hsiung Wu and his colleagues (2012) defined mobile learning as “using technology as a mediating tool for learning via mobile devices accessing data and communicating with others through wireless technology” (p. 818). In other words, the learner does not have to be in a fixed, specific location to engage in mobile learning. The authors reported that 86 percent of the 164 mobile learning studies in their literature review presented positive outcomes, although they did not offer an average effect size for these outcomes or specify whether the outcomes related to achievement, motivation, behavior, or some other variable entirely. More specific research on mobile devices can be organized into two categories: (1) smartphones and (2) student response systems.
Smartphones
A smartphone is a mobile phone that uses a computer operating system to connect the user to online data. Modern smartphones often have recognizable features such as compact digital cameras and touchscreen interfaces that allow a user to scroll with a finger through a web page or a document. Research by Christopher Sanchez and Jennifer Wiley (2009) and Christopher Sanchez and Russell Branaghan (2011) on the effect of scrolling textual interfaces on cognition and memory has direct implications for the use of smartphones for learning. Sanchez and Wiley (2009) reported that text displayed in a scrolling format is not only harder to understand but also harder to remember than text displayed in a print format, especially for individuals with low working memory: “Nonscrolling interfaces produced significantly better comprehension overall than did scrolling interfaces.… Whereas scrolling did lead to worse performance overall, there was a more pronounced effect for those individuals who had lower WMC [working memory capacity]” (p. 734). Furthermore, the learners in the study “were less able to develop a causal understanding of a complex topic when presented with a scrolling interface than when presented the same information units in discrete pages” (Sanchez & Wiley, 2009, p. 737). In sum, scrolling text is harder to comprehend and remember than text presented on discrete, stationary pages.
Two years later, Sanchez and Branaghan (2011) conducted a similar set of studies to determine whether the small, scrolling displays on mobile devices affect a reader’s ability to reason or remember facts. They found that while “factual recall is relatively unaffected,” there is a significant decrease in performance when readers must use the factual information “to make appropriate decisions or otherwise reason about a given situation” (p. 796). The authors