The MIT Computer Pedagogy Study at West Point: Setting and Significance

It occurred to me that, while part 1 of my analysis of the MIT-West Point study was somewhat widely read, part 2, which delved into the study’s significance, was not. To that end, here are both parts in one post, so that you can read them together. Or you can click here and save yourself the trouble: Part 1 set out to correct some serious misapprehensions about West Point pedagogy; Part 2 looked at the actual structure and results of the study.

Part 1: Classroom Technology and the West Point Study (May 26)

[Edit, 1/2/17: With apologies to Kevin Gannon for the snark and straw men in this post. I’m leaving it as is, however, out of a historian’s perhaps misplaced sense of source integrity. For a great, and extensive, exchange with Kevin and Josh Eyler, see our Twitter convo.]

I completely missed this when it came out the other day, but the Social Sciences Department at West Point apparently ran a study on student use of technology in the classroom. [Correction: MIT ran the actual study.] The result? Students not using laptops or tablets did (marginally) better throughout the semester than those who did.

[Reaction GIF: eye roll, Stanley Hudson, The Office]

Naturally, this got some folks riled up, and not without some reason–media tend to go for the big headline, and people scanning headlines will simply take away “ban technology!” and not delve further.

In other words, discussion is locked down, rather than opened up.

I first learned of this when someone (I can’t remember who) retweeted Jonathan Becker’s article on the subject. From there I was led to Kevin Gannon’s column. Both authors make some good points, but both also fall somewhat short in their broader analysis, largely because neither seems to know anything about West Point, and each makes several errors because of that ignorance. So, I want to shed some light on these issues, because those errors affect the overall credibility of both articles (we’ll put aside for a moment the typical paranoid reaction these studies always seem to elicit). I’m going to pick on Gannon more, both because of the tone of his column and because his claims come off (to me, anyway) as far less constructive than Becker’s. [This is actually the first of two columns taking issue with Gannon’s assertions. Sorry Dr. Gannon, it’s nothing personal, but some of your recent writing just happens to intersect with my end-of-semester reflections.]

First, Gannon’s key argument against the study is that

We learn that there were three sections of an economics course, consisting of nearly 800 students. I’m going to go out on a limb and say that they were taught in the large-lecture format. We know that lecture is one of the most ineffective ways to teach, and that student learning increases significantly when it is eschewed in favor of a pedagogy that embraces active learning (see the classic meta-study here). Yet we’re being asked to believe that hundreds of students packed into a lecture hall and subjected to demonstrably ineffective teaching methods aren’t learning because of their device use. That’s a design flaw characteristic of the entire genre of classroom-tech studies. They don’t even acknowledge, much less control for, pedagogy.

Nope, wrong on all counts. That’s not how West Point pedagogy works (I’ve already dealt with anti-lecture truthers, no need to repeat my version of that). Each section runs from 15 to 18 (occasionally 19, if enrollment is high) students. The study grouped sections into three larger groups [you would know this if you’d bothered to read the study: see page 8]. West Point is actually pretty cutting-edge when it comes to pedagogy, and has been since its foundation, because the prevailing philosophy has always been that small class sizes, with high levels of active student engagement, provide the best learning environment. It’s called the Thayer Method, after Sylvanus Thayer, superintendent from 1817 to 1833. A good description of the method is here, and a more technical one, as applied to chemistry classes, is here. I’m left wondering why, if you’re going to criticize something, you wouldn’t do a little research into the thing you’re criticizing…

A typical classroom at West Point, with COL Musteen leading the study of military history

But ultimately this is partly excusable, since, as Becker realizes, the study itself doesn’t really indicate what happens in a West Point classroom, and that leads to this:

are you able to tell me what happened in those classrooms? If not, how can we really know if the laptops are/were a distraction or not? What if the professors had the students doing lots of group projects that required them to do online research or collaborative work using Google Docs? If the course is designed for lecture and intense discussion, then maybe laptops are a “distraction.” But, then, that means the study would only generalize to classes where there is only lecture and intense discussion. Instead, we get articles like the Washington Post one that lead people to conclude that laptops should be banned from classrooms entirely, irrespective of instructional design.

And this leads me to my second point…

(Second) Becker’s point about instructional design is very well taken, because up to now West Point has actually strongly encouraged student use of technology in the classroom. The History Department in particular actually took the lead in promoting tablet use in class and tailoring its curriculum to a hybrid digital learning environment–the Thayer method with iPads (and now Android devices). The whole point of the iPads to begin with was to employ the power and potential of modern technology to make traditional subjects more dynamic. And the results seemed to bear it out. Granted, there was no study comparable to the current one (despite, cough, one or two people’s suggestion that one be implemented), but through extensive polling it was fairly well established that cadets found the new digital curriculum more engaging, practical, and rewarding than before. For example, with the ability for any participant to take over the projector, cadets could collaborate in real time in class, and then present (“brief” in military lingo) the class on their analysis. The interactive text illustrated concepts and ideas, and students had a lot more flexibility in studying to meet their individual learning styles than before. Not exactly Universal Design, but pretty darn close.

So, I’m not really sure what the internal politics of this new econ study actually were (and I agree with both Gannon and Becker that the performance deviations are not significant, especially when you remember that the standard deviation includes outliers, which influence the final number). When the iPad/tablet concept was introduced, there was a general concern that cadets would be more likely to goof off in class, and that this would be harder to correct with tablets (the basis of the “tablets flat on desks” lingo in the study), but that turned out to be marginal. Quizzes were still done on paper; exams were done on paper or laptop, at the instructor’s preference (and, below that, the student’s). Certainly over-reliance on technology was not encouraged, not least because, well, this was the military, and everyone relies too much on technology (the Naval Academy has been sensing this too).

Oh, and if you’re wondering how come I know so much about this, well, I’m the guy who piloted the History Department course and then served as the assistant course director for 1100 students spread over about 60 individual classes.

There does remain, however, the issue of West Point’s “special” student body.

So, third and last is the issue of how “exceptional” West Point is in terms of the student experience at a four-year liberal arts college (well, engineering college, but a lot of people agree that that is one thing that has outlived its usefulness since Thayer’s day). West Point is, of course, different in some significant ways. The whole military side of the house, for example, is something regular college students don’t have to deal with, unless you’re at one of the other federal or state academies. West Point students tend to be very driven, very physically fit, and very resourceful. They’re also under a heck of a lot more pressure than their civilian counterparts, so it tends to balance out. They also tend to be more respectful than your average student. If late for class, for example, cadets will stand at attention in the doorway until you give them permission to enter and sit down. Cell phones, which Gannon wonders about, are hardly to be seen, not least because the uniform pockets aren’t exactly made for them (except maybe the back pocket); if everyone’s phone is put away, you can’t very well sneak yours out in a small classroom environment.

But as for Becker’s guess that students at West Point are “focused on compliance”? The truth is, no more so than your average student. West Point students do what they need to do to survive and get through your class, and they’re like any other college student in that regard. Because they’re driven to succeed they tend to pick up your suggestions faster than many students, but by the standards of the civilian students I’ve taught over the last decade, West Point students are what we call “slightly right of center mass.”

It’s also worth noting the study’s authors are aware that West Point offers advantages to educators that many other schools don’t (page 26):

In a learning environment with lower incentives for performance, fewer disciplinary restrictions on distracting behavior, and larger class sizes, the effects of Internet-enabled technology on achievement may be larger due to professors’ decreased ability to monitor and correct irrelevant usage.

Now Gannon says what we should be asking is why that student felt it was more worth their time to surf the web than engage in class. To which I can only say, my friend, I know you teach surveys with unwilling conscripts. It ain’t always the professor’s fault that the student isn’t engaged (we can go down this road later…).

There is one point, however, where Gannon does hit the nail on the head, in that the study makes no provision for disability. And that is correct, because the brutal truth is that military schools are not geared to students with disabilities, nor can they be, in practical terms. In fact, in my last year there I found myself meditating on the curious fact that, while the support system includes counseling, resiliency training, and the arrival of therapy dogs during finals (very popular, those), the military schools are probably the last places where disability rights activism for students will take root, if it ever does, simply because the military is an outcomes-based organization. In that sense, the West Point study is informed by the “unique,” non-academic side of things, and will only tell you about a certain range of people. Granted, you have the ability, as the study authors noted, to control for a lot of extraneous factors that you normally wouldn’t in a regular college, which is, actually, a strength of the study. But it is only a range nonetheless, and even then, each cadet has his or her own struggles during the semester, so you can’t assume equal prep come the final exam.

Ok, so that’s two points. Yet, with most of his premises being factually wrong, the rest of Gannon’s column is really more like a “get off my lawn” echo chamber than an actual attempt to engage with unwelcome findings. This is a typical reaction I’ve noticed from what I call “avant garde educators”: a refusal to acknowledge or engage with, heck, even to read, studies that contradict your beliefs, whether or not those studies are actually wrong (and I agree that this new study doesn’t tell us much). It only helps to bifurcate the profession into camps of the enlightened and the troglodytes. The complaint about exams, while not completely without merit, can be dismissed on much the same grounds as I pointed to for Twitter metrics: you can’t ditch metrics simply because you don’t like them on principle. Admin won’t find that a convincing reason, and Admin would be right. Persuade Admin to use different metrics because learning outcomes, but good luck getting rid of finals.

I get that Gannon and Becker are concerned with people taking a study done at a military school and extrapolating to all of higher ed, but if you’re going to make that concern relevant you’ve got to get your facts right. It just underscores to me that most academics simply have no idea what goes on at places like West Point, and imagine some kind of authoritarian degree-mill dystopia where “compliance” is key. The truth of the matter is rather different, and because of that, if not out of principle, you should endeavor to do your homework before casting stones at straw men. The truth of the matter is that cadets are students–with a difference, to be sure, but still students in most of the ways that matter to higher ed in general. The truth of the matter is that, wherever your pedagogy is, West Point probably passed it five years ago.

Oh, and as for Becker’s parting query? Joke’s on him. Thayer Hall, where history and social science instruction takes place, is mostly windowless. Problem solved.

Thayer Hall, from the river courts

[Last edited at 7:48 p.m., 26 May]

Part 2: Classroom Tech: the MIT-West Point Study, Interpretation (June 8)

So, on May 26 I covered the MIT study on the West Point economics classes, and spent most of the time correcting a lot of errors about West Point, of both fact and interpretation, in Kevin Gannon’s column and Jon Becker’s column. There were also points on which I agreed with them, including the opinion that the study’s results are not as significant as most people would make them out to be. Here I want to expand on how we should interpret the results of the study, in my case from the perspective of the humanities, and specifically from that of history.

Relevant portions here are “Results” (pages 16-25), “Conclusions” (pages 25-28), and Tables 3, 4, and 5 (pages 33-35).

On the evening of the 26th, an econ professor at West Point gave me some valuable critiques, and in the process of digesting them (what I call “writing out loud”), I think I solidified my interpretation of this study. What follows are my replies, expanded, revised, and corrected. Bottom Line Up Front: the study supports previous research finding that, for people with a certain kind of neurobiology, handwriting notes is more effective than typing them. Yet the study did not test other kinds of learning with technology, and it demonstrated no real link between note-taking mode and the ability to argue abstract concepts. Hence, the study’s significance is not as great as the 3-minute version would lead us to assume.

My correspondent made two very good points that should provoke a lot of thought: 1) Military academies are excellent places to do studies of this kind, because they randomly select and distribute students across all sections, in contrast to what you see in regular colleges where the population is mostly self-selecting.  2) When you compare “the coefficient [of computer users] to the average score [of non-computer users],” the difference is large enough that you could compare it to hiring a tutor. So, the negative results from electronic device usage are actually significant.

So, I went back to the study’s analysis of the results, and my thoughts ran as follows:

a) I would both agree and disagree regarding academies being the “perfect place.” I agree because students are assigned randomly to classes, something the study itself actually draws attention to (a quick toy simulation of why that matters follows below). The study is very methodologically sound, which Gannon doesn’t get, though Becker does. Methodology matters, and they got the methodology right in a very important way.

I’d disagree in that, while I’ve already explained how West Point resembles civilian colleges in many ways, it cannot, by its very nature, replicate the diversity of individuals. You simply won’t get the same range of student as in a civilian school, whether elite private university or community college, Table 1 (p. 31) notwithstanding. So, there IS a risk of extrapolation from USMA to the rest of academia, which should be borne in mind, but won’t be, because people will just read the headline.
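On the “perfect place” point, here is a toy simulation, purely my own illustration and not anything drawn from the study’s data, of why random assignment matters. The student count (~800) echoes the figure quoted above; the 50 sections of 16 are my assumption based on the section sizes described in Part 1. Under random assignment the section averages of incoming ability stay tightly clustered, while under self-selection they drift apart, which is precisely the confound the study’s design avoids.

```python
import random
import statistics

# Toy simulation (illustrative only; no real West Point data): why random
# assignment to sections matters. "Prior ability" stands in for whatever
# incoming differences (GPA, motivation, etc.) might otherwise confound results.
random.seed(1)
students = [random.gauss(0, 1) for _ in range(800)]  # ~800 students; 50 sections of 16 is assumed

# Random assignment: shuffle, then deal students into 50 small sections.
random.shuffle(students)
random_sections = [students[i::50] for i in range(50)]
spread_random = statistics.pstdev(statistics.mean(s) for s in random_sections)

# Self-selection: students sort themselves (e.g., by time slot or instructor
# reputation), so section averages drift apart.
sorted_students = sorted(students)
selected_sections = [sorted_students[i * 16:(i + 1) * 16] for i in range(50)]
spread_selected = statistics.pstdev(statistics.mean(s) for s in selected_sections)

print(f"Spread of section means, random assignment: {spread_random:.2f}")
print(f"Spread of section means, self-selection:    {spread_selected:.2f}")
```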

b) Are the variations in student performance “marginal” or significant? There are several ways to interpret the results.

First (what in debate we would call an “off-case kritique”), bear in mind the mechanism used to “determine outcomes”: the final exam. The benefits of standardization still apply, and the authors took great care to factor in grading variations among instructors (pages 13-16). On the other hand, there is a growing chorus of voices critiquing the final exam, especially the in-class final, as an effective learning or assessment tool. Some of those critiques are legitimate, some are wide of the mark, and others are simply trying to start a conversation–David Perry’s recent column is a decent read (I moved away from in-class finals this year, for a variety of reasons). So, while it is a logical choice here, the assumption that an in-class final exam is the best way of measuring semester-long learning for all students across the board becomes somewhat problematic when transferred to a civilian college setting.
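As an aside on “factoring in grading variations”: one common way to handle grader severity is to standardize scores within each grader before comparing across sections. The sketch below is my own illustration of that general idea with made-up numbers, not necessarily the procedure the authors describe on pages 13-16.

```python
from collections import defaultdict
import statistics

# Sketch of one common adjustment for grader severity: z-score each exam
# within its grader, so scores from a "hard" grader and an "easy" grader
# land on a comparable scale. Made-up numbers; illustrative only.
exams = [("A", 78), ("A", 82), ("A", 90),   # grader A runs low
         ("B", 88), ("B", 91), ("B", 95)]   # grader B runs high

by_grader = defaultdict(list)
for grader, score in exams:
    by_grader[grader].append(score)

grader_stats = {g: (statistics.mean(s), statistics.pstdev(s)) for g, s in by_grader.items()}

for grader, raw in exams:
    mean, sd = grader_stats[grader]
    print(f"grader {grader}: raw {raw} -> within-grader z = {(raw - mean) / sd:+.2f}")
```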

But second (on case), consider what the figures actually mean. A standard-deviation effect is a relative and contextual quantity, but it won’t be interpreted as such by the majority of people who read the newspaper article. For example, readers might notice that students in laptop/tablet classrooms scored about 0.23 standard deviations lower on short answers, and then say “nobody did as well in short answers.” But that isn’t what the data actually say. So, a more accurate way of stating the result is that you have a greater likelihood of scoring lower on short answers, but that’s hardly guaranteed.
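To make that concrete, here is a back-of-the-envelope illustration, my own and assuming roughly normal score distributions rather than anything computed from the study’s raw data, of what a shift of about -0.23 standard deviations actually implies:

```python
from statistics import NormalDist

# Toy illustration (not the study's raw data): what a mean shift of about
# -0.23 standard deviations implies if exam scores are roughly normal.
shift = -0.23  # standardized short-answer effect discussed above

std_normal = NormalDist(0, 1)

# Where does the *average* laptop/tablet-section student land relative to
# the no-device group's distribution?
percentile = std_normal.cdf(shift) * 100
print(f"Average device-section student sits near the {percentile:.0f}th "
      "percentile of the no-device group (50th would mean no effect).")

# Chance a randomly chosen device-section student still outscores a randomly
# chosen no-device student: the difference of two unit-normal draws has SD sqrt(2).
p_outscore = 1 - NormalDist(shift, 2 ** 0.5).cdf(0)
print(f"They still outscore a no-device peer about {p_outscore:.0%} of the time.")
```

Under those assumptions, the average device-section student lands around the 41st percentile of the no-device distribution, and an individual device-section student still beats a no-device peer well over 40 percent of the time: a real effect, but nothing like “nobody did as well.”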

Third, the study is significant in one key area. Based on the coefficients under the “multiple choice” and “short answer” headings, I think the study does confirm the Mueller/Oppenheimer 2014 study (April version here; latest version under lockdown at Psychological Science) in that raw data retention seems unquestionably to be affected by the mode of inscribing the data. Now, you can like this or not, I don’t really care–this is something Gannon and co. find a very hard pill to swallow, for a variety of reasons, not least of which is political agenda. But a large, rigorously designed study has now provided additional evidence that, when it comes to note-taking, you tend to remember more data if you eschew keyboards.

Fourth, here is the key reason the study’s significance doesn’t really extend to much beyond data transcription: there was NO appreciable difference in performance on the essay portion of the exam (part D in Tables 3, 4, and 5). As in, statistically insignificant or non-existent deviations. Now, I’m not sure how they break down final exams in econ classes, but in history classes, both at West Point and in many other places that I’ve worked, the essays are weighted much higher than the multiple choice or short answers. So if the numbers don’t lie, that means that students with (on average) inferior data retention STILL did as well conceptually as their peers.
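A bit of back-of-the-envelope arithmetic shows why that matters for a humanities-style exam. The weights below are hypothetical (a history-style split, not the econ course’s actual breakdown), and the multiple-choice figure is purely illustrative; only the short-answer effect echoes the number discussed above.

```python
# Back-of-the-envelope arithmetic (hypothetical weights, not the study's):
# how much a recall-question penalty moves a weighted exam score when the
# essay effect is ~0 and essays carry most of the grade.
weights = {"multiple_choice": 0.2, "short_answer": 0.2, "essay": 0.6}       # assumed history-style weighting
effects_sd = {"multiple_choice": -0.20, "short_answer": -0.23, "essay": 0.0}  # illustrative effect sizes

overall_effect = sum(weights[part] * effects_sd[part] for part in weights)
print(f"Weighted overall effect: {overall_effect:+.3f} SD")
```

Under those assumed weights, the overall hit is on the order of -0.09 standard deviations: the recall penalty gets diluted as soon as the (unaffected) essay carries most of the grade.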

Conclusion…

What does this mean? I think it can be explained in one of two ways. It either raises questions about the rigor of the controls for instructor bias (instructors grading electronic essays tend to give B’s where they’d give C+’s if it were hard copy), or it means that the metacognitive impact of computer use is negligible, which, from a humanities standpoint, REALLY minimizes the impact of the study. Especially if, as Gannon and others keep arguing, the point is concepts, not content. Personally I think history is burdened by requiring a certain amount of content mastery, but a KEY result of the study is that computer use doesn’t impact your grasp of conceptual, abstract thought. And ultimately, I think Becker is right that this study ONLY tested one kind of learning, and not the kind of learning activities that HI301 and HI302 were/are designed for (heavy computer use in collaborative activities).

The triumphant “old school” response to the study is, therefore, rather misplaced. Do I ban all technology because a study geared to one kind of tech use indicates lower performance in data retention? Perhaps instead I can broaden my pedagogy for methods not accounted for in this study. After all, numerous studies charting other kinds of pedagogy and learning activities point to the benefits of electronic devices, such as iPads, and it suits the way I teach. But I still take notes by hand, and the MIT-West Point study confirms that that is more effective than typing if you can do it. And yes, I share that result with my students. But with caveats, because you can’t test everything. And neither could this study.

Moving on…

One Reply to “The MIT Computer Pedagogy Study at West Point: Setting and Significance”

  1. I just want to know the GPA difference between the groups. Just show the difference in grade scores and I can draw my own conclusion as to the value of the results. It seems like this entire discussion could be simplified… I don’t want to tell people the West Point study found something when it is actually insignificant. Thanks Phil.
