Saturday, July 7, 2012

Knight Physics for Scientists and Engineers

In the discussion on the PHYS-LRNR mailing list of the lower FCI scores seen by students taking the Matter and Interactions curriculum (see here for my department's experience), some people claimed that students do not read the textbook, and that the textbook therefore should not have as much of an impact as the classroom engagement. I can't speak for other schools, but we have seen a very large change in FCI scores when we have changed textbooks.

The lower scores with the M&I text that we saw mirrored those reported in the AJP paper that started the discussion, but we also saw a very large uptick in scores in 2006, when we began using Randall Knight's Physics for Scientists and Engineers.
Before 2006, different instructors used different texts (Serway, M&I, Fishbane). In 2006 we decided to unify the textbooks, and we chose Knight largely because it was advertised as strong in PER-based approaches and in building conceptual understanding. At the same time, we moved from homework submitted on paper to homework done on the Mastering Physics online system that accompanies the Knight text.

Whether the change can be attributed to the text or to the online homework, there was clearly a change in our students' normalized gains. Looking at individual instructors shows that the improvement was universal.
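For readers unfamiliar with the metric: the normalized gain used throughout is Hake's <g>, the fraction of the available improvement that students actually realize between the pre-test and the post-test. A minimal sketch in Python (the scores in the example are made-up numbers, not our data):

    def normalized_gain(pre, post):
        """Hake's normalized gain <g> = (post - pre) / (100 - pre),
        with both scores given as percentages."""
        return (post - pre) / (100.0 - pre)

    # Illustrative numbers only, not our department's results:
    print(normalized_gain(45.0, 67.0))   # 0.4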

Instructor A (above) teaches with clicker questions and group problem solving. He pushes conceptual understanding (often using Ranking Tasks in class) before equation solving. While his gains were near 0.4 with the Serway text, they increased with the Knight text, and then fell dramatically the one semester he used M&I.


Instructor B (above) teaches with more demonstrations than the rest of the department, often in Interactive Lecture Demonstration (ILD) format. His results with the Serway text were more mixed, but the Knight results are very consistent, and higher than most of the Serway results.


Instructor C teaches with a lot of questioning of the students, but does not follow many other PER suggestions. Although he recognized that the Knight text had much more conceptual material than most texts, he opted not to change the material he covered (he continued his "standard treatment"). He, too, saw an impressive increase.

All three instructors created their own homework assignments. Some were due weekly, others every other day. As mentioned above, the three instructors had vastly different teaching styles and approaches. The consistent elements were the Knight textbook and Mastering Physics. Whether it is one, or the other, or the combination, it seems clear that the choice of text and/or homework system can make a large difference in results.

FCI and Matter and Interactions

The July 2012 issue of the American Journal of Physics has an article that compares Force Concept Inventory (FCI) scores between a traditional section of introductory mechanics and one taught using the Matter and Interactions (M&I) textbook. The finding was that the post-test scores were higher for the traditional curriculum than for the M&I curriculum, even though both sections were taught using similar interactive methods.

That finding mirrors what we have seen in our department. Three separate instructors have used the M&I text (1st edition once, 3rd edition twice), and all have had lower normalized gains than other sections taught that semester.

There is a lot of data there, but one instructor in particular has taught introductory mechanics out of three different textbooks, using generally the same interactive techniques ("clicker" questions, group problem solving) with each text, and his results on the FCI with the M&I text were significantly lower.

As a department we made the change to M&I two years ago because we were seeing students coming in with more and more physics background. M&I gives the students a different approach from the one they would have seen in high school, so it seemed less repetitious to our best-prepared students. We also appreciated the introduction to computer modeling with VPython that is easily associated with M&I. In two more years, when the first batch of M&I students has made it through our entire curriculum, we will have to decide whether the seeming reduction in conceptual understanding (at least as measured by the FCI, and by the CSEM for the E&M semester) is appropriately balanced by the other M&I benefits (computer modeling, the best-prepared students remaining more engaged, etc.).

They said it better

My last post suggested that Khan Academy is not an enemy of education just because it fails to meet the highest standards of pedagogy and content. Rather, the problem is the increased expectations being placed on it as its popularity in the news grows, along with its sponsorship from the Gates Foundation and Google.

Shortly after that post I read two pieces that said much the same thing. One was written by Robert Talbert in the Chronicle of Higher Education; the other, an agreement with that article, is here.

Saturday, June 30, 2012

A "Defense" of Khan Academy

I have been following the story of Khan Academy for about a year now. My kids used it for summer refreshers in math last year. I've seen the TED talks. I've followed Bill Gates' support of the Academy. I've seen Sal Khan on mainstream TV. And I've read a lot of critiques of him and his videos.

Now there is a push for a series of video critiques, the MTT2K prize, based on a video by John Golden and David Coffey that satirized and commented on one of the Khan math videos. Physics professor Rhett Allain did an analysis of a Khan projectile motion video as well. These critiques point out various errors in the videos. Depending on who you are, the "errors" range from big and important to rather nit-picky (Dr. Allain commented that it was annoying that Khan repeated himself often, but then found himself doing the same occasionally in his own video response). In the end, if these critiques remain constructive, and really concentrate on important issues rather than seizing on every little item just to prove that Khan is inferior, some good could come from them. There have already been corrections made in response to Golden and Coffey's video. At the moment, however, the tenor of the discussion has a huge negative bias.


To me, however, the main issue with Khan Academy is not the places where it could be pedagogically more efficient, or where it skims over details a teacher would like to see. The main issue is that there is a group of people, Bill Gates included, who are asking/expecting it to be much more than it is, or than it ever likely can be. Even Sal Khan himself seems to have shifted the focus with the idea that it could be the center of a "flipped classroom", where students watch a video at home, and then do problems at school.


In general, Khan videos do not do a very good job as the sole or even primary source of information; there are too many issues with them for that role. Because of Khan's attempt to teach a broad range of topics, he doesn't have the expertise to identify all of the common misconceptions and tackle them head on. And, of course, since it is just a video, there is no opportunity to ask it questions when the student watching at home runs into those misconceptions. Because of Khan's desire/need to keep the videos at a length of 10 minutes or so, he has to skip many of the fundamentals that explain why the math/physics/etc. is the way it is. The videos turn into problem-solving tutorials, which is good for the question at hand, but doesn't do much for teaching general methods. And because Khan has no way to know which videos have already been watched, or in what order, there is no way to know which topics the student may already be familiar with (there was a few-minute discussion of air resistance in the projectile motion video that seemed too hand-wavy for someone who knew nothing of the subject, but too in-depth for someone who had already solved a similar problem or two).


Explicitly addressing misconceptions, the ability to go in-depth into the hows and whys of a topic, and classroom management that takes into account the current knowledge of the class are all things that a live teacher should be handling. I'll acknowledge that some teachers are probably not up to par on these things, but replacing the instruction of all teachers with Khan Academy as the primary source of information is not the answer.


Khan Academy is of great use as a review for students, as my kids used it last summer, or as a refresher for people who have been out of school for a while, or even as a resource of explanations, problems, and exercises for students as they learn the material from a teacher. For these cases, the occasional pedagogical oversight or imperfect explanation is acceptable. It is not a tool that can do anything and everything. It cannot be the primary solution to poor education. Expecting it to be more, and denying its weaknesses, is the problem.

Tuesday, June 26, 2012

Marbles on a meter stick

I talked with Peter Bohacek at the WebAssign Users Group Meeting (WAUG), and he showed me some of his excellent physics videos. One of them was this one below:


A number of questions may come to mind; two were of particular interest to me. First, looking at the final state, a number of the balls lie along a straight line, while the rest are at different heights. What determines the number of balls in each part? Second, why is the linear part not horizontal?

I wrote a VPython simulation of the process and made a screencast of the output.




When I removed the code for the interaction between the balls and the meter stick, there were 7 marbles in the linear part. That makes sense: a meter stick pivoted at one end falls with the same angular acceleration as a simple pendulum of length 2/3 meter, so every point on the stick beyond the 2/3-meter mark accelerates faster than g, while a free ball can accelerate no faster than g. The balls at those larger distances should therefore instantaneously separate from the meter stick, and there are 7 of them (at 1, 0.95, 0.9, 0.85, 0.8, 0.75, and 0.7 meters).
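A few lines of plain Python (separate from the VPython simulation, and assuming the 5 cm ball spacing seen in the video) confirm the count:

    # A uniform stick of length L pivoted at one end and released from
    # horizontal has initial angular acceleration alpha = 3g / (2L).
    # A point at radius r then accelerates at r*alpha, which exceeds g
    # (the best a free ball can do) whenever r > 2L/3.
    g = 9.8    # m/s^2
    L = 1.0    # meter stick

    alpha = 3 * g / (2 * L)                         # rad/s^2 at release
    positions = [0.05 * i for i in range(1, 21)]    # assumed: balls every 5 cm
    separating = [r for r in positions if r * alpha > g]
    print(len(separating))    # 7 -- the balls from 0.70 m out to 1.00 m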

When the ball/stick interaction was added back in, the general shape of the falling balls was reproduced, as shown in the images below.


There are a few differences, though. While I used the correct masses and sizes of the objects, the simulation produced a linear section that is one or two balls (depending on exactly which ones you count as in the linear section) bigger than in the video. The simulation also gave a horizontal line of balls, while the experiment gave about a 3-degree incline.

One way to make the sizes of the linear parts match is to reduce the mass of the balls, thereby reducing the force they put on the meter stick and with it the stick's angular acceleration. I found that the mass would need to be reduced by more than a factor of 2 from the reported values, which is not reasonable. Another option is to increase the mass of the meter stick, but again a factor-of-2 change in mass would be necessary. Adding some friction at the rotation point would also slow the rotation of the simulated meter stick, but (untested) the amount of friction required seems too large for that to be the explanation by itself.
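To see why the ball mass matters at all, here is a back-of-the-envelope estimate (my simplification, not the contact model in BallStick.py; the stick and ball masses below are assumed values, not the ones reported with the video). While a ball rides with the stick it adds m*g*r of torque and m*r^2 of moment of inertia about the pivot, so a ball inside the 2/3-meter mark tends to push the initial angular acceleration up rather than down:

    g, L = 9.8, 1.0
    M_stick = 0.15                                  # assumed stick mass, kg
    positions = [0.05 * i for i in range(1, 21)]    # assumed: balls every 5 cm

    def initial_alpha(m_ball):
        """Initial angular acceleration at release, found self-consistently:
        drop balls from the contact set until every remaining ball can
        actually keep up with the stick (r * alpha <= g)."""
        contact = list(positions)
        while True:
            torque = M_stick * g * L / 2 + sum(m_ball * g * r for r in contact)
            inertia = M_stick * L**2 / 3 + sum(m_ball * r**2 for r in contact)
            alpha = torque / inertia
            staying = [r for r in contact if r * alpha <= g]
            if len(staying) == len(contact):
                return alpha, len(positions) - len(staying)   # balls that separate
            contact = staying

    for m_ball in (0.010, 0.005):                   # assumed ball masses, kg
        alpha, n_separating = initial_alpha(m_ball)
        print(m_ball, round(alpha, 1), n_separating)

With these assumed numbers, halving the ball mass lowers the stick's initial angular acceleration only slightly and shortens the linear section by about one ball, which is consistent with needing an unreasonably large mass change to match the video.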

I did video analysis on the experiment, recording the motion of each ball and of the stick itself. Something about the experiment produces a lower acceleration of the stick than the simulation has, which is the ultimate cause of the difference in linear-section size. Comparing the falling balls, the simulated ball at 100 cm matches the experiment quite well (with a small discrepancy in the free-fall acceleration). The simulated ball at the 25 cm mark has the same shape as the experimental one, but it begins to fall sooner - again due to the larger acceleration of the simulated stick.
Position of the stick as a function of time.

Position of the balls at 100 and 25 cm as a function of time. 
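For anyone repeating the video analysis, the free-fall comparison comes down to fitting a parabola to the tracked positions; the quadratic coefficient is half the acceleration. A quick sketch with numpy.polyfit (the data points are made-up stand-ins for tracker output, not my measurements):

    import numpy as np

    # Synthetic stand-in for tracker output: (t, y) for a ball released
    # from rest near y = 0, with a little noise added by hand.
    t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
    y = np.array([0.000, -0.013, -0.048, -0.111, -0.195, -0.306])

    # y(t) = y0 + v0*t + (a/2)*t^2, so acceleration = 2 * quadratic coefficient
    coeffs = np.polyfit(t, y, 2)
    print("fitted acceleration:", 2 * coeffs[0], "m/s^2")   # close to -9.8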

The simulation was also unable to mimic the non-horizontal line of balls, and it isn't clear physically where that characteristic would come from. At one point the simulation did produce it, but I believe that was an artifact of the discrete time steps, which caused the balls to touch, and then not touch, the stick repeatedly. My current ball/stick interaction model produces only horizontal lines of balls unless the stick starts at an angle, and then it simply keeps the angle it started at - which is not the case in the video.

For the curious, here is a link to the source code for the simulation: BallStick.py. I'd love to know why I can't match the experiment a little more closely.