Monthly Archives: July 2011

On Assessment (I) – Retention

We teach to the best of our abilities. How do we know that our students have acquired the material and know how to use it? How should assessments be structured so that both we and our students get accurate feedback? Here, I am not concerned with how we teach (traditional lecture, discovery projects, peer teaching – whatever), but rather with how to assess the knowledge our students have (hopefully) acquired.

Because we live in a high-stakes testing world (cumulative AP exams and state exams in May), one important part of our assessment goals is retention. In addition, the US high school math curriculum is often labeled “a mile wide and an inch deep.” Because we expose the students to such a wealth of topics, retention becomes even more important.

Cognitive scientists and educators have a very simple answer to the retention problem – repeat, repeat, repeat. Willingham argues that repetition leads to what he calls deep knowledge, and that deep knowledge is necessary for critical thinking: “When one is very familiar with a problem’s deep-structure, knowledge about how to solve it transfers well. That familiarity can come from long-term, repeated experience with one problem, or with various manifestations of one type of problem (i.e., many problems that have different surface structures, but the same deep structure). After repeated exposure to either or both, the subject simply perceives the deep structure as part of the problem description.”

In Willingham’s parlance, surface structure refers, for example, to the word description of a problem, while deep structure refers to the mathematical concept(s) needed to solve it. Problems may appear different to the students because one problem is about balloons ascending or descending and another is about two people traveling in cars in different directions, but both problems come down to solving a pair of linear equations.

Willingham makes the point that practice by itself is not sufficient. “The unexpected finding from cognitive science is that practice does not make perfect. Practice until you are perfect and you will be perfect only briefly. What’s necessary is sustained practice. By sustained practice I mean regular, ongoing review or use of the target material (e.g., regularly using new calculating skills to solve increasingly more complex math problems, reflecting on recently-learned historical material as one studies a subsequent history unit, taking regular quizzes or tests that draw on material learned earlier in the year). This kind of practice past the point of mastery is necessary to meet any of these three important goals of instruction: acquiring facts and knowledge, learning skills, or becoming an expert.”

While there is general agreement that repetition is necessary both for retention and for critical thinking, how should we structure this repetition? In a series of papers, Pashler and his collaborators analyzed how much students retained after studying material, studying or being tested on it again after a gap, and then taking a final test after a further delay. Schematically, they analyzed the following scenario: first study session → interstudy interval → second study/test session → retention interval → final test.

Their result indicates that “optimal memory occur[s] when spacing is some modest fraction of the final retention interval (perhaps about 10%–20%).”

If we accept their results, and we teach a topic for the first time in, say, November (random variables?) and want that knowledge retained by May – six months away – then the second time we study/test this concept should be in December/January. In general, their results indicate that as the length of the retention interval increases, the optimal interstudy interval also increases. Pashler and Rohrer maintain that “the data show that, if a given amount of study time is distributed or spaced across multiple sessions rather than massed into a single session, performance on a delayed final test is improved – a finding known as the spacing effect… [y]et few educators have heeded this advice, as evidenced, for instance, by a glance at students’ textbooks.”
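As a back-of-the-envelope illustration of that rule of thumb, here is a minimal Python sketch (my own illustration, not code from Pashler’s papers; the function name and the dates are made up) that turns a first-exposure date and a final-exam date into a suggested window for the second study/test session:

```python
# A minimal sketch: schedule a second study/test session assuming the
# optimal interstudy interval is roughly 10%-20% of the retention interval.
from datetime import date, timedelta

def review_window(first_study: date, final_test: date,
                  low: float = 0.10, high: float = 0.20) -> tuple[date, date]:
    """Return the (earliest, latest) suggested dates for the second
    study/test session under the 10%-20% rule of thumb."""
    retention_days = (final_test - first_study).days
    return (first_study + timedelta(days=int(retention_days * low)),
            first_study + timedelta(days=int(retention_days * high)))

# Illustrative dates: topic first taught November 1, AP exam in early May.
early, late = review_window(date(2010, 11, 1), date(2011, 5, 1))
print(early, late)
```

For a six-month retention interval, the 10%–20% rule puts the review window a few weeks to about a month after the first exposure; in practice one would round that to the next convenient quiz date.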

SBG offers a unique opportunity to put this advice into practice. Unlike other methods, such as Accelerated Math, SBG gives teachers control over when and what to test. We can control the spacing (the interstudy interval) as well as the frequency with which we go back and assess a topic. Since SBG allows students to retest a learning objective, that should provide a balm for the moans and groans that follow a test with problems that “we learned at the beginning of the term.” Random variables, systems of equations and other topics are no less important in June than they are in November.


Vince Lombardi, World Cup and …SBG

Vince Lombardi was quoted as saying, “Winning isn’t everything; it’s the only thing.” If Wikipedia is correct, he said it not just once but multiple times. Amy Chua, the author of “Battle Hymn of the Tiger Mother,” recounts that when she was a child she won second place in an essay contest. She invited her parents to the awards ceremony, but afterward her father said: “Don’t ever embarrass me again like that.” The favored US women’s soccer team lost this past Sunday to Japan in the Women’s World Cup. They were ahead twice, and the Japanese team tied them both times. In the shootout, the Americans melted. Yet many in the media, and some of the players themselves, said they were proud of their effort and that their performance was nothing to be ashamed of. In congratulating Japan, Women’s Health Magazine begins by saying, “While we’re still mourning the loss (just a little!)….” Just a little?

“Mr. S., I know I got a 2 on the AP Stats exam, but I did my best…” “It’s not whether you win or lose; it’s how you play the game.” What rubbish! You failed the exam. You lost the championship. A year from now, few people will remember how hard you tried – the record books will say you lost.

When did we lose the Vince Lombardi ethos? When did we start making excuses? On the state tests in mathematics in California, “Proficient,” the second-highest category, starts at 65% correct answers (Algebra II, 2008 data)! “Advanced,” the highest category, starts at 80% correct. Isn’t 80% supposed to be a B?

This is a serious and damaging cultural phenomenon. It affects the performance of some of our better students, and as these students grow up it will affect the performance of American society, of our country.

It is very hard to go against the flow. However, I maintain that we – as teachers – must make every effort to reverse this culture of excuses, this culture of narcissism. Day in and day out we need to give our students fewer pats on the head and more kicks in the derrière.

How do I reconcile this philosophy with that of SBG, where we offer reassessments as a matter of policy? First, reassessment – at least the way I plan it – is for the weekly quizzes, where students are assessed on basic knowledge and technical skills. I do not intend to offer reassessment for the summative tests, where there are more critical thinking questions. Second, reassessment will have its limits. I plan a maximum of two reassessments for each learning objective – that is all. Lastly, the part of the course where a student can reassess (the quizzes) counts for only half of the final grade. The other half comes from the summative tests, which are not “reassessable.”

My hope is that SBG will provide a structure for the students to be winners.

Teacher effects – At the margin?

As mentioned in my previous post, the results of the AP Statistics exam raised again in my mind the question of how important the teacher is to his or her students’ results on the AP exam. The reason for this question is that I have now taught AP Statistics for six years, and the average scores of my classes go up and down from year to year, despite my having become more familiar with the material, gained more experience and, hopefully, become a better teacher.

First, I should clarify that, in my book at least, results are defined as (a) the percentage of my kids who passed the exam and (b) the percentage who got 4s or 5s. If one acknowledges another dimension of success – the number of students taking the AP Stats course – then I am successful: the number has doubled. However, as I argued in the previous post, I am more interested in quality than in quantity.

It may well be that as the number of students taking the class increases, the average performance will go down. After all, if AP courses sit at the top of the pyramid of challenging courses, only a limited number of students are prepared for, or capable of, meeting the challenge.

The other side of this argument is that “it is up to you, Ms./Mr. Teacher, to ‘raise’ the unprepared so they too can meet the challenge.” I think this is a very simplistic argument that holds little or no water. Students come to class carrying a lot of different baggage. This year, in AP Stats, they told me of divorces, parents losing jobs, accidents and illness – and that does not take into account the usual teenage angst about college admissions, relationships and the rest.

In addition to the personal baggage there is the academic baggage. Some students did not have the mathematical maturity that others had gained by taking another year of math (see my previous post). Some had been successful in school without ever being challenged academically. Some were taking five AP courses and were overwhelmed by the demands on them.

To believe that teachers are inspirational miracle workers who can do the “Stand and Deliver” thing year after year is nonsense. After all, Jaime Escalante was not able to duplicate his achievement in Sacramento. There are too many confounding variables – in students, administrators and teachers themselves – to expect identically great performance year after year.

This leads me to conclude that teacher effects are somewhat marginal. Serious students, well prepared and with a good work ethic, will probably do well even if the teacher is below par. Poor students – those who give up easily despite all encouragement – are likely not to do well even with an above-average teacher.

Therefore the game is played at the margin. When a student (who did not pass the AP exam) writes, “Even though this was one of the hardest classes I have ever taken I am really glad I did…. I feel like you have really prepared me for college and I know I would’ve regretted not taking this class!” – then you know you’ve hit a winner. As for average class performance… well, we teach the kids that the mean is only one measure characterizing a distribution.

Who should take AP courses?

I am not happy with the results my kids got on the AP Statistics exam. I am looking at the results in more depth, but I keep wrestling with two questions: Who should take an AP course? What does the teacher contribute to the AP exam score? In previous years, I had informal discussions with many other AP teachers at my school regarding these questions – we never arrived at a consensus.

There are two schools of thought regarding which students should take AP courses. The first is that AP courses should be open to the broadest spectrum of students. The argument goes on to say that even students who score a 2 or less on the exam will benefit from taking a rigorous, challenging course such as most AP courses. According to this school of thought, the AP experience, even if unsuccessful as far as the AP exam grade is concerned, will show students what college-level work means and will serve as a wake-up call for those whose attitude and/or performance had been lackadaisical.

The second school of thought maintains that admission to AP courses should be selective – only students who have previously demonstrated academic maturity should be encouraged to take on further challenge. If the AP classes are heterogeneous in ability and work ethic, the argument continues, the performance of the class as a whole will suffer, since the instructor will have to slow down, repeat, and generally not challenge the students as much. This school of thought tends to measure success by the number of 4s and 5s on the exam rather than by passing rate or by non-numerical, “positive thinking” measures.

Through experience and perhaps personal inclination, I belong firmly in the selective school of thought. My arguments are as follows. First, although we could argue – and rightly so – that AP courses are good examples of the level of work required of students in college, it remains unproven that exposure to more rigorous courses in high school leads to a change in work habits later on in college. I would argue that if and when students perform better in college than in high school, it is not due to the “wake-up call” of high school AP courses, but rather to the fact that students in college face more individual responsibility and a much clearer, more practical relationship between academic preparation and a career.

My second argument has to do with the consequences of poor preparation for those students who come to an AP course without a strong background. The consequences include stress, a lower GPA – perhaps precluding admission to the college of choice – and sometimes parents who complain to the administration that “Ms./Mr. X is too tough; my son/daughter has never gotten a C since they’ve been in school.”

Consider the following example. At our school, the prerequisite for taking AP Statistics used to be a grade of C or better in Algebra II. As far as I can tell, this is not unusual. Last year I had a large number of students who took AP Statistics without first taking Pre-Calculus. Now, for those not teaching AP Stats, you should know that what we teach in Pre-Calculus is NOT used in AP Stats – the latter is much more of a conceptual course than the traditional algebra-based math courses. What happened is that ALL the students who had not taken Pre-Calc started having difficulties early on in Statistics. Parents started calling, saying how stressed their children were, how they were losing sleep, how their extracurricular activities suffered, how their college acceptance was threatened by low grades. At the end of the first semester, all of these students dropped AP Stats. What these students had in common was the lack of the academic/math maturity that the extra year of Pre-Calc gave the other students, even though this had nothing to do with the actual subject matter.

Obviously this does not mean that having taken Pre-Calc ensured a student’s success in AP Statistics. However, it does point out that at least one of the prerequisites of success (even if success is defined only as completing the course) is good academic preparation and maturity.

Finally, instead of a final exam in the second semester, my AP Stats students do a project. One group looked at the performance of AP US History (APUSH) students at our school. Traditionally, at our school, APUSH students are very successful on the AP exam. The prerequisite for APUSH is World History. The AP history teachers make every effort to tell prospective students how much more rigorous APUSH is than World History, and they spell out the demands of APUSH, including a pretty hefty summer assignment and exam. As a result, most of the C students in World History do not go on to APUSH (they take the regular US History course). As my group showed, the success of APUSH is directly related to the selectivity of admission to the course.

As far as I am concerned, I have changed the prerequisites for AP Stats for this year – students must have at least a C in Pre-Calc, and I added a prerequisite of a B in English. My class will be half the size of the one I had last year, but I am sure it will be a better class.