We teach to the best of our abilities. How do we know that our students have acquired the material and know how to use it? How should assessments be structured so that both we and our students get accurate feedback? Here, I am not concerned with *how we teach* (traditional lecture, discovery projects, peer teaching – whatever), but rather with *how to assess* the knowledge our students have (hopefully) acquired.

Because we live in a high-stakes testing world (cumulative AP exams and state exams in May), one important part of our assessment goals is retention. In addition, the US high school math curriculum is often labeled as “a mile wide and an inch deep.” Because we expose the students to such a wealth of topics, retention becomes even more important.

Cognitive scientists and educators have a very simple answer to the retention problem – repeat, repeat, repeat. Willingham argues that repetition leads to what he calls deep knowledge, and that deep knowledge is necessary for critical thinking: “When one is very familiar with a problem’s deep-structure, knowledge about how to solve it transfers well. That familiarity can come from long-term, repeated experience with one problem, *or with various manifestations of one type of problem* (i.e., many problems that have different surface structures, but the same deep structure). After repeated exposure to either or both, the subject simply perceives the deep structure as part of the problem description.”

In Willingham’s parlance, surface structure refers, for example, to the word-description of the problem, while deep structure refers to the mathematical concept(s) needed to solve it. Problems may appear different to students because one is about balloons ascending or descending and another is about two people driving in different directions, but both deal with solving a pair of linear equations.

Willingham makes the point that practice by itself is not sufficient. “The unexpected finding from cognitive science is that practice *does not* make perfect. Practice until you are perfect and you will be perfect only briefly. What’s necessary is sustained practice. By sustained practice I mean regular, ongoing review or use of the target material (e.g., regularly using new calculating skills to solve increasingly more complex math problems, reflecting on recently-learned historical material as one studies a subsequent history unit, taking regular quizzes or tests that draw on material learned earlier in the year). This kind of practice *past* the point of mastery is necessary to meet any of these three important goals of instruction: acquiring facts and knowledge, learning skills, or becoming an expert.”

While there is general agreement that repetition is necessary both for retention and critical thinking, how should we structure this repetition? In a series of papers, Pashler and his collaborators measured how much students retained when they studied or were tested on material, then studied or were tested again after some gap (the interstudy interval), and finally took a test after a further delay (the retention interval). Schematically, they analyzed the following scenario: initial study → interstudy interval → second study/test → retention interval → final test.

Their result indicates that “optimal memory occur[s] when spacing is some modest fraction of the final retention interval (perhaps about 10%–20%).”

If we accept their results, and we teach a topic for the first time in, say, November (random variables?) and want that knowledge retained by May – six months away – then the second time we study/test this concept should be in December/January. More generally, their results indicate that as the retention interval grows, the optimal interstudy interval grows with it. Pashler and Rohrer maintain that “the data show that, if a given amount of study time is distributed or spaced across multiple sessions rather than massed into a single session, performance on a delayed final test is improved – a finding known as the *spacing* effect… [y]et few educators have heeded this advice, as evidenced, for instance, by a glance at students’ textbooks.”
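As a back-of-the-envelope sketch, the 10%–20% rule of thumb can be turned into a date calculation. The helper below (`review_window` is my own illustrative name, not anything from the papers) takes the first-study date and the final-test date and returns the suggested review window:

```python
from datetime import date, timedelta

def review_window(first_study: date, final_test: date,
                  low: float = 0.10, high: float = 0.20):
    """Suggest a review window at 10%-20% of the retention interval,
    per the rule of thumb quoted above (illustrative only)."""
    retention_days = (final_test - first_study).days
    return (first_study + timedelta(days=round(low * retention_days)),
            first_study + timedelta(days=round(high * retention_days)))

# Teach a topic November 1, final (AP) exam May 1 -- about 180 days out:
start, end = review_window(date(2011, 11, 1), date(2012, 5, 1))
print(start, end)  # a window roughly three to five weeks after first study
```

Of course no one schedules reviews to the day; the point is only that the math puts the first revisit weeks, not months, after the initial lesson.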

SBG offers a unique opportunity to put this advice into practice. Unlike other methods, such as Accelerated Math, in SBG teachers have control over when and what to test. We can control the spacing (the interstudy interval) as well as the frequency with which we go back and assess a topic. Since SBG allows students to retest a learning objective, that should provide a balm for the moans and groans that follow a test with problems that “we learned at the beginning of the term.” Random variables, systems of equations, and other topics are no less important in June than they are in November.

Another great jumping-off point for thinking about assessment can be found on the web. Dan Kennedy – an instructor at the Baylor School and a former AP Coordinator for Calculus – has a terrific and thoughtful essay called ‘Assessing True Academic Success.’ I urge you to read it if you have not yet.

The reteaching, or rediscussing, aspect is one that I have argued for, for years, with colleagues – with little success. Far too many math teachers want to simply end a chapter with a test, then move on as if that chapter never happened. SBG can help prevent this mindset, but I think this resistance is one of the reasons why it has not yet gained a terribly strong foothold.

Thanks Jim – I’ll try to get Kennedy’s essay. I have found that AP Stats students are more receptive to reteaching – they are more mature, and there is a potential pay-off for them in doing better on the AP Exam.

I will also be teaching Algebra 2 this year – I am curious to see how SBG and reteaching will play there. In our school Algebra 2 is a freshman course, so it will be interesting to see the difference.

Thanks again for the comment.