
SBG – Assessments Based on Cognitive Principles

1. SBG assessment and reading

Whether this is a general phenomenon or one peculiar to our school, I don’t know, but my experience is that students do not like to read. At the lower levels, this means students have trouble with (and sometimes do not even attempt) word problems. At the higher levels, the disinclination to read results in relying exclusively on class notes and on examples worked out in class.

Because I agree wholeheartedly with Willingham’s idea that one of the best ways to accumulate factual knowledge is by reading (see previous post), I plan to build a reading component in the assessment/reassessment process.

As part of the homework there will be assignments such as “Summarize the main ideas in section 2.3”.  (In AP Statistics, asking for written summaries of textual material has the added benefit of preparing students for the free response part of the exam.)

Assessment can then take the form of: “In this section two measures of spread of distributions are discussed. For spread around the median, the textbook suggests that you should use the ______ and for spread around the mean you should use the ________”.  I think it is important to include wording such as “the textbook suggests” to signal to the students that they should have opened the book. In addition, to reinforce the importance of the text, reading the appropriate material will be one of the prerequisites for reassessment.

What are the potential downsides?  (a) Too much to read and/or (b) students will copy from the textbook or “Google it”.  I don’t think either of these is too objectionable. For summaries, it is easy to skim the paragraph and look for key words. And one can detect copying rather easily, especially if it circulates on Facebook and multiple cherubs access it before they hand it in.

2.  SBG assessment and connecting factual knowledge

Willingham talks about chunking – the notion that factual knowledge is easier to remember if facts are connected in some way, if they form a single idea.  He argues that lists or drill have only limited use, may produce boredom and, when used exclusively, will preclude the joy of discovery.

For example, in AP Statistics we may depict distributions as box plots, histograms, stem plots etc. Each one of these depictions has its own vocabulary and definitions – IQR, bins, stems, etc. But the main idea is not the depiction, but to characterize the center, shape and spread of the distribution.

For example, some questions can be definition oriented – such as “What is the IQR for the distribution depicted in the box plot below?”

We can achieve a deeper assessment by using the following question instead: “Given the histogram below, if this distribution were plotted as a box plot, what would be the IQR?” (No graphing calculator)

I would argue that the second question is better at making the student understand not only what the IQR is and how to compute it, but also that the various graphs are just different ways of depicting the same distribution.
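To see why the two questions assess the same underlying quantity, note that the IQR is a property of the data, not of the plot. Here is a minimal sketch in Python; the data set is hypothetical, and quartile conventions vary between textbooks and calculators (this uses linear interpolation, which is one common choice):

```python
# Sketch: the IQR can be computed from the raw data behind a histogram
# just as from a box plot, because it belongs to the distribution itself.
# The data set below is hypothetical, purely for illustration.
data = [2, 4, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15]

def quartile(sorted_xs, p):
    """Linear-interpolation quantile (one of several common conventions)."""
    k = (len(sorted_xs) - 1) * p
    lo, hi = int(k), min(int(k) + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (sorted_xs[hi] - sorted_xs[lo]) * (k - lo)

xs = sorted(data)
q1, q3 = quartile(xs, 0.25), quartile(xs, 0.75)
iqr = q3 - q1   # for this data set: Q1 = 4.75, Q3 = 10.5, IQR = 5.75
```

Whichever plot the data came from, the same quartiles fall out; a TI-84 or a different textbook convention may give slightly different quartile values for small data sets.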

3. SBG Assessment and Critical Thinking

Frankly, I find this idea the most difficult to implement. Willingham argues that students must have the necessary background knowledge before they can carry out critical thinking tasks in assignments. However, in AP Stats I always find myself under time pressure – not only is the exam a month earlier than the end of the school year, but I also want to leave three weeks for cumulative review. Where do I find the time for students to acquire sufficient background knowledge before asking critical thinking questions?

Perhaps one way is to make all the quizzes cumulative, and when we revisit a topic, say the third time, then the questions can become critical thinking questions. Any other ideas?

SBG Assessment – Cognitive Theoretical Underpinnings

We spent a very restful, week long vacation in Cambria, CA, during which I did not think about school at all. The day after we got back, I opened the Sunday paper and in the comic section I saw the following Doonesbury cartoon (this is a partial)

Zipper’s question, “What does it mean to be a student?”, led me to think “What do I want my students to learn?” and, more specifically (as far as this post is concerned), “What and how should I (re)assess learning?”  When Google or WolframAlpha can find answers in tenths of a second, what should I assess and how should I do it in SBG?

Coincidentally, I am re-reading one of my favorite books on education: “Why Don’t Students Like School?” by cognitive psychologist Daniel T. Willingham. Willingham’s interest is in applying cognitive psychology to (mainly) K-12 education and he makes a point that “Factual knowledge must precede skill…facts without the skills to use them are of little value, [but it] is equally true that one cannot deploy thinking skills effectively without factual knowledge.”

Willingham transforms the question “Which knowledge should students be taught?” to “What knowledge yields the greatest cognitive benefit?” and for core subject matter courses (e.g. mathematics) he has a number of answers.

(1)   Teach the basic concepts and only then go on to critical thinking problems.

(2)   Factual knowledge must be meaningful – facts must be connected to each other.

(3)   One of the best ways to accumulate factual knowledge is reading.

These ideas open up a number of strategies for assessment and reassessment. I will discuss them in the next post.

Algebra II – Raw Materials

I now know that in addition to AP Statistics, my other preps will all be Algebra II. I intend to use SBG in my Algebra II classes, just as in AP Stats. The basic approach will be the same: define learning objectives (LOs) and then assess and reassess them. If “nobody” seems to get an LO, then I will be reteaching it. Fine and dandy as far as it goes – hooray for the LOs!  But what should be my TOs (Teacher Objectives)? Beyond the syllabus, what should the Algebra II experience be for the students?

Our school has three de facto tracks for frosh math. Students who did not do well in Algebra I or its equivalent in middle school take Algebra I again in their first year of high school. Students who did pass Algebra I in middle school take Algebra II as freshmen.  Finally, the top middle school performers in math have the opportunity to go into what we call Pre-AP Algebra II.  Originally conceived as an advanced course with more challenging problems, Pre-AP Algebra II has just become a place for the better students – not a course whose syllabus is different from the regular one. (For us, Geometry is a sophomore course in the normal sequence.)

I will be teaching the middle track, the regular freshmen Algebra II students. Who are these students?  What are the “raw materials” I will have to work with?

In December 2009 I did an analysis of the Algebra II students’ performance. Here is the mid-term grade distribution for the Algebra II freshmen.

This data is across all teachers who had Algebra II at that time – there were no great differences among individual teachers as far as grade distributions.

Another way to assess freshmen students is to look at a longitudinal study of math performance.

My AP Statistics students do end-of-year projects that usually look at student performance in our district.  Below is the data gathered in one such project. This group looked at one cohort of students (6th graders in 2002) and how their state test scores in math changed over the years.  It is clear that a teacher of freshmen will see a drop in performance compared to middle school. This trend is mirrored across the state and, from the data we have, it appears to hold not only for state scores but for class grades as well.

So – if past experience repeats – I have to look forward to students who are not especially well prepared (mathematically) and who for cognitive and social reasons are likely to have a drop in performance compared to their middle school scores.

How should one address these issues? How would they influence my choice of TOs? To be continued…

SBG in AP Statistics (3) – Research Paper

The latest issue of the Statistics Education Research Journal (May 2011) has an article titled “The Impact of a Proficiency-Based Assessment and Reassessment of Learning Outcomes System on Student Achievement and Attitudes” by Michael A. Posner (Villanova University). To quote from the abstract: “This research compares a student-centered, proficiency-based assessment and reassessment of learning outcomes (PARLO) system to traditional assessment in a college-level introductory statistics class.”

Sounds like SBG – does it not?

The author’s justification for such a system is that “[e]ven those who dispute the need for alarm about grade inflation agree that students should be evaluated on standards rather than some normal curve reflecting the performance of those who happen, by choice or coincidence, to be sitting around them that semester.”

The components of the PARLO system are:

  • Define learning outcomes
  • Assess the outcomes using proficiency-based scoring (rather than numeric grading)
  • Provide opportunity for reassessment

Sounds like SBG – does it not?

Regarding reassessment, the author created two variables: “proportion resubmitted” (proportion of times a student resubmitted the assignment when they did not achieve mastery the first time) and “delayed proficiency” (the number of times the student received a proficient or better score only on the second attempt divided by the number of times the student received a proficient or better score).

The research primarily compared the results of the students enrolled in the PARLO group (n = 30) to those in a control group (n = 31). The results referred to the scores of the students on a CAOS test (Comprehensive Assessment of Outcomes in Statistics) and on a common final exam.

Among the conclusions were these:

  • Students had a more positive attitude towards statistics in the PARLO class than in the Control one
  • Students who achieved proficiency on the second attempts showed equivalent performance on the final exam as those who achieved proficiency in their first attempt

Interestingly, the author finds “no differences between experimental groups … on the CAOS test or on the final exam.”  This is followed by a lengthy discussion arguing that the results of such a comparison are sensitive to how the proficiency system is set up, introduced, provides feedback, and is graded. In other words, implementation is key.

Sounds like SBG – does it not?

SBG in AP Statistics (2) – Letter Grades

Our school has about six grade-reporting periods – progress reports and semester grades. We use a piece of software (School Loop) to enter test, quiz and homework grades. IMHO, the software is not sufficiently flexible to handle variations in grading schemes. For example, last year I calculated letter grades based on a normal distribution and a curve that maintained the same z-scores for each student. I had to do this in Excel since School Loop could not handle these calculations. However, as teachers, we have a legal obligation to use School Loop.  (Interesting challenge: define the dividing line between “we are all on the same page” and “assembly-line teaching”.)
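For concreteness, here is a minimal sketch of what a same-z-score curve can look like. This is my reconstruction of the idea, not the exact Excel calculation; the raw scores and grading targets are hypothetical:

```python
# Sketch of a "same z-score" curve: each raw score's z-score is mapped
# onto a chosen target mean and SD, so every student keeps the same
# relative standing. Raw scores and targets below are hypothetical.
from statistics import mean, stdev

raw = [55, 62, 70, 78, 85, 91]     # hypothetical raw test scores
mu, sigma = mean(raw), stdev(raw)

target_mu, target_sigma = 80, 8    # illustrative grading targets

# z = (x - mu) / sigma is the same before and after the curve
curved = [target_mu + target_sigma * (x - mu) / sigma for x in raw]
```

The curved scores then have mean 80 and standard deviation 8 while preserving each student’s rank and relative distance from the class mean.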

This sets up a conflict between the philosophy of SBG, which is based on learning and assessing separate objectives, and the “one-grade-tells-all” scores that teachers must enter for the progress reports and the semester grades.

In addition, as an AP teacher, I feel that I need to give my kids a taste of the AP exam as often and as early as I can. Traditionally, this has meant chapter tests which were structured as 10 multiple choice questions, two free responses and something akin to the investigative task – all in about 60 minutes.

Therefore, the challenge is to extract from the SBG approach some sort of single measure that can be put on a report card or that summarizes a unit test. A straight average would be the best choice if the test consisted of all different learning objectives (LOs).  However, not all LOs are created equal. For example, I would think that knowing how to use z-scores should carry more weight than labeling a distribution as right- or left-skewed. A further complication is that AP test questions are not necessarily limited to one LO per question.

Right now, my inclination is to divorce SBG assessments from unit exams. The unit exams can be 40% of the overall grade and the SBG-based work 60%. This has the immediate appeal of making it easy to calculate a student’s grade for the report card. It signals to the student the importance of achieving mastery in all the LOs, but also weights the tests that mimic the AP exam heavily enough to signal their importance.

Traditionally I have not checked the students’ homework in AP Statistics. Partially this was because I did not have time to go thoroughly through all the problems assigned. In addition, I wanted to make the students take responsibility for their education. SBG offers a way of looking at homework as a separate, specific LO so the students have an additional incentive to do it.

General Helmuth von Moltke famously said “No [battle] plan survives contact with the enemy”. We can plan our SBG experience all we want, but only contact with the students in September will show how good our planning was.

SBG in AP Statistics – Beginnings…

One of these days I am going to volunteer to be part of the Master Schedule team, because all the convolutions and changes taking place are amazing (and puzzling) to me. We still don’t know what we will be teaching next year, and school will be over in a week. There seems to be no accounting for the expertise and experience of the teachers; the idea seems to be to shove students into a schedule and let the teachers adjust afterwards. When?  In August?  Perhaps the underlying thought is that all teachers should teach ANY math course with the same methods – in other words, that teachers are interchangeable. It seems to be the industrial, assembly-line model of education, with all the silly testing and teacher evaluations based on exams in which the exam takers have no stake. But that rant is for another day.

Thankfully, the latest master schedule iteration still has me teaching AP Statistics and I want to share some of my thoughts on what I want to do with this course.

There are two new things that I want to introduce in the fall. The first is Standards-Based Grading (SBG) and the second is melding the traditional curriculum with randomization techniques. In this post I want to share my thoughts on SBG.

From my reading, it seems there are two main pieces to SBG: first, assessing student progress through teacher-defined learning objectives (also known as “standards” or “learning targets”) and second, providing students with a structure to reassess their learning. As a consequence of reassessing, SBG also requires that we rethink our grading system.

In defining the learning objectives (LOs), I rely on my textbook which I like very much. I am using “Statistics in Action” (SIA) by Watkins, Scheaffer and Cobb (the latter being an early and articulate champion of randomization methods in the teaching of statistics). The Instructor’s Guide for SIA has “goals” for each chapter and I take these to be my LOs. For example here are some LOs from the chapter on Exploring Distributions:

LO 1.14: Students will be able to interpret percentiles and read cumulative relative frequency plots

LO 1.15: Students will find areas under the standard normal curve

LO 1.16: Students will learn to convert values to z-scores (standardize)

LO 1.17: Students will learn to convert z-scores to values in the original units (unstandardize)

LO 1.18: Students will be able to use a table of normal distributions to estimate proportions and probabilities of events that come from a population that is normally distributed
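To make LOs 1.15–1.18 concrete, here is a small sketch in Python that standardizes a value, finds the area under the standard normal curve (using `math.erf` in place of a printed table), and unstandardizes. The population mean and SD are hypothetical:

```python
# Sketch of LOs 1.15-1.18: standardize, find a normal proportion,
# unstandardize. The mean and SD below are hypothetical.
import math

def normal_cdf(z):
    """P(Z <= z) for a standard normal Z, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 100, 15        # hypothetical population mean and SD
x = 115

z = (x - mu) / sigma       # LO 1.16: standardize -> z = 1.0
p_below = normal_cdf(z)    # LO 1.15/1.18: area under the curve, ~0.8413
x_back = mu + z * sigma    # LO 1.17: unstandardize, back to 115
```

The same three steps are what the table-based exercises practice by hand; the code just replaces the table lookup with the exact CDF.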

As for reassessment, I came up with the following nine points, taking into account the experience of other teachers as described in their blogs.

“Reassessment is voluntary. A student can reassess for specific learning objectives (LOs) if the following guidelines are met:

  1. Reassessment is for at most two LOs at a time.
  2. Students must show proof of remediation before a reassessment. This can take the form of completed HW, extra problems or other criteria established by the instructor.
  3. Reassessments will only take place two days a week, and only after school. We will establish the days and times at the beginning of the term.
  4. Students must make reassessment appointments 24 hours in advance and they must let the instructor know which LOs they plan to reassess.
  5. Students should reassess LOs in a timely manner, typically within two weeks of the original quiz return.
  6. Students should not schedule tutoring and reassessment in the same session.
  7. Students cannot take more than three (3) reassessments for each LO.
  8. Students cannot schedule reassessments in the week prior to the end of each grading period.
  9. Reassessment scores will replace the last score in the LO.”

In formulating these guidelines I tried to strike a balance between providing reassessment opportunities for students and making these reassessments a non-trivial process. I can see that reassessment – even when limited to three attempts per LO – may mean making up lots of quiz problems, and over the summer I want to see how I can solve this problem.

As far as grading is concerned, I am thinking seriously of using ActiveGrade. Their newest iteration of the software will come out June 15 and I will make a decision by then. The grading scheme I am inclined to use is that the latest assessment grade makes up 60% of the current grade and the previous grade is 40%.
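A quick sketch of how this “latest counts 60%” scheme plays out over successive assessments (the scores are hypothetical, and this is my reading of the scheme, not ActiveGrade’s documented formula):

```python
# Sketch of the "latest grade counts 60%" scheme: after each new
# assessment, the running grade is 40% old grade + 60% new score.
# The scores below are hypothetical.
def update_grade(current, new_score):
    return 0.4 * current + 0.6 * new_score

scores = [70, 85, 95]      # successive assessments of one LO
grade = scores[0]
for s in scores[1:]:
    grade = update_grade(grade, s)
# grade is now 88.6: recent improvement dominates, but history still counts
```

The appeal of a decaying average like this is that it rewards growth on reassessment without letting a single early stumble (or a single lucky quiz) define the grade.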

SBG holds a lot of promises, but it also requires a lot of preparation. Who said that teachers just “take off” during the summer?

What and Why

This is a blog about teaching high school math in California in 2011. My goal is to share what new things I will try in the classroom this year and how successful they turn out to be.

The symbol on the header is of the Roman god Janus – the god of beginnings and transitions. Since teaching is always about looking back and looking forward, the emblem seemed an appropriate choice.

On Wednesday I will find out what I will be teaching next year, but it looks like AP Statistics – my favorite course – has enough students signed up for it to make it a GO!    YEAAAAAHH!

Over the summer I will be looking at how to implement Standards-Based Grading (SBG) and (maybe!) flipping the classroom.

For right now, I still have to give the kids their final exams and their grades, and also start packing for the move into the new building.