Sometime last year, this question, or some variation on the theme, leapfrogged “Where can we find good problems?” as the question most frequently asked of me. Below, my answers, as of today.

# “Formatively.”

“You clarify and share learning intentions and success criteria. You implement rich tasks that elicit evidence of student thinking. You pose questions that cause thinking.”

I presented teachers with four sample student responses to the following question:

A store sells a box of nine 200 g bags of chips for $12. How much should the store charge for twenty-four 200 g bags?

I asked teachers to consider (1) Where is the learner going? (2) Where is the learner right now? and (3) How can the learner get to where she needs to go?

This sparked some interesting conversations. The students in the top left and top right know that a unit price is an equivalent rate where one term–number of bags in the top left, dollars in the top right–is one. The student in the bottom left also knows that proportion problems can be solved by looking for a scale factor–albeit an inaccurate one–between ratios. What’s going on with the student in the bottom right? What’s the learning goal in terms of *content*? What’s the learning goal in terms of *curricular competency*? This activity was preceded by a conversation about the KDU model, so teachers were thinking “use multiple strategies” and “communicate mathematical thinking.” Is it fair to consider “use multiple strategies” using this–or any single–task as evidence? (A good time to bring up triangulation–products, observations, conversations with students.) What does “good” communication look like in mathematics? Do the bottom two responses need words? Would a ratio table help answer what’s going on in the bottom right?
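For readers who want to check the arithmetic behind the two correct strategies, here’s a minimal sketch. The variable names are mine, not the students’; both approaches land on the same answer, $32.

```python
# The chips problem: nine 200 g bags cost $12; what should 24 bags cost?
box_price = 12.00   # dollars for a box of nine bags
box_bags = 9
target_bags = 24

# Strategy 1 (top left / top right): unit price.
# Find the cost of one bag, then scale up to 24 bags.
unit_price = box_price / box_bags            # ≈ $1.33 per bag
price_by_unit = unit_price * target_bags

# Strategy 2 (bottom left, done accurately): scale factor between ratios.
# 24 bags is 24/9 (= 8/3) times the original quantity, so scale the price.
scale_factor = target_bags / box_bags
price_by_scale = box_price * scale_factor

print(f"unit price:   ${price_by_unit:.2f}")
print(f"scale factor: ${price_by_scale:.2f}")
```

Both strategies print $32.00, which is one way to see that they are the same proportional relationship read in two directions.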

While this was a worthwhile exercise, this answer was “not yet meeting expectations.” One reason for this is that *assess* is often a euphemism for *evaluate*. Or *grade*. Or *report*. As a student teacher, my school associate once asked me how I planned to assess. I began to tell him about upcoming quizzes. “That’s all well and good, but that’s *evaluation*. Minute-by-minute, day-by-day, how will you know they know?” This has been helpful for me as I’ve navigated through assessment by preposition (*assessment of*, *for*, or *as learning*) and “Is this formative or summative?”

# “Assess *what*?”

Answering with another question is probably unsatisfactory, but, to me, *what* is a much more important consideration than *how*.

The Ministry of Education released the following in the summer:

> At the end of the school year or semester, Boards must provide a written summative report to parents that address the student’s progress in relation to the *learning standards* of the curriculum in all areas of learning as set out in the Required Areas of Study Ministerial Order.

(Emphasis added.)

Learning standards in BC’s curriculum are made up of curricular competencies (“what students are expected to *do*”) *and* content (“what students are expected to *know*”). (#MTBoS, think practice and content CCSS-M standards.) As late as June, some teachers were still wondering if there would be a requirement to assess–or evaluate? or report on?–the curricular competencies. To me, the MoE’s choice of “learning standards” makes this clear.

At the same time, there’s another message out there: *learning standards* and *curricular competencies* are synonymous. The gist of this idea is that content is interchangeable. And maybe that’s more true in other areas of learning. (I still take issue with “If you enjoy teaching ancient Egypt and ancient Egypt has moved, then you can still teach ancient Egypt” but social studies isn’t the hill I’ll die on.) And I’m all in favour of a greater emphasis on students *doing* mathematics. Helping teachers make this happen is my work–it’s what I (try to) do. Still, I’m baffled.

Of course, nobody argues that process and content exist without one another. In the classroom, “I can use multiple strategies to solve problems involving ratios and rates” or “I can communicate my thinking when solving proportional problems” work as learning intentions. I can design learning experiences around these. My question is about evaluating: together or separately? Consider the student in the bottom right above. If she “fully meets expectations,” or is “proficient,” or is a “Jedi Knight,” it’s easy–the learning intentions above still work. But if she, as most agreed, isn’t, then why is that? My take is that she is proficient with respect to content (proportional reasoning)–or, at least, here’s one piece of supporting evidence–but not quite there yet with respect to competency (communicate thinking). What are some implications surrounding reassessment? And is it possible to fully meet with respect to competency without also possessing a deep level of content knowledge?

I’m beginning to enter the Land of the Gradebook, which, nine times out of ten, is at the heart of teachers’ “How do we assess it?” Standards-based grading, depth of knowledge, learning maps, rubrics, portfolios, etc. will be part of part two.