- Writing Learning Standards
- Constructing Proficiency Scales
- Designing Assessment Items
- Determining Grades

## Designing Assessment Items

There is a sentiment in BC that using tests and quizzes is an outdated assessment practice. However, these are straightforward tools for finding out what students know and can do. So long as students face learning standards like *solve systems of linear equations algebraically*, test items like “Solve: ; ” *are* authentic. Rather than eliminate unit tests, teachers can look at them through different lenses; a points-gathering perspective shifts to a data-gathering one. Evidence of student learning can take multiple forms (i.e., products, observations, conversations). In this post I will focus on products, specifically unit tests, in part to push back against the sentiment above.

In the previous post, I constructed proficiency scales that describe what students know at each level. These instruments direct the next standards-based assessment practice: designing assessment items. Items can (1) target what students know at each proficiency level or (2) allow for responses at all levels.

## Target What Students Know at Each Level

Recall that I attached specific questions to my descriptors to help students understand the proficiency scales:

This helps teachers too. Teachers can populate a test with similar questions that reflect an appropriate amount of complexity at each level of a proficiency scale. Keep in mind that these instruments are intended to be descriptive, not prescriptive. Sticking too close to the sample questions can emphasize answer-getting over sense-making. Questions that look different but require the same depth of knowledge are "fair game." For example:

Prompts like “How do you know?” and “Convince me!” also prioritize conceptual understanding.

## Allow For Responses at All Levels

Students can demonstrate what they know through questions that allow for responses at all levels. For example, a single open question such as "How are 23 × 14 and (2*x* + 3)(*x* + 4) the same? How are they different?" can elicit evidence of student learning from Emerging to Extending.
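One connection a strong response might draw: both products expand into the same four partial products via the distributive property. A minimal sketch of that parallel (my own illustration, not part of the post; the helper name is hypothetical):

```python
def partial_products(a1, a0, b1, b0, base):
    """Expand (a1*base + a0)(b1*base + b0) into its four partial products."""
    return [a1 * b1 * base**2, a1 * b0 * base, a0 * b1 * base, a0 * b0]

# 23 x 14 = (2*10 + 3)(1*10 + 4): the "base" is 10.
arithmetic = partial_products(2, 3, 1, 4, base=10)
print(arithmetic, sum(arithmetic))  # [200, 80, 30, 12] 322

# (2x + 3)(x + 4) has the same four-part structure, but the base x stays
# symbolic, so the partial products cannot collapse into a single number.
# Evaluating at x = 10 recovers the arithmetic case exactly.
```

The difference a student might notice is the last comment: the arithmetic partial products combine, while the algebraic ones remain separate terms.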

Nat Banting's Menu Math task from the first post in this series is an example of a non-traditional assessment item that provides both access (i.e., a "low threshold" of building a different quadratic function to satisfy each constraint) and challenge (i.e., a "high ceiling" of using as few quadratic functions as possible). A student who knows that *two negative x-intercepts* pairs nicely with *vertex in quadrant II* but not with *never enters quadrant III* demonstrates a sophisticated knowledge of quadratic functions. These items blur the line between assessment and instruction.
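To see why that pairing claim holds, here is a numeric sketch (my own check, not part of the task; the function names are hypothetical). A downward-opening parabola with intercepts at −3 and −1 puts its vertex in quadrant II, yet any parabola with two negative x-intercepts must dip into quadrant III somewhere:

```python
def quadratic(a, r1, r2):
    """Return f(x) = a(x - r1)(x - r2) as a callable."""
    return lambda x: a * (x - r1) * (x - r2)

def enters_quadrant_iii(f, xs):
    """True if f(x) < 0 for some sampled x < 0."""
    return any(x < 0 and f(x) < 0 for x in xs)

xs = [x / 10 for x in range(-100, 101)]  # sample points on [-10, 10]

# Opens downward, x-intercepts at -3 and -1: vertex (-2, 1) sits in QII...
f = quadratic(-1, -3, -1)
print(f(-2))                       # vertex height 1 > 0
print(enters_quadrant_iii(f, xs))  # ...but f still enters QIII (True)

# Opening upward instead traps the graph below the axis between the two
# negative roots, so it enters QIII there: the constraints are incompatible.
g = quadratic(1, -3, -1)
print(enters_quadrant_iii(g, xs))  # True
```

Either orientation of the parabola forces a visit to quadrant III, which is exactly the incompatibility the task rewards students for noticing.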

Note that both of these items combine content (“operations with fractions” and “quadratic functions”) and competencies (i.e., “connect mathematical concepts to one another” and “analyze and apply mathematical ideas using reason”). Assessing *content* is my focus in this series. Still, I wanted to point out the potential to assess competencies.

## Unit Tests

Teachers can arrange these items in two ways: (1) by proficiency level then learning outcome or (2) by learning outcome then proficiency level. A side-by-side comparison of the two arrangements:

Teachers prefer the second layout, the one that places the learning outcome above the proficiency levels. I do too. Evidence of learning relevant to a specific standard sits right there on a single page; no page flipping is required to reach a decision. An open question can come before or after this set. The proficiency-level-above-learning-outcome layout works only if students demonstrate the same proficiency level across different learning outcomes. They don't. And shouldn't.

There’s room to include a separate page to assess competency learning standards. Take a moment to think about the following task:

Initially, I designed this task to elicit Extending-level knowledge of *solve systems of linear equations algebraically*. In order to successfully "go backwards," a student must recognize what happened: equivalent equations having opposite terms were made. The *p*-terms could have been built from 5*p* and 2*p*. This gives for ① and for ②. (I'm second-guessing that this targets only Extending; for ① and for ② works too.) This task also elicits evidence of students' capacities to reason and to communicate, two of the curricular competencies.
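The "opposite terms" move can be sketched in code. Since the task's actual equations aren't reproduced above, the system below is hypothetical; it only shares the 5*p* and 2*p* coefficients, and the helper name is my own:

```python
from fractions import Fraction

# Hypothetical system: 5p + 3q = 9 (eq. 1) and 2p - q = 8 (eq. 2),
# each stored as (p-coefficient, q-coefficient, right-hand side).
eq1 = (5, 3, 9)
eq2 = (2, -1, 8)

def eliminate_first(eqa, eqb):
    """Scale the equations so their first terms are opposites, then add."""
    a1, b1, c1 = eqa
    a2, b2, c2 = eqb
    # Multiply eq. 1 by 2 and eq. 2 by -5: the p-terms become 10p and -10p.
    scaled1 = (a1 * a2, b1 * a2, c1 * a2)
    scaled2 = (-a1 * a2, -a1 * b2, -a1 * c2)
    return tuple(s + t for s, t in zip(scaled1, scaled2))

zero_p, q_coeff, rhs = eliminate_first(eq1, eq2)   # (0, 11, -22)
q = Fraction(rhs, q_coeff)                         # q = -2
p = Fraction(eq1[2] - eq1[1] * q, eq1[0])          # back-substitute: p = 3
print(p, q)
```

"Going backwards" asks students to run this in reverse: given the scaled, opposite-term equations, reconstruct a pair of original equations that could have produced them.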

## Teacher Reflections

Many of the teachers I work with experimented with providing choice. Students self-assessed their level of understanding and decided what evidence to provide. Most of these teachers asked students to demonstrate two proficiency levels (e.g., the most recent level achieved and one higher). Blank responses no longer stood for lost points.

Teachers analyzed their past unit tests. They discovered that progressions from Emerging to Proficient (and sometimes Extending) were already in place. Standards-based assessment just made them visible to students. Some shortened their summative assessments (e.g., Why ask a dozen Developing-level solve-by-elimination questions when two will do?).

The shift to grading based on data, not points, empowered teachers to consider multiple forms (i.e., conversations, observations, products) and sources (e.g., individual interviews, collaborative problem solving, performance tasks) of evidence.

In my next post, I'll describe the last practice: *Determining Grades* (and Percentages). Again, a sneak peek:
