There are many tools at our disposal, and it is not my intention to describe them all in detail. Instead, in this post, we will check out a few specific tools I have found useful and the ways I use them. I hope other educators will think of ways of taking these tools and adapting them for their learning environments, and I am sure all of us can imagine other ways of using them.
The Informal and Formal Assessment Continuum
I think of these two types of assessment as a continuum with informal on one end and formal on the other. Watching students work on a project in a computer lab, and seeing how they work and where they struggle, is on the informal end of the spectrum. Giving a test is over on the formal end of the spectrum. The completion grades I record for activities done in and out of class fall somewhere in the middle of the spectrum.
It is important to work the entire spectrum. I often feel that I get a good sense of where students are from frequent informal assessments, but I am occasionally surprised when students fail to demonstrate proficiency on a formal assessment, which means I missed something, or something I had not anticipated is going on. In these cases, it is important to backtrack and find out where the disconnect is, so that it can be addressed.
My time on assessment is heavily weighted towards the informal assessment end of the spectrum, with far less time on formal assessment. Students find formal assessments stressful, and the more formal the assessment, the more stressful it is. I always do everything I can to ratchet down stress around assessment. High stress situations are almost always counter-productive, and rarely give me a true representation of a student’s abilities. Furthermore, students who are under a lot of stress are more likely to cheat or look for alternative methods of achieving the mark they want, rather than take a hard look at an honest assessment of their skill level.
When I do use formal assessments, I keep them short and to a specific point. There is no reason to subject students to a two-hour test. I usually want to see if students can complete some specific task, or demonstrate understanding of a particular concept or skill. Find ways to hone your formal assessments down to the bare essentials. What do you really want to know about students’ abilities at this point in the curriculum?
Rubrics are a Communication Tool
There has been a lot of talk about rubrics in recent years. They come in many different shapes, sizes and flavors, but they all come down to one thing. Rubrics are primarily a tool to communicate expectations to students. We use them with students so that they can be clear about what needs to be included in their work, and what quality looks like. If my rubric isn’t doing that, or if students find it confusing, or don’t refer to it during the process of doing the work, then it is not an effective tool.
There are two basic types of rubrics that I use. One is a quantitative assessment tool, while the other is more qualitative. Both have their advantages and disadvantages.
A quantitative rubric checks to see if certain features of the student work are complete. Imagine you assign students to write a five-paragraph essay. If you are using a quantitative rubric, you might check to see if the student included a thesis statement. Did they include three supporting paragraphs? Do these supporting paragraphs include examples to back up the major point? Is there a conclusion at the end? You can use such a rubric to quickly and fairly evaluate each student’s essay and tally up the points to see the score each student received.
The strength in this type of rubric is that it is simple, clear and easy to use, as well as consisting of observable concrete elements that you can point out to students. The challenge for this type of rubric is that it does not really do much to address the quality of the work. The thesis may be present, but not very well written.
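The tally itself is mechanical, which is the point. A minimal sketch of how a checklist-style quantitative rubric might be scored is below; the criteria names and point values are hypothetical, not from any particular rubric.

```python
# A checklist-style (quantitative) rubric for the five-paragraph essay
# example. Criteria and point values are hypothetical.
CRITERIA = {
    "thesis statement": 1,
    "three supporting paragraphs": 3,
    "examples in each supporting paragraph": 3,
    "conclusion": 1,
}

def score_essay(checked_off):
    """Tally the points for every criterion the grader checked off."""
    return sum(points for name, points in CRITERIA.items() if name in checked_off)

earned = score_essay({"thesis statement", "conclusion"})
print(f"{earned} / {sum(CRITERIA.values())} points")  # prints "2 / 8 points"
```

Because every criterion is a yes/no observation, two graders working from the same checklist will almost always land on the same score, which is exactly the fairness advantage described above.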
A qualitative rubric is set up on some sort of scale, frequently a five-point Likert scale, although I am partial to a three-point scale, as it is simpler. If we take the same five-paragraph essay example, a rubric might look at how good the different elements of the essay are. On a five-point scale, what are the features of a really good thesis statement? What are the features of a statement at the bottom of the scale, and how do we separate the ones in the middle?
When done well, these rubrics can be effective and useful, but they often take a lot of tweaking to get just right. They also take a lot of explaining to students. One challenge is that assessing quality almost always brings in some form of judgment. A student may say, “Hey, how come I got a ‘3’ here instead of a ‘4’? I thought I did that part really well.” Can you truly defend the score you gave without resorting to subjective adjectives? It is very hard to do.
Both types of rubrics are worth the effort to develop and use for your assignments, as long as you are sharing them with students, and students are actively using the rubrics to help assess themselves and their peers. They will never be perfect, and frequently the conversation you have with a student over a particular score is more valuable than the score itself, and provides a great teachable moment. This is because rubrics are a communication tool and are best when they facilitate these conversations.
I have seen some crazy rubrics over the years. Probably the least effective ones result from the creator of the rubric forgetting that the main purpose of the rubric is to communicate with students. I have seen rubrics that are nine pages long. Some rubrics put too much detail into the criteria, while others put too little or are vague and refer to qualities that are difficult, if not impossible to observe in the work.
Below is an example of criteria I pulled from a rubric that I think is particularly problematic…
Develop design concepts and relate these to historical and contemporary trends and social context by producing successful visual solutions to assigned problems. Model the interdependence of content and visual expression and evaluate and critique their ideas.
What? Even if I can untangle that and figure out what it means, how am I going to observe this in a piece of work, in a qualitative way? Even if I could figure out how to do that, how am I going to explain what this means to a student?
The worst rubrics I have seen are those that are created by committees, as they are often a battleground for committee members to fight for their agendas and show off how smart they are.
Qualitative rubrics are the toughest to get right. I frequently have to tweak them a lot before they work well for me and for my students. I treat them as works in progress, and primarily as tools for communicating with students about what makes for quality work. I ask students if they make sense and I look for points that lack clarity for students, and try to figure out what will make them clearer.
Critiques
In the world of the arts or design, critiques are frequently the go-to method of assessment. They generally fall closer to the formal end of the spectrum, and can often cause a lot of stress for students. The biggest challenge I find with the critique process is finding the balance between blunt honesty and encouraging feedback. Some people talk about the “sandwich” method, where you sneak in criticism between two compliments, but I don’t find that particularly helpful, and students see right through it. Most students prefer honesty, even if it hurts.
The easiest work to criticize is work that is strong, but could be a bit stronger. Poor work that a student clearly did at the last minute is not too difficult either. The hardest situation is when a student obviously put in significant effort, but the resulting quality is low.
Students who have poor taste or lack sophistication are the hardest ones to reach. These two issues are usually best addressed by having the students immerse themselves in quality work by others, but getting them to do so is often a challenge.
Frequently, if I am running a critique, I will give students credit for participating in it. I tell them upfront that participating means both sharing their own work and providing constructive comments on others’ work. Students get full, half or no credit for doing so.
Critiques can involve the use of a qualitative or quantitative rubric by both the instructor and the students. Just be sure to watch out for those students who are embarrassed and feel “called out” in the critique process. Frequently, instructors want to use a mistake that a student made as a teachable moment for other students, so that they can avoid the same error. This can be done well, but it can also backfire, leaving the student who made the mistake feeling picked on. Even more egregiously, I have seen instructors turn the mistake into a sort of running joke, ostensibly to make light of it, but that frequently leaves the student feeling humiliated.
Finally, when critiquing, I try to avoid art-directing students. Students will ask me if I think they should do this or that to the piece of work. I usually say, “There are probably many solutions. I don’t want to tell you the solutions I would try. I want you to find your own solutions. I am simply pointing out that there is a problem here.” Students usually respond well to this line of reasoning and appreciate that I have faith that they can solve the problem, once it has been identified. It is a tough line to hold, because I usually have 15 potential solutions to try, and the designer in me is dying to try them. But students will learn the most when they find solutions that work for themselves.
Here is a situation that frequently comes up that indicates the instructor is art-directing students: the student brings the revised work to the instructor, and the instructor tells the student that there are still problems with it. The student then responds with frustration, “You told me to make X bigger, I did that, and you are still unhappy with it.” This type of comment clearly shows that the student is trying to make the instructor happy, rather than solve a problem in the work that has been identified. I try to hold my tongue and not give away possible solutions. Furthermore, I am more likely to get a wider variety of work back from students if I don’t give out solutions. When all the work coming out of a particular class looks the same, it is a good bet that the instructor is art-directing the students.
Self-Assessment
I have found self-assessment to be a mixed bag for students. Certain types of students get a lot out of it, while others get little to nothing from it. This is not a reason to avoid using it. In fact, you can learn a lot about students by watching them assess themselves. It is simply not a reliable indicator of where their skills are, but it can be useful to see if their perception of their skills matches yours.
Some students are modest, and grade themselves low. Others are overly confident and grade themselves high. Many are self-conscious about the instructor watching them assess themselves, and that affects their ability to be honest in the process.
I frequently ask students to write a short statement about how a particular assignment went for them. What was hard? What was easy? This kind of pointed self-assessment can be very helpful when gauging where next to take assignments. (Make them more challenging? Make them less challenging? Include more practice and review?)
Self-assessment can be an interesting tool, and give you insights into the students’ perceptions of their skill level, when used in combination with other tools.
Process Grades
If students are working on a larger project, it makes sense to break that project up into smaller pieces and grade each element as it is completed. These assessments might be called process grades. However, I have noticed a common pitfall here. Frequently, students complain at the end that they did not get a passing grade for a part of the project that is clearly present in the completed whole. In these cases, students often become disgruntled and feel like the instructor is being petty. Let’s look at an example.
Suppose you have a five-week project, and class meets once per week. The project is broken down into five parts, with the whole thing due at the end of week 5. Part 1 is due after week 1; part 2 is due after week 2, etc. Imagine you have a student who was absent for week 3, never submitted that part, and got a zero for that process grade. As the instructor, you made it clear that the student was required to turn in each part, regardless of attendance. The student turns in a completed project after week 5 with all the parts and gets an “A” on the final project; however, the student is unhappy about receiving a zero for part 3. The student will invariably argue that part 3 was clearly present, as she got an “A” for the final project.
The instructor can point out, until she is blue in the face, the number of times she said that each part had to be turned in. She can point out the instructions on the assignment document. She can point out the information on the syllabus. In many cases, no amount of documented communication is going to make the student feel better about it. The student will feel that the instructor is being unnecessarily harsh, or petty.
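Running the arithmetic of this scenario shows why the student feels aggrieved. The weighting below is hypothetical (five process grades at 10% each, the final project at 50%); your own syllabus will differ.

```python
# Hypothetical weighting for the five-week project scenario:
# each of the five process grades counts 10%, the final project counts 50%.
process_grades = [100, 100, 0, 100, 100]  # zero for the missed part 3
final_project = 95                        # the "A" on the completed project

process_avg = sum(process_grades) / len(process_grades)  # 80.0
course_grade = 0.5 * process_avg + 0.5 * final_project   # 87.5
print(course_grade)  # prints 87.5
```

Under this (made-up) weighting, a single missed checkpoint pulls an otherwise "A" project down to a high "B", which is exactly the gap the student will argue about.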
I have seen this scenario unfold many times. I have struggled with it myself, and I have seen others struggle with it as well.
The one case where the process grade does work well is when the instructor needs to evaluate, with the student, the direction and quality of the project at each stage. In other words, the student is not capable of going on to part 4 of the project, if part 3 has not been completed and discussed. This scenario avoids the possibility of the student turning in the completed project and getting an “A”.
Waiting until the end to grade a whole big project is a recipe for disaster as well, because so many students struggle with time management. If the instructor only grades the final assignment, he or she is likely to get far fewer final assignments from the class, and the assignments received will generally be of lower quality. So I recommend going into the process grade situation with your eyes open. Communicate with students frequently throughout the process and explain why you need to see each part.
Also, for most courses I teach, I tell students that I will drop their lowest one or two classwork / homework grades, and that can help mitigate the disgruntled feelings. This is an important technique, which helps ratchet down stress and build some much-needed flexibility into the assessment system.
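The drop-the-lowest policy is easy to implement. A minimal sketch, assuming a simple unweighted average over classwork/homework scores (the scores here are made up):

```python
def average_with_drops(grades, drops=2):
    """Average the grades after dropping the lowest `drops` scores.

    If there are not enough grades to drop any, average them all.
    """
    kept = sorted(grades)[drops:] if len(grades) > drops else list(grades)
    return sum(kept) / len(kept)

# A missed assignment (the 0) no longer tanks the average:
print(average_with_drops([0, 70, 80, 90, 100]))  # prints 90.0
```

In this sketch the dropped zero turns a 68 average into a 90, which is the flexibility the policy is meant to buy: one bad week does not define the grade.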
Challenge Assessments
This is a technique that I have grown very fond of using. It consists of providing specific challenges to students, where they have a set period of time to complete a series of tasks. I will frequently run several challenges, where each challenge builds upon the previous one, requiring further work or development.
Challenge assessments are essentially tests; they fall more on the formal end of the spectrum and can be either closed or open book. Be aware that these challenges can be stressful for students, and that stress can skew results. Making the challenges open book can help ratchet that stress down. Calling them challenges, rather than tests, also helps keep students from feeling too stressed about them.
Making challenges timed also comes with advantages and disadvantages. The advantage is that you get a clearer idea of how well students know how to complete the task. The disadvantage is that it can be extremely stressful for some students. I have found a hybrid model for timed challenges works pretty well. Give the students a certain amount of time in class to complete the challenge; keep an eye on them and informally notice how far they get in that time, and then tell them to finish the challenge outside class. I have found this method is pretty effective at balancing the advantages and disadvantages to timed challenges.
The biggest pitfall for challenges that build upon each other is that if a student makes an error, or incorrectly completes an early challenge, he or she may be set up to fail subsequent challenges as well. Sometimes you can mitigate this by having students complete the first challenge, quickly assessing how they all did, helping them make corrections, and then moving on to the next challenge as a group. However, this method is tricky if you have a group of students with a diverse set of abilities. You can easily lose the engagement of the students who finished first while waiting for the slowest student in the room to complete the challenge.
I really like this type of assessment, and I use it frequently, because it resembles the kind of thing you see all the time on reality TV. Students respond to that, and it feels a lot more engaging than a traditional “test”; plus, it is quick and done, unlike a more involved project.
Challenge assessments can generate a lot of work to be reviewed and graded, but frequently, while students are actually working on the challenges and turning them in, I can review them and grade them on the spot, as they work on the next challenge.
Quizzes
I will often include short quizzes, usually 10 questions or so, that can be completed in 15 minutes or less. They most frequently consist of multiple-choice questions that check whether students understand specific concepts or can identify correct or incorrect facts connected to the curriculum. These quizzes are usually unannounced, and I tell students that they will not affect their grade much, but that if any question is a struggle for everyone, it is an indicator that we need to spend more time on that topic.
Occasionally, I use quizzes to test to see if students have done a specific reading. In this case, giving the quiz is only useful if you tell students upfront when you assign the reading that there is going to be a quiz to see if they did the reading. Getting students to read is something I really struggle with, and I frequently fail to convince them to do it. I have found that reading quizzes usually meet with very limited success. Sometimes, I might get one or two students to read, who would not have otherwise.
Badges and Gamification
I will throw this in here as another tool to try. I have not tried it much. I think there is a lot of potential here for motivating students who have grown up in a video game world of achievements and badges. I suspect that this would be especially successful if you have a learning management system with an active social community, and culture where badges and achievements provide individual students with a sense of standing within the community. It would take quite a bit of infrastructure, and a fairly large pool of students to leverage this.
Summary of Tools
Informal assessment, formal assessment, quantitative rubrics, qualitative rubrics, critiques, self-assessment, process grades, quizzes and challenges are some of the different tools in the tool kit that we can use. There are plenty more, but these are the main ones that I use. They all have their strengths and weaknesses, and it is important to be aware of those as you use them. It is always best to use a variety and to mix it up. Give a quiz, and then follow that up with a critique. Which students struggled on the quiz, but demonstrated that they understood the concepts in the critique? That is all very valuable information that will help you shape your further lessons.
Check out the other posts in this series
Introduction and Setting the Stage
The 3 F’s of Assessment & Meeting Students Where They Are
The Tools for Assessment < You are here
Micro and Macro Approaches
The Heart of My Assessment System
Gaming the System and Fair != Same
Real Deadlines & Extra Credit