Comfortably Numb

Alfie Kohn’s post “Why the Best Teachers Don’t Give Tests” struck a chord with me. In it, he points out that some educators who are vehemently opposed to standardized testing nevertheless adopt other practices (such as grading rubrics) that share common features with such testing. He argues not just against standardized testing, but against testing in general (one key point he failed to mention is take-home exams and/or in-class open-book tests, which I will discuss later). This interesting post counters the author’s claims and argues for more testing instead of less! So which way should we go as educators? Is testing making students Comfortably Numb?

As a student who has experienced a variety of testing environments, here is my take on what worked for me:

  1. Take-Home Exams – These were typically fairly hard and required me to go beyond what I had learnt in class and/or from the books. The really good take-home exams gave me ample time to finish the task and challenged me to apply my knowledge. Many of my take-home final exams have been project-based, which has helped the learning process.
  2. In-Class Exams w/ Open Books and Notes – These are probably just as good as take-home exams, except that I was still expected to solve the problems in a limited amount of time. However, some students just aren’t good test takers in such a high-pressure environment.
  3. Repeated Testing – Some may disagree with me here, but my personal experience has been particularly good. An engineering course I took in my final year of undergraduate studies was designed to repeatedly test students on the material, i.e. there were a total of 8-10 tests in the semester (no homework!). This meant that we were tested every other week on the new material we had learnt. This sort of arrangement really kept me on my toes and made me pay attention in class (it helped that the teacher was outstanding). I can say that I got a lot more out of this course than from other courses that had only mid-terms and/or finals.

I want to shift gears a little here and talk about my experience from the flip side i.e. my experience as a Graduate Teaching Assistant (GTA). As a GTA I have not only taught courses, but have also evaluated student work (based on grading rubrics). Rubrics seem to serve a good purpose, but also have severe limitations. Some advantages are fairly obvious, for instance they:

  1. Set clear expectations for students.
  2. Standardize grading practices across different teachers/teaching assistants who may be evaluating different sections of the same course.
  3. Make it easy to communicate student performance.
  4. Decrease ambiguity in grading practices.

I feel like these advantages are mostly from an educator’s perspective. Rubrics allow for standardized grading procedures that are simple to follow for both the student and the teacher, but they give minimal feedback in the amount of time available. Rubrics cheat students of the detailed feedback they deserve. Here are some limitations of using grading rubrics:

  1. Rubrics are typically designed to measure things that are easy to quantify, and may therefore be inherently biased.
  2. They make students turn in work by following rules. I have often seen inferior assignments that touched on everything in the rubric and received a decent grade, and other good assignments that were thought-provoking and showed me the student’s ability to think outside the box, but received a poorer grade because the work did not adhere to the rubric or the presented guidelines.
  3. They often leave less room for the teacher to be an authentic evaluator of the student’s work.
  4. While they decrease the time needed to assess student work, they don’t leave much room for authentic communication – such as providing extensive feedback consisting of questions and follow-up comments.
  5. Overall, rubrics/points do not seem to represent student learning/progress or competence in the subject matter.

Do you use rubrics for your grading? What has been your experience with them?

13 thoughts to “Comfortably Numb”

  1. Thanks for the post! Your experience about the frequent testing in your engineering course caught my attention. There is a recognized “testing effect” in the pedagogical literature (detailed in the book Make it Stick: The Science of Successful Learning) that if you want someone to remember something, you test them on it, a lot. This test can be a non-graded quiz or even just self-testing (so, the test does not have to hold a lot–or any–weight grade-wise), but forcing yourself to recall information in this manner strengthens the connection to that information in the brain, so it’s easier to bring up the next time. This practice can be unpopular because the idea of lots of tests sounds awful to the average student, and not all classes or subjects lend themselves to this format particularly well. But I think repeated testing can be really effective in some cases, if the tests are relatively low-stakes in nature and provide feedback from the instructor. So, I’m glad that you had a positive experience with this format!

    1. Thanks for your comment, yes, low-stakes (possibly non-graded) tests with lots of feedback from the instructor would be ideal whenever the format of the class allows for it. I’m glad that the repeated testing format has been studied extensively in pedagogical literature, I thought I was the only one advocating for more tests!

  2. The class I TA does repeated testing – weekly quizzes, but no graded homework (because the answer key for basically everything is available somewhere on the internet). At the beginning of the semester, the students generally complain about it, but we’ve gotten a lot of positive feedback from the end-of-semester evaluations. Mostly, students said that they didn’t enjoy taking the quizzes, but they liked the way it forced them to keep up with the material as the semester went along. They weren’t stuck trying to cram everything just before the exams.

    1. Yeah, that’s what I found best about repeated testing. It always kept me on my toes, otherwise I would just procrastinate till the test.

  3. I’ve enjoyed most of the take-home tests that I’ve had in undergrad for exactly the reasons mentioned, but I’ve had issues with the ones in graduate classes. Oftentimes the tests in graduate classes will be on much more advanced topics and the information is more commonly in the form of research articles. These are good for hyperspecialization, but make it difficult to find the more broad-application information necessary for solving test problems.
    In my own class that I’m teaching, I’m hesitant to use rubrics because these students (Materials Engineering seniors) have a strong tendency to work to the rules and give the bare minimum effort to get a good grade. I have heard one of my students say, on multiple occasions, “I just need to pass, that’s all that matters”.

    1. Surprisingly, I’ve enjoyed my grad school take-home exams more, mostly because they have been far more challenging. I agree that the tests may be hyper-specialized, but there is so much fun in tackling a new problem/puzzle with newfound knowledge. On the other hand, my undergrad take-home tests were rather bland and numerically based, and the bigger picture was often missing.

  4. I agree with the advantages and disadvantages of rubrics, but I wonder to what extent the disadvantages could be tackled by making smarter rubrics. If you find that the grades some students get according to the rubric don’t reflect the quality of their submission, it seems like you should be able to identify why and make a new category to capture that in the future. One potential category could be something related to creativity or thinking outside the box. If this was valuable, you could make it worth a lot of points. However, students might try to be creative to get those points and not do well on the others. In this case, you can make the creativity and other metrics multiplicative rather than additive so you can’t get a good grade without doing well across the board.
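    The additive-versus-multiplicative idea above can be sketched with a few hypothetical numbers (the criteria, weights, and scores below are made up purely for illustration):

    ```python
    # Illustrative sketch of additive vs. multiplicative rubric scoring.
    # All criteria, weights, and scores are hypothetical, on a 0-1 scale.

    def additive_score(scores, weights):
        """Weighted sum: excelling on one criterion can offset weak ones."""
        return sum(w * s for w, s in zip(weights, scores))

    def multiplicative_score(scores):
        """Product of criteria: a low score on any single criterion drags
        the whole grade down, so you can't coast on one strength."""
        total = 1.0
        for s in scores:
            total *= s
        return total

    # Student A is solid across the board; Student B is very creative but
    # weak on the other criteria (say: content, mechanics, creativity).
    a = [0.9, 0.9, 0.9]
    b = [0.5, 0.5, 1.0]

    print(round(additive_score(a, [1, 1, 1]), 2))   # 2.7
    print(round(additive_score(b, [1, 1, 1]), 2))   # 2.0  -- B stays close
    print(round(multiplicative_score(a), 3))        # 0.729
    print(round(multiplicative_score(b), 3))        # 0.25 -- B falls well behind
    ```

    Under the additive scheme, B’s perfect creativity score nearly closes the gap; under the multiplicative scheme, the weak criteria dominate, which is exactly the “can’t get a good grade without doing well across the board” effect described above.
    
    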

    1. So, I’ve tried this, but it felt a bit Kafkaesque. How do you put a number on creativity? In the end, are you comparing creativity or originality across a batch of projects? (So student A gets 7 points, but student B isn’t quite as creative, so they only get 5?) I will say that I’ve used rubrics like Ben describes to get my head straight – to help me sort out what it is that I’m responding to and why. But I don’t fill them out for the student or post them with the assignment. I’ve found that students prefer meaningful feedback and interaction, and that they learn more from a response that engages the work as a whole than they do from a series of numbers and “good jobs” or “needs work” on a grid.

      1. Thank you for your post. I agree that meaningful feedback works best, but I’ve also had many student requests for grading rubrics (especially if they get wind that one exists). I am very hesitant about handing over the rubrics – there are a number of reasons for this.

        I have often used grading rubrics as a general guideline – I first assign a preliminary grade based on the rubric and go back and adjust the grade based on how I felt about the quality of the overall assignment (i.e. creativity and originality).

  5. I actually appreciate rubrics because they help me know what I am being assessed on for a particular project. The type that helps me most is fairly detailed, in that it provides expectations for how we are to perform and to what extent. While this still leaves room for interpretation and flexibility, it provides the necessary scope of the activity – and clear insight into what we will be graded on. When the professor I have in mind provides feedback, it is also quite detailed regarding all aspects of the rubric. However, this has always been in relatively small PhD classes. I am not sure teachers of undergraduate classes have that kind of time to provide such feedback.

  6. Nice post. When I was a TA for fluid mechanics, the professor gave a quiz EVERY week. Because students are usually not very well prepared, or at least in some tough weeks don’t have enough time to review, most of them did worse on these weekly quizzes than on their exams. I had never thought about the benefits of this format, but the disadvantage to me is that if you grade these quizzes (which we did, and students often got poor grades), you might be making the students less and less confident in what they are learning. To be honest, fluid mechanics is the hardest undergraduate course in my field, and we do not want students to become more afraid of the content. So my thought is: we can test them, but we need to think more about grading.

    1. Don’t you think poor grades help you identify knowledge gaps that can be addressed in future lectures? Poor grades have never discouraged me from learning; on the contrary, they have made me put in more effort.

  7. Cool post — and I like the title reference. 🙂

    This got me thinking: I wonder if there is a way to distinguish between testing designed to help students see their own progress and testing solely for allowing a teacher to see progress. I agree with your point about regular testing – I have found it to be very helpful. But somehow I see this as completely separate from a testing culture that promotes grade-centered motivation. I’m trying to figure out what the difference is. I think if the testing is so regular that it ceases to be seen as a test, and instead becomes a regular pre-class review opportunity, you get a very different result than you would if you simply multiplied out the effect of high-stakes tests.
