The fine line between automation and insanity

For many years I’ve been arguing that instructors must create their own classes and their own materials (like this stuff) whenever possible. We should automate only those things that are purely factual or arguably objective, such as multiple-choice quizzes of factual information. We should avoid pre-packaged materials and course cartridges, picking and choosing those elements which forward our own pedagogy, not that of “learning teams” or publishers wanting to sell texts and ancillaries. I was taught one reason for this very early on by Louisa Moon, who in 1998 advised me to create online lectures that spoke in my own voice, because at the time we were worried that others might take the lectures we created and hand them to someone else to teach with, without credit or thanks. Now the reason is a little more complicated.

As the years have passed, my own pedagogical goals have focused more on student discovery, creation, and writing. The historical facts I like to leave to Wikipedia and multiple-choice quizzes. All my lectures and materials are either my own creations or things I have found and chosen myself. I use no course cartridges (I only ever used them for quiz questions anyway), and often eschew even a textbook. I do artisanal online teaching.

But while I have been exercising my pedagogy and DIY skills, the market and the trends have moved in a different direction. This week I attended a session where a publishing company showed us history software, including a piece that could grade essays for us. Some of the other professors in the room admitted this would be helpful, since we have too many students and want them to do so much writing. I, on the other hand, warned that machine-graded essays are a step toward either having grad students teach our classes or having us teach hundreds of students. Other historians’ responses to the invitation to participate in helping create computer essay grading are here. The current popularity of MOOCs bears out the concern about teaching massive classes, and so does this review of concerns from back in 1998.

But I notice that some of my own changes also begin to lean toward the dark side. A few students on last year’s evaluation said they wanted more feedback on their weekly writing, which I was grading only via a self-assessment at the mid-point and end of the class. I wanted the emphasis to be on practice rather than grading, but they said they did so much work that they wanted more feedback. Since it would be impossible for me to give individual feedback on every weekly writing assignment, I instead implemented the “graded post”. Writing posts are now randomly graded, with the grades aggregated into 20% of the total grade. I thought it would go faster if I created qualitative scales, which took me quite a while to create but could then be used to provide feedback more quickly. But I now have students who are angry at the feedback, who want details on exactly what the ratings mean, or who can’t tell the scale itself from their own ratings. They are more unhappy now than when they saw the writing as practice and it got graded twice a year (for the same 20%, BTW).

Then, when a colleague came to me overwhelmed by grading essays, I suggested the ratings as a possible way to speed things up, since we know what the errors and issues are going to be. Scales can be super handy. I suddenly realized I was suggesting, and doing, a certain amount of automation. No! That is the path to demons offering publishers’ cartridges and computer-graded essays, assigning us to teach 400 students without any help, devaluing our labor and our knowledge and turning us into pushers of education rather than teachers.

On the other hand, it is insane to provide every student with individual feedback on every bit of work they do. I know many professors who sacrifice family time and sleep time commenting in detail on stacks of essays. I’ve been guilty of it myself. And then we know that less than half the students read the feedback we’ve painstakingly given, and less than half of them implement any suggested changes. But we keep doing it because we care, we want to communicate to them, we want them to learn and do better next time.

So what is the fine line between automation, where the work isn’t ours and may be taken from us or increased to an unreasonable quantity, and insanity, where we give feedback to all on everything and sacrifice our off-the-job lives?

Some of the answer may lie in giving the right feedback to the right people at the right time. In my own class, for example, I think I’m doing it too often, which is stifling creative work and causing a focus on the grade. For the stack of essays, we could ask students to write “Comments, please” at the bottom of work where they want comments. Those who miss that in the instructions likely wouldn’t benefit as much from the comments anyway, and those who don’t read them won’t bother.

There must be other ideas, too?

3 comments to The fine line between automation and insanity

  • Thanks for this important post, Lisa! This is something I ponder every single day of the academic year because finding a way for students to get productive, useful feedback to improve their writing is the #1 challenge I face as a teacher. As we’ve discussed before, your teaching load is something I would consider almost insurmountable: for students to really IMPROVE their writing, they do indeed require extremely detailed feedback, frequently, along with opportunities to revise. With the number of students you are having to teach, I just don’t even see how that would be possible, which is why I sometimes despair about the future of writing in American education.
    In my situation, I have a teaching load of appx. 80-90 students total (three sections, 25-35 students per section) each semester, and teaching is my full-time job, so I spend 40 hours per week supporting those students (call it about 25-30 hours of providing feedback to students, and appx. 10-15 hours per week of other things: course development, administrative tasks, professional development). That has worked out well; I think all the students improve their writing significantly, and some do so dramatically. Some other things, in addition to the reasonable workload, that have made this system viable for me:
    1. I do only short-form writing with the students; they turn nothing in to me that is more than 1200 words long. Their project consists of 5 of these shorter pieces, assembled into a whole. I am able to give detailed sentence-level feedback on short pieces in a way that just would not be possible for a traditional 7- or 8-page or longer paper.
    2. Students revise everything that goes into their project, using comments from me and comments from other students. Usually there are two rounds of revisions (one major, one minor) – for students with serious writing deficits, there is more revision (and so their final project sometimes contains only three or four items instead of the usual five). Here is the schedule: http://onlinecourselady.pbworks.com/w/page/13014521/storybook
    3. In addition to their project writing (which I read), the students blog (which is the equivalent of what would probably be the class discussion board in other classes). I use a carefully designed system to try to make sure that every student gets at least one, hopefully two, and sometimes even more comments from other students on those blogs. I believe that all writing should get some kind of human response – either from me, or from the other students, or both.
    I have NO USE OF ANY KIND for robograding. I am appalled by the whole idea of it because the quality of the grading is INSANELY poor. I mean: INSANELY poor. I cannot even find a grammar checker that will help students accurately with writing mechanics (comma splices, spelling, etc.) – and that’s just about the written form. Writing is not about form for its own sake; it is about CONTENT. And a robograder cannot assess content in any way.
    So, thanks again for this important blog post. I am very aware of the need to set limits to the time and effort I pour into my classes, and I am very good about doing that after all these years of teaching online. The way I do that is through the smart design of the class, using technology to enhance the efficiency of class communication. I believe absolutely in the incredible efficiencies of digital communication; in contrast, I find robograding a complete abomination, spawn of the devil, etc. – you get the idea. :-)

  • Should have known I’d see Laura here too. The push to sell essay-grading software never really went underground, but it does seem to be in a renewed surge mode lately. I also wonder how many instructors are using it but being quiet about it. I’m in email conversation with a former student (not an academic; a single parent working full time) who is back in college taking online courses to finish her degree. She came to me with questions about essay grading in a composition course and about what kind of essay the instructor is telling her to write. I very strongly suspect machine grading, and that the instructor (who already teaches full time at an area community college, so is moonlighting online) is structuring writing assignments for machine gradability.

    Abomination indeed. Sick-making feelings in the pit of my stomach… but all I can do is give tips on how to game the machine.