About Carlyle

In the mid-1880s, during his time at South Kensington, when he was supposed to be studying for his science examinations, H.G. Wells was instead educating himself. In his autobiography, he noted Thomas Carlyle as part of his self-required reading:

I was reading not only a voluminous literature of propaganda but discursively in history, sociology and economics. I was doing my best to find out what such exalted names as Goethe and Carlyle, Shelley and Tennyson, Shakespeare, Dryden, Milton, Pope—or again Buddha, Mahomet and Confucius—had had to say about the world and what they mattered to me. I was learning the use of English prose and sharpening my mind against anyone’s with whom I could start a discussion.

Wells considered it a treat to read Carlyle’s book on the French Revolution, a break from his other reading. He also noted that England as a whole was influenced by Carlyle toward a nationalism that was “consciously Teutonic”. Later, in the 1890s, Wells claimed, every new writer was labeled a “second” somebody; at one point Wells himself was called a second Carlyle.

Although Wells was about twenty then and I am, shall we say, at least twice as old, I am also educating myself, in Victorian culture and literature as well as education. I cannot read all the things Wells read, but I did want to take a look at Carlyle, since I had only read Signs of the Times (then, in a move I have regretted more than once, I assigned it to students). I bought a copy of Past and Present a couple of years ago, and tried to read it. I say “tried” because I never made it through – the prose seemed awful, like a combination of Wordsworth on drugs and Kipling on a very bad day (one more exclamation point and I would have crawled under the sofa).

So shopping at Skoob on my last trip to the UK, I picked up a short biography on Ruskin (for obvious reasons – readers know how much I both dislike him and am trying to understand his influence). Next to it, in the same paperbound series (Past Masters, by Oxford University Press), was one on Carlyle, by A.L. LeQuesne. I read the whole thing in one day (I won’t say “in one sitting” because I had to get up for tea and chocolate…ok, more than once).

It was brilliantly written. I’m not sure why I didn’t expect that. Biography can be quite dull, and Carlyle himself was hardly exciting. LeQuesne’s thesis (I didn’t expect a clear thesis either) was that Carlyle’s best work took place in only a few years of his very long career: 1837-1848. Before this, Carlyle wrote poorly (I am apparently not the only one to notice this), and afterward his worldview fell behind and he no longer spoke to the current generation.

I do not like biographies that explain in detail the personal lives and clinical ailments of their subjects. Some things seem relevant to me (like Holmes’ noting in his biography of Wellington that the Duke put bars on the windows of Apsley House because he feared the rabble) and others do not (like the many biographies detailing Wells’ sexual proclivities, either known or imagined). LeQuesne had just the right amount of personal detail. It was important to know how witty and endearing Carlyle’s wife was, and how charming their marriage (at least to outsiders), to help explain why their house on Cheyne Row in Chelsea was appealing to many intellectuals as a place to meet and converse. His dyspepsia and sensitivity to noise were mentioned a few times, mostly as distractions from his writing that needed to be overcome, but not, thank goodness, in detail. Similarly, his religion was discussed only as it influenced his work.

Robert Tait, A Chelsea Interior (The Carlyles at Home with their Dog Nero at 5/24 Great Cheyne Row, London), 1857-58


Carlyle had roots in an agricultural family in Scotland, and lost some of his youthful religious beliefs when he left. As a young man he wrote many book reviews, and since Wells did some of this too, it helped me understand a culture in which writers entered the market by writing such reviews. Rebelling, philosophically and intellectually, against the Enlightenment emphasis of the university in Edinburgh, Carlyle began studying German romanticism. His Sartor Resartus is described as “a weird Romantic masterpiece which defies either classification or summary” (p19). His style was sometimes “rambling, turbulent, ejaculatory, vastly self-indulgent and metaphorical” (p21). In this work, he apparently developed a theme of the material expression of life requiring a spiritual or supernatural foundation. Earnest work, he thought, made possible the glimpsing of the spiritual beneath the material (Ruskin would have understood this, I think). The book apparently fit the Romantic idea, common among people like Wordsworth (duh), of the superiority of the imagination over the dullness of cold rationality.

Reading this sort of thing now, when rationality is so sorely missing in our culture, and imagination has gone awry into nightmares of duplicity and cruelty, is difficult. But as he continued, Carlyle turned himself into a historian, using that imagination to enliven deep primary research into the past, particularly the French Revolution and the English Civil War. LeQuesne claims he replaced a faith in religion with a faith in history (p33). This was not a faith in materialism, like that of Karl Marx, but in providential judgement. The horrors of the French Revolution seemed to be divine punishment of some sort, revealing God’s purpose. Carlyle thus opposed other historians of the 1830s, who looked back on the revolution as a horrible deviation from the natural order and a warning about a possible uprising. Carlyle’s analysis instead provided a “cause of hope rather than fear; for it was a sentence of divine justice on a corrupt society” (p35, a page dog-eared by a previous reader of my copy).

Of even more interest to me was the analysis of Carlyle as a historian in professional terms. According to LeQuesne:

Carlyle did not believe that the historian’s function was to provide a smoothly flowing narrative for the entertainment of his readers, nor that history could be treated as an experimental science from which inductive laws of human behaviour could be derived, nor that rigid objectivity and detachment were either possible or desirable qualities in a historian.

This seems similar to the re-emergence in recent years of imaginative historical writing. Under what circumstances, I wonder, do historians appear who value the imaginative over the rational? Despite the rejection of narrative noted by LeQuesne, the passages quoted from Carlyle’s books show, as with Dickens, a deep-seated sympathy for the poor and downtrodden. And he showed it in such a way as to condemn bourgeois complacency, often in stirring prose (and prose that I could actually read). In one passage he makes the reader grieve for the dying Dauphin in prison, then jabs at a conscience which can lament this but ignore the conditions of “poor Factory Children” who perish while no one cares (p43).

The difference between mid-19th century social reformers and Carlyle was that, as Carlyle’s career continued, he saw the answer to social inequities as the rise of heroes, and sometimes of a heroic nation-state. LeQuesne says several times in the book that Carlyle was “no democrat”. He claims that Carlyle’s work on the French Revolution won him the “ears of a generation”, but that his work after 1850 lost them (p55). LeQuesne calls him a “prophet” (and spends a chapter or two attempting to prove that this title is appropriate), but his work became preachy and grumpy. By then people were actively involved in reform acts of many kinds, and Carlyle’s vision of providentially-guided history and heroic leadership seemed out of place. Moreover, his work began treating the downtrodden soldiers, colonials, and workers with derision rather than understanding. LeQuesne claims this transition is masked by Carlyle’s focus on hero-worship (p85), but the hero is needed to guide people precisely because the people are so inadequate to the task.

Thus Carlyle lost his readership, and certainly my interest — it was this sort of writing I encountered in Past and Present. LeQuesne sees his later approach as a rejection of humanity and an increase in impatience with slow progress, but it also seems to me a good foundation for dictatorship and all sorts of other nasty mechanisms that don’t trust people even with a republican system, much less a democratic one.

So in this biography, if not in Carlyle’s own works, I have gotten an idea of what Carlyle had to say and why it mattered — the goals of H.G. Wells’ own reading of him. Unfortunately, I have found myself with little sympathy for any of his ideas except those designed to help readers understand the lives of those less fortunate. Much of the rest (including anti-rationalism, imaginative historical writing, and hero-worship) I find to be at the foundation of much that is wrong with society now, as well as then.


Published — Cram and Criticism: H.G. Wells and Late Victorian Education

My journal article “Cram and Criticism: H.G. Wells and Late Victorian Education” has just been published in the Fall 2018 issue of The Wellsian, the peer-reviewed journal of the H.G. Wells Society in the UK.

My pre-publication paper is here, and further info on The Wellsian (including how to obtain copies and back issues, and how to join the society) is here at the society’s website. Citation information:

Lisa M. Lane, “Cram and Criticism: H.G. Wells and Late Victorian Education”, The Wellsian, 40 (2018) pp. 28-42.

Enjoy!

Dipping a toe into Digital Humanities: word clouds

The term “digital humanities” has always confused me. When I first heard it, I assumed it was what I was already doing – applying digital approaches to humanities research and teaching.

But no. Digital Humanities seems to be about applying certain elements of computer science to the humanities, with emphasis on quantification. At least, that’s how I’d put it. Wikipedia says, “the systematic use of digital resources in the humanities, as well as the reflection on their application”. Stanford University says, “Digital humanities foster collaboration and traverse disciplines and methodological orientations, with projects to digitize archival materials for posterity, to map the exchange and transmission of ideas in history, and to study the evolution of common words over the centuries.”

[I am treading carefully here, since the term is now used by people who have professionalized the subject. Like most new disciplines, it’s already questioning itself.]

When I come across the term, it usually involves word counts, tallying the number of times a word or words are used in a text. I think that makes Wordle one of the first digital humanities tools. Wordle was an applet created by Jonathan Feinberg ten years ago. It counted the number of times a word appeared in a text, and created a tag cloud, with more frequent terms in larger text.
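
Under the hood there isn’t much to a tool like this: tokenize, count, scale. Here is a minimal Python sketch of the idea (the sample text, stop-word list, and size formula are my own stand-ins, not anything Wordle or Davies’ generator actually uses):

    from collections import Counter
    import re

    def word_frequencies(text, stop_words=frozenset(), top_n=25):
        """Lowercase the text, pull out the words, drop stop words,
        and return the most common terms with their counts."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w not in stop_words)
        return counts.most_common(top_n)

    sample = ("We hold these truths to be self-evident, that all men are "
              "created equal, that they are endowed by their Creator with "
              "certain unalienable Rights...")
    stop = {"we", "to", "be", "that", "all", "are", "by", "their", "with"}

    for word, count in word_frequencies(sample, stop, top_n=10):
        # a tag cloud just maps frequency to font size; print() stands in for drawing
        print(f"{word}: {count} -> font size {12 + 6 * count}")

Everything interesting in the resulting cloud comes down to which stop words get stripped and how frequency is scaled into type size.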

So, using another program, Jason Davies’ Word Cloud Generator, let’s see what happens.

For example, here’s the Declaration of Independence using the 400 most-used words:

[Word cloud of the Declaration of Independence]

There are many uses for such an approach. I can compare it, for example, to Magna Carta, where there is far less about the people.

Even without a word cloud, one can use a basic word search if one can get the whole document into a browser window. So if I have the Declaration here, and I do a “find” for the word “people”, it tells me it’s there 10 times.
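
The same check can be scripted. A browser’s “find” is essentially a substring count; counting whole words takes a word-boundary match. A tiny sketch (the file name is just a hypothetical local copy of the text):

    import re

    with open("declaration.txt") as f:  # hypothetical local copy
        text = f.read().lower()

    print(text.count("people"))                  # substring count, like the browser's "find"
    print(len(re.findall(r"\bpeople\b", text)))  # whole-word occurrences only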

So today (stand back!) I’m going to apply this method to HG Wells’ autobiography.

The 19,332 words that result after removing the table of contents and the index took 7 minutes to process (with all words counted):

Hmmm. “Peace” is big, and “Nazi” is small. “Work”, “world”, “now”, “man” “life” are all big. “New” and “still” are the same size. There is no representation of the personality of the piece, which is part of the purpose, except in the words themselves. But really, not very helpful. What if I limit results to the top 25 words?

A little better, but hardly revealing.

Fiction, however, often fares better. That’s why it’s digital humanities, not digital biography. Taking The Sea Raiders by HG Wells at 25 words, we get:

Tentacles! Creatures! Well, that’s more fun, anyway.

Given the current environment in social discourse, digital humanities techniques are being used to ferret out trends in speeches, maps, and censuses, to demonstrate sexism or racism. So the use goes far beyond word clouds.

But I’m still sad. No digital humanities grants for me.

Standardizing what’s good

Every October, I work on my classes for next term. Partly this is because the spring schedule comes out the third week of the month, and partly because October has always been particularly difficult for morale and motivation (mine as well as the students’). I’m not sure why. Could be the lack of any real holiday except Halloween (Columbus Day is tainted and it was never a day off anyway), or just mid-term blues.

That’s my excuse anyway, since I’m not supposed to be doing this till after my sabbatical is over. But I am still doing my reading and research. Prepping is more like a break, because mostly what I’m doing is changing settings rather than creating things. It turns me into a non-thinking machine, changing hundreds of due dates and adding lots of links (why aren’t we at a place where I can assign this to someone?). Definitely mindless.

I’ve decided I like the sources and readings for my classes, and I like my lectures, so no changes are needed there. But at the end of last term, I added two elements to the weekly coursework for two of my classes, then tested them again in three classes this summer. These elements are “Check primary source for points” and “Submit lecture notes”.

So once I’m done, the weekly tasks for each class I teach online will be this:

  • Due Wednesday:
    • Read the textbook
    • Read/listen to lecture
    • Research and post primary source
    • Check primary source for points
  • Due Sunday:
    • Read and discuss the documents
    • Submit lecture notes
    • Quiz

In addition, for the first two weeks there are multi-page, quizzed Learning Units about primary sources. And, three times during the semester, there are Learning Units for the next writing assignment, followed by the assignment itself. Writing Assignments are based only on the sources that have been posted in the Boards by the class, and have a scaffolded format that I created myself, so they are difficult if not impossible to purchase or plagiarize. The Final Essay, for the full-term sessions, is based on the third writing assignment, and folds into the grading for Writing Assignments.

“Read the textbook” is linked to the actual textbook pages, except for the one class where I’m still using a purchased book.

“Read/listen to lecture” is linked to my online lectures, hosted on my rented server, which contain audio of me reading the lecture, video clips, etc.

“Research and post primary source” is the laboratory type posting, on a discussion board, of visual primary sources students find on the web, with citations and student commentary.

“Check primary source for points” is a one-question quiz checklist of all the things required for full points on a primary source (image, author, title, date, live link, commentary), so it’s a self-evaluation of their own source, instantly graded.

“Read and discuss the documents” is annotating the assigned textual sources using Perusall inside Canvas as an LTI, which assigns points automatically but I do have to check through all of them and make sure they’re right.

“Submit lecture notes” automatically assigns 2 points when they submit them, and they can be in any format, including images of handwritten notes.

“Quiz” is a multiple-choice quiz based on lecture, documents, and textbook readings.

The grading breakdown is:

Read and discuss the documents 20%
Quizzes 20%
Primary Sources 20%
Lecture Notes 10%
Learning Units 10%
Writing Assignments 20%
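
For what it’s worth, the arithmetic behind those weights is just a weighted sum. A minimal sketch, with category scores invented purely for illustration:

    weights = {
        "Documents": 0.20, "Quizzes": 0.20, "Primary Sources": 0.20,
        "Lecture Notes": 0.10, "Learning Units": 0.10, "Writing Assignments": 0.20,
    }
    # hypothetical percentages earned in each category
    scores = {
        "Documents": 88, "Quizzes": 92, "Primary Sources": 95,
        "Lecture Notes": 100, "Learning Units": 90, "Writing Assignments": 85,
    }
    final = sum(weights[c] * scores[c] for c in weights)
    print(f"Course grade: {final:.1f}%")  # 91.0% with these made-up numbers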

Right now, the only class that varies from this is the one US History class where I have full discussion. In that class, it’s:

Homework 20%
Lecture notes 20%
Writing Assignments 20%
Discussion 20%
Constitution exercise 10%
Final Essay 10%

The pedagogy, briefly, is based on emphasizing task completion, with grading considerations as secondary. Each individual assignment is low stakes, though with only three or four writing assignments, the stakes are higher for putting all the knowledge together. Assignments that can be graded immediately (quizzes, learning unit knowledge checks, self-assessed primary source points, lecture notes) are, so that students can get immediate feedback (yes, I reserve the right to change points if there are inaccuracies or instructions aren’t followed). The addition of lecture notes and self-assessed primary source points adds a metacognitive learning aspect. The work of doing history is engaged in multiple ways, including reading, writing, discovery, sharing, and visual analysis.

Student choice is built in, in several ways. Students choose their own primary sources to post, and their own topics for writing assignments. They can choose which days they work, so long as deadlines are met (each unit opens a week in advance). Lecture note format is up to them, to suit their own note-taking style. Since each individual item is worth only a few points, they can choose to miss one or two without doing serious grade damage. Two attempts are given for self-graded items, so they can go back and correct something without penalty.

My role is guide on the side, in the middle, at the front, and in the end. Instead of grading constantly, I spend my time reading their notes, viewing their posted primary sources, answering questions, writing weekly or twice-weekly communications, conversing with students in the Perusall annotations, and yes, grading their writing assignments. I have had no complaints about how much work the courses are, since most of the things I’m requesting (like lecture notes) are common to on-site classes. Some students appreciate the trust, and the autodidactic opportunities. Others appreciate that I’m there for them, and respond quickly to their individual messages. (On this, I’ve decided that students want the individual approach, but not necessarily for class content – rather they want it for their individual problems and issues, most of which have nothing to do with the subject. My method leaves time for that.) And I can grade more generously, because the point is to do the work, be the historian, rather than show me you’re good enough to do history without me.

There is also something interesting about having the courses this structured. The course itself seems to be its own entity, has its own trajectory and completeness. It is almost like it’s me, the students, and the course. The students and I interact with the course together, instead of the course acting as a weapon with which I beat students using grades. This goes along with the LMS (Canvas – blech), which the students and I can work in (and on, when things go wrong) together — it’s them and me against the system.

So although on the one hand I don’t like the idea of standardizing courses, in this case I’m standardizing what’s good, what works, what meets my pedagogical goals. I am free to change readings, lectures, materials, instructions, at any time. After 20 years of building these courses, I think I’m onto something less subject to the vagaries of passing fads (personalized learning, individual learning styles), dangerous web spaces (MOOCs, open education), and changing jargon (student learning outcomes, guided pathways), and more founded in solid pedagogy.


Victorian Studies

To begin my work on Victorian England, I need to examine the field of Victorian Studies. Unlike History, area studies of all kinds are newer disciplines, and I often have difficulty figuring out what they’re trying to do. Every discipline has its own methodology and its own literature – that’s what makes it a discipline. Now that I’m moving away from working with online pedagogy and educational technology, it’s necessary to make sure I am aware of the milieu in which I’m operating.

Although by no means intended as an introduction to the subject, Martin Hewitt’s “Victorian Studies: problems and prospects?” from 2001 has nevertheless provided me a good entrée. Noting the expansion of books and graduate programs in Victorian Studies, the article critiques the lack of interdisciplinarity on which the field is supposedly based. Hewitt notes several other concerns, including historians uncomfortable with the word “Victorian” and the dominance of presentist topics (gender, women, imperialism) that use the Victorian era just for examples. But a bigger issue is the fact that historians and literary studies have not really combined in an interdisciplinary way, even while conference panels may be multi-disciplinary. Apparently the most comfortable and useful pathway for Victorian Studies has been the “cultural history” of the 1980s and 90s, although it took a while to shake off the perception that it was elitist. This was interdisciplinary because it used methods like Foucault’s analysis of culture (p. 141).

But cultural history does not create a disciplinary field that is consistent and has an “agreed focus” (p. 142). The result is that there is no common scholarship, and Hewitt notes a lack of “key texts” (p. 144). This helps me because I couldn’t figure out what those key texts were when I was looking for a way into the historiography of Victorian Studies. Hewitt sees the historiography as fragmented, limiting the impact of important works. Previous historical works also tend to limit biography to a few “semi-canonical” men, such as Carlyle, Mill, and Ruskin (p. 145).

In literary studies, Victorian Studies has become a “sub-field”, and the many journals of Victorian Studies tend to be dominated by literary analysis. When I subscribed to the journal Victorian Studies and to Nineteenth-Century Studies, I noticed immediately that the editors were almost all from university English departments. As I read the articles, I kept rolling my eyes as the authors seemed to plumb the text of Victorian novels for meanings that were obscure, presentist, imaginative, or all three. I found most striking Hewitt’s point that such studies focus on the reading as it takes place in the current reader’s timeframe (ours). The articles use the present tense, as if the characters in the novels are here with us now, while a historical article would use the past tense (p. 148).

History, Hewitt notes, is constructive and materialistic, while literary and cultural studies are idealistic and interpretive (p. 149) – I would say “imaginative”. Focusing on the text ignores the history. This is why I dissuade students from constructing theses that seem to show the text as possessing causation (“propaganda led people to hate the enemy”) – we cannot prove such a thesis historically, although it is possible to prove that the text might have been meant to do something, or that something might have caused (or influenced, more likely) a work to be written.  

Hewitt’s agenda includes developing a solid historiography, and creating new research based on larger ideas. His prescription for historians (he’s one too) is to broaden the field to include more ideas and their production, combining more approaches. Since the context and environment of the era is embedded in the text, the process is one of sense-making. At least, I think that’s what he’s saying – he loses me when he talks about “syncretic hermeneutics” (p. 153). His focus seems to be on creating intertwinings of text and practices to create something truly interdisciplinary, where the “text becomes means rather than object” and the focus is on the impact (and reproduction) of the text (p. 154).

In determining which texts have been underutilized, Hewitt notes many that I am engaged with, including essays, lectures, and newspapers – forms of communication not intended to be high culture. His ideal Victorianist study combines elements from history, anthropology, ethnography, literary criticism, sociology, and art history (p. 155).

I believe that the goal here is to provide a more well-rounded, thorough, and (by implication) realistic understanding of the Victorian era. I am at a loss, however, to explain why it is necessary to do this through the methodologies of disciplines other than history. I don’t think I realized that I am a history snob until I began reading Victorian Studies journal, and finding myself enjoying it while at the same time becoming exasperated with the lack of evidence beyond popular texts. The field strikes me as similar to steampunk: an enjoyable romp through Victoriana to fulfill present (and presentist) needs by drawing imaginative connections. (I feel the same way about the new genre of “creative non-fiction”, about which I will write more later.) I in no way believe that the historical method can provide as accurate a portrayal as going back in a time machine, but history is adaptable enough to take on the perspectives, if not the methods, of other disciplines and use them effectively. I think I would have understood a plan for a new Victorian History better than I understand a plan for a more cogent Victorian Studies. 


Hewitt, M. (2001). “Victorian Studies: problems and prospects?” Journal of Victorian Culture, 6(1), 137–161.

Why journalists write such good history books

In an only slightly different life, I would have been a journalist. As a significantly younger person, I followed Watergate closely, reading All The President’s Men (as well as Haldeman’s The Ends of Power), and attending a lecture by John Dean given at my college. I saved all the Newsweek articles on Patty Hearst, and all my newspaper clippings of the 1975 World Series, in a laundry basket. I became copy editor and then editor of my high school newspaper, writing articles and proofing galleys and protesting the truancy laws. I majored in English at UCLA.

I switched to History due to an odd series of events involving a high school counselor who didn’t tell me when the AP English test was offered, a fascination with the musical 1776, and a brilliant course I took with historian Joyce Appleby. I never took a journalism class after high school, but instead trained as a historian. My degrees are in History, and my certificates are in Education.

For the past decade or so, I’ve studied the evolution of the web as a teaching tool, and in particular online pedagogy. I’ve experienced the typewriter, the internet, the web, as customer and creator. I’ve used rotary dial phones, dial-up modems, and cell phones. Even as I experienced digital history unfolding (or perhaps because I experienced it), I have “reported” my findings rather than studying the phenomena as a historian. After years of being the person in the room saying “but this has all happened before”, I have recently returned to the study of history as my primary task. And yet, the history books I most enjoy reading now are not written by historians. They’re written by journalists.

Most of these works are about the history of technology, which was my specialty in grad school (although I studied medieval, not modern, technology). Tom Standage (The Guardian, The Economist) published his brilliant The Victorian Internet in 1998, the same year I began teaching online. The book became a reference for me, a way to connect the present (in which I was frantically operating) with the past I understood. In 2003, a student gave me a copy of Empires of Light, by Jill Jonnes (New York Times), about Tesla, Edison, and Westinghouse. It was another reminder that so many things (commercial competition, technological advancement, bloody-minded geniuses) are not new. Atlantic and NY Times writer Nicholas Carr’s The Big Switch: Rewiring the World, from Edison to Google (2008) was a delight, part of a body of his work that supported my gut instinct that the web was making us stupid and that our dependency on computers had a serious dark side (that was the same year that saw the rise of MOOCs).

Steven Johnson (Wired, NY Times) wrote The Ghost Map, a 2006 book so clear and brilliant in its discussion of the cholera epidemic in London that I assigned both the book and his TED talk to students.

Few of these people have history degrees. Johnson’s are in semiotics and English lit. Carr, also literature. Standage has a degree in engineering and computer science from Oxford. Interestingly for those looking at women writers, Jonnes is the only one with a PhD in history, obtained after she was a published writer for the New York Times.

They don’t pretend to be historians. Standage notes his specialty is “the use of historical analogy in science, technology and business writing”. Johnson just calls himself a writer, and Wikipedia says the same about Carr.  Jonnes uses no noun to describe herself despite her degree.

With such a trend in evidence, it didn’t surprise me to read in Bloomberg Businessweek that New York Times reporter Cade Metz is writing a history of artificial intelligence.

Normally I’m quite the snob about non-historians doing history. For example, we have a number of departments at the college that offer classes with the word “history” in the title, taught by language or music instructors. The individuals teaching them are quite wonderful, but they aren’t doing history. They’re teaching cultural heritage, typically without reference to historical methodology. Their technique is usually narrative, rather than the development of a thesis to be proven with evidence. Similarly, the profusion of “history” days and months for subcultural groups (women, African Americans, etc.) is heritage-based, although the organizers claim to be doing history in order to show they are on the right side of history, which is another thing entirely.

Such storytelling, however uncomfortable I may be with it as a historian, has always been important to human beings. It has become increasingly significant in recent years, as competing narratives are created to defend particular points of view. To the aggrieved, for example, all of human history may be a story of grievances. Historians study historiography, the “schools” of history formed by different viewpoints (such as Marxist history, or the Annales school, or the New Left). Historians tend to recognize these varying perspectives, though not always. Competing perspectives are inherent in the discipline. They’re a feature, not a bug. Historians know there is no “one” history, but rather histories told for varied reasons. That’s why historical evidence is so important — it is needed to support one’s perspective, to ground it in fact.

Neither historian nor journalist, English prof Marshall McLuhan provided the foundation for many of the works mentioned here in his The Gutenberg Galaxy (1962).

So what do journalists and historians have in common? Both observe the world carefully, and note patterns. Both access the past for context. Both rely on sources, tell stories, create narrative, highlight key people and events. But they divide on method. A journalist may consult only a few sources, or a very broad selection of sources, and need not engage in exhaustive research among scholarly articles or primary documents. They may rely on scholars’ interpretations, since they themselves are not engaging in scholarship. Journalists may use more literary techniques to draw the reader in, to make clever connections. (These techniques have actually changed the way history is written by historians, as publishers now seek a broader audience for history books in an age where fewer people purchase books at all.)

Most importantly, journalists need not provide a new perspective beyond the telling of an interesting story. The originality lies in the creative telling of a tale, rather than in the development of an argument that must be proven with facts. Perhaps this is why the articles on Patty Hearst did not lead me to research the Hearst family, or terrorism, or cults. I never got into the history of baseball. I watched Watergate happening but did not feel an urge to research previous presidential scandals, or violations of the constitution, or the composition of the White House staff. The stories were complete in themselves.

So when a journalist turns a hand to history, it has the potential to be more lively, and more immediate. Liberties are taken (almost into “creative non-fiction”) with personalities, like those of Tesla or John Snow. “Bringing history alive” (a phrase that makes me cringe, with its implication of imposed drama) need not involve engaging in historical scholarship, but it does create the all-important analogies that Tom Standage mentions. These books bring facts to light, and connections between past and present. Without the work of writers like Standage and Johnson, it is unlikely I would have found the connections between what I was doing with my teaching, and what others have done in the past. Even if I discovered these connections while defending history in the various MOOCs in which I was enrolled, I might not have realized my own potential to write about them.

Skilled journalists make the reader feel engaged in the story, even if their thesis is nothing more than, “look at this cool series of events that happened”. Because they live in our time, their reasons for looking into the past are the same as those of historians: to find insights about ourselves in the present. With such similar goals, it isn’t surprising that so many good books featuring history are written by journalists.

Paradigm shift? best practice? perhaps not

In searching for information about distance learning theory that might inform my research into 19th century distance education, I came upon this article (thanks to Jenny Mackness):

Lee, K. (2016). “A paradigm shift rhetoric and theory-practice gap in online higher education: a case study of an open university.” In S. Cranmer, N. B. Dohn, M. de Laat, T. Ryberg, & J. A. Sime (Eds.), Proceedings of the 10th International Conference on Networked Learning 2016 (pp. 251-259).

The author is Kyungmee Lee, Lecturer in Technology Enhanced Learning at Lancaster University. Her paper focuses on the discrepancy between social constructivist learning theories and the actual instructional designs for online classes used at places like the Open University.

I have long suspected that the maniacal adoption of collaborative pedagogy was based on very little evidence of efficacy. Instead, in my experiences studying connectivism and constructionist theory, I was aware that such methods were lauded by techno-utopians, many of whom weren’t actually teaching first-year college students. Studies demonstrated student satisfaction with the methods, but not better grades.

Lee notes that despite the insistence on a “paradigm shift” from “old” methods to collaboration and constructivism, resistance implies that the paradigm never shifted at all. The shift has been purely theoretical, and not adopted in practice, where most online classes do not use these techniques. While many studies have chided instructors and designers, implicitly or explicitly, for resisting the new and superior methods, this one subversively questions whether the methods really are better.

Hercules and Bacchus presenting libations while Atë, goddess of mischief and deception, flies above (1778)

It’s an interesting approach, questioning the assumption of a change in the field. While the study does not get into the “new” pedagogies per se, it implies that they may not be better, or may be better only under certain conditions, or that people who really want a paradigm shift think they can just declare one. This last is most interesting to me, because it raises the question cui bono?

Many of us assume that if there’s a new technique or tool, it might be better than what we’re doing, or at least be better than older options. We ask questions like: will this work for my students? is this an improvement on what I’m doing? We give it a try.

With a tool, it may occur to us that adopting it benefits the company providing it, especially if we pay for it. If we don’t pay, it’s become increasingly obvious that the “freemium” model either benefits the startup, or that our data becomes the product being traded, a la Facebook.

But perhaps with a method, we fail to ask these questions. To whose benefit is it that I adopt this method? The well-meaning researchers and their careers, certainly. But our students? If so, which students? Does it benefit me as an instructor? How?

I have attempted many different pedagogical models in my 28 years of teaching, both in the classroom and online. None have been inherently better than the others. Each method has advantages and disadvantages. Whatever is trendy, though, is considered a paradigm shift, or a “best practice”. Right now, for example, the Online Education Initiative, which is moving to control all online classes at California community colleges, insists that collaboration among students is required as part of its online course approval rubric. There is little research to support this requirement.

If we consider that a paradigm shift has occurred, we are much more comfortable requiring such methods, as if they were based on research instead of theory and some successful practice. By questioning whether the basic principles are sound, whether there is any support for “best practices”, we give ourselves much more choice. We also give ourselves the opportunity to examine past practices, not as outmoded or disproven (which in most cases they are not), but as possibilities for current and future practice.

What is required?

Although I have stepped back quite a bit from my reading and research in online education, I still have a Google Alert set, and still receive and examine recent articles, when I can stomach it.

The dictatorial tone of both articles in my inbox today is the subject here.

The first, The Necessary Knowledge for Online Education: Teaching and Learning to Produce Knowledge (Ferreira et al), did a study of 27 educators, all in the field of Education, to determine what knowledge (this sort of article usually says “skills”) is needed to teach online. What struck me was the premise, stated in the abstract:

Online education requires pedagogical mediation and the skills and competencies to work with technological resources which promote interaction, collaboration, and co-learning.

Well, that’s just not true. Online education does not require an emphasis on collaboration – rather, that is one possible approach. It is also entirely possible to create online education that personalizes the class through different kinds of approaches to content, or that emphasizes at every step the learner’s relationship with the material rather than with colleagues and “co-learning”. I understand that the current phase in online education pushes the collaborative approach, but it certainly is not “required”.

The second article, Online Continuing and Professional Education: Current Varieties and Best Practices (Schroeder, et al), features this idea:

Teaching online requires a team, not just an individual. While face-to-face teaching may be a singular effort, online teaching includes a multitude of technical, pedagogical, environmental, and associated considerations that requires a team of experts.

That’s not true either. I have never had a “team”; rather, I developed not only my own pedagogical and technological skills, but helped design a “Pedagogy First” paradigm wherein the individual instructor’s strengths were basic to course design. I realize that these days there are more resources (among them instructional designers with advanced degrees and research articles produced by candidates for PhDs in Education), but those do not, by some reverse design, indicate that these things are required.

As the literature has developed over the last decade, much of it written by people who are not teachers and have not taught online, the “options” have become “requirements”, and the possibilities have narrowed into “best practices” (best for whom?) and necessary elements. This creates downward pressure on the creativity of teaching online, stultifying the field and cookie-cuttering our courses. Faculty who want students to focus on content are forced to develop “interactions” which oppose their own pedagogy, common sense, and experience. Helpless in a context they did not create, and for which they are pedagogically unsuited, they are told that not only is the social learning method “required”, but that a team is “required” to help them.

Did I mention I’d stepped back from reading the newest in online ed? There’s a reason for that.

The LMS and the End of Information Literacy

Having worked with the Canvas system deeply for several months, and then worked closely with an online student who needed help at various levels, I have concluded that the underlying philosophy of Canvas (and OEI in California) is to remove the information literacy requirement for online learning.

Canvas’ defaults encourage a simplistic, linear course with step-by-step navigation for all tasks. The features for instructors to customize extensively, have students collaborate, and make grading meaningful, are conspicuously missing. When requested in the community, such features meet with success mainly when they adhere to the basic philosophy of simplicity.

The implication is that any depth must exist within the instructional materials accessed through the system. At the top level, the environment in which the student must work, the danger of cognitive overload is mitigated by providing as few options as possible. It is a clear return to 4th grade “computerized learning”, the kind that takes place in a lab. Pupils sit at stations, and the software guides them step-by-step, requiring them to press as few buttons as possible. With visual and touch-screen interfaces, this is now even easier. Complete a small task, get instant feedback, press ‘Next’.

The fact that such interfaces prevent branching, distributed, or complex learning is considered to be a feature, not a bug. All information is “chunked” for easy understanding and assessment.

Back in the early 1990s, we were all excited about the open web and its possibilities for the exploration of human information. We were able to look up things that had previously been inaccessible, and we developed pedagogies designed to use that easy-to-access information. To do so meant designing our own pathways through the material, to help students turn their study into knowledge.

With the coming of the read-write web, it became possible for users to interact with the software in online spaces. IRC and other forms of synchronous chat had been available, but required some technical knowledge. Web-based interactions, which required little technical understanding, became simpler and easier to use. With the development of private web spaces like Facebook and Google, companies came to control the interfaces, simplifying even further what we needed to know to use the tools, and pruning the content we could access easily.

Although at first there had been plans to teach information literacy as a school requirement, this trend has tapered off because of such ease of use. In many places, information literacy is still articulated as a goal, but is not implemented in any meaningful way. The result has been students who have no idea what to type into Google when asked to find, for example, information about American imperialism in the late 19th century. We are already aware of the challenges of distinguishing between good and bad sources of information, and want students to distinguish between a scholarly source and a pop culture source. But instead of increasing skills, the fear of bad websites has led to banning certain things, through filters in grade schools and syllabus dictates in college. (When I encouraged my student to use Wikipedia to find primary sources, she was aghast, telling me it had been drilled into her head for years never to use Wikipedia for school.)

Increasing numbers of students have no conception of what constitutes a website, or a link, or a browser. With no understanding of how to navigate a complex web page or database, students have become unable to comfortably navigate a complex online course, regardless of the LMS. It is possible that only students with more sophisticated web skills are able to benefit from the learning pathways we design. As instructional designers remove more and more of our responsibility to construct these pathways ourselves, the “best practices” encourage computerized learning goals such as chunking, instant feedback, and tightly controlled pathways at the expense of discovery, integration and community.

While I would prefer, for the sake of our democratic society, a metacognitive awareness of the control exerted on us by our tools, I have to admit the temptation to follow the larger trend. We have successfully trained an entire generation not to think while using an electronic tool. We may no longer be able to expect them to do so for the sake of their education.


Results from 131 students

It’s only taken me 17 years of teaching online to develop a student survey that is both broad enough to cover all my classes and narrow enough to give me good feedback.

Just sharing a few things here. Total students responding was 131. Most students responding were passing the class.

[Chart: class elements results]

They still like my lectures the most, and textbook readings the least. They still like posting their own primary sources.

[Chart: additional elements results]

Hours and hours of work on that Help Page and – no surprise given what they email me about – they don’t use it. They do like seeing the whole course on one page (so I won’t switch to showing only the current week, an option in Moodle) and they like my comments and the audio of my lectures (I’ll read it for you!). The None category is a little depressing….

[Chart: engagement results]

The engagement results are clear, too. They like the lectures and posting their own source. They don’t like reading much. But they really liked what I added this year – the completion checkboxes on the Moodle page. I will be sad to lose those. But note: they like seeing each other’s work, but don’t require contact with other students. I’ve been saying that for a while – collaboration and teamwork in online classes are not always needed. For my class, engagement with the work and posting what you find may be taking the place of “interaction” among students. They can learn from each other without necessarily engaging in forums in response to each other’s posts.