Sustained argument

The Economist’s language column, Johnson, recently took up the extent to which artificial intelligence can compose prose, and claimed we need not fear the “Writernator”.

The reason? During an experiment:

Each sentence was fine on its own; remarkably, three or four back to back could stay on topic, apparently cohering. But machines are aeons away from being able to recreate rhetorical and argumentative flow across paragraphs and pages.

Well, most of my students can’t do that either.

So while the article was trying to reassure writers that they need not fear losing their jobs to a computer, I saw quite another angle, a trail of thought that goes something like this:

Computers cannot create sustained arguments. Neither can most of my students. And only the best journalism seems to bother. Educated people can both follow and create sustained arguments. But to whom are they writing? We have many voters and public figures who are anti-intellectual, and only interested in the realms of fear, emotional expression, and personal identity. They have no interest in sustained argument. The media reflect this, with the emphasis on factoids. Articles have gotten shorter, even in journals like The Atlantic. Computers can’t do it, and computer programming is a reflection of ourselves. Perhaps the very idea of sustained argument needs to be defended. But how can one defend it except with sustained argument, and a reliance on the very intellectualism being increasingly rejected?

If I had read the article 20 years ago, I might have nodded along, not because computers weren’t writing then, but because I felt that sustained argument was a norm. It’s perfectly obvious to me why it needs to be preserved, as obvious as the preservation of free speech, democracy, respect for the opinions of others. These things aren’t obvious anymore.

Today on BBC Radio 4’s PM program, Evan Davis reported on Jacob Rees-Mogg’s insensitive statement that he would have got out of the Grenfell fire by ignoring the fire authority’s orders to stay in the burning building. He interviewed Andrew Bridgen, Conservative MP for North West Leicestershire:

Davis: Do you think he meant to say that he thought he would not have stayed put?

Bridgen: That’s what he meant to say…

Davis: And that in a way is exactly what people object to, which is he’s in effect saying, I wouldn’t have died, because I would be cleverer than the people who took the fire brigade’s advice?

Bridgen: (Sigh.) But we want very clever people running the country, don’t we, Evan?

Well, I’m not sure people do. In fact, I’m pretty sure many people don’t see why having educated people run things is a good idea. They think that educated people run things to the detriment of uneducated people, and sometimes that is true.

People, Bridgen noted, tend to defer to authority, as they did in the case of the fire.

When people trust these authorities and the authorities fail, there is popular anger. At some point people will ask whence this authority derives. One can say “from your votes”, but they feel that isn’t completely true. The response is to elect uneducated populists.

Intellectualism, and education itself, may have a much tougher advertising campaign to run than we suppose. The old norms are suspect, and assumed ideas need a cogent (or, better, non-intellectual) defense. I don’t think saving writers from computer-generated text is quite going to do that.


The internet’s not for learning?

I confess to being depressed by a summer article in The Economist, “The second half of humanity is joining the internet” (June 6). In the spirit of Thorstein Veblen’s critique, poorer parts of the world are getting on the internet*, mostly through mobile phones. And even fewer people there than in the developed world are using this online time to learn things.

The Economist article did not specifically count online courses, only “education information/services”, but the use is pretty low. And it likely includes looking up something on Wikipedia so you can win a game, or checking the weather.

People everywhere do the same thing: use the internet mostly for “timepass” – passing the time by communicating with friends and family, playing games, and watching videos. I’m not saying these things don’t cause learning. They do. But the purpose is entertainment and emotional satisfaction, not becoming an educated citizen.

It just serves to remind me how truly wide the gulf is between those who value education for its long-term benefits, and those who just want to pass the time. Are the people who get satisfaction from intellectual challenges rare? If so, will smartphones make them even rarer?

Because that’s the crux of the issue. When all this internet-y, web-by stuff began, we educators were all excited. Vast libraries of information! Massive open online classes! Anyone can learn anything from anywhere!

I’m not anti-entertainment. I’m a huge classic movie fan, and I watch a lot of TV programs where one character calls another “Inspector”. I read modern novels just for fun, or to get to sleep. I’m not always working, always teaching, or always learning.

But I am again reminded of the old Zits cartoon:


The internet relies on huge servers and uses tons of resources; it only seems “clean”. Mobile phones contain rare earths, servers run so hot that some are sited near the Arctic, and power plants chug away so we can keep our power strips full of chargers. It’s odd to make that sacrifice just so that people can play Fortnite from anywhere.

Perhaps our goals were too utopian. The article points out that our vision of the subsistence farmer checking weather on his phone to save his crop doesn’t really happen. But why shouldn’t everyone use the internet for whatever they like? And can’t we learn wonderful things on our own? Some little boy somewhere is watching a Zeffirelli clip on YouTube and is inspired to become a great set designer. Some little girl is watching the US women’s soccer team and will be a great player. Is formal education a more important use of technology?

After two decades online, however, I am saddened that there hasn’t been a little more educational uptake and a little less “Whasup?”.



* I used to be very careful to distinguish the web from the internet: the internet is the entire online structure, while the web is the world wide web accessed through a browser. The web had become the most-used aspect of the internet other than email, but the recent dominance of the “app” and of sites requiring log-in is closing it.

Wells and the moon shot

On the 50th anniversary of the moon landing, I picked up my copy of H.G. Wells’ The First Men in the Moon (1901), and found these paragraphs:

. . . Then with a click the window flew open. I fell clumsily upon hands and face, and saw for a moment between my black extended fingers our mother earth—a planet in a downward sky.
   We were still very near—Cavor told me the distance was perhaps eight hundred miles and the huge terrestrial disc filled all heaven. But already it was plain to see that the world was a globe. The land below us was in twilight and vague, but westward the vast gray stretches of the Atlantic shone like molten silver under the receding day. I think I recognised the cloud-dimmed coast-lines of France and Spain and the south of England, and then, with a click, the shutter closed again, and I found myself in a state of extraordinary confusion sliding slowly over the smooth glass.
   When at last things settled themselves in my mind again, it seemed quite beyond question that the moon was “down” and under my feet, and that the earth was somewhere away on the level of the horizon—the earth that had been “down” to me and my kindred since the beginning of things.

Knowing the ending

I joined many in celebrating the 50th anniversary of the moon landing this week, attending the only San Diego screening of Apollo 11. Todd Douglas Miller’s documentary was made by recovering and editing many hours of audio and video recorded (usually separately) during the mission. The film was delightful at many points. I had heard the BBC Radio 4 interview about how it had been made, and smiled every time they aligned the audio with the video of the headset chat between mission control and the astronauts. I gleefully recognized the “go-go-go” sequences as tributes to The Thomas Crown Affair, which came out the year before the mission (1968) and popularized split-screen cinematography. I even understood how conspiracy theorists could think the moon was a movie set (a la Capricorn One), because it looked so unreal. But even more impressive was the audience in the movie theatre. The movie-goers responded to the film as it played, and applauded at the end, and I remember thinking, “this is strange — we all know the ending”.

I had wondered about that going in. How could the film be suspenseful when, unlike some of the other space missions, we know that the astronauts land, walk about on the lunar surface, and return safely? And yet here was an involved audience, and a geeky audience too — who else would spend their Saturday afternoon at the multiplex watching footage of the moon landing? The film was made for this group — there was no narration that wasn’t primary at the time (news commentary, control-room conversation), and little explanation of what was happening. Clearly the audience played along as, for example, the line drawing of the ship rotated to latch onto the lunar module. They knew about all this, but laughed at the astronauts’ jokes, hmmm’d contentedly at Armstrong’s “one giant leap for mankind”, and held their breath during the re-entry into the atmosphere.

We live in a world now where it’s very difficult not to know the ending, even if you haven’t seen the film or read the book. “Spoiler alerts” are heeded mostly by purists. We see the trailers of the movie, and we know if it’s a comedy or drama. Everything is reviewed, in print and on the web, and among friends on Facebook and Instagram. Even if there’s a twist, we know there will be a twist, just maybe not what it is.

It shows the best in human nature that we are willing to pretend, to suspend our knowledge. It’s as if the ending no longer has the responsibility of carrying the meaning of a piece. Instead, the story itself is the meaning, how it is told, or even the fact of its being told. It’s the opposite of cynicism, even as we live in a cynical world.

And I can certainly be cynical. I am not happy that the theatre didn’t show the film in their IMAX room (Lion King seemed to be more important), or that the Reuben Fleet Space Theatre IMAX isn’t showing it at all (WTF?), or that no one seemed to take advantage of the fact that the anniversary and Comic-Con were happening simultaneously (where do they think all those sci-fi geeks come from?). We have multiplexes all over the place and very few movies worth seeing — why was it just one showing in one theatre in the county?

I can also be cynical and patriotic at the same time. This was a huge American achievement — why is the BBC doing more coverage than American media? Why can’t we spend more public money on space exploration? I lived through the space shuttle years (I even went to see it land at Edwards Air Force Base). I assumed that the shuttle would always run and just get better and better, not stop. We shouldn’t be leaving space to Elon Musk and private money. In 1968 we funded the space program and the Great Society at the same time, so don’t tell me there’s no money. And if everyone wants to make us great again, what better than the space program to do that?

But even my discontent was overcome by the actual history, and an excellent film about the main events. Since 1969, people who remember the moon landings look at the moon differently. There were young people at the film who now look at the moon differently too. We went there, they think, for real. Not on a video game, not CGI, but for real. I don’t know the ending, but I have hope about what they’ll do with that feeling.


A most dangerous game

A recent Economist article queries the simultaneous rise in general happiness and in votes for populist parties. How can people who say they are happier, and who have better jobs and better lives, vote for parties that vow to destroy current systems and cause massive disruption? It seems counter-intuitive. Doesn’t history tell us that it is the misery of people, as in Germany in the 1920s, that leads to the rise of dictatorship and the undermining of rights? Why would anyone benefiting from a system vote to overthrow it?

I would like to suggest that the issue is virtuality, and the gaming mentality engendered by virtual activities.

In a world where you contact your friends in a virtual space more than in reality, where your car beeps because you are too close to another car, where Alexa reminds you of your appointment at 10, and where you can use an app to get your groceries delivered, the virtual nature of life has become immersive. It is not simply a matter of too much screen time. It’s the transition to perceiving the real world like it’s on a screen, with you holding the controls. The virtual has overcome the reality.

I live in California, land of automobiles. As cars became more electronic, I noticed drivers behaving more erratically and aggressively. I saw people driving as if they were in a video game, where the other cars were merely obstacles to their goal. When cell phones became a distraction, the self-centered driving behavior became even more marked. It wasn’t just more erratic and dangerous due to the distraction — with or without phone in hand, drivers took even less notice of what happened beyond their own vehicle. I’ve seen driving behaviors that demonstrate a disregard of the fragility of pedestrians in crosswalks, intolerance toward disabled drivers, frustration at people who don’t get out of the way quickly enough. In this state, where turning right on a red light is legal if conditions are safe, you’d better do it or you’ll get honked at, whether conditions are safe or not.

These kinds of behaviors are what we see in games. When you know you are secure, you take more chances, and behave more aggressively. You stockpile your money and weapons in Assassin’s Creed, then you go on your hunt. As far back as The Oregon Trail, the idea was to get set up properly, then begin your dangerous journey, and take your chances. You are encouraged to think you’re living your own Odyssey. The other characters in the game are only there as foils to your individual character. They were created only to make your goal more difficult to obtain. Of course you honk at them.

As one sits at the computer interacting in virtual space, the people with whom one interacts also seem virtual and unreal. They aren’t really in our space. We can turn them off by turning off our computer or our phone, and go do something else. Politics comes to us in a 24-hour news cycle that tends not to distinguish between the significant and the irrelevant, and it often comes to us online, on a 2-dimensional screen. The party debates, for example, can seem like a TV show, sometimes interesting and other times worthy of the Off button. But they don’t seem like anything very important.

So now people are interviewed, and say they are happier and more employed. Yet they are voting for populist parties that channel anger and aggression against others. These aren’t opposites: they are normal gaming behavior.

It’s more fun and more dangerous to join angry groups on Facebook, especially those that provide a sense of belonging at the same time. And one can happily support populist parties because it feels like taking a chance, doing something risky — that means you earn more points. It’s similar to the increased disconnect between your car and the environment surrounding it. The perceived distance between the individual and government combines with the actual distance between oneself, sitting at a computer, and the reality of how that government affects your life. Politics is just a game you play in groups, the ultimate MMOG. It’s why it makes no difference whether the current government is fulfilling its promises or not — you support it if that’s the team you’re on.

We feel empowered sitting in our captain’s chair, the computer under our control. We can help our team. Click! I can post an outraged comment. Click! I can answer a poll. Click! I can make a small donation to a party that says it will give people like me an easier life. I don’t click because I’m unhappy, because I’m destitute. If I were really destitute I wouldn’t have the time or the means to live in the virtual world. I click because it all feels like a game. What’s the harm — it’s just clicking things, posting words in cyberspace. That little frisson of excitement is because I know it is real, but as user7864 I don’t have to be individually responsible if I don’t want to be.

While there are articles on gamifying education, and on the effect of games on the human psyche, I’ve seen little on the gamification of social and political interaction engendered by our electronically connected world. And I realize my point of view could be seen as an argument against online voting, which would only seem to increase the disconnectedness and virtuality of political participation. Since we are already immersed, however, I don’t see the difference in voting virtually. Perhaps my click will make something better happen. At least I’ll get more points.

The annoying web

Those of us who recall education, conversation, and research before the habitual use of the web often wax eloquent about all the affordances the web has given us. We can look up facts in seconds, engage in research from our sofa, video-conference with people in real time. It’s amazing!

We also know that things get lost with any new technology. It’s one of the major themes (well, the major theme) of my History of Technology class. We’ve seen that everything from real-life conversation, to civility, to shelf-browsing has suffered in ways connected to the advent of the web. I am considering examining these in some posts.

So here’s one. I have a colleague who researches American patent medicine in the late 19th century. Today I’m on Twitter, and see that A. J. Wright has posted a patent medicine advert. I’d like to share this with my colleague. In the old days of email, I might have sent it to him in an email. But now I won’t share it with him at all.

Why? Because the likelihood of it having been shared a zillion times, and of him having seen it already, is very high. If I put the name of the product (H.R. Stevens’ Family Balsam Familine) into Google, I get hits right away:

And there are more links below to sites like Rochester University, the Smithsonian, even eBay. My mind conjures up a Pinterest page of bunches of similar ads, and a quick search proves that yes, everybody and their mother saves images of patent medicines.

I am intimidated by the ease of finding more examples, so I’m less likely to share one. My “discovery” has been diffused by the commonality of the find, by the ease of access. Despite the fact that there are still areas of mystery (what is Familine made of?), I’m too deflated to care. It’s like that scene in Summertime where Katharine Hepburn discovers that the supposedly unique Venetian glass she’s been sold is just a cheap souvenir. In her case, she later discovers that hers is an original and the others are copies, but in my case they are all cheap souvenirs of a few minutes’ web searching. Hardly something one would share with a respected colleague.

It’s odd that in the web world, which seems so to value “sharing”, something can so instantly become not worth sharing.

Post-materialism and politics

I read in The Economist, in an article called “Will Europe’s Green parties be the new leaders of the political left?”, a helpful way to view contemporary political polarization. Here’s the section:

If I’m understanding correctly, the reason that the far-left and far-right are becoming popular is because both are founded on these “post-materialist values”.

I understand that postmaterialism emerged in the 1970s to help explain the self-actualization of the younger generation, who valued personal experience and happiness over economics (because they could afford to). It just had never occurred to me that such values were the foundation of thought for the current political edges. The far-right is post-materialist in that it values nationalism and a homogeneous culture above material considerations. The far-left values environmentalism (Europe) or personal identity (America) above material considerations.

It does make sense. The conflict between Marxism and liberalism marked the 20th century. I had always thought of the two philosophies as diametrically opposed. At the same time, I teach that there are historically two forms of liberalism: the political (Locke, Jefferson, et al.) and the economic (i.e. free trade). In contemporary times, it is possible to see globalism as the updated form of economic liberalism. But what is the updated form of political liberalism?

It’s not what is called “neo-liberalism”, obviously — that is also primarily economic, and retro because it brings back the initial forms of liberal capitalism that treated workers like dirt and believed in a nasty form of natural selection that sacrificed humanity.

There’s a gap here that is neither right nor left, but is more basic. I’ve said for years (to anyone who would listen) that if liberal democracy is to survive, it needs to come up with a damn good argument as to why it’s the best system. It’s rested on its laurels for far too long, to the point where during my own lifetime it was just assumed as best, the system that other nations would implement if they could just get more advanced. It’s time to think like Jefferson and, as he said about the Declaration of Independence, “place before mankind the common sense of the subject, in terms so plain and firm as to command their assent”. If liberalism is materialist, someone had better justify why that materialism matters. If democracy is good, someone must tell us why.

In the absence of a proper, cogent, meaningful, and plain argument for liberal democracy, the fringes have more space. And these fringes don’t even acknowledge arguments for liberal democracy or Marxism, because both are based on materialism.

Certainly Tr*&p voters do want material things (jobs, money). Tr*&p’s government is not only reneging on its promises to provide material prosperity, it is actively making the finances of poorer people worse, and yet his supporters won’t hear a word against him even as he steals their food. That can only mean that a deeper philosophy is at play which is post-material. The wall (as a symbol of these values) matters more than food and money.

That may seem strange except for a 2011 study by Suzanne Mettler of Cornell University. I’ve been thinking about it for years. In 2008, 1,400 people were asked whether they had ever used a government social program. 57% said no. Of those, 94% had. They’d received food stamps, social security benefits, unemployment compensation, student loans, etc. This gave me the idea that many people do not understand the role of government at all, much less how they benefit from it. It may seem illogical that people would, for example, push for budget cuts to programs that benefit them (the original focus of Mettler’s study). But it makes more sense if the problem is principle rather than facts.

This post-material focus also helps explain the tendency, noted in this piece in the Washington Post, for people to believe untrue things. It’s confirmation bias gone amok. The woman in the story believes the “fake news” about the two women in red because it “was true to what she knew of their character”. The overall idea is far more important than the actual facts because it’s the principle that matters, rather than what happened. Facts carry the taint of the material.

So if Emilie van Haute is right (and I suspect she is, since she’s studied this stuff for years), there is no way to get everyone back together and stop the polarizing forces that are at play. It might be better to acknowledge the deep attachments people have to things other than money and goods, and work from there.


Doing grades well

There has been quite a bit posted lately about grades, grading, and not grading (or marking). I read that grading practices:

  • play into systemic inequalities,
  • undermine compassion for our students as people, and
  • are inadequate to assess learning.

I’ve been reading, for example, the tweets of Laura Gibbs. She is one of my heroes, bravely creating innovative teaching practices despite the use of Canvas, and even in temporary despair creating a community response that is both empathetic and practical. I’ve also come upon a number of things from fall of 2017 (not sure why then), such as Arthur Chiaravalli’s The Gradeless Garden, where he questioned whether getting rid of grades is enough. John Warner wrote in Inside Higher Ed that not only do we not need automated grading tools, but that removing grades gives students better ownership of their work. And Jesse Stommel’s Why I Don’t Grade also dates from then.

Lately the issue of grading (and ungrading) has emerged again in my Twitter feed, and more professors and teachers post that they’re “giving up” grading.

I think something’s getting lost here, and I don’t just mean the continual marking. I’ve considered non-grading for several years and have adopted self-grading for portions of my classes (but not all assignments). One obvious problem is that our institutions require us to assign a final grade, so one question is how we do that without a record of grades to justify that final mark. Laura Gibbs, for example, does it with accumulated points — the student chooses how much to do to get that grade. Grading contracts, which I learned about years ago from David Cormier and tried for my Honors class, use a similar technique.

What’s being ignored, in educational reform as in other places in political and social life, is the idea of doing things well.

Grades, as the name implies, rank student work according to a set of standards developed within a discipline. At least, that’s what they’re supposed to do at the college level. Professors, good professors, agonize over grades. I’m not the first to sit with my finger hovering over the B or C drop-down, wondering: that student had to have surgery, so shouldn’t I give her a break? But she missed the first three assignments, while another student, who came to class between chemo visits, managed to do it all.

Obviously, I realize that the quality of student work is affected by outside factors and institutional hierarchies, but it’s my job to make sure that the inside factors encourage good work. I’m supposed to do my own work as a professor, and do it well.

Grading, to my mind, is one of those things that should be done well. It should be fair, but not heartless. Standards and expectations should be transparent. There should be opportunities to improve work, and the work should be assessed by an expert (me). Not everything has to be graded by me, which is why I adopted self-graded assignments despite the abuses that occur. But the final grade in the course should reflect the overall quality of the work that has been submitted.

So within that context I’d answer the current trendy complaints about grading as follows:

Grading practices play into systemic inequities

Educational systems are based on unequal results. That’s what grades do – they grade. They let the student know where their work ranks in the context of other work within the discipline at that particular stage of education. One must still tackle the evil of the exclusivity of opportunity, which is quite different from results. Everyone should have the opportunity to undertake college studies, but not everyone will succeed. The system is supposed to be a meritocracy, and I know that’s a bad word these days, but it shouldn’t be. Merit is simply another word for doing a job well. Those who do academics well get high grades. Those who don’t get low grades, so they have time to decide where else they should be, and what they should learn, to have a successful, meaningful life.

As a side note, the educational systems (at least public systems) are supposedly designed to maximize chances to provide opportunity. I work at a community college, which is open access. There is no standard for admission. For the last 150 years or so, scholarships and endowments and aid have been designed to provide access for people who need them. All of this is done to find those diamonds in the rough, the academically suited individuals who would otherwise be excluded. It is not designed to force people into academic molds that do not suit them. But even then, anyone is welcome to come and learn. Not everyone needs grades.

Grading undermines compassion for our students

This is where the idea of grading students is the problem. I’ve seen it technologically embedded into systems: “student grade”, “assign the student a grade”, etc. We don’t grade students; we grade work. Judging other people, particularly people whom you know only through one small life window, is wrong. I have had students say to me, “I hope you don’t think less of me because I did a bad job on this paper.” Of course I don’t — what on earth gave them that idea? Well, years of school where the grade was used to represent them, when someone punished them for poor grades, when they were called a “D student”.

There are no D students. But there is D work. And there is a D that goes with compassion, that says, I’m sorry but this work wasn’t up to the standard, and here’s why. Please let me help you as we go through the course. Let me find you the services you need. I’ll sit in my office and listen to you cry even when I’m supposed to be at a meeting. What I don’t want to do is change your C to an A because you need it to make your family proud or because you really need it to get into another class. If I do this, I am not doing my job well. I’m doing it poorly. I know it and so do you. That doesn’t mean I don’t care about you.

Grades are inadequate to assess learning

Of course they are. I’m not even sure it’s possible to assess someone’s learning, but it certainly isn’t possible when my class is just one discipline, at one level, in a limited area of study. Again, the grade only assesses the quality of work. And lest we get all uppity about education being about process instead of product, it is often an invitation to unfairness to grade only a process. Most disciplines require products of some sort.

I do everything I can to create an environment conducive to learning, featuring a good balance of freedom and structure. I individualize where that makes sense, and standardize where my experience tells me that works better. And yes, I completely understand that my students come to me adversely affected in many ways, by racism or mental illness or poverty or having their confidence undermined. My intention is to privilege an environment where it is possible to put that aside for a bit to allow the expansion of the mind.

In short, I think we ask too much of grades. They weren’t meant to make up for social inequality, or to symbolically represent a student as an individual, or to evaluate the learning process. They’re meant to discriminate and inform, to rank levels of work. We’re required to assign them, as social signals as much as credentials. They signal the level at which someone is good at school.

By all means, we must change the system if it isn’t doing what society needs it to do. Educational reform has been around as long as education. But we shouldn’t place the entire load of unfairness on grades. We should instead try to do them well.


Surf report

If you’re from around here, you know what a surf report is: “moderate waves today, let’s call it waist-high,” à la Scott Bass on KPBS radio.

This is a report of today’s web-surfing, which is kinda different. Sometimes it’s piled a lot higher than my waist, but today I learned a lot, much of it triggered by Twitter posts. I don’t think I’m the only one who uses the “like” heart to file things for later so I can find them again.

History Assessments

Except the first one. Somehow I found the Stanford History Education Group and their Beyond the Bubble assessments. I’m not sure why I’ve never heard of this, but it’s a collection of items for teaching U.S. History. While geared toward the high school AP crowd, the method here is quite useful for college history. The primary source is embedded in the assessment. So, for example, there might be a newspaper engraving of a protest from Harper’s Weekly, then a short list of facts related to that engraving, then open-ended short-answer questions. Sometimes these ask students to assess the veracity of the document itself in light of the other facts; sometimes they ask what the source tells us about the era.

These are short (usually just two short answers) and there’s a rubric with each one, indicating the level (proficient, emergent, basic) of various student responses. Some even include sample student answers that one is likely to see. Although undoubtedly intended to be used solely by the instructor, it might be interesting to give the rubric to students and have them analyze their own work!

The site has many assessments that a teacher could download, but it was their design that gave me ideas, because I could create my own assessments for any primary source I have.

And it was kind of eerie that I had just changed all my Learning Units to be inside the assessments. I must be very trendy in terms of design!

Cycloramas

Next, I found a serious gap in my knowledge of the history of media. A tweet by Civil War historian Lisa Tendrich Frank led me to a Smithsonian Magazine article on the restoration of the cyclorama in Atlanta. Apparently, during the 1880s, cycloramas were a huge draw as entertainment. Painters created 360-degree paintings, attached them to the walls of a circular building, and people would come to experience them. The article notes a scene might have a dirt floor and some trees to add a reality-inducing effect.

Beginning in the 1880s, these completely circular paintings started appearing from half a dozen companies, such as the American Panorama Company in Milwaukee, where Atlanta’s canvas was conceived. APC employed more than a dozen German painters, led by a Leipzig native named Friedrich Heine.

Half a dozen companies? How could I not have known about this? This isn’t just virtual reality; it’s late 19th-century entertainment for the people. The closest I’ve gotten to in-the-round entertainment was the film they used to have at Disneyland, America the Beautiful, a movie made with multiple cameras that surrounded you. Yeah, I know, in days when the Google truck drives through your neighborhood this may seem archaic, but it was very cool.

So now I have a whole research area to discover.

Paratexts

Can I use this word in a sentence? It shouldn’t be new to me: it’s a word I keep bumping into, but somehow it never entered my thinking as something I could use.

A tweet by early Americanist Michelle Orihel sent me to Digital Paxton, and reading the post I had an Aha! moment. Advertising and editors’ notes and issue numbers, as included in Victorian periodicals, would be paratexts! I may not have a theory, but I at least have a structure, an interpretation, a word I can use for what these types of things are.

Some days it’s enough to learn one new useful word.

Blackmail

The last item for today was a piece of email spam. Yes, I know you’re not supposed to open these, but there was no attachment and I decided to read it. I found it fascinating.

The title was:

Security Alert. lisa@lisahistory.net was compromised. Password must be changed.

The email went on to explain that my account had been hacked, my information and surfing habits downloaded, and they wanted money, paid in Bitcoin. The blackmailer explained how s/he got access:

How I made it:
In the software of the router, through which you went online, was a vulnerability.
I just hacked this router and placed my malicious code on it.
When you went online, my trojan was installed on the OS of your device.

I noticed that there aren’t any contractions where you’d expect, suggesting this person does not speak English natively. The OS of my device?

They also claimed to know that I have pornographic habits:

A month ago, I wanted to lock your device and ask for a not big amount of btc to unlock.
But I looked at the sites that you regularly visit, and I was shocked by what I saw!!!
I’m talk you about sites for adults.

I want to say – you are a BIG pervert. Your fantasy is shifted far away from the normal course!

There’s a normal course for the viewing of pornography online? I had no idea. But that explains why so much money was being requested.

I’m know that you would not like to show these screenshots to your friends, relatives or colleagues.
I think $701 is a very, very small amount for my silence.
Besides, I have been spying on you for so long, having spent a lot of time!

Wait, $701? Cheap at twice the price!

After payment, my virus and dirty screenshots with your enjoys will be self-destruct automatically.
If I do not receive from you the specified amount, then your device will be locked, and all your contacts will receive a screenshots with your “enjoys”.

I guess we’ll see…

(Discovered after posting: turns out this is a known spam scheme, and I should dedicate exactly as much worry to it as I already have. So that’s five things learned online today!)