Fun with YouTube: the pure embed

I like using YouTube clips for my classes, but I don’t like the clutter: links to other videos when it’s done playing, the title showing at the top, low quality. So I play with the embed code:

<iframe src="//www.youtube.com/embed/…?rel=0&amp;vq=hd720&amp;showinfo=0" width="450" height="253" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

See what I’ve added after the video code, following the ?
rel=0 > YouTube adds this when you deselect the “show related videos” option in the embed code

vq=hd720 > tells the player to show the clip in HD (720p) if it’s available

showinfo=0 > to get rid of the title showing at the top of the clip
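
Put together, with a made-up video ID of abc123xyz, the whole embed looks like this:

<iframe src="//www.youtube.com/embed/abc123xyz?rel=0&amp;vq=hd720&amp;showinfo=0" width="450" height="253" frameborder="0" allowfullscreen="allowfullscreen"></iframe>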

That’s better.

First road test of Hypothes.is

It’s all about annotation, and I’ve been comparing Kami and Hypothes.is. Last semester, I used Kami ($50 for no ads) for students to annotate text in my History of Technology class. I had some success, but I was not happy with its limitations, so this summer I tried Hypothes.is instead.

The students were offered a video tutorial on how to use it. I made a group just for them. The assignment was extra credit — for each of the three classes I uploaded an article for them to read and annotate, replying to each other. Sample instructions:

Extra credit for up to 3% of the grade:
1) Get your own account at hypothes.is. Please use your name as enrolled for the username.
2) Join the test group at
3) Go to
4) Annotate the article with your own responses and answer those of others. Annotations are graded on academic quality, connections to coursework, acknowledgement of others’ ideas, and evidence of understanding of the article.

I had been concerned that they would automatically post in Public instead of in the Test Group, because I could find no way to limit that or point them directly to the group page – the choice is made only via a drop-down menu in the upper right corner. Sure enough, several students posted in Public and missed the discussion going on in the group. I will have to add this to the instructions as well as in the tutorial.

I had thought that analysis and counting their contributions would be made easier by the brilliantly conceived Hypothesis Collector, created by John Stewart. It worked great last night. Unfortunately, when I tried it this morning, it only gave me the posts that had been made as of last night. I simply couldn’t get it to work and had to manually count annotations to assign points. I have been contacted by Jeremy Dean of Hypothes.is about ways to integrate it with Canvas – this might be a huge help next year.
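
I’ve also been tempted to just query the Hypothes.is API myself when the Collector misbehaves. Here is a rough Python sketch of what that might look like; the API token and group ID are placeholders you would copy from your own Hypothes.is account settings, and all it does is tally annotations per user in the group:

# Rough sketch: count annotations per student in a Hypothes.is group.
# API_TOKEN and GROUP_ID are placeholders from your own account.
from collections import Counter
import requests

API_TOKEN = "YOUR_DEVELOPER_TOKEN"   # hypothes.is account settings > developer token
GROUP_ID = "YOUR_GROUP_ID"           # the short id shown in the group's URL

def fetch_annotations(group_id, token):
    """Page through the Hypothes.is search API for one group."""
    rows, offset = [], 0
    while True:
        resp = requests.get(
            "https://api.hypothes.is/api/search",
            headers={"Authorization": f"Bearer {token}"},
            params={"group": group_id, "limit": 200, "offset": offset},
        )
        resp.raise_for_status()
        batch = resp.json()["rows"]
        if not batch:
            return rows
        rows.extend(batch)
        offset += len(batch)

counts = Counter(a["user"] for a in fetch_annotations(GROUP_ID, API_TOKEN))
for user, n in counts.most_common():
    print(user, n)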

I am considering providing my next class textbook, The American Yawp, with my own annotations. The book, an open textbook, has a number of faults and omissions that would make for great learning opportunities for students. My own annotations would be like mini-lecture commentary, glossing the text. But for some of the summer articles (one out of three of mine) in Hypothes.is, the section one highlights is quoted in the annotation without spaces, which is ugly. Also, there is little color or design in the annotation box to alert the student to the presence or unique character of an annotation.


I think Kami looks better for this, and then I will export my pages as PDF for the students.


I had originally thought I could use The American Yawp’s own affordances as an updated online text, but I just got an announcement that, ironically, their current update will be integrating Hypothes.is. Each page they serve will then come up with an invitation to annotate publicly. While this might or might not help students with the text, it provides an additional way for students to go wrong besides the Public-or-Group problem, so I don’t think I’ll be working off the Yawp html pages regardless.

Don’t get me wrong – the business model of Hypothes.is is wonderful. They make a real effort to reach out, adapt and update. In fact, that’s one of the reasons for this post – to provide input that I hope will continue its improvement as an open-source product made by people who really understand the value of text annotation.

Adventures in Accessibility: Part I

Yes, it’s a pain. Yes, it stifles our creativity. No, it doesn’t make sense to pretend that we can make every online learning artifact accessible to everyone with any type of disability, be it physical, cognitive, emotional, socio-economic, or educational. But we do it anyway. Not because we believe in the dogmatic, administrative, litigation-phobic approaches of universal design, but because it’s cool to do it, when we can.

So I’m taking a closer look at some of my multimedia, to see what can be made more accessible to people with certain types of issues, or, better, to be made more interesting and comprehensible to all students.

The first discovery: YouTube’s captioning is so much better than it used to be! Log in. Upload your video. Wait overnight (or sometimes just a few hours). You can even set the video to private. YouTube will create captions as best it can. Select the CC button to see the captions in a sidebar, click Edit, and fix them up. You can set the video to pause while you type.

Oh, you say you have a transcript? Perfect. Just upload your video and select the option to transcribe instead. Paste in the transcript. YouTube will set up the timings as best it can.


Sliders are now available to move each caption around on the clip’s timeline, and the audio waveform shown below helps you line things up. You can insert new caption segments. Then save.

But wait, it gets better. Don’t like YouTube? Want to serve your video elsewhere? Download the captions using the Actions menu (.srt format is pretty standard). Then you can upload everything somewhere like Vimeo or Dailymotion for better video quality and no ads.
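
If you’ve never looked inside one, an .srt file is just numbered caption blocks with start and end timecodes. A made-up two-caption example:

1
00:00:01,000 --> 00:00:04,200
Welcome back. Today we look at the Magna Carta.

2
00:00:04,700 --> 00:00:08,000
King John sealed it at Runnymede in 1215.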

The Value of Proximity

Togetherness is a good thing.

It’s pretty clear, even in recent studies, that we want to present information to students in “multiple modalities” (text, graphics, video). But there have been a few studies discussing the placement of “learning objects” (text, video, images) on a webpage, and how that placement relates to learning. A 10-year study at UCSB by Richard Mayer and colleagues focused on how best to use audio, text, video and other media elements (1). They discovered that how media elements are handled on the screen impacts learning.

Improved learning resulted from adding graphics to text, and from adding text to graphics. But “[t]he trick is to use illustrations that are congruent with the instructional message”, rather than for effect or entertainment.
Interestingly, a conversational tone and the use of an “agent” (a talking head video or animated cartoon), even just the voice, also helped learning.

Explaining graphics with audio also improved learning, but too much was overload: explaining a graphic with both audio and on-screen text decreased learning, and any gratuitous or dramatic elements added only to get attention caused distraction and also decreased learning.

Putting the issue of relevancy aside for a moment (obviously the text and graphics should both be trying to further the same instructional goal), I think the important issue is proximity. If there is a graph at the top of the page, but the graph is explained with text three paragraphs later, I don’t think it will help.

Proximity is critical, because the relationship between objects that may be obvious to instructors may not be obvious to students.

In my online lectures, I have always put illustrative images next to the appropriate text. I remember in the late 90s repeatedly looking up a cheat sheet my mentor, Kathleen Rippberger, made showing me how to write HTML to wrap text around an image (thank you, HTML). Over time, I came to embed videos, then YouTube videos, also within the lecture page (thank you, embed code). This year, I began embedding the primary sources right into the lecture (thank you, iframe).

The desire to keep things together even caused me to explore putting a lecture and the corresponding discussion together on the same page, which I could do using iframes in Moodle. But the effect is still not seamless, and it looks awkward on mobile devices.
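
The mechanics were nothing fancy: just an iframe in the lecture page pointing at the forum. The address below is hypothetical, but it’s roughly the form a Moodle forum URL takes:

<iframe src="https://moodle.example.edu/mod/forum/view.php?id=1234" width="100%" height="500" frameborder="0"></iframe>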


If we extend the principle of proximity to the defaults on a typical learning management system, however, we will be disappointed. I despair as I look at Blackboard’s default menu, with everything separated: “course materials” here, discussion forum there, tests way over there. It was this problem that led our instructors to create the main page as an interactive syllabus. But even there, the page is a list of links.


The goal of proximity explains why so many instructors try various forms of “modules” and “units”, which seem to me like online versions of the paper packets we used to use in grade school.

Proximity thinking has come a little late to online education, but it needs a place at the table. The delay has been caused not only by the LMS, but by all the reasons the LMS is popular, including deceptive plug-and-play functionality and the ongoing difficulty of creating structured learning experiences if you aren’t a web-head. Time to consider proximity as its own design concept, within the LMS if necessary.

(1) Ruth Clark, Six Principles of Effective e-Learning: What Works and Why, Learning Solutions Magazine (2002)

Another voice for history

To take students through the text of a historical document, I downloaded a sample UK voice called Peter from Infovox (free for 30 days, then $20 for the one voice). It works through my Mac’s Universal Access system. It’s quite awkward to have it read just text, since even at high-threshold settings it wants to read aloud all the computer commands and window changes. By putting the Magna Carta into a TextEdit document and recording with Snapz Pro, I did this:

I also tried a UK male voice at Cepstral but I couldn’t get it to behave properly.

This approach might be more effective with bouncing ball or highlighting, but I’m not sure.

Animated lecture in context

What if I could give a bit of history lecture “on location”?

Continuing my look at animation, I downloaded Tellagami (which I first read about on Greg Kulowiec’s blog) to my iPod Touch and was able to do this:

It saves as mp4 to the Tellagami website, and their Share button gives embed code. Or I suppose one could download it using one of those sneaky browser extensions.

The limitations were that I had to upload photos to the iPod, and that the audio was a little dicey – I had to keep the Touch a couple of feet away from me to avoid static. Oh, and it’s limited to 30 seconds!

I don’t have an iPad so it hadn’t occurred to me to look at apps, but now I will.

And next I hope to borrow an iPad to try Explain Everything, the other app from Greg’s post.