I hope this posting is OK, because it's not about ST teaching as such, but about some of the technical issues I've faced in making videos for my students while classes are suspended.

It was suggested that I make some follow-along videos to keep my students engaged. I've now made 12 classes (and released 8 of them), totalling 15 hours of video. They're free and publicly available from my website, but I'm not linking to them here, because I'm not particularly proud of them, though people can find them if they're willing to search. Here are some lessons I learned, in case anyone else is doing the same thing. I initially relied on a blog post that Kit made some years ago on videos; the main lessons from it were to use good lighting and sound, and to aim for a final resolution of 1280x720.

1. 1280x720 at 25 fps gives a nice clear image if the image is contrasty and the lighting is good, and also makes small file sizes, useful both for storage and upload (it’s what I store my final processed videos at routinely). If you download from Vimeo the data rate will be 2 MB/s or below, so I set that as my final data rate. This is an appallingly low data rate by many standards, but works OK for my videos which don’t have much detail and have little movement. However, it led to problems (see below).
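For anyone budgeting disk space, the arithmetic behind those file sizes is simple. A sketch (the function name is mine; note that megabytes per second and megabits per second differ by a factor of 8, which matters when comparing figures from different sources, so both readings are shown):

```python
# Rough storage cost per hour of footage at a given data rate.

def gb_per_hour(rate, unit="Mbit"):
    """Gigabytes of file per hour of video at the given data rate.

    unit="Mbit" reads `rate` as megabits/s (the usual convention for
    video bitrates); unit="MB" reads it as megabytes/s.
    """
    bits_per_sec = rate * 1e6 * (8 if unit == "MB" else 1)
    return bits_per_sec * 3600 / 8 / 1e9  # bits -> bytes -> gigabytes

# "2" interpreted both ways:
print(gb_per_hour(2, "Mbit"))  # 0.9 GB per hour
print(gb_per_hour(2, "MB"))    # 7.2 GB per hour
```

Either way, a low final data rate pays off quickly across 15 hours of published video.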

2. My space is lit by an assortment of cheap warm-white LED bulbs accumulated from hardware stores over time (LED to avoid heating, important in summer). They give nice, bright, even lighting, suitable for all scenes, but they led to other problems (see below).

3. Initially I thought I could just video myself running a class to the camera. However, it quickly became clear that performing to a camera is different from giving a class in reality – not only are things like adjusting spectacles or tucking in a shirt in the middle of an exercise much more intrusive, but issues of form become more noticeable ("hold your arm vertical", says he, while holding his own up at a diagonal). So I might have to repeat some exercises a few times to get them right, and I therefore decided to make a library of short videos of each separate exercise which I could then stitch together relatively seamlessly into a complete follow-along video. Problem four.

4. The style my students like is the one used by Ailsa Gartenstein, who originally taught many of them: a large number of short exercises (35 or so in a class), rather than just a few longer ones as in standard ST. There is also a big variety, to keep interest. So now I have a "library" of 180 separate exercise videos (of varying quality) which I assemble into the final classes. At first I made my library of 720p videos at 2 MB/sec, thinking there would be no change when they were rendered to the same parameters in the final video (another mistake) – not only were there problems compensating the colour balance (see below), but the re-rendered videos lose contrast in the details. The later "library" videos are 1080p at 10 MB/s, and these issues do not occur.

5. I use two 4K Panasonic cameras running simultaneously, one looking straight on, the other from the side at a 45-degree angle. This way, if I need to change the view for visibility I can select the other camera. The cameras are easily synchronised in time from the waveforms of their sound tracks.
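The soundtrack-sync trick can be automated: the offset between two recordings of the same audio is where their cross-correlation peaks. A sketch of the idea (function and variable names are mine; real tools do this inside the editor):

```python
import numpy as np

def sync_offset(delayed, reference, sample_rate):
    """Seconds by which `delayed` lags `reference`, found at the peak
    of the cross-correlation of the two audio waveforms."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / sample_rate

# Demo: the same "clap" appears 0.5 s later in camera B's audio.
sr = 1000                                   # samples per second
t = np.arange(sr)                           # one second of samples
clap = np.exp(-((t - 200) ** 2) / 50.0)     # a sharp spike near sample 200
cam_a = clap
cam_b = np.roll(clap, 500)                  # same spike 0.5 s later
print(sync_offset(cam_b, cam_a, sr))        # 0.5
```

Any sharp, distinctive sound (a clap at the start of the take) makes the correlation peak unambiguous.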

6. I originally got 4k cameras so that I could record stage performances and leave the cameras running unattended covering the whole stage, and have enough resolution to zoom in on the action in post-processing when I needed to. Therefore they give me enough resolution to zoom in close on an exercise. The cameras were both set to their widest angle, covering the whole area.

7. The first problem was that background sounds (even with AGC turned off) became incredibly intrusive and noticeable in the final videos – even though I hadn't noticed them when making the video (neighbours' dogs, nearby building work, etc.). So I had to find a time of day (early evening) when these were at a minimum. Apart from this, the audio from the built-in camera microphones was good enough.
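Recording at a quiet time of day is the robust fix, but low-level constant background noise can also be suppressed crudely in post with a noise gate, which simply mutes everything below a threshold. A minimal sketch (a real gate adds attack/release smoothing so speech doesn't click on and off):

```python
import numpy as np

def noise_gate(samples, threshold):
    """Mute samples whose amplitude falls below the threshold — the
    crudest form of background-noise suppression."""
    out = samples.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

# Quiet hiss around a couple of loud speech peaks:
audio = np.array([0.01, 0.5, -0.02, -0.8, 0.005])
gated = noise_gate(audio, 0.05)
print(gated)  # [ 0.   0.5  0.  -0.8  0. ]
```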

8. The next problem was to get the different videos to register spatially with each other so that the image didn’t jump around when I spliced the exercises together in the final video. In spite of using guide marks etc I could never set the cameras up accurately enough, but found I could easily register them spatially in post-processing.
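The standard way to register two frames in software is phase correlation: normalise the cross-power spectrum of the two images and the inverse FFT collapses to a spike at the translation between them. A sketch of the principle (names are mine; this is not how VideoStudio exposes it, and it only recovers pure translation):

```python
import numpy as np

def translation_offset(ref, moved):
    """Estimate the (dy, dx) pixel shift of `moved` relative to `ref`
    by phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    peak = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    h, w = ref.shape
    if dy > h // 2:                        # unwrap negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Demo: a frame shifted 3 px down and 5 px left is recovered exactly.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -5), axis=(0, 1))
print(translation_offset(frame, shifted))  # (3, -5)
```

Once the offset is known, cropping or repositioning each clip by that amount stops the image jumping at the splices.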

9. The main problem was keeping the colour balance constant between exercises (done on different days and under different conditions). Even small shifts in colour balance from clip to clip are highly intrusive, and the control over colour balance in my software (Corel VideoStudio X9) is clunky and does not give me the precise control I need. This is in fact the main issue I want to address in this posting.

At first I used my LED lights plus daylight, then realised I needed to keep my light source constant. However, I found that although the light looked a warm white to the eye, the camera often recorded it with a magenta tinge, and this varied from session to session. If I corrected the colour balance of a 720p 2 MB/s video in post-processing, even though the resulting video looked fine, when it was re-rendered into the final video (with the same parameters) there were often very obvious artefacts – colour fringes across the blank white wall at the back, flickering in blank areas. The 1080p 10 MB/s videos do not have this problem when re-rendered, so this is what I now use for my "library" videos, though the final videos are still 720p at 2 MB/s. For this reason some of the earlier exercises have not had their colour balance corrected, so sometimes you will see a very pink or magenta-tinged exercise among the others. Also, I can't always hit on the colour I need when correcting the colour balance, so there are still odd variations (I believe the later version of the software is better at this).
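Where software does give direct channel control, the simplest automatic cast correction is the "gray-world" method: scale each RGB channel so the channel averages come out equal. A sketch on synthetic data (values and names are mine; editors hide this behind a white-balance eyedropper):

```python
import numpy as np

def gray_world_balance(img):
    """Scale each RGB channel so the per-channel means are equal —
    a simple automatic fix for a uniform colour cast."""
    means = img.reshape(-1, 3).mean(axis=0)    # average R, G, B
    gains = means.mean() / means               # per-channel gain
    return np.clip(img * gains, 0.0, 1.0)

# A flat "white wall" recorded with a magenta cast (green deficit):
wall = np.full((4, 4, 3), [0.8, 0.6, 0.8])
balanced = gray_world_balance(wall)
print(balanced[0, 0])   # all three channels now equal: neutral grey
```

Gray-world assumes the scene averages to neutral, which a mostly-white wall satisfies well; a scene dominated by one colour would be over-corrected.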

10. Why should a light that appears white to the eye have a magenta tinge to the camera? Here I needed to do some detective work. White LEDs generate their light from a blue LED chip, whose blue emission also excites a yellow-red phosphor that emits over a relatively broad band (see figure). The mixture of blue and yellow can be adjusted to appear white: for a warmer white the phosphor emission is made more intense, for a cool white the blue component is made more intense. However, this means that with cheap LEDs such as the ones I'm using there is a dip in the spectrum in the blue-green range. Because to our own eyes blue+yellow can make white, we do not see a colour tinge. But cameras have their RGB sensors positioned differently over the spectrum from human RGB cones – the camera's G sensor lies over the dip in the spectrum, so if the colour is adjusted to look white to humans, the camera can see it with a magenta cast. I presume the same does not apply to expensive photographic LEDs, which may – or should – have a more even spectrum. If my software gave me separate control over the RGB channels I'd be able to deal with this, but it doesn't.
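The dip-in-the-spectrum explanation can be made concrete with a toy model. Every curve below is an invented Gaussian, not measured data; the only point it demonstrates is that a blue-green dip is invisible to one set of RGB sensitivities and visible as a green deficit (i.e. magenta) to another:

```python
import numpy as np

wl = np.linspace(400, 700, 301)             # wavelength in nm

def band(center, width):
    """A unit-height Gaussian response/emission band (a made-up shape)."""
    return np.exp(-((wl - center) ** 2) / (2 * width ** 2))

# Toy cheap white LED: narrow blue emitter + broad yellow phosphor,
# leaving a dip between them around 480-520 nm.
led = band(450, 12) + 1.6 * band(580, 60)

# Hypothetical sensitivities: the camera's green channel is assumed to
# sit closer to the blue-green dip than the eye's M cone does.
eye = {"R": band(600, 45), "G": band(545, 45), "B": band(450, 30)}
cam = {"R": band(610, 45), "G": band(510, 45), "B": band(460, 30)}

def rgb(sens):
    """Integrate the LED spectrum against each sensitivity curve."""
    dw = wl[1] - wl[0]
    return {c: float((led * s).sum() * dw) for c, s in sens.items()}

eye_rgb, cam_rgb = rgb(eye), rgb(cam)
# Green response relative to red: lower for the camera than for the
# eye, i.e. the camera records the same light with a magenta cast.
print(cam_rgb["G"] / cam_rgb["R"], eye_rgb["G"] / eye_rgb["R"])
```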

I wonder if people would find these thoughts useful when making their own videos.


[Figure: camera, human, and LED spectral responses — "Camera human LED spectral responses.jpg"]


Excellent. Thank you for sharing, Jim. Resources like this are needed now, and I am sure other ST teachers will benefit.
