Final Project Proposal


-A live camera will be pointed down at imagery placed beneath it.

-The live camera will display imagery from a box that I will be building, with paper cutouts of each of the three scenes represented in my animations so far.

Phase One

-The box will be placed underneath the camera. Three scenes will slide in and out of the box, revealing different environments.

-These scenes will move from the mountains to the plains to the city.

-Paper contraptions within the box will reveal different elements of each environment (The sun will come out, a tractor might cross the plains, etc.)


Phase Two

-The actions on the box will be repeated, only this time, the screen will display animations on top of the live feed

-Pre-rendered animations with alpha channels will interact with live manipulation of the box

Phase Three

-I’m imagining the loop being carried out two more times, with the animation getting crazier and more convoluted with each go-round as the cycle continues.


Open Questions

-Sound. I don’t want to be sitting in silence for this whole thing, but I don’t know yet what to use for sound.

-Exactly what will happen with the paper display. I’m hoping that what happens in the paper world will trigger the animations (e.g. the sun coming out and melting the mountains, a harvester popping up and triggering the wheat transformation), but exactly what that will look like will depend on what I can pull off with paper.

-The ending. 

Live Imaging Midterm 1

For this project, I wanted to combine a couple of ideas that I’m interested in. I wanted to work with programmable LEDs, I wanted to work with a live camera taking pictures of hand-drawn stills, and I wanted to play with some audience participation. For the programmable LEDs, I started by building a new lamp/camera mount for subtraction class.


The lamp has two lighting elements inside – a fairly bright element that uses a slider, and a Neopixel ring whose RGB values can be independently controlled. It has some potentiometers attached to it that I haven’t gotten to work consistently, but for this project I’m hoping to control the lights through Isadora.

(I had been hoping to use Max for this project, but as I started looking at the scale of this thing, I realized that I wouldn’t be able to do everything that I wanted to do if I had to slog my way through Max as well.)

In order to send RGB values to the Neopixel ring, I had to borrow heavily from Arduino’s website on accepting a matrix of Serial values. I wound up with code that takes five separate integers and assigns them to different light values: three for the RGB values on the Neopixel, and two for the separate lamp element.
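The core of that borrowed sketch is splitting one incoming serial line into five integers. As a rough illustration (plain C++ standing in for the actual Arduino sketch, with the struct and field names being my own hypothetical labels), the parsing step looks something like this:

```cpp
#include <array>
#include <sstream>
#include <string>

// Hypothetical illustration: parse one serial line like "255,0,128,90,1"
// into five light values -- R, G, B for the Neopixel ring, plus two
// values for the separate lamp element (names are assumptions).
struct LightValues {
    int r, g, b;    // Neopixel RGB, 0-255
    int lampLevel;  // level of the bright lamp element
    int lampMode;   // e.g. an on/off flag for the lamp
};

bool parseLightLine(const std::string& line, LightValues& out) {
    std::stringstream ss(line);
    std::array<int, 5> vals{};
    char sep;
    for (size_t i = 0; i < vals.size(); ++i) {
        if (!(ss >> vals[i])) return false;  // missing or non-numeric value
        if (i + 1 < vals.size() && !(ss >> sep)) return false;  // expect a separator
    }
    out = {vals[0], vals[1], vals[2], vals[3], vals[4]};
    return true;
}
```

On the Arduino side the same idea usually ends up as a series of `Serial.parseInt()` calls, with the results handed to the NeoPixel library’s `setPixelColor()` and to `analogWrite()` for the lamp element.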

The next step was to be able to send those lighting values from Isadora. I set up a Serial actor in Isadora and told it to give Arduino the following code –

Now for the real heavy Isadora programming. I have some experience in Isadora, so setting up a series of still live captures was something that I was familiar with. Which isn’t to say that it was simple. I wanted the images to wipe on in a special way, so I used a colored-pencil animation that I had made for a separate project to play as each new drawing wiped on. I used a hue saturation actor to change the wipe to look more red, green, and blue.

For the live performance, I will be handing classmates cyan, magenta, and orange markers (yellow doesn’t show up) and animating their sketches in real time. More to come tomorrow!

Live Image Processing Week Two

Light and movement are the two most important things happening in the little movies that I shot last week. There’s not a whole lot going on in terms of shape or symbolism, so I need to really focus on the adjustments that will actually make a difference.

The first variable that I wanted to be able to adjust was speed. I knew how to set the metronome to send out a bang that reads the video player at a certain frame rate, so I thought that the ability to change that would get me what I was looking for.

Unfortunately, changing the “frame rate” in that way doesn’t actually affect the way that the movie plays, it just changes the rate at which the movie player displays what the reader is reading. I was able to create some jittery video, but I couldn’t get the video to play back any faster or slower. That’ll have to be something I look into later.
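The distinction I ran into, as I understand it: banging the player more often only changes how frequently the current frame is redisplayed, while a real speed change has to scale the playhead position itself. A toy model of the difference (not Max code, just hypothetical frame math):

```cpp
#include <cmath>

// Toy model: which frame of an fps frames/sec movie is current at time t?
// Re-reading the current frame more or less often (the metro approach)
// never changes the answer -- the playhead still advances at 1x.
int frameAtTime(double t, double fps) {
    return static_cast<int>(std::floor(t * fps));
}

// A real speed change scales the playhead: at rate 2.0 the movie has
// reached frame 2*t*fps by time t, i.e. it plays twice as fast.
int frameAtTimeWithRate(double t, double fps, double rate) {
    return static_cast<int>(std::floor(t * fps * rate));
}
```

If I remember right, Jitter’s movie-playing object exposes a rate attribute that does exactly this playhead scaling, which is probably the thing to look into.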

I do want to be able to change the brightness of the video, though, and fortunately, that’s something that was pretty directly covered in online tutorials.
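Under the hood, a brightness control is just a per-channel multiply with clamping. A minimal sketch of the operation (plain C++ rather than the Jitter object the tutorials use):

```cpp
#include <algorithm>
#include <cstdint>

// Scale one 8-bit channel value by a brightness factor, clamped to 0-255.
// brightness 1.0 leaves the pixel unchanged; 0.5 darkens, 2.0 brightens.
uint8_t scaleChannel(uint8_t value, double brightness) {
    double scaled = value * brightness;
    return static_cast<uint8_t>(std::clamp(scaled, 0.0, 255.0));
}
```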


Next, I wanted to be able to layer one video on top of another additively. I’m hoping that this could make for some fun, disorienting motion, with lights moving in different directions. I don’t know whether Max defaults to blending two videos together additively, so I figured I’d plug a couple of different videos into the same output and see what happens.


Well, the images did blend together the way that I had hoped they would, but the frame rate dropped down to about 4 frames a second (by my estimation). I wondered if maybe I had gotten something wrong in programming, but the more I played around in Max, the more it became apparent that the program was getting jammed up by having two videos play on the same player. Which isn’t necessarily a surprise, but I’m curious to find the best way to get these videos to blend together the way that I want them to.
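For reference, the additive blend itself is just per-channel saturating addition; a sketch of the operation (again in plain C++, not a Max patch):

```cpp
#include <algorithm>
#include <cstdint>

// Additive blend of two 8-bit channel values, saturating at 255.
// Bright areas from both movies accumulate, which is what makes
// overlapping lights stack up instead of averaging out.
uint8_t addBlend(uint8_t a, uint8_t b) {
    return static_cast<uint8_t>(std::min(255, int(a) + int(b)));
}
```

I believe jit.op with its op attribute set to + computes this per plane on char matrices, which might be a lighter-weight route than running two movies through one player.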

Live Image Processing Week One

I’ve been spending a lot of time on mass transit lately. Not just in the typical, New York City commute sense, but I’ve been spending a significant chunk of the past week on buses and trains to small towns around the northeast. So for my first project this week, when I was asked to capture footage of interesting light, movement and color, my first thought was capturing the lights of a small town going by on a train.


These videos were taken from the side of a bus from New York to Reading, PA on Thursday night. They’re layered a little bit and built to loop, which should make them more useful as videos in Max for our project next week.