Moshe Ladanga

Ira Greenberg’s Negative to Positive: Adaptation for Video


I’ve been studying several programs for video feeds, such as Golan Levin’s FrameDifferencing example and the JMyron library. They are supposedly basic programs (though to me they looked daunting enough just to read) that do simple things: detect motion, detect and isolate the brightest points in the frame, detect and draw around illuminated figures. All of them capture particular kinds of data from video that can trigger programs to do something.

But what I wanted for my side of the installation was something subtle: I wanted to interfere with how the computer captures the image, not just use the pictorial data as data. For me an image is more than just an image, and maybe I wouldn’t need to approach this whole interactive thing in such a roundabout way. I wanted to learn how the computer reads video data, and in the Processing books almost all of the video capture programs dealt with operations at the level of pixel arrays. Since video is such a rich data stream, programmers work with it at the level of the bits to make the calculations fast enough for real-time rendering.
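Just to make that concrete for myself: a tiny static sketch (my own illustration, not from the books) showing that pixels[] is one flat, row-major array, so the pixel at column x, row y lives at index y * width + x.

size(640, 480);
background(0);
loadPixels();
int x = 10;
int y = 20;
int i = y * width + x;        // 20 * 640 + 10 = 12810
pixels[i] = color(255, 0, 0); // paint that single pixel red
updatePixels();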

What I did to adapt Greenberg’s original code was to toy with the inversion factor, changing it a little at a time to see how the code reacted to a live video feed.

*Copy the code into a Processing sketch, make sure you have a webcam or video camera connected, and run it to see how it really behaves. Apologies in advance: I think my code is still messy.


import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
}

void draw() {
  if (video.available()) {
    video.read();
    image(video, 0, 0, 640, 480);
    loadPixels();

    // Start each frame fully inverted, then let the factor drift row by row
    float invertFactor = 255.0;
    for (int i = 0; i < pixels.length; i++) {
      float r = abs(red(pixels[i]) - invertFactor);
      float g = abs(green(pixels[i]) - invertFactor);
      float b = abs(blue(pixels[i]) - invertFactor);
      float a = abs(alpha(pixels[i]) - invertFactor);
      pixels[i] = color(r, g, b);
      // At the start of each new row, shrink the inversion factor
      // by an amount set by the last pixel's channel sum
      if (i > 0 && i % width == 0) {
        float wave = b + r + g + a;
        invertFactor -= (255.0 / wave);
      }
    }
    updatePixels(); // write the modified pixels back to the screen
  }
}

Negative to Positive with Ghosting 01:

There are two main differences between this code and the previous one. First, I discovered that the alpha factor can be integrated into how the video feed is displayed. Since the computer reads color values as A, R, G, and B, the translucency of each pixel can be factored into the array, producing a ghosting effect.
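To unpack that a little (my own check, not part of Greenberg's code): each color in Processing is a single 32-bit integer with the channels packed from the high byte down, alpha first, so you can pull them out with the helper functions or with plain bit shifts.

color c = color(200, 50, 100, 128); // r, g, b, a
int a = (c >> 24) & 0xFF;           // 128 -- same as alpha(c)
int r = (c >> 16) & 0xFF;           // 200 -- same as red(c)
int g = (c >> 8) & 0xFF;            // 50  -- same as green(c)
int b = c & 0xFF;                   // 100 -- same as blue(c)
println(a + " " + r + " " + g + " " + b);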

The second difference is the modulus operator I added to the invertFactor calculation, which breaks its value down into finer increments. I think this produced the pale palette of the image (but honestly I’m not so sure; maybe it’s what is actually responsible for the ghosting).
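For what it's worth, here is my reading of the arithmetic (just a sketch of the numbers, not a claim about the aesthetics): / and % have the same precedence and associate left to right, so the decrement is (255.0 / width) % wave, and at width 640 that is only about 0.4 per row whenever the channel sum wave is bigger than that.

float wave = 510.0;              // e.g. a fairly bright row: r + g + b + a
float step = 255.0 / 640 % wave; // parsed as (255.0 / 640) % wave
println(step);                   // 0.3984375 -- a far gentler fade than before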


import processing.video.*;

Capture video;

void setup() {
  size(640, 480); // Change size to 320 x 240 if too slow at 640 x 480
  // Uses the default video input, see the reference if this causes an error
  video = new Capture(this, 640, 480, 30);
}

void draw() {
  if (video.available()) {
    video.read();
    image(video, 0, 0, 640, 480);
    loadPixels();

    float invertFactor = 255.0;
    for (int i = 0; i < pixels.length; i++) {
      float r = abs(red(pixels[i]) - invertFactor);
      float g = abs(green(pixels[i]) - invertFactor);
      float b = abs(blue(pixels[i]) - invertFactor);
      float a = abs(alpha(pixels[i]) - invertFactor);
      pixels[i] = color(r, g, b, a); // the alpha here is what ghosts the image
      if (i > 0 && i % width == 0) {
        float wave = r + b + g + a;
        // parsed as (255.0 / width) % wave -- a much finer per-row decrement
        invertFactor -= (255.0 / width % wave);
      }
    }
    updatePixels();
  }
}

Negative to Positive with Ghosting 05:

This experiment captured the aesthetic that I wanted for the video feed: the image is at the point of erasure, and coupled with the ghosting it becomes more of a trace than a direct representation of what the camera captures.

I simplified the wave value that affects the invertFactor by multiplying the per-row decrement by the sine of the pixel value, and I made the image monochrome by building each pixel from a single gray value plus the alpha value (see the short sketch after the code).

import processing.video.*;

Capture video;

void setup() {
  size(640, 480); // Change size to 320 x 240 if too slow at 640 x 480
  // Uses the default video input, see the reference if this causes an error
  video = new Capture(this, 640, 480, 30);
}

void draw() {
  if (video.available()) {
    video.read();
    image(video, 0, 0, 640, 480);
    loadPixels();

    float invertFactor = 255.0;
    for (int i = 0; i < pixels.length; i++) {
      float redsum = red(pixels[i]);                  // single channel for the gray value
      float a = abs(alpha(pixels[i]) - invertFactor); // the factor now only moves the alpha
      // Monochrome pixel: gray is the red channel squared (clamped at 255 by color())
      pixels[i] = color(redsum * redsum, a);
      if (i > 0 && i % width == 0) {
        // sin() of the packed color int swings between -1 and 1,
        // so the fade speeds up, slows down, and even reverses per row
        float wave = sin(pixels[i]);
        invertFactor -= (255.0 / width * wave);
      }
    }
    updatePixels();
  }
}
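Two details of that block seemed worth isolating (my own gloss, in a throwaway sketch): color() with two arguments builds a gray pixel plus an alpha, and sin() of the packed color int, a huge number read as radians, lands somewhere in [-1, 1], so the per-row fade speeds up, slows down, and even reverses almost unpredictably.

color c = color(180, 128); // gray 180, alpha 128 -- the two-argument form
float gray = red(c);       // 180; red, green and blue are all equal
float wave = sin(c);       // sine of the packed int: somewhere in [-1, 1]
println(gray + " " + wave);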

Learning how to read arrays almost made me snap, but it was a necessary step in understanding how pixel arrays work for capturing and displaying video in the Processing environment. What excited me was the effect that slight manipulations of the code had on the way the moving image is displayed.

The last experiment, the one in black and white, gave me the confidence to push this further. The black-and-white image looks more like a photograph than a video image, and there is a strange tension in seeing yourself move inside what looks like a photograph.


Written by mosheladanga

May 29, 2008 at 7:59 PM

Posted in Reflections
