Day 1
Today started with a recap of the coding we had done in our last week of Processing, going back through the code sketches we had created on OpenProcessing.org.
During this recap we were introduced to a new piece of code, the conditional "if (frameCount > 220)", which makes the animated elements of a sketch start only once the sketch has been running for 220 frames.
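To illustrate the idea (this is my own minimal sketch, not the one from class), gating movement on frameCount looks something like this:

```javascript
// Minimal p5.js sketch (my own example): a circle sits still
// until frame 220, then starts drifting across the canvas.
let x = 50;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  // frameCount increases by one on every draw() call, so this
  // branch only runs once 220 frames have elapsed.
  if (frameCount > 220) {
    x += 2;
  }
  circle(x, height / 2, 40);
}
```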
From this point, we began to cover some new coding exercises. The tutor shared the links to these exercises with us as he talked about them, so we could see firsthand how the code worked instead of trying to re-create it all from scratch. These were some of the exercises we looked at:
https://openprocessing.org/sketch/1525999
https://openprocessing.org/sketch/1534619
https://openprocessing.org/sketch/1527157
https://openprocessing.org/sketch/1511636
These exercises cover a mix of image- and camera-related functions. For this session, the tutor personally provided us with webcams so we could properly test the camera exercises. As we went through them, we were also periodically given time to experiment with the code ourselves and see how our changes affected the result.
The exercises I found the most interesting were the filters and image blending/mixing, as well as object detection and real-time human pose estimation.
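I didn't save my own experiments, but a minimal sketch in the spirit of the filter exercises might look like this (the filter choice and sizes are my own guesses, not the tutor's):

```javascript
// A sketch in the spirit of the webcam filter exercises (my own
// reconstruction): draw the camera feed, then posterize it.
let cam;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide(); // hide the raw DOM element; we draw the feed ourselves
}

function draw() {
  image(cam, 0, 0, width, height);
  // Built-in p5.js filter; a low posterize level gives a strong,
  // obviously "filtered" look. GRAY, INVERT or BLUR also work here.
  filter(POSTERIZE, 4);
}
```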
We were also shown several different libraries, each of which provided examples and references for creating different things with code, and we were encouraged to use these tools to inform any coding we might do in the future.
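The pose estimation demo, for instance, was presumably driven by one of these libraries. I'm not certain which one the tutor used, but ml5.js is a common companion to p5.js, and a pose sketch built with it could look roughly like this:

```javascript
// Rough ml5.js PoseNet sketch (an assumption on my part: I don't
// know that this is the library the class demo used). Requires the
// ml5 script to be loaded alongside p5.js.
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  const poseNet = ml5.poseNet(video); // loads the model
  poseNet.on('pose', results => { poses = results; });
}

function draw() {
  image(video, 0, 0);
  // Mark every confidently detected body keypoint with a dot.
  for (const p of poses) {
    for (const k of p.pose.keypoints) {
      if (k.score > 0.5) circle(k.position.x, k.position.y, 10);
    }
  }
}
```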
While I thought the concepts introduced were very cool, I had some trouble understanding them code-wise. Because of this, when the time came to code something myself, I found myself struggling a little, so I spent most of my time looking through the libraries the tutor had provided instead.
Day 2
Today we began looking at how to set up a multi-screen video installation.
Before we did that, though, the tutor talked to us about how we could extract and reuse laptop parts such as the screen or the webcam to create our own multi-screen displays.
He then demonstrated this by dismantling a laptop he had brought into class and extracting the particular parts we needed.
Once he was done extracting the parts, this is what it looked like:
[Image: The bottom of the laptop and the empty screen casing.]

[Image: The laptop webcam alongside a Raspberry Pi for scale.]

[Image: The underside of a different extracted screen, attached to a motherboard.]
Day 3
This session was mainly focused on the installation mock-up. Except for the shelves, most of the equipment had been put away again, but this wasn't much of a problem, as the hardware was easy to find. For the initial setup, all we had to do was assemble a set of six laptops: three on one shelf and another three on the shelf below.
Because the top three screens had to be mounted upside down to create our "screen wall", we had to go into the display settings of those laptops and rotate their screens 180 degrees.
After all the charging and Ethernet cables had been re-connected, the more difficult part began: getting the laptops to communicate and sync up the videos we played.
To test this, we used some stock footage of a coffee mug being occasionally picked up and put down. The first major step was deciding the dimensions of the separate sections of the video, so that a different section would be displayed on each screen.
To figure this out, we drew out a plan of which section of the footage each of the six screens would display.
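I can't reproduce the drawing here, but the idea was that each laptop shows only its own slice of the shared footage. A hypothetical per-laptop sketch (the grid constants and filename below are placeholders of mine, not from class) might look like:

```javascript
// Hypothetical sketch for one laptop in the six-screen wall: each
// machine draws only its assigned slice of the shared mug footage.
// COLS/ROWS match the 3x2 shelf layout; SCREEN_COL/SCREEN_ROW would
// be set differently on each laptop. 'mug.mp4' is a placeholder name.
const COLS = 3, ROWS = 2;
const SCREEN_COL = 0, SCREEN_ROW = 0;

let vid;

function setup() {
  createCanvas(windowWidth, windowHeight);
  vid = createVideo('mug.mp4', () => vid.loop());
  vid.hide();
}

function draw() {
  if (vid.width === 0) return; // video metadata not loaded yet
  const sw = vid.width / COLS;
  const sh = vid.height / ROWS;
  // The last four arguments crop the source: take this screen's
  // section of the video and scale it to fill the whole canvas.
  image(vid, 0, 0, width, height,
        SCREEN_COL * sw, SCREEN_ROW * sh, sw, sh);
}
```

The cropping itself was the easy part; keeping six laptops playing in step over the network was where the real difficulty lay.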