Processing [Week 2]

Day 1

Today started with a recap of the coding we had done in our last week of Processing, going back through the code sketches we had created on OpenProcessing.org.

During this recap we were introduced to a new piece of code, "if (frameCount > 220)", which basically made it so that the animated elements of our code sketches only started moving once the sketch reached 220 frames.
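The idea behind that line can be sketched in plain JavaScript. This is just a simulation of the gating logic, not the exact sketch from class; the function name and speed value are my own assumptions:

```javascript
// Simulates an element whose x-position is gated by frameCount:
// it stays frozen at 0 until frame 220, then moves `speed` px per frame.
// (In a real p5.js sketch this check would live inside draw().)
function animatedX(frameCount, speed = 2) {
  if (frameCount > 220) {
    return (frameCount - 220) * speed;
  }
  return 0; // still waiting for frame 220
}

console.log(animatedX(100)); // 0 — animation hasn't started yet
console.log(animatedX(221)); // 2 — first frame of motion
```

Inside an actual p5.js draw() loop, the equivalent would be something like `if (frameCount > 220) { x += 2; }`.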

From this point, we began to cover some new coding exercises. The tutor shared links to these exercises as he was talking about them, so we could see how the code worked first-hand instead of trying to re-create it all from scratch. These were some of the exercises we looked at:

https://openprocessing.org/sketch/1525999

https://openprocessing.org/sketch/1526895

https://openprocessing.org/sketch/1511060

https://openprocessing.org/sketch/1511636

These exercises cover a mix of image- and camera-related functions. For this session, the tutor provided us with webcams so we could properly test the camera exercises. While we were going through them, we were also periodically given time to experiment with the code ourselves to see how our changes affected the result.

The exercises I found most interesting were the filters and image blending/mixing, as well as object detection and real-time human pose estimation.
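To illustrate the blending idea, here is a minimal sketch of the per-channel math that image mixing builds on (the same linear interpolation p5.js's lerpColor() uses). The function name and values are my own illustration, not the exercise code:

```javascript
// Linearly mixes two colour-channel values (0–255).
// amount = 0 gives all of `a`, amount = 1 gives all of `b`.
function mixPixel(a, b, amount) {
  return Math.round(a + (b - a) * amount);
}

// Blending pure black (0) into pure white (255) at 50%:
console.log(mixPixel(0, 255, 0.5)); // 128
```

Applied to every pixel's red, green, and blue channels, this is the core of a cross-fade between two images or camera frames.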

We were also shown several different libraries, each of which provided examples and references on how to create different things with code, and we were encouraged to use these tools to inform any future coding we might do.

While I thought the concepts introduced were very cool, I had some trouble understanding them code-wise. Because of this, when the time came to code something myself, I found myself struggling a little. As a result, I spent most of my time looking through the libraries the tutor had provided instead.

Day 2

Today we began looking at how to set up a multi-screen video installation. 

Before we did that though, the tutor talked to us about how we could extract and use different laptop parts, such as the screen or the webcam, to create our own multi-screen displays.

The tutor then demonstrated this by dismantling a laptop he had brought into class and extracting the particular parts we needed.

Once he was done extracting the parts, this is what it looked like:

The bottom of the laptop and the empty screen casing.

The laptop webcam alongside a raspberry pi for scale.

The underside of a different extracted screen that's attached to a motherboard and a set of buttons.

The screen we just extracted, along with the wire connector.


Next, the tutor gave us a whole load of Samsung tablets and instructed us to log into all of them, open the camera app, and stand them up in a circle with the cameras facing each other. While the task seemed simple on the surface, actually doing it proved difficult. First we tried pushing some tables together and arranging the tablets on them, but that didn't really work, as the tables weren't lining up very well.

Using a few ThinkPad laptops, we created a mock-up of what would become our installation piece. To do this, we gathered a couple of extension leads to power the laptop chargers and a router, and used a set of ethernet cables to connect the laptops directly to the router.

From this point, we began the construction of our display. Other than the construction of the shelves we needed to put our display on, most of the steps we went through were the same as the ones we had covered before.

Day 3

This session was mainly focused on the installation mock-up. Except for the shelves, most of the equipment had been put away, but this wasn't much of a problem, as the hardware was easy to find again. For the initial setup, all we had to do was assemble a set of six laptops: three on one shelf, and another three on the shelf below.

Because the top three screens had to be upside down to create our "screen wall", we had to go into the settings on those laptops and rotate their displays 180 degrees.

Once all the charging and ethernet cables had been re-connected, the more difficult part began: getting the laptops to communicate and sync up the videos we played.

To test this, we used some stock footage of a coffee mug being occasionally picked up and put down. The first major step was deciding the dimensions of the separate sections of the video, so that a different section would be displayed on each screen.
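The dimension planning can be sketched as a bit of grid arithmetic. The function below is my own illustration of splitting one video frame across a 3-wide by 2-high wall; the 1920x1080 resolution is an assumed example, not necessarily the footage we actually used:

```javascript
// Returns the crop rectangle that the screen at (col, row)
// should display, given the full video size and the wall's grid.
function tileRect(videoW, videoH, cols, rows, col, row) {
  const w = Math.floor(videoW / cols);
  const h = Math.floor(videoH / rows);
  return { x: col * w, y: row * h, w, h };
}

// For a 1920x1080 video on a 3x2 wall, the top-middle screen shows:
console.log(tileRect(1920, 1080, 3, 2, 1, 0));
// { x: 640, y: 0, w: 640, h: 540 }
```

Each laptop then only renders its own rectangle, and together the six crops reassemble the full frame.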

To figure this out, we had to draw out a plan. This was what we laid out: 


To input this information, we connected an extra laptop to the router to act as the "master", through which we could send the commands setting the correct resolution sizes for each of the "slave" laptops.
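Conceptually, the master's job was to tell each slave which region of the video to show. The message format below is purely hypothetical, just to illustrate the idea; the actual software we used had its own commands:

```javascript
// Builds an (assumed, illustrative) command telling one slave laptop
// which crop region to display, in a WIDTHxHEIGHT+X+Y style geometry.
function slaveCommand(id, rect) {
  return `slave ${id}: crop ${rect.w}x${rect.h}+${rect.x}+${rect.y}`;
}

console.log(slaveCommand(0, { x: 0, y: 0, w: 640, h: 540 }));
// "slave 0: crop 640x540+0+0"
```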


