rebecca's WIP

Archive: spring 2015


Many weeks after the Spring Show, I've finally pulled together solid documentation for my final Readymades project, What in the world do you want to see? Following the show, I took some time to fix a few minor fabrication details and to document the piece with nicely lit stills, as well as a video that clearly illustrates the interaction. Documenting the experience of this piece has felt especially hard, since you have to look into the device to experience the images in a very intimate way, so that's something I'm still working on. The video edit is still in process, but in the meantime here are some stills, along with the text that accompanied the piece in the ITP Spring Show. Within the next week, I'm planning to revisit (and possibly revise) the text, post my final video, and do a quick post-mortem on the project: what I learned, what I'd like to change or revise, and so on.

 

WHAT IN THE WORLD DO YOU WANT TO SEE?

What if you were able to travel to an office in Moscow or a parking lot in Nebraska? To watch people sitting poolside, folding their laundry, or eating lunch in the food court of a mall?

What in the world do you want to see? is an interactive video sculpture that reimagines the ViewMaster, an old stereoscopic imaging toy, as a portal into other people’s worlds using surveillance camera video feeds taken from around the globe. Have a look inside the device and pull down on the lever to change what you see.



An example from Visually Similar Image Processing, which I made in Fall 2014

My final project in Reading and Writing Electronic Text is part of a larger exploration of image recognition, image-text translation, and the poetic possibilities of machine vision. I started this exploration back in the fall with my Intro to Computational Media final project. I was interested in the way Google Search By Image provides a glimpse into how computers "think" about visual similarity, and in how digital images exist in a space of sameness and repetition: how things can be "visually similar" at the level of pixels, brightness, and composition, but entirely disparate in content, context, and the suggestion of meaning. For my ICM project, I wrote an image processing program that algorithmically combines batches of "visually similar" images at the scale of their pixels. The result is a strange ghost or shadow of the original images, a kind of pixel map that becomes more abstract the more images you add to the group.
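For anyone curious about the mechanics, here's a rough sketch of the averaging idea in Python. This isn't my original ICM code; the folder name and the Pillow/NumPy libraries are stand-ins for the same technique:

```python
# A rough sketch of the pixel-averaging idea, not my original ICM code:
# blend a batch of "visually similar" images into one, pixel by pixel.
# Assumes Pillow and NumPy, and a folder of same-size JPEGs at similar/.
import glob

import numpy as np
from PIL import Image

paths = sorted(glob.glob("similar/*.jpg"))  # hypothetical folder of search results

# Stack the images into one array; np.stack requires identical dimensions.
stack = np.stack([
    np.asarray(Image.open(p).convert("RGB"), dtype=np.float64) for p in paths
])

# Averaging across the batch gives the "ghost" or pixel map of the group.
mean = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(mean).save("average.png")
```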

For my final in Reading and Writing Electronic Text, I wanted to continue developing some of these ideas and see if I could somehow translate (or at the very least, apply) them to the realm of language. How do machines see? What happens when you translate an image to text? Might it be possible to use these textual translations to create some kind of linguistic "average" of an image? Can we find room for beauty in the failures between image and language? These are some of the questions I was thinking about as I embarked on my final. Luckily, there are a number of tools and APIs available for image recognition and tagging, such as Clarifai and ImageNet.

I eventually decided to use the Toronto Deep Learning Demo tool as the engine to generate my image-to-text translations, since it returns full-sentence descriptions rather than just single-word tags. I had some trouble scraping the site's HTML using BeautifulSoup alone, but luckily I found a Python module that somebody else had written to scrape the site based on image input.
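Here's roughly what that scraping step looks like. Note that the URL, form field, and page structure below are guesses for illustration, not the demo's documented interface; in practice I leaned on that existing module:

```python
# A hedged sketch of the image-to-text scraping step. The URL, form field,
# and page structure here are assumptions for illustration; I actually used
# an existing module that handles the site's quirks.
import requests
from bs4 import BeautifulSoup

DEMO_URL = "http://deeplearning.cs.toronto.edu/i2t"  # approximate address

def describe(image_url):
    """Submit an image URL to the demo and return the generated sentences."""
    resp = requests.post(DEMO_URL, data={"image_url": image_url})  # hypothetical field
    soup = BeautifulSoup(resp.text, "html.parser")
    # Where the captions live in the HTML is also an assumption:
    return [p.get_text(strip=True) for p in soup.find_all("p")]

print(describe("http://example.com/drone.jpg"))
```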

The basic steps in my program are as follows:

– Gather a batch of images to translate
– Feed each image to the Toronto Deep Learning engine and scrape the generated description
– Collect the descriptions as the raw material for the text experiments


For my fourth assignment on functions and modules, I rewrote my midterm code into a Python module. Written this way, the same output can be generated from just a few lines of code.

My module and main Python script are below.
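To show the shape of the refactor, here's a toy version with hypothetical names (my real module does more, but the structure is the same: all the logic lives in the module, so the main script shrinks to a few lines):

```python
# textgen.py -- a hypothetical stand-in for my module: the logic from
# the midterm lives here, behind a single function.
import random

def generate(lines, n=5):
    """Return a small text built from n randomly chosen source lines."""
    return "\n".join(random.choice(lines) for _ in range(n))
```

```python
# main.py -- with the module in place, producing output takes a few lines.
import sys

import textgen

source = [line.strip() for line in sys.stdin if line.strip()]
print(textgen.generate(source))
```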





It’s a bird, it’s a plane…it’s images of drones, as described by a computer.

I recently made my very first Twitter bot based on the notes from Allison’s “How to Make a Twitter Bot” workshop. Now I can’t stop thinking of ideas for bots.

The name of my inaugural bot is @dronesweetie, a Twitter bot that tweets computer-generated descriptions of images of drones. The bot is powered by the Toronto Deep Learning image-to-text engine. I wrote a Python script to programmatically feed the results of an image search for "drones" into the Toronto Deep Learning machine vision tool, and then scrape the results to form the corpus for the Twitter bot.
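The tweeting half follows the standard pattern from Allison's workshop. A minimal sketch, assuming tweepy and a corpus.txt of scraped descriptions (the keys are placeholders):

```python
# A minimal sketch of the bot's tweeting step, assuming tweepy and a
# corpus.txt of scraped descriptions (keys below are placeholders).
import random

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

with open("corpus.txt") as f:
    descriptions = [line.strip() for line in f if line.strip()]

# Tweet one random machine-generated description, trimmed to the 140-char limit.
api.update_status(random.choice(descriptions)[:140])
```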



Swimsuit for the Anthropocene is a project about water. It exists at the intersection of wearables, citizen science, and speculative design.


Swimsuit for the Anthropocene asks if we can find imagination in contamination. Imagine a world where everything around us is toxic. Where your plastic bottle contains carcinogenic compounds and your air is full of dirty particles. Where your water is a source of life and a source of death. Where contaminants are becoming part of you, by way of your body.




My plan for the final project is to continue working with the View Master and convert it into a stereoscopic video player using two tiny LCD screens. I learned a lot from prototyping on the Oculus last time, and I think for now I'm going to move away from using the stock footage. At the moment, I plan to play webcam and security camera videos that are livestreamed on this website. I think there's something really interesting in the idea of "seeing the world" through the View Master, but instead of going to exotic locations you might see a feed of a laundromat in Moscow or a parking lot in Nebraska. As a longer-term project, I think it'd be amazing if you could play the livestreamed video feeds from the site, but for the moment I'm going to curate a selection of interesting videos and download them to my computer.


I ordered some vintage View Masters in the mail and was able to take one of them apart with the generous help of Ben Light. It's a bit difficult to open up the older ones (as opposed to the new ones, which come apart with a bit of pulling), and I broke mine in the process. Luckily I have two more to try. There are a lot of technical challenges to keep in mind with this project, including: fitting all of my electronics inside the device, preserving the switch mechanism (I'm going to use the little lever as a switch to change the videos), and getting the stereoscopic video right. I'll need two of the RCA converter cables, since each screen will be playing something different. I'm also worried about the resolution of the small screens, but they're what I have to work with right now, so fingers crossed it will look OK.


To make my project feel a bit more manageable, I've broken it down into smaller steps that I'm going to go through in the next couple of days:

– Make simple video playlist in Jitter
– Control video playlist with a simple switch from Arduino (the control logic is sketched in Python after this list)
– Make switch out of View Master mechanism
– Test cables with two small screens
– Scan still reels to figure out how to make the images stereoscopic (dimensions and placement)
– Figure out all parts and how I want to create my enclosure
– Embed everything inside the View Master (eek)
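The lever-to-playlist logic is simple enough to sketch. The real version will live in a Max/Jitter patch, but here's the same idea in Python with pyserial (the port name and clip list are placeholders):

```python
# A sketch of the lever-to-playlist logic in Python/pyserial. The actual
# piece will do this in Max/Jitter; this just shows the control flow.
# Assumes the Arduino prints the line "LEVER" each time the lever is pulled.
import serial  # pyserial

PORT = "/dev/tty.usbmodem1411"  # placeholder port name
VIDEOS = ["laundromat.mov", "parking_lot.mov", "office.mov"]  # placeholder clips

arduino = serial.Serial(PORT, 9600)
index = 0
print("Now playing:", VIDEOS[index])

while True:
    if arduino.readline().strip() == b"LEVER":
        index = (index + 1) % len(VIDEOS)     # advance to the next feed
        print("Now playing:", VIDEOS[index])  # Jitter would swap the movie here
```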


The Drinkable Book

It feels like I'm hitting a series of dead ends with my project in a way that is frustrating, but I'm also learning a lot from it. Following my East River experiments last week, I learned a number of things: first, that perhaps the East River is not actually that polluted (from what I've read, it's not, which makes sense since it's a flowing river). I also learned that I need to give some further consideration to what I'm testing for / sensing in water, since there are a *huge* number of things that can contaminate water (bacteria, human sewage, PCBs, nitrates, heavy metals, the list goes on) and it's usually location-specific.

I keep coming back to the question of what toxicity actually means, and how I can think about it as a spectrum. I also still have doubts about the viability of the human-test-strip model (aka swimming), since nobody wants to go in water that is incredibly contaminated, and I would never subject a human body to that. I have a hunch that it's the water in that in-between space (not the Gowanus Canal or Newtown Creek, but the East River or your run-of-the-mill stream) that I'm interested in. My idea is less about extreme toxicity and more about closing the gap of our cognitive dissonance, even if what you find out is that something is, in fact, not actually that toxic.

My test kit from last week, which had the basic water tests such as pH and dissolved oxygen, wasn't really telling me anything. So I decided to research other ways to test for general "pollution" in water and came upon conductivity. A conductivity meter can test for salt, but it can also indicate the concentration of dissolved ions like nitrates and heavy metals. And so, I decided to experiment with making my own analog conductivity meter using some basic electronics and an LM555 chip.

Using a tutorial from Public Lab, I built the circuit and immediately ran into a lot of trouble getting it to work the way I wanted. Luckily, Eric Rosenthal came to the rescue and was immensely helpful in adapting the circuit and changing the wiring. The basic idea is that with two metal probes in the water, the LED blinks in proportion to the conductivity of the water: the higher the conductivity (aka the more crap in it), the faster the LED blinks.
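To get a feel for the numbers: in the standard 555 astable configuration the blink rate is f = 1.44 / ((R1 + 2*R2) * C), and if the water between the probes stands in for one of the timing resistances, cleaner (more resistive) water blinks slower. The component values below are made up for illustration, not the ones from the Public Lab schematic:

```python
# Back-of-the-envelope blink rates for the 555 astable circuit, assuming
# the water between the probes stands in for timing resistor R2.
# Values are illustrative, not the ones from the Public Lab schematic.
def blink_frequency(r1_ohms, r_water_ohms, c_farads):
    """Standard 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C)."""
    return 1.44 / ((r1_ohms + 2 * r_water_ohms) * c_farads)

# Dirtier water conducts better, so its resistance is lower and the LED blinks faster.
print(blink_frequency(10000, 100000, 1e-6))  # cleaner water: ~6.9 Hz
print(blink_frequency(10000, 10000, 1e-6))   # dirtier water: ~48 Hz
```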




Activist and distance swimmer Christopher Swain is going to swim the length of the Gowanus Canal later today on the occasion of Earth Day. Swain’s public performance is incredibly timely given the things I’ve been thinking about and working on for the past semester (I wish I could go watch him but I’m going to be in class).

I find so many things interesting about his gesture, in particular how it draws attention to questions around pollution, toxicity (what does toxicity mean? what are the ways in which we endanger our bodies?) and performativity. ALSO, I've been thinking a lot about the aesthetics of hazard (how do we signal hazard to ourselves, our bodies, and the environment?), so the fact that he is wearing a hazmat suit (rightfully so) is also interesting.

I’m very curious about the public dialogue around this event (I mentioned it to a friend and he didn’t totally understand what message Swain is trying to send), so I collected some headlines about it that I found online:

Activist to Swim Brooklyn’s Toxic Gowanus Canal on Earth Day

Yes, This Man Is Really Planning to Swim Brooklyn’s Gowanus Canal for Earth Day

“It may be crazy to swim in the canal,” says Christopher Swain. “But what’s crazier is that the Gowanus Canal is so messed up.”

Activist Plans To Swim Super-Polluted Gowanus Canal For Earth Day

Toxic avenger! Activist to swim Gowanus

MAN TO SWIM EXTREMELY POLLUTED GOWANUS CANAL ON EARTH DAY

Don’t do it! Feds warn this guy not to swim in Canal

Activist Will Sacrifice Body, Swim In Gowanus Canal Tomorrow

Clean Water Activist Planning To Swim Length Of Gowanus Canal For Earth Day

NYC man to swim dirtiest U.S. waterway for Earth Day


My presentation for the Video Readymade assignment was a first prototype of what will eventually be my final project. Since my View Master hadn't arrived yet, I simulated the experience using an Oculus and the Oculus library for Jitter.


how dorky on a scale of 1-10

For this first version, I decided to use cheesy POV stock footage of faraway places and exotic locales as a nod to the original purpose of the View Master, which was to transport you somewhere else. (Think scenic postcards in stereo.) I kept the watermark on the stock imagery, which I really liked conceptually, but I don't know that it works in execution. I got a lot of great feedback from my critique to help push my thinking along, such as:

  • What about creating a virtual experience for experiences that aren’t particularly desirable? Really boring situations?
  • It feels like an architectural rendering of a place that already exists
  • How can this cheap plastic thing get you to another world?
  • Is there a way to play with nostalgia?
  • Am I interested in having a dialogue with VR’s use in simulation and training?
  • How can I bring out the uncanny?
  • What is it that interests me about the stock footage?
  • Could I make them into a series (if I get one working on time)?

In my mind, there are three contexts that I can leverage with this project and use to play with expectations:

  1. View Master (toy, history, cultural significance, stereo imaging, private space)
  2. VR: what are the expectations around VR? what’s it supposed to do? can it serve another purpose?
  3. Stock Footage: is it a database? a simulacrum? a blank version of life?

I’m going to look more into the history of the View Master, other related devices and stereoscopic imaging to see if there are some things in there that I can use as a starting point. Another thing I’d be interested to explore is the tension between the private (immersive space) of the device and a public space. (For this I also need to remember the scale of the View Master and that unlike the Oculus, it’s not totally immersive because the image is so small).

What if the View Master was a portal to a database, and you could type in anywhere you wanted to go or anything you wanted to see? What if it piped in a live stream?

I have two new (possibly promising) ideas for the video content:

  • Use a live stream of different surveillance cameras that switches feeds every time you pull the lever (could be in dialogue with my catcalling surveillance camera)?
  • A stream of eyes looking back at you, like really intimate forced eye contact