rebecca's WIP

spring 2016


Mnemograph is a participatory installation that connects two people through the language of their memories.

Sitting at a one-person writing desk, a participant is invited to anonymously contribute a personal memory in the form of a handwritten note. They insert their memory into a slot in the desk, and, through a process of automation, receive a print of someone else’s similar memory. Through this experience of anonymous exchange, Mnemograph creates brief moments of connection between untold stories, providing an uncanny glimpse into the private moments of another person. Its material — language — points to both the specificity and the sameness of the lives we live, and to our unique tendency as human beings to make meaning by connecting the dots.

Combining analog interactions and digital automation (in which the technology itself is obscured from view), Mnemograph poses questions about the relationship between technology and memory. What possibilities open up when we think about memory as an active process of creation rather than something fixed or static? How can a device illuminate the ways in which remembering is a generative act?

Media: Printer, document scanner, custom software, custom-built plywood desk, acrylic, paper, pens

More photos and documentation coming soon!


My response to Assignment 3 (extensions and customizations) is a series of proposed provocations that explore the following question: where is the line between connection and codependence?

These ideas emerged from my own thinking and crystallized in a conversation I had with Kyle. I’m interested in this question about social boundaries from two different angles / places in my intellectual and personal life. On the one hand, a lot of my work this semester (particularly in my thesis) has revolved around this idea of creating private spaces in public.

In that process, I’ve been thinking about intimate spaces like prayer shrines, phone booths, bathroom stalls, ATMs — objects and environments and structures in public life where we are allowed moments of privacy. But looking more closely at these structures (literally and metaphorically), I’ve realized that while we may seek privacy in the haven of a bathroom stall, we are quite literally only separated by a thin wall from the person next to us.

This idea is also inspired by my relationship with my sister, who is not only my blood relative but also my best friend and roommate. Not only do we share a tremendous amount of intimacy, closeness and history, but we also coexist socially, with overlapping friends and social groups.

We have a mutual recognition of the ways in which our closeness borders on codependence, and have worked throughout our lives to figure out the ways in which we share and are similar, and the ways in which we are different. (I also came across a statistic that siblings share 99.5% of the same genetic material.)

Below are three mini proposed projects that revolve around ‘hacking’ my relationship with my sister, automating our intimacy / communication, and pushing the boundaries between connection and codependence.

Provocation 1: Domesticity for Two

The first provocation in this series has to do with the fact that my sister and I are roommates, so we inhabit the same domestic space (mostly) every day. We have separate bedrooms with doors that close, but there are also many other ways we share and express intimacy with each other within that space: sharing food, sometimes sleeping in the same bed, borrowing clothes, etc. On the flip side, there are, of course, unspoken boundaries around things we do not do together or in the same space — go to the bathroom, have sex, get dressed, etc.

This idea is a series of domestic objects meant to be shared by two people that ask questions about the line between connection and codependence — where the boundaries between two bodies in a space begin to break down, or at least become fuzzy. I thought about different ways we could engage in these really private activities not just together, but while having to somehow manage or negotiate the presence of another person. I thought about making something like a shared toilet, but then landed on three things that felt slightly more subtle.



The biggest challenge for me with my fabrication so far has been figuring out what exactly I want to create — a desk? a booth? a wall? a machine? With this kind of project, and in general, fabrication is so much more than sticking your project in an enclosure last minute; it’s really the part of the project that people see and interact with, and so it carries a huge statement about what the project says and is about. Very stressful.

I went through tons of rounds of sketching, first with my friend (and master fabricator) Luke Stern. We came up with the design for a kiosk-like machine, prototyped the scale in cardboard and started to plan my fabrication.


The next day, I realized the form factor was wrong for the kind of experience I want to create (something more intimate that people spend time with) so I switched to the idea of creating a writing desk with all the hardware embedded.



I’ve met a few times with Ben Light who, as always, has amazing suggestions and guidance both in terms of design and construction. I’m going to be creating the top part of a desk (essentially a box with all of my equipment inside) that I’ll put on pre-fab legs.


Ben suggested I prototype in cardboard, which I’ve been doing a ton of over the past few days. It’s really informed my decisions about scale and the placement of things, and been helpful in thinking through a series of challenges with the construction. Here are some highlights from my fabrication prototyping adventures.



At Nick Hubbard’s suggestion, I went to look at an exhibition at Brooklyn Historical Society about letter writing. It gave me some great ideas about scale, as well as subtle visual (and aural) cues I can use to create the experience I want. I also visited an exhibition on the first floor designed by Potion that had some very satisfying tactile interactions. Pics below.



Over the past few weeks, I’ve gotten the bulk of my code working. I still have to do my user testing and see all the ways that things can go wrong when humans interact with it, but the basic flow is up and running. I also may have to make some changes to the image processing (rotating, etc.) depending on the final orientation I use for my scanner. I have the code working with the Epson inkjet printer, but I’ll no longer be using that one (the printer saga is a whole other story).

Below are some videos of the following: getting my OCR to work, the first time my code worked (scan a memory –> pick a similar one –> send it to printer) and some snapshots of me taking apart my scanner so that I could hack the button. I eventually met with Eric Rosenthal who was able to help me rewire to an external button, because he is magic.



I learned a ton about my project — what works, what doesn’t — at the Q&D show, so I decided to follow it up the next week with another user test. I began building my software immediately afterward (using Node.js and a number of printing and file-watching packages), but it wasn’t ready in time to test.

Alon had a great suggestion (and a great video to send me for inspiration) that I user test the core of my interaction in an analog way. So I built a machine interface out of cardboard and conducted a user test sitting behind it. Literally, I became the machine.


Even though the conditions weren’t perfect, I emerged with a lot of questions and surprising observations about how people received my project. It also taught me a lot about the way I need to set up the environment of the installation to get the kinds of responses I want. Some (very belated) documentation is below.



Quick and Dirty Show Prep

Over the past few weeks, I’ve spent most of my time building something to test at the Quick and Dirty show. Up until this point, all of my ‘prototypes’ were really just story collecting experiments with spreadsheets and cardboard boxes so it was nice to have a deadline for myself to actually build something interactive, even though it’s still far from the final form.

I made a poster for the show that illustrates the steps of the interaction. I also narrowed down aspects of the physical experience and decided I want the memories to be handwritten / printed and scanned in a device.


I’ve collected a lot of memories from people over the past few weeks but hadn’t actually tried giving a memory back to the person submitting one so I knew I wanted to emulate this for the show. Even though my final input and output will be physical (a handwritten memory, and a printout of someone else’s handwritten memory), I decided to build a simple web prototype that would allow me to test primarily how people felt about the process of inputting a memory, and getting a memory back. This is basically the entire premise of my project, so it was important to me to test this mechanic to see if it worked at all. I was also curious to observe people in the act of writing their memories.

I made a quick prototype for the browser in which users select a prompt, type in a memory based on that prompt, and get back a handwritten image of someone else’s memory when they submit. I pre-populated my database of memories with handwritten memories sourced from my friends and networks, supplemented by Amazon’s Mechanical Turk.

The live web prototype is here:


User Testing and Feedback


I had a number of things I wanted to get feedback on at the show. Primarily, I was interested in seeing how people felt about the relationship between their memory and the one they got back in the web demo. I also printed little cards with handwritten memories on them to give people a tactile sense of the output.

While people were testing my project, I made observations and timed how long people spent writing. I also had a list of specific questions on hand to guide the feedback:

  • Does the interaction make sense?
  • Do you understand the relationship between input and output? Do you see a similarity or have any connection with it?
  • Do you like to choose your prompt versus having a single prompt?
  • Prompts – were they hard or easy to answer? Too specific? Do you need something more vague?
  • Did you want to share? What would make you more or less inclined to participate in this experience?
  • Did you like giving input by writing something?
  • Length of the interaction – Do you like the immediate reveal? Do you want to wait? How long does it take to reveal?
  • How inclined are you to participate in this experience? What would make you want to share a memory?
  • Do you want to see analysis of your own memories? or is there something magical about not knowing the basis of the match?
  • Did you feel like you got a private moment in public space? What did this make you feel?

People “got” the project and generally responded really well. I also figured out ways that I could simplify and streamline the project, which is a great thing to find out before beginning to build, troubleshooting software, etc. My main takeaways were as follows:

  • need to give more attention to how I design the physical setup (more private, writing on paper)
  • like the handwriting aspect, feels personal and important to the project
  • question of why i want to read another person’s memory
  • prompts – still not right? made people think of sad and scary things
  • it’s fine to just give people back a random memory for the same prompt rather than use fancy language processing; people can make that mental leap themselves

Technical Architecture

This week I started to think about the technical architecture of my project, since there are a lot of moving parts to it. I met with Allison Parrish to talk about the language “matching” that I was planning to do using gensim (a Python library for topic modelling and measuring similarity between texts), as well as interfacing with the scanner and printer on either end.
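For context, the kind of “matching” gensim does boils down to comparing texts as word-count vectors. Here is a toy pure-Python version of that similarity measure (not gensim itself, just an illustration of the underlying idea):

```python
import math
from collections import Counter

def similarity(a, b):
    """Cosine similarity between two texts' bag-of-words vectors (0 to 1)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Matching would then mean scoring the new memory against every stored one and returning the highest-scoring text.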

Through talking to Allison, I realized that I was trying to take on too much technically for the scope of the next 7 weeks. She advised me to think about what was core to the project and remove elements that I didn’t need. The QD Show was really helpful in showing me how to simplify; I was planning to do fancy language processing to match the two texts based on similar language, but the show made me realize that I don’t need that. Luckily, the connections between the input and output only need to be human readable, and thankfully people are good at making mental leaps. So for now at least, I’ll be keeping it a bit simpler and giving each user a memory back from the same prompt as the one they wrote about. My revised system diagram is as follows.
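In code, that simplification is tiny: file each memory under its prompt and hand back a random earlier one from the same bucket. A minimal sketch (function and variable names are hypothetical):

```python
import random

def exchange(memories, prompt, new_memory):
    """Store the new memory under its prompt; return a random existing
    memory from the same prompt (None if this is the first one)."""
    pool = memories.setdefault(prompt, [])
    others = list(pool)  # candidates written by other people
    pool.append(new_memory)
    return random.choice(others) if others else None
```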


Next Steps

  • Refining flow / system diagram for my project (remove language processing), see above
  • Designing technical architecture for scanning / printing (which language will I use, etc.) — meeting with Lauren, Allison Parrish, Shiffman
  • Ordering scanners and beginning to work with the drivers
  • SMS prototype for Waverley Screen
  • Cardboard prototype of the experience (design an actual machine)
  • Creating a detailed timeline for the next 4 weeks and finalizing it


Waterline is a map of a walk that I took along part of the border of Hurricane Sandy’s flood zones in New York. Using a map of the inundated areas (based on data sourced from NYC Open Data), I traced part of its edge on foot, along a route from Brooklyn Heights to Red Hook. I began this project as part of my final for Digital Mapping, and I hope to continue to develop it, eventually covering the entirety of the flood zone’s edge and mapping it through crowdsourced walks.

This project came out of an early assignment I did for the class, which was a simple map of the flood inundation zones from Hurricane Sandy from GeoJSON data I found online.
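For reference, a GeoJSON file like that is just nested JSON, and pulling out the polygon outlines to draw takes only the standard library. A minimal sketch (the filename is hypothetical; the field names follow the GeoJSON format):

```python
import json

def polygon_outlines(geojson):
    """Yield the exterior ring (a list of [lon, lat] pairs) of each
    Polygon or MultiPolygon feature in a GeoJSON FeatureCollection."""
    for feature in geojson["features"]:
        geom = feature["geometry"]
        if geom["type"] == "Polygon":
            yield geom["coordinates"][0]  # first ring = exterior
        elif geom["type"] == "MultiPolygon":
            for poly in geom["coordinates"]:
                yield poly[0]

# zones = json.load(open("sandy_inundation.geojson"))  # hypothetical file
# for ring in polygon_outlines(zones):
#     ...  # draw the ring on the map
```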


At the same time, I was (and still am) doing a lot of research around memory and urban space related to my thesis.


I had the idea to do something related to walking, as a way to explore the lived experience of geospatial data in our urban environment, as well as the relationship between the scale of data and the scale of my own body. I thought this might be an interesting way to map the invisible — the traces of something that happened, traces we can no longer see — and try to spatialize it for myself. Here are some of the questions I posed to myself:


As a first attempt at answering, or at the very least complicating, some of these questions, I decided to take a walk along part of the border of the flood zone from Hurricane Sandy based on the data I had mapped. I would take the walk, document it with images, and then make my own map tracing the outline of the inundation zone by tracking my location while I walked.



This week, I did 1.5 assignments — a web prototype for my thesis project, and a small experiment in response to the surveillance project prompts that I would like to develop into a more fleshed out project.

Project #1

My thesis is a participatory installation that connects two people through the metadata of their memories. Based on a given prompt, users are invited to anonymously contribute a handwritten memory and scan it into the machine. When you submit a memory, you get back a print of someone else’s memory, matched on common language. I wanted to test people’s feelings about inputting a memory, and their relationship to the output, so I made a simple web prototype that does just that. I pre-populated my database of memories with handwritten memories sourced from my friends and networks, supplemented by Amazon’s Mechanical Turk. (I have a lot more to say about the strangeness of paying people for their memories, but I could write a separate post about that).


The web prototype is below along with some documentation from my testing.


Project #1.5

For the surveillance assignment, I used my thesis project as a jumping-off point to think about surveillance in a broader sense. I was interested in the suggestion to collect the same piece of information from 100 people, and this got me thinking about the various ways in which private information can be hidden in public. We leave traces of ourselves everywhere — from our Instagram feeds to the DNA we leave behind in our environments every day. I’ve thought a lot about whether we can reconstruct or reverse-engineer portraits of people — some sense of who they are — from the things they leave behind (much like Heather Dewey-Hagborg’s Stranger Visions, a project that has influenced my thinking a lot).

I am interested less in data and more in human stories: in the idea of leaving private things in public (or having private moments in public spaces), and in the possibility of using those traces as a glimpse into somebody’s interior life. That ranges from bathroom graffiti to PostSecret. So for this assignment I decided to turn to Craigslist as a place where people post things — objects, personal ads, listings — that might contain stories, or at the very least, pieces of personal information.

I had a few different ideas — the first was to scrape Craigslist for listings in which people leave personal stories with objects they are selling; the second was to scrape all the items being given away for free (to see if you can see people in the things they no longer want), and the last was to scrape the mirrors being sold on Craigslist as a way to inadvertently collect pictures of people’s personal spaces / domestic interiors.


I decided to go with the last option and create a simple scrolling, one page website with a gallery of images of mirrors and interiors from Craigslist:


Technical approach + Challenges

I also thought this would be a great opportunity to learn how to scrape a website. I looked into some off-the-shelf tools, but because of the way Craigslist builds its pages, they didn’t work and I couldn’t grab the images.

I realized that Craigslist’s posting images are not in the static HTML; they are dynamically loaded into carousels for each listing, so they don’t show up as part of the page content, even when scraping with BeautifulSoup. I haven’t yet figured out how to interface with the page’s JavaScript to grab the images programmatically, so I had to settle for a hack for the time being.

In the meantime, I used Python’s BeautifulSoup to scrape the search results, which gave me the URLs of the 100 most recent mirror listings. I still went through and grabbed the image for each one manually, but this sped up the process considerably. I hope to keep working on this to figure out how to do purely automated scraping of the images (with a bit of manual curation thrown in).
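The scraping step itself is short. A minimal sketch of extracting listing URLs from a fetched search-results page (the CSS selector is illustrative; Craigslist’s markup changes over time):

```python
from bs4 import BeautifulSoup

def listing_urls(html, selector="a.result-title"):
    """Extract listing URLs from a search-results page.
    The selector is an assumption: adjust it to match the live markup."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.select(selector)]
```

The fetch itself can be done with `urllib.request` or `requests`; this function only handles the parsing.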


Something I also thought about here is the balance between automation and manual work. There are a lot of junk and repeat postings on Craigslist, so it was important to me to comb through all the images and select just the ones I wanted. I also want to figure out how I can push this project to be more than just a series of images.

Other things I want to add to finish up the page:
– more image results
– crop images instead of squishing them (they’re currently getting distorted)
– make the photo grid responsive
– add a lightbox
