Quick and Dirty Show Prep
Over the past few weeks, I’ve spent most of my time building something to test at the Quick and Dirty Show. Up until this point, all of my ‘prototypes’ were really just story-collecting experiments with spreadsheets and cardboard boxes, so it was nice to have a deadline that pushed me to actually build something interactive, even though it’s still far from the final form.
I made a poster for the show that illustrates the steps of the interaction. I also narrowed down aspects of the physical experience and decided that I want the memories to be handwritten, scanned into a device, and printed back out.
I’ve collected a lot of memories from people over the past few weeks, but I hadn’t actually tried giving a memory back to the person submitting one, so I knew I wanted to emulate this at the show. Even though my final input and output will be physical (a handwritten memory, and a printout of someone else’s handwritten memory), I decided to build a simple web prototype that would let me test, above all, how people felt about the process of putting in a memory and getting one back. This exchange is basically the entire premise of my project, so it was important to me to test the mechanic and see if it worked at all. I was also curious to observe people in the act of writing their memories.
I made a quick prototype for the browser in which users select a prompt, type in a memory based on that prompt, and get back a handwritten image of someone else’s memory when they submit. I pre-populated my database with handwritten memories sourced from friends and my networks, supplemented by Amazon’s Mechanical Turk.
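The logic behind the demo is pretty thin. Here’s a rough sketch of what the server side could look like in Flask; the route, prompts, and file names are placeholders I’m using for illustration, not my actual code:

```python
import random
from flask import Flask, request, jsonify

app = Flask(__name__)

# Pre-populated "database": scans of handwritten memories, keyed by
# the prompt they were written in response to. (Invented examples.)
MEMORIES = {
    "a time you got lost": ["scans/lost-01.png", "scans/lost-02.png"],
    "a meal you still think about": ["scans/meal-01.png"],
}

submissions = []  # typed memories collected during the show

@app.route("/submit", methods=["POST"])
def submit():
    prompt = request.form["prompt"]
    submissions.append({"prompt": prompt, "text": request.form["memory"]})
    # Hand back an image of someone else's memory for the same prompt.
    return jsonify({"image": random.choice(MEMORIES[prompt])})
```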
The live web prototype is here:
User Testing and Feedback
I had a number of things I wanted to get feedback on at the show. Primarily, I was interested in seeing how people felt about the relationship between their memory and the one they got back in the web demo. I also printed little cards with handwritten memories on them to give people a tactile sense of the output.
While people were testing my project, I took notes on their behavior and timed how long they spent writing. I also kept a list of specific questions on hand to guide the feedback:
- Does the interaction make sense?
- Do you understand the relationship between input and output? Do you see a similarity or have any connection with it?
- Do you prefer choosing your prompt to being given a single prompt?
- Prompts – were they hard or easy to answer? Too specific? Would something vaguer work better?
- Did you want to share? What would make you more or less inclined to participate in this experience?
- Did you like giving input by writing something?
- Length of the interaction – do you like the immediate reveal? Would you rather wait? How long should the reveal take?
- How inclined are you to participate in this experience? What would make you want to share a memory?
- Do you want to see analysis of your own memories? Or is there something magical about not knowing the basis of the match?
- Did you feel like you got a private moment in public space? How did this make you feel?
People “got” the project and generally responded really well. I also figured out ways to simplify and streamline the project, which is a great thing to learn before I begin building and troubleshooting software. My main takeaways were as follows:
- need to give more attention to how I design the physical setup (more private, writing on paper)
- like the handwriting aspect, feels personal and important to the project
- question of why I would want to read another person’s memory
- prompts – still not quite right; they made people think of sad and scary things
- it’s fine to just give people back a random memory written for the same prompt rather than use fancy language processing; people can make that mental leap themselves
This week I started to think about the technical architecture of my project, since there are a lot of moving parts. I met with Allison Parrish to talk about the language “matching” I was planning to do with gensim (a Python library for measuring similarity between texts), as well as about interfacing with the scanner and printer on either end.
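For context, the kind of matching I was considering looks roughly like this with gensim: build a TF-IDF index over the stored memories and return the one closest to a new submission. A minimal sketch using gensim’s standard similarity workflow (the sample memories are invented):

```python
from gensim import corpora, models, similarities

# Transcriptions of the stored handwritten memories (invented examples).
memories = [
    "we got lost on the way to the beach and found a lighthouse",
    "my grandmother's kitchen always smelled like cardamom and toast",
    "the summer we slept on the roof because of the heat",
]

texts = [m.lower().split() for m in memories]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
tfidf = models.TfidfModel(corpus)
index = similarities.MatrixSimilarity(tfidf[corpus],
                                      num_features=len(dictionary))

def closest_memory(new_text):
    """Return the stored memory most similar to a new submission."""
    query = tfidf[dictionary.doc2bow(new_text.lower().split())]
    sims = index[query]  # cosine similarity against every stored memory
    return memories[int(sims.argmax())]
```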
Through talking to Allison, I realized that I was trying to take on too much technically for the scope of the next seven weeks. She advised me to think about what was core to the project and remove elements I didn’t need. The QD Show was really helpful in showing me how to simplify: I was planning to do fancy language processing to match the two texts based on similar language, but the show made me realize I don’t need it. The connections between input and output only need to be human-readable, and thankfully people are good at making mental leaps. So for now, at least, I’ll keep things simpler and give each user back a memory written in response to the same prompt they chose. My revised system diagram is as follows.
Next Steps
- Refining the flow / system diagram for my project (remove language processing), see above
- Designing the technical architecture for scanning / printing (which language I’ll use, etc.); meeting with Lauren, Allison Parrish, and Shiffman (one possible scan/print pipeline is sketched after this list)
- Ordering scanners and beginning to work with the drivers
- SMS prototype for Waverley Screen
- Cardboard prototype of the experience (design an actual machine)
- Creating and finalizing a detailed timeline for the next 4 weeks
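For the scanning / printing item above, here’s one possible shape for the pipeline, assuming a SANE-compatible scanner (driven by the scanimage command-line tool) and a CUPS printer (via lp). I haven’t committed to this stack yet, so this is just a sketch:

```python
import subprocess

def scan_memory(outfile="incoming-memory.tiff"):
    """Scan the handwritten card that was just dropped into the device.
    Assumes a SANE-compatible scanner and the `scanimage` CLI."""
    with open(outfile, "wb") as f:
        subprocess.run(
            ["scanimage", "--format=tiff", "--resolution", "300"],
            stdout=f, check=True,
        )
    return outfile

def print_memory(image_path):
    """Print someone else's scanned memory using CUPS's `lp` command."""
    subprocess.run(["lp", image_path], check=True)
```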