rebecca's WIP

Archive
programming a to z

For my final project in Programming A to Z, I have a few different ideas that grow out of my thinking on some past projects I’ve made, such as my visually similar image-processing algorithm and my Twitter bot @Dronesweetie. The interactions aren’t totally fleshed out at the moment, so I’m hoping to get feedback from the class on how to push the ideas further.

1) Image recognition Chrome extension (2 variations)

A big interest of mine is the way machines see, in particular how machines translate images into language, or into other (similar) images. Last year, I did a couple of projects with a deep learning tool from the University of Toronto that translated images into sentences using neural networks. Unfortunately, this tool is now defunct.

I have two variations on an idea for a Chrome extension that recognizes the images on a web page. The first idea is that for every image on a webpage, the extension would swap it out with a visually similar image. I’m still on the hunt for a good API that can do this, but I think I could use the following workflow:

Original image URL → Clarifai API → get tags / keywords → feed into image search API → replace original image with new image
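To make the workflow concrete, here’s a rough sketch of what the extension’s content script might look like. Both endpoint URLs are stand-ins: the real Clarifai tagging endpoint and whichever image search API I end up with will have different request and response shapes, so treat this as the shape of the idea, not working code.

```javascript
// Content-script sketch for the image-swapping extension. The endpoints
// and response shapes below are placeholders (not real APIs).
var TAG_ENDPOINT = 'https://api.example.com/tag';       // stand-in for Clarifai
var SEARCH_ENDPOINT = 'https://api.example.com/search'; // stand-in for image search

function swapImage(img) {
  // Step 1: original image URL -> tags / keywords
  fetch(TAG_ENDPOINT + '?url=' + encodeURIComponent(img.src))
    .then(function (res) { return res.json(); })
    .then(function (data) {
      // Step 2: feed the keywords into an image search
      var query = data.tags.join(' '); // assumed response shape
      return fetch(SEARCH_ENDPOINT + '?q=' + encodeURIComponent(query));
    })
    .then(function (res) { return res.json(); })
    .then(function (data) {
      // Step 3: replace the original image with the first result
      if (data.results && data.results.length) {
        img.src = data.results[0].url; // assumed response shape
      }
    });
}

// Run over every image on the page
Array.prototype.forEach.call(document.images, swapImage);
```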

[Mockup: a New York Times page with its original images]

[Mockup: the same page with each image swapped for a visually similar one]

The challenge with this idea is that it might just be confusing and not that impactful. One big question on my mind is how to indicate to the user that these are different images (if at all). I did, luckily, find an API that seems to be able to retrieve similar images (though I haven’t looked closely at the documentation).


The second idea is a play on the notion that a picture is worth a thousand words. What if, with my Chrome extension installed, there were no images on a web page, only placeholder images with descriptions of the images that were originally there?

[Mockup: the same page with each image replaced by a placeholder description]

The challenge here would be generating the sentences for each image, given that I don’t have access to a ready-made tool or API that does this for me. I could use the keywords plus some text generation system to produce simple sentences that describe the images. (Thoughts on this would be helpful.)
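As a starting point, something as simple as dropping the keywords into a handful of templates might be enough. A sketch, assuming the tagger returns tags roughly ordered by confidence (the templates are just my first guesses):

```javascript
// Sketch: generate a placeholder sentence from an image's keywords.
var templates = [
  'A picture of KEYWORDS.',
  'This image appears to show KEYWORDS.',
  'There was once an image of KEYWORDS here.'
];

function describe(tags) {
  // Join the top three tags into a readable phrase: "a, b and c"
  var top = tags.slice(0, 3);
  var phrase = top.length > 1
    ? top.slice(0, -1).join(', ') + ' and ' + top[top.length - 1]
    : top[0];
  var template = templates[Math.floor(Math.random() * templates.length)];
  return template.replace('KEYWORDS', phrase);
}

// describe(['protest', 'crowd', 'street'])
// -> e.g. "This image appears to show protest, crowd and street."
```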

Relevant projects and APIs:
@Dronesweetie
Interesting JPG
Google / Stanford research on neural networks
Clarifai – does keyword tagging
IBM Watson Alchemy Vision – does image tagging, face detection, similar images + detailed documentation
IBM Watson Visual Recognition

2) UFO Sighting Twitter Bot

I came across this database of crowdsourced UFO sightings from the National UFO Reporting Center and I immediately thought it could be an interesting corpus of text for a project.


I have always been interested in the paranormal and the culture surrounding it (going back to the work I did in college with numbers station recordings), but I think there’s something more poetic about these descriptions of foreign things floating in the sky. After all, UFO simply stands for “unidentified flying object,” and there’s something resonant about that idea in the age of drones, terrorism, and widespread fear of things that are “other.”

My idea is to create a Twitter bot that periodically tweets these mysterious sightings. It will scrape the NUFORC database to build a corpus of text, then use a Markov chain (or a CFG, or another text generation method) to create collaged sentences from these sightings. I’m interested in using a Markov chain because I want to create abstractions of the sightings (rather than reproduce the actual reports), since there’s an interesting tension between their specificity and their scale / repetition. A few of the raw descriptions (a sketch of the Markov chain follows them):

A bar of lights in the sky. Sustained in sky as if it was hovering then bank left in a straight line in the sky.

1 green light flying horizontally across the sky, then going down towards the ground at 45 degree angle.

Flying triangle made no sound.

Orange and green lights on a dark obj. with no obvious noise, no obvious wings and no propellers

V shaped object with 6 white lights and 1 red light at the point. When I got to the stop light it took off with a sudden burst of speed and was out of sight in no time.

Side note: the descriptions also remind me of what comes out of neural network image-to-text generators.
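As for the generation itself, a minimal word-level Markov chain is probably where I’d start. Something like this sketch, where every pair of consecutive words maps to the words seen after that pair in the corpus (buildChain and generate are my own stand-ins, not a library):

```javascript
// Minimal word-level Markov chain (order 2) over the sighting descriptions.
function buildChain(corpus) {
  var words = corpus.trim().split(/\s+/);
  var chain = {};
  for (var i = 0; i < words.length - 2; i++) {
    var key = words[i] + ' ' + words[i + 1];
    if (!chain[key]) chain[key] = [];
    chain[key].push(words[i + 2]);
  }
  return chain;
}

function generate(chain, maxWords) {
  var keys = Object.keys(chain);
  var key = keys[Math.floor(Math.random() * keys.length)];
  var out = key.split(' ');
  while (out.length < maxWords) {
    var nexts = chain[key];
    if (!nexts) break; // dead end: this pair never continued in the corpus
    out.push(nexts[Math.floor(Math.random() * nexts.length)]);
    key = out[out.length - 2] + ' ' + out[out.length - 1];
  }
  return out.join(' ');
}

// var chain = buildChain(allSightingText);
// var tweetText = generate(chain, 20); // then trim to 140 characters
```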

3) Extension of SkyMap project — either Twitter bot or digital cyanometer

My last idea is to continue developing the SkyMap project I started and turn it into a Twitter bot. Currently, I have a website that pulls images of the sky from Google Street View for a large dataset of US cities. When you roll over a tile of sky, it tells you where that sky is from.


One idea I had is to make a Twitter bot that replies to people with a picture of the sky where they are, using the latitude and longitude that geotagged tweets carry in their metadata. It could use Google Street View or Instagram to pull a geolocated picture. A question I have about this idea is what the criteria would be for replying to a tweet. Does the person mention the sky? Does the bot pick a tweet at random, as if to tell the person: get off your computer and go outside? Is it a more sinister gesture, as if the reply is to say, “I know where you are right now”? I’m not sure about this, or what the tone is.
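Setting the tone question aside, the mechanics could look roughly like this sketch, using the twit node module and the Google Street View Image API. The 'sky' filter is just one of the reply criteria above, and the keys are placeholders:

```javascript
// Reply-bot sketch: listen for geotagged tweets mentioning "sky" and
// reply with a Street View image of the sky at that location.
var Twit = require('twit');

var T = new Twit({
  consumer_key: '...',
  consumer_secret: '...',
  access_token: '...',
  access_token_secret: '...'
});

var stream = T.stream('statuses/filter', { track: 'sky' });

stream.on('tweet', function (tweet) {
  // Only geotagged tweets carry coordinates (GeoJSON order: [lng, lat])
  if (!tweet.coordinates) return;
  var lng = tweet.coordinates.coordinates[0];
  var lat = tweet.coordinates.coordinates[1];

  // pitch=90 points the Street View camera straight up at the sky
  var skyUrl = 'https://maps.googleapis.com/maps/api/streetview' +
    '?size=600x600&pitch=90&location=' + lat + ',' + lng + '&key=MY_KEY';

  T.post('statuses/update', {
    status: '@' + tweet.user.screen_name + ' the sky above you: ' + skyUrl,
    in_reply_to_status_id: tweet.id_str
  }, function (err) {
    if (err) console.error(err);
  });
});
```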

Another variation I had on this idea is to make a digital version of a cyanometer, a 225-year-old tool for measuring the blueness of the sky (a first-pass sketch follows the links below).


https://en.wikipedia.org/wiki/Cyanometer
http://www.thisiscolossal.com/2014/05/the-cyanometer-is-a-225-year-old-tool-for-measuring-the-blueness-of-the-sky/
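A first pass at the digital cyanometer could be a small p5 sketch: average the pixels of a sky image (say, one of the Street View tiles from SkyMap) and map the result onto the cyanometer’s 53-step scale of blues. The blueness formula here is my own naive guess, not Saussure’s actual calibration:

```javascript
// Digital cyanometer sketch in p5.
var sky;

function preload() {
  sky = loadImage('sky.jpg'); // any photo of the sky
}

function setup() {
  createCanvas(400, 200);
  sky.loadPixels();
  var r = 0, g = 0, b = 0;
  var n = sky.pixels.length / 4;
  for (var i = 0; i < sky.pixels.length; i += 4) {
    r += sky.pixels[i];
    g += sky.pixels[i + 1];
    b += sky.pixels[i + 2];
  }
  r /= n; g /= n; b /= n;

  // Crude blueness: how much the blue channel dominates the other two
  var blueness = constrain((b - (r + g) / 2) / 255, 0, 1);
  var step = round(blueness * 52); // 0 (palest) .. 52 (deepest blue)

  background(r, g, b);
  fill(0);
  textAlign(CENTER, CENTER);
  text('cyanometer reading: ' + step + ' / 52', width / 2, height / 2);
  noLoop();
}
```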


Find a source text and manually perform one of the mashup techniques below (or one of your own invention). Create a webpage with the results using some combination of HTML, CSS, and/or JavaScript. Host the page on GitHub Pages (or your own server). There is no need for programming for this assignment; it’s just about getting set up in an environment and starting to think about creative ways to play with text. However, you may choose to include animated or interactive elements if you like.

For our first week’s homework, I came up with a simple formula for rearranging text based on a series of noun phrases. The instructions are as follows.

  1. Choose a paragraph in a text
  2. Pick the first four noun phrases in that paragraph (excluding proper nouns), e.g. “green piece of paper”
  3. Find the end of the sentence containing the last noun phrase
  4. Skip two full sentences
  5. Add the next full sentence to the end of your text
  6. Example result: “The green piece of paper, the call box, one corner of the paper, the heat. Someone on the ground floor was playing music very loudly.”
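If I were to automate the formula later, it might look something like the sketch below. The regex sentence splitter is approximate, and getNounPhrases() is a hypothetical helper (in practice it could wrap a part-of-speech tagger such as RiTa.js):

```javascript
// Sketch of the mashup formula above. getNounPhrases() is hypothetical.
function mashup(paragraph, fullText) {
  var sentences = fullText.match(/[^.!?]+[.!?]/g) || [];

  // Steps 1-2: first four noun phrases in the paragraph, proper nouns excluded
  var phrases = getNounPhrases(paragraph).slice(0, 4);

  // Step 3: locate the sentence containing the last noun phrase
  var idx = 0;
  for (var i = 0; i < sentences.length; i++) {
    if (sentences[i].indexOf(phrases[phrases.length - 1]) !== -1) {
      idx = i;
      break;
    }
  }

  // Steps 4-5: skip two full sentences, then take the next one
  var tail = sentences[idx + 3] || '';

  // Step 6: list the noun phrases (keeping their articles), then append
  var opener = phrases.join(', ');
  return opener.charAt(0).toUpperCase() + opener.slice(1) + '.' + tail;
}
```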

Since I’m very new to JavaScript (with the exception of some dabbling in p5 last year), my main goal for this assignment was to familiarize myself with HTML / CSS, the Git / GitHub workflow, basic JavaScript syntax, and the p5 DOM library. Also, GitHub Pages are a dream.

You can see the results here.

 
