rebecca's WIP

computational portraiture

Today I met with Julia to go over advice for re-texturing some of my Hart Island models in preparation for building our Unity world. I’ve been getting great geometries in Photoscan, but when I view the models in Unity the textures look pixelated and fuzzy, which is not what we want. I’m going to need to do some serious Photoshopping in the coming days to get the level of detail I want. One of Julia’s suggestions was to export the texture maps as PNG files rather than JPGs, since JPG compression is lossy and smears the edges between the islands of the texture atlas, which can make re-texturing more difficult. This afternoon I tried doing this for some of my models and got the following texture maps. This doesn’t have much to do with my project, but I think they’re really beautiful images that capture the unfolding of a space.

[Texture maps: 13_ferry_dock, abandoned_bldg]
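(A side note for the workflow-inclined: PhotoScan also has a built-in Python console, and I believe the PNG export can be scripted. This is just a rough sketch based on my reading of the 1.x API reference; the exact parameter names are my assumption and may differ between versions.)

```python
# Rough sketch for PhotoScan's built-in Python console.
# Exports the active chunk's model with PNG texture maps instead of JPG.
# NOTE: parameter names follow the PhotoScan 1.x API docs as I understand
# them and may differ in other versions; the filename is just an example.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk  # the currently active chunk

chunk.exportModel("hart_island_ferry_dock.obj",
                  format="obj",
                  texture=True,
                  texture_format="png")  # PNG avoids JPG compression artifacts
```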


Some screen recordings of Apple Maps 3D flyovers that will hopefully be used with photogrammetry as the basis for our virtual museum. I wish there were a way to export an OBJ file or the depth information for a given place.

More info on Apple’s and Google’s large-scale photogrammetry mapping operations:
http://aerometrex.com.au/blog/?p=752
http://www.gearthblog.com/blog/archives/2014/09/google-earth-automatically-generated-3d-mesh.html


[Screenshots from the Apple Maps 3D flyover recordings]


PROCESS

I began by spending a lot of time on Periscope, watching people’s live streams and noticing patterns in the way people portray themselves on this kind of platform. I started to take notice of a strange tension between performativity and intimacy; during a livestream, people let you into incredibly intimate (and often banal) moments in their lives, like getting ready for work, putting on makeup, or watching TV with a spouse, in a highly performative way, broadcasting their lives to a mostly invisible audience. (As a viewer on Periscope, you can talk to the person streaming via chat if their chat is enabled, but there is no verbal communication.)

I also found myself interested in the ephemerality of these streams, their precarious “liveness.” You see into people’s lives for a brief moment, and then they’re gone. I had the idea to try to capture or preserve these moments, these windows into people, by making 3D portraits using Periscope video streams as the basis for photogrammetry.

Read More

Below are some progress snapshots of 3D models generated with Photoscan, a photogrammetry program, and Meshmixer, a 3D sculpting tool. I began the project by making models from Periscope livestream videos, which I split into stills and fed through the photogrammetry software.
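The splitting step doesn’t need anything fancy. Here’s a minimal sketch of the kind of script I mean, in Python with OpenCV; the filenames and frame interval are placeholders, and it assumes opencv-python is installed:

```python
# split_frames.py -- minimal sketch: turn a screen-recorded Periscope
# stream into stills for photogrammetry. Filenames are placeholders.
import cv2
import os

VIDEO_PATH = "periscope_stream.mp4"  # hypothetical screen recording
OUT_DIR = "stills"
EVERY_N_FRAMES = 15  # ~2 stills/sec for 30fps video; tune for overlap

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)

count = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if count % EVERY_N_FRAMES == 0:
        # PNG stills, so no extra compression artifacts go into Photoscan
        cv2.imwrite(os.path.join(OUT_DIR, "frame_%04d.png" % saved), frame)
        saved += 1
    count += 1

cap.release()
print("wrote %d stills to %s" % (saved, OUT_DIR))
```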

When I became frustrated with low-res video feeds, crappy models, and angry people (many of whom blocked me and refused to let me make portraits of them), I turned to Amazon Mechanical Turk as an experiment: I paid people to send me high-resolution videos of their faces from different angles. The Mechanical Turk models are, as I suspected, much better quality, but I feel less inclined to go that route since it’s not as interesting to me conceptually; it loses a lot of the interesting questions around liveness, digital traces, voyeurism, and so on. I’ll be tightening things up conceptually and creating a Unity environment before I present my final project on Saturday, but I hope this is the beginning of a long journey with this project.
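If you wanted to script the Mechanical Turk posting rather than use the web interface, a rough sketch with boto3’s MTurk client might look like the following. The upload URL, reward amount, and task wording are all illustrative placeholders, not a record of my actual setup:

```python
# post_hit.py -- hedged sketch of posting the video-collection task
# with boto3's MTurk client. Values below are made up for illustration.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# ExternalQuestion points workers at a form where they upload their video;
# the URL here is a placeholder.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/upload-face-video</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Record a short video of your face from all angles",
    Description="Slowly pan a camera around your face for about 30 seconds.",
    Keywords="video, face, recording",
    Reward="2.00",                        # USD, as a string
    MaxAssignments=10,                    # number of workers
    LifetimeInSeconds=7 * 24 * 3600,      # HIT visible for one week
    AssignmentDurationInSeconds=30 * 60,  # time allowed per worker
    Question=question_xml,
)
print("HIT ID:", hit["HIT"]["HITId"])
```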

[Progress snapshots of the Photoscan/Meshmixer models]


PROJECT BRIEF
Create a computational portrait.

CONSTRAINT
The portrait must portray a person in an environment, conveying something about them that is not visible to the eye. It must be of a person who is not an ITP or NYU student.

INSPIRATION
Consider integrating one or many of these creative constraints into your portrait:
– The viewer of the portrait may interactively influence the perspective.
– The portrait is synthesized from entirely found images that you did not take yourself.
– The portrait’s final form is not an image, but rather a sculpture.
– Place a real person in an imaginary space.
– The person portrayed was never actually present in the environment depicted.


Concept 1: Periscope Portraits

[periscope_collage: stills from Periscope livestreams]

I’m interested in the culture of live image-making and have recently been drawn to Periscope as one of many platforms for this weird kind of performative broadcasting. What interests me most about the streams as mini-portraits is both their intimacy and their ephemerality; you see into people’s lives for a brief moment and then they’re gone. I’d be interested in using Periscope livestreams as a way to create portraits, to humanize these flat people and preserve these moments of them in their “natural habitat” (I’ve been thinking about taxidermy as a crude analogy).

To do this, I would create either 3D videos or static 3D scans of a series of people using screen captures of the livestreams, and pair these scans with the audio of their ‘story.’ I’m not exactly sure what the process would be: it might be something along the lines of The Ghosts of Strangers project (http://theghostsofstrangers.com/), where I’d actively solicit images of a person and ask them to do a full-body scan, or I might be more of a voyeur, collecting video without their knowledge. My concerns about pursuing this direction are ‘ethical’ (slightly too strong a word) questions around dehumanizing or flattening my subjects, but also possible technical issues where I don’t get enough coverage of a person’s face and environment. I’m open to working with glitches and mistakes, but I also worry that the image quality would be low and the models would look crappy. I foresee ultimately working with these models in some kind of game engine, most likely Unity, and outputting them as an explorable environment or VR experience.
