Kristina You is a multidisciplinary designer based in New York, NY, and a recent graduate of the ITP Program at NYU. Her practice focuses on UI/UX, interaction design, print design, and brand development.


Hypercinema

Synthetic Media: Runway ML



Identify 3 models on Runway ML (or another tool) you are most intrigued by and a concept for each of them that could be used for this assignment.



1. Kids Self Portrait GAN


The first model that really intrigued me was the Kids Self Portrait GAN, which generates portraits in the style of children's drawings. One idea would be to take portraits by famous painters that are exhibited in museums and convert them into children's-style portraits. I would then create mock-ups of these converted portraits and place them in a new museum environment, imagining a museum filled with children's art, which is not often treated as "real" art. I think it would be a really interesting way to see how this changes the ambiance of a museum setting. I plan to start with portraits and possibly experiment with advertisements as well.




2. SPADE COCO/SPADE FACE



The second models I would like to experiment with are SPADE COCO and SPADE FACE, which generate photographic images from sketches made of simple shapes, drawing on large datasets of real images. Similar to the first concept, I would like to take existing images of art, advertisements, or found images and see how closely I can replicate them using these models. I think this raises an interesting point, especially given the material we are covering in class right now about ownership and appropriation of art. It would also be interesting to test the limits of these synthetic media models and see how easily and accurately they can replicate other pieces of art or images.
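As a rough starting point, a found image could be flattened into a handful of solid color regions to stand in for the hand-drawn sketch input these models expect. Below is a minimal sketch of that step, assuming a local workflow with Pillow; the file names and palette size are my own placeholders, not part of Runway's workflow.

```python
# One possible preprocessing step for the SPADE experiment: reduce a found
# image to a few flat colored shapes so it can serve as the simple "sketch"
# input for the SPADE COCO/FACE models. File names and the region count
# are assumptions for illustration only.
from PIL import Image

SOURCE = "found_artwork.jpg"   # the painting or advertisement to replicate
N_REGIONS = 8                  # how many flat color regions to keep

img = Image.open(SOURCE).convert("RGB")

# Quantizing to a small palette collapses the image into simple shapes.
sketch = img.quantize(colors=N_REGIONS).convert("RGB")
sketch.save("spade_input_sketch.png")
print(f"Saved a {N_REGIONS}-region sketch to feed into SPADE.")
```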





3. DenseCap / AttnGAN
 



The third pair of models that I am interested in building a project with is DenseCap and AttnGAN: DenseCap generates text descriptions from images, and AttnGAN generates images from text. I think an interesting project would be to start with a random found image of a scene or objects, obtain a description of the image through DenseCap, and then feed that description into AttnGAN to generate a new image. I would repeat this process until the result is unrecognizable from the initial input image, creating a series of images generated along the way. This project would have a similar purpose to the second idea: to test the limits of these ML models and to question ownership and appropriation of an image. A rough sketch of this feedback loop is included below.
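Below is a minimal sketch of how that loop could be automated, assuming DenseCap and AttnGAN are each running locally and exposed as simple HTTP endpoints; the URLs, ports, and JSON field names are hypothetical placeholders rather than Runway's documented API.

```python
# A minimal sketch of the DenseCap -> AttnGAN feedback loop.
# Assumes both models are running locally (e.g., hosted through Runway ML)
# behind simple HTTP endpoints; the URLs, ports, and JSON field names
# below are hypothetical placeholders, not a documented API.
import base64
import requests

DENSECAP_URL = "http://localhost:8001/query"   # hypothetical caption endpoint
ATTNGAN_URL = "http://localhost:8002/query"    # hypothetical text-to-image endpoint
ITERATIONS = 10                                # how many caption/generate cycles to run


def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string for the request body."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def decode_image(data: str, path: str) -> None:
    """Write a base64-encoded image returned by a model back to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(data))


image_path = "found_image.jpg"  # the starting found image
for i in range(ITERATIONS):
    # 1. Ask DenseCap for a text description of the current image.
    caption = requests.post(
        DENSECAP_URL, json={"image": encode_image(image_path)}
    ).json()["caption"]

    # 2. Ask AttnGAN to generate a new image from that description.
    generated = requests.post(
        ATTNGAN_URL, json={"caption": caption}
    ).json()["image"]

    # 3. Save the result and use it as the input for the next cycle.
    image_path = f"generation_{i + 1:02d}.png"
    decode_image(generated, image_path)
    print(f"cycle {i + 1}: {caption!r} -> {image_path}")
```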





