Generative Playgrounds from Sketches through Image Segmentation
Yasmin ran an experiment around creating a custom COCO dataset of annotated playground images, with the aim of using image segmentation, GANs, and the SPADE-COCO model to generate interesting, algorithmically-generated backgrounds for the Gather space we use during the AI Playground program. The goal was to explore image annotation, the creative applications of image segmentation, and the ethical considerations of dataset preparation.
Setup of the experiment - What is this about?
This experiment started from a conversation with Computational Mama (aka Ambika Joshi) for AI Playground. If you don't know what AI Playground is, it's a series of public online events we held on Gathertown, inviting artists and researchers who use AI in their practice. Ambika wanted a way to make the Gathertown event background more interesting or algorithmically generated.
My idea was that after each event, attendees could collectively draw a sketch with different parts of a playground, which would be fed as input into a Generative Adversarial Network (GAN) that would generate an image to be used as the background for the next event.
At first, I wanted to create a pix2pix model like Edges2Pikachu, where you can draw a sketch and it translates it into an image of Pikachu.
Demo of an image-to-image model; play with it here: https://affinelayer.com/pixsrv/
However, a playground is a far more complex scene than a single character, so I decided that image segmentation and the SPADE-COCO model would be a better fit, similar to the one you can play with on RunwayML here.
GitHub repo for the image segmentation model: github.com/agermanidis/SPADE-COCO
Another reason I did this experiment is that I haven't come across many custom image segmentation datasets (let me know if you have!). The custom dataset tutorial below is from Paperspace, so I feel this experiment can add to this developing area of image synthesis.
"Training GauGAN on Custom Datasets | Paperspace Blog"
This experiment is about the process of creating a custom COCO dataset of annotated playground images that can then be used to train a GauGAN-style model, similar to the one developed by NVIDIA below. They even have a live demo you can play with.
Output from NVIDIA's GauGAN2. Left is my sketch input with different colours, right is the output.
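To make the segmentation-map input concrete, here is a minimal Python sketch of the kind of colour-coded map a GauGAN/SPADE-style model takes as input. The colours, class names, and regions are made up for illustration; they are not the exact palette of any specific model.

```python
# Minimal sketch of a colour-coded segmentation map, the kind of input
# GauGAN/SPADE-style models expect. The class colours below are hypothetical;
# a real model maps each colour to a specific class ID in its dataset.
import numpy as np
from PIL import Image

H, W = 256, 256
seg = np.zeros((H, W, 3), dtype=np.uint8)

# Hypothetical palette: one flat colour per class
SKY   = (135, 206, 235)
GRASS = (34, 139, 34)
SLIDE = (220, 20, 60)

seg[:, :] = SKY                 # fill everything with "sky"
seg[160:, :] = GRASS            # bottom band becomes "grass"
seg[100:200, 90:140] = SLIDE    # a rectangle standing in for a slide

Image.fromarray(seg).save("playground_segmap.png")
```

The generator then "paints in" textures for each flat colour region, which is why a rough blocky sketch can come out the other side looking like a photo.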
Research questions:
- How can I annotate images and what's the best method/tool?
- What are the possible creative applications of image segmentation?
- What are some of the ethical questions of dataset preparation and how can they be resolved?
Output + Resources - What did you make?
GitHub Files
I have uploaded all the code I've used and referenced where I got it from. You can check out the repository for this experiment here.
For reference, here's the main GitHub repo for training an image segmentation model. I didn't successfully prep the dataset, so I didn't get to the point of training the model. GitHub repo for training image segmentation here.
Instagram guides
A Guide to Generative Image Models
Infographic on Instagram about different image models. Text-to-image diffusion models blew up in 2022, but they were preceded by an earlier family of generative models called Generative Adversarial Networks (GANs).
Rundown on Image Segmentation
Infographic on Instagram about a type of computer vision method called "image segmentation", which builds on object detection to produce pixel-perfect masks of objects in a complex scene.
Videos of process
I talk about the research stages, looking at different models and GitHub repos, and cover semantic image synthesis and NVIDIA's GauGAN2.
I talk about synthetic datasets and how I use scripting in Blender to rotate objects in a scene and potentially make as many images as I want.
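For anyone curious what that Blender scripting looks like, here is a minimal sketch of the approach. The object name, output folder, and step count are hypothetical; the idea is just to rotate an object a little at a time and render a still at each step.

```python
# Sketch of a Blender (bpy) script that rotates an object and renders a frame
# at each step, to build up a synthetic dataset. "Slide", the output path,
# and the number of steps are assumptions for illustration.
import math
import bpy

obj = bpy.data.objects["Slide"]   # hypothetical object name in the scene
scene = bpy.context.scene
steps = 36                        # one render every 10 degrees

for i in range(steps):
    obj.rotation_euler[2] = math.radians(i * 360 / steps)   # rotate around Z
    scene.render.filepath = f"//renders/slide_{i:03d}.png"  # path relative to the .blend file
    bpy.ops.render.render(write_still=True)
```

Looping over other properties (camera position, lighting, materials) in the same way is what lets a small scene turn into a large and varied dataset.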
Reflections - What did you learn?
At the beginning of the experiment, I was really determined to make a custom image segmentation model. Not only was it something I hadn't seen much of or done myself, I also thought it would be a worthy challenge. I realised later on that it was too big a challenge to accomplish within my time and knowledge constraints.
The multiple modalities, or the different kinds of inputs a user can make, are so exciting to me. Not only can you write a text input describing a landscape, you can also draw a sketch or paint a colourful segmentation map. Combining all of these could really increase the potential of our creativity. The next frontier of AI is creating multi-sensory systems that are capable of processing more of the world.
Challenge 1: Dataset diversity
Challenge 2: Dataset preparation
Synthetic datasets reduce labelling mistakes, because the masks for each category are pixel-perfect, unlike annotation done by hand. I still have a lot more to learn in this area, but this is an exciting part of AI: it means you don't have to scrape the internet for images that may contain people's sensitive pictures, or rely on offshored cheap labour for annotating images.
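As a rough illustration of why synthetic data makes labelling easier: if each object in the rendered scene gets a unique integer index (for example via Blender's object index pass), the mask and bounding box fall out of a simple array comparison instead of hand tracing. The file name and index value below are assumptions.

```python
# Sketch: turning a rendered index/ID image into a pixel-perfect mask and a
# COCO-style bounding box. Assumes each object was rendered with a unique
# integer value in a single-channel image (hypothetical file name).
import numpy as np
from PIL import Image

index_img = np.array(Image.open("renders/slide_000_index.png").convert("L"))
SLIDE_ID = 1                              # assumed object index for the slide

mask = index_img == SLIDE_ID              # boolean mask, pixel-perfect by construction
ys, xs = np.nonzero(mask)
bbox = [int(xs.min()), int(ys.min()),     # COCO-style [x, y, width, height]
        int(xs.max() - xs.min() + 1),
        int(ys.max() - ys.min() + 1)]
area = int(mask.sum())
```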
Comments on the research questions
- How can I annotate images and what's the best method/tool?
- labelme worked for me BUT I had issues converting the annotations into COCO format (there is a rough sketch of what that conversion involves after this list). It appears that the annotated files could still be used for other image recognition purposes though. See my reflections on this here: July: Using labelme to annotate
- What are some of the ethical questions of dataset preparation and how can they be resolved?
- See here for some of my thoughts on this question: 24/08: Synthetic dataset - downloading models and scripting
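Since converting labelme output into COCO format was my sticking point, here is a rough sketch of what that conversion involves: reading labelme's per-image JSON files and assembling a single COCO-style annotation file. It handles polygon shapes only, uses a hypothetical folder path, approximates area with the bounding box, and is a simplification rather than a drop-in converter.

```python
# Rough sketch of converting labelme per-image JSON files into one COCO-format
# annotation file. Polygon shapes only; paths are assumptions for illustration.
import json
from pathlib import Path

labelme_dir = Path("annotations/labelme")   # hypothetical folder of labelme .json files
categories = {}                              # label name -> category id
coco = {"images": [], "annotations": [], "categories": []}
ann_id = 1

for img_id, json_path in enumerate(sorted(labelme_dir.glob("*.json")), start=1):
    data = json.loads(json_path.read_text())
    coco["images"].append({
        "id": img_id,
        "file_name": data["imagePath"],
        "height": data["imageHeight"],
        "width": data["imageWidth"],
    })
    for shape in data["shapes"]:
        if shape["shape_type"] != "polygon":
            continue
        label = shape["label"]
        if label not in categories:
            categories[label] = len(categories) + 1
            coco["categories"].append({"id": categories[label], "name": label})
        # Flatten [[x1, y1], [x2, y2], ...] into [x1, y1, x2, y2, ...]
        points = [coord for point in shape["points"] for coord in point]
        xs, ys = points[0::2], points[1::2]
        coco["annotations"].append({
            "id": ann_id,
            "image_id": img_id,
            "category_id": categories[label],
            "segmentation": [points],
            "bbox": [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)],
            "area": (max(xs) - min(xs)) * (max(ys) - min(ys)),  # rough bbox area, not true polygon area
            "iscrowd": 0,
        })
        ann_id += 1

Path("playground_coco.json").write_text(json.dumps(coco))
```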
As this experiment comes to an end, I'm realising some of the mistakes I made that I'll try to avoid or limit next time.
- Start simple. And I mean REALLY simple
- In the setup of the experiment, I identified that I hadn't seen much image segmentation work done by artists/designers themselves. Maybe that was for a reason! What followed was many months of trying to do something complex that was a big stretch from my current knowledge.
- Instead, I could have identified this gap and made simpler mini experiments that would have led up to the "main" experiment of making a GauGAN. Mini experiments could have been:
- Generating images with a pre-made segmentation dataset, like COCO or ADE20K
- Using the Runway ML model in a creative way, as an API maybe?
- Creating an object detection model of a playground (simpler than image segmentation)
- Ask for help as soon as possible, so your friends and peers can sanity-check you
- I went ahead with this project very individually and didn't involve people or ask for help until much later on. I could have jumped to trying a synthetic dataset faster, or could simply have been told "Yasmin, this is a bit difficult" and simplified things much more. By the time I realised I should make this experiment way simpler, it was time to wrap up.
- It was tricky to involve people because onboarding someone can be a difficult process. I had conversations with friends who were eager to help but didn't have specific ML knowledge, and I talked to people who had loads of knowledge but little time to spare. Next time, it would make more sense to have a partner on the project for accountability and reality checks. Pairing with someone with expertise from the beginning could be really helpful.
Hope you enjoyed the read! If you feel like you could contribute to another iteration of this experiment and solve some of the issues I've identified (or missed), please email me at yasminmorgan.info@gmail.com. You can also Slack me on the AIxDesign Slack.