What’s the combination of a red sports car and a blue truck?

It’s a simple question you and I both understand. But how do we reach a shared idea of this complex combination?


Meshup is a tool I created with Kevin Dunnell at the MIT Media Lab’s Viral Communications Group to accelerate the synthesis of complex ideas.

A generative adversarial network (GAN) is a type of machine learning model that can generate new images. We trained a GAN on images of cars, giving us a model that understood car images and could, in turn, generate new ones.
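To make the two-network idea concrete, here is a minimal sketch of the GAN setup. The `generator` and `discriminator` below are hypothetical stand-ins (a random linear map and a mean-based score), not the trained networks Meshup actually used; they only illustrate the roles the two networks play.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "weights" for the generator; a real GAN learns these
# through adversarial training against the discriminator.
W = rng.standard_normal((512, 64 * 64 * 3)) * 0.01

def generator(z):
    """Hypothetical stand-in for a trained GAN generator:
    maps a 512-dim latent vector z to a 64x64 RGB image."""
    return np.tanh(z @ W).reshape(64, 64, 3)

def discriminator(image):
    """Hypothetical stand-in for the discriminator: returns a
    realism score in (0, 1). During training, the generator
    learns to push this score toward 1 on its fake images."""
    return 1.0 / (1.0 + np.exp(-image.mean()))

# Sample a latent code and generate a (toy) image from it.
z = rng.standard_normal(512)
fake_car = generator(z)
score = discriminator(fake_car)
```

The key point is that every latent vector `z` maps to one output image, so the space of all latent vectors corresponds to the space of all images the model can generate.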

We then created an interface to explore the model's possible outputs, guided by a set of influencing cars. When a user clicks on a car, the synthesized image in the center is nudged toward that influencing car.
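One common way to implement this kind of nudging is linear interpolation in the model's latent space. The sketch below assumes that approach; the function name `nudge` and the step size `alpha` are illustrative, not Meshup's actual code.

```python
import numpy as np

def nudge(z_current, z_influence, alpha=0.2):
    """Move the current latent vector a fraction alpha of the way
    toward the influencing car's latent vector. The generator then
    renders the new vector as the updated center image."""
    return (1 - alpha) * z_current + alpha * z_influence

# Toy 512-dim latent vectors standing in for the center image
# and a clicked influencing car.
rng = np.random.default_rng(0)
z_center = rng.standard_normal(512)
z_clicked_car = rng.standard_normal(512)

# Each click moves the center image a little closer to the clicked car.
z_center = nudge(z_center, z_clicked_car, alpha=0.2)
```

Because each click only moves part of the way, repeated clicks on different cars blend their influences rather than jumping to any single one.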

My contributions

In the fall of 2021, I worked as a full-stack developer on this project. Below are my technical contributions.

Below is a demo of the multi-user prototype.

Personal note: I thought this project was awesome. It pointed to exciting possibilities for bringing humans closer to the feedback loop of machine learning and artificial intelligence.

My supervisor Kevin Dunnell and I were fascinated by modeling and exploring the latent space (the space of all possible generative outputs) of machine learning models, and we continued this research with the Latent Lab project.

Another Meshup writeup can be found on the MIT Media Lab website: https://www.media.mit.edu/projects/tools-to-synthesize-with/overview/

⭐️ Access the Meshup website here: