ArtTalk: Multimodal Interactions in Museums and Galleries

In my final semester at MIT, I took the graduate-level class Intelligent Multimodal User Interfaces. The goal of the class was to build multi-input systems that could be used in real-world applications. My group and I were interested in bringing new life to how we interact with art in museums and galleries.

I worked with Andrew Stoddard and Maria Ascanio to build ArtTalk.

Motivations

Andrew and I had both taken the MIT class Extending the Museum. During the class, we visited lots of museums and noticed something: visitors walked from one exhibit to another, silently admiring the art but hardly having a conversation about it. We wondered, "What if we could change that? What if we could make these gallery spaces more interactive?"

So, we designed ArtTalk. With ArtTalk, you don't have to keep your thoughts to yourself. You simply point at a detail and share your comment aloud. The application tracks your hand gesture and captures your voice, letting you pin your thoughts directly on the art piece.

There's no need for any extra gadgets - your voice and your gestures are the magic wands.

User Experience

Below is an early diagram of our user journey.

A user can walk up to an art piece, and the ArtTalk system registers that they are starting a new interaction.
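
One way to prototype this proximity check is with a webcam and a pose tracker. Here is a minimal sketch using MediaPipe Pose; the shoulder-span threshold is a hypothetical stand-in for a properly calibrated distance estimate:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Hypothetical threshold: when a visitor's shoulders span more than ~25%
# of the frame width, we treat them as standing close to the artwork.
NEAR_THRESHOLD = 0.25

def visitor_is_near(frame, pose) -> bool:
    """Return True if a person is detected close to the camera."""
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return False  # nobody in view
    lm = results.pose_landmarks.landmark
    left = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
    right = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    return abs(left.x - right.x) > NEAR_THRESHOLD  # x is normalized to [0, 1]

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.6) as pose:
    session_active = False
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        near = visitor_is_near(frame, pose)
        if near and not session_active:
            session_active = True   # a new interaction begins
        elif not near and session_active:
            session_active = False  # the visitor has walked away
```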

The user engages with the system by pointing at different parts of the painting with their hand.
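
As a sketch of how the pointing step might work, the snippet below uses MediaPipe's hand tracker to find the index fingertip and maps it onto a hypothetical 3x3 grid of painting regions. A real installation would replace the grid with a calibrated camera-to-canvas mapping (for example, a homography):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Hypothetical layout: the painting is divided into a 3x3 grid of regions,
# and the camera frame is assumed to be roughly aligned with the canvas.
GRID_ROWS, GRID_COLS = 3, 3

def pointed_region(frame, hands):
    """Return the (row, col) grid cell the index fingertip points at, or None."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    tip = results.multi_hand_landmarks[0].landmark[
        mp_hands.HandLandmark.INDEX_FINGER_TIP
    ]
    # tip.x and tip.y are normalized to [0, 1] within the camera frame
    row = max(0, min(int(tip.y * GRID_ROWS), GRID_ROWS - 1))
    col = max(0, min(int(tip.x * GRID_COLS), GRID_COLS - 1))
    return row, col

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    ok, frame = cap.read()
    if ok:
        print("Pointing at region:", pointed_region(frame, hands))
```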

Once the user has locked their pointing position in place, the system reads aloud past comments from other visitors.
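
Playing back stored comments only needs a text-to-speech engine. Here is a minimal sketch with pyttsx3; the COMMENTS dictionary is a hypothetical stand-in for whatever comment store backs the system:

```python
import pyttsx3

# Hypothetical store mapping a painting region to earlier visitors' comments.
COMMENTS = {
    (0, 1): [
        "I love how the light falls across the sky here.",
        "The brushwork in this corner feels almost restless.",
    ],
}

def read_comments_aloud(region):
    """Speak every stored comment for the given painting region."""
    engine = pyttsx3.init()
    for comment in COMMENTS.get(region, []):
        engine.say(comment)
    engine.runAndWait()  # blocks until all queued speech has played

read_comments_aloud((0, 1))
```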

ArtTalk then prompts the user to leave their own thoughts about the painting.
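
Capturing the visitor's comment can be prototyped with the SpeechRecognition library (which needs PyAudio for microphone access). The transcription backend here is Google's free web API, but any recognizer would slot in:

```python
import speech_recognition as sr

def record_comment() -> str:
    """Record one spoken comment from the microphone and transcribe it."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:  # requires the PyAudio package
        recognizer.adjust_for_ambient_noise(source)  # gallery background noise
        audio = recognizer.listen(source, phrase_time_limit=15)
    return recognizer.recognize_google(audio)

try:
    comment = record_comment()
    print("Transcribed comment:", comment)
except sr.UnknownValueError:
    print("Could not understand the audio; prompt the visitor to try again.")
```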

To wrap up the interaction, the system reads the user's comment back aloud and confirms that it has been recorded.

Technical Details

ArtTalk combines computer-vision hand tracking, speech recognition, and text-to-speech. For more information about the technical details, see our final paper writeup.
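
As a rough sketch of how the pieces fit together, the interaction described above can be modeled as a small state machine. The helper functions here are hypothetical stand-ins for the camera, speech, and storage components:

```python
from enum import Enum, auto

# Hypothetical stand-ins for the real components; a working build would wire
# these to the hand-tracking, text-to-speech, and speech-recognition pipelines.
def visitor_is_near() -> bool: ...
def wait_for_locked_point(): ...
def read_comments_aloud(region): ...
def record_comment() -> str: ...
def confirm_and_store(region, comment): ...

class State(Enum):
    IDLE = auto()        # waiting for a visitor to approach
    POINTING = auto()    # tracking the visitor's hand
    PLAYBACK = auto()    # reading past comments aloud
    RECORDING = auto()   # capturing the visitor's new comment
    CONFIRMING = auto()  # reading the comment back and saving it

def run_interaction():
    state, region, comment = State.IDLE, None, None
    while True:
        if state == State.IDLE and visitor_is_near():
            state = State.POINTING
        elif state == State.POINTING:
            region = wait_for_locked_point()  # e.g., a dwell-based lock
            state = State.PLAYBACK
        elif state == State.PLAYBACK:
            read_comments_aloud(region)
            state = State.RECORDING
        elif state == State.RECORDING:
            comment = record_comment()
            state = State.CONFIRMING
        elif state == State.CONFIRMING:
            confirm_and_store(region, comment)
            state = State.IDLE
```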