





Exploring how to spatialize audio data as a continuous interface through an inclusive design approach.
Designer and Creative Technologist
Unity3D, C#, Figma, Blender, Ableton, Illustrator, SuperCollider and Photoshop
Sound design support from Xan Alfonse
Fundamentally, this sound-and-space project offers a way to reunite our physical and social spaces, one that accounts for the varied accessibility needs of all likely users and acknowledges the major role that context and wonder play in socialization.
Drawn to the cloud-like visual representations of user-generated content in Snapchat's Snap Map feature, I became obsessed with the behavior and form of atmospheric clouds.
To understand the lived experience of people whose access is limited by the changing dynamics of social life, I spoke with two members of the visually impaired community, who shared deep knowledge of how their community uses technology to interact with the world and with other people.
For my prototype, I developed a Unity first-person experience that allowed audience members to interact with audio notes in a virtual space.
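As an illustration of how such an audio note could behave in Unity, here is a minimal C# sketch; the AudioNote class, the player reference, and the triggerRadius value are hypothetical stand-ins rather than the project's actual code. The idea is that each note is a fully spatialized AudioSource that begins playing once the first-person player walks within range.

```csharp
using UnityEngine;

// Hypothetical sketch of a spatialized "audio note" the player can walk up to.
// Attach to a GameObject whose AudioSource has the recorded clip assigned.
[RequireComponent(typeof(AudioSource))]
public class AudioNote : MonoBehaviour
{
    [SerializeField] private Transform player;          // first-person player transform
    [SerializeField] private float triggerRadius = 3f;  // how close the player must be

    private AudioSource source;

    private void Awake()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;  // fully 3D: volume and panning follow the note's position
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.playOnAwake = false;
    }

    private void Update()
    {
        // Start playback the first time the player comes within range.
        if (!source.isPlaying &&
            Vector3.Distance(player.position, transform.position) <= triggerRadius)
        {
            source.Play();
        }
    }
}
```

In practice, Unity's built-in 3D sound settings (spatial blend and distance rolloff) do most of the work of making each note feel anchored to a point in the virtual space.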
In my project exhibition, I created a sound-based experience that allowed audience members to immerse themselves in the spatial sounds of different cloud types.
Voice recordings are posted at physical locations, to be encountered later by others who visit those places. As posts accumulate in space, they form visible “clouds.” Like atmospheric clouds, these take on abstract but classifiable forms that reflect dynamic, emergent conditions, in this case social rather than meteorological ones.
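To make the accumulation mechanic concrete, here is a rough C# sketch of how geo-tagged recordings might be grouped by proximity into clouds; the VoicePost type, the Group method, and the clusterRadius parameter are illustrative assumptions, not the project's actual data model.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative data model: a voice recording pinned to a physical location.
public class VoicePost
{
    public Vector2 Position;   // simplified 2D map coordinates
    public string AudioPath;   // path or URL of the recorded clip
}

public static class CloudGrouping
{
    // Greedy proximity grouping: each post joins the first existing "cloud"
    // whose seed post is within clusterRadius, otherwise it seeds a new cloud.
    public static List<List<VoicePost>> Group(IEnumerable<VoicePost> posts, float clusterRadius)
    {
        var clouds = new List<List<VoicePost>>();
        foreach (var post in posts)
        {
            List<VoicePost> home = null;
            foreach (var cloud in clouds)
            {
                if (Vector2.Distance(cloud[0].Position, post.Position) <= clusterRadius)
                {
                    home = cloud;
                    break;
                }
            }
            if (home == null)
            {
                home = new List<VoicePost>();
                clouds.Add(home);
            }
            home.Add(post);
        }
        return clouds;
    }
}
```

A simple greedy grouping like this is enough to suggest the cloud metaphor: dense pockets of posts read as a single visible mass, while isolated recordings seed new clouds of their own.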