Sonus

Exploring how to spatialize audio data as a continuous interface through an inclusive design approach.

Role

Designer and Creative Technologist

Tools

Unity3D, C#, Figma, Blender, Ableton, Illustrator, SuperCollider and Photoshop

Sound design support from Xan Alfonse

Based on the density of audio notes across a 24-hour window, the application automatically switches between two states, signaled by a hue change in the application's background (purple versus blue) and by a distinct auditory experience.
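
As a rough illustration, this switch can be modeled as a count of notes inside a rolling 24-hour window; the class names and threshold below are hypothetical, not the project's shipped logic.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the density-driven state switch. The threshold value is an
// illustrative assumption; the real cutoff isn't specified in the text.
public enum SpaceState { Sparse, Dense }

public class SpaceStateModel
{
    const int DenseThreshold = 20;                        // hypothetical cutoff
    static readonly TimeSpan Window = TimeSpan.FromHours(24);

    public SpaceState Evaluate(IEnumerable<DateTime> noteTimestamps, DateTime now)
    {
        // Count only the notes posted within the rolling 24-hour window.
        int recent = noteTimestamps.Count(t => now - t <= Window);
        return recent >= DenseThreshold ? SpaceState.Dense : SpaceState.Sparse;
    }
}
```
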
The experience becomes more immersive when observed through the mixed reality perspective.
To record an audio note, the user, or group of users, simply presses the central button.
To listen to an audio note, the user simply walks toward its fixed location, and the recording automatically plays as spatialized audio.
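
A minimal Unity sketch of this proximity playback, assuming the player object is tagged "Player" and each note carries a trigger collider; the project's actual playback code may differ.

```csharp
using UnityEngine;

// Proximity-triggered, spatialized playback for a single audio note.
// Note: Unity only fires trigger events when at least one of the two
// colliders has a Rigidbody (typically the player's).
[RequireComponent(typeof(AudioSource))]
public class AudioNote : MonoBehaviour
{
    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                         // fully 3D audio
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.playOnAwake = false;
    }

    void OnTriggerEnter(Collider other)
    {
        // Auto-play as the user walks into the note's fixed location.
        if (other.CompareTag("Player") && !source.isPlaying)
            source.Play();
    }
}
```
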
In denser spaces, the user can hardly wade through the individual audio notes left by others, so the application prompts them to apply a spatial filter based on the space's top topics, as sketched below.
Users can tap an individual audio note to view its metadata, such as its number of plays and responses.
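
A sketch of the tap-for-metadata record and the top-topic filter in C#; the field names and the shape of the filter are illustrative assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;

// Per-note metadata surfaced when the user taps a note.
public class AudioNoteMetadata
{
    public string Topic;
    public int PlayCount;
    public int ResponseCount;
}

public static class SpatialFilter
{
    // Keep only the notes whose topic is among the space's top topics.
    public static IEnumerable<AudioNoteMetadata> ByTopTopics(
        IEnumerable<AudioNoteMetadata> notes, ISet<string> topTopics)
    {
        return notes.Where(n => topTopics.Contains(n.Topic));
    }
}
```
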

IMPACT

Fundamentally, this sound-and-space project offers a way to reunite our physical and social spaces, one that accommodates the varied accessibility needs of likely users and acknowledges the major role that context and wonder play in socialization.

METHOD

Lured by the cloud-like visual representations of user-generated content available in the Snap Map feature on Snapchat, I became obsessed with the behavior and form of atmospheric clouds.

To grasp the lived experience of those with limited access amid the changing dynamics of social life, I spoke with two individuals from the visually impaired community, who shared deep knowledge of how their community interacts with the world and with other people through technology.

For my prototype, I developed a Unity first-person experience that allowed audience members to interact with audio notes in a virtual space.

In my project exhibition, I created a sound-based experience that allowed audience members to immerse themselves in the spatial sounds of different cloud types. 

Snapchat’s Snap Map was the project spark
Following experiments with building a Snapchat Lens, photogrammetry, and video compositing in Blender, I produced a video sketch.
Cloud Formation: Cumulus

In the Cumulus state, audio notes expire after 24 hours: they revolve around the space while slowly dissipating in size and opacity.
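
A Unity sketch of this Cumulus behavior under simple assumptions: a linear fade over the note's lifetime and an orbit around a given centre point, with the 24-hour lifetime exposed as a field so a prototype can compress it.

```csharp
using UnityEngine;

// Cumulus note: revolves around a centre point and fades out over its
// lifetime, then expires. The orbit speed and linear fade are assumptions.
public class CumulusNote : MonoBehaviour
{
    public Transform center;                         // point the note orbits
    public float lifetimeSeconds = 24f * 3600f;      // 24 h in concept
    public float orbitDegreesPerSecond = 2f;

    float age;
    Vector3 initialScale;
    Renderer rend;

    void Start()
    {
        initialScale = transform.localScale;
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        age += Time.deltaTime;
        float t = Mathf.Clamp01(age / lifetimeSeconds);

        transform.RotateAround(center.position, Vector3.up,
                               orbitDegreesPerSecond * Time.deltaTime);
        transform.localScale = initialScale * (1f - t);  // dissipate in size

        Color c = rend.material.color;                   // and in opacity
        c.a = 1f - t;
        rend.material.color = c;

        if (t >= 1f) Destroy(gameObject);                // expire after 24 h
    }
}
```
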
Cloud Formation: Stratus

The Stratus state is triggered 12 hours after a space that was significantly occupied becomes completely empty.
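
The Stratus trigger can be sketched as a countdown; the occupancy signal and the "significantly occupied" cutoff below are assumptions.

```csharp
using UnityEngine;

// Starts a 12-hour countdown once a previously well-occupied space
// empties completely, then switches to the Stratus state exactly once.
public class StratusTrigger : MonoBehaviour
{
    public int significantOccupancy = 5;         // hypothetical cutoff
    public float delaySeconds = 12f * 3600f;     // 12 hours

    bool wasSignificantlyOccupied;
    float emptySince = -1f;                      // -1 = not currently empty

    // Assumed to be called whenever the space's occupant count changes.
    public void OnOccupancyChanged(int occupants)
    {
        if (occupants >= significantOccupancy) wasSignificantlyOccupied = true;
        emptySince = (occupants == 0) ? Time.time : -1f;
    }

    void Update()
    {
        if (wasSignificantlyOccupied && emptySince >= 0f &&
            Time.time - emptySince >= delaySeconds)
        {
            EnterStratusState();
            wasSignificantlyOccupied = false;    // fire once per cycle
        }
    }

    void EnterStratusState() { /* swap visuals and audio to the Stratus form */ }
}
```
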
Cloud Formation: Cirrus

Each audience member produces a "Cirrus" cloud that streaks across the space while they walk and record.
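
A Unity sketch of the Cirrus streak using a TrailRenderer; the walking-speed threshold and the way recording state arrives are assumptions.

```csharp
using UnityEngine;

// Emits a streak behind the member only while they are both moving
// and recording at the same time.
[RequireComponent(typeof(TrailRenderer))]
public class CirrusStreak : MonoBehaviour
{
    public float walkSpeedThreshold = 0.5f;   // m/s, hypothetical value
    public bool isRecording;                  // set by the recording UI

    TrailRenderer trail;
    Vector3 lastPosition;

    void Start()
    {
        trail = GetComponent<TrailRenderer>();
        trail.emitting = false;
        lastPosition = transform.position;
    }

    void Update()
    {
        if (Time.deltaTime <= 0f) return;     // guard the first/paused frame

        float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = transform.position;

        // Streak only while walking and recording simultaneously.
        trail.emitting = isRecording && speed > walkSpeedThreshold;
    }
}
```
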
Cloud Formation: Nimbus

The Nimbus state initiates when the space becomes overpopulated with audience members and their audio notes.
My project installation used video and sounds built from site-specific recordings, then modified in SuperCollider, to give audience members access to these emergent spaces in familiar places.
To let users add content to spaces in the world, I developed an application that can play audio notes, record new ones, and interpret the notes across different spaces.
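
As a minimal sketch of the recording side, Unity's built-in Microphone API can capture a clip; the clip length and sample rate below are assumed values, not the app's actual settings.

```csharp
using UnityEngine;

// Records an audio note from the platform's default microphone.
public class AudioNoteRecorder : MonoBehaviour
{
    public int maxSeconds = 30;       // assumed maximum note length
    public int sampleRate = 44100;    // assumed capture rate

    AudioClip clip;

    public void StartRecording()
    {
        // A null device name selects the default microphone.
        clip = Microphone.Start(null, false, maxSeconds, sampleRate);
    }

    public AudioClip StopRecording()
    {
        Microphone.End(null);
        return clip;                  // hand off to be anchored at the user's location
    }
}
```
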

OVERVIEW

Voice recordings are posted in physical locations, to be later encountered by others when they visit those locations. As these posts accumulate in space, they form visible “clouds.” As with atmospheric clouds, these assume abstract but classifiable forms reflective of dynamic and emergent conditions, in this case social rather than meteorological.
