The Generation of Maps
2020 | AUDIOVISUAL PERFORMANCE | MODULAR SYNTHESIS
Custom Java software featured a sound-reactive F.U.I. (fictional user interface) that triggered both the sound and visual systems in real time. Both performers shared clock information and other sync signals. In addition to the F.U.I., the video projection included a graphic performance score that revealed itself over time. The score played a fundamental role in keeping both performers in sync throughout the pseudo-improvisation.
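The project's software is not reproduced here; purely as an illustration of the sound-reactive triggering described above, the sketch below shows one way such a trigger can be built in Java with the standard javax.sound.sampled API: it reads an audio input line, computes RMS amplitude per buffer, and fires a placeholder visual trigger when the level crosses a threshold. The class name, threshold value, audio format, and triggerVisuals() hook are assumptions for this sketch, not the project's actual code.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

/** Minimal sketch of a sound-reactive trigger (illustrative only). */
public class SoundReactiveTrigger {
    static final double THRESHOLD = 0.15; // hypothetical trigger level

    public static void main(String[] args) throws LineUnavailableException {
        // 44.1 kHz, 16-bit, mono, signed, little-endian (assumed format)
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[1024];
        while (true) {
            int n = line.read(buffer, 0, buffer.length);
            double sum = 0;
            for (int i = 0; i + 1 < n; i += 2) {
                // decode little-endian 16-bit sample, normalize to [-1, 1]
                int sample = (buffer[i + 1] << 8) | (buffer[i] & 0xFF);
                double s = sample / 32768.0;
                sum += s * s;
            }
            double rms = Math.sqrt(sum / Math.max(1, n / 2));
            if (rms > THRESHOLD) {
                triggerVisuals(rms); // placeholder for a F.U.I. update
            }
        }
    }

    static void triggerVisuals(double level) {
        System.out.printf("trigger: level=%.3f%n", level);
    }
}

In the performance setting described above, the trigger call would be replaced by whatever update the visual system expects, and the shared clock/sync signals would gate when triggers are allowed to fire.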
CREDITS:
Performers: Yin Yu (于音) & Juan Manuel Escalante. Several sounds for this project were initially recorded at various locations within the University of California Natural Reserve System. We thank the Mildred E. Mathias Research Grant (2019) for making these recordings possible. Big thanks to the Media Arts & Technology (MAT) Graduate Program at the University of California, Santa Barbara.
Project supported by Mexico's National Fund for Culture and the Arts (Fondo Nacional para la Cultura y las Artes, Sistema Nacional de Creadores de Arte, FONCA / MEX).
_
2024
TENOR 24 (9th International Conference on Technologies for Music Notation and Representation, ZHdK, Zurich, CHE)
_
2021
ARS ELECTRONICA FESTIVAL (Synaesthetic Syntax II. Seeing Sound/Hearing Vision - Expanded Animation Symposium, Linz, AUT)
Special thanks to: Prof. Huoston Rodrigues
_
2020
MODULAR MANIFESTATION v3.0 (Pasadena, USA)
ACKNOWLEDGMENTS:
SupplyFrame Design Lab: Majenta Strongheart (Director)
On-Site Support: Erica Earl
Graphic Design: Joseph Antony
Special thanks to: Andrew Bakhit