Program as of 24 August 2019; subject to change.
KISS 2019 events will take place at Daedong College (marked by a purple star on this map):
Click on the name of a presentation to view the program notes or abstract for that presentation.
Thursday 29 August 2019
What is resonance? Where can you find it? How can you use it? Is there a “dark side” to resonance?
An adaptation of Alvin Lucier’s “I am sitting in a room” for ambisonics and audience.
“Unmasking” evolves using thematic material from the Korean folk song Milyang Arirang. Melodic and harmonic fragments of the theme are subject to various processing techniques and regenerative methods, including Markov modeling. As the piece unfolds, the underlying theme becomes gradually more apparent. Resonance in its various interpretations is also a salient feature, including metaphors for the collaborators bouncing ideas off each other, as well as acoustical resonance.
Laissez Vibrer was composed in 2019 using samples from artwork by University of North Dakota art student Jesse Boushee. Mr. Boushee creates interactive pieces using found objects and musical instrument strings. The pieces are intended to be handled and “played” by the observer. Some of these pieces have guitar pickups installed so they can be amplified. Sample playback and processing in “Laissez Vibrer” is controlled with another stringed instrument played live.
Mr. Boushee’s pieces do not have effective resonant chambers. Using impulse responses from a number of other resonant chambers, his “instruments” are given virtual resonance that changes through time, providing them with evolving character. To be sure, Mr. Boushee’s artworks resonated with me the instant I saw and heard them. I am grateful to him for allowing me to use his creations.
ImPossible (ca. 8 minutes), interactive realtime performance, four-channel
ImPossible is an interactive performance composition for three custom-made infrared sensors, Max and Kyma. ImPossible is a true virtuoso performance work that requires the rapid execution of thousands of notes within short timespans. I’mPossible is about musical speed and pounding action controlled through physical micro- and macro-movements. Through waves of musical intensification, the interaction between performer and instrument drives the dramatic thrust of the composition to its final climax. The title is a play on words that refers to the extreme technical difficulties of performing the piece – Impossible – and the idea that these impossible difficulties can be overcome – I’mPossible.
This is a piece for live electronics that uses room-dependent processing to enhance the resonances of the performance space and explore the emergent sound behavior of the resulting ecosystem. When doing a soundcheck one often tries to minimize the effect of the resonant frequencies in the performance space, for example through equalization, to avoid undesired effects. This piece is built around the idea of exploring precisely those frequencies that are strongest in the specific performance space, trying to make it resonate in interesting ways through audio feedback and through control signals generated by monitoring the acoustic response of the space and used to affect audio synthesis and processing parameters. It takes inspiration from compositions that explore the interaction of sound with the physical space where the performance takes place, like those by Alvin Lucier and Agostino Di Scipio.
This piece attempts to create an acoustic ecosystem involving the interaction between the space, the operator, and a processing network in Kyma, producing an experience that is unique to that specific place and moment, and where the room vibrates and resonates in extreme ways that are different from the typical concert experience. In a sense, it can be thought of as trying to create an experience similar to being inside the body of a resonating musical instrument.
Samul nori is a traditional Korean folk music with four performers playing a small gong (꽹과리, Kkwaenggwari), a large gong (징, Jing), an hourglass drum (장구, Janggu) and a bass drum (북, Buk), which represent lightning, wind, rain and clouds. Normally these instruments are thought of as percussive. We use surface transducers, contact mics and Kyma to set up feedback loops through our own versions of these instruments to re-imagine them as evolving resonators while still evoking the spirits of lightning, wind, rain, and clouds.
“Back in 2219, in what was then New Gyeongsang Agricultural AAD, there was a group of musicians from the farming collective calling themselves the Gong People. At that time there was a popular folk music tradition (Jindong) within the agricultural autonomous districts that saw itself as a counterpoint to the increasing virtual urbanisation. Instead of using the central resources of The Crystal, as most people did in the creation and calculation of Sounds, they used their permaponics skills to recreate separate, silicon-based sound engines, inspired by those of the early 21st century, to augment even older musical instrument forms, often making use of the four-device Samulnori, which had come back into vogue. A central idea of the Jindong movement was that only through the inherent disharmony and disunity of using discrete, unconnected devices could the true nature of empathy and consonance be celebrated. In a performance composed for the annual festival in 2219, the Gong People unusually decided to envisage a time 200 years into their future. In their future history the Samulnori form had first fallen into obscurity and then experienced a rediscovery. They imagined that in this new iteration it had, at least partially, returned to its rhythmic percussive roots, as it had been some 200 years before their time. It must be noted that in 2219 the practice was entirely feedback-driven, resulting in evolving drone-scapes. It is fascinating, as we gather here at the 2419 Nong-eob Eum-ag, the festival they were effectively anticipating, to reflect on the Gong People’s historical future and how eerily accurate their predictions turned out to be. So, without further delay, I am delighted to welcome to our stage Gongmyoeng to perform their tribute to the Gong People.” -translation, 29/7/2619, of original recording transcription.
Friday 30 August 2019
Rhythm is quantized time, pitch is quantized frequency. The quantum scale is from a very human perspective, defined by the limitations of the senses. The time and frequency domains merge at zero Hz and eternity, and as we approach this asymptotic singularity at DC, there is a duality of the quantities. It is in this grey area, the margin of yin and yang, that the orthogonal nature of reality and anti-reality can be explored. Time tends to integration, low pass filtering, the sine, order, the masculine. Frequency tends to differentiation, high pass filtering, the cosine, chaos, the feminine. By extending this paradox of axiomatic quantum uncertainty to the creative process, we can flow around problems and work more effectively, by simply shifting the phase of our consciousness by 90°.
Why I adapted this piece: political and philosophical problems with the idealised “I”, and how the piece was made using Ambisonics and Kyma.
We’ll be demonstrating how to get a trowel of dirty analog-ness into the pristine DSP of Kyma, showing how you can use cheap microphones, surface transducers and piezo elements in feedback loops with Kyma at the centre. We’ll focus on how to build automatic gain controls and how to use energy-at-frequency, modal filters and other feedback-conditioning techniques.
This talk will present the ideas behind the piece “Resonances”, and their implementation in Kyma. It will also give a brief overview of some custom Kyma sounds that I have developed for this piece.
“Laissez Vibrer” uses samples from DIY stringed instruments. Playback and processing are controlled by another stringed musical instrument played live. This lecture presents problems and solutions created by this particular situation.
A five-stanza poem. After a reading of each stanza, a change of length, spectrum, articulation, and overlap resonates as in a changing fantasy environment.
Based specifically around field recordings and small scrap-metal objects acquired in Armenia, ‘Birds Over Ani’ explores, via various rhythmic and synthetic means, the noteworthiness of resonance in the piece itself as well as in the composer’s work.
This work builds on a series of recursively gating and sampling textures I put together for my performance at St. John the Divine in NYC. The pedal tones and harmonics constantly sampled and changed each other over a 5-hour period.
All about one thing feeding in to another thing feeding in to another thing forever and ever.
I will focus on the signal flow of my Kyma sound; from the field recordings to the sound design and synthesis. One thing I’ve always wanted to see at KISS was someone go sound by sound explaining their thought process and parameter fields, so that’s what I’m going to do.
Exploring interesting ways to generate a variety of sonic changes to a recited poem, creating “unusual” resonances, while keeping the poem identifiable.
In this hands-on session, we will take time series data related to the city of Busan and map the data to sound. Can we hear patterns in data that we might not otherwise detect?
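The core mapping step of such a session can be sketched in a few lines. This is a generic illustration of data-to-pitch scaling, not the session’s actual materials; the function name and the sample temperature series below are invented for the example.

```python
# Hypothetical sonification sketch: rescale a time series into a frequency
# range, one value per note. The data below is illustrative only, not the
# actual Busan dataset used in the session.

def map_to_freq(values, lo_hz=220.0, hi_hz=880.0):
    """Linearly rescale a data series into [lo_hz, hi_hz]."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero for flat data
    return [lo_hz + (v - vmin) / span * (hi_hz - lo_hz) for v in values]

monthly_temps_c = [3.6, 5.4, 9.9, 14.6, 18.6, 21.7,
                   25.3, 26.8, 23.1, 18.0, 11.8, 5.8]  # invented example
freqs = map_to_freq(monthly_temps_c)
print([round(f, 1) for f in freqs])  # lowest month -> 220 Hz, hottest -> 880 Hz
```

Hearing the same series mapped to other parameters (amplitude, duration, spatial position) is where patterns that are invisible in a table can become audible.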
“Sawari” is a Japanese word describing buzzy resonance intentionally introduced into an instrument’s timbre. The designs of the shamisen and biwa (lutes) include mechanisms to create these noisy points of resonance. Composer Takemitsu Toru called this noise “an intentional inconvenience that creates the expressiveness of the sound.” This piece considers the expressive roles of imperfection and chaos in modern technoculture.
After the Storm will ultimately be a 50-minute multimedia theater piece for soprano and baritone vocalists, percussionist, interactive and fixed video, and live electronics using Kyma (likely with some live performative elements performed by me). I’ve received a $5500 grant from the Center for Latter-Day Saint Arts and will seek additional support for a May 2020 performance of the piece here at BYU where I teach, and then a June 2020 performance in NYC as part of an arts festival there by the granting organization.
The piece I plan to present at KISS 2019 will be a series of movements, a sort of suite, for solo percussion, video, and electroacoustic music with Kyma, that will serve as a major component of and foundation for the completed theatrical work. I’m at the beginning of creating this work, so the duration is as yet flexible and could likely be adjusted to fit the needs of the symposium, probably from 10 to 20 minutes.
I have long been interested in the idea of “resonance,” perhaps as evidenced in part by my solo CD release Available Resonances (here’s a link: https://store.cdbaby.com/cd/stevenricks ) that includes several electronic improvisations created by manipulating close-mic’d aluminum cans and other objects while passing them through various types of delay/effects. The resonance of found objects has always fascinated me, and I appreciate the way contemporary percussion music (and percussionists in general) embrace a broad approach to potential instruments and resonance initiators. I imagine a diverse setup for my piece that will include pitched percussion (likely marimba and vibraphone), traditional non-pitched ringing metals, skins, and then found metal and glass objects.
With the exception of the vibraphone and ringing metals, the relatively short decay of both traditional and found percussive instruments presents a unique challenge and opportunity. I plan to use Kyma to create different types of “resonances” for these instruments that effectively manufacture echo/reverb/etc. for the instruments and expand their resonances beyond their initial acoustic capabilities. Ideas I’d like to explore in this realm, based on previous discussions with Kyma creators Kurt Hebel and Carla Scaletti, include: using an amplitude follower in Kyma to allow percussive attacks to trigger and control various filters/resonators/etc.; using receiver sounds tuned to specific frequencies to create a sort of piano-sostenuto-pedal effect for these percussive instruments; exploring the use of “continuous glissando spectrum” to control filters so that resonances can be subjected to Shepard-tone-esque behaviors; filtering the marimba, vibraphone, and other instruments with tuned filters to draw unexpected resonances out of both pitched and non-pitched percussion instruments; etc.
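As a hedged illustration of the amplitude-follower idea (this is a generic sketch, not the composer’s actual Kyma patch), a one-pole envelope follower with a simple threshold crossing might look like:

```python
# Minimal sketch of an amplitude follower whose output could open a
# resonator or trigger processing when a percussive attack is detected.
# Coefficients and threshold are arbitrary illustration values.

def envelope_follower(samples, coeff=0.99):
    """Track |x| with fast attack and exponential release."""
    env, out = 0.0, []
    for x in samples:
        env = max(abs(x), coeff * env)
        out.append(env)
    return out

def attack_trigger(envelope, threshold=0.5):
    """True only at samples where the envelope crosses the threshold upward."""
    prev = 0.0
    trigs = []
    for e in envelope:
        trigs.append(e >= threshold > prev)
        prev = e
    return trigs

impulse = [0.0, 0.0, 1.0] + [0.0] * 7     # a single percussive attack
env = envelope_follower(impulse)
print(attack_trigger(env))                 # True only at the attack sample
```

In Kyma the same role would be played by an envelope-following Sound feeding the parameter fields of the downstream filters and resonators.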
I hope I’ve touched enough on the idea of resonance—I think it’s a very rich and interesting touchstone that is directly related to my piece. In addition to the musical aspects of “resonance” my piece embodies, the suggestion of it relating to collaborating with others and bouncing ideas off of fellow artists also fits. I expect this piece to build on one of my previous works, Medusa in Fragments, and I’ll be working closely with two friends and collaborators who helped with that piece—writer Stephen Tuttle, and designer/video artist Brent Barson. We’ve already begun sharing ideas and I expect the bouncing will continue throughout the creation of this work. These two artists will work together, with my input and collaboration, to create the video component of the piece, which will primarily be animated text in the vein that Brent has pursued for the last several years. You can see his work on Vimeo via the following link:
https://vimeo.com/user1425019
I’m definitely at the front end of serious work with Kyma, but I’m committed to it, have a dedicated setup with a Paca and interface, and plan to enlist my former student Austin Lopez (a KISS 2018 presenter) to help me create and fine-tune the setup I’ll need to make this a success. I also have Jeff Stolet’s book, and some dedicated time coming up in May–July to focus on working with Kyma and completing this piece for the Symposium.
“I took a deep breath and listened to the old bray of my heart. I am. I am. I am.” —Sylvia Plath
“Where shall the word be found, where will the word Resound? Not here, there is not enough silence.” —T.S. Eliot
“Not only is there no silence, but we humans don’t have the capacity to perceive what lies beyond it. We create tools to capture it.” —İlker Işıkyakar (from the Meditation corner, sitting on the Mackintosh chair)
Past: Arrival
Stepping into the Southwest from the Northeast. The starkness, expanse, light… All resonates, echoes of places from our youth in Southern Spain and Central Anatolia. That impressionable then and altering there, crystallized and immortalized in our childhood memories. Something rises to meet us in the Southwest. Such that we see, feel, hear. Such that, it triggers a reaction and evokes quiet nostalgia. What lies underneath that we sense not?
Present: Evolving
In Language, linguist Edward Sapir discusses what he refers to as drift, the changes a language undergoes through time. He characterizes language as… “moving down time in a current of its own making. It has a drift […] direction […] constituted by the unconscious selection by its speakers of individual variations that are cumulative in some special direction, inferable from the language’s past history.” Art historian George Kubler proposes that “noise is irregular and unexpected change.” And Jacques Attali, economic and social theorist and political adviser, suggests that “a noise is a resonance that interferes with the audition of a message in the process of emission. It does not exist in itself, but only in relation to the system within which it is inscribed: emitter, transmitter, receiver.”
The Schumann Resonances (SR) are a set of spectral peaks found in the extremely-low-frequency portion of the Earth’s electromagnetic field spectrum. Their fundamental mode of 7.83 Hz is accompanied by second- and third-order modes. This is the Earth’s pulse… What if we could perceive the electromagnetic landscape around us? What other sounds, resonances, would envelop us with every passing moment of our existence? While electronic induction techniques for sound installations have been used before, what of sound resulting from the interactions of magnetic fields?
Future: Departures
Remapping other signals to an audible spectrum, we shall redefine sound. Collecting these inaudible sounds with electromagnetic detectors, we shall impose another layer of noise on an already noise-filled environment. We shall perceive the Earth’s essence, replete with soundscapes emanating via electromagnetic interactions, including radiation. KYMA shall amplify, purify, synthesize and organize these sounds, creating a construct that we shall interact with via controllers. The design is a process of assignment, adjustment, and selection that evokes “avant-garde jazz.”
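One rough sketch of what “remapping to an audible spectrum” could mean in practice: compute the idealized Schumann-mode frequencies and transpose each by whole octaves into an audible band. The formula is the textbook ideal-cavity approximation, and the target band of [220 Hz, 440 Hz) is an arbitrary choice of mine, not the artists’ actual Kyma design.

```python
import math

F1 = 7.83  # fundamental Schumann mode, Hz

def schumann_mode(n):
    """Idealized n-th Schumann mode: f_n = f1 * sqrt(n(n+1)/2)."""
    return F1 * math.sqrt(n * (n + 1) / 2)

def to_audible(f, lo=220.0):
    """Transpose up by octaves until f lands in [lo, 2*lo)."""
    while f < lo:
        f *= 2.0  # octave doubling preserves pitch class
    return f

modes = [schumann_mode(n) for n in (1, 2, 3)]
print([round(f, 2) for f in modes])              # ~7.83, 13.56, 19.18 Hz
print([round(to_audible(f), 2) for f in modes])  # the same modes, audible
```

Octave transposition is only one possible remapping; linear frequency scaling or spectral transposition would preserve (or distort) the inharmonic spacing of the modes differently.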
This work shall transform the way we hear, we feel, we see the world… How past memories flow towards future cumulative realizations… How Time, at its very core, is preciously fleeting.
Inspiral, for contrabassoon and Kyma, is based on the LIGO detection of gravitational waves from colliding black holes. The 12-minute work has three parts: in the first three minutes, the two black holes collide, merge and ring down. Then follows a fanciful depiction of the resulting gravitational wave’s 1.3-billion-light-year journey to Earth, “zooming out” by powers of ten, with a passacaglia in the electronics.
This journey culminates in the actual signal as detected by LIGO, which forms the basis for the coda: a meditation on this new frontier of “multi-messenger astronomy,” receiving signals from the electromagnetic spectrum as well as gravitational waves.
In a world where walls are being built and people are told where they cannot go, Sons of Chipotle want music to be a place of openness.
Saturday 31 August 2019
How do you get organised with Kyma? This is the distillation of interviewing a number of leading Kyma-nistas covering:
where to put files
how to version control your Sounds
encapsulation
naming things
how to remember what you were doing
The goal is to port a modal analysis tool to Kyma for use with the modal filter.
An in-depth description of the implementation in Kyma of several high-quality “zero-delay feedback” time-varying filters. Several simple designs will be presented (1-pole filters, 2-pole state variable filter). Additionally, more complex sounds and applications will be shown, such as a resonator filterbank for modal synthesis, an 8-pole phaser, a morphing multimode filter (à la Oberheim Xpander) and two virtual analog models: a 2-pole Korg filter and a 4-pole Moog ladder filter (both with built-in nonlinearities).
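The simplest of the designs mentioned, the one-pole lowpass, can be sketched outside Kyma. This follows the standard topology-preserving-transform (TPT) formulation, in which the implicit zero-delay feedback equation is solved analytically rather than approximated with a unit delay; it is offered as an illustration of the technique, not the presenter’s implementation.

```python
import math

class OnePoleTPT:
    """Zero-delay-feedback (TPT) one-pole lowpass filter."""

    def __init__(self, cutoff_hz, sample_rate=48000.0):
        g = math.tan(math.pi * cutoff_hz / sample_rate)  # prewarped gain
        self.G = g / (1.0 + g)  # feedback gain with the delay-free loop resolved
        self.z = 0.0            # trapezoidal integrator state

    def process(self, x):
        v = (x - self.z) * self.G
        y = v + self.z          # lowpass output
        self.z = y + v          # state update (trapezoidal integration)
        return y

# Step response: a stable lowpass with unity DC gain settles toward 1.0.
lp = OnePoleTPT(cutoff_hz=1000.0)
step = [lp.process(1.0) for _ in range(256)]
print(round(step[-1], 4))
```

The prewarping via `tan` makes the cutoff land exactly where the analog prototype puts it, which is what keeps these filters accurate even when the cutoff is modulated near Nyquist; the 2-pole state variable, phaser, and ladder designs in the talk extend the same resolved-feedback idea to multiple integrators.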
This presentation involves the Kyma timeline in various ways, treating it almost as a sequencer.
Computer music offers us the potential for digital perfection in the creation of all dimensions of musical performance. I am excited by the expressive potential of imperfection by design, and, throughout my works in Kyma, I look for ways to introduce obstacles and irregularities into my instruments. In this technical presentation, I go over some of my sound design strategies for creating “intentional inconveniences.”
I would likely discuss and explore the setups I use within Kyma to transform the relatively short and pitch-absent sounds created by non-pitched percussion, both conventional instruments and found objects, into rich harmonic and resonant textures.
There are similarities between expressing some of one’s emotions with an instrument and expressing some of one’s emotions with another human. Behind the vocal and logical aspects of human interaction are the emotions we feel. We may try to share them with facial expressions or specific actions such as laughing. We may even attempt to use these to influence the emotions of the other human, or we may feel what the other human feels, just as an object vibrates when the sound energy reaching it matches its resonant frequency. This piece could be considered a resonance between two performers, a Helmholtz resonator, and a Pacarana.
A Glove with Some Sensors is an interactive composition for custom-made performance interface, custom software, and Symbolic Sound Kyma. The composer studies and explores a data-driven instrument through the process of building the interface, composing, and performing. The performative actions from which the composer derives control data include bending the finger joints, pressing two fingers together with varying pressure, and moving the hand in 3D space. After mapping through a software layer, the data is routed to the sound synthesis environment, Kyma. During the performance, the control data is sent to Kyma in real time so that the performer can control the sounds’ timbre, pitch, location, duration, and volume, which act as resonances reflecting in the listening environment.
A quadraphonic composition, heavily influenced by the late sound of UK Garage, augmented through the speakers of the mobile devices of the audience, and intensified by minimalist computer graphics generated by a p5.js sketch processing OSC messages sent by Kyma.
The audience, connecting to my web application on Heroku, will receive tones generated in the browser with a Javascript library called p5.sound and manipulated in real time through OSC messages generated by Kyma EventValues. The mobile phones of the audience will therefore be used to create a resonance of the music coming out of the main speakers of the room, interplaying with different degrees of detune conditioned by the accelerometer data on each device.
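As a hypothetical illustration of the accelerometer-conditioned detune, each device’s tilt reading could be mapped to a bounded detune around the base tone. The function name, normalized input range, and ±50-cent limit here are my own assumptions, not the actual p5.sound/Kyma mapping.

```python
# Hypothetical mapping: an accelerometer axis normalized to [-1, 1]
# becomes a detune of up to +/-50 cents around the base frequency,
# so each phone in the room sounds a slightly different shade of the tone.

def detuned_freq(base_hz, accel, max_cents=50.0):
    cents = max(-1.0, min(1.0, accel)) * max_cents  # clamp, then scale
    return base_hz * 2.0 ** (cents / 1200.0)        # cents-to-ratio conversion

print(round(detuned_freq(440.0, 0.0), 2))  # flat phone: no detune
print(round(detuned_freq(440.0, 1.0), 2))  # full tilt: a quarter tone sharp
```

In the actual piece this mapping would live in the browser, with the base tone and modulation arriving as OSC-driven p5.sound parameter changes.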
My minimalist work will rely on heavy use of resonant detuned bass lines (the wobbles) and vintage fashioned dub echoes (designed by Cristian Vogel for the Never Engine Labs “Kyma Capsules”), interfacing a multigrid with an Arturia BeatStep MIDI controller.
The arrangement of my composition will be developed predominantly by the use of algorithmic scripting.
The idea of using the mobile devices of the audience as additional speakers comes from Dr Garth Paine’s “Future Perfect” performed at the ZKM of Karlsruhe at the inSonic2018, and from the enlightening chats with him at KISS 2018 in Santa Cruz.
My desire was to resonate a Kyma performance through the web, taking advantage of the modern MERN stack. In the future, the framework developed during my research could also allow the performer to play live on a Paca(rana) set up in a different part of the world, or to collect OSC data from a performance in a NoSQL database program such as Redis or MongoDB.
In my presentation, I will illustrate the stages of development, with Node.js, Socket.IO and Express, of the servers underlying the p5.sound-powered web app, and their UDP connection with Kyma through NEL OSC Tools in order to manipulate p5.sound variables.
Moreover, I will explain my Kyma multigrid in detail, with a particular emphasis on the encapsulation of my bass wobbles generators, and on the characteristics of the dub-style echoes from the Kyma Capsules collection of the Never Engine Labs.
Finally, I will discuss the generative use of Smalltalk and Capytalk in my composition.
The presentation will include the structure of the performance interface and the data flow from the performance interface to the computer. I will also discuss the sound design strategy for hand movements in Kyma that I use in this piece.
Brain-computer interfaces and the aesthetics of emotion. The challenges I will focus on are integrating the EEG/BCI and accurately transcribing my emotions into musical/sound design in Capytalk and the timeline.
I’ll show off my Kyma patch and the software connectivity of an Emotiv EPOC BCI (brain-computer interface). I’ll be explaining how I interpreted the signals from the interface with Kyma and why I chose the sounds that I did.
If you want to get the most out of Kyma at some point you’ve got to do some maths.
Do you wonder what logarithmic or exponential functions look like and why they might be useful?
Do you know the shape of a curve you want but don’t know the maths to make it?
Have you got the start of a function but you don’t know how to move it, rotate it or squidge it into the right shape?
Do you want a recipe book with pictures showing common math functions useful for music with their equations, how you’d do them in Kyma, and what they’re useful for?
I do… and I haven’t found anything like that yet.
In this workshop, we share our favourite and most useful maths, or maths questions, so that I can collate them all into a community-written musicians’ visual guide to maths functions.
Rupture features a suite of sounds and music from “Wí Shpá, A journey in bare feet”, which is a 40-minute dance piece that premiered in April 2018 with choreographer Minerva Muñoz, geophysicist Alejandro González, and composer Carla Scaletti.
After the 2010 El Mayor-Cucapah 7.2-magnitude earthquake in northern Mexico, seismologist Alejandro González Ortega went to interview Don Chayo, a Cucapa native who had witnessed the surface rupture. During the interview, Don Chayo related a story told to him by his grandmother about a disobedient child who used his harpoon to pierce the testicles of a snoring giant in the south, resulting in the creation of the Colorado River and the Gulf of California. Based on the landmarks identified in the story, the descriptions of low rumbling howls of the angry monster, and the descriptions of water covering large parts of Baja California, González recognized the story as a metaphorical recounting of an earlier seismic event that had been passed down over several generations by the Cucapa grandmothers.
Inspired by Don Chayo’s story and its similarities to the El Mayor-Cucapah earthquake in 2010, González and choreographer/physicist Muñoz began developing a dance performance piece; they enlisted the help of composer Scaletti to map 3D seismological data collected by 12 measurement stations to sound and music for the piece.
As Muñoz and González continued to conduct research and on-site interviews with the Cucapa elders, a more disturbing story began to emerge — that of a displaced people whose name, Cucapa, means “those who live on the cloudy river,” but who now live in a desert, their livelihood from fishing and their water rights becoming increasingly constricted. What had originally been intended as a science/art collaboration about a seismic event began to morph into a deeper metaphor for displacement, disruption, and geopolitical borders. The result—Wí Shpá, A journey in bare feet—is an inter- and transdisciplinary dialog of artistic creation and research that combines stories of Cucapa cosmogenesis and the scientific studies of the El Mayor-Cucapah 7.2-magnitude earthquake, weaving a network of collaboration, tradition, scientific research, knowledge, and experiences, but above all, creating a dialog between scientists, artists, the native community, artistic collaborators and the general public.
Rupture (a 15-minute distillation of the music and sound design from the 40-minute Wí Shpá work) was specifically designed to explore the 3D spatial potentials of the high-density loudspeaker array in the Cube to create a sonic realization of the spatial information inherent in the dataset captured during the El Mayor-Cucapah earthquake, the geographic waypoints outlined in the Cucapa story, and the symbolic meanings of the four cardinal directions found throughout the Cucapa cosmogony.
In Rupture you can hear 3D seismic data recorded during the event played directly as audio signals, interpreted as impulse responses, and used as parameter control signals for synthesis and processing algorithms, while simultaneously hearing hints of the northern border and elements of Don Chayo’s story of the mischievous boy, the monster, the flood, and the numerous morphing birds, coyotes, and other magical animals that populate the Cucapa universe.
TCF4 is a piece that transforms the resonating body into sonic waves. The work uses a custom digital musical instrument called Distance-X, which consists of a hacked Gametrak, Nintendo Wiimote, and customized Kyma software. The piece is performed with human-powered computer music. No tapes. No spacebar playback. Just body movements turned musical mutants.
TCF4 focuses on the gain control of hand distance, vibrational scrubbing of an audio sample via the Wiimote’s accelerometer, and parametric control of the Gametrak’s XY joystick. The piece serves as a study in developing sonic phrases that emerge from physical body movements, and in pushing my ideas to the point of physical and aural exhaustion.
Introduction: Order/Disorder
19th Century: Victorian Crisis: self in society
20th Century: Existential Crisis: self in the universe
21st Century: Information Crisis: self in the dataverse
Each century rolls like a 100-year-long wave, bringing with it primary changes in human self-perception.
What might be the fundamental set of perceptions through which individuals interpret themselves and their communities? 19/20/21 proposes that each of these three centuries—19th, 20th, and 21st—has produced a particular set of self-perceptions based on three different crisis conditions, one for each century. While these three unique crises manifest across the community, their acute stress primarily concerns the individual.
The three crises are: the Victorian Crisis (19th Century), based on perceptions of the self in relation to society; the Existential Crisis (20th Century), based on perceptions of the self in relation to the universe; and the Information Crisis (21st Century), based on perceptions of the self in relation to data.
Integral to an individual’s engagement with his/her century’s appointed crisis is the struggle to balance Order and Disorder. Order allows stability, ease, and reflection but also inclines itself toward atrophy and stagnation. Disorder allows change, action, and response but also inclines itself toward chaos and loss. Therefore, attendant with each state of crisis is the individual’s attempt to navigate and construct ratios of Order and Disorder.
Underlying these theses, we’d like to suggest that all creative work is driven by the qualities of the specific crisis unique to each century and the individual’s efforts to navigate order and disorder within that particular crisis state.
The sonics of 19/20/21 are performed on three primary communications technologies that have contributed to shifted perceptions of the self for each century: the telegraph, the telephone, and the smartphone. The text of 19/20/21 begins with an introduction, followed by three first-person point-of-view poetical responses, one for each century.
Intermediate Goods is a spoken word (processed) performance on the subject of how Shipping Containers have revolutionized the way we work and shop. The availability of cheap intercontinental shipping has “resonated” throughout the Global Economy. The title refers to the use of containers to ship not just completed goods, but parts of things, such as the hair for Barbie dolls, which now constitutes the bulk of what is shipped in containers, and has resulted in a highly interconnected system of manufacture.
Busan is now the fifth busiest container port in the world, and played a powerful role in the development of the South Korean economy. Arguably one of the resonances is this Symposium itself!
Sound resonates when it is trapped. The size and shape of the trap are crucial factors that determine what kinds of sounds we perceive. The piece is an experiment in how different sizes and shapes of virtual spaces shape sound: what stays and what fades.
Sunday 1 September 2019
My goal is a piece that works on (at least) two levels: 1) to be an easily understood and interesting communication about what I think is a fascinating and under-reported development in modern life, and 2) to incorporate resonance to add interest to the voice and to emphasize the underlying material, for example by using a model of a shipping container as a resonator. The title refers to the need to keep resonance under control, to maintain both interest and comprehensibility, and of course to avoid agony.
We will discuss the musical and technical aspects of “Unmasking”. This will include our process of collaboration, how the ideas evolved, genesis of our raw material, and implementation details and tools that may be of interest to Kyma users.
Expression Toolkit is a digital music composition toolkit. The toolkit includes data modification prototypes, objects and functions, control paradigms, and composition ideas that help one formulate musical possibilities from discrete and continuous control signals. The talk will focus on Kyma Sounds and Concepts used across several musical works on the Distance-X digital musical instrument.
Using space as a sound filter is not a new concept. In this presentation I am sharing the techniques and concepts I have acquired during the process of composition.
Will consider questions and donations of $1,000,000+
We’ll be talking about the development of our work 19/20/21, from the conceptual underpinnings of order and disorder and century-long crisis states to the technical strategies used in composing with the three sound-generating devices of the telegraph, telephone, and smartphone through Kyma. We’ll also talk about the ancillary application Isadora for live video playback control.
An overview of some new features of Kyma used in the piece and highlights from some of the data-driven Sounds.
This performance relies on re-using, echoing, looping, and transforming sounds using Kyma. A new composition and new dynamics will emerge from reverbs, filters and loopers of the Pacarana.
To control Kyma, we will use a multi-touch interface that can compute complex interpolations between presets.
Resonance is everywhere, at every moment.
This piece is ambient music inspired by sounds from nature and our daily life.
Robert and Ilker return to KISS to bounce musical ideas off of each other, communicate, cooperate, react, and perhaps even RESONATE!
The main technical challenge The Unpronounceables face is finding time to rehearse. Given that, we have had to become pretty good (you will judge how good) at listening to each other and reacting to the cues we get. Our first experience performing together was as part of the Emergent Ensemble during KISS 2016 in Leicester. While we don’t explicitly use the conducted improvisation techniques we learned there, the basic message of attention and reaction has proved invaluable.
We’ll explain how we route signals and information from the Kyma System to OSC (Open Sound Control) and MIDI controllers for this performance.
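The abstract above mentions routing data over OSC. As a minimal sketch of what such a message looks like on the wire, here is an OSC 1.0 encoder for float arguments; the address `/vcs/amp` is a hypothetical example, not an address taken from this performance:

```python
import struct

def _osc_string(s: str) -> bytes:
    """Null-terminate and pad an OSC string to a multiple of 4 bytes (OSC 1.0)."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all big-endian float32."""
    typetags = "," + "f" * len(args)          # e.g. ",f" for one float
    payload = b"".join(struct.pack(">f", a) for a in args)
    return _osc_string(address) + _osc_string(typetags) + payload

# A hypothetical fader value sent to a Kyma-style address.
packet = osc_message("/vcs/amp", 0.5)
```

Note that OSC strings always carry at least one null byte of padding, so the 8-character address above occupies 12 bytes in the packet.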
First presented in KISS 2018, the MIEM multi-touch preset interpolator, based on geometric shapes, has been much improved during the year. Written in C++ for ultra-low latencies, the interpolator now runs on iOS to complement the Windows and macOS versions.
The interpolation concepts remain the same, but additional features concerning the parameters have been studied and implemented for accurate OSC control of any device.
Concerning Kyma in particular, a C++ library for retrieving and using presets in MIEM is being developed. Features and first user tests of this library will be presented.
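MIEM’s interpolator is shape-based and written in C++; as a loose illustration of the general idea of blending parameter presets from a 2D touch position (a simplified inverse-distance sketch, not MIEM’s actual algorithm), consider:

```python
import math

def interpolate_presets(cursor, presets, power=2.0):
    """Inverse-distance-weighted blend of parameter presets in a 2D plane.

    cursor:  (x, y) touch position
    presets: list of ((x, y), {param_name: value}) pairs
    """
    weights = []
    for (px, py), params in presets:
        d = math.hypot(cursor[0] - px, cursor[1] - py)
        if d == 0.0:
            return dict(params)               # cursor exactly on a preset
        weights.append(1.0 / d ** power)
    total = sum(weights)
    blended = {}
    for w, (_, params) in zip(weights, presets):
        for name, value in params.items():
            blended[name] = blended.get(name, 0.0) + (w / total) * value
    return blended
```

Midway between two presets the result is their average; as the cursor approaches one preset, that preset dominates.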
Starting with a collection of 3D ambisonic recordings from various iconic locations in and around Busan, we will learn how to process, spatialize, and mix them down for interactive binaural presentation in games and VR.
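As one small example of the kind of processing involved: a first-order ambisonic (B-format) scene can be rotated about the vertical axis before binaural decoding, which is how head-tracking is typically applied in VR. Assuming the common convention that W carries the omnidirectional component and X/Y the horizontal ones:

```python
import math

def rotate_bformat(w, x, y, z, theta):
    """Rotate a first-order B-format sample by theta radians about the
    vertical axis. W and Z are unaffected by a horizontal rotation."""
    c, s = math.cos(theta), math.sin(theta)
    return w, x * c - y * s, x * s + y * c, z
```

A 90-degree rotation moves a source from straight ahead (X) to the side (Y), while the omni and height channels pass through unchanged.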
The performer and the audience are inside the circle of speakers. Because the sound moving among the speakers is never fixed, a simple vector forms between performer and audience.
The sound “objects” (live instrumental playing, real-time sampling, etc.) are always moving and never stop, so no one, performer included, perceives a fixed direction for the sound.
From the performer’s point of view, the sound comes over from the audience; this helps generate a feeling of closeness and a resonance between performer and audience.
The audience, for their part, can move around to choose the listening point they like best. The sounds from the speakers mix in the air, creating a strange resonance never felt from an ordinary audio system.
A shimmering sheet held between sky and earth.
소용돌이 (Maelstrom) is a piece for a massive corrugophone (“whirly tube”) orchestra in which every attendee of KISS2019 participates in the performance.
The audience-performers are immersed in a beautiful shimmering, resonating surface of evolving harmonics.
Each audience member is given a corrugated plastic tube cut to one of several tuned lengths. When participants swirl the tubes through the air above their heads, they create a tone; swirled faster, the tubes pop up through a harmonic sequence.
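The tuned lengths used in the piece are not specified in these notes, but treating each tube as a pipe open at both ends gives a rough model of the harmonic series a swirled tube steps through, f_n ≈ n·c/2L:

```python
def tube_harmonics(length_m, n_max=6, c=343.0):
    """Approximate resonant frequencies (Hz) of a tube open at both ends:
    f_n = n * c / (2 * L), with c the speed of sound in m/s."""
    return [n * c / (2.0 * length_m) for n in range(1, n_max + 1)]
```

For a 1-metre tube this predicts resonances at roughly 171.5 Hz, 343 Hz, 514.5 Hz, and so on; swirling faster excites successively higher members of this series.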
The performers are sub-divided into 8 groups. Each group will have a microphone and a speaker, both controlled by Kyma. The audience members are given simple instructions on how to react to what they hear on their group’s speaker. In this way the performers themselves are elements in a feedback process; they become resonators. Kyma will step through a sequence of cross-connections, routing the microphones of one group to the speakers of other groups, creating the different feedback networks that provide the structure and progression of the piece.
Resonance is celebrated in a number of ways in this piece. At the small scale, the tubes themselves make sound by resonance. At a larger scale, the piece is an exploration of creating resonance across a field of performers. The performers have two main dimensions over which they can resonate at the larger scale – pitch (or rather harmonic) and position in their swing. Kyma, using the network of speakers and microphones, will orchestrate a transition through various modes of resonance across the performer field.
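The exact sequence of cross-connections Kyma steps through is not specified in these notes, but one simple family of such feedback networks, offered here purely as an illustration, can be sketched as a rotating mic-to-speaker offset:

```python
def routing_matrix(n_groups=8, offset=1):
    """One feedback network: the microphone of group i feeds the speaker
    of group (i + offset) % n_groups. Stepping the offset over time moves
    the ensemble through a sequence of different networks."""
    return [(mic, (mic + offset) % n_groups) for mic in range(n_groups)]
```

With 8 groups and an offset of 3, group 0’s microphone feeds group 3’s speaker, group 7’s feeds group 2’s, and every speaker receives exactly one microphone, so each offset yields a distinct closed feedback loop around the field.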
The performance will take place on top of the college roof, the most appropriate place to experience a massive horizontal sheet of spatially rotating, Doppler shifting harmonics, held between sky and earth.
Performers:
All the attendees of KISS2019 except Tom and Alan.
The average length of a tube is less than 100cm. Positions will be marked out in the space a safe distance apart, a tube at each location. The audience will first gather around Tom and Alan for a safety briefing and explanation of the mechanics of the piece. They will then be invited to take a position in the sound field.
It is likely this will be the first time an ensemble of corrugophones this large has performed.