About my installation coming up at the Blindfold Gallery in about a fortnight.
The installation is a collaboration with sculptor Tina Aufiero, who has already created these swan-neck shapes out of white plaster. They have a horn at one end and a bulbous base at the other. Presumably the base could house the speakers, and perhaps we could project some interesting sound out of them.
One of the ideas I had, which could work well as a model for a ‘sensing’ system, was to record and spectrally process the ambient sound in a variety of ways. One approach is an obvious spectral separation via FFT or ATS, yielding multiple bands of selected frequencies that could be used to synthesize new sounds, or as new material for further analysis.
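A minimal sketch of that band-separation idea in SuperCollider, assuming we only want one band for now. `PV_BrickWall`'s `wipe` argument works in fractions of the spectrum rather than Hz, and the band edges here (`lo`, `hi`) are illustrative values, not tuned choices:

```supercollider
// Sketch: isolate one spectral band of the ambient (mic) signal via FFT,
// then resynthesize it. A positive wipe clears the lowest bins (high-pass);
// a negative wipe clears the highest bins (low-pass), so chaining two
// PV_BrickWalls leaves a single band.
(
SynthDef(\bandSense, { |out = 0, lo = 0.1, hi = 0.3|
    var in, chain, sig;
    in = SoundIn.ar(0);                  // ambient sound from the mic
    chain = FFT(LocalBuf(2048), in);
    chain = PV_BrickWall(chain, lo);     // clear bins below the band
    chain = PV_BrickWall(chain, hi - 1); // clear bins above the band
    sig = IFFT(chain);
    Out.ar(out, sig ! 2);
}).add;
)
x = Synth(\bandSense);
```

Running several copies with different `lo`/`hi` pairs would give the multiple bands described above, each available for resynthesis or further analysis.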
A model that could be used for realtime processing of the sound is the Meddis inner hair cell model (thankfully already part of the UGens library within SuperCollider), which processes sound temporally based on the working algorithms of the biological inner hair cell. This includes lag times for the transmitter factory to produce neurotransmitter, calculations of the force and distance of propulsion across the synaptic cleft, the amplitude of the signal, and the magnitude of neurotransmitter release from the free pool and reuptake by the store.
Currently the Meddis inner hair cell model in SuperCollider is less than impressive, and the help file doesn’t reveal very much about the inner workings of the model. What I get when I feed in a pure tone sounds like a smoother (low-pass filtered?) version of the tone with an added octave above — some kind of harmonic partial, perhaps. With white noise, it’s definitely a smoothed signal, as if some low-pass filter were running.
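For reference, the pure-tone test described above looks something like this, assuming the `Meddis` UGen from the sc3-plugins package. The makeup gain is guesswork on my part: the model's output represents a spiking probability rather than a calibrated audio signal, so it needs arbitrary scaling to audition:

```supercollider
// Feed a pure tone through the Meddis inner hair cell model and listen.
(
{
    var tone = SinOsc.ar(440, 0, 0.5);
    var hair = Meddis.ar(tone);
    (hair * 10).dup    // crude, arbitrary makeup gain for audition
}.play;
)
```

Swapping `SinOsc.ar(440, 0, 0.5)` for `WhiteNoise.ar(0.5)` reproduces the noise test.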
The help file states that
“The functional effect is like half wave rectification and low pass filtering, with a more physiologically plausible mechanism.”
but looking through other options for hair cell modelling yields very different results indeed. SuperCollider’s HairCell model gives a very hard-sounding tone, but it has more options for tweaking and programming than the generally hard-coded Meddis model. I wonder if there is a way to achieve the Meddis sound through HairCell; that would lend more validation to both as demonstrations of actual physiological phenomena. It is all pretty arbitrary at this point.
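The help file's functional description can also be sketched directly as a naive stand-in, which gives something to compare the physiological models against. This is my own rough approximation, not anything from the Meddis help file, and the cutoff frequency is arbitrary:

```supercollider
// Naive version of "half wave rectification and low pass filtering":
// rectify a pure tone, then run it through a one-pole-style low-pass stage.
// Comparing this by ear against Meddis.ar on the same input gives a rough
// sense of what the physiological mechanism adds beyond the basic effect.
(
{
    var tone = SinOsc.ar(440, 0, 0.5);
    var rect = tone.max(0);            // half-wave rectification
    var sig  = LPF.ar(rect, 1000);     // crude low-pass; cutoff chosen arbitrarily
    sig.dup
}.play;
)
```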