Poster abstracts

Comparing human and machine listening for consonance/dissonance rating of isolated chords
Yuko Arthurs & Amy V. Beeston, University of Sheffield

A frustration sometimes reported in human/computer music concerns the unreliability of machine listening in live performance settings. This paper asserts the need for improved computational interpretation of sound in order to promote a more fruitful on-stage relationship with audio processing. The human auditory system shows remarkable sensitivity to some aspects of sound, yet remains oblivious to others, since we perceptually compensate for the effects of the performance and listening environment on the sound source itself. Machine listening, on the other hand, does not adequately distinguish ‘indicative’ from ‘trivial’ aspects of this variation. Fundamentally, we ask whether human and computer ‘experiences’ of sound can be reconciled. Either way, we believe the attempt is worthwhile, because such work may eventually benefit performers and composers alike.
The present study aims to find algorithms for machine listening that correlate with human behavioural data. When human/computer music involves the ‘extended’ sound-world of an instrument, it can be hard for a machine listener to determine whether acoustic variation arose from perceptually important aspects of the sound or from less musically relevant factors (e.g. room acoustics, microphone placement or performer repeatability). To remove this uncertainty, we used synthetic stimuli to assess consonance/dissonance (C/D) perception in isolated chords. For each stimulus (12 chord types × 2 key-roots × 2 timbres), 33 participants rated its C/D on a 7-point scale. Acoustic features were extracted using two Matlab toolboxes, and statistical analyses were performed in SPSS. Correlations were found between human and machine listening data for a number of acoustic features, indicating that C/D perception could be predicted to some degree by measures of signal content (spectral centroid, crest and roughness) and signal variability (spectral flux and flatness).
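In outline, the analysis pairs per-stimulus acoustic features with mean listener ratings and tests their correlation. The Python sketch below is illustrative only: the study itself used Matlab toolboxes and SPSS, whereas here librosa and scipy stand in, the crest and flux computations are simplified assumptions, and roughness is omitted since it requires a psychoacoustic model not reproduced here.

```python
import numpy as np
import librosa
from scipy.stats import pearsonr

def chord_features(path):
    """Summary spectral features for one synthetic chord stimulus."""
    y, sr = librosa.load(path, sr=None)
    S = np.abs(librosa.stft(y))  # magnitude spectrogram
    return {
        "centroid": librosa.feature.spectral_centroid(S=S, sr=sr).mean(),
        "flatness": librosa.feature.spectral_flatness(S=S).mean(),
        # spectral flux: frame-to-frame spectral change
        "flux": np.sqrt((np.diff(S, axis=1) ** 2).sum(axis=0)).mean(),
        # spectral crest: per-frame peak-to-mean ratio
        "crest": (S.max(axis=0) / (S.mean(axis=0) + 1e-12)).mean(),
    }

def correlate_with_ratings(paths, mean_ratings):
    """Correlate each feature with the mean 7-point C/D rating of each
    of the 48 stimuli (12 chord types x 2 key-roots x 2 timbres),
    averaged over the 33 participants."""
    feats = [chord_features(p) for p in paths]
    for name in feats[0]:
        r, p = pearsonr([f[name] for f in feats], mean_ratings)
        print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```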


Composition as performance
Fergal Dowling & Michael Quinn, Dublin Sound Lab

We present a model that has been applied to performance scenarios in which a compositional framework is devised in advance, while the details of timing and execution are postponed until performance time. The performance is further reinterpreted in real time by a computer application that reveals a formal coherence through the collective interaction of performer, composer and computer.

‘Sound-events’ produced by the instrumentalist(s) are recognised and recorded by the computer; as the performance continues, more sound-events are detected and recorded; and the start-points or end-points of each sound-event can be used to retrigger real-time playback of previously recorded sound-event(s). The detailed execution of recording/replay is therefore dependent upon the timing and articulation of the ongoing performance. Thus, the performer can influence the evolution of the compositional form while realising the piece.
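The recording/replay mechanism can be sketched in outline. The Python fragment below is a minimal illustration, not the Dublin Sound Lab implementation: the class and callback names (EventRetrigger, on_onset, on_offset) are assumptions, and a real system would run inside a real-time audio environment rather than plain Python.

```python
import random
from dataclasses import dataclass, field

@dataclass
class SoundEvent:
    start: float                  # onset time in seconds
    end: float | None = None      # offset time, set when the event closes
    samples: list = field(default_factory=list)  # recorded audio

class EventRetrigger:
    """Archive each detected sound-event and use the start-point of
    every new event to retrigger playback of an earlier recording."""

    def __init__(self, play):
        self.events = []          # completed sound-events, in order
        self.current = None
        self.play = play          # callback: play(event) starts playback

    def on_onset(self, time):
        # a new sound-event begins: open a recording ...
        self.current = SoundEvent(start=time)
        # ... and let its start-point retrigger a previous event
        if self.events:
            self.play(random.choice(self.events))

    def on_offset(self, time, samples):
        # the sound-event ends: close the recording and archive it
        if self.current is not None:
            self.current.end = time
            self.current.samples = samples
            self.events.append(self.current)
            self.current = None
```

Here the choice of which earlier event to replay is random; in the model described, that choice (and whether start-points or end-points act as triggers) is where much of the compositional framework devised in advance would reside.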

The emerging structure relies upon direct quotation of the continuing performance and results in frequent elisions and re-orderings of the temporal flow. By this process we can render self-contained and concentrated arguments to expose an evolving relationship between musical material and its self-constructed context. In place of the familiar ‘Composer – Performer – Listener’ paradigm, this process leads to a re-evaluation of the respective roles of composer and performer, while also allocating a performative/compositional role to the computer’s interactions. Moreover, this process does not align neatly with the concept of ‘musical works’ – as described and critiqued by Lydia Goehr, amongst others – and prompts a reconsideration of how the resulting musical events should be categorised.


Computer Music and Posthumanism
David Roden, Open University

I will introduce two flavors of posthumanism, critical posthumanism (CP) and speculative posthumanism (SP), and provide an overview of some of the ways in which they might be explored by thinking through philosophical issues raised by computer music practice.

CP questions the dualist modes of thinking that have traditionally assigned human subjects a privileged place within philosophical thought: for example, the distinction between the formative power of minds and subjects and the inertia of matter.

The use of computers to supplement human performance raises questions about where agency is ascribed. Is it always on the side of the human musician, or can it also be ascribed to the devices or software used to generate sound events? If so, what kind of status can be granted to such artificial agents? Does their agency locally supervene on human agency, for example? I will also argue that the intractability and complexity of some computer-generated sound confronts us with the nonhuman, mind-independent reality of sonic events. It thus provides an aesthetic grounding for a posthumanist realism.

SP, by contrast, is a metaphysical possibility claim about technological successors to humans. It can be summed up in the SP Schema: “Descendants of current humans could cease to be human by virtue of a history of technical alteration.” CP and SP are conceptually distinct but, I argue, the most radical form of SP converges with the anti-anthropocentrism of CP (Roden 2014). In particular, non-anthropologically bounded SP implies that the only way in which we can acquire substantive knowledge of posthumans is through making posthumans or becoming posthuman. I will argue that computer music development may have a role in this project of engineering a posthuman succession.


Can improvisation-driven music systems be Musical Works?
Mark A.C. Summers, University of Sheffield

Improvisation-driven music systems (IDMS) are often treated as musical works (MW) by performers. In ontological terms, however, this may not be appropriate. Most discussion of MWs offers no ready way to differentiate MWs from non-MWs, and most often concentrates on paradigmatic examples from the ‘classical’ tradition (e.g. those by Beethoven). Most theorists describe MWs as having Sound Structures (SS). The unique element that a MW has but a non-MW (an improvisation) lacks is a pre-determined ending (or a method of pre-determining one). Therefore, in order to decide whether an IDMS is a MW, it is necessary to see whether such an ending is contained within the system. Some systems have the ending specified in their code, others rely on the improviser enacting the ending, and others still have no associated ending. Thus, an IDMS can be a MW to be performed, a tool used to perform a MW, or an agent/tool for improvisation. However, even with a pre-determined ending, it could be argued that the IDMS is not a MW, but that a MW subsists within it.
