CMALT Portfolio – Specialist Option

My specialist area of activity concerns research and development work that I have undertaken on a project called the ElectroAcoustic Resource Site 2 (EARS2) – in collaboration with the Music, Technology and Innovation Research Centre (MTI) at De Montfort University.

Guided Listening and the ElectroAcoustic Resource Site 2 (EARS2)

Electroacoustic, electronic or sound-based music is one of the most exciting new forms of music that came out of the twentieth century and is thriving today around the globe. Much of what we can do with sounds has been made possible due to the development in technologies which we can use to make music. (EARS2, 2014)

Guided listening has been developed as an online educational tool for introducing people to electroacoustic music from a listening perspective. It is an integral component of the EARS2 online educational resource which is aimed at Key Stage 3 (11-14 years of age).

EARS2 has been developed at De Montfort University by the Music, Technology and Innovation Research Group in close collaboration with several internal and external partners.

As well as being an online resource with its own inherent learning pathways (e.g. the EARS2 Project pathways), EARS2 is being incorporated into classroom-based teaching packs that have been developed to be integrated into the Music curriculum at Key Stage 3. It is important to note that although the EARS2 site is currently live, it will begin to be integrated into the music curriculum at several pilot schools at the start of the 2015/16 school year in September, at which point evaluation data will begin to be collected on the effectiveness of the site.

Guided listening, as a concept, emerged from my doctoral research and post-doctoral work around Intention/Reception, in particular exploring ways in which electroacoustic music can be introduced to listeners who have not previously (knowingly) encountered this type of music. It is my belief that introducing this particular form of music making to young people, a form of music that is not fundamentally based on the music of ‘notes’ but on the creative use of potentially any recorded sound, can open doors to a form of expressive sonic creativity that does not require knowledge of note-based music theory (i.e. how to read music or organise note-based content into structures that conform to that theory). It can inspire and empower young people to follow a creative path of their own making, allowing them to make their own rules as to how they structure, manipulate and organise sound to create interesting sonic artefacts, thereby tapping into a creative drive that emerges from within themselves rather than conforming to an external, pre-defined creative model. I have come to term this approach creative emancipation.

Creative emancipation, with guided listening as one of its threads, operated at the heart of my teaching in music technology in HE. As a teacher on an undergraduate Music, Technology and Innovation course, my challenge was to discover ways to engage my students in experiences that would help them develop their abilities to innovate in music composition and production. Creative emancipation was a general mechanism that I employed in order to help free my students from the confines of creating within strict musical boundaries and to enhance their creative palette, giving them a broader range of compositional possibilities that might lead them towards innovation.

This notion of creative emancipation is not solely confined to the field of music; it also concerns more general ideas around creative freedom. Stepping outside of predefined models, whatever these models might be and in whatever area of practice, and engaging in creative experimentation can potentially lead to new ways of approaching problems and challenges. I firmly believe that it is important to create, facilitate and encourage participation in such creatively experimental spaces, as in my experience it is often in these spaces that innovation occurs. Indeed, I have carried this notion of creative emancipation into my work as a learning technologist. I believe it is essential that teachers and learning technologists have the space, time and effective support to engage in creative experimentation with the potential of technology for teaching and learning. As I have noted in the Operational Issues section of this portfolio:

The key is in understanding and/or discovering through ongoing experimentation the pedagogic potential of a range of technologies in the context of both general and localised/specialised teaching and learning needs; how a technology might be effectively applied to serve teaching and learning even when its core function may not, on the surface, appear to be directly related to enhancing teaching and learning.

Guided listening in practice

Guided listening in practice involves presenting text-based information about an electroacoustic composition to a listener in real time whilst they listen. The information presented has a direct bearing on the sounding elements of the composition. Classroom-based guided listening was one of the approaches I employed to engage students with electroacoustic music, a corpus of music that the majority of them had not engaged with previously and which, as noted above, is not principally organised around note-based music theory. I saw this approach as a gateway to creative emancipation.

With the class as a group, I would play an electroacoustic composition without providing any contextual information, such as the title of the music. The group would write down their thoughts about the music as they listened. As this music did not tend to conform to the structures and parameters associated with the music of notes, they had no musically formalised point of access to the content, so their interpretation of it was potentially emancipated (at least from their learned musical understanding). This initial listening would then be followed by several repeated listenings in which I would introduce contextual information about the composition. This might include its title, information about compositional techniques, the properties of sound (acoustics), or the intended communication and possible interpretation of meaning; or I might ask the group leading questions, or ask them to reflect on what they were hearing in a certain way.

Having found this informational drip-feed method to be an effective way of engaging music students with experimental forms of sound-based music in a face-to-face classroom situation, I began to conceptualise this model as an integrated component of the EARS2 project. To achieve this integration I was tasked with finding a way of presenting the guided listening method online. Fortunately the solution is a simple one: the presentation of the composition and text-based information is facilitated by a digital animation (video) that contains an audio recording of the composition with animated text annotations.

Deep Pockets by Larisa Montanaro
Copying this content or any part thereof is prohibited without permission of the composer

In this format, guided listening objects employ a multimodal approach. Multimodal learning involves the integration of two or more of the learning modalities (visual, aural, read/write, kinaesthetic). In this case the primary mode is aural (sound/music); it is this type of content that is the focus of the learning. Yet it is the presentation of this sounding content in a video format that allows the sound to be dynamically annotated with text or other visual materials in real time, thus integrating a visual/textual learning modality and allowing this specific type of learning to take place.

The ability to utilise this multimodal method is fundamentally facilitated by technology: firstly, in the technological ability to create a multimodal learning object in which audio is annotated with text or visual content in real time; and secondly, in the ability to provide simultaneous multi-user access to the learning object, in this case via the internet, rather than having to reproduce multiple copies of the content on physical media (e.g. DVD) and then find an effective mechanism to disseminate that media to those who need to view the content.

Resource creation and knowledge content

Creating a guided listening object requires identifying ‘something to hold on to’ factors (SHF) within a particular composition that can offer the listener certain keys to understanding the composition in terms of how it was made and what it is attempting to communicate/express (if indeed it is attempting to do this).

The ’something to hold on to factor’ has to do with making musical works accessible to the listener. It works as follows: the creators of a work offer their public something to hold on to in terms of appreciation in word and deed. This ’something’ in electroacoustic music can range from treatment of parameters to homogeneity of sounds and/or the search for new sounds, to the density of layering, to an appropriate form of narrativity. This ’something’ does not have to be the key element of the work in question. It is, however, an aspect of the work which helps one feel more comfortable, providing a greater understanding of the work.

(Source: Leigh Landy (1994). The ’Something to Hold on to Factor’ in Timbral Composition. Contemporary Music Review, Vol. 10, Part 2. London: Harwood: 49-60.)

These SHF can be used to describe, for example:

  • subject-specific terminology – in this case Granular Synthesis

Never (excerpt) by Curtis Roads
Copying this content or any part thereof is prohibited without permission of the composer


  • organisational approaches that the composer has used to create the content

Basketball Glitch (excerpt) by Sebastien Lavoie
Copying this content or any part thereof is prohibited without permission of the composer


  • the listening experience; how we make meaning from sound


  • subjective meaning-based aspects that the composer is attempting to communicate through the composition

Camera Oscura (excerpt) by Francois Bayle
Copying this content or any part thereof is prohibited without permission of the composer


  • technological methods that the composer has applied in the creation of the content

Never (excerpt) by Curtis Roads
Copying this content or any part thereof is prohibited without permission of the composer


  • acoustics and properties of sound

Camera Oscura (excerpt) by Francois Bayle
Copying this content or any part thereof is prohibited without permission of the composer


The annotations can also present leading questions for the listener, rather than offering direct information.


The type of SHF used to annotate the composition will depend on the learning requirements. Indeed, using this method the same composition can be used to explore different areas of knowledge. For example, if the aim is to learn about the use of certain audio technologies to create particular musical effects, e.g. reverberation, the parts of the composition where the composer has used reverberation effects can be highlighted and brought to the attention of the listener. Yet in the same composition the focus might instead be on how the composer creates, for example, a sinister ambience through the layering of particular sound types; these too can be highlighted in the text annotations. This also presents an opportunity to show pupils how a composer of this type of music, in the creation of a single composition, has to apply a range of skills and knowledge across a broad spectrum, from the use of technological production techniques to the understanding of aesthetics and the articulation of meaning through sound.

Developmental considerations

It is important to note that there are some limitations to be aware of in the creation of guided listening objects, or indeed any multimodal learning resource in which the primary content is audio and the secondary content is text annotation. Due to the temporal nature of music (or sound in general), and the inability to freeze-frame sound as can be done with moving image, there are constraints on the use of real-time textual descriptions. Such descriptions must be succinct, yet must remain on screen long enough to be read before disappearing from view and to be understood in the context of the sounding elements to which they apply. This means that complex compositional approaches, which would require a significant amount of text to describe and understand, cannot be engaged with using this method. The level of language used, and the expected level of understanding, should also be considered in relation to the reading level and knowledge base of the age group accessing the resource. For example, complex terminology and language may be difficult for an 11 year old to read, interpret and understand in the time required to apply it to what they are hearing in real time. A listener can obviously view the guided listening resource multiple times, or pause it to read and make sense of the text if there is difficulty in understanding. Nevertheless, developing guided listening objects generally requires a considered and judicious use of text. The ideal approach is to use short sentences and language that effectively highlights and/or describes the composition.
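The reading-time constraint described above can be sketched as a simple check. This is purely an illustrative aid, not something used on the EARS2 project, and the reading-speed figure and margin are assumptions rather than measured values for Key Stage 3 pupils:

```python
# Illustrative sketch: can a real-time annotation be read in its on-screen slot?
WORDS_PER_MINUTE = 140  # assumed on-screen reading rate; not a measured KS3 figure


def min_display_seconds(annotation: str, wpm: int = WORDS_PER_MINUTE) -> float:
    """Minimum on-screen time needed to read the annotation, plus a fixed
    margin so the reader can relate the text to the sound it describes."""
    words = len(annotation.split())
    reading_time = words / (wpm / 60.0)  # seconds needed to read the words
    return round(reading_time + 1.5, 1)  # 1.5 s margin is also an assumption


def fits(annotation: str, on_screen_seconds: float) -> bool:
    """True if the annotation can comfortably be read in the slot available."""
    return min_display_seconds(annotation) <= on_screen_seconds
```

On these assumptions a four-word prompt needs roughly three seconds on screen, whereas a twenty-word sentence needs over ten and would have to be shortened or split across several annotations.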

Given that guided listening was being developed as an integrated component of the EARS2 site, it was critical when developing the guided listening objects to ensure that the vocabulary and terminology used in each one matched the specific themes, terms, techniques and concepts being introduced to the users at the three distinct learning levels under which the EARS2 learning content has been structured: Beginner, Intermediate and Advanced.

Given that the EARS2 site is aimed at Key Stage 3 pupils (11-14 years of age) it was also paramount that, a) the level of vocabulary used was accessible to an 11 year old yet still written in such a way as not to be too young for a 14 year old; and b) the pieces of music chosen as guided listening objects were level- and age-appropriate with respect to their sounding content, i.e. the pieces were not too sonically complex and did not contain inappropriate subject matter.

This required some detailed triangulation between identifying appropriate compositions to use as guided listening objects, the compositional complexity of a chosen piece relative to the three levels (Beginner, Intermediate and Advanced), its relevance to the themes and concepts being taught via the EARS2 site, and the level at which certain themes and concepts were being introduced to site users. This was not an easy challenge.

To ensure consistency in the terminology used, the level of vocabulary, and the level at which each learning object would be best suited, I worked in close collaboration with the EARS2 development team. During the creation of the guided listening objects some of the videos went through a couple of iterations, either because the vocabulary used was deemed to be beyond what would be comprehensible to the intended level of user, or because specific terms used in the annotations were being presented at the wrong level. For example, the concept of Density is introduced at the advanced level. It was important, therefore, that the guided listening examples and the vocabulary used to annotate them at the beginner and intermediate levels did not engage directly with the concept of Density or any intermediate or advanced terminology associated with it.

Here are a couple of examples of before and after iterations of some of the guided listening content:


The level of the vocabulary was deemed to be above the level of the user group.


It was simplified. In this instance the use of the specialised terms automation and loudness are not problematic, as these are compositional terms that the pupils will have learned as part of the intermediate level.


The term micro-edits was problematic.


In this case the pupils will know what is meant by a sample and will understand the concept of sampling, but will not understand, nor be required to understand, the term micro-edits.

Ease of development for guided listening objects

In my opinion, a critical element of the guided listening approach is that teachers themselves should be able to create guided listening objects, so that they can tailor content to their localised teaching and learning needs. In its initial iteration the EARS2 site has a relatively small number of guided listening objects, none created by teachers themselves, which are mostly pertinent to the integrated structure and context of the learning content on the EARS2 site. Teachers may wish to use the guided listening method to engage with other concepts in the field of music that are relevant to their classroom activities, or indeed with other areas of the secondary curriculum entirely. Given this, in developing the guided listening objects I have been mindful to use relatively basic multimedia production techniques, such that it would take relatively little up-skilling to create guided listening objects, and creation would not rely on expensive media production software or hardware. To this end, the process of developing a guided listening object requires only access to musical works and basic video production software. In my case I have used the standard out-of-the-box Mac iMovie software; the standard Windows Movie Maker software can achieve the same results for PC users.

Guided listening objects comprise three layers of media:

1. A static background image which includes the title of the composition and composer credits.

2. A musical track, which can be in any format accepted by the video editing software being used to create the GL object.

An important note on the use of musical examples in Guided Listening objects. All of the music used in the guided listening objects on the EARS2 website has been used with the express permission of the copyright owners of the music. If a teacher were to create their own guided listening object they must be mindful of copyright issues.

3. A layer of text annotation that changes over the duration of the video.

A basic ‘how to create a guided listening object’ guide has been developed. This guide is included in the EARS2 Teacher Packs.

Translator packs

There is an intention for the EARS2 website to be translated into other languages. Given this I felt that it was important to put together Guided Listening Translator Packs to make it as easy as possible for the translators to translate the Guided Listening texts, and to be able to add these translations directly to the Guided Listening objects themselves. This approach is critical in terms of scalability and resilience given that my work on the EARS2 project is voluntary and hence I may not have capacity to create a new Guided Listening object each time one is required for translation.

The translator pack consists of:

  • A video copy of all of the Guided Listening (GL) objects

  • A .txt file for each GL object containing a transcript of the text

  • A blank version of the background image so that the title of the piece and the credits can be added (if these need to be translated)

  • iMovie project files for all of the GL objects and an associated iMovie style guide. This allows the translator to edit the GL object directly in iMovie without having to create a new one from scratch. This obviously relies on the translator having access to the iMovie software; as the translation work is carried out by people employed in universities, access to a Mac computer is not problematic.
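As an illustration of how the transcript portion of a pack might be generated, the sketch below writes one UTF-8 .txt file per GL object into a pack folder, alongside which the videos, blank background image and iMovie project files would be copied. The function name, folder layout and example text are hypothetical rather than the actual process used on the project:

```python
# Illustrative sketch: write the per-object transcript .txt files
# for a Guided Listening translator pack.
from pathlib import Path


def build_translator_pack(pack_dir: Path, transcripts: dict[str, str]) -> list[Path]:
    """Write one plain-text transcript per GL object and return the paths."""
    pack_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for title, text in transcripts.items():
        path = pack_dir / f"{title}.txt"
        path.write_text(text, encoding="utf-8")  # UTF-8 so translations survive
        written.append(path)
    return written
```

Keeping the transcripts as plain text files means translators can work in any editor, with the translated text then pasted back into the iMovie project.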

Other potentials under consideration

The creation of guided listening objects is an area that I am currently interested in developing further. In terms of electroacoustic music, I am exploring the possibility of establishing an online repository of guided listening OERs, to which anyone could upload guided listening objects of their own and from which anyone could download objects to use for teaching and learning. I feel that combining elements of crowdsourcing with OEP would be the most effective mechanism for building a large repository of content for those who do not have the capability of creating their own guided listening objects. Indeed, this could become a key repository of content for teachers should the secondary music curriculum continue to include more content from the electroacoustic music paradigm.

The use of similar multimodal approaches to teaching and learning via audio and text-based video objects may well have a place in Health and Life Sciences. In my current role I work with academic staff in the Faculty of Health and Life Sciences, and so can see where such content might fit. For example, I can imagine a teaching scenario in which medical, nursing or midwifery students learn how to interpret breath, heartbeat or fetal sounds when using a stethoscope; a series of annotated videos of such sounds, representing different conditions and medical scenarios, could be developed.