Brain Soup | Part I

Embodied Spacetime

A Perfect Synthesis

A close friend of mine experiences synesthesia - a phenomenon where senses stimulate one another, leading to interesting perceptions like ‘tasty words’, ‘textured emotions’, and… coloured notes. 

I met James back in secondary school (that’s UK high school) when I caught him in a classic improvising-composing practice state at one of the practice-beaten music department pianos. He showed me his Kapustin-inspired compositions and described his remarkable (even for most synesthetes) colour-texture-sound associations. 

Here’s a tune from his latest EP, Crimson Skyline, showcasing his synesthesia with animated MIDI. 

He’s now working on his PhD at Cambridge University in Signal Processing AI, while breaking juggling world records part-time.

I remain captivated by James’ brilliant mind, and I hope to share this captivation with you here. 

Living Art

In contrast with visual art, music is an expression across time. Thus to successfully experience music, our brains need to decode sound waves and sort them by pitch and tone, instrument and player, intention and harmonic language, structure and feel, all in real time. 

This hourglass of music processing came about following a recent conversation with Timothy, and illustrates how every moment of musical experience is constantly emerging out of physical sound waves and complex brain activity:

There are three very interesting things going on in the brain during your live musical processing:

  1. Embodied Cognition:  If you’re a musician, you’re able to consider musical sound in terms of the performer’s physical movements and thought processes. That is, the brain activity associated with playing your instrument and listening to someone else play the instrument is remarkably similar.

  2. Hierarchical Predictive Processing: A feedback loop is maintained between you (a processing hierarchy) and the environment, allowing your experience (and potentially your movements) to move in tandem with the music around you.

  3. Phenomenological Experience: Completely unconscious processing of complex physical events is fed upwards in a hierarchy to produce the ‘real’ experience you perceive as taking place (e.g. surprisingly spicy chords for your dentist’s waiting room playlist).

Let’s take these one by one. 

Embodied Cognition

To make you a better guesser, your anatomy is shaped to pick out important features of the world around you. So much so that even the shape of your outer ear is optimised to amplify higher-pitched sounds arriving from above, and lower-pitched sounds arriving from below… an optimisation which reflects the fact that sounds in the environment really do reach our ears like this.

This is to illustrate that the structures of the brain and body complement one another to produce one entwined human system, and that system is itself entwined with the world around it. But as well as this, embodied cognition is also about being tuned in to other humans.

If you’re a follower of Adam Neely, you may know the self-confirming truism that:

“Repetition legitimises, repetition legitimises, repetition legitimises”

In the original context he’s talking about how repeating a wrong note in your solo can make it sound intentional. I’m reusing it here in an even wider context to illustrate the fact that music is built out of repetition across all pitch and time scales. 

The gist here is that prediction is your brain’s main game, and music is a perfect example of this because we can watch it as it happens:

  1. Recognising a song’s genre even without being able to describe the musical features that make this the case

  2. Dancing in time with musical phrases and changes in section

  3. Predictive drumming

As attentive listeners and proficient musicians, we build a high-resolution understanding, picking out the surprises within a soup of predictable structure. Here’s a simple example that’s going to help as we break this idea down further: 

You’re teaching a beginner to sing to pitch, so first you say ‘sing me a note on la’. They go ‘laa’, but it sounds less like a pitch and more like a grumble (any teachers will be familiar with the reluctance some students have to singing). But we can say something like ‘try to hold the note’ or ‘sing higher’ to get them thinking in terms of pitch, and eventually - through our feedback - they become able to sing in tune. 

This was a nice simple example of a feedback loop. Now let’s look at a slightly more complicated one:

Let’s imagine teaching a student to sing back a note they play on the piano. First they play the note, then they hear it in their heads, then they sing it, then they play it again to check. If they hear they’ve got it wrong, they sing it again (hopefully) closer to the correct note. This is an entirely self-sufficient process that they can then use to practise at home, and might look a bit like this:

This is a nice example of a double feedback loop - the student has received sound from both the piano and their own voice, and is tasked to compare the two to adapt their own response. 
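If you like to think in code, the double feedback loop above can be sketched as a tiny simulation. Everything here is illustrative - the pitches, the correction rate and the tolerance are made-up numbers, not a model of real vocal learning:

```python
def sing_back(target_hz, first_attempt_hz, correction=0.5, tolerance=1.0):
    """Adjust a sung pitch toward a piano reference, one attempt at a time."""
    attempt = first_attempt_hz
    attempts = [attempt]
    while abs(attempt - target_hz) > tolerance:
        error = target_hz - attempt      # compare the piano note with the voice
        attempt += correction * error    # adapt the next attempt using that error
        attempts.append(attempt)
    return attempts

# Aiming for A4 (440 Hz), starting noticeably flat:
history = sing_back(target_hz=440.0, first_attempt_hz=400.0)
print([round(a, 1) for a in history])
```

Each pass through the loop is one cycle of play, listen, compare, adjust - the same shape as the student practising alone at home.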

Now imagine that you’re teaching them to sing intervals, and you’re using the piano to find notes (as above), while introducing your own sung note, then asking them to listen to the harmony between the two notes. 

As you can imagine, this stuff gets complex quickly. It's good our brains have cooked up this nice intuitive space for consciousness so we don’t have to think about the nitty gritty ourselves. Feedback loops like this occur all the time in music, and allow us to play intricate rhythmic and melodic patterns with other players without even batting an eyelid. 

The next couple of sections explain two big organising principles we can use to understand some feedback loops which take place in music.

Hierarchy

Your brain processes music in a hierarchy from unconscious decoding of consonance and dissonance in sound waves, into an elaborate conscious experience built from feeling and memory. The hierarchy goes downwards too with action, as you decide to move to the music or focus your attention on the next soloist. 

To act effectively in the outside world, the new information coming upwards needs to work effortlessly alongside your movements and expectations projecting downwards. So the big theory goes, the upwards and downwards interactions are interlaced in such a way that your neurons themselves ‘model’ the world as they see it. And not just single neurons: whole neural populations, and ultimately the entire brain, engage in this process of world-modelling through intrinsically linked perception and action. 
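Here’s a heavily simplified sketch of that idea in code, assuming each level of the hierarchy holds a one-number ‘belief’, predicts the signal arriving from the level below, and revises itself using the prediction error passed upwards. Real brains do this with whole neural populations, not single numbers:

```python
def hierarchy_step(beliefs, sensory_input, lr=0.3):
    """One upward sweep: errors flow up, revised beliefs flow back down."""
    errors = []
    signal = sensory_input
    for i, belief in enumerate(beliefs):
        error = signal - belief            # mismatch between input and prediction
        beliefs[i] = belief + lr * error   # revise this level's belief
        errors.append(error)
        signal = beliefs[i]                # the revised belief is passed upwards
    return beliefs, errors

beliefs = [0.0, 0.0, 0.0]                  # low-, mid- and high-level beliefs
for note in [1.0] * 10:                    # the same input, repeated
    beliefs, errors = hierarchy_step(beliefs, note)
print(beliefs)  # every level has drifted toward the repeated input
```

Notice that the higher levels lag behind the lower ones: they only ever see the already-revised beliefs from below, which is one intuition for why abstract expectations change more slowly than raw perception.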

Hierarchical organisation appears in a very physical way in the structure of systems for vision and audition (sound), and motor control (movement).

In particular, we’ll go deep into the auditory hierarchy next time to tease apart all the different stages of processing your brain goes through to allow you to perceive music. 

Prediction

Music’s ability to harness our inherent desire to know ‘what happens next?’ is one of the primary reasons neuroscientists get so excited about it. Here’s a quote from one of the big names in music-neuroscience research:

“...Music perception is an active act of listening, providing an irresistible epistemic offering. When listening to music we constantly generate plausible hypotheses about what could happen next, while actively attending to music resolves the ensuing uncertainty” (Koelsch et al., 2019)

Music involves playing a game with the predictive processes of listeners across a broad range of time periods, from the tension and resolution between changes; to the developments in pacing of a solo; to changes in texture and groove across a whole recording. 

Your brain is even making predictions over far longer stretches of time - you practise because you expect to play in a gig tonight, you develop good relationships with the band because you want to play together more, and you pursue music because you expect it to fulfil you across the length of your life. You’re hard-wired to make and test accurate predictions across every time scale from milliseconds to decades.

Hierarchical Predictive Processing

Here’s a recent model called Active Inference which absorbs these last two ideas and aims to transcribe the whole system onto paper. Though just like a musical transcription, phenomenological experience gets left behind as a mystery:

You are an engine constantly making predictions and testing them in the environment through action. You compare your expectations with outcomes to minimise error and build a better ‘internal model’. 
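As a toy illustration only (this is nowhere near a full Active Inference implementation), here’s an agent that keeps a one-number internal model, and on each step both revises that model toward what it observes (perception) and nudges the world toward its expectation (action). All the constants are arbitrary:

```python
import random

random.seed(0)

world = 5.0   # the hidden state of the environment
model = 0.0   # the agent's expectation about the world

for step in range(50):
    observation = world + random.gauss(0, 0.1)  # noisy sensory input
    error = observation - model                 # prediction error
    model += 0.2 * error                        # perception: revise the model
    world += 0.05 * (model - world)             # action: pull the world toward expectation

print(round(model, 2), round(world, 2))
```

After enough steps the model and the world meet in the middle: prediction error has been minimised partly by updating beliefs and partly by acting on the environment, which is the central move of the Active Inference story.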

While this ambitious model has some gaps to be filled, it’s made a big splash in philosophy of mind and theoretical neuroscience circles. It not only serves as a great way to illustrate the concepts described above, but also uses very similar probabilistic approaches to those used in AI (Reinforcement learning, if you want something to Google). 

This installment has been all about how our brain-body system is pre-geared to make predictions and take actions across a range of complementary tiny, medium and huge timescales, and has a natural tendency for the hierarchical organisation which underlies this. It’s important because I’m about to describe musical elements in terms which perfectly complement this.

That’s the end of Part I.

See you next time for a dramatic synthesis of universal musical principles in sound waves and the brain, to open huge questions about what on earth music really is.

Jethro Reeve

Jethro is a neuroscience graduate from the UK, and wrote his dissertation on Cognitive Neuroscience and Music Teaching. He is a jazz pianist and teacher with a performance diploma from Trinity College London. He now studies Interdisciplinary practice in Culture & Complexity while continuing to teach piano during weekends, and looks to reapply theoretical principles of the brain to the world at large.
https://www.linkedin.com/in/jethroreeve
