Brain Soup | Part II

Mind-Brain-Body

Last time we covered the concept of embodied cognition: how the brain exists to reconstruct the crazy complex outside world inside a comparably tiny organism (you), using both mind and body, so that you can effectively act in it to survive and thrive.

This installment is all about introducing the puzzle of music as it presents itself to cognitive scientists - how the structure of musical sounds traveling through the air is decoded in the architecture of your brain - and understanding how it fits into this paradigm.

The long and short of it is that the lines between rhythm, pitch, timbre, and harmony are actually quite blurred at the physics level - they only exist as distinct features once our brain decodes them. This section builds from the lowest level of structure - the physics of sound waves - and continues in a straight shot up to our perception of these vibrations as musical sound.

We’ll look at two ways of reverse-engineering sound:

  1. Using a mathematical function called a Fourier transform.

  2. Using the cochlea of your inner ear.

It’s All Vibrations, Man

If you speed up a regular tap fast enough (at least 20Hz, i.e. 20 taps/second), your brain suddenly perceives it as a pitch. You can experiment with this using your lips by moving between extremes of the brass embouchure - try going from ‘low tuba’ to ‘Wynton Marsalis kissing an angel’ and you’ll notice the flapping ascend until it becomes a pitch.
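If you’d rather not risk your embouchure, here’s a minimal sketch of the same effect in Python (assuming numpy and scipy are installed) - it writes two click trains to WAV files, one slow enough to hear as separate taps and one fast enough to fuse into a pitch. The rates and filenames are just illustrative choices.

```python
# A click train crosses from 'rhythm' to 'pitch' as the tap rate rises.
import numpy as np
from scipy.io import wavfile

SR = 44100  # sample rate (Hz)

def click_train(rate_hz, seconds=2.0):
    """Return a train of single-sample clicks repeating rate_hz times per second."""
    signal = np.zeros(int(SR * seconds))
    signal[::int(SR / rate_hz)] = 1.0   # one click at the start of each period
    return signal

slow = click_train(4)    # 4 taps/second: clearly a rhythm
fast = click_train(80)   # 80 taps/second: fuses into a (buzzy) low pitch
wavfile.write("slow_taps.wav", SR, (slow * 32767).astype(np.int16))
wavfile.write("fast_pitch.wav", SR, (fast * 32767).astype(np.int16))
```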

So far, so good. Now if you know your Harmonic Polyrhythms, you’ll be aware that if you tap two different tempos at once (i.e. a polyrhythm), you end up with an interval (a 4:5 polyrhythm becomes a major third; a 5:6 polyrhythm a minor third). So we just need to keep stacking sped-up polyrhythms together and we get chords - a 4:5:6 polyrhythm is a major chord!
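Here’s that trick as a hedged sketch in code, reusing the click_train() helper from above: three click trains in a 4:5:6 ratio, sped up into the audible range with an arbitrary ‘base tempo’ of 100 Hz.

```python
# Three click trains in a 4:5:6 ratio, fast enough to fuse into a major chord.
import numpy as np
from scipy.io import wavfile

SR = 44100

def click_train(rate_hz, seconds=2.0):
    signal = np.zeros(int(SR * seconds))
    signal[::int(SR / rate_hz)] = 1.0
    return signal

# 4:5:6 at a 100 Hz base gives 400, 500, and 600 Hz - a (buzzy) major chord.
chord = sum(click_train(100 * ratio) for ratio in (4, 5, 6))
chord /= chord.max()                     # normalise to avoid clipping
wavfile.write("polyrhythm_chord.wav", SR, (chord * 32767).astype(np.int16))
```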

The trouble is, music doesn’t actually sound like lip farts - instead we have an endless array of timbres capable of producing any given pitch. In fact, it’s playing with this timbre landscape through synthesisers and post-production that’s defined the last few decades of Western music.

So instead of periodic taps, we reimagine our ‘repeating unit’ as a continuous wave with peaks at moments of high intensity and troughs at moments of low intensity - the spacing of these peaks is what determines the pitch of the note. The simplest form of this is the sine wave.

Take a look at this illustration from Nahre Sol’s Elements of Music. It shows how a static note is in fact a regularly repeating sound wave, and our taps from before have become the peaks of the wave. In this particular example, a 1:2 ratio has been used, which corresponds to an octave. It’s such a harmonious interval that we give both notes the same name.

The piano doesn’t actually produce sine waves, though. What we need to do is take these simply defined mathematical units and stretch, amplify, and sum them together to produce the sound we want. In this way, we can produce absolutely any timbre you can think of. The lowest frequency is the one that defines the pitch - this is the fundamental - while the others are harmonics. While our note plays, the relative amplitudes of these sine waves shift, causing a mutating mix of harmonics. This ‘note changing over time’ aspect is captured in sound engineering with Attack, Decay, Sustain, and Release (ADSR).
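Here’s a toy version of that recipe - additive synthesis from a fundamental plus a few harmonics, shaped by a crude linear ADSR envelope. The harmonic amplitudes and envelope times are invented for illustration, not modelled on any real instrument.

```python
# Additive synthesis: sum a fundamental and its harmonics, then apply ADSR.
import numpy as np
from scipy.io import wavfile

SR = 44100
t = np.arange(int(SR * 2.0)) / SR            # two seconds of time stamps

f0 = 220.0                                   # fundamental: defines the pitch (A3)
harmonic_amps = [1.0, 0.5, 0.33, 0.25]       # relative strengths of harmonics 1-4
note = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(harmonic_amps))

def adsr(n, attack=0.05, decay=0.1, sustain=0.7, release=0.3):
    """Piecewise-linear envelope: times in seconds, sustain as a level 0-1."""
    a, d, r = int(SR * attack), int(SR * decay), int(SR * release)
    return np.concatenate([
        np.linspace(0, 1, a),                # attack: ramp up to full volume
        np.linspace(1, sustain, d),          # decay: fall to the sustain level
        np.full(n - a - d - r, sustain),     # sustain: hold
        np.linspace(sustain, 0, r),          # release: fade to silence
    ])

note *= adsr(len(note))
note /= np.max(np.abs(note))
wavfile.write("synth_note.wav", SR, (note * 32767).astype(np.int16))
```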

Psychoacoustics is a big topic, and the detail really explodes after this. However, I’ve said all I need to make my point for this article, so for those who are interested, ChatGPT can fill in plenty more detail.
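Before we move on, though, it’s worth closing the loop on the first of the two ‘reverse-engineering’ methods promised at the start. Here’s a minimal sketch of the Fourier transform doing exactly that - recovering the sine-wave ingredients of a synthesised note using NumPy’s FFT.

```python
# Reverse-engineering a sound: the FFT recovers the components we summed.
import numpy as np

SR = 44100
t = np.arange(SR) / SR                       # one second of samples
f0 = 220.0
note = sum(a * np.sin(2 * np.pi * f0 * k * t)
           for k, a in [(1, 1.0), (2, 0.5), (3, 0.33)])

spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), d=1 / SR)

# The three strongest frequency bins land exactly on the components we put in.
for i in sorted(np.argsort(spectrum)[-3:]):
    print(f"{freqs[i]:6.1f} Hz  relative strength {spectrum[i] / spectrum.max():.2f}")
# -> 220.0 Hz (1.00), 440.0 Hz (0.50), 660.0 Hz (0.33)
```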

We’ve explored the surprisingly blurred lines between rhythm, pitch, timbre, and harmony when we come at them from the physics angle. Now we’ll turn to how your brain intuitively distinguishes between these, as well as how music can play with these intuitive distinctions to tickle our predictive systems.

How Your Brain Distinguishes Between Sounds

The image below shows a model put together by the neuroscientist Stefan Koelsch, based on music and language research*. Reading from left to right, you can see a full breakdown of the stages in which your brain processes music.

*Language is more than just a good analogy for music - the two share an awful lot of sensory processing areas, especially at the earlier stages!

Firstly, when you hear music, your brain starts by picking up the sound waves and turning them into electrical signals. This happens in a part of your ear called the cochlea, and these signals are then sent to your brainstem. Interestingly, musical training sharpens pitch encoding even at this early stage - part of why musicians often have such a good ear.
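This is where the cochlea earns its place as our second ‘reverse-engineering’ method: it behaves much like a bank of overlapping band-pass filters, with each patch of its membrane responding to its own slice of the frequency range. Here’s a deliberately toy version with just three channels (the real cochlea has thousands), assuming scipy is available.

```python
# A toy 'cochlea': each band-pass channel responds only to its own frequencies.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
t = np.arange(SR) / SR
sound = sum(np.sin(2 * np.pi * f * t) for f in (220, 880, 3520))  # three tones

for low, high in [(100, 400), (500, 1500), (2000, 6000)]:
    sos = butter(4, [low, high], btype="bandpass", fs=SR, output="sos")
    band = sosfilt(sos, sound)
    print(f"{low}-{high} Hz channel energy: {np.sum(band**2):.0f}")
# Each channel 'lights up' only for the tone inside its band - roughly how
# the auditory nerve hands the brainstem a frequency-organised signal.
```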

Next, your brain starts to make sense of these sounds, forming what are referred to in this model as ‘gestalts’. It stores the information in your auditory memory and starts to form a picture of the music, like putting together a jigsaw puzzle. This happens in the primary auditory cortex, a part of your brain that's dedicated to processing sounds.

After that, your brain starts to analyse the music in more detail. It looks at the intervals between the notes, which helps it understand the melody and harmony of the music. This is a bit like understanding the grammar of a language - your brain is learning the rules of how music is put together.
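At its barest, that interval analysis is just the differences between successive pitches - the brain’s version is vastly richer, but here’s a toy sketch with notes written as MIDI numbers (60 = middle C).

```python
# Interval analysis as pitch differences, labelled in music-theory terms.
INTERVAL_NAMES = {0: "unison", 1: "minor 2nd", 2: "major 2nd", 3: "minor 3rd",
                  4: "major 3rd", 5: "perfect 4th", 7: "perfect 5th",
                  12: "octave"}

melody = [60, 64, 67, 72, 67, 64, 60]   # an arpeggiated C major chord
for prev, curr in zip(melody, melody[1:]):
    step = abs(curr - prev)
    print(f"{prev} -> {curr}: {INTERVAL_NAMES.get(step, f'{step} semitones')}")
```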

The next stage is the really fun part - prediction (structure building). You can see in the diagram that there’s an arrow pointing backwards to Feature Extraction 2 - this is exactly the thing we introduced last time with hierarchical predictive processing, where your past experiences and knowledge of music are used to estimate what will come next. For example, if you've grown up listening to Western music, your brain will have an understanding of Western musical structures and will be able to predict what notes are likely to come next. 

The flipside of this predictive process is surprise - like when there’s an unexpected change in the melody or rhythm. When this happens, your brain goes into ‘repair mode’, trying to make sense of the new information. This is also the stage where music can really make us feel alive, or "vitalized". It can involve physical responses like increased heart rate or even goosebumps, and it shapes our particular personal response to a chord or melodic line. This is the fundamental recipe for good new music - patterns which break your expectations, but not so much that they lose their structure.
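To make prediction and surprise concrete, here’s a toy first-order Markov model over notes, where ‘surprise’ becomes surprisal measured in bits: small for the expected resolution, spiking for the twist. The transition probabilities are invented for illustration, not learned from any real corpus.

```python
# Surprisal: -log2 P(next note | current note). High surprisal = broken expectation.
import math

transitions = {                      # invented probabilities, for illustration
    "C": {"D": 0.6, "E": 0.3, "G": 0.1},
    "D": {"E": 0.7, "C": 0.3},
    "E": {"G": 0.5, "C": 0.5},
    "G": {"C": 0.9, "F#": 0.1},
}

melody = ["C", "D", "E", "G", "F#"]  # ends on a twist instead of resolving to C
for current, nxt in zip(melody, melody[1:]):
    p = transitions[current].get(nxt, 0.01)   # tiny floor for unheard-of moves
    print(f"{current} -> {nxt}: surprisal = {-math.log2(p):.2f} bits")
# The expected G -> C would carry ~0.15 bits; the G -> F# twist carries ~3.32 -
# that spike is the 'surprise' that kicks the brain into repair mode.
```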

Music also has a big impact on our motor system, which controls our movements. This is why we often find ourselves moving to the beat of the music, whether it's tapping our foot, dancing, or playing an instrument. One of the areas within your motor system - the dorsal premotor cortex - seems to sit right at the top of the predictive hierarchy for rhythm perception, responsible for holding the beat in your head as you move and play.

Music also has a social aspect. When we move in time with music, especially in a group, it can create a sense of unity and shared experience, sharing the same emotions between musicians and listeners. This likely takes place through the release of endorphins, which are chemicals in our brain that make us feel good. It’s pretty fascinating stuff.

Hopefully I’ve illustrated that music presents a unique challenge for our brain, drawing together so many different elements all at once.

Conclusion

You should now have a good idea of how vibrating air waves produce layers of structured electrical responses in your brain. Next time we’ll turn to how we make sense of music from within our mysterious subjective experience - that space we all navigate in our day-to-day lives, and which music augments.

It’s pretty magical stuff, and it gives us context for asking bigger questions about why music leads to the sheer variety of experiences it does - including the desire to move, be moved, and feel spiritually satisfied.

See you in Part III for the finale to the series: Do You Feel That?

Jethro Reeve

Jethro is a neuroscience graduate from the UK, and wrote his dissertation on Cognitive Neuroscience and Music Teaching. He is a jazz pianist and teacher with a performance diploma from Trinity College London. He now studies Interdisciplinary practice in Culture & Complexity while continuing to teach piano during weekends, and looks to reapply theoretical principles of the brain to the world at large.
https://www.linkedin.com/in/jethroreeve
