The relationship between music and science is more complicated (and beautiful) than you ever imagined
I love the interplay between art and science, because it often demonstrates my conviction that we have more than one set of “senses” by which we interpret reality. While our five physical senses (touch, taste, sight, hearing, smell) are critical for apprehending the physical universe, it is our “spiritual senses” (what the Hebrew and Christian scriptures refer to as the “eyes of the heart”) that enable us to comprehend the spiritual universe. Music is but one place where the interaction between the two is so evident… and so beautiful. We’ve reposted Pam Belluck’s excellent introduction to Levitin’s work and Cory Turner’s report on a practical application of it for those less interested in the technical jargon. -GDS
To Tug Hearts, Music First Must Tickle the Neurons
By Pam Belluck • The New York Times
The other day, Paul Simon was rehearsing a favorite song: his own “Darling Lorraine,” about a love that starts hot but turns very cold. He found himself thinking about a three-note rhythmic pattern near the end, where Lorraine (spoiler alert) gets sick and dies.
“The song has that triplet going on underneath that pushes it along, and at a certain point I wanted it to stop because the story suddenly turns very serious,” Mr. Simon said in an interview. “The stopping of sounds and rhythms,” he added, “it’s really important, because, you know, how can I miss you unless you’re gone? If you just keep the thing going like a loop, eventually it loses its power.”
An insight like this may seem purely subjective, far removed from anything a scientist could measure. But now some scientists are aiming to do just that, trying to understand and quantify what makes music expressive — what specific aspects make one version of, say, a Beethoven sonata convey more emotion than another.
The results are contributing to a greater understanding of how the brain works and of the importance of music in human development, communication and cognition, and even as a potential therapeutic tool.
Research is showing, for example, that our brains understand music not only as emotional diversion, but also as a form of motion and activity. The same areas of the brain that activate when we swing a golf club or sign our name also engage when we hear expressive moments in music. Brain regions associated with empathy are activated, too, even for listeners who are not musicians.
And what really communicates emotion may not be melody or rhythm, but moments when musicians make subtle changes to those musical patterns…
What Does It Mean to Be Musical?
by Daniel J. Levitin • McGill University
Musical ability is popularly regarded to be innate: one either is or is not born with musical talent. Increasingly, neuroscientists are collaborating with geneticists to understand the links between genes, brain development, cognition, and behavior (Ebstein et al., 2010; Posner et al., 2011). Music can be seen as a model system for understanding what genes can accomplish and how they relate to experience. On the practical side, identifying genetic components that underlie musical ability can also help us to predict who will succeed or, more interestingly, what types of instruction will be most successful for individuals according to their genetic-cognitive profiles. In all domains, successful genotyping requires an accurately described phenotype. Unfortunately, the latter has not yet been accomplished for music, creating a significant hurdle to further progress. Part of the difficulty in describing the musical phenotype is its heterogeneity, the wide variety of ways in which musicality presents itself (Sloboda, 2008). My goal in this article is to review those factors that might be associated with the phenotype and to discuss definitions, measurement, and accuracy, three common obstacles in understanding the genetics of complex behavioral phenomena (Ebstein et al., 2010), with the hope that this may stimulate discussion and future work on the topic.
The Functional Neuroanatomy of Music
We now know that music activates regions throughout the brain, not just a single ‘‘music center.’’ As with vision, music is processed component by component, with specific neural circuits handling pitch, duration, loudness, and timbre. Higher brain centers bring this information together, binding it into representations of contour, melody, rhythm, tempo, meter, and, ultimately, phrases and whole compositions. The idea that music processing can be broken down into component operations was first proposed as a conceptual tool by cognitive theorists and has been confirmed by neuroimaging studies (Levitin and Tirovolas, 2009). The early distinction that music processing is right-hemisphere lateralized and that language is left-hemisphere lateralized has been modified by a more nuanced understanding. Pitch is represented by tonotopic maps, virtual piano keyboards stretched across the cortex that represent pitches in a low-to-high spatial arrangement. The sounds of different musical instruments (timbres) are processed in well-defined regions of posterior Heschl’s gyrus and superior temporal sulcus (extending into the circular insular sulcus). Tempo and rhythm are believed to invoke hierarchical oscillators in the cerebellum and basal ganglia. Loudness is processed in a network of neural circuits beginning at the brain stem and inferior colliculus and extending to the temporal lobes. The localization of sounds and the perception of distance cues are handled by a network that attends to (among other cues) differences in interaural time of arrival, changes in frequency spectrum, and changes in the temporal spectrum, such as are caused by reverberation. One can attain world-class expertise in one of these component operations without necessarily attaining world-class expertise in others.
This Is Your Brain. This Is Your Brain On Music
by Cory Turner • NPR
Musical training doesn’t just improve your ear for music — it also helps your ear for speech. That’s the takeaway from an unusual new study published in The Journal of Neuroscience. Researchers found that kids who took music lessons for two years didn’t just get better at playing the trombone or violin; the lessons also helped their brains process language.
And here’s something else unusual about the study: where it took place. It wasn’t in a laboratory, but in the offices of Harmony Project in Los Angeles, a nonprofit after-school program that teaches music to children in low-income communities.
Two nights a week, neuroscience and musical learning meet at Harmony’s Hollywood headquarters, where some two-dozen children gather to learn how to play flutes, oboes, trombones and trumpets. The program also includes on-site instruction at many public schools across Los Angeles County.
Harmony Project is the brainchild of Margaret Martin, whose life path includes parenting two kids while homeless before earning a doctorate in public health. A few years ago, she noticed something remarkable about the kids who had gone through her program.
“Since 2008, 93 percent of our high school seniors have graduated in four years and have gone on to colleges like Dartmouth, Tulane, NYU,” Martin says, “despite dropout rates of 50 percent or more in the neighborhoods where they live and where we intentionally site our programs.”
The World in Six Songs: How the Musical Brain Created Human Nature by Daniel J. Levitin
This Is Your Brain on Music: The Science of a Human Obsession by Daniel J. Levitin