Latest News

Music, Mind and Meaning: A Day at the Peabody Institute

I'm fairly certain it's common knowledge by now that I spent about a decade in college. In my defense, a good chunk of that time was spent in graduate school. Find me at the bar at the next audio show and you'll undoubtedly catch a note of wistfulness folded in with all the wild gesticulations. You see, I have a lovely pair of rose-colored glasses for viewing that part of my life, and quite frankly, I'm still a bit mystified as to how I let it all spin away. But there was a time that I was on track to being a Prof Scot, all Piled Higher and Deeper, a would-be Diogenes wandering the annals of argumentation, lamp swinging. You know what they say about the best-laid plans. Bah.

Anyway, on my tour of duty in highest-ed, I spent a deployment or two slogging through the minefield that was Cognitive Science. Back then (all of 20 years ago), the fight was just shifting from the armchair to the wet-lab, and questions of semantics were largely hand-waving exercises. The field of linguistic syntax had just started yet another upheaval, perhaps induced by progress in biology that in turn demanded a stricter parsimony. I remember asking my profs whether or not "models of mind" were even worthwhile, given (as I believed both then and now) that such mysteries can and will only be answered by the brain. Yeah, I wasn't very popular.

Two decades later, at the very tail end of a January, I found myself at the Peabody Institute, the music conservatory attached to Johns Hopkins University. The topic being explored, Music, Mind and Meaning, lay at an intersection tailor-made for a philosopher-turned-audio-writer, if you ask me. It was as if my own crossroads demon had suddenly stood up and offered me a ride on the way-back machine.


The demon in question happened to be an Associate Professor at Stony Brook. As roommates in college, we used to sharpen our rhetorical skills on each other like we were whetstones. Good times! And as lives tend to go, my friend invested another decade in academia while I went off and invested that time attempting to not be so freakishly broke. We've been close ever since. Life as a pre-tenure Prof means that he spends a lot of time canvassing the conference circuit, submitting papers, presenting posters, and shepherding his students as they do the same. Interesting conferences come up all the time, and they're fantastic opportunities to see what's what and rub elbows with some real thought leaders. Which may be why he dropped this one in my lap. This conference is new to the tour and an interdisciplinary adventure at that, which usually means that the presentations have at least a chance of being comprehensible to outsiders. I mean, it makes sense — with an audience of respected experts in differing disciplines, the use of hyper-specialized academic lingo should be light.

My wife thought this was a stellar opportunity to stretch my brain. Apparently, I could use the calisthenics. Ahem. Anyway, too bad registration had already closed! But, on the off-chance that something could be worked out, I figured what the hell — I sent in a note. I mean, now I could claim to be press, right? Ha ha! But, whatever … there was no chance I was getting in. Except … I did.

Which brought me to Baltimore and to my first observation. All conferences held in January, unless they have something to do with freezing your balls off, really ought to be held someplace warm. Preferably tropical. Just saying.

So, with that said, let's take a tour through some truly interesting shit.

Day One



First up was Professor David Huron of Ohio State University. The title of his talk was Emotions and Meanings in Music (slides are here and here).

This was a great way to start! With some profs, it's pretty obvious when they're into their research. They're geeking out on all the cool bits and prone to the seemingly random diatribe. And the jokes! I lost track of how many times he cracked up the audience. He cracked wise, poked fun at himself (giggled, actually), and pumped his hands into the air, all the while sporting a huge grin. Dude was clearly having fun — and I, like everyone else in the hall, had a great time following his talk. I think the high point, which came early, was in noting the word-recognition rate amongst the various genres of music (classical was the worst, jazz the best) and a misheard-lyric anecdote from a famous R.E.M. tune: "Let's pee in the corner | Let's pee in the spotlight". That pretty much cracked up everyone.

His premise, that music carries actual meaning … well, that was a little more problematic for me.

In short, the notion of any inherent "emotive content" of a piece of music is tenuous at best, and "evokes [some emotion]" really must be teased apart from "means [some emotion]". Easy as that is to say, it's much harder to do … of course. Still, there's a lot of hay to be made tying musical cues to emotions — and Huron landed some interesting points, correlating some very interesting nuances in what we hear with what they tend to evoke in us. A video of his talk is now on YouTube, so I won't go through it, but it's fun — and his enthusiasm is infectious.

But to me, the conflation obscures and abstracts away from the heart of his own pivot for the talk, which was meaning. Arguments that even parallel the thesis that meaning is a response to external stimuli may well ring a bell or two. Behaviorism in the 21st century is interesting, if for other reasons, but I take it as read that there is "something going on in the black box of the brain" and that the whole point is to unravel what that is. Call this the jumping-off point, then. His talk is here:

Musical Intermission

What followed was a live performance from Vijay Iyer, “a Grammy-nominated composer-pianist who has been described by Pitchfork as ‘one of the most interesting and vital young pianists in jazz today’.”


He was accompanied by renowned saxophonist Gary Thomas, who’s played with a who’s-who of jazz greats, including Miles Davis, Wayne Shorter, Herbie Hancock, Pat Metheny, Ron Carter, Wynton Marsalis, and about a hundred more headliners.


This was quite a setup … and I want to say that this was not my cup of tea. I want to. But I actually do quite like jazz. Even improv. But after 30 minutes, I was unable to sit still. Or stand still. Or even vibrate in a discrete location. This miserable excuse for "music" that these two possibly innovative educators inflicted on the unsuspecting audience was, in short, a self-indulgent travesty. Two obviously proficient musicians managed, in that not-brief time, to thoroughly convey the reality that "magic" doesn't always happen spontaneously in a jam session. Further, it was entirely unclear if the two performers had ever even spoken with each other, much less played together. Oil? Meet vinegar! Now, mix! Yeah. Didn't work for me.


Day Two: Morning


The next day dawned just as miserably cold as the first, so it was glorious to creep into the old building in downtown Bal-mer with the light of early day trickling in the windows. The Peabody Institute is this grand old dame of a building, with sweeping staircases, marble floors and huge ceilings. I found myself spending more time watching light fall on the mosaic tiles and the various architectural nuances than the mob of intellectuals rubbing elbows. Snapping to required several quickly-chugged cups of joe.

A note about the room itself: it was grand and acoustically enormous. The anteroom was dominated by a half-scale cast of doors from the east portal of the Florentine Baptistry, originally created by Italian Renaissance sculptor Lorenzo Ghiberti. I think I spent a good half-hour wandering this installation, mesmerized by the incredible detail.

In the main room itself, the pipe organ dominated the space, but it was occluded by the presentation screen and sat sadly silent. I can imagine how crushing that sound would be in a room like this! Mmmm, mmmm! Another day ….


First talk in the morning was from Professor Aniruddh Patel of Tufts: Does instrumental musical training enhance the brain's processing of speech? And if so, why? Personally, I think this is about as arresting as it gets in several seemingly unrelated disciplines, and in the course of the next 30+ minutes, Patel walked us through the highlights of several decades of his research.

As to the title of his talk, it turns out that musical training does positively impact a variety of language-learning behaviors. I was particularly caught by the result that showed an improvement in the ability of the musician/student to discern speech in some contexts of noise — especially if that noise is non-coincident with the speech (off to the side). It’s almost as if a musician can better pick out individual sonic elements, even when presented with a sea of audible information. Makes sense, if you think about it — a musician has to follow not just the conductor, but the lead, the cues from the others, &c. Cool that this skill then translates to language perception! Crossover improvements informing a variety of language learning and proficiency have been studied, and the reasons for these improvements, Patel theorized, may well have to do with neural processing adjacencies. That is, the part of the brain that processes language is physically next to the parts that are worked in the learning and making of music, and maybe exercising one strengthens the other. Anyway, lots of work needs to be done here, but Patel has been constructing theoretical frameworks that would allow future research to answer questions like these. Later talks developed and built on these foundations.


When Petr Janata of UC Davis stepped up with Mapping Music to Meaningful Memories, it became blazingly apparent how far the state of the art has advanced since I left CogSci some 20 years ago.

Professor Janata used neuro-imaging to investigate “music-evoked remembering” and the notion of “nostalgia” as a vehicle for musical meaning. Now, “meaning-ful” is a bit far from what could be traditionally described as “meaning” (i.e., what’s studied traditionally in semantics), but it’s mesmerizing work, nonetheless. He studied how the brain actually responds to music, and was able to tease that apart from memory — and from “mere” auditory processing. Neat trick, but then, good empirical design will do that for you. Great talk and wickedly cool research.



During the mid-day break there was some time for the poster sessions. There were quite a few, stuffed into a hallway halfway across the (large) building. Some were staffed, some were not, but I had a great time surfing snapshots of conversations I was clearly unprepared to follow. This was a thicket of incredibly smart people asking a variety of questions of varying levels of interest, and it was a little like walking through an intellectual wood chipper.

Here’s the list:

  • The effects of awe on musical preference by Claire Arthur, et al, Ohio State University.
  • Effects of Key Consistency on Tone Recognition Memory by Jeff Bostwick et al, University at Albany, SUNY.
  • Neurocognition of Language and Music: What is (not) Shared Across Domains by Nicole Calma, et al, Stony Brook University.
  • Sounds like a Cow’s Tail: Space and Shape Analogies in South Indian Rhythmic Design by Fugan Dineen, et al, Wesleyan University.
  • The Neural Correlates of Musical Memory: An ECoG Study on Musical Imagery by Yue Ding, et al, Tsinghua University.
  • Melodic Intonation Therapy and Schizophasia by Colin Fry, et al, Grinnell College.
  • Between Auditory Inspiration and Illusion: Utilizing the Bicameral Mind by James Gutierrez, et al, UCSD.
  • At the “Crossroads”: Speech/music interactions in Tom Waits’ spoken-word songs by Chantal Lemire, et al, Western University.
  • Vocal Music and Cognition in Alzheimer’s Patients by Linda Maguire, et al, GMU and JHU.
  • Dwelling on Structure in Music by Jennifer Mendoza, et al, University of Oregon.
  • Ascending/Descending Melodic Interval Asymmetry in Arnold Schoenberg’s Vocal Music by Daniel Meredith, et al, Brooklyn College/CUNY.
  • Global and Local Musical Expectancies by Brooke Okada, et al, University of Maryland.
  • Knowing more music enables creating more pleasurable music: An assumption that illuminates some developments of music, including emotions by Mark Riggle, et al, Casual Aspects LLC.
  • Sonifying Gait: Using Music to Understand Parkinson’s Disease by Margaret Schedel, Stony Brook University.


Day Two, Afternoon

After lunch, Professor Isabelle Peretz of the University of Montreal discussed another fascinating set of case studies around amusia in a talk called Losing the Beat: A New Window on Human Rhythm. From the abstract:

A defining characteristic of our interactions with music is the ability to identify and move to the “beat”. This complex ability is universal and spontaneously acquired early in life. Yet, as we have recently shown, some individuals fail to find the beat in music and to synchronize with it, despite having normal hearing, normal motor and intellectual abilities. This disorder, to which we refer as beat deafness (a new form of congenital amusia) provides a natural experiment, a rare chance to examine how a selective cognitive deficit emerges and how it is expressed in the brain.


My wife, a professional singer, was quick to comment that this "explained a lot", which was followed by a rather pointed look in my direction. Funny lady, she. Anyway, the hour was mostly spent covering a pair of case studies that showed in rather painful (if amusing) detail how some people simply can't find or hold a beat. MRI scans of the brains of the afflicted actually show physical differences, including an unusual "cortical thickening" of the areas normally implicated in these tasks. More isn't necessarily better, at least with physical brain structures, as specialization tends to correlate with increased proficiency. Anyway, seeing this deficiency correspond exactly to a physical difference is curious. Usually, the evidence is markedly less obvious.


Things accelerated at this point — the talks were shortened to about 30 minutes and the presenters were managed quite effectively.

Professor Laurel Trainor of McMaster University presented Rhythmic Entrainment and the Developmental Origins of Cooperation. Neat talk, and a bit far-ranging. The first result covered a predictive brain wave (in the 20Hz "beta band") found in both the auditory and the motor cortex that tracks the beat in perceived music. Weird, but cool. The next result: bass-range instruments systematically carry rhythm while soprano-range instruments carry melody. Makes sense, but statistical analysis shows this is fairly universal, which is also weird. The last bit was the best part. With toddlers, they were able to show that helping behavior dramatically improved with even as little as a couple of minutes of coordinated "rhythmic activity", in this case, being "bounced" in a Snuggie while the experimenter jumped in time with the toddler. The conclusion "that connections between auditory and motor systems develop very early, are enhanced by experience, and that movement entrainment with others is an important experience for social development" is a little tenuous to me. Especially that last bit, as it doesn't appear to adequately differentiate "rhythm" from "team behavior". That is, would the same result have been shown with something other than rhythmic movement? Would organized games or co-located play show similar results? I'm guessing yes, but that undermines the role (and purpose?) of music. Apparently, one of the other presenters thought so, too, and wondered aloud if his teenagers would show any benefit from being bounced in a Snuggie.

Sorry pal. Nothing helps with teenagers.


The most engaging/troubling talk of the day came from one of the best presenters, Professor Charles Limb of the Peabody Institute and JHU School of Medicine. Dr. Limb is one of the pioneers of cochlear implants and has performed thousands of these surgeries over the last several decades. He discussed the process by which the catastrophically impaired are fitted with a complex little device, with a thin electrode array threaded directly into the cochlea of the affected individual. Crazy. The result, in most cases, is restored hearing, but Dr. Limb suggests that this may be quite a bit short of restoring beauty, specifically, the ability to actually hear music. His talk, What If We Couldn't Hear Music?, was by turns chilling and awe-inspiring. Apparently, the devices are only able to capture a portion of the audible pitch spectrum, and what is captured isn't necessarily accurate to within a semitone. Timbral information, likewise, is likely lost — it's too subtle. Translated: while many sounds are captured reliably, quite a bit is lost, and many sounds are left almost meaningless. Like music.

There are, I’m sure, many ethical issues about hearing and deafness that are being glossed over here, and I’m okay with that — the point, here, is that music may well be the pinnacle of hearing. If you can hear music, you ought to be able to hear anything. Which makes this an interesting target for developers and medical research. Flipping this around, the question “what makes music meaningful” can be answered mechanically — apparently, there’s quite a difference between music and mere sound. And that’s something that can be lost. 

I spent a lot of time whirling around with this talk, which is why I’m devoting so much space to it here. Honestly, I don’t understand the science enough to know if there’s an issue with the technology of the converter itself, or if the transducer (the wire in the cochlea) is simply not resolving … or what the hell is going on. Fertile ground, for sure.
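To make the pitch-resolution problem concrete, here's a toy sketch of my own (emphatically not a model of any real implant — the channel count, frequency range, and nearest-channel mapping are all invented round numbers): snap each note of a melody to the nearest of a handful of log-spaced "channels" and measure how far each note lands from its true pitch, in semitones.

```python
import numpy as np

def make_channels(n_channels=16, f_lo=200.0, f_hi=7000.0):
    """Log-spaced channel center frequencies, a crude stand-in for an
    electrode array's frequency allocation (invented numbers)."""
    return np.geomspace(f_lo, f_hi, n_channels)

def quantize_to_channels(freqs_hz, channels):
    """Snap each input frequency to the nearest channel center (in log space)."""
    freqs = np.asarray(freqs_hz, dtype=float)
    idx = np.abs(np.log(freqs[:, None]) - np.log(channels[None, :])).argmin(axis=1)
    return channels[idx]

def semitone_error(true_hz, heard_hz):
    """Pitch error in semitones: 12 * log2(heard / true)."""
    return 12.0 * np.log2(np.asarray(heard_hz) / np.asarray(true_hz))

# First phrase of "Ode to Joy" (E4 E4 F4 G4 G4 F4 E4 D4), equal-tempered Hz.
melody = [329.63, 329.63, 349.23, 392.00, 392.00, 349.23, 329.63, 293.66]
channels = make_channels()
heard = quantize_to_channels(melody, channels)
errors = semitone_error(melody, heard)

# With these invented channels, some notes land more than a semitone off.
print(np.round(errors, 2))
```

With only 16 channels covering the range, neighboring channels sit about four semitones apart, so the best-case error for an unlucky note approaches two semitones — which is roughly the flavor of problem Dr. Limb described.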

A more concise version of Dr. Limb’s talk appeared on TED, so hit the link below for that.

Next up was the problem of timbre. Mounya Elhilali of JHU presented The Biological Bases of Timbre Perception, an altogether too-brief discussion of what may well be the most complicated psychological phenomenon confronting scientists studying music. One of the things that human beings do, to a fairly high degree of reliability (at least with some training and familiarity), is pick out the instrument actually making the sounds we're hearing. The amount of information used in making these judgments is daunting, and given that each aspect may well engage a different processing structure, modeling timbre judgments is, well, complex. Guess what? Neural imaging supports this (big surprise), but what was shown was a model that seems to mimic what humans actually do when making such decisions, and one that appears to perform with a commensurate level of precision. That's interesting. Whether that is how the brain actually works is another question, but in terms of meaning, knowing how is a huge step toward understanding why, which hopefully will take researchers like Dr. Limb closer to "what can we do about it". Wish there were more meat here, but it appears that this corner of the science is still setting the table.
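As a toy illustration of why timbre is tractable at all (and emphatically not Professor Elhilali's model — her work involves far richer spectrotemporal representations), even a single crude spectral feature, the spectral centroid, separates a "bright" tone from a "dull" one built on the same pitch:

```python
import numpy as np

def harmonic_tone(f0, weights, sr=16000, dur=0.5):
    """Sum of harmonics of f0 with the given relative amplitudes."""
    t = np.arange(int(sr * dur)) / sr
    return sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, w in enumerate(weights))

def spectral_centroid(x, sr=16000):
    """Amplitude-weighted mean frequency of the magnitude spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return float((freqs * mag).sum() / mag.sum())

# Same pitch (220 Hz), different harmonic recipes: energy concentrated in the
# fundamental ("dull") vs. spread into the upper harmonics ("bright").
dull   = harmonic_tone(220.0, [1.0, 0.3, 0.1, 0.03])
bright = harmonic_tone(220.0, [0.3, 0.6, 0.8, 1.0, 1.0, 0.8])

print(spectral_centroid(dull), spectral_centroid(bright))
```

Real timbre judgments fold in attack transients, modulation, and much more, each plausibly handled by different neural machinery — which is exactly why the modeling problem is so hard.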


Professor Summer Rankin of the JHU School of Medicine was on hand for one of the most perplexing talks of the day: Fractal Patterns in Rhythm and Pitch Perception. Now, mathematical models are old hat in biology, and regularities are pretty much expected. But fractals? Well, yes — it appears that the mathematical regularities in biology and music … well … they repeat. On different scales. Weird. Now, fractals are pretty common: think growth in trees, which follows the same patterns whether you're talking about root-to-trunk, trunk-to-limb, limb-to-branch, branch-to-leaf, all the way down to (and past) the vein formation in the very leaves themselves. It's the same. Freaky, right? Anyway, it turns out that music has the same structural repetition — and, more interestingly, the more fractal the structure, the more "liked" it is. This is rather fertile ground for the philosopher, and the temptation is to dive face-first into the why's and wherefore's of the structural regularities in the universe; in Scot's Philosophy of Physics, what this means is that God Took Shortcuts — but that's a conversation for another day. Let's just say I have lots of thoughts about this. Anyway, Professor Rankin showed some rather bizarre and unexpected examples of this — but whether or not there's meaning that can be read into them is another question. Suggestive and interesting, for sure.
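For the curious, the standard trick for checking 1/f-like ("fractal") structure in a sequence is a log-log fit of its power spectrum: the exponent comes out near 0 for white noise and near 1 for pink (1/f) noise. A sketch, using synthetic noise as a stand-in for real note-duration or pitch data:

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_noise(n):
    """Approximate 1/f noise by shaping white noise in the frequency domain."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = 1.0                      # avoid dividing by zero at DC
    return np.fft.irfft(spectrum / np.sqrt(freqs), n)

def spectral_exponent(x):
    """Slope beta of log power vs. log frequency (power ~ 1/f^beta)."""
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x))
    mask = freqs > 0
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
    return -slope

white = rng.standard_normal(4096)
pink = pink_noise(4096)
print(spectral_exponent(white), spectral_exponent(pink))  # ~0 and ~1
```

The self-similarity claim is exactly that a plot of the spectrum looks the same at any zoom level — a straight line on log-log axes — which is the signature the talk's musical examples shared with tree branches and leaf veins.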


The last talk of the day came from Professor Mónica López-González of the Peabody Institute and the JHU School of Medicine. Clearly the oddball of the group, Professor López-González is actually a filmmaker, and yes, we got to see one of her shorts, a multimedia exploration of "fear". Her talk, Creativity and Emotion: An Interdisciplinary Approach, was entertaining, the strongest bits including the focus on neuro-imaging during improv. Yeah, nifty. If I'm reading my notes correctly, she stuck a bunch of jazz musicians under a giant imaging device, asked them to jam, and watched what their brains were doing. Can't imagine what that would have felt like, or sounded like, but it'd have been hilarious to watch if nothing else. Actual meaningful results (much less ones that bear directly on music and the why's of its representational content), however, still seem elusive.

The sum of its parts

Conferences are bizarre little windows into alien worlds for those of us on the wrong side of the glass, and that's true even if the conference is "interdisciplinary" in nature. There's really no easy way to bring two or more discrete disciplines together without a collision of vocabulary, process, standards, or interpretation, and Cognitive Science, in whatever form, is particularly fraught with this "intelligibility problem". Quite simply, the languages being used by musicologists, neuropsychologists, linguists and clinical physicians are almost entirely disjoint. Crossover is not only rare, it's almost impossible. The deep level of specialization entails a level of professional myopia that all but precludes familiarity with other fields bearing directly on the work. Which is why interdisciplinary conferences like Music, Mind and Meaning are so important and really ought to be mandatory.

I spent about a day and a half touring the field, and I have to say, I was extremely impressed. I was also bemused — so much has changed! On the other hand, it's also clear that many issues that plagued "the field" back in the 80's and 90's are just as problematic 20 years on. Basic philosophical distinctions, like "meaning" versus "meaningful", or "causal" versus "correlated with", are apparently intractable. I jest, at least a little, but the equivocation, dissembling, and faux humility required to push papers into academic journals do imply a pathological aversion to being caught building Maginot Line fortifications. And that means that actual advances, bold statements, and suggestive linkages get lost. But seriously, folks — if you're ever going to stake your flag on the mountain, it's at a conference like this.

So, where are we? Hard to say. My field, “scientific approaches to mind”, is now figuratively lightyears from where I left it. Neuro-imaging techniques like EEG, fMRI and MEG have utterly revolutionized the armchair-bound philosophy-heavy studies of the Big Black Box we all carry around, but extracting philosophical understanding from these tools is still a bit off, it seems.

A quick note about the title of the conference. "Music" is, by all accounts, the easy part of this puzzle. True, there are lots of moving parts here, and I most definitely don't want to downplay the role or importance of musicologists or ethnomusicologists working in their fields. But compared to "mind", "music" is more of an insertion point, a place to start peeling the onion in a different way. A counterpoint to the main thrust of CogSci, and, I think, a fertile one. As for "mind", well, this chestnut has been roasting for several hundred years, and it's debatable how much progress has actually been made. I think 20th-century Behaviorism did few favors to the study of that which makes us quintessentially human, that part of the universe capable of investigating itself, other than one: scientific rigor. And that's huge. But with the advent of modern linguistics, I think it's been settled law for about 50 years now that the mind does in fact admit of structure, that there are regularities, and that those regularities are predictable and amenable to study. The most elusive bit, here, has been and will ever be the last: "meaning". In fact, we're still not even in agreement as to what this term actually means (ahem). Interestingly, it's how you come down on this one that dictates how you look at the rest, at what is "interesting" and relevant and what is not, how you set up your research and your experiments, how your whole approach is best structured. Which is, perhaps, why there's so little in the way of cross-disciplinary linkage. Why actual understanding still seems a bit out of reach. And why I think Philosophy is pretty much required reading for anyone doing anything scientific with the brain. But then, I'm biased.

So, here’s what I was wrestling with at the end of these two days:

  • What do all those disparate locations within my cortical structure actually do to actually construct my world? That is, what does this structure do, or contribute, to this particular experience?
  • When does a particular neural network response encode “Ode to Joy” and why? Is it always the same network structure that responds when evoked by memory or experience? What if you could separate out the variances of individual memory — is there any “core” activation pattern? Does that vary across people? All the time or some of the time — and why might that be?
  • Can a spoken word or phrase evoke the same physical response in a listener as a chord struck on a keyboard? Is it even similar? What would that say if it did? When is it the most similar? And why would that be? And would that be the case for anyone else? And if not, how much do we vary person-to-person? Is this the same for [joy] or [fear] or [cute], if “activated” by speech or image or sound?

These are some of the questions that were zinging about while I was listening to all those bright minds discuss their work. For me, or more properly, for my old research programme, such answers might give us a better way to talk about concepts, thoughts, ideas — and "Meaning in the Mind". I suppose these concerns all spool out from the early influences of the old-school AI research programs I was indoctrinated with, and I still feel that the best answers are and will be biologically driven … but all that's another bag of glass we can swing some other time. Admittedly, this is all perhaps only interesting to me, but if so, I'm not in the least dissuaded. We all bring our baggage to the train; the hope is that the ride will take us all somewhere interesting.

Anyway, I want to thank the organizers and Scott Metcalf specifically for the chance to come and spend the day with so many interesting ideas. Here’s to a successful return in 2015!

About Scot Hull (979 Articles)

Founder, Editor and Publisher at Part-Time Audiophile and The Occasional Magazine.

5 Comments on Music, Mind and Meaning: A Day at the Peabody Institute

  1. Russ Stratton // February 17, 2014 at 7:24 PM //

    Hmmm… This got me thinking about one of the tired old debates in high-end audio: “it’s about the music, not the gear”. Why do we have to choose? Why can’t we acknowledge deriving enjoyment from both?

    When I listen to compelling music, either live or reproduced, the experience can take me to a mental and emotional happy place. When I fire up the soldering iron and put stuff together that will potentially make my system sound better, the experience takes me to a different, but equally happy place. These are two different experiences, but both are part of the hobby, and both make stuff happen inside my brain to give me happy sensations.

    Can the ideas, or maybe just the language and labels of cognitive science be used to better articulate some of the rhetorical roadblocks we keep running into in this crazy hobby?

    • Part-Time Audiophile // February 18, 2014 at 10:25 AM //

      I think the “debate” of gear vs music is kinda hilarious. I mean, if it were “about the music”, we’d have been done with the iPod, but even then, that was/is a mighty nifty device when it came out. Anyway, I’m already on record as thinking audiophiles are fetishists, so I think I can pretty much withdraw from that argument now. LOL. And … scene!

      I think you do make a very interesting point — in studies of “musical semantics”, no one really has looked at ‘nostalgia’ (for example) and tried to tease apart the musical evocation from any other. My guess is that the nostalgic reaction can be evoked (from the perspective of a pattern of brain activity) from lots of activities. Playing catch, replacing a fuse, knitting a sweater — whatever. And that’d be interesting.

      And as for science informing our hobby, well. Dunno. It can certainly inform some of our debates! Psychologists, for example, have been studying obsessives for a long time ….

  2. Interesting conference, but just trying to understand the information presented made my cortical cortex hurt. I’m gonna spin some REM and relax and let the meaning flow over me, “That’s Scot in the spotlight”.

  3. Interesting stuff, Scott. It would take far too much effort to say anything meaningful, so I won’t. Just that I enjoy your intersection of philosophy and audiophilia.
    Thanks for the post.


