This article examines taste clusters of musical preferences and substance use among adolescents and young adults. Three analytic levels are considered: fixed effects analyses of aggregate listening patterns and substance use in US radio markets, logistic regressions of individual genre preferences and drug use from a nationally representative survey of US youth, and arrest and seizure data from a large American concert venue. A consistent picture emerges from all three levels: rock music is positively associated with substance use, with some substance-specific variability across rock sub-genres. Hip hop music is also associated with higher use, while pop and religious music are associated with lower use. These results are robust to fixed effects models that account for changes over time in radio markets, a comprehensive battery of controls in the individual-level survey, and concert data establishing the co-occurrence of substance use and music listening in the same place and time. The results affirm a rich tradition of qualitative and experimental studies, demonstrating how symbolic boundaries are simultaneously drawn around music and drugs.
We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that the weighting of known probabilities in decision making under risk might also be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain, and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music (happy, sad, or no music, or sequences of random tones) and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the “happy” than in the “sad” and “random tones” conditions. Via structural regressions based on CPT, we found that the observed changes in participants’ choices can be attributed to changes in the elevation parameter of the probability weighting function: in the “happy” condition, participants showed significantly higher decision weights associated with the larger payoffs than in the “sad” and “random tones” conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting.
…or why casinos play music.
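To make the abstract's key term concrete: a probability weighting function maps an objective probability p to a subjective decision weight w(p), and an "elevation" parameter shifts the whole curve up or down. Here is a minimal numerical sketch, assuming the commonly used Goldstein-Einhorn (linear-in-log-odds) parameterization; the paper's exact functional form and parameter values may differ:

```python
def weight(p, gamma=0.6, delta=1.0):
    """Goldstein-Einhorn probability weighting function.

    gamma controls curvature (sensitivity to probability changes);
    delta controls elevation (how attractive chance looks overall).
    """
    num = delta * p**gamma
    return num / (num + (1 - p)**gamma)

# A gamble paying 100 with probability 0.3: raising elevation
# (as the paper reports for the "happy" condition) raises the
# decision weight on the winning outcome.
low, high = weight(0.3, delta=0.8), weight(0.3, delta=1.2)
print(round(low, 3), round(high, 3))  # low < high
```

In CPT terms, a higher delta inflates w(0.3), so the lottery's subjective value rises and risky choices become more frequent; that shift in delta across mood conditions is the mechanism the authors infer from their structural regressions.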
There are many health hazards involved in heavy metal, including sexually transmitted disease, pyrotechnic burns, rabid bat ingestion and decapitation. But the genre’s most popular stage move also carries with it a considerable amount of medical risk. Studies stretching back to 1983 have reported some very scary side effects of head-banging, including deaths from carotid dissection and subdural hemorrhage. But a new Japanese study of a professional head-banger who survived his injuries suggests that the practice should be considered a grown-up version of an early-life danger: shaken-baby syndrome.
The case study, published in The Annals of Thoracic Surgery, describes a 34-year-old guitarist from a popular “visual-kei” band – a subculture of Japanese rock that appears to be descended from American hair metal. The patient came to the doctors with recurrent neck and chest pain, which commonly flared up after his concerts. After a CT scan, the doctors discovered mediastinal emphysema – essentially a pocket of air where it shouldn’t be, between the lungs and chest. The guitarist was sent home with painkillers and antibiotics, which (according to my doctor wife) are given to prevent infection in case these air pockets are the result of “esophageal rupture.”
While the patient recovered after this treatment, the doctors sought the cause of the unusual injury by reviewing concert DVDs featuring his band. They noticed “that he shook his head violently throughout the concerts” and that many members of the audience did as well. Digging into the literature, they found a similar case of mediastinal emphysema in soldiers who vigorously shouted “Hooah!” during a training exercise while trying to outperform their peers.
All this injurious, violent behavior reminds the doctors of the “head-banging, head-rolling and body-rocking” commonly observed in infants. In extreme or abusive cases, these motions can cause Shaken Baby Syndrome, the symptoms of which squarely overlap with the various reports of head-banging damage from the literature – with the disturbing additions of retinal hemorrhage and failure to thrive (both excellent heavy metal band names, incidentally).
To avoid such injuries, the doctors helpfully recommend playing slower-tempo music instead of heavy metal, head-banging every other beat instead of on every single one, or wearing “personal protective equipment.” Perhaps it’s not a coincidence that the drummer in this visual-kei video is wearing a neck brace? But while many public health campaigns are underway to warn parents about the dangers of shaken baby syndrome, the authors of this paper seem much more pessimistic about the potential of a head-banging hazards awareness campaign.
"Unfortunately, it is difficult, if not impossible, to change the habits of heavy metal aficionados," the article concludes. Metal health will drive you mad.
Matsuzaki S., Tsunoda K., Chong T. & Hamaguchi R. (2012). Mediastinal Emphysema After Head-Banging in a Rock Artist: Pseudo Shaken-Baby Syndrome in Adulthood, The Annals of Thoracic Surgery, 94 (6) 2113-2114. DOI: 10.1016/j.athoracsur.2012.05.054
While traveling down a reference wormhole from this paper, I landed on a 1912 journal article called “An Experimental Study of Musical Enjoyment,” by a psychologist at Clark University named Harry Porter Weld. Basically, Weld played records to individual college students, measured their heart rate and breathing, and asked them to describe their listening experience. Despite some steampunkish technology (a plethysmograph, two Sumner pneumographs, a Zimmermann kymograph, a Victor Talking Machine Model V), the biological findings were pretty ho-hum: music accelerated heart rate and respiration.
But much more fascinating to a modern reader are the extensive descriptions of visual imagery that the subjects reported after listening to the musical selections, which consisted of orchestras, concert bands and military bands. Bear in mind that this study was conducted 15 years before The Jazz Singer launched the talkie era in motion pictures. Television was still decades away; MTV, nearly 70 years. Thus, you might expect the visual imagery reported by music listeners in 1912 to be strictly literal: musicians performing the piece, or maybe a dramatic scene from an opera (these were New England college students, so they probably made the occasional trip to the theater).
Instead, the imagery reported for pieces of predominantly instrumental, non-descriptive music is surprisingly psychedelic or narrative. “One observer visualized staccato notes of high pitch as bright points of steel, another as sparks of light, still another as drops of rain falling on water; a flitting movement in the music evoked the visual image of fairies.” Three of the listeners even came up with detailed stories to fit different pieces of music, ranging from dancing scenes to staged performances to parades and religious ceremonies:
Bizet, Pearl Fishers. Military Band. ‘I saw street Arabs in the Midway; then I saw soldiers dressed in electric blue; sometimes it seemed like a circus parade, then like a military parade. The scene changed toward the end, and I seemed to be in a church. I heard the organ but could not see the organist. I saw robed figures; as they marched, something swayed back and forth, maybe censers. Near the end where the heavy chords came, I thought of a triumphal anthem and I went along with it.’
In the next experiment, five days later, this composition was inadvertently played a second time. The robed figures came in as before; but on a stage instead of in a church.
'The priests came on the stage, the pipe-organ was on the stage; I did not feel the triumph, but I felt a solemn thanksgiving-not exultant as before. I think the change was due in part to a change in mental attitude; all the sincerity of it was gone. Before, I lived it. Then the priests were in church, they were really celebrating a triumph in the Middle Ages. Now, it was acting on the stage.'
Weld observed that the common thread of these stories was motion, and that the motions varied relative to changes in rhythm and pitch in the music. “The outline of the imaginal experiences invariably conforms to that of the music,” he wrote. Weld also noted that the imagery was involuntary — all the observers denied that they were trying to invent a story to explain the music, and claimed that it spontaneously appeared in their mind’s eye.
"Visual imagery seems to be entirely unnecessary so far as the enjoyment of music is concerned," Weld writes. "On the other hand, there is always the possibility that the possession of such concrete imagery may give definiteness to an experience that might otherwise be vague and abstract."
But when composers attempt to harness this musical triggering of imagined stories, the results can be unpredictable. An explicitly narrative piece of music (Voelker’s “Hunt in the Black Forest”) describing, well, a hunt in the Black Forest, was played with the title and program information withheld. The listeners reported imagining a barnyard, a circus, men selling stocks, an amusement park, a zoo, a battle. Only one reported visual imagery of a hunt.
"These introspections show that the composer is powerless to evoke any one definite mental picture in the minds of all of his auditors," Weld concluded.
Now that feat can be accomplished through a music video, even as the popularity of the form has dipped a little bit from the heyday of MTV. I’d guess repeating this experiment today, updated with contemporary music, would produce considerably less diverse musical imagery from the listeners, whether due to the influence of a music video or the omnipresence of music stars across media — but maybe that’s just “kids today don’t have to work for their musical imagination” grumpy old man talk. Yet while we share the tendency to attach visuals to music with listeners from nearly a century ago, there’s a possibility we may have lost the freedom to choose our own musical imagery adventure.
Weld H.P. (1912). An Experimental Study of Musical Enjoyment, The American Journal of Psychology, 23 (2) 245. DOI: 10.2307/1412844
Generally speaking, with a few notable exceptions, all humans are capable of recognizing the beat within music. Studies have shown that the brains of adults, children and even newborns respond to the downbeat of a simple 4/4 rhythm, even if the test subjects were unreliable interpreters of that knowledge on a wedding reception dance floor. Some scientists have even sought to test whether beat perception exists beyond humans, in species of birds, monkeys, apes and even dolphins and seals. The most convincing evidence for non-human beat-keeping was found in songbirds, specifically a cockatoo with a taste for the Backstreet Boys and in parakeets, which suggests a link between beat perception and vocal learning. But then what about those dolphins and seals, who don’t show the ability to recognize the downbeat? And what about our closest relatives, the nonhuman primates?
Studies using tapping or head bobbing along to musical rhythms failed to find evidence for beat detection in rhesus monkeys and chimpanzees. But everyone has met a few human beings who wouldn’t be able to perform these simple functions either. So a team of researchers from Amsterdam and Queretaro, Mexico sought to use a more objective method of measuring rhythmic perception: brain waves.
The subjects were a couple of rhesus monkeys named Aji and Yko, setting up an enticing array of Steely Dan and John Lennon jokes. The monkeys were outfitted with EEG electrodes — just like a human receiving a sleep test — set up in a comfy chair (no really: “The animals were seated comfortably in a monkey chair where they could freely move their hands and feet,” the authors write) and played various musical samples. These tests followed what’s called an “auditory oddball paradigm,” where a pattern of sound is established and then a deviant pattern is introduced. For example, a tone is played several times, and then a different tone (or a period of silence) is introduced into the mix. In humans, this unexpected break in the pattern produces a characteristic change in brain activity called a mismatch negativity component, or MMN, which is thought to be an “error signal” that the predicted pattern has gone awry.
In the first two experiments, testing the introduction of a deviant tone or unexpected silence, the monkeys showed a robust MMN similar to the response seen in humans. The researchers then tested a rhythmic pattern, “based on a typical 2-measure rock drum accompaniment pattern…composed of snare, bass and hi-hat” (maybe it was “Black Cow”). In human studies, omitting the first beat of this pattern, the downbeat, produces a strong MMN, which interestingly can be interpreted as either a mistake or syncopation, depending on the context. However, putting a gap into any of the other, weaker beats of the pattern fails to trigger this rhythmic error signal in the brain.
Aji and Yko were found to be less concerned with the music “hitting on the one.” While a missing downbeat triggered an MMN, other patterns that deleted weaker beats of the pattern also could evoke the error signal. The authors concluded that the monkeys were tracking “rhythmic grouping” without detecting a “beat” — recognizing that a rhythmic pattern contained a certain number of elements, while not awarding any extra power to the first beat over the others.
Because rhesus monkeys and other nonhuman primates don’t exhibit vocal learning (though this is controversial), the results of this experiment support the idea that beat perception goes hand in hand with the ability to hear and mimic the sounds that others are making. So humans and songbirds — and perhaps seals and dolphins, if scientists could find the right way to test them — are capable of sensing the beat within a rhythm, a hypothesis that works well with the theory of Steven Pinker and others that music is a by-product of language. Meanwhile, nonhuman primates and other animals would be terrible percussionists, despite the popularity of scary cymbal-playing toy monkeys.
Honing H., Merchant H., Háden G.P., Prado L. & Bartolo R. (2012). Rhesus Monkeys (Macaca mulatta) Detect Rhythmic Groups in Music, but Not the Beat., PloS one, PMID: 23251509
Among the most frequent zingers any music critic receives is some variation of “if your so smart, why don’t u do better?!?” or “sorry you’re a failed musician, dude.” The underlying logic of these attacks is that a listener must have some musical ability in order to write about and judge music. But many music critics don’t have a formal musical education, or aren’t that interested in performing or creating music themselves. I myself know enough guitar to play some open chords and can tell the difference between whole notes, half notes and quarter notes on a sheet of music — that’s about it. Yet I feel no qualms about putting my listening experience into words, since the vast majority of the audience for most music is made up of non-musician or amateur-musician listeners, not fellow professional musicians.
But for the sake of science fiction, let’s imagine a world (dystopic, surely) where all critics were required to be accredited musical listeners before the fragile careers of bands were entrusted to their merciless hands. What would be the driver’s test for these privileges? Lily Law and Marcel Zentner of the University of York may have created it with the Profile of Music-Perception Skills, known by the slightly labored acronym PROMS.
Law and Zentner didn’t have music criticism in mind when they developed their test. Instead, they were looking for a reliable way of testing musical perception skills that could be used in studies that seek to link musicality with a whole mess of biological and behavioral traits, ranging from motor skills to empathy. Most of these studies simply compare “musicians” vs. “non-musicians” — a binary classification based on the subject’s self-reported amount of musical training and nothing more. But Law and Zentner speculated that such criteria missed two important groups of people: “musical sleepers” who have natural musical skills despite little formal training, and “sleeping musicians” who remain musically stunted despite lengthy education and practice. Other scientists have developed similar musical perception tests, but most date back to the middle of the 20th century and have several limitations, including overly complex or culturally biased stimuli and the use of poor quality audio recordings performed by unreliable human musicians.
The PROMS test breaks musical perception down to its basics, as simple and culturally neutral as the authors could make it. Subjects perform nine subtests, covering melody, standard rhythm, rhythm-to-melody, accent, tempo, pitch, timbre, tuning and loudness. The musical snippets that the researchers use are short “proto-stimuli” of synthesizer tones or library sound samples (encoded at 128kbps, yuck), and each test is a same-or-different comparison. So two sound samples are played (say, of two pitches, two tempos or two instruments playing the same note) and the subject is asked to say whether they were definitely the same, probably the same, definitely different, probably different or “I don’t know.” Points are awarded for correctness and confidence; a wishy-washy “I don’t know” gets you zero. The full battery takes about an hour to complete, though the authors are working on a more concise version as well.
Most of the paper is concerned with testing the validity of PROMS: within individual subjects who took the test twice on different days, against other musical perception batteries and against a simple white-noise gap perception test. But it’s interesting to look at how the old categories of “musicians” and “non-musicians” performed against each other. The graph at right plots amount of musical training against total PROMS score, and while there’s certainly a trend where education improves perception, it’s not as sharp as one might expect. As the authors predicted, there’s a fair number of musical sleepers and sleeping musicians, people who don’t exactly follow the easy equation of musical training = musical expertise.
It would be interesting to see how time spent listening to music, rather than formally studying it, correlates with PROMS score. Without trying the actual test (I looked for an online version, but no dice), I’m guessing I would do pretty well on the subtests for rhythm, melody and timbre, and probably not so hot on the pitch and tuning tests, which tap skills working musicians exercise far more often. I’d hypothesize most music critics fall into the “musical sleepers” category, just through endless immersion in the artform. But even the poor sucker above who rolled a 60 on the PROMS could probably be a decent music writer if they could string a few words together: after all, the best music criticism is less about the elemental parts than the cohesive, emotional, subjective whole. Still, I wouldn’t mind throwing my PROMS score back at the next mouth-breather to accuse me of musician envy.
Law L.N.C. & Zentner M. (2012). Assessing musical abilities objectively: construction and validation of the profile of music perception skills., PloS one, PMID: 23285071
To test the effect of lysine supplementation on herpes infection, 1543 subjects were surveyed by questionnaire after a six-month trial period. The study included subjects with cold sores, canker sores, and genital herpes. Of these, 54% had been diagnosed and treated by a physician. The results showed that the average dosage used was 936 mg of lysine daily. Eighty-four per cent of those surveyed said that lysine supplementation prevented recurrence or decreased the frequency of herpes infection. Whereas 79% described their symptoms as severe or intolerable without lysine, only 8% used these terms when taking lysine. Without lysine, 90% indicated that healing took six to 15 days, but with lysine 83% stated that lesions healed in five days or less. Overall, 88% considered supplemental lysine an effective form of treatment for herpes infection.
Walsh D.E., Griffith R.S. & Behforooz A. (1983). Subjective response to lysine in the therapy of herpes simplex, Journal of Antimicrobial Chemotherapy, 12 (5) 489-496. DOI: 10.1093/jac/12.5.489
[Peer Review is an occasional series where I fact-check scientific claims made by songs.]
Contrast is the simplest way to play tricks on the brain. Sit in a dark room for a while then go outside, and a sunny day can be painfully blinding. Hold an ice cube in your hand then stick it under a tepid stream of water, and it will feel warm to the touch. Composers figured out the power of musical contrast centuries ago, deploying the quiet-loud-quiet effect long before the Pixies came along. But University of Toronto and Ohio State researchers tested whether a more nuanced musical illusion exists — the contrast between happy tunes and sad.
This phenomenon is less obvious than it sounds. If you played someone the same piece of music 13 times in a row and then suddenly a new piece, any listener would probably respond positively to the release from repetition. But for the same thing to happen with the mood of the music, such that a sad song is rated higher after 13 consecutive, but unique, happy songs, means that the contrast can play the same tricks in the more abstract realm of emotion. In science jargon, that effect is called “hedonic contrast,” and the researchers wanted to test it out in a pool of undergraduate listeners.
In such an experiment, the researchers face the risk of unintentionally picking music already familiar to the subjects, and perhaps already infused with emotional meaning. The easy solution, at least for college students, is classical music. “Results from multiple samples of listeners from the same university population — who listen primarily to dance-pop music — motivated the assumption that excerpts from the particular genre used here (i.e., classical piano pieces) would be unfamiliar to the present listeners,” the authors write. Pre-experiment surveys of the subjects found that only 11 of the 94 students listened to classical music at least occasionally. So the researchers didn’t have to dig too deep: the 30-second snippets used in the study were taken from canonical composers such as Beethoven, Mozart and Schubert.
The first group of listeners was subjected to a punishing ABAAAAAAAAAAAAAB schedule, where the As represented either happy or sad music (determined by tempo and major vs. minor key) and the Bs represented the flip to the other emotion. Over that long run of As, the listeners showed habituation to the happy or sad streak, rating each subsequent piece lower and lower. But consistent with the hedonic contrast hypothesis, the final B piece at the end of the long chain of As was consistently liked better than the preceding snippet, and the difference between that A and B was significantly larger than the first A-to-B contrast of the sequence. Still, the effect was impossible to separate from another concept, called “contrastive valence,” which proposes that unexpected novelty is a source of musical pleasure. Listeners expecting happy snippet #14 might just react strongly to a surprise dose of classical melancholy, and rate the switch as rather enjoyable.
The authors speculated that composers and modern songwriters are probably well aware of this higher-level contrast as well, using emotional shifts everywhere from the structure of symphonies to album sequencing. Unknown is how this effect translates to music or genres that are more familiar to the listener, or how this phenomenon plays out in our broader musical culture. “Listeners may enjoy sad-sounding music simply because of its relative rarity — and hence contrast — in a culture in which happy-sounding music is more prevalent,” they write, a theory that might explain why that dreary Gotye song was such a weird pop radio hit in 2012. But for anyone stuck in their own playlist rut, consider adjusting the emotional contrast — break up that Low binge with some K-Pop.
Schellenberg E.G., Corrigall K.A., Ladinig O. & Huron D. (2012). Changing the Tune: Listeners Like Music that Expresses a Contrasting Emotion, Frontiers in Psychology, 3 DOI: 10.3389/fpsyg.2012.00574