Music and Language
Music and language are the two principal products of the human communication system. Research in this area involves comparative investigation of the structure of language and music. Research topics focus on the structural features shared by music and speech, such as rhythm, meter, pitch and tone contours, and their physical realization, as well as a comparison of the hierarchical organization of the two systems. One area receiving attention, for instance, investigates the relationship between a rhythmic typology found among languages, 'syllable timing' versus 'stress timing' (e.g., French versus English), and the musical rhythms that develop in these different types of language communities (Ani Patel of the Neurosciences Institute in San Diego, who has written extensively on this topic, was a guest speaker this past April in both Linguistics and at the Music Cognition Symposium).
Other important areas of research involve the relationship of the suprasegmental structure of speech and language (pitch, meter, rhythm and tone) to meter, tonal contour and rhythm in music, as found in the setting of text to music; the evolution of language and music; the use of pitch contours to express discourse functions (for example, a small rising intonation that means 'I'm not finished speaking yet'); how linguistic notions like the metrical line or poetic meter relate to meter and timing in music; and the effect of African polyrhythms on speech cadence and the development of jazz. Many of these issues came up in McDonough and Danko's class on music and language (spring 07). Student projects included an investigation of the effects of the rhythmic structure of Spanish and Wolof in salsa music, a comparison of the tonal contours of spoken versus sung verse in Chinese poetry, the relationship between spoken word and musical recitation or instrumental 'storytelling' in Hmong, and text setting in Gregorian chant. Other areas of investigation lie in the use of music and non-tonal musical rhythms in oral traditions of storytelling; the structural changes imposed by the process of writing on memory, music, and metrically enhanced storytelling in oral communities (research by Albert Lord and David Rubin); and a comparison of the transcription and notation systems of the two domains. An important focus for future research concerns whether there are similarities in the 'syntax' or grammar of language and music: for example, whether the principles governing phrases and their structure in musical compositions resemble those governing phrases and their structure in the languages of the world. Insofar as there are similarities, these are fertile areas for research on the shared evolution of cognitive capacities and possibly shared underlying brain structures.
The strong language research community at Rochester, which includes the Center for the Language Sciences (CLS) (language researchers in Brain and Cognitive Sciences, Computer Science, Linguistics and Philosophy), makes this an ideal setting for a focus on language and music. Such an alliance has precedent: more than 50 years ago, for instance, Karlheinz Stockhausen collaborated with Werner Meyer-Eppler in a phonetics lab in Bonn. The CLS community is involved in a number of cross-disciplinary research areas relevant to the interests of the Sound & Music group: the Danko-McDonough work on speech cadence and music; studies of the semantics of music and language (Greg Carlson and a composition student from ESM); and possible future studies of syntactic structure in language and music (cf. Lerdahl & Jackendoff).