Colloquia Series

2015

May 29, 2015, 513 Lattimore Hall

E. Allyn Smith, Associate Professor, University of Quebec at Montreal

Cross-linguistic differences in disagreements arising from descriptive and evaluative propositions

Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called 'predicates of personal taste'), as in (1), and their theoretical implications.

  1. A: This soup is tasty. B: No it isn't.
  2. A: Rochester is in Quebec. B: No it isn't.
  3. A: This soup is tasty, in my opinion. B: # No it isn't.

The idea is that, as compared to a descriptive proposition like (2A), evaluative propositions express the opinion of the speaker, but refuting them doesn't seem to deny that the speaker holds such an opinion (Kölbel 2003, Lasersohn 2005, etc.). This would, in principle, make them similar to sentences like (3), but here, direct disagreement is not felicitous (Stephenson 2007). Stephenson argued that the same can be said of epistemic modals such as 'might': you can say 'no' to the fact that Elizabeth might visit if you know otherwise, but if someone says that they don't know whether Elizabeth will visit, saying 'no' cannot indicate that she won't.

In this talk I will present offline felicity judgment data from English and Spanish two-turn oral dialogues showing that there are differences with respect to these judgments, which creates a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007 and Umbach 2012. I will further discuss the interplay of various factors in these data, including cultural politeness differences (introducing data from another dialect of Spanish with known differences in cultural norms). As time permits, I will also present data from Catalan and French.


May 8, 2015, 513 Lattimore Hall

Roger Levy, Associate Professor, Department of Linguistics, University of California, San Diego

Bayesian pragmatics: lexical uncertainty, compositionality, and the typology of conversational implicature

A central scientific challenge for our understanding of human cognition is how language simultaneously achieves its unbounded yet highly context-dependent expressive capacity. In constructing theories of this capacity it is productive to distinguish between strictly semantic content, or the "literal" meanings of atomic expressions (e.g., words) and the rules of meaning composition, and pragmatic enrichment, by which speakers and listeners can rely on general principles of cooperative communication to take understood communicative intent far beyond literal content. However, there has historically been only limited success in formalizing pragmatic inference and its relationship with semantic composition. Here I describe recent work within a Bayesian framework of interleaved semantic composition and pragmatic inference, building on the Rational Speech-Act model of Frank and Goodman and the game-theoretic work of Degen, Franke, and Jäger. These models formalize the goal of linguistic communicative acts as bringing the beliefs of the listener into as close an alignment as possible with those of the speaker while maintaining brevity.

First, I show how two major principles of Levinson's typology of conversational implicature fall out of the most basic Bayesian models: Q(uantity) implicature, in which utterance meaning is refined through exclusion of the meanings of alternative utterances; and I(nformativeness) implicature, in which utterance meaning is refined by strengthening to the prototypical case. Q and I are often in tension; I show that the Bayesian approach constitutes the first theory making quantitative predictions regarding their relative strength in the interpretation of a given utterance, and present evidence from a large-scale experiment on interpretation of utterances such as "I slept in a car" (was it my car, or someone else's car?) supporting the theory's predictions.

I then turn to questions of compositionality, focusing on two of the most fundamental building blocks of semantic composition, the words "and" and "or". Canonically, these words are used to coordinate expressions whose semantic content is at least partially disjoint ("friends and enemies", "sports and recreation"), but closer examination reveals that they can coordinate expressions whose semantic content is in a one-way inclusion relation ("roses and flowers", "boat or canoe") or even in a two-way inclusion relation, or total semantic equivalence ("oenophile or wine-lover"). But why are these latter coordinate expressions used, and how are they understood? Each class of these latter expressions falls out as a special case of our general framework, in which their prima facie inefficiency for communicating their literal content triggers a pragmatic inference that enriches the expression's meaning in the same ways that we see in human interpretation. More broadly, these results illustrate the explanatory reach and power of recursive, compositional probabilistic models for the study of linguistic meaning and pragmatic communication.
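
As a rough illustration of the modeling framework the abstract describes, the following minimal sketch implements the basic Rational Speech-Act recursion (literal listener, soft-max speaker, pragmatic listener) for a toy scalar-implicature case; the utterances, states, prior, costs, and rationality parameter are illustrative assumptions rather than the talk's actual materials or settings.

```python
import numpy as np

# Minimal Rational Speech-Act (RSA) sketch for a scalar-implicature toy case.
# Utterances, states, prior, costs, and alpha are illustrative assumptions,
# not the materials or settings from the talk.
utterances = ["some", "all"]
states = ["some-but-not-all", "all"]

# Literal truth conditions: rows = utterances, columns = states.
truth = np.array([
    [1.0, 1.0],   # "some" is true in both states
    [0.0, 1.0],   # "all" is true only in the all-state
])

prior = np.array([0.5, 0.5])   # listener's prior over states
alpha = 1.0                    # speaker rationality
cost = np.array([0.0, 0.0])    # utterance costs (assumed equal here)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: L0(s | u) proportional to [[u]](s) * P(s)
L0 = normalize(truth * prior, axis=1)

# Pragmatic speaker: S1(u | s) proportional to exp(alpha * (log L0(s | u) - cost(u)))
with np.errstate(divide="ignore"):
    S1 = normalize(np.exp(alpha * (np.log(L0) - cost[:, None])), axis=0)

# Pragmatic listener: L1(s | u) proportional to S1(u | s) * P(s)
L1 = normalize(S1 * prior, axis=1)

# Hearing "some", L1 shifts belief toward "some-but-not-all" (a Q-implicature).
print(dict(zip(states, L1[0])))
```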


May 6, 2015, Kresge Room, Meliora 269

Roger Levy, Associate Professor, Department of Linguistics, University of California, San Diego

Language comprehenders as reverse engineers

From the last several decades of research we know that human language comprehension is highly optimized to the demands presented by real-time spoken and written input. We are finely tuned to the detailed statistics of our linguistic experience, yet retain an extraordinary capacity to generalize beyond that experience to novel comprehension environments. A leading hypothesis regarding this capacity for generalization is that comprehension involves implicitly deploying structured generative models of language production, with the comprehender effectively "reverse-engineering" the speaker's intended message through Bayesian inference. Here I present work elucidating the structure of these generative models under this hypothesis. In the first part of the talk I discuss our recent work on noisy-channel models of language comprehension, in which a speaker's intended message is distorted through processes including speaker error, perceptual noise, and memory limitation before analysis by our system of language understanding. I present results showing how noisy-channel comprehension can lead comprehenders to entertain and even adopt grammatical interpretations of an input inconsistent with its literal content. I also present results extending the range of documented noise operations. In the second part of the talk I explore how comprehenders model speaker choice in syntactic alternations influenced by multiple factors. For example, preferences in the dative alternation ("Pat gave Kim a book" versus "Pat gave a book to Kim") have been argued to reflect (i) differences in the shade of meaning encoded by each syntactic option, and (ii) principles of optimal linear ordering such as putting short constituents before long. If both (i) and (ii) are true and comprehenders model the syntactic choice as a generative process driven by these multiple causes, we should see explaining-away effects between linear-ordering optimality and inferred meaning intent in comprehension. We demonstrate these effects for the first time. More generally, this work underscores the power of using generative models to account for human language comprehension, and opens the door to a range of further explorations of the structure of this generative knowledge.
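
The noisy-channel idea in the first part can be glossed as Bayes' rule over intended sentences, P(intended | perceived) ∝ P(perceived | intended) × P(intended). The sketch below is a toy illustration of that inference; the example sentences, priors, and word-deletion noise model are assumptions for exposition, not the stimuli or noise operations from the reported experiments.

```python
# Toy noisy-channel comprehension sketch: infer the intended sentence from a
# perceived string via P(intended | perceived) proportional to
# P(perceived | intended) * P(intended).  The sentences, prior probabilities,
# and word-level noise rate are illustrative assumptions only.

# Prior over intended sentences (stand-in for a language/plausibility model):
# the prepositional dative is far more plausible than the literal
# double-object reading of the perceived string.
prior = {
    "the mother gave the candle to the daughter": 0.98,
    "the mother gave the candle the daughter": 0.02,
}

def word_edit_distance(a, b):
    """Word-level Levenshtein distance between two sentences."""
    a, b = a.split(), b.split()
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[-1][-1]

def likelihood(perceived, intended, noise_rate=0.05):
    """P(perceived | intended): each word-level edit counts as one noise event."""
    k = word_edit_distance(perceived, intended)
    n = len(intended.split())
    return (noise_rate ** k) * ((1 - noise_rate) ** max(n - k, 0))

def comprehend(perceived):
    scores = {s: likelihood(perceived, s) * p for s, p in prior.items()}
    z = sum(scores.values())
    return {s: round(v / z, 3) for s, v in scores.items()}

# With these illustrative numbers the posterior favors the prepositional
# dative (about 0.7), i.e. an interpretation inconsistent with the literal input.
print(comprehend("the mother gave the candle the daughter"))
```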


May 1, 2015, 513 Lattimore Hall

Jean-Pierre Koenig and Karin Michelson, Professor and Chair, Department of Linguistics, University at Buffalo, SUNY

Exploring the limits of syntactic structures

Syntax has played a central role in investigations of the nature of human languages. But, there are at least two distinct ways of conceiving of syntax: the set of rules that enable speakers and listeners to combine the meaning of expressions (compositional syntax), or the set of formal constraints on the combinations of expressions (formal syntax). The question that occupies us in this talk is whether all languages include a significant formal syntax component or whether there are languages in which most syntactic rules are exclusively compositional. Our claims are (1) that Oneida (Northern Iroquoian) has almost no formal syntax component and is very close to a language that includes only a compositional syntax component and (2) that the little formal syntax Oneida has does not require making reference to syntactic features. Our analysis of Oneida suggests that what is often taken as characteristic of human languages (e.g., syntactic selection/argument structure, syntactic binding, syntactic unbounded dependencies, syntactic parts of speech) is merely overwhelmingly frequent in the world's languages. Our research also suggests that a critical function of compositional syntax is to manage the binding of semantic variables, a function anticipated by Quine's work on the nature of (semantic) variables.


April 10, 2015, 513 Lattimore Hall

Jill Thorson, Postdoctoral Research Associate, Communication Analysis and Design Laboratory, Northeastern University

Multiple Perspectives on Understanding Prosodic Development

Infants are born with sensitivities to their native language's prosody (i.e., melody and rhythm). My research program is designed to understand the ways in which this attunement to prosody affects early language development over the first years of life. Specifically, this work concentrates on how prosody impacts early attentional processing, word learning, and speech production, with a focus on the importance of including a phonological account alongside an acoustic-phonetic one. Two lines of inquiry deploy a variety of research methods (e.g., eyetracking, corpora, and speech elicitation) and consider the role of prosody from a perceptual and a productive perspective. Additionally, the role of technology in methodological innovation is explored, such as how touch-screen interfaces and voice synthesis can effectively address questions regarding language learning in atypical populations. Future research on early language acquisition will investigate the benefits of integrating these various perspectives and methodologies, and how this multi-faceted approach can lead to a better understanding of typical and atypical prosodic development.


April 10, 2015, Meliora 366

Percy Liang, Assistant Professor of Computer Science, Stanford University

Learning to Execute Natural Language

A natural language utterance can be thought of as encoding a program, whose execution yields its meaning. For example, "the tallest mountain" denotes a database query whose execution on a database produces "Mt. Everest." We present a framework for learning semantic parsers that map utterances to programs, but without requiring any annotated programs. We first demonstrate this paradigm on a question answering task on Freebase. We then show how the same framework can be extended to the more ambitious problem of querying semi-structured Wikipedia tables. We believe that our work provides both a practical way to build natural language interfaces and an interesting perspective on language learning that links language with desired behavior.
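
As a toy illustration of the "utterance as program" idea, the sketch below executes a hand-written logical form for "the tallest mountain" against a small invented database; the database and logical-form format are assumptions for exposition, whereas the systems described in the talk learn such programs from question-answer pairs without annotated programs.

```python
# Toy illustration of "utterance as program": a hand-written logical form for
# "the tallest mountain" is executed against a made-up database.  The database
# contents and the logical-form format are assumptions for illustration only.

db = {
    "mountain": {
        "Mt. Everest": {"elevation_m": 8848},
        "K2": {"elevation_m": 8611},
        "Denali": {"elevation_m": 6190},
    }
}

# "the tallest mountain" ~ the mountain with maximal elevation
logical_form = ("argmax", "mountain", "elevation_m")

def execute(form, database):
    """Execute a tiny logical form against the database and return its denotation."""
    op, entity_type, attribute = form
    if op == "argmax":
        rows = database[entity_type]
        return max(rows, key=lambda name: rows[name][attribute])
    raise ValueError(f"unsupported operator: {op}")

print(execute(logical_form, db))  # -> "Mt. Everest"
```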


February 6, 2015, 513 Lattimore Hall

E. Allyn Smith, Assistant Professor, University of Quebec at Montreal

Cross-linguistic differences in disagreements arising from descriptive and evaluative propositions

Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called 'predicates of personal taste'), as in (1), and their theoretical implications.

  1. A: This soup is tasty. B: No it isn't.
  2. A: Rochester is in Quebec. B: No it isn't.
  3. A: This soup is tasty, in my opinion. B: # No it isn't.

The idea is that, as compared to a descriptive proposition like (2A), evaluative propositions express the opinion of the speaker, but refuting them doesn't seem to deny that the speaker holds such an opinion (Kölbel 2003, Lasersohn 2005, etc.). This would, in principle, make them similar to sentences like (3), but here, direct disagreement is not felicitous (Stephenson 2007). Stephenson argued that the same can be said of epistemic modals such as 'might': you can say 'no' to the fact that Elizabeth might visit if you know otherwise, but if someone says that they don't know whether Elizabeth will visit, saying 'no' cannot indicate that she won't.

In this talk I will present offline felicity judgment data from English and Spanish two-turn oral dialogues showing that there are differences with respect to these judgments, which creates a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007 and Umbach 2012. I will further discuss the interplay of various factors in these data, including cultural politeness differences (introducing data from another dialect of Spanish with known differences in cultural norms). As time permits, I will also present data from Catalan and French.


January 23, 2015, 513 Lattimore Hall

Joyce McDonough, Associate Professor, University of Rochester, Linguistics and Brain and Cognitive Sciences

The Dene verbal compound: representing the complex inflectional system of the Dene (Athabaskan) verb

Within a Word and Paradigm approach to morphology, words, not morphemes, are the basic units in the lexicon (Milin et al. 2009; Ackerman & Malouf 2012; Blevins 2014, 2015; Plag & Baayen 2008; Baayen et al. 2014, 2015). Fully inflected words are lexical units organized into paradigms; paradigms, which encode the relationships between words, are thus fundamental objects in the lexicon. In this framework, much work has been done on nominal inflection and derivational systems. Much less has been done on the more complex inflectional systems of verbal morphology, which may encode rich morphosyntactic functions. In this talk I will lay out the structure of a typologically unusual and highly complex system, the Dene (Athabaskan) verb word, traditionally captured by a position class template of around 23 prefix positions used to order verbal morphemes. I'll demonstrate that this is an unworkable system. Instead, the Dene verb is an unusual but simple and principled variation on compounding. The model is based on evidence from phonetic studies and lexical patterns.


2014

December 12, 2014, 513 Lattimore Hall

Elaine Chun, Associate Professor (English), University of South Carolina

"She be acting like she's black": Linguistic blackness among Korean

Research on the use of African American English (AAE) by speakers who do not identify as African American has largely focused on how performances of racial 'crossing' (Rampton 1995) may be used to construct masculinity, often in ways that reproduce stereotypes of race and gender (Bucholtz 1999; Chun 2001; Reyes 2005; Bucholtz and Lopez 2011). Such work has drawn attention to at least a few important facts: first, a variety that linguists have classified as an ethnolect of a particular ethnic group can be used in meaningful ways by speakers outside the group; second, ethnolectal features are complexly related to other social dimensions, such as gender and class; and third, language practices have sociocultural consequences for individual identities and community ideologies.

Two concerns that remain are (1) how linguists can productively continue the important project of ethnolectal description--for example, identifying distinctive elements of AAE in ways that recognize meaningful outgroup language use, and (2) how linguists can analyze outgroup uses of AAE without simplistically suggesting that these uses necessarily reproduce stereotypes of black masculinity. In order to address these concerns, I consider the sociolinguistic status of features described by linguists as belonging to AAE, namely, six lexical or morpho-syntactic elements: habitual be, neutral third-person singular verb (e.g., she don't), multiple negation, ain't, the address term girl, and the pronoun y'all. By examining about 100 tokens used by five female youth who identify as Korean American, I discuss some of the conceptual challenges that arise for an ethnolectal model of language and draw on some sociolinguistic and linguistic anthropological concepts, such as ideology, indexicality, persona, stance, voice, and authentication to address these challenges. Finally, I show how qualitative methods of discourse analysis, which attend to the emergent complexity of how language can invoke social meanings, can usefully contribute to our understanding of how linguistic forms relate to social meaning, yet in ways that may still remain complementary with our projects of ethnolectal description.


November 14, 2014, 513 Lattimore Hall

Aaron Albin, Indiana University-Bloomington

PraatR: An architecture for controlling the phonetics software Praat with the R programming language

An increasing number of researchers are using the R programming language (http://www.r-project.org/) for the visualization and statistical modeling of phonetic data. However, R's capabilities for analyzing soundfiles and extracting acoustic measurements are still limited compared to free-standing phonetics software such as Praat (http://www.fon.hum.uva.nl/praat/). As such, it is typical to extract the acoustic measurements in Praat, export the data to a textfile, and then import this file into R for analysis. This process of manually shuttling data from one program to the other slows down and complicates the analysis workflow.

This workshop will feature an R package ('PraatR') designed to overcome this inefficiency. Its core R function sends a shell command to the operating system that invokes the command-line form of Praat with an associated Praat script. This script imports a file, applies a Praat command to it, and then either brings the output directly into R or exports the output as a textfile. Since all arguments are passed from R to Praat, the full functionality of the original Praat command is available inside R, making it possible to conduct the entire analysis within a single environment. Moreover, with the combined power of these two programs, many new analyses become possible. Further information on PraatR can be found at http://www.aaronalbin.com/praatr/.
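
PraatR itself is an R package, but the shell-invocation architecture described above can be illustrated in a language-neutral way: write a small Praat script, invoke command-line Praat on it, and read the result back into the host environment. The Python sketch below shows only that general pattern, not PraatR's API; the script contents, file names, and reliance on Praat's --run option are assumptions that depend on your Praat installation.

```python
import subprocess
from pathlib import Path

# Language-neutral sketch of the shell-invocation pattern described above:
# write a small Praat script, call command-line Praat on it, and read the
# result back into the host environment.  This is NOT PraatR's API; the
# script, file names, and the --run flag are assumptions for illustration.

script = Path("get_duration.praat")
script.write_text(
    "form Get duration\n"
    "    sentence Input_file sound.wav\n"
    "endform\n"
    "Read from file: input_file$\n"
    "duration = Get total duration\n"
    "writeInfoLine: duration\n"
)

# Invoke command-line Praat; the form field is filled from the argument list.
result = subprocess.run(
    ["praat", "--run", str(script), "sound.wav"],  # "sound.wav" is a placeholder
    capture_output=True, text=True, check=True,
)
print(float(result.stdout.strip()))  # total duration of the sound file, in seconds
```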

At this workshop, the creator of PraatR will first present a conceptual overview of the package, followed by several hands-on exercises on participants' laptop computers illustrating its range of functionality. At the end of the workshop, the presenter will be available for brief consultations about how PraatR can help you in your own research.

Attendance is limited to 20 participants on a first-come-first-served basis. If you are interested in coming to the workshop, please send an e-mail stating so to Wil Rankinen at wrankine@ur.rochester.edu.


October 17, 2014, Meliora 366

Susan Goldin-Meadow, Beardsley Ruml Distinguished Service Professor, University of Chicago

Gesture as a mechanism of change

The spontaneous gestures that people produce when they talk can index cognitive instability and reflect thoughts not yet found in speech. But gesture can go beyond reflecting thought to play a role in changing thought. I consider whether gesture brings about change because it is itself an action and thus brings action into our mental representations. I provide evidence for this hypothesis but suggest that it's not the whole story. Gesture is a special kind of action--it is representational and thus more abstract than direct action on objects, which may be what allows gesture to play a role in learning.


September 12, 2014, 513 Lattimore Hall

Jim Blevins, Department of Theoretical and Applied Linguistics, Cambridge University

Morphology as a complex discriminative system

A number of converging lines of research have recently coalesced into an approach to morphology that combines classical WP models with contemporary data-driven methodologies. One component of this approach is a distributional view of language structure and language learning. Another is a complex system conception of morphological patterns and inventories. These components are united by a dynamic perspective in which morphological organization is shaped by communicative pressures, rather than defined in terms of derivational relations or static constraint satisfaction. This talk outlines some of the properties and implications of this perspective and reviews evidence that supports this type of approach over simple system models of morphology.


April 24, 2014, Kresge Room, Meliora 269

David Poeppel, Professor, Psychology and Neural Science, Cognition & Perception, New York University

The temporal structure of auditory perceptual experience

Speech and other dynamically changing auditory signals (and also visual stimuli) typically contain critical information required for successful decoding at multiple time scales. What kind of neuronal infrastructure forms the basis for the requisite multi-time resolution processing? A series of neurophysiological experiments suggests that intrinsic neuronal oscillations at different, ‘privileged’ frequencies may provide some of the underlying mechanisms. In particular, to achieve parsing of a naturalistic input signal into manageable chunks, one mesoscopic-level mechanism consists of the sliding and resetting of temporal windows, implemented as phase resetting of intrinsic oscillations on privileged time scales. The successful resetting of neuronal activity provides time constants – or temporal integration windows – for parsing and decoding signals. One emerging generalization is that acoustic signals must contain some type of edge, i.e. a discontinuity that the listener can use to chunk the signal at the appropriate granularity. Although the ‘age of the edge’ is over for vision, acoustic edges likely play an important (and slightly different) causal role in the successful perceptual analysis of complex auditory signals.


April 11, 2014, Meliora 366

Craige Roberts, Professor, Department of Linguistics, The Ohio State University

Indexicals, Centers and Perspective

I argue for a theory of demonstratives in which:

(a) they're anaphoric (as I argued in Roberts 2002) and in that respect are definites like definite descriptions and pronouns,
but:

(b) they're unlike the other definites in that they really are essentially indexical, something that isn't adequately captured by King (2001), Roberts (2002), or Elbourne (2008),
and that:

(c) we can improve on the account of indexicality in Kaplan (1977), as criticized by Heim 1985, by adopting a view of indexicals in which their central feature is anchoring to a Discourse Center, a self-attributing doxastic agent.

A Discourse Center is a counterpart in the context of utterance of the notion of a Center in Lewis (1979), the latter theory modified as in Stalnaker (2008). Discourse Centers are argued to play three kinds of roles in interpretation:

(i) they are crucial features of a theory of de se interpretation, as in Lewis/Stalnaker;
but here they serve two new roles as well:

(ii) they are the presupposed anaphoric anchors for indexicals; and

(iii) they also serve as arguments of a perspective operator, in a modification of Aloni (2001), permitting an account of de re belief attributions involving all kinds of definite NPs, including indexicals themselves.

Among other things, this will permit a more flexible, perspicuous account of shifted indexicals in languages like Amharic (Schlenker 2003, Anand & Nevins 2004, Deal 2013, Sudo 2012), and a natural account of the so-called fake indexicals of Kratzer (2009).


March 28, 2014, CSB 601

Casey Lew-Williams, Department of Communication Sciences and Disorders, Northwestern University

Statistical learning in semi-real language acquisition

Infants and toddlers have a prodigious ability to find structure (such as words) in patterned input (such as language). Learning regularities between sounds and words often occurs seamlessly in early development, leading some to conclude that statistical learning plays a role in enabling language in the first place. This might be true, or alternatively, it might be an irrelevant artifact of distilled laboratory tasks. The ultimate explanatory power depends partially on whether we define statistical learning narrowly (transitional probabilities between syllables) or broadly (any kind of input-based pattern extraction), and partially on whether statistical learning can scale up to explain natural language acquisition. Here I ask: Can statistical learning withstand the complexity inherent in (somewhat) real learning environments? I will present a series of studies that test how infants learn when presented with variability in utterance length, word length, number of talkers, social/communicative cues, and frequency resolution. To conclude, I will briefly address the question of scalability by turning to an important outcome of early statistical learning -- the ability to process language efficiently in real time -- which falls by the wayside when listeners don't accumulate language experience like a baby.
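
The narrow sense of statistical learning mentioned above, transitional probabilities between syllables, can be made concrete as TP(B | A) = count(A B) / count(A); the sketch below computes it over a toy Saffran-style syllable stream, which is an illustrative assumption rather than the stimuli used in these studies.

```python
from collections import Counter

# Transitional probability TP(B | A) = count(A B) / count(A), the narrow sense
# of "statistical learning" discussed above.  The syllable stream is an
# illustrative Saffran-style example, not stimuli from these studies.
stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti bi da ku".split()

unigrams = Counter(stream[:-1])
bigrams = Counter(zip(stream[:-1], stream[1:]))

def tp(a, b):
    """Probability that syllable b follows syllable a in the stream."""
    return bigrams[(a, b)] / unigrams[a]

print(tp("bi", "da"))  # word-internal transition: 1.0 in this toy stream
print(tp("ku", "pa"))  # transition across a word boundary: 0.5 here
```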


February 14, 2014, Meliora 366

Nathaniel Smith, Research Associate, Institute for Language, Cognition and Computation, University of Edinburgh

Building a Bayesian bridge between the physics and the phenomenology of social interaction

What is word meaning, and where does it live? Both naive intuition and scientific theories in fields such as discourse analysis and socio- and cognitive linguistics place word meanings, at least in part, outside the head: in important ways, they are properties of speech communities rather than individual speakers. Yet, from a neuroscientific perspective, we know that actual speakers and listeners have no access to such consensus meanings: the physical processes which generate word tokens in usage can only depend directly on the idiosyncratic goals, history, and mental state of a single individual. It is not clear how these perspectives can be reconciled. This gulf is thrown into sharp relief by current Bayesian models of language processing: models of learning have taken the former perspective, and models of pragmatic inference and implicature have taken the latter. As a result, these two families of models, though built using the same mathematical framework and often by the same people, turn out to contain formally incompatible assumptions.

Here, I'll present the first Bayesian model which can simultaneously learn word meanings and perform pragmatic inference. In addition to capturing standard phenomena in both of these literatures, it gives insight into how the literal meaning of words like "some" can be acquired from observations of pragmatically strengthened uses, and provides a theory of how novel, task-appropriate linguistic conventions arise and persist within a single dialogue, such as occurs in the well-known phenomenon of lexical alignment. Over longer time scales such effects should accumulate to produce language change; however, unlike traditional iterated learning models, our simulated agents do not converge on a sample from their prior, but instead show an emergent bias towards belief in more useful lexicons. Our model also makes the interesting prediction that different classes of implicature should be differentially likely to conventionalize over time.

Finally, I'll argue that the mathematical "trick" needed to convince word learning and pragmatics to work together in the same model is in fact capturing a real truth about the psychological mechanisms needed to support human culture, and, more speculatively, suggest that it may point the way towards a general mechanism for reconciling qualitative, externalist theories of social interaction with quantitative, internalist models of low-level perception and action, while preserving the key claims of both approaches.


2013

December 5, 2013

Jila Ghomeshi, Department of Linguistics, University of Manitoba

The Detachment Principle and the syntax of pragmatic particles


November 22, 2013

Elika Bergelson, University of Rochester, Aslin Lab, Brain & Cognitive Sciences


November 19, 2013

Don Kulick, Department of Comparative Human Development at the University of Chicago

Danes call People with Down syndrome 'mongol': politically incorrect language and ethical engagement


November 8, 2013

Scott Fraundorf, University of Rochester, Jaeger Lab, Brain & Cognitive Science


October 31, 2013

Eva-Maria Roessler

Field Linguistics Talk Series


October 25, 2013

Scott Paauw, University of Rochester, Department of Linguistics

Determining If A Language Underwent Prehistoric Creolization


October 4, 2013

Solveiga Armoskaite, University of Rochester, Department of Linguistics

A Category Neutral Simulative Plural: Evidence From Turkish


September 20, 2013, CLS Colloquia Room

Jeffrey T. Runner, Associate Professor, Linguistics and Brain & Cognitive Sciences, University of Rochester, Department of Linguistics

Constraints of the Binding Theory: Evidence from Visual World Eye-Tracking


May 14, 2013

Nadine Borchardt, Humboldt University, Berlin

Language documentation among the Bagyeli hunter-gatherers of Cameroon


April 26, 2013

Judith Tonhauser, Associate Professor, Department of Linguistics, The Ohio State University

What's at issue? Exploring content in context


April 11, 2013

Laura Batterink, University of Oregon

Implicit and explicit neural mechanisms supporting language processing


February 22, 2013

Maziar Toosarvandi, American Council of Learned Societies New Faculty Fellow, Department of Linguistics and Philosophy

Gapping is VP-ellipsis


February 20, 2013

Floris Roelofsen, Research Associate, Institute for Logic, Language and Computation, University of Amsterdam

Polarity particles


February 8, 2013

Scott Grimm, Postdoctoral Researcher, Department of Translation and Language Sciences, Pompeu Fabra University

Grammatical Number and Individuation


February 7, 2013

Wallace Chafe and Marianne Mithun, Professors of Linguistics, Department of Linguistics, University of California, Santa Barbara

The Phonology of Seneca


February 4, 2013

Kathryn Davidson, Postdoctoral Fellow, Linguistics Department, University of Connecticut

Investigating the semantic/pragmatic interface through sign language structure: the case of scalar implicature


January 18, 2013

Scott AnderBois, Visiting Faculty, Linguistics Department

QUDs and at-issueness in Yucatec Maya attitude reports


2012

November 9, 2012

Pauline Jacobson, Brown University

The Short Answer: Implications for Direct Compositionality (and vice-versa)


October 14, 2012

Edward Vajda, Western Washington University

The Ket language of Siberia


June 7, 2012

Sally Treloyn, University of Melbourne

The Substance of Song


April 9, 2012

Klinton Bicknell, Department of Psychology, UC San Diego


April 4, 2012

Bozena Pajak, Department of Linguistics, UC San Diego


March 30, 2012

Emily Tucker Prud'hommeaux, Computer Science, Oregon Health & Science University


February 17, 2012

Sudha Arunachalam, Speech, Language & Hearing Sciences, Boston University

Three challenges of verb learning, and how toddlers use linguistic context to meet them


2011

December 5, 2011

Victor Kuperman, Department of Linguistics and Languages, McMaster University


October 27, 2011

Sarah Brown-Schmidt, Department of Psychology, University of Illinois at Urbana-Champaign


October 13, 2011

Chris Potts, Department of Linguistics, Stanford University


October 4, 2011

Meghan Sumner, Department of Linguistics, Stanford University


May 12, 2011

Cynthia Fisher, Psychology Department, University of Illinois


May 9, 2011

Herb Clark, Professor of Psychology, Stanford University


April 21, 2011

Doug Roland and Hongoak Yun, University at Buffalo

Semantic Similarity, Predictability, and Models of Sentence Processing


April 18, 2011

Thomas Hörberg, Department of Linguistics, Stockholm University

Cue-Based Argument Interpretation


April 14, 2011

Philip Hofmeister, University of California - San Diego

The Encoding-Retrieval Relationship in Sentence Comprehension (and Production)


April 6, 2011

Raphael Berthele, University of Bern


March 31, 2011

Ed Holsinger, Department of Linguistics, University of Southern California

Meaning, Context and Representation


March 29, 2011

Jennifer M. Roche, Department of Psychology, University of Memphis

Don't rush the navigator: Audience design in language production is hard to establish, but easier to maintain


March 16, 2011

Anuenue Kukona, Department of Psychology, University of Connecticut

Anticipation, local coherences, and the self-organization of cognitive structure in sentence processing


February 15, 2011

Gerhard Jaeger, Department of Linguistics, University of Tuebingen

Game Theoretic Pragmatics


2010

May 24, 2010

Nicholas Altieri, Psychological and Brain Sciences, Indiana University

Uncovering the Mechanisms of Audiovisual Speech Perception: Architecture, Decision Rule, and Capacity


April 21, 2010

Frank Bechter

Narrative Combinatorics: Roleshifting Versus "Aspect" in ASL Grammar


April 19, 2010

Jennifer Culbertson, Cognitive Science Department, Johns Hopkins University

Learning Biases and the Emergence of Typological Universals of Syntax


April 14, 2010

Noah H. Silbert, Psychological and Brain Sciences, Indiana University

Phonological Information Integration in Speech Perception


April 12, 2010

LouAnn Gerken, Professor of Psychology and Linguistics, University of Arizona

Predicting and Explaining Babies


March 15, 2010

Sharon Goldwater, School of Informatics, University of Edinburgh

From Sounds to Words: Bayesian Modeling of Early Language Acquisition


February 22, 2010

Gary Dell, Psychology, University of Illinois at Urbana-Champaign

Implicit Learning in the Language Production System is Revealed in Speech Errors


January 20, 2010

Jesse Snedeker, Department of Psychology, Harvard University

Fast, Smart and Out of Control


2009

December 7, 2009

Meghan Clayards, Centre for Research on Language Mind and Brain, McGill University

The Role of Phonetic Detail, Auditory Processing and Language Experience in the Perception of Assimilated Speech


October 19, 2009

Stefan Frank, Postdoc, Institute for Logic, Language and Computation, University of Amsterdam

The Irrelevance of Hierarchical Structure to Sentence Processing


September 14, 2009

Chris Kennedy, Department of Linguistics, University of Chicago

The Number of Meanings of English Number Words


June 26, 2009

Oleg Kiselyov (FNMOC) and Chung-chieh Shan (Rutgers)

Self-Applicable Probabilistic Inference Without Interpretive Overhead


June 4, 2009

Amy Perfors, University of Adelaide

Learning to Learn, Simplicity, and Sources of Bias in Language Learning


April 15, 2009

Tanya Kraljic, Center for Research in Language, UC San Diego

Learning a Talker's Speech


April 8, 2009

Matt Goldrick, Department of Linguistics, Northwestern University

The Phonetic Traces of Lexical Access


February 20, 2009

Hannah Rohde, Department of Linguistics, Northwestern University

Discourse-Driven Expectations in Sentence Processing


February 12, 2009

Linda Smith, Indiana University

Big Changes in Object Recognition Between 18 and 24 Months: Words, Categories and Action


2008

December 4, 2008

Gary Lupyan, University of Pennsylvania

What Do Words Do?


November 11, 2008

Claire Cardie, Cornell University

What Were They Thinking? Finding and Extracting Opinions in the News


October 16, 2008

Ash Asudeh, Institute of Cognitive Science & School of Linguistics and Language Studies, Carleton University

Production of Ungrammatical Utterances: The Case of Resumptive Pronouns


October 16, 2008

Ida Toivonen, School of Linguistics and Applied Language Studies, Carleton University

The Phonetics of Phonological Quantity in Inari Saami


October 6, 2008

Aravind Joshi, Computer Science, University of Pennsylvania

Towards Discourse Meaning: Complexity of Dependencies at the Discourse Level and at the Sentence Level


April 18, 2008

Suzanne Stevenson, Computer Science, University of Toronto

Bridging the Gap between Syntax and the Lexicon: Computational Models of Acquiring Multiword Lexemes


April 9, 2008

Dan Jurafsky, Linguistics, Stanford University

Inducing Meaning from Text


March 20, 2008

Ray Jackendoff, Center for Cognitive Studies, Tufts University

The Parallel Architecture and its Role in Cognitive Science


2007

December 4, 2007

Hannele Nicholson, Linguistics, Cornell University

Disfluencies in Dialogue: Attention, Structure and Function


September 25, 2007

Shravan Vasishth, Linguistics, University of Potsdam

Determinants of parsing complexity: A computational and empirical investigation


September 24, 2007

Susan Garnsey, Psychology, University of Illinois at Urbana-Champaign

The Event-Related Optical Signal (Eros): A New Neuroimaging Tool for Language Processing Research


May 14, 2007

Lisa Pearl, Linguistics, University of Maryland


May 2, 2007

Michael Wagner, Linguistics, Cornell University

Encoding and Retrieving Syntax with Prosody


April 26, 2007

Frank Keller, HCRC, University of Edinburgh

Probabilistic Models of Adaptation in Human Parsing


April 18, 2007

Roger Levy, Linguistics, UCSD

Expectations, locality, and competition in syntactic comprehension


April 12, 2007

Philip Hofmeister, Linguistics, Stanford University


April 6, 2007

Nick Cassimatis, Computer Science, RPI

A Cognitive Substrate for Human-Level Intelligence


February 21, 2007

Suzanne Gahl, University of Chicago

Linguistic Knowledge is Probabilistic: Evidence from Pronunciation


2003–2004

November 3, 2003

Jenny Saffran, Department of Psychology, University of Wisconsin Madison

Statistical Learning: What Goes In, and What Comes Out


2002–2003

May 28, 2003

John Kingston, Department of Linguistics, University of Massachusetts

From Ears to Categories: Intermediate Steps in Speech Recognition.


May 5, 2003

Sheila Blumstein, Department of Cognitive and Linguistic Sciences, Brown University

The Mapping of Sound Structure to the Lexicon: Evidence from Normal Subjects and Aphasic Patients.


March 19, 2003

Elsi Kaiser, Department of Linguistics, University of Pennsylvania

Interpreting and Anticipating Reference in Discourse.


March 17, 2003

Harlan Harris, Department of Linguistics, University of Illinois, Urbana-Champaign

The Horror: Speech Errors and Phonological Production Models.


February 7, 2003

Craige Roberts, Department of Linguistics, Ohio State University

Relating Attention to Intention of Information Structure


January 31, 2003

Craige Roberts, Department of Linguistics, Ohio State University

Presupposition: The Interaction of Conventional and Conversational Implicature.


January 28, 2003

Craige Roberts, Department of Linguistics, Ohio State University

Information Structure in Discourse: A Basic Pragmatic Framework.


September 25, 2002

Gary Marcus, Department of Psychology, New York University

Plasticity and Nativism: Towards a Resolution of an Apparent Paradox.


2001–2002

June 18, 2002

Mike Harm, Carnegie-Mellon University

How Do Readers Compute Word Meanings? Insights From the Triangle Model.


May 21, 2002

Tom Bever, Department of Linguistics, University of Arizona

What Language Processing Tells Us About Cognitive Science.


May 20, 2002

Tom Bever, Department of Linguistics, University of Arizona

American Landscape Painting: Aesthetics, The Golden Mean and Depth Perception.


April 22, 2002

Ash Asudeh, Department of Linguistics, Stanford University

Resource Logic


April 19, 2002

Michael Tarr, Cognitive and Linguistic Sciences, Brown University

It's Pat - Sexing Faces Using Only Red and Green.


April 11, 2002

Matt Dye, Centre for Deaf Studies, University of Bristol, United Kingdom

To Sign or Not To Sign: Studies of Deaf Cognition in British Signers.


April 1, 2002

Gianluca Storto, Department of Linguistics, University of California at Los Angeles

Possessives in Context


March 28, 2002

Franklin Chang, Department of Psychology, University of Illinois at Urbana-Champaign

Symbolically Speaking.


March 25, 2002

Duane Watson, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology

Understanding Intonational Phrasing


March 4, 2002

Jennifer Venditti, Department of Linguistics, Ohio State University

Another Look at Accented Pronouns: Evidence from Eye-tracking


February 26, 2002

Todd Haskell, Department of Psychology, University of Southern California

The Role of Distributional Information in Speech Production: The Case of Subject-Verb Agreement.


January 26, 2002

Gary S. Dell, Beckman Institute, University of Illinois, Urbana-Champaign

Lexical Access and Serial Order in Language Production: A Test of Freud's Continuity Thesis.


November 7, 2001

Maryellen MacDonald, Department of Psychology, University of Wisconsin, Madison

Constraint Satisfaction Processes in Language Production


October 31, 2001

J. Kathryn Bock, Beckman Institute, University of Illinois, Urbana-Champaign

Clock Talk


2001

April 25, 2001

Paul Luce, Department of Psychology, State University of New York, Buffalo

Understanding Spoken Words: Activation, Competition and Temporary Memory in Spoken Word Perception.


April 2, 2001

Paul Smolensky, Department of Cognitive Science, Johns Hopkins University

Optimality in Linguistic Cognition


1999–2000

April 25, 2000

Jennifer Arnold, Department of Psychology, University of Pennsylvania

He vs. She: The Use of Gender in On-line Pronoun Comprehension


April 13, 2000

Michael Walsh Dickey, Department of Linguistics, Northwestern University

The Processing of Temporal Relations in Discourse


March 24, 2000

Jason Eisner, Department of Computer Science, University of Rochester

Doing OT in a Straitjacket


March 1, 2000

Kenneth N. Wexler, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

Very Early Parameter Setting in the Computational System of Language, Variability in Development Across Languages, Maturation versus Learning, Impaired Development, and the Potential for a Genetics of Language


January 17, 2000

Kita Sotaro, Max Planck Institute for Psycholinguistics

What Gesture Can Tell Us About the Process of Verbalization of Spatial Information


September 17, 1999

David Poeppel, Department of Linguistics, University of Maryland - College Park

Two Ideas About Timing in Hearing and Speech.


1998–1999

May 3, 1999

Bruce P. Hayes, Department of Linguistics, University of California at Los Angeles

Burnt and Splang: Some Issues in Morphological Learning Theory


April 14, 1999

Carlota S. Smith, Department of Linguistics, University of Texas - Austin

The Navajo Prolongative and Lexical Structure


March 7, 1999

Gert Webelhuth, Department of Linguistics, University of North Carolina - Chapel Hill

Surface Cues for Pragmatic Inferences as Motivation for the Evolution of Surface Syntax


February 25, 1999

John W. Moore, Department of Linguistics, University of California at San Diego

Judgment Types, Causatives, and S-Selection


February 23, 1999

Harry van der Hulst, Department of Linguistics, Leiden University

Modality-free Phonology


December 18, 1998

Bryan Gick, Haskins Laboratories, University of Connecticut

Articulatory Correlates of Ambisyllabicity in English Glides and Liquids


1997

September 19, 1997

Jason Stanley, Department of Philosophy, Cornell University

Necessity, A Priority, and What Is Said


1996–1997

April 25, 1997

Sarah (Sally) Thomason, Program in Linguistics, University of Michigan

Contact-induced Language Change and Contact-language Genesis


April 22, 1997

Ellen M Kaisse, Department of Linguistics, University of Washington

Glides, Vowels, and Ghost Consonants in Argentinian Spanish


March 28, 1997

Myrna Schwartz, Moss Rehabilitation Research Institute

When a Dog is a Cat and a Rug is a Fug: Picture Naming Errors in Aphasic and Non-aphasic Speakers


November 22, 1996

Barbara J Grosz, Department of Computer Science, Harvard University

Modeling Collaboration for Human-computer Communication


October 30, 1996

Peter W Jusczyk, Department of Psychology, Johns Hopkins University

What Infants Remember About Utterances They Hear


October 18, 1996

Zoltan Szabo, Department of Philosophy, Cornell University

The What and Why of Compositionality


September 27, 1996, Graduiertenkollegs Integriertes Linguistik-Studium

Graham Katz, University of Tuebingen

States, Events, Time, Tense and Other Monsters


1995–1996

April 26, 1996

Gary F Marcus, Department of Psychology, University of Massachusetts - Amherst

Symbols and Simple Recurrent Networks in Language and Cognition


April 19, 1996

J Kathryn Bock, Department of Psychology, University of Illinois - Urbana-Champaign

Structural Repetition as Implicit Learning


March 29, 1996

Dan Jurafsky, Department of Linguistics, University of Colorado - Boulder

A Probabilistic Model of Lexical and Syntactic Access and Disambiguation


December 8, 1995

Jennifer Saul, Department of Philosophy, University of Sheffield

The Problem with Attitudes


November 17, 1995

Mark E. Richard, Department of Philosophy, Tufts University

Analysis, Synonymy, and Sense


September 15, 1995

Yuki Kuroda, Department of Linguistics, University of California at San Diego

Theoretical Issues in Syntax


1994–1995

May 26, 1995

Gary S Dell, Beckman Institute, University of Illinois - Urbana-Champaign

The Past, Present and Future in Language Production


April 28, 1995

Tim Stowell, Department of Linguistics, University of California at Los Angeles

The Phrase Structure of Quantifier Scope


April 12, 1995

David Dowty, Department of Linguistics, Ohio State University

Birds, Bees, and Semantic Theory


March 3, 1995

Itziar Laka, Department of Linguistics, University of Rochester

Case in Human Grammar


February 10, 1995

Peter Lasersohn, Department of Linguistics, University of Rochester

Verbal Plurality and Conjunction


February 3, 1995

Kai von Fintel, Department of Linguistics, Massachusetts Institute of Technology

A Minimal Theory of Adverbial Quantification


December 16, 1994

Chris Barker, Department of Psychology, University of Rochester

Episodic -ee in English: An Argument That Thematic Relations Can Actively Constrain New Word Formation


December 9, 1994

Kathy Eberhard, Department of Psychology, University of Rochester

The Marked Effect of Number on the Production of Subject-verb Agreement


December 2, 1994

David Braun, Department of Philosophy, University of Rochester

The Many Meanings of Demonstratives


November 4, 1994

Michael K. Tanenhaus, Department of Psychology, University of Rochester

Using Eye-movements to Study Spoken Language Comprehension in Visual Contexts


October 25, 1994

Greg Carlson, Department of Linguistics, University of Rochester

What Are Thematic Roles?


October 21, 1994

James F Allen, Department of Computer Science, University of Rochester

The TRAINS Project


October 14, 1994

Elissa L Newport, Department of Psychology, University of Rochester

Creolization and Some Thoughts About Learning


September 30, 1994

Karen Petronio, Department of Psychology, University of Rochester

Wh-Questions and Related Constructions in ASL


September 23, 1994

Whitney Tabor, Department of Psychology, University of Rochester

Distributional Intimations of Grammatical Reclassification