16/11/2018 ILCB Lunch-talk : Jean-Luc Nespoulous (Emeritus Professor of Language Sciences, Université de Toulouse, Institut Universitaire de France)
Marseille Salle des voûtes, 3 place Victor Hugo 13003
Phonetic and/or phonemic dysfunctions in aphasia, in children, and in second-language learners: An attempt at simplification?
Errors, structural constraints and/or palliative strategies
The brain lesion that causes aphasia certainly entails, ipso facto, a linguistic deficit. In the vast majority of cases, however, this deficit is not characterized in these patients by a "loss of competence", as R. Jakobson believed on the basis of clinical data reported by others (K. Goldstein and A.R. Luria in particular) and very largely "over-interpreted" by Jakobson himself.
One need only observe the variability of the "errors" produced by a single patient (in word-production tasks, for example) to see that these productions result from problems of "performance" or "processing", often comparable to those that any non-aphasic speaker may experience in daily life, albeit to a lesser degree (cf. the work of V. Fromkin (1973) and M. Garrett (1980)).
In this talk, we will attempt to show how the "errors" produced by aphasic speakers, at the phonetic and phonemic levels, despite their intra-individual and inter-task variability, are nonetheless constrained by the structural properties of their native language.
12/10/2018 ILCB Lunch-talk : David Poeppel
Marseille Salle des voûtes, 3 place Victor Hugo 13003
21/09/2018 ILCB Lunch-talk : Christian Lorenzi
Marseille Salle de séminaire FRUMAN, 3 place Victor Hugo 13003
• 12:00 Prof. Christian Lorenzi, CNRS & École normale supérieure, Paris, France
Processing time with our auditory system
The debate on how speech information is represented in the auditory system has revolved around the roles of two neural/perceptual features encoding the temporal modulations of the acoustic signal (the "temporal envelope", ENV, and the "temporal fine structure", TFS), their relative contributions to intelligibility, and how these may be degraded by lesions to the peripheral and central auditory systems.
We will review psychophysical studies that investigated the development of ENV/TFS perception, the effects of cochlear and central lesions, and the relationship between ENV/TFS perception and speech intelligibility.
Our results suggest that: i) the processing of ENV and TFS is "functional" by 6 months and fine-tuned by language exposure between 6 and 10 months; ii) ENV is more important for speech identification, whereas TFS is more important for segregating competing sound sources; iii) a reduced ability to process ENV and/or TFS explains deficits typically associated with cochlear and central damage and with ageing.
Shamma, S., & Lorenzi, C. (2013). On the balance of envelope and temporal fine structure in the encoding of speech in the early auditory system. Journal of the Acoustical Society of America, 133, 2818-2833.
Lorenzi, C., Debruille, L., Garnier, S., Fleuriot, P., & Moore, B.C.J. (2009). Abnormal auditory temporal processing for frequencies where absolute thresholds are normal. Journal of the Acoustical Society of America, 125, 27-30.
Lorenzi, C., Gilbert, G., Carn, H., Garnier, S., & Moore, B.C.J. (2006). Speech perception problems of the hearing impaired reflect inability to use temporal fine structure. Proceedings of the National Academy of Sciences USA, 103(49), 18866-18869.
18/05/2018 ILCB Lunch-talk : The Temporal Dynamics of Word Processing in Hearing and Deaf Readers by Phillip Holcomb, Department of Psychology, San Diego State University, San Diego (United States)
Aix-en-Provence Salle de conférences, 5 avenue Pasteur 13100
In my talk I will discuss a recent line of research in our lab comparing electrophysiological measures of word processing in hearing and deaf adult readers. Because congenitally deaf adults acquire reading skills without the benefit of having first learned a spoken language, they offer a unique contrast with hearing readers, one that allows certain hypotheses about the role of prior language experience in the mechanisms underlying visual word recognition to be tested.
06/04/2018 ILCB Lunch-talk : The computational neuroanatomy of speech production in the context of a dual stream framework for language by Greg Hickok (Dept. Cognitive Sciences & Language Science - University of California Irvine)
Marseille Salle des voûtes, 3 place Victor Hugo 13003
The dual stream framework for the cortical organization of language is grounded in evolutionary biology in that it proposes an organization that is homologous to that found in the non-linguistic sensorimotor systems from which it is hypothesized to have evolved. While it was controversial when first proposed in the early 2000s, a substantial body of evidence now supports its basic claims. Significant progress has been made in working out the functional anatomy of the model, particularly the dorsal auditory-motor pathway, which will be the primary focus of this talk. I will provide a brief overview of the dual stream framework, show how well-established psycholinguistic models of speech production relate neatly to it, and then detail a decade of progress in understanding the neuroanatomy and some computational details of dorsal stream function. A major conclusion is that the integration of psycholinguistic and motor control models of speech production represents a promising new direction for progress in research on the neurobiology of speech and language, including its evolutionary origins.
30/01/2018 ILCB Lunch-talk : Dendrophilia and the Biology of Language : Phonological Continuity and Syntactic Discontinuity by Tecumseh Fitch (Dept. of Cognitive Biology, Faculty of Life Science - University of Vienna)
Marseille Amphithéâtre de CERIMED, 27 Boulevard Jean Moulin 13005
An understanding of both the neural mechanisms involved in language, and their evolutionary history, requires incisive comparisons between humans and nonhuman animals. Ideally, such comparisons are grounded in an explicit, computational framework encompassing both formal and neural components. I review work from recent decades comparing humans with nonhuman primates, other mammals, and birds, much of it using artificial grammar learning to explore the perception of phonology and syntax. This research suggests two hypotheses. First, the phonological continuity hypothesis holds that sequential processing of syllables is supported by equivalent, homologous mechanisms in humans and other animals. This set of mechanisms allows combination via concatenation, and supports sequential processing at the finite-state (regular) computational level. Second, the dendrophilia hypothesis suggests that humans are unusual in our ability to process complex hierarchical structures in multiple domains (language, music, etc.). These hierarchical abilities require computational power at the supra-regular level (above finite state), and support the abstract structures needed for phrasal syntax and semantics. I propose that these general hierarchical abilities are supported neurally by the great enlargement of Broca's area in our species, and the broadening of its connections to most of the parietal and temporal lobes. Broca's region in humans acts as a domain-general "stack", an auxiliary memory supporting supra-regular computation in both language and music.
8/12/2017 ILCB Lunch-talk : Information-oriented and cross-language perspectives on speech and cortical rhythms by François Pellegrino (CNRS & Université de Lyon, Dynamics of Language Lab UMR5596)
Marseille St Charles, Amphi de Sciences Naturelles
During the last two decades, a growing body of evidence has shown a close relationship between the temporal structure of speech and neural oscillatory activities, especially in the theta and gamma bands. More specifically, several recent models suggest that the neural capacity to track speech dynamics and rhythmic patterns is crucial for speech processing and understanding. However, it is well known that speech periodicity is limited, and thus that the story is probably more complex than previously acknowledged.
In this talk I present results of a cross-language comparison of 17 languages in terms of syllabic speech rate, Shannonian information rate, and their shared tendency to distribute information very unevenly among their segments and syllables. These results are discussed in the light of cortical rhythms in the theta band, and I introduce a (very) speculative hypothesis: there may be a functional distinction between syllables whose role is to convey information and syllables whose role is to provide a rhythmic carrier entraining neural oscillations.
10/11/2017 ILCB Lunch-talk : Alignment and prediction in conversational interactions by Prof. Martin Pickering (University of Edinburgh)
Friday 10/11 at 12:00
Aix-en-Provence LPL, Salle B011
27/10/2017 ILCB Breakfast-talk : Some recent developments in models of retrieval processes by Prof. Shravan Vasishth (University of Potsdam)
Friday 27/10 at 9:15
Aix-en-Provence LPL, Salle B011
In this talk, I will discuss some recent empirical and theoretical developments in cue-based retrieval theory [1-13]. I will begin by talking about what we know so far about the underlying mechanisms driving retrieval processes in sentence comprehension, and the evidence for and against the cue-based retrieval account. Then I will present some recent work showing that two alternative models of retrieval are viable candidate theories.
1. Richard L. Lewis and Shravan Vasishth. An activation-based model of sentence processing as skilled memory retrieval. Cognitive Science, 29:1-45, 2005.
2. Richard L. Lewis, Shravan Vasishth, and Julie Van Dyke. Computational principles of working memory in sentence comprehension. Trends in Cognitive Sciences, 10(10):447-454, 2006.
3. Shravan Vasishth, Sven Bruessow, Richard L. Lewis, and Heiner Drenhaus. Processing Polarity: How the ungrammatical intrudes on the grammatical. Cognitive Science, 32(4):685-712, 2008.
4. Shravan Vasishth, Katja Suckow, Richard L. Lewis, and Sabine Kern. Short-term forgetting in sentence comprehension: Crosslinguistic evidence from head-final structures. Language and Cognitive Processes, 25(4):533-567, 2011.
5. Felix Engelmann, Shravan Vasishth, Ralf Engbert, and Reinhold Kliegl. A framework for modeling the interaction of syntactic processing and eye movement control. Topics in Cognitive Science, 5(3):452-474, 2013.
6. Lena A. Jäger, Zhong Chen, Qiang Li, Chien-Jer Charles Lin, and Shravan Vasishth. The subject-relative advantage in Chinese: Evidence for expectation-based processing. Journal of Memory and Language, 79-80:97-120, 2015.
7. Stefan L. Frank, Thijs Trompenaars, and Shravan Vasishth. Cross-linguistic differences in processing double-embedded relative clauses: Working-memory constraints or language statistics? Cognitive Science, page n/a, 2015.
8. Umesh Patil, Sandra Hanne, Frank Burchert, Ria De Bleser, and Shravan Vasishth. A computational evaluation of sentence comprehension deficits in aphasia. Cognitive Science, 40:5-50, 2016.
9. Molood Sadat Safavi, Samar Husain, and Shravan Vasishth. Dependency resolution difficulty increases with distance in Persian separable complex predicates: Implications for expectation and memory-based accounts. Frontiers in Psychology, 7, 2016.
10. Umesh Patil, Shravan Vasishth, and Richard L. Lewis. Retrieval interference in syntactic processing: The case of reflexive binding in English. Frontiers in Psychology, 2016. Special Issue on Encoding and Navigating Linguistic Representations in Memory.
11. Felix Engelmann, Lena A. Jäger, and Shravan Vasishth. The effect of prominence and cue association in retrieval processes: A computational account. Manuscript (under revision), 2017.
12. Fuyun Wu, Elsi Kaiser, and Shravan Vasishth. Effects of early cues on the processing of Chinese relative clauses: Evidence for experience-based theories. 2017. In Press, Cognitive Science.
13. Paul Mätzig, Shravan Vasishth, Felix Engelmann, and David Caplan. A computational investigation of sources of variability in sentence comprehension difficulty in aphasia. In Proceedings of MathPsych/ICCM, Warwick, UK, 2017.
14. Lena A. Jäger, Felix Engelmann, and Shravan Vasishth. Similarity-based interference in sentence comprehension: Literature review and Bayesian meta-analysis. Journal of Memory and Language, 94:316-339, 2017.
15. Bruno Nicenboim and Shravan Vasishth. Models of retrieval in sentence comprehension: A computational evaluation using Bayesian hierarchical modeling. 2017. Accepted, Journal of Memory and Language.
* Substitute for the October Lunch Talk.
30/06/2017 ILCB lunch-talk : Three decades of structural priming research: implications for syntactic representation, domain-specificity of syntax, and multilingualism by Robert Hartsuiker (University of Ghent, Belgium)
About thirty years ago, Kay Bock discovered structural priming, the tendency for speakers and listeners to recycle syntactic structures they have recently encountered. A recent meta-analysis of 70 published papers (Mahowald et al., 2017) shows that structural priming (as well as its enhancement by lexical overlap between prime and target sentence) is highly robust. Here, I look back at three decades of structural priming research, with a particular emphasis on the theoretical implications for syntactic representation, on the organization of the syntactic representations of multiple languages in multilinguals, and on the question of whether structural processing is domain-specific or is shared with other cognitive domains, such as music or math. I then look forward to an ongoing research line on the late acquisition of syntax in a second language. I will describe our account of this acquisition process, according to which syntactic representations start out as separate for each language but merge as the learner's proficiency increases, and show the results of an artificial language learning study designed to test this account.
- To plan for the lunch buffet, please confirm attendance by sending an email to email@example.com
Please let us know if you have any dietary restrictions (vegetarian, allergies, etc.).
- Speaker suggestions (warmly encouraged) for September-June should be sent to firstname.lastname@example.org
12.00-13.00 Talk (Salle de conférences, LPL) by Robert Hartsuiker (University of Ghent, Belgium)
13.00-...... Lunch buffet (garden, LPL, Aix-en-Provence)