Core Knowledge, Language and Culture
Tom Bever (University of Arizona)
How many “normal” neurological
Sixty years ago, A. Luria noted that right-handers with familial left-handedness (FS+) recover from left-hemisphere aphasia relatively quickly, and show crossed aphasia (aphasia after a right-hemisphere lesion) more often than people with only right-handed family members (FS-). Since roughly 40% of people are FS+, this is a significant finding. We report on recent behavioral and imaging studies exploring the possible neurological basis for the effect of familial handedness on language and cognition in right-handers.
Our research so far has shown:
a) FS+ people access lexical items more readily than local syntactic/semantic structures: the opposite is true of FS- people.
b) FS+ people have an earlier critical period for language acquisition than FS- people.
c) FS+ people show earlier fMRI activation in the “Broca’s areas” of the right hemisphere during a lexical-ordering task than during a sentence-creation task: FS- people do not show this difference.
d) We have constructed a Bayesian polygenic model of the genetic load for left-handedness (GLLH), based on 4,000+ three-generation family handedness pedigrees.
e) The overall EEG early negativity response to irrelevant words in a sentence correlates with increasing GLLH. (FS+ subjects also show a weaker ELAN, early left anterior negativity.)
f) The ERAN (early right anterior negativity) EEG component elicited by odd musical chords correlates with increasing GLLH. (In fact, FS- subjects show no ERAN.)
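As a rough illustration of how a genetic-load measure like the GLLH in (d) might be estimated from pedigree counts, here is a deliberately minimal Beta-Binomial sketch; the prior parameters and pedigree numbers are invented for illustration and do not reflect the authors' actual polygenic model.

```python
def posterior_mean_load(n_left, n_total, alpha=1.0, beta=9.0):
    """Posterior mean of the left-handedness rate in one family pedigree,
    under a Beta(alpha, beta) prior (prior mean 10%, roughly the population
    rate of left-handedness) and a Binomial likelihood.
    Used here only as a toy proxy for 'genetic load'."""
    return (alpha + n_left) / (alpha + beta + n_total)

# A hypothetical 3-generation pedigree: 2 left-handers among 12 relatives.
load = posterior_mean_load(2, 12)
```

A family with more left-handed relatives receives a higher posterior load; the real model would pool thousands of pedigrees and model transmission across generations rather than treat relatives as independent draws.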
These and other studies suggest that the neurological organization for language and cognition in right-handed FS+ people relies more on the right hemisphere, in part as a function of the degree of genetic load; this asymmetry difference has apparent categorical effects.
The consequences for current cognitive and clinical neuroscience are obvious: it is startling that less than 1% of research articles in the last decade that mention subject/patient handedness also assess subject/patient familial handedness, and absolutely none of them assess the strength of genetic load for left-handedness. Clearly, many important aspects of behavior and of the neurological organization for language and cognition are being missed by ignoring this variable.
The implications for the genetics and epigenetics of language and cognition are also potentially important. The fact that both FS+ and FS- organizations are “normal” offers a window into the genetic foundations of language that complements the study of special and rare, genetically determined abnormal linguistic and cognitive syndromes. The FS+/FS- distinction and the GLLH will allow us to track early stages of neurological and behavioral maturation as a function of the genetic load for left-handedness, which presumably in large part reflects a genetically determined degree of left- vs. right-hemisphere dominance.
There are intriguing consequences for linguistic theory. Normal wide variation in how language is represented neurologically supports the idea that much of the language capacity is not the result of a universal fixed set of neurological structures. Rather, on this view, syntax emerges as the (“perfect”) interface between a large lexical capacity and existing semantic propositional relations, and how syntax is represented and enacted neurologically can vary significantly, depending on specific aspects of an individual’s cerebral organization. If specific neurological loci do not account for all relevant language universals, we must look to formal, mathematical and physical constraints: so-called constraints of the “third kind.”
Collaborators include: Roeland Hancock, Julia Fisher, Lee Ryan (University of Arizona); Angela Friederici, Daniela Sammler (CNS Institute, Max Planck, Leipzig); Shiaohui Chan (Taipei University); David Townsend (Montclair State University).
Susan Carey (Harvard)
Language and Number: Evidence from Studies of Conceptual Development
The problem I will be addressing is how logical resources get harnessed in actual representational/computational structures. My case study will be natural number, as captured by the successor function (Peano's axioms) or, equivalently, by 1-1 correspondence (as Frege showed). But nobody thinks that the representations of natural number we actually think with are constructed the way Peano's or Frege's constructions are. I have previously characterized the innate representational resources with numerical content that infants and young children actually deploy, and shown that they cannot express natural number. I have also characterized a bootstrapping process through which modeling these structures in terms of an initially meaningless count routine yields the first step toward natural number: a representational structure that captures (implicitly) the successor function for known verbal numerals in a count list. Here I am concerned with the construction of the numerical meanings of the words "one," "two," "three," and "four" prior to this bootstrapping episode, and I draw on a variety of evidence that implicates the semantics of natural language quantifiers and set-based quantification in this process.
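For reference, the two classical constructions the abstract alludes to can be stated compactly (standard textbook material, not part of the talk itself):

```latex
% Peano: number generated by zero and a successor function S
S(n) \neq 0, \qquad S(m) = S(n) \rightarrow m = n
% Induction: a property of 0 that is preserved by S holds of every number
\big[P(0) \land \forall n\,(P(n) \rightarrow P(S(n)))\big] \rightarrow \forall n\,P(n)
% Frege: two sets have the same cardinal number iff a 1-1 correspondence exists
|A| = |B| \iff \exists f\colon A \to B \text{ a bijection}
```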
Jean-Pierre Changeux (Collège de France & Institut Pasteur, Paris)
Synaptic epigenesis and the evolution of higher brain functions
The epigenesis theory of development can be traced back to William Harvey (1651), who stated, in contrast to contemporary preformationist views, that the embryo arises by “the addition of parts budding out from one another”. The word epigenesis was subsequently used by Conrad Waddington (1942) to specify how genes might interact with their surroundings to produce a phenotype. This is also the meaning we adopted in our paper, “A Theory of the Epigenesis of Neuronal Networks by Selective Stabilization of Synapses” (Changeux et al. 1973; Changeux & Danchin 1976), according to which the environment affects the organization of connections in an evolving neuronal network through the stabilization or degeneration (pruning) of labile synapses associated with the state of activity of the network. This definition contrasts with the recent and more restricted sense of the term, referring to the status of DNA methylation and histone modification in a particular genomic region. The synapse selection theory was introduced to deal with two major features of the genetic evolution of the human brain: 1. the non-linear increase in the organisational complexity of the brain despite a nearly constant number of genes; 2. the long postnatal period of brain maturation (ca. 15 years in humans), throughout which critical and reciprocal interactions take place between the brain and its physical, social and cultural environment. This theory will be evaluated and updated in the framework of recent human/primate genome data, analyses of gene expression patterns during postnatal development, brain imaging of cultural pathways, such as those for language learning, and current views about the neural bases of higher brain function, in particular the global neuronal workspace architecture for access to consciousness and its pathologies (see Dehaene and Changeux 2011).
Changeux JP, Courrège P, Danchin A. A theory of the epigenesis of neuronal networks by selective stabilization of synapses. Proc Natl Acad Sci U S A. 1973 Oct;70(10):2974-8.
Changeux JP, Danchin A. Selective stabilisation of developing synapses as a mechanism for the specification of neuronal networks. Nature. 1976 Dec 23-30;264(5588):705-12.
Dehaene S, Changeux JP. Experimental and theoretical approaches to conscious processing. Neuron. 2011 Apr 28;70(2):200-27.
Changeux JP. The Good, the True and the Beautiful: A Neuronal Approach. Odile Jacob/Yale University Press. 2012.
Mark Changizi (2AI)
The Sounds of Nature in Speech and Music
In my new book "Harnessed," I argue that language and music are in us not because we evolved for them, but, rather, because they evolved for us: aspects of language and music culturally evolved to possess the structure that our non-language and amusical brains could brilliantly absorb. In particular, some aspects of language and music came to have the structures of the sounds in nature -- the sounds of, respectively, solid-object physical events and human movement -- just the sorts of sounds our brain had already evolved to process.
Gennaro Chierchia (Harvard)
What is the relation between speaking and reasoning? How does grammar relate to logic? On the traditional view, syntax determines which strings of symbols/sequences of morphemes are well formed, while logic determines which of the well-formed structures are tautologous or contradictory. Applying this to natural language, one might say that Universal Grammar provides a structure-building device (say, ‘merge’, ‘copy-merge’ and what have you) and that the structures one can build through such a device are then passed on to a pragmatic/conceptual/intentional system for communicative purposes.
This well-established model does not match, as far as I can tell, how language works.
Universal Grammar is not simply a structure building device. It incorporates an inferential apparatus, a logic. The structure building device (‘merge’) and the inferential apparatus (‘infer’) jointly determine what may constitute a particular language. The crucial way in which the inferential apparatus plays a role in defining what may constitute a language is through subconscious contradictions. Let me illustrate what I mean through an example.
Consider the sentences in (1):
(1) a. *There were any cookies left    a′. There weren’t any cookies left
    b. *John will ever meet you    b′. John won’t ever meet you
The unprimed versions of the sentences in (1) are strongly deviant, in contrast with their primed counterparts, which are perfectly grammatical. The deviance of (1a-b) ‘feels like’, and is as severe as, an agreement mismatch or a word-order violation. Yet, as it turns out, the best way of making sense of their status is as contradictions. (1a) and (1b) are contradictory, just like ‘it rains and it doesn’t rain’. This claim seems highly counterintuitive; too much so, one might think, to stand a chance of being right. And yet I will try to make the case that it is the only way of making sense of cases like (1) (and a host of related ‘Polarity Sensitive’ phenomena).
Suppose, for the sake of argument, that the take I’ve sketched on the sentences in (1) is right. Doesn’t this commit one to explaining how come (1a) feels so different from canonical contradictions? It indeed does. And I think that the difference between the ways in which ‘there were any cookies left’ and ‘it rains and it doesn’t rain’ are contradictory can be understood in a quite insightful way, following a proposal by Jon Gajewski (which in turn builds on an old idea by Carnap). Logical contradictions are insensitive to the content of non-logical words, but the identity (or lack thereof) of non-logical words matters very much: ‘p and not p’ is contradictory because the two occurrences of p are meant to be interpreted in the same way. Grammar, on the other hand, doesn’t even care whether two occurrences of lexical elements are the same or not; grammar simply doesn’t see the content of non-logical words. Structures that are false for any replacement of lexical material are Grammatically trivial (G-trivial), and hence deviant. Structures that are false for any uniform replacement of lexical material (uniform: in replacing a lexical item p, you must replace all its occurrences the same way) are Logically trivial (L-trivial). So (1a-b) are both G-trivial and L-trivial, whereas ‘it rains and it doesn’t rain’ is L-trivial but not G-trivial.
This requires a non-trivial reassessment of the relation between grammar and logic (no pun intended) and, if you wish, of the relation between speaking and reasoning.
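The contrast between G-triviality and L-triviality can be made concrete with a toy propositional sketch (my own simplification, far coarser than the quantificational cases at issue): L-triviality checks falsity under every uniform assignment to lexical items, while G-triviality lets each occurrence vary independently, since grammar cannot see that two tokens are 'the same' word.

```python
from itertools import product

def l_trivial(formula, names):
    """Logically trivial: false under every UNIFORM assignment
    (each lexical item keeps one value across all its occurrences)."""
    return all(not formula(dict(zip(names, vals)))
               for vals in product([True, False], repeat=len(names)))

def g_trivial(formula_occ, n_occurrences):
    """Grammatically trivial (after Gajewski): false even when each
    OCCURRENCE is assigned a value independently, modeling grammar's
    blindness to the identity of non-logical words."""
    return all(not formula_occ(vals)
               for vals in product([True, False], repeat=n_occurrences))

# 'it rains and it doesn't rain': p AND NOT p
contradiction = lambda a: a['p'] and not a['p']
# occurrence-level version: the two tokens of 'rain' treated independently
contradiction_occ = lambda v: v[0] and not v[1]
```

On this toy encoding, `p and not p` comes out L-trivial but not G-trivial, matching the text's diagnosis of why ordinary contradictions do not feel ungrammatical.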
Gergely Csibra (Central European University)
Non-verbal demonstratives, like pointing to something or showing something, designate their referents as particular objects or sets of objects individuated by spatial means. Despite this, studies with human infants, children, and adults show that, when such gestures are used ostensively and the context does not suggest otherwise, they are taken to refer to the object kind that the referent represents rather than to the particular object(s) present in the context. This paradoxical phenomenon of non-verbal demonstrative reference to kinds parallels some core properties of generic linguistic expressions. In particular, generic constructions are unmarked, there is a 'default' bias towards generic interpretation of ambiguous sentences, and the predicates of generic expressions are expected to reflect essential, kind-relevant properties that are tolerant to counterexamples. I argue that non-verbal demonstrative reference shares these properties, which reflect fundamental design features of the cognitive mechanisms subserving human ostensive communication. Such mechanisms (1) are probably innate, (2) are not shared with other species, and (3), by enabling the communication of kind-generic knowledge even in the absence of linguistic abilities, support the maintenance of culturally shared human traditions.
Stanislas Dehaene (Collège de France, Paris & Cognitive NeuroImaging Unit, NeuroSpin, Saclay, France)
In search of the neural encoding of constituent structure
Sentences are not mere strings of words but possess a hierarchical structure with constituents nested inside each other. Linguistic theory proposes that constituents are created by a series of successive “merge” operations, but the underlying neuronal mechanisms remain unknown. Christophe Pallier and I have begun a systematic search for the cerebral mechanisms underlying this theoretical construct. We hypothesized that the neural assembly that encodes a constituent grows with its size, which can be approximately indexed by the number of words it encompasses. We therefore searched for brain regions where activation increased parametrically with the size of linguistic constituents, in response to a visual stream always comprising twelve written words or pseudowords. The results (Devauchelle et al., PNAS, 2011) isolated a network of left-hemispheric regions located all along the superior temporal sulcus and inferior frontal gyrus. Among these, a core network comprising inferior frontal and posterior temporal regions showed constituent size effects regardless of whether actual content words were present or were replaced by pseudowords (Jabberwocky stimuli), suggesting that these areas operate autonomously and can extract abstract syntactic frames based on function words and morphological information alone. In most regions, activation was delayed in response to the largest constituent structures, suggesting that nested linguistic structures take increasingly longer time to be computed and that these delays can be measured with fMRI.
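The parametric logic of this design can be sketched in a few lines (illustrative numbers only, not the study's data): regress a region's activation on the size of the constituents in each stimulus and read off the slope, a positive slope being the "constituent size effect" the search targets.

```python
def fit_size_effect(sizes, activations):
    """Ordinary least-squares slope of activation against constituent
    size (in words). A toy stand-in for the parametric fMRI analysis."""
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(activations) / n
    return (sum((x - mx) * (y - my) for x, y in zip(sizes, activations))
            / sum((x - mx) ** 2 for x in sizes))

# 12-word streams parsed into constituents of 1, 2, 3, 4, 6, or 12 words,
# with hypothetical BOLD amplitudes increasing with constituent size
sizes = [1, 2, 3, 4, 6, 12]
acts = [0.1, 0.2, 0.3, 0.4, 0.6, 1.2]
slope = fit_size_effect(sizes, acts)
```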
We have now extended this research program to spoken and sign language, music perception, and the syntax of mathematical algebraic expressions. Our preliminary results have failed to reveal a single shared network for constituent structure across all these domains. In particular, the constituent structure of algebraic expressions appears to be quickly extracted in parallel by visual areas, in less than 200 milliseconds, guiding the later sequential exploration of the expression using saccadic eye movements. We speculate that, in the course of development, with increasing expertise, the ability to form constituent structures gets “compiled” into various areas of the occipital, temporal and parietal lobes – thus leading to specialized structures for processing syntax in language, music and math. These findings need not exclude the possibility that, at the inception of constituent formation, effortful processes of nested rule formation are at work, possibly unique to the human brain, and under the aegis of the dorsolateral prefrontal cortex.
Tecumseh Fitch (University of Vienna)
Core Knowledge: The View from Cognitive Biology
Available data from both neuroscience and animal cognition support the contention that many, probably most, “modular” aspects of human cognition have deep roots and are shared with other species. An approach to evaluating this hypothesis has been dubbed “cognitive phylogenetics” and involves mapping traits, empirically evaluated in multiple species, onto an independently derived phylogenetic tree. Human-specific traits, which are predicted to be rare, can only be determined by a process of elimination, and require research on a broad range of species. I will illustrate this approach using examples of color vision, social intelligence, supra-regular grammars, and recursion.
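The trait-mapping step of cognitive phylogenetics is, computationally, ancestral-state reconstruction. Here is a minimal sketch using Fitch's (coincidentally eponymous) small-parsimony algorithm on an invented three-species tree with a hypothetical binary trait; real analyses use independently derived phylogenies and many species.

```python
def fitch_parsimony(tree, traits):
    """Bottom-up pass of Fitch's small-parsimony algorithm.
    tree: internal node -> (left_child, right_child); leaves appear only
    as children. traits: leaf -> trait value.
    Returns (state_sets, minimum number of trait changes on the tree)."""
    changes = 0
    sets = {leaf: {val} for leaf, val in traits.items()}

    def visit(node):
        nonlocal changes
        if node in sets:
            return sets[node]
        left, right = tree[node]
        a, b = visit(left), visit(right)
        if a & b:
            sets[node] = a & b          # children agree: no change implied
        else:
            sets[node] = a | b          # conflict: one trait change required
            changes += 1
        return sets[node]

    root = next(n for n in tree if all(n not in kids for kids in tree.values()))
    visit(root)
    return sets, changes

# Toy tree ((human, chimp), songbird) with a hypothetical 1/0 trait coding
tree = {'root': ('ape', 'songbird'), 'ape': ('human', 'chimp')}
traits = {'human': 1, 'chimp': 0, 'songbird': 1}
sets, n_changes = fitch_parsimony(tree, traits)
```

With this (made-up) trait distribution, parsimony infers the root state and a single evolutionary change, illustrating how shared versus human-specific traits are identified by elimination.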
Lisa Feigenson (Johns Hopkins University)
The act of quantification (e.g., knowing how many objects are in a scene) requires selecting a relevant entity and storing it in working memory for further processing. Critically, multiple kinds of entities can be selected and stored. In this talk I offer evidence that humans can represent at least three different levels of entities in working memory. They can represent an individual object (e.g., “that bird”). They can represent a collection of items (e.g., “that flock of birds”). And they can represent a set of discrete items (e.g., “the set containing Bird A, Bird B, and Bird C”). Each of these different ways of representing a scene permits a different type of quantificational processing. Storing individual objects in working memory permits exact but implicit representation of the number of objects present, up to a maximum of 3 objects. Storing collections of items in working memory permits explicit but inexact representation of the number of items present, with no in-principle upper limit. And storing sets of individual items permits exact implicit representation of the number of items present, with an upper limit that has yet to be empirically specified. Hence, which quantity-relevant computations may be performed in any given situation depends on which level of representation is stored. This framework for thinking about interactions between working memory and quantification applies throughout development, starting in infancy.
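The first of the three storage formats, with its 3-object cap, can be caricatured in a few lines (a toy illustration only; the collection and set formats would need quite different machinery, and the class name is my own):

```python
class ObjectFileMemory:
    """Toy model of object-file working memory: stores individual objects
    exactly, but capacity is capped at 3, the limit the text reports for
    exact-but-implicit representation of number."""
    LIMIT = 3

    def __init__(self):
        self.slots = []

    def attend(self, obj):
        # A fourth object finds no free slot and is simply not stored.
        if len(self.slots) < self.LIMIT:
            self.slots.append(obj)
        return len(self.slots)

wm = ObjectFileMemory()
for bird in ['A', 'B', 'C', 'D']:
    wm.attend(bird)
```

After attending to four birds, only three object files exist; number is implicit in how many slots are filled, not represented as a symbol.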
Deeply embedded in the structure of (all?) languages is a highly abstract representation of the experienced world. It has often been explicitly or implicitly assumed that this level of abstraction is a consequence of language itself or of the unique computational capacities that make language uniquely human. I will argue that this highly abstract representation has ancient evolutionary origins, which explains why it is present in species that have not shared a common ancestor with humans for hundreds of millions of years.
Susan Goldin-Meadow (University of Chicago)
Insight into core knowledge from gesture
Imagine a child who has never seen or heard any language at all. Would such a child be able to invent a language on her own? Despite what one might guess, the answer to this question is "yes". I describe children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, they have not yet been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate ––they gesture––and those gestures, called homesign, take on many of the forms and functions of language––they combine to form sentences, they are themselves a combination of parts, and they display hierarchical structure. The properties of language that we find in the deaf children's homesigns are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. Importantly, the co-speech gestures that the deaf children’s hearing parents produce when talking to their children do not display the resilient properties of language, making it clear that these structures are not inevitable in the manual modality. I end by considering the conditions under which co-speech gesture is transformed into a more language-like system, and I speculate about the conditions under which homesign can be pushed toward more complex linguistic structure (and how those conditions could be studied).
Judy Kegl (University of Southern Maine)
Four instincts that lead us to language
Every body part and mental system involved in language perception and production evolved to serve a primary function other than language acquisition and use. Yet, at some point in human evolution, these traits came together to subserve language. Humans now come to their learning environments with a language-ready brain: a brain that has evolved to recognize language-relevant evidence in the environment and that is led by innate expectations to analyze that input and eventually acquire the language of their community. In so doing, they acquire a dialect of the human language.
This presentation looks at two relevant populations, studied over the last 25 years, that shed light on how humans come to language. The first are individuals with language-ready, cognitively intact brains who nevertheless do not acquire language. The second are individuals with language-ready brains who acquire language in the absence of language as input.
Four instincts that draw us to language-relevant evidence in our environment will be examined: periodicity, imitation (mirror neurons), annealing, and I-language. Only the last of these instincts is linked to a critical period. Data from 450 individuals who were not exposed to language-relevant evidence within the critical period for language acquisition, and from 1,000 others who developed language in the absence of language as input, help us tease apart the contribution of each of these instincts to language emergence and acquisition.
Nora Newcombe (Temple University)
Defining the core in core knowledge about geometry: Some alternatives
There is broad agreement that “core knowledge” exists in the domain of geometry. Specifically, there are strong initial capacities for estimation of quantities including distance (Huttenlocher, Newcombe & Sandberg, 1994; Newcombe, Huttenlocher & Learmonth, 1999; Newcombe, Sluzenski & Huttenlocher, 2005), and for comparing these quantities and relating them to each other (Hermer & Spelke, 1994, 1996).
However, there is disagreement concerning
· the cognitive architecture, specifically
o whether there is a geometric module that excludes the use of non-geometric information
o whether common mechanisms underlie estimation and comparison of quantities defined along different dimensions (especially space, number and time) as well as in different modalities
· whether developmental change in tasks depending on estimation of spatial extent depends focally on the acquisition of key linguistic terms.
This paper will touch on these issues, but will focus primarily on the development of quantity estimation, based on core knowledge but going beyond it using a variety of cultural tools and a variety of mechanisms of change:
· transitions from relative to absolute perceptual estimation
· learning to measure using measuring tools
· development of ability to do scaling tasks
o early changes: from perceptual estimation to use of mathematical tools
o later changes: knowledge needed for scaling large numbers (and long times).
Pierre Pica (CNRS)
On core knowledge and its relation to language : the case of Number
A detailed investigation of the structure of small 'numbers' (numbers up to 'four') in Amazonian and non-Amazonian languages shows that these units do not behave as numbers in the traditional sense. Rather, they involve the notion of a natural group, expressed by an inalienable relationship between a linguistic unit that can be analyzed as an anaphor and its antecedent.
Focusing on what we call the 'companion strategy' (the presence of a form glossed 'companion' within ebadipdip, lit. 'four', in Mundurucu and other languages), we discuss how our analysis sheds light on the limitations of the 'numeral' systems of these languages.
We investigate the implications of our analysis for core knowledge of number and its relationship to language, and discuss the linguistic mechanisms that allow the emergence of numbers above 4.
David Poeppel (New York University)
Hearing at the syllable rate: a linking hypothesis between linguistic and neurobiological primitives
Identifying principled connections between the ‘parts list’ of the mind (the human cognome) and the parts list of the brain proves to be difficult if the goal is to define links that are more than correlational. To take the example of language research, the putative linguistic primitives and neurobiological primitives are certainly mismatched in their granularity - and potentially incommensurable (Poeppel & Embick, 2005). What kind of linking hypotheses offer promise, and how can we transition from correlational to explanatory neurolinguistics? I outline an experimental research program that aims to provide a new angle on the problem.
How do vibrations in the ear (sounds) connect with abstractions in the head (words)? A Marr’s-eye-view suggests that, at the computational and algorithmic levels of description, parsing auditory input at the syllable rate is necessary to yield any comprehension. I suggest a specific ‘temporal unit’ as a perceptual primitive for auditory cognition and speech - intuitively an acoustic-phonetic primal sketch (Poeppel et al., 2008).
What kind of neuronal infrastructure (implementational description) forms the basis for the requisite temporal processing? Neurophysiological and neuroimaging experiments suggest that intrinsic neuronal oscillations at different, privileged frequencies (e.g. theta, gamma) provide part of the underlying mechanisms. In particular, to achieve parsing of a naturalistic input into chunks of the appropriate temporal granularity, a mesoscopic-level mechanism consists of the sliding and resetting of temporal windows, implemented as phase resetting of intrinsic oscillations on privileged time scales (Luo & Poeppel, 2007; Luo et al. 2010). Such a mechanism provides time constants – temporal integration windows – for parsing and decoding speech (Giraud & Poeppel, 2012). Importantly, one foundational (and robustly supported) time scale is commensurate, cross-linguistically, with syllable duration.
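The windowing idea can be caricatured in a few lines (a toy sketch, not the authors' oscillator model): chop an amplitude envelope into theta-rate, roughly syllable-sized windows and integrate within each, standing in for the phase-resetting mechanism described above.

```python
def theta_windows(envelope, sr, theta_hz=5.0):
    """Split an amplitude envelope (sampled at rate sr, in Hz) into fixed
    windows at a theta rate (4-8 Hz; 5 Hz = 200 ms, roughly one syllable)
    and return the mean energy per window. A toy stand-in for parsing via
    phase-reset of intrinsic oscillations."""
    win = int(sr / theta_hz)
    return [sum(envelope[i:i + win]) / win
            for i in range(0, len(envelope) - win + 1, win)]

# One second of a synthetic on/off envelope sampled at 100 Hz
env = [1.0] * 50 + [0.0] * 50
chunks = theta_windows(env, sr=100)   # five 200 ms chunks
```

The real mechanism is adaptive, resetting window boundaries to acoustic landmarks rather than slicing at fixed intervals, which is precisely what makes oscillatory phase-reset a candidate implementation.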
Speech and other dynamically changing auditory signals (as well as visual stimuli) contain critical information required for successful decoding at multiple time scales. The information carried at these scales plays a causal role in decoding perceptual information and links to the representations that underpin further processing. Neuronal oscillations provide an implementational solution to link the physics of input to the stuff of thought.
Poeppel, D. and Embick, D. (2005). The relation between linguistics and neuroscience. In A. Cutler (ed.), Twenty-First Century Psycholinguistics: Four Cornerstones. Lawrence Erlbaum, 103-120.
Poeppel, D., Idsardi, W., & van Wassenhove, V. (2008). Speech perception at the interface of neurobiology and linguistics. Philos Trans R Soc Lond B 363:1071-86.
Luo H and Poeppel D (2007). Phase Patterns of Neuronal Responses Reliably Discriminate Speech in Human Auditory Cortex. Neuron 54: 1001-1010.
Luo, H., Liu, Z., Poeppel, D. (2010). Auditory cortex tracks both auditory and visual stimulus dynamics using low-frequency neuronal phase modulation. PLoS Biology 8(8), e1000445. doi:10.1371/journal.pbio.1000445.
Giraud AL & Poeppel D (2012). Cortical oscillations and speech processing: emerging computational principles and operations. Nature Neuroscience. 15(4):511-7.
Liina Pylkkanen (New York University)
Core Combinatory Operations of Language: Cross-modal Generality and Steps Towards Mechanisms
Regarding the combinatory system of language, two of the most elementary, yet hard to study, questions are (i) what is the internal architecture of composition – is it monolithic or computationally subdivided – and (ii) what is the scope of linguistic combinatory operations, i.e., to what extent do they also operate outside of language? For the past decade or so, research in my laboratory has used magnetoencephalography (MEG) to uncover the brain bases of linguistic composition with an approach strongly rooted in linguistic theory. In this talk I describe our basic findings and the extent to which they at present speak to (i) and (ii). In sum, our findings indicate a general combinatory mechanism in the ventromedial prefrontal cortex that also functions outside of language, and a computationally more specific one in the left anterior temporal lobe whose function appears limited to modification environments (i.e., it composes 'black cat' but not 'eats meat'). I will show that both of these regions operate in both comprehension and production. Overall, our work suggests that the answer to the question 'is linguistic composition monolithic or computationally subdivided?' is neither a straightforward 'yes' nor 'no'; rather, the computational specificity of combinatory brain mechanisms varies by region. Future work will need to engage in systematic hypothesis testing regarding the specific contributions of each component of this network.
Pylkkänen, L. & McElree, B. (2007). An MEG Study of Silent Meaning. Journal of Cognitive Neuroscience, 19, 1905-1921.
Bemis, D. K., & Pylkkänen, L. (2011). Simple Composition: An MEG investigation into the comprehension of minimal linguistic phrases. Journal of Neuroscience, 31(8): 2801-2814.
Pylkkänen, L., Brennan J., Bemis, D. (2011) Grounding the cognitive neuroscience of semantics in linguistic theory. Language & Cognitive Processes, 26(9):1317–1337.
Luigi Rizzi (University of Siena)
Core syntactic computations: comparative syntax and language acquisition
In the first part of this talk, I would like to focus on the basic ingredients of natural language syntax according to recent models, with special reference to minimalist approaches. One salient characteristic of human language is its unbounded scope: we can constantly produce and understand linguistic expressions that we have not encountered in our previous experience. This is made possible by the combinatorial character of language: elements from memorized inventories (lexical items) can be combined to form higher-level entities through recursive syntactic procedures. Recent syntactic models have come to the conclusion that the fundamental recursive rule is extremely elementary: Merge, which simply states that two elements can be strung together to form a third element. If Merge is simple, its recursive applications can generate structural representations of great complexity, which are analyzed in the projects falling under the general heading of “the cartography of syntactic structures”. Structures created by Merge undergo further computations such as Search, Labeling, and Spell-out. A core computational property is movement (in fact reducible to a particular case of Merge plus some of the elementary operations just mentioned): certain elements can be dislocated to a position different from the one in which they are interpreted (for instance, in a wh-question the clause-initial phrase is to be construed as a dependent of a verb which can be indefinitely far away: Which book do you think … that we should read ___?). Merge and Move are not elements of an arbitrary formal game but are directly geared to the expression of meaning and its encoding in a system of sounds. Merge immediately expresses properties of argumental semantics (who does what to whom in the described event), and Move creates configurations expressing scope-discourse properties: the scope of operators, and such discourse-related articulations as topic-comment, focus-presupposition, and the like.
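The simplicity of Merge, and the reduction of Move to internal Merge, can be made concrete with a toy sketch (illustrative only, not a serious parser; the strings are hypothetical simplifications of the wh-example above):

```python
def merge(a, b):
    """External Merge: combine two syntactic objects into a new one."""
    return (a, b)

def contains(tree, item):
    """True if item occurs anywhere inside a merged structure."""
    if tree == item:
        return True
    return isinstance(tree, tuple) and any(contains(t, item) for t in tree)

def move(tree, item):
    """Internal Merge ('Move'): re-merge an element already contained in
    the structure at its edge, modeling displacement."""
    assert contains(tree, item), "can only move material already merged"
    return merge(item, tree)

# 'Which book do you think that we should read __?'
vp = merge('read', 'which book')
clause = merge('you think that we should', vp)
question = move(clause, 'which book')   # wh-phrase re-merged at the edge
```

The displaced phrase appears at the clause edge while its original merge position is still present inside the structure, which is exactly the copy-based view of movement mentioned in the text.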
Comparative studies show that core syntactic computations permit margins of variation across languages, within a richly structured invariant core. I will discuss how current models deal with the fundamental issue of properly expressing linguistic invariance and variation.
In the last part of the talk I would like to address issues of language acquisition. Certain kinds of movement are acquired early by the language learner, while other kinds are acquired late. I will show how linguistic theory can offer precise tools for characterizing syntactic complexity along certain formal dimensions, and how the resulting complexity scales are instrumental for understanding selective difficulties in acquisition. The illustration will be based on the intervention effects that make object relatives and (certain) object questions very hard for the learner, who can easily analyze subject relatives and questions from very early on.
Elizabeth Spelke (Harvard)
Core knowledge and language:
Research on human infants, bolstered by research on animals and on children and adults from diverse cultures, provides evidence for a set of systems of knowledge that emerge prior to language. Each system is domain-specific: different systems guide infants' reasoning and learning about objects, actions, numbers, places, geometrical forms, and social relationships. Each system is also innate, developmentally invariant, and therefore universal across cultures. Finally, each system is foundational for the development of uniquely human, later-emerging systems of knowledge including natural number, Euclidean geometry, and morality. Nevertheless, each system of core knowledge is limited both in its domain and its range, and each system is shared by other animals. For both reasons, core systems alone cannot account for the development of the abstract, uniquely human systems guiding mature reasoning. Evidence from studies of children and adults suggests that language contributes to those developments.
These findings raise three questions. First, the faculty of language resembles core systems in some ways but differs from them in other ways: what are the key cognitive properties that unite and distinguish these systems in an infant's mind? Second, how do core systems, and other cognitive capacities, support children's learning of a specific language: does language map directly to the representations that core systems deliver, or does a distinct "language of thought" or communicative competence intervene between language and core knowledge? Third, how does the acquisition and use of a specific language affect children's and adults' non-linguistic cognitive abilities? Recent findings suggest that infants begin to map language sounds to meanings long before they begin to talk. Converging research on language and core knowledge, at these early ages, may shed light on these questions.
Sandra E. Trehub (University of Toronto)
Music: From the Cradle Onward
Representations of music have little in common with representations of actions, objects, numbers, and space. As is the case for language, musical representations have ties to the core social system proposed by Spelke and Kinzler (2007). Music is infinitely more limited than language as a medium of communication, and it is ineffectual for organizing knowledge across domains. Those limitations are largely irrelevant to pre-verbal infants. They are also irrelevant to people of all ages and cultures for whom music is a source of pleasure, solace, and communal identity. The domain of music provides opportunities for exploring perceptual, motivational, and learning biases in infancy as well as our extraordinary capacity for culture. Infants perceive and remember complex melodic and rhythmic patterns. They are avid fans of vocal music, and their mothers oblige them with singing and melodious speech. By 12 months of age, infants exhibit culture-specific knowledge of musical rhythms and pitch relations. Shortly thereafter, they become active music-makers. Contrary to conventional wisdom, the acquisition of implicit musical knowledge is as rapid and as effortless as the acquisition of linguistic knowledge. Ultimately, the implicit knowledge of untrained individuals is comparable to that of musicians. Development in this domain is propelled by the intrinsic appeal of vocal music, flexible vocal learning skills, and dispositions for musical parenting.
Charles Yang (University of Pennsylvania)
The Ontogeny and Phylogeny of Recursion
The recursion gap between non-human communication systems and human language is obvious. For gradualist accounts of language evolution, a popular approach appeals to the ontogenic recapitulation of phylogeny (Bickerton 1996, Hurford 2011). Very young children’s language shows only a limited degree of combinatorial diversity (Tomasello 2003), which seems to bear certain similarities to the signing patterns of primates (Terrace 1979). Even for adult languages, certain linguistic forms are believed to be holistically stored: for the irregular past tense, both sides of the debate (Pinker & Ullman 2002, McClelland & Patterson 2002) agree on an association-based account. It is worth noting that low combinatorial diversity in child language has never been shown to be inconsistent with a compositional view of grammar. Likewise, a rule-based approach to all word formation, including the highly idiosyncratic irregulars (Chomsky & Halle 1968), has never been fully considered in the study of child language. In this paper, we present three findings: (1) Young children's syntactic diversity in their language usage is statistically indistinguishable from the expectations of a fully compositional grammar when universal statistical laws of languages are taken into account. (2) Analysis of Nim Chimpsky's signs shows that Nim's two-sign combinations fall short of the level of diversity expected if the combinations were fully independent. Rather, they appear to bear hallmarks of memory and retrieval of holistic forms (Tomasello 2003). (3) In the largest quantitative study of the past tense to date, word frequency effects purportedly showing holistic storage of irregulars completely break down. The best account of children's performance involves rules such as "add -t and shorten the vowel" (as in "lose-lost", "sleep-slept") despite their lack of productivity.
Taken together, these results suggest that the recursive and compositional formation of linguistic expressions is available to human children from very early on and is used in every corner of language. It also appears to be absent even in our closest relatives.