Jerry Fodor, among others, has maintained that Chomsky’s language faculty hypothesis is an epistemological proposal, i.e., the faculty comprises propositional structures known (cognized) by the speaker/hearer. Fodor contrasts this notion of a faculty with an architectural (directly causally efficacious) notion of a module. The paper offers an independent characterisation of the language faculty as an abstractly specified non-propositional structure of the mind/brain that mediates between sound and meaning - a function in intension that maps selections from a lexicon to pairs of structures that determine sound-meaning convergence. This conception will be elaborated and defended against a number of likely complaints deriving from Fodor’s faculty/module distinction and from other positions which seek to credit knowledge of language with an empirical or theoretical significance. A recent explicit argument from Fodor that Chomsky must share his conception will be diagnosed, and the common appeal to implicit knowledge as a foundation for linguistic competence will be rejected.
“Some questionable terminological decisions also contributed to misunderstanding.”
— Chomsky, 1986, pp.28-9.
Jerry Fodor, 1975, 1981a, 1983, 2000, has long maintained a particular understanding of Chomsky’s hypothesis that linguistic competence is principally subserved by a language faculty. This faculty hypothesis, according to Fodor, is an epistemological proposal about what speaker/hearers know - that propositional knowledge which essentially enters into the explanation of language acquisition and the maintenance of mature performance. What Chomsky’s faculty hypothesis is not, claims Fodor, is a proposal about the architecture of the mind/brain, where such a thesis offers a causal explanation of how speaker/hearers acquire and maintain a knowledge of language and put it to use. In the works cited, this reading has been offered as more or less non-tendentious exegesis, with scant support from Chomsky’s texts; more recently, Fodor, 2001, presents an argument on Chomsky’s behalf that putatively shows that Chomsky must cleave to the epistemological view.
The sequel will agree with Fodor in his negative claim - that the faculty is not a causal mechanism. Here, there is some range of agreement within the literature, although significant differences persist (see, e.g., Higginbotham, 1987; George, 1989; Peacocke, 1989; Davies, 1989; Matthews, 1991; Segal, 1996; Smith, 1999; Knowles, 2000). Where I depart from Fodor, as well as most of the above, is in my further proposal that Chomsky’s understanding - one we should share - is not epistemological either; an alternative will be offered and defended. My focus will be on Fodor’s fairly strict understanding according to which a speaker/hearer’s knowledge of language is constituted by internal propositional states, a species of ‘knowledge-that’. The negative conclusions reached, however, will be seen to impact equally on more relaxed construals of the epistemological thesis that don’t carry assumptions as to the character of the relevant internal states.
2: Fodor’s Distinction
The term faculty, and its close associate, module, have acquired an almost Humpty Dumpty status, whereby they are used to designate, willy-nilly, whatever the particular author is interested in defending or attacking (see, e.g., Segal, 1996; Garfield, et al., 2001). Fodor, 2000, p.57, takes his 1983 use of Chomsky’s ‘modular’ jargon to be responsible for much of the confusion he is presently seeking to rectify. For what it’s worth, I’ve always found Fodor clear on the matter; further, few thinkers in the field worthy of attention have been guilty of reading Chomsky’s use of ‘module’ (used interchangeably by Chomsky with ‘faculty’) as synonymous with Fodor’s use. What has been confusing is the willingness of so many to read Chomsky through Fodor’s positive epistemological reading, and then to find Chomsky obscure. To ward off yet more needless confusion over nomenclature, let Fodor’s distinction be the following:
Faculty, in the epistemic sense (= faculty, simpliciter): That (usually innate) dedicated information or knowledge which a system possesses such that it acquires and maintains a competence in some area of cognition (language, mathematics, social relations, etc.)
Faculty, in the architectural sense (= module): A computational component of the mind/brain that is domain specific, in the sense that it outputs ‘answers’ specific to some domain, and is informationally encapsulated, in the sense that the information over which its domain-specific computations are defined is restricted to a fixed database (plus input) represented by the module. The module’s computations over the database causally account for a system’s acquisition and maintenance of competence in the given domain (language parsing, etc.)
As intimated here, it would not be a gross mistake to think of Fodor as proposing that what Chomsky calls faculties just are the proprietary databases of modules (Fodor, 2000, p.57). Officially, Fodor does not insist on this understanding, although he does think that there is a language module (in effect, a parser) with a proprietary grammatical database. Fodor, 1983, p.9, well recognises that the epistemological thesis - a claim about what is known - is logically independent of the architectural thesis - a claim about the physical realisation of what is known. That is, the claim that a speaker/hearer stands in an epistemic relation to her language faculty does not entail any particular account of how the faculty is realised in the mind/brain architecture. Still, Fodor does think that the database of the language module just is what Chomsky calls the language faculty (see n.2). For our purposes, what is of primary interest is the distinction itself, not Fodor’s further (and independent) proposal about how faculties stand towards modules. Following Fodor, 1983, pp.6-7, 2000, p.11, we may cast light on the distinction by considering the issue of poverty of stimulus.
That a cognitive capacity is acquired in the face of a poverty of stimulus from the domain of the capacity is understood to militate decisively for the innate basis of the capacity. Indeed, such is the felt association between poverty of stimulus and innateness that Prinz, 2002, pp.193-4, suggests that ‘acquired under poverty of stimulus’ may serve as an operational definition of ‘innateness’. Similarly, Cowie, 1999, pp.46-7, argues against a developmental model of innateness on the basis that innateness just is what sound arguments from the poverty of stimulus entail, and no such arguments entail any particular developmental account (cf. Samuels, 2002). I don’t think that claims of this strength possess the least warrant (Collins, forthcoming). Even so, familiarly, arguments from poverty of stimulus have been central to Chomsky’s nativist claims since the 1960s: a poverty of linguistic stimulus (inter alia) shows that a learning theory model of language acquisition is empirically inadequate; a model that does explain (uniquely?) language acquisition in the face of a poverty of linguistic stimulus is one which attributes to the child a biologically endowed range of language specific concepts. The details of such reasoning and the orbiting controversy need not detain us. The simplest way of making the point is via what Jackendoff, 1993, p.26, has called the paradox of language acquisition. The professional communities of linguists and psychologists have all the data they desire on, say, English, certainly much more than any child has ever enjoyed. They also pool their mature intellects, have equal data on many other languages, and design subtle experiments and conduct longitudinal studies over many years. Yet they still can’t work out the principles, rules and concepts that constitute competence with English which the normal child masters by the time she is five!
So, it is not so much that the child’s data is poor according to some absolute standard (as if such a notion makes sense); rather, if the child were relying on data, then her acquisition of language would be miraculous. The acquisition ceases to be at all miraculous if we understand the child to possess innate resources which already determine most of what is to be acquired. On this view, the child’s task is so ‘easy’ precisely because she isn’t dependent on rich data, while the data inundated scientist’s task is so hard precisely because he is trying to work out what the child already knows independent of data or general principles defined over it.
Now, and this is Fodor’s point, however such a style of argument may be evaluated, by itself, it only seems to militate for the hypothesis that the child possesses some innate knowledge (rules, concepts, etc.) specific to language. The argument leaves entirely open the question of whether there is a dedicated computational device that has access to that knowledge alone, i.e., is encapsulated with respect to it. The argument might convince us that, say, the principles of binding theory are innate, but it shouldn’t convince us that there is a module which represents such principles in its database. As far as the argument goes, it might be that cognition is served by a general computational device that has free access to different stores of innate knowledge - faculties. In short, poverty of stimulus considerations tell us that some knowledge is innately represented; they don’t tell us how the knowledge is represented or processed. So, arguments for faculties (knowledge) are not ipso facto arguments for modules. It doesn’t follow, of course, that Fodor reads Chomsky aright merely by dint of the latter’s commitment to arguments from the poverty of stimulus. All that does follow is that such arguments cannot be used in direct defence of a modular thesis (see §5).
The sequel will not be directly concerned with ‘modularity’ theorising in general, nor, in particular, with whether either of Fodor’s notions, or the alternative to be presented, is apt to make sense of the claim that the mind is ‘massively modular’, i.e., all cognition is sorted into domain specific modules. Fodor’s thesis, concomitant with his distinction, is that modules are input/output devices associated with specific channels of transduction such as vision, olfaction, hearing, etc. Thus, for Fodor, whether there are ‘central’ modules is at best moot; the thesis that it’s all modules he considers to be virtually a priori false (Fodor, 2000, chp.4). Much of Fodor’s motivation for his distinction is to bring clarity to these issues. The sequel’s concern is with the language faculty alone. Still, it bears emphasis now that if Fodor does think that all modules are peripheral, in the sense of processing transduced information, and the language faculty is a database for a module, then the language faculty will essentially be in the service of a parser. We shall come back to this (see §5).
3: The Language Faculty, in Chomsky’s Sense
Notwithstanding any diagnostic virtues Fodor’s distinction enjoys, it will be argued that Chomsky’s notion of the language faculty eludes Fodor’s epistemic/architectural distinction. Let us characterise this conception as follows:
The language faculty, in Chomsky’s sense: The language faculty is a function in intension whose specification describes an aspect of the human brain. The function is from selections from a lexicon to infinite pairs of structures - <PF, LF> - that determine the respective forms of merged lexical items as they interface with external systems governing sound articulation and intention/conceptuality. The convergence of the pairs internal to the faculty accounts for the robust sound/meaning association upon which our linguistic performance is based.
The following is a typical statement of Chomsky’s: “[T]here is a special component of the human brain (call it ‘the language faculty’) that is specifically dedicated to language. That subsystem of the brain (or mind, from the abstract perspective) has an initial state which is genetically determined, like all other components of the body” (Chomsky, 1996, p.13). Further: the language faculty is “dedicated to the use and interpretation of language… [it] assum[es] states that vary in limited ways with experience. Interacting with other systems (cognitive, sensorimotor), these states contribute to determining the sound and meaning of expressions” (Chomsky, 2000a, p.168). And this integration is what makes the language faculty linguistic: “Each linguistic expression [PF, LF pair] generated by the I-language [a steady state of the language faculty] includes instructions for performance systems in which the I-language is embedded. It is only by virtue of its integration into such performance systems that this brain state qualifies as a language” (Chomsky, 2000a, p.27). Similarly, Chomsky (e.g., 1986, p.3; 2000b, p.54) explicitly states that universal grammar (UG) - the initial state of the language faculty - can be innocently understood as the language acquisition device. The reasoning here is that, on the assumption that the other systems with which the faculty interfaces are autonomous and in working order, the development of the faculty amounts to the acquisition of language, the proper integration of the performance systems.
Some words of clarification are in order. Firstly, the very idea of the language faculty is a methodological idealisation. Properly speaking, the target of our investigation is whatever horrendously complex ensemble of systems accounts for the observed linguistic phenomena. If one wishes to preserve the ‘faculty’ label for this manifold, there can be no a priori complaint. The sequel will follow the tradition, however, and understand the faculty to be that system that is dedicated to ‘syntax’ and relates to external performance systems which serve our use/articulation of language. Here, ‘syntax’ is not a simple notion of ‘form’ or ‘rules’ construed to be independent of words with their phonological and semantic properties. Rather, ‘syntax’ may be seen as a systematic relation between lexical items and complex structures built from lexical items. More precisely, ‘syntax’ (= the faculty) covers a lexicon and operations defined over selections from it that unfurl, as it were, phonological and semantic features inherent in the lexical items in such a way that structures are generated that may input to systems that put such structures to use in our production and consumption of language. In this sense, the faculty at a steady state is the result of the development of UG, whose parameters of variance are perhaps restricted to the morphological idiosyncrasies of the acquired lexicon. This system stands in contrast to the so-called external systems with which it is associated inasmuch as while the faculty’s output necessarily answers to the conditions the systems impose upon it (this is the core idea of the minimalist program - Chomsky, 1995), what use the systems make of its outputs is not itself encoded in UG or, perforce, the faculty’s steady states. Whether this perspective carves the ensemble of systems at its salient joints is an empirical question, not a conceptual one; that is, whether ‘syntax’ itself marks a theoretically salient distinction is to be determined. 
We maintain the assumption so long as doing so continues to be theoretically fecund. If recalcitrant data arises, or a simpler model consistent with the facts is formulated, etc., then we are rationally free to alter our assumption (see §5).
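The shape of this characterisation - selections from a lexicon in, a <PF, LF> pair out - can be made vivid with a deliberately toy sketch. Everything below is an illustrative assumption, not a claim of linguistic theory: the miniature lexicon, the feature labels, and the crude right-branching ‘merge’ are stand-ins whose only purpose is to display the form of the mapping.

```python
# A toy model of the faculty as a function from a selection of lexical
# items to a <PF, LF> pair. All names and 'features' here are
# illustrative stand-ins, not claims about actual linguistic theory.

from typing import NamedTuple


class Lexeme(NamedTuple):
    phon: str  # phonological feature bundle (stand-in: its spelling)
    sem: str   # semantic feature bundle (stand-in: a concept label)


LEXICON = {
    "dogs": Lexeme("dogs", "DOG.pl"),
    "bark": Lexeme("bark", "BARK"),
}


def faculty(selection):
    """Map a selection from the lexicon to a (PF, LF) pair.

    PF: a structure legible to the sound-articulation systems (here,
    crudely, a word string); LF: a structure legible to the
    conceptual-intentional systems (here, a nested predicate form).
    """
    items = [LEXICON[w] for w in selection]
    pf = " ".join(item.phon for item in items)
    # crude right-branching combination of semantic features
    lf = items[-1].sem
    for item in reversed(items[:-1]):
        lf = (item.sem, lf)
    return (pf, lf)


pf, lf = faculty(["dogs", "bark"])
assert pf == "dogs bark"
assert lf == ("DOG.pl", "BARK")
```

The sketch also displays the paper’s point about the external systems: what the ‘performance systems’ do with the PF and LF structures is no part of the function itself.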
Secondly, even though we have said that the faculty is a brain state, the faculty explanation of how sound/meaning pairings are attained and maintained is not a causal cum neurological explanation of how one is able to order a coffee or write a paper on the language faculty, still less is it predictive of such behaviour. The faculty may be viewed as a set of sui generis conditions met by the operation of a ‘normal’ brain. The conditions are sui generis in the sense that they specify an aspect of a complex system that is not visible minus such conditions. The aspect at issue is simply the recursive/structured integration of sound and meaning in human cognition. The idea here correlates with the thesis, going back to chapter 1 of Aspects (Chomsky, 1965), that the faculty accounts for competence not performance. It should be said that the correlation is not precise, and the distinction itself has never been absolutely clear. Still, the distinction does have a real point and can, I think, be usefully recast in my favoured terms. As an initial characterisation, we may say that the faculty counts as competence just because it explains how there is a systematicity between independent systems of sound and meaning; to have access to such systematic pairings is just what it is to be linguistically competent. Thus, rather than explaining performance, acts of speech or thought, the faculty hypothesis explains how performance is so much as possible. This, note, does not entail that competence is in any sense restricted to, or exhausted by, the provision of performance. The generation of structures designed to pair sound and meaning might well exceed the bounds of any performance parameters. Indeed, we may say that the very nature of the components conjoined in language use requires a third system whose principles are not dependent on the use we make of them in performance: we need a third (competence) system to integrate otherwise separate systems (see §5).
Thirdly, the faculty is further abstracted from the causal structure of the brain in that it is a function in intension. For example, we may describe a given Turing machine as computing some number or as working out the GDP of Luxembourg. As far as the computability of the function (in extension), qua ordered pair, is concerned, it does not make any difference what description we use: the same numbers in will give us the same number out. It makes all the difference in the world, however, if, rather than being given the function, we have an extant system and our job is to figure out what function it is computing given its outputs. In essence, here we are interested in what distinctions and properties the system is sensitive to. Assuming Church’s Thesis, we know that the function (in extension) will be computable in any number of ways in any number of formats, but this does no explaining for us. We want to know the function under that description (in intension) which will explain how the system can produce an independently describable output and keep to any constraints which issue from the nature of the system and the character of its development. So, turning to language, we want a function that produces the observed systematic and highly specific structures that are realised by meaning/sound pairings. In other words, what we don’t want is a function which merely generates symbol strings. The point here goes back to the heart of the argument of early generative grammar (Chomsky, 1955/75, 1957). Relations hold between linguistic structures that are not tractable if the structures are construed as immediate constituent symbol strings (e.g., active vs. passive, scope ambiguities, declarative vs. interrogative, etc.). Thus, the computation must be sensitive to items under whatever descriptive terms make sense of these relations which constitute our competence with the sentences. 
The function must also be such that the system may fixate on it via partial and degraded data; it would seem, for example, that it can’t be one which merely works on statistical regularities. So, not any old function that determines sound/meaning pairs will do; we want one which will explain an independently observed systematicity between and within sentences.
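The intension/extension point lends itself to a small illustration. The following sketch is a toy, with hypothetical function names: two descriptions agree extensionally - they determine the very same strings - yet only the structure-sensitive description can so much as state the distinctions (e.g., an attachment-style ambiguity) that the computation must be sensitive to.

```python
# Two 'grammars' with the same extension (the strings they determine)
# but different intensions. The flat description sees only a symbol
# string; the structured one builds binary constituents whose yield is
# the same string. Only the latter can represent an ambiguity.

def flat(words):
    """A description that merely concatenates symbols."""
    return " ".join(words)


def yield_of(tree):
    """Flatten a binary-branching structure back to its string."""
    if isinstance(tree, str):
        return tree
    left, right = tree
    return yield_of(left) + " " + yield_of(right)


words = ["old", "men", "and", "women"]
wide = ("old", ("men", ("and", "women")))    # old [men and women]
narrow = (("old", "men"), ("and", "women"))  # [old men] and women

# Extensionally identical: the same string either way...
assert yield_of(wide) == yield_of(narrow) == flat(words)
# ...but intensionally distinct: different structural descriptions,
# corresponding to different readings, which the flat description
# cannot distinguish at all.
assert wide != narrow
```

The moral, as in the text: a function that merely generates symbol strings does no explaining; the explanatory description is the one under which the system is sensitive to the structural distinctions that constitute competence.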
Fourthly, this conception of the language faculty is not intended to be one Chomsky or others have always held, as if there were such a notion. My construal is intended to be one which is flush with many aspects of Chomsky’s developing position (if not every sentence he has ever written), especially in its course to ‘internalism’. Further, it is independently coherent, as well as being distinct from the epistemological reading. Still, a slight exegetical digression will serve us later.
The notion of a grammar presented in Syntactic Structures (Chomsky, 1957) - in fact, undergraduate lecture notes - and the unpublished 1955-56 manuscript of Chomsky, 1955/75, which in essence prevailed up to the advent of the principles and parameters approach (e.g., Chomsky, 1981, 1986), appears to fit pretty much with Fodor’s epistemological reading. Perhaps the most revolutionary feature of Chomsky’s early work was its “methodological approach”, the idea that a grammar for a language should be evaluated much like a scientific theory. The job of linguistic (meta-) theory is to specify a set of theories (grammars) that recursively assign a set of descriptions (at various levels) to the sentences of a language, where the descriptions cohere with the speaker/hearer’s intuitions over the language. The assignment is recursive in that it is infinitely generative (“predictive”) of descriptions of the sentences of the language. The psychological analogue of this approach - not broached in the earliest work - is that the general theory details what grammars a speaker/hearer can acquire/know which in turn explain the intuitions to which the explicit grammars answer, i.e. the general theory becomes an account of the ‘language acquisition device’ that maps from data to a highest valued grammar “predictive” of the data. Thus, rather than grammars being operationally definable from the corpora to which human children are typically exposed, they are theories of the data, internalised by children, that posit unobservables and abstract rules (as made available by the LAD) to capture hidden generalisations in the respective target languages, which in turn account for the structure of, inter alia, linguistic ‘creativity’ (unbounded and continuously novel understanding and production, uncaused by features of the prevailing situation but appropriate to it and coherent in the context).
Fodor, with many others, appears to think that Chomsky’s position, although developed in all sorts of other ways, still retains this kind of methodological/psychological split. This is not so.
It seems to me that in Chomsky’s early thought the methodological and psychological approaches were at least equal; the former prevailed in the published work, the latter was implicit, but soon to emerge (Chomsky has clearly stated that psychological issues of innateness, explanatory adequacy, UG, etc. were important to him in the mid 1950s, in the “immediate background”, “taken for granted”, but their articulation would have been “too audacious” (Chomsky, 1955/75, p.13/35/37). It is worth noting in this regard that the famous opening chapter of Aspects (1965) was essentially written in the late 1950s (Chomsky, 1982, p.62).) What was crucial to Chomsky, and trivially shared by the two approaches, was a commitment to explicitness, which was perhaps best expressed methodologically. (An explicit grammar is one which is infinitely generative of structural descriptions and does not rely on speaker/hearers’ intuitions.) Such is why, I think, psychological factors were certainly not so central for Chomsky that they formed clear constraints on the adequacy of grammars, even if this was just because he couldn’t see how the two would properly mesh (Chomsky, 1955/75, pp.116-7). So, while Chomsky did think, contra the behaviourists/empiricists, that grammars were psychological kinds - the kind of thing speaker/hearers know -, one doesn’t find a notion of knowledge or explanatory adequacy that imposes any particularly tight constraint on the nature of the human mind such that it gives rise to the grammars (only in the famous Skinner review does Chomsky, 1959, first raise such issues, and even there he merely sketches a cognitive model as a potential alternative at the end of the article, after the Skinner model had been refuted.) This reading is not at all curious when it is appreciated that explicit cognitive considerations were unnecessary for the refutation of behaviourism/empiricism in linguistics.
The refutation proceeded simply on the basis of grammars being explicit, for their very abstractness already sounded the death knell for behaviourism and its operational discovery procedures; the structure of the grammars was not recoverable from the ‘visible’ properties of corpora. Further, as intimated above, the introduction of an explicit psychological construal of the theory (Chomsky, 1962, 1964, 1965) was, in essence, a simple recasting of already extant methodological notions. The “external” condition of descriptive adequacy and the “internal” condition of explanatory adequacy of the 1960s work were simply interpretations of the earlier notions of descriptive conditions, which pertain to particular grammars, and generality conditions, which situate such grammars within general linguistic (meta-) theory (e.g., compare Chomsky, 1955/75, pp.108-9, 1957, pp.49-50 with Chomsky, 1964, pp.62-3).
Thus, we have a natural concordance between the methodological and psychological approaches: the speaker/hearer’s relationship to her grammar vis-à-vis data could usefully be cast as being the same as the scientist’s relationship to her theories vis-à-vis data inasmuch as both involve theory construction whose hypotheses are not inductively determined (see, e.g., Chomsky, 1957, pp.49-50; 1959, pp.574-8; 1962, pp.528-9/535; cf. Katz, 1966, p.275). All in all, the general impression that was to persist was that grammars, as explicitly specified theories of languages, could (and should) be understood as objects of knowledge, where, in Higginbotham’s 1991, p.556, words, it is a “rational achievement” of the speaker/hearer to acquire her grammar, a theory of her language (Higginbotham appears to endorse this model; cf. Fodor, 1983, p.5). It appeared to be such an achievement precisely because specifying a theory of a language - a grammar - was one thing, specifying the psychological facts which enabled a speaker/hearer to know/acquire such a theory was another thing (George, 1989, p.91 explicitly endorses this model). When psychological concerns came to the forefront in the early/mid-1960s, and the old notions were recast in the ways indicated, the epistemological terminology of the methodological approach became less clear cut. For instance, the evaluative notions (e.g., simplicity) became internal to general linguistic theory and fell under general psychological constraints given that it was the child not the theorist who was doing the evaluating. Such issues of learnability and grammar selection eventually led to the triggering model - as opposed to an inference from evidence model (which was always a tentative hypothesis, not an a priori claim) - of the principles and parameters approach and its concomitant internalism (see §§5, 6). 
With this approach there appears to be nothing left that is theoretical about the language faculty, at least not in any epistemological sense. This is reflected in the eschewal of the very notion of a child learning a language in favour of the notion of a child’s linguistic growth: to acquire a language is to undergo a specific internal development, it is not to acquire knowledge about - a grasp of - an external object. Similarly, it is a mistake to think of the poverty of stimulus considerations in favour of UG as an instance of the general (Humean/Goodmanian) inductive underdetermination of theory by evidence, as if the child were seeking to find justification for its ‘theory’. One key difference is that the child’s data appear not to confirm the UG generalisations at all, but do appear to confirm simpler but false ones; the ‘data’ act as a trigger for the realisation of some principle, not as evidence for it. There is nothing rational about acquiring a language.
The point of this little bit of history is to highlight that after the development of P&P it really became transparent that grammars and/or languages could no longer be sensibly thought of as independent ‘objects’ of knowledge. In bald terms, grammars, and so languages, ceased to be understandable as ‘things’ which speaker/hearers know; they are simply states of the speaker/hearer. Otherwise put, the only operative notion of language is the grammar ‘in the head’ - the I-language - which is not represented as an external object in any epistemological sense whatsoever, nor, a fortiori, does the internal state represent such an object - there is nothing to represent. The difference between the represented or known and the representation or state of ‘knowledge’ is, at best, a formal reflex of the notion of ‘representation’: a representation requires a represented. But even the formal distinction is misleading. If the faculty is not represented, then there are no causally efficacious representations in the appropriate substantive sense, for there is nothing for them to represent. Of course, there is no harm in retaining the term ‘representation’ along, indeed, with its reflex, but it has a “technical sense”, marking levels of interface between an ensemble of internal systems (Chomsky, 2000a, pp.175-6). Indeed, Chomsky’s use of ‘representation’ has always been technical; it derives from concatenation algebra, not philosophical theories (see Chomsky, 1955/75, p.105). So, we may call an LF-structure a representation because (inter alia) it meets conditions internal to the faculty imposed upon it by other systems. The structure isn’t about such conditions; it can’t go right or wrong about them. The whole arrangement is ‘syntactic’, internally specifiable (cf. Jackendoff, 2002). Chomsky, 1986, pp.28-30, has suggested that the “systematic ambiguity” between the methodological and psychological approaches to grammar/language has been the root of much confusion.
The internalism that resolves the ambiguity will be returned to at length and be shown to be inconsistent with Fodor’s general argument that there must be a representation/represented distinction.
Let us now consider some ‘Fodorian’ complaints that might be expected. After that, we shall turn to his argument that his epistemic understanding must be correct.
4: ‘Knowledge of Language’
The first issue which needs to be broached is the simple fact that Chomsky has for many decades freely appealed to knowledge of language, which would prima facie suggest that Chomsky has an epistemic understanding of the language faculty. Does Chomsky not mean what he says?
The negative aspect of this central locution bears emphasis. To say that a subject knows a language is to signal that linguistic capacity is not constituted by a set of dispositions or a practical ability or a capacity to communicate. In short, the knowledge locution is, apart from anything else, intended to make clear that competence not performance is at issue (see especially Chomsky, 1965, pp.8-9; 1975, pp.22-3.) So much should be agreed upon by all. However, under the distinction of know-that/know-how, where the latter is read in terms of capacities or dispositions, it seems obligatory to read linguistic competence as a species of knowledge-that, albeit one which potentially departs in various ways from our intuitive notion. Chomsky, though, has been consistently leery of the know-that/know-how distinction, mainly because he doesn’t see dispositions or other behavioural notions as possessing any explanatory value. Thus, it matters not at all to linguistic theory whether the speaker/hearer knows-that or knows-how a language, or ‘has a language’, ‘speaks a language’, etc. so long as what is being picked out is a mental structure. To this end Chomsky, 1975, pp.164-5 (cf. 1980, pp.69-70), coins the term cognize to divorce his appeal to knowledge from those properties that have traditionally differentiated know-that from know-how: especially ‘justification’ and ‘conscious access’. Still, Fodor assumes, explicitly contradicting Chomsky (see n.11), that “cognize” is a species of know-that in a substantial propositional sense. As Fodor, 1983, pp.4-5, puts it: “what is innately represented should constitute a bona fide object of propositional attitudes; what’s innate must be the sort of thing that can be the value of a propositional variable in such schemas as ‘x knows (/believes/cognizes) that P’” (cf., Fodor, 2000, pp.10-11.) While Fodor is certainly right that ‘cognize’ may serve as a propositional attitude verb (Why not?
It can have any features we wish to preserve from ‘know’), it is not intended to be a notion which contributes in any substantial sense to linguistic theory.
The language faculty is cognized in the sense that it is a state of the mind/brain that is separate from, though it stands in an explanatory (not causal) relation to, performance, much as, notwithstanding obvious differences, knowledge of a precise score explains one’s competence with Beethoven’s Hammerklavier, a knowledge which is retained even if one develops a debilitating arthritis. The precise thesis, then, does not essentially involve any epistemic terms, or propositional attitudes. The ‘knowledge’ at issue is simply whatever autonomous core concepts and principles structure one’s performance and judgements, although, again, the extent to which it outstrips performance is an empirical matter. In particular, then, it does not follow that the cognized falls under any broader cognitive state or process such that we may sensibly speak about it being an object of knowledge. One difficulty in seeing this is that the information the language faculty generates is easily conflated with the faculty itself; indeed, Chomsky’s ‘cognize’ is explicitly intended to cover both notions, but this just shows how little it has to do with our intuitive propositional attitude notion of knowledge. This becomes transparent with Chomsky’s more recent understanding of the faculty.
It seems natural to think of the language faculty of S as representing S’s language, or at least its grammatical structure: the faculty generates grammars that are true of languages. This is Fodor’s model: grammars are “innately specified propositional contents… truths” (1983, p.7; cf., 2000, pp.95-6). But what could such contents possibly be true of? There is, as it were, nothing to get right. Languages are not external objects we can go right or wrong about. Fodor, 2000, chp.5, suggests that what we might be getting right is the language of our conspecifics, those from whom we learn our language. But this cannot be right. The language we end up with will reflect, along certain narrow parameters, our initial experience, but we do not end up representing (or misrepresenting) that experience. The complex grammatical principles and features are just not in the data; that’s the very point of the poverty of stimulus considerations. The only thing there is to represent is the language faculties of our conspecifics, or at least UG, but that is not something which we represent on the basis of another speaker; it is simply shared qua species trait. The problems here arise from the very idea that language is something which may be represented, something which exists external to us which we can get right or wrong, have true beliefs about. In essence, this model is what Chomsky rejects under the label of ‘E-language’, ‘E’ for an extensionally individuated set of external objects. As go E-languages, so go grammars as things which might represent them.
As far as scientific investigation is concerned, as opposed to conceptual reflection, Chomsky, 1986, 2000a, understands ‘language’ to be the internal finite generative procedure that accounts for our infinite competence. Chomsky dubs languages, so construed, I-languages: the internal steady states of our individual language faculties (UG is separately designated as the initial state, but any state of the faculty may be construed as an I-language). In effect, an I-language just is a particular function in intension, a variation on a biologically determined theme. Thus, an I-language is not something one may be right or wrong about; it is a structure of the mind, not an independent object the mind represents.
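The notion of a function in intension, as opposed to a function in extension, can be glossed with a simple programming analogy (the analogy and the code are offered here as an illustration, not as anything in Chomsky’s texts): two procedures may compute exactly the same input-output mapping (the same extension) while remaining distinct procedures (distinct intensions). Identifying an I-language with its extension would erase precisely the difference that matters.

```python
# Two procedures with the same extension (identical input-output mapping)
# that are nonetheless distinct functions in intension - an analogue of
# the point that an I-language is a particular generative procedure, not
# the set of outputs it determines.

def sum_iterative(n):
    """Sum the integers 0..n by explicit iteration."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum the integers 0..n by Gauss's closed formula."""
    return n * (n + 1) // 2

# Extensionally indistinguishable on every tested input...
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
# ...yet they are different procedures: a purely extensional
# individuation cannot tell them apart.
```

On a purely extensional criterion the two definitions are one and the same function; it is only at the level of procedure that they differ, which is the level at which I-languages are individuated.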
Now it is perfectly sensible to think of such I-languages as providing us with infinite knowledge-that (in the colloquial sense) about, say, relations of co-reference, elliptical antecedence, distribution of argument roles, etc. (Chomsky, 1975, p.165; 1980, p.71/93; 1986, p.269). For example, it is our language faculties that determine (in part) our knowledge that in Bill expects to leave by himself it must be Bill who expects to leave (by himself), but when we embed the sentence as a wh-complement - e.g., Harry wonders who Bill expects to leave by himself - we know that it’s not necessarily Bill who expects to leave, but someone or other (who might, accidentally, be Bill). We may also even say (theory permitting) that the competent speaker knows that in the first sentence the empty subject - PRO - of the infinitive complement is (i) obligatorily controlled by (roughly, co-referential with) the matrix subject and (ii) is the clause-mate antecedent of the reflexive, while in the second sentence, the empty subject is the trace (or copy) of the raised wh-argument, with the lower copy binding the reflexive. But the relevant I-language here is not the set of such judgements nor even the set of LF structures which encode the chains of dependence hinted at. I-languages themselves are not infinite abstract objects nor ‘statements’ of such knowledge that can coherently fit into Fodor’s schema (especially see Chomsky, 1994, p.158; 2000a, p.73.) An I-language is that which, in part, generates such knowledge, not the products generated. Indeed, the nature of the faculty is a theoretical issue; it is not a descriptive notion determined by a mere abstraction from our intuitive judgements.
It is wholly unclear, then, in what sense we might know-that an I-language (a state of the language faculty). It really does bear emphasis that, if we bracket Chomsky’s various informal statements about ‘knowledge’, which, over the years, have mainly been in response to philosophical queries, and look just at the theories proposed (especially the recent ones), then we don’t find likely propositional objects of knowledge, whether realised by internal propositional states a la Fodor, or not. We find descriptions of lexical items as (non-phrasal) feature clusters; a computation from lexical selections that merges and moves lexical items, and constraints/principles the computation inherently meets that define levels of description of the interfaces at which external systems integrate with the computation. There are no components here that are constrained to be propositional so as to cleave to our commonsensical attitude verbs. Chomsky is explicit about this:
… in English one uses the locutions "know a language," "knowledge of language," where other (even similar) linguistic systems use such terms as "have a language," "speak a language," etc. That may be one reason why it is commonly supposed (by English speakers) that some sort of cognitive relation holds between Jones and his language, which is somehow "external" to Jones; or that Jones has a "theory of his language," a theory that he "knows" or "partially knows."… One should not expect such concepts to play a role in systematic inquiry into the nature, use, and acquisition of language, and related matters, any more than one expects such informal notions as "heat" or "element" or "life" to survive beyond rudimentary stages of the natural sciences. (Stemmer, 1999, p.397; cf. Chomsky, 2000a, p.72/119, Chomsky, 2000c, p.23.)
The position here, note, is not premised upon a rejection of any particular theoretical substantiation of knowledge, be it Fodor’s RTM or something more neutral. The much simpler point is that knowledge of language is just a reflection of English collocation; it is not a notion that requires any theoretical substantiation, still less does it play a constraining role in linguistic theory. There is no relation R between speakers and theories/grammars, save, of course, those we can stipulate willy-nilly (see §7).
We may view the retention of the epistemic locution as a species of Wittgensteinian sophisticated naiveté. Occasionally, Chomsky (e.g., 1980, p.70, 1986, pp.32-3/267-8) does point out that his use of ‘knowledge’ is not necessarily at odds with its common-sense understanding, although any correspondence is of marginal interest. The point here is that ‘knowledge’ is employed as an informal term, one which may readily give way to a more precise specification. This approach, however, is not legislative. To propose that the faculty hypothesis is not best thought of in epistemic terms is not to be an eliminativist about ‘linguistic knowledge’, as if English collocation were somehow expressive of an outmoded ontology and should hence be regimented so as to lose such an ontic commitment. Rather, we are led to the simple thought that the job of linguistic theory is to understand the nature of the language faculty (UG, developed I-languages and their integration with external systems), not how the language faculty stands to our colloquial locutions. It turns out, partly as a matter of discovery, partly as a matter of methodology, that we do not know languages (better: I-languages) in a range of senses which the verb carries. For example, notions of conscious access, justification, authority, and norms are simply not applicable. Most centrally, the verb also loses its transitive/relational aspect; this was one of the main motivations for Chomsky’s cognize neologism and is explicit in the above quotation. An I-language is not an independent ‘object’ - a set of propositions - that is represented/known by a speaker/hearer; it just is a state of the speaker/hearer. From this perspective, we may say that ‘language goes on holiday’ when the colloquial knowledge relation is recapitulated internally, as if to possess a faculty requires us to stand in some epistemic relation towards our internal states so that they may serve as the putative objects for our transitive verb.
Only persons know languages; mind/brains do not, just as they do not see cats on mats. More collocation. The import of the common verb - the aspect retained - is that the knowledge is a state (or a set of states) rather than a relation to an independent thing, and this state accounts for the way in which speaker/hearers produce and consume language without the state being exhausted by the specification of such an input/output profile.
5: Collapsing a Distinction
The proposal offered is that we should collapse the representation/represented distinction. The language faculty is not represented by the mind/brain; it is an aspect of the mind/brain. In essence, it is a function from a lexicon to pairs of structures that integrate external systems. The function is defined (in intension) by the descriptive conditions it must meet. The states of the faculty are solutions to an equation between the features of the lexical items and the conditions imposed by the external systems as to which features are legible to them. The states are not amenable to causal generalisation; nor, perforce, do they contribute to the aetiology of linguistic acts. Two potential problems arise here. Firstly, in what way does the poverty of stimulus argument support the hypothesis? Secondly, does the proposed collapse of the representation/represented distinction vitiate the performance/competence distinction?
The first worry here is wholly misplaced. As seen above, poverty of stimulus considerations cannot directly militate for the claim that there is a language module, a component of the causal architecture of the mind/brain. Equally, though, they do not militate for a propositional conception of innate knowledge. The classic arguments of Chomsky are only intended to show that the mind/brain is structured (biologically or physically) to develop/grow I-languages given certain broad environmental conditions; the mind is not an unformed general mechanism whose eventual shape is determined by experience. On the present conception, a theory of the language faculty is simply a specification of such a structure. Thus, the poverty of stimulus considerations tell us, in effect, that the mind/brain must be natively structured in such a way as to be sensitive to concepts of (theory permitting) categorical features, case properties, theta-roles, head projections etc., for these concepts are not recoverable from the data but are necessary (by present theory) to account for the observed structure of linguistic output. So, we do not need to think of Chomsky as confusedly using an epistemological argument for an architectural conclusion.
What of the concern that a departure from the epistemological model leads to a destabilisation of the competence/performance distinction? As noted above, the distinction is not written in stone; it marks a methodological simplification of the complexity that constitutes linguistic cognition in toto. In a sense, the distinctness or autonomy of the language faculty just amounts to the idea that it is a central system rather than an input/output one. That is, systems of the mind which deal with production and understanding of language - performance - interface with the faculty, but the faculty is independently constituted (especially see Chomsky, 2000a, pp.117-8.) This is not a stipulation but a theoretical assumption that is sufficiently supported to be our default position. The structure of language cannot be explained by performance in that it is not reducible to - explicable from - the meaning or sound properties it is used to encode. Trivially, sound is linear, but grammatical structure is hierarchical. Likewise, there are perfectly well structured ‘representations’ which, for most, are unusable; and there are perfectly acceptable meanings that are expressed by structures, but which are not typically encoded in those structures. As a matter of fact, then, rather than stipulation, it looks as if the language faculty is not a performance system. It is that third component that is required to effect a convergence of sound and meaning, for properties of neither are predictable or explainable from the other. Further, this component counts as a system of competence just in the sense that it plays this third role, ‘instructing’ performance systems while being independently constituted. In perhaps picturesque terms, the faculty is an engineering solution to the problem of effecting a convergence between the two independent systems. The faculty remains autonomous, though, in that its output pairs answer to independent constraints (sound and meaning respectively).
The faculty cannot see or know what use, if any, its interfacing systems may make of its outputs; that the pairs are convergent is determined by the internal structure of the faculty - it does not have one eye, still less two, on what, if anything, the rest of the mind may make of them (Chomsky, 2000a, p.27/117-8). Chomsky goes as far as to suggest that the faculty could have been employed for locomotion in a differently organised mind/brain, if it had been integrated with different systems (Chomsky, 2000a, p.27). The point here is just that, while the faculty meets the conditions imposed by the external systems, its internal design does not encode for just those conditions as opposed to a potential myriad of others. Obviously, no theory (a set of propositions) that ‘truly represents’ an external language could effect locomotion in a differently organised system.
A final worry which might be raised is to do with our ‘free access’ to language. We can, for example, reflect on linguistic structure and discern previously unnoticed ambiguities or see that what appeared to be gibberish is in fact meaningful (see n.14). Indeed, much of the data for linguistic theory comes from such reflection by the linguist, rather than data on performance, although, of course, there is no bar on such data. What this appears to show is that the object of linguistic theory is our propositional knowledge, that which, in part, is reflectively available. The impression is wrong. As earlier stressed, we can indeed be said to know or to cognize the products of the faculty, but they are not the faculty itself; the faculty is that which, in part, generates the products, and it is not open to conscious reflection.
Again, to bar conscious access to the language faculty is not, perforce, to consider it a parser module - an encapsulated device that mandatorily produces structural descriptions for acoustic inputs. Chomsky and others have been vociferous about this: as emphasised, there is a difference between acceptability (parsing results) and interpretability (what’s determined by the faculty). Indeed, Chomsky, 1996, pp.14-5, even doubts whether there is a universal human parser (cf. Chomsky, 1986, p.14, n.10; 1991, pp.19-20; 2000a, pp.117-8; Stemmer, 1999; Higginbotham, 1987). This is not to say, of course, that parsing processes do not involve the language faculty; it is only to say that the faculty is neither a parser itself nor an encapsulated database for one (cf. Weinberg, 1999, on minimalism and parsing). So, the disavowal of conscious access would translate into an argument for an epistemic understanding of the faculty only if the only alternative were a parser module. This is not the situation we are in. The kind of freedom and conscious access language users enjoy is, for Chomsky, a mystery, potentially beyond human comprehension. Whether this is so or not, no light appears to be shed on the problem by saying that the faculty is propositional rather than an abstract structure (a function in intension). After all, the problem of free will is not solved by saying that, unlike in the case of a knee-jerk reflex, one has access to the knowledge that helps one decide whether to act in this way or that. There would be a difference if linguistic performance could be explicated in causal-computational terms, but that is not on the cards.
Still, it might be thought that only an epistemic understanding of the faculty is so much as consistent with the phenomenon of free access. Yet again, it is difficult to see such a claim as more than an insistence on Fodor’s exhaustive disjunction. On the current position, the job of at least making sense of the free access or the ‘dawning of meaning’ phenomenon, as witnessed in the examples of n.14, is necessarily divided between the language faculty and other systems of the mind. We may view the faculty’s role as determining the range of interpretations (potentially zero) a given structure may possess, independent of whether we immediately recognise such interpretations. When we work out, say, that a structure does in fact have some unnoticed interpretation, we are not consulting the faculty at all. To see this, consider: Sailors sailors sailors fight fight fight. One way of explaining the coherence of this sentence to the uninitiated is to ask them to drop the central clause - sailors fight - and stress the remaining relative, to produce: Sailors SAILORS FIGHT fight (additional pauses before and after the relative help). Then they see that the material they dropped is just another clause. But we are not here simply analysing the sentence, working out its phrasal hierarchy or the selection features of the verbs or some such; rather, we are appealing to performance factors of stress (or intonation, or context) in other examples. (Similar remarks hold for ‘garden paths’, such as the illuminating contrast between The horse raced past the barn fell and The onions fried in the pan burnt.) We may say that the faculty has done its job and remains in aloof silence on the matter, while we delve around in matters of meaning, sound, context, and comparison. We are, for sure, learning something about the faculty - e.g., that its principles generate interpretable n-degree centrally embedded relatives - but we are not being instructed by the faculty (see §7).
Let us now look at Fodor’s explicit argument that a non-epistemic understanding of the language faculty cannot be what Chomsky has in mind.
6: Fodor’s Argument
Fodor, 2001, offers an argument whose conclusion is that Chomsky must be making a clear distinction between representation and represented. As previously noted, many others agree with Fodor in this regard, but neglect to provide something like an argument; instead, they rely on the simple inference, ‘If the faculty isn’t doing the representing, then it must be the represented’ (e.g., George, 1989; Higginbotham, 1987, 2001; Knowles, 2000). Fodor’s argument is directed at Cowie, 1999, who appears to be insensitive to the distinction between the language faculty and performance systems. If Cowie indeed recognises no such distinction, then she is simply mistaken, independent of Fodor’s argument. So, let us put aside a priori inferences and just think about Fodor’s argument on its own terms.
Fodor argues as follows. Distinct grammars G and G* for a language L of speaker S can be co-extensive in the sense that they (strongly) generate the same set of L-sentences with the same structural descriptions. That is, the grammars are in agreement about the sentences S understands and how he would, say, parse them. Let this mean that G and G* are descriptively adequate grammars of S’s language. It does not follow, though, that G and G* are equally true of S. G*, say, might include principles of ordering or substitution, or might even be infinite - features which are all, ex hypothesi, incompatible with UG (if, say, S had tried to acquire his language by fixating on G*, he would have failed due to poverty of data). Thus, while G* extensionally picks out S’s language up to phrase structure - what S knows - it cannot be the grammar S internally represents, for it is inconsistent with UG, the hypothesised initial state of S’s language faculty. Now Fodor’s key point is that if grammars are internally represented intentional objects, this difference between descriptive adequacy and truth is easily explained. The difference is that while G and G* express the same content - G and G* are descriptively adequate - S can represent G, but not G*. Thus, there is a notion of psychological reality - the underlying representations - that is not exhausted by descriptive adequacy, i.e., what is represented. If there were no difference between representation and represented, then there would be no difference between G and G*, both would be true of S qua descriptively adequate.
The basis of Fodor’s reasoning here, I think, has its roots in Quine’s, 1969, 1972, transposition of indeterminacy considerations from translation/meaning to grammar. Thus, one finds the essence of the above argument succinctly intimated twenty years earlier:
If… the notion of internal representation is not coherent, the only thing left for linguistic theory to be true of is the linguist’s observations [i.e., that covered by the constraint of descriptive adequacy]… Take the notion of internal representation away from linguistic metatheory and you get positivism by subtraction (Fodor, 1981a, p.201).
I’m not here concerned with whether Fodor’s argument is a good response to Quine-style scepticism. In fact, in Chomsky’s, 1975, 1980, extended discussions of Quine, his response has simply been that there is no coherent indeterminacy thesis beyond the trivial claim of theoretical underdetermination (cf., Chomsky and Katz, 1974, in response to Stich, 1972). Of course, a concomitant of this claim is that we shouldn’t artificially impose operational criteria on the data which may be relevant to linguistic hypotheses. But it just doesn’t follow that the only way to forego an indeterminacy inducing operationalism is to endorse a representation/represented distinction. Whatever the proper response to Quine and those influenced by him might be, the argument proposed by Fodor is not Chomsky’s; if, then, Fodor wishes to endorse its generalisation as part of his intentional realism, as he does, the ‘Chomskyan’ approach to language is not a basis from which to proceed. First, something will be said about Fodor’s endorsement, then the argument will be diagnosed.
The question of what verb we should use in relating a subject to his linguistic competence is somewhat trivial, a matter of collocation. The issue, however, is not trivial for Fodor precisely because, for him, propositional attitudes are the level of theorisation which may track, and so capture in generalisations, the causal springs of human behaviour. In crude terms, then, the problems for Fodor we have been witnessing may be due to his reading Chomsky as if he were Fodorian. That is, Chomsky is being read as if he endorses some general theory of cognition, such as Fodor’s representational theory of mind (RTM), with the language faculty being a particular set of belief contents (something which may be the object of ‘cognize’) represented in some more generally specifiable format (a language of thought) whose tokens form the causally efficacious states of our beliefs and knowledge (see Fodor, 1983, §1.1 for the explicit reasoning). Again, this perfectly fits with Fodor’s position that the language faculty is essentially at the disposal of a parser module. Fodor has explicitly talked this way from at least the early 1970s (see n.2). In effect, the argument Fodor attributes to Chomsky is just an argument for a particular instance of what Fodor sees as a general distinction between mental content and underlying representational structures. As we have seen, though, Chomsky endorses no general theory of cognition in terms of which language might be understood as an instance, still less a language of thought in Fodor’s sense (Chomsky, 2000a, pp.176-8). The computation which underlies linguistic competence is sui generis, essentially linguistic in that it is defined in terms of operations from a lexicon to a set of <PF, LF> pairs. Moreover, as also noted and supported, Chomsky has no commitment whatsoever to the integrity of our intuitive concepts of belief or knowledge. A simple point should make this palpable. 
Chomsky has long contended that human action, and linguistic behaviour or performance in particular, is not caused (especially see Chomsky, 1975, Chp.1; 2000a, pp.72/95.) But Fodor’s general view is that RTM is the only way to fit content with causal states. If, therefore, Fodor’s endorsement is an endorsement of the idea that the language faculty is content that may feature in intentional generalisations causally implemented in computational states, then it is not an endorsement of anything for which Chomsky argues. Let us now turn to the argument itself.
The first thing to note is that Fodor’s argument, if from anywhere, is from the standard theory. But even on this understanding, the argument is misleading. The classic distinction of the standard theory is one between descriptive adequacy and explanatory adequacy - not ‘truth’. That is, the thought was that where general linguistic theory might allow for, in principle, two or more grammars that would explicitly generate the same structural descriptions for some language L, those grammars would be equal in terms of explanatory adequacy just if (i) they were acquirable given data plus UG principles and (ii) they were equally ‘simple’ by a UG metric. If two or more grammars were so equal, then, effectively, they would be notational variants. Now, if ‘truth’ is intended to cover grammars sanctioned by ‘explanatory adequacy’, then his argument just does not go through. A descriptively adequate grammar is not ipso facto a grammar a speaker might know in any sense that plays a role in linguistic theory, i.e., descriptive adequacy just doesn’t equate with possible content. The very point of explanatory adequacy, as a condition on general linguistic theory, is to constrain the grammars speakers can know, to “distinguish natural languages from arbitrary symbolic systems” (Chomsky, 1965, p.36): it is nomologically impossible for a speaker’s language faculty to generate a descriptively but non-explanatorily adequate grammar, i.e., one that is not appropriately selected by the constraints on UG. Thus, of Fodor’s two grammars, G* qua not generatable by UG just cannot be represented, even if it is co-extensive up to sentential phrase structure with the acquirable G. Chomsky’s actual point was much simpler than Fodor’s attribution. It was that descriptive adequacy is only necessary, not sufficient, for a grammar to be ‘known’ (ibid, p.34); the adequacy condition provides an “external” justification by relating the grammar to observed facts (ibid, p.27).
Explanatory adequacy provides an “internal” justification of the same grammar by relating it to UG in showing that the grammar is in fact acquirable, an actual human language, rather than an “arbitrary symbolic system”. So, the two adequacy constraints jointly determine a single representational system: an adequate grammar is one which ‘fits the facts’ and is explicable as an instance of general rules and conditions that may be understood to be universal, innate (cf. Chomsky, 1964, p.63). As remarked in §3, this difference between ‘internal’ and ‘external’ was already present in Chomsky’s earlier work of the 1950s, where it had a methodological gloss as opposed to a psychological one. On either gloss, it is flat wrong to think that one condition is to do with external content - what is known - while the other is to do with its internal representation - how it is known or represented. The distinction is to do with particularity, determinable via ‘external’ observation, and generality, determinable via ‘internal’ investigation of the resources available to any human language. Unsurprisingly enough, if Fodor’s argument does not even track the reasoning of Aspects, we should not expect it to track the reasoning of the more recent principles and parameters (P&P) model.
Under the P&P model, UG is theorised as the schematic initial state of the language faculty; exposure to data fixes a lexicon and triggers a given vector of values to a finite set of parameters, which results in a steady state of the faculty. Roughly, this state is a speaker’s I-language (what we have just been calling a grammar) (see Chomsky, 1986, chp.2, and 2000a, chp.1). Clearly, Fodor’s argument makes no sense on this model.
There is no conception here of a grammar or language to which notions of extensional equivalence or strong - let alone weak - generative capacity apply. Chomsky (2000d, p.141, n.21) is explicit on this matter:
“[We should] put aside [the view] that restricts “linguistic evidence” to identification of “well-formed” (“grammatical”) expressions [i.e., “external” evidence which bears on descriptive adequacy], so that the linguist then faces the alleged problem of selecting among grammars that are extensionally equivalent over these objects. Such demands [are] incoheren[t].” (Cf. Chomsky, 1996, p.48, and 2000a, p.132.)
Note: Chomsky is not answering the problem; he is dismissing it as one which does not so much as arise, since the operative extensional notions are senseless. There is no threat of “positivism by subtraction”. An I-language is simply a state of the mind/brain: a procedure from a lexicon to a <PF, LF> pair. There is nothing here to be descriptively adequate but not true, and so there is no distinction which needs explaining via a representation/represented distinction. In this light, there is no real distinction at all, still less one of the form Fodor assumes. There is the one system, which can realise distinct states, and we want an explanation for all of them (cf. Chomsky, 2002, p.131). Still, we can perhaps say that a theory of the faculty is descriptively adequate if we can see how the developed states of the system support mature competence, and it is explanatorily adequate if the account of the initial state (inter alia) of the system is such as to furnish an explanation of the maturation of the descriptively adequate states. This is a distinction of perspective, not fact or principle. A uniform computational procedure (the intensional function) is taken to constitute the initial state (UG), and the determination of an I-language will reduce, under a minimalist understanding, to a specification of lexical idiosyncrasies and how these affect the morphological marking of functional categories. Trivially, on this understanding, the idea of a pair of strongly generated I-languages, only one of which may be represented, is meaningless: (i) I-languages are not generated at all; they are determined by parameter settings. (ii) By definition, an I-language must be ‘represented’ (realised in some brain), for it is nothing other than a state of a brain, period. (iii) All I-languages are the same up to lexical differences and morphology; the only ‘grammar’ there is is UG, up to the idiosyncrasies indicated.
(iv) Consequently, there is no concept of a grammar that applies to a language, and so no possibility of two grammars equally applying to the same language. We can think, in theory at least, of distinct I-languages generating the same E-language, where an E-language is a set of symbol strings type-classified according to some rough phonological or grammatical criteria: all the sentences of ‘English’, say. But this distinction is no part of linguistic theory proper, and an E-language is certainly not what the linguistic mind represents.
7: (Tacit) Knowledge of Language
The argument so far has been that Chomsky’s understanding - and the understanding we should also have - of the language faculty falls outside of Fodor’s faculty/module distinction. In particular, we have seen that there is no basis for the view that the faculty comprises a potential propositional attitude ‘object’: the faculty is not a set of propositions represented by the mind/brain over which relations of knowledge or belief might be realised. There is a philosophical tradition, however, in which tacit knowledge is understood to be a substantive notion in the (potential) absence of explicit propositional states, i.e., a theoretical attribution of propositional knowledge to a speaker/hearer does not entail the speaker/hearer explicitly (internally) tokening ‘sentential’ states with such propositional content (Evans, 1981; Davies, 1986, 1989; Peacocke, 1989). Does this difference make a difference to the preceding arguments?
Beginning with Evans, 1981, the motivation for this sort of account arises from a concern over our putative knowledge of the axioms of a semantic theory for our respective languages. The axioms express propositions, but the speaker/hearer cannot be said to token such propositions under an attitude; for Evans, genuine propositional attitudes must be inferentially promiscuous, at the service of the speaker/hearer for general reasoning implicated in a myriad of projects, rather than harnessed to particular capacities (for discussion, see, e.g., Stich, 1978; Davies, 1989; Knowles, 2000). The details of Evans’s resolution of this quandary need not detain us. The upshot is that the ascription of propositional knowledge is justified to the extent that the inferential role of the axioms in delivering truth conditional statements the speaker/hearer is disposed to accept maps onto the causal roles of sub-doxastic states that enter into the causal explanation of the speaker/hearer’s disposition to assent to the truth conditional statements mandated by the theory. A more precise formulation, which does without dispositions, is articulated in Davies’s, 1986, mirror constraint, where the deductive structure of the theory known - a set of propositions - is mirrored in the causal structure of non-propositional states that explains the speaker’s judgements that the theory mandates. In essence, this kind of approach is transposed to linguistic theory by Peacocke, 1989, and supported by Davies, 1989. There are differences between the two accounts offered, but for our purposes, they may be subsumed under a mapping condition:
(MC) A speaker/hearer tacitly knows the axioms, A1,…An, and rules, R1,…Rn, of a grammar G (G is ‘psychologically real’) if, for any p such that G ⊢ p, the explanation of p holding of the speaker/hearer’s language essentially appeals to a causal process that may be factored into a set of states which respectively map onto the axioms/rules of G from which p follows.
In essence, MC says that the technology of a grammar can be said to be known by a speaker/hearer because the propositional units of the grammar that are common factors in derivations map onto states that are the common factors in the causal explanation of the speaker/hearer’s judgements that are mandated by the grammar (cf. Davies, 1989, p.132). My concern here is not with the precise formulation of some such condition, nor even with the general notion that propositional attribution may be legitimate in the absence of propositionally structured internal states. I do think, for example, that Peacocke, 1989, pp.122-4, is right to suggest that a non-sentential ‘connectionist’ architecture is perfectly consistent with the ‘psychological reality’ (as understood via MC) of generative grammars (cf. Davies, 1989, p.151; George, 1989, p.102). My only concern is whether the advertised model provides a substantiation of the locution knowledge of language that has any salience for linguistic theory. I think it does not.
In broadest terms, the moral of my position is that knowledge of language is innocent, independent of any connotation of consciousness or justification, so long as we are immune to any serious methodological/psychological split, under which we are obliged to instantiate, in some sense, the formula R[S, T(L)]. Here, ‘S’ is the speaker/hearer, ‘T(L)’ is a theory of a language L as determined by general linguistic theory, and ‘R’ is a (non-stipulated/substantial/external) relation between S and T about which we require some psychological theory. Fodor approaches the formula in straightforward realist terms: R is knowledge which involves the representation of T such that S realises a state that is true of L. Our problem with Fodor has not so much been his particular brand of intentional/representational realism, but the very structure of the formula which is instantiated. According to internalism, there is no relevant relation R because there is no L which we require a theory of, and so no substantive relation which may hold between it and the speaker/hearer. The formula which is our target might be expressed simply and trivially as S(I), where I is an internal state of speaker/hearer S. We say that the states are I-languages and to know a language is to be in such a state. The speaker/hearer stands in no ‘internalised’ relation to our theories, for our theories are just of the internal states, not an independently specifiable ‘object’ the states represent as the content of linguistic knowledge (cf., Chomsky, 1975, chp.1; 2000c, pp.19-20).
Well, the problem with accounts of tacit knowledge, as the notion has been construed, is that they retain as their target the formula R[S, T(L)] (e.g., see Peacocke, 1989, p.114; Davies, 1989, p.133). Under such theories, R is not directly propositional; rather, R is constituted by a complex of causal roles whose nodes or states map onto (‘realise’) the contents of primitive ‘axioms’ and ‘rules’ of T(L). We have already seen at length that the states of the faculty do not enter into the causal nexus leading to linguistic acts. The fundamental problem with the tacit approach, though, is not its causal assumptions in particular, but the very idea of accounting for the chimerical R between speaker/hearers and propositional contents.
The continuing echo of the split approach of Chomsky’s early work, which persisted in the nomenclature up to the development of P&P, makes the point here difficult to appreciate. Once we properly acknowledge the internalist/intensionalist framework, however, the fog clears. First off, generative grammars (I-languages) are not deductive theories, such as have been proposed for semantics, nor, as Peacocke, 1989, p.114, suggests, do they “correspond” to them; there are no axioms or particular rules (we may exclude general operations such as Move/Affect α or Merge) to be mapped onto internal states. Of course, as Peacocke, ibid., p.117, also remarks, MC can potentially be reformulated to cater for distinct conceptions of a grammar. The crucial point, however, is that there simply are no propositional informational units which linguistic theory attributes to a speaker/hearer, and so there are no particular states to be discriminated (propositional or not) which express such information qua a correspondence between a causal and deductive factorisation.
Consider the principles of binding theory as classically laid out in Chomsky, 1981.
(A) A reflexive is bound in D.
(B) A pronoun is free in D.
(C) An R-expression is always free.
(‘D’ is a dummy label for a phrasal domain in which (at least) the respective nominal occurs; the nature of the domain is disputed). It seems that a theory which trades in A-C attributes knowledge of A-C to the competent speaker/hearer as part of its explanation of her competence with nominals. Just so, following MC, it seems that we should say that a speaker/hearer tacitly knows-that A-C just if there is a causal process that (i) essentially involves states that (non-propositionally) encode A-C and (ii) eventuates in a speaker/hearer’s judgements which involve the interpretation of structures featuring binding relations that adhere to A-C. Again, precisely how this might be made clear need not detain us, for the very idea, independent of its causal presumptions, is mistaken, a misunderstanding of the principles A-C.
That A-C are stated as atomised informational units - propositions, things knowable - is an artefact of our theorisation; it is not part of the meta-theory that the speaker/hearer has internalised such units. The theory is our account of the faculty, but the faculty is not the theory put inside the head (as propositions or not), as it were. A-C are internal conditions which the computation of the faculty meets, but there is no presumption that there are any states at all which are specifiable by the principles, still less that some such states enter into causal roles eventuating in explicit judgements. So, for example, the common idea during the 1980s was that A-C hold of S-structure and LF, where this just meant that legitimate ‘objects’ at these levels meet the conditions specified by A-C; if they didn’t, they wouldn’t be at that level. (The principles act as filters on otherwise acceptable structures (at D-structure).) The essential idea here is much more transparent within the minimalist program, which may be properly seen as a logical progression of, rather than an alternative to, the earlier framework. Levels (essentially, just LF, since PF is just features with no syntactic structure) must meet certain conditions if the computation is not to crash (is not to produce illegible objects at the interface), but the conditions can be inherently realised in the nature of the computation without there being any independently specifiable state corresponding to any condition which is met (see, e.g., Hornstein, 2001, Kayne, 2002, and Zwart, 2002). In effect, A-C are descriptive of an intensional computation; it is redundant to think of them as independent items of knowledge reflected in states that uniformly enter into certain causal processes.
Such considerations may be deployed generally over the technology of recent linguistic theory: X-bar theory, ECP, Subjacency, relativised minimality, minimal link condition, head movement constraint, and so on and on. Further, this reasoning holds equally of both UG and particular I-languages. Peacocke, 1989, pp.126-8, suggests that there is a difference: with the acquired state (I-language), knowledge of a condition is legitimately attributable merely if the given condition obtains (i.e., it need not correspond to some specifiable sub-state), but with the initial state (UG), the conditions are “second order informational states” ‘about’ the conditions which hold for acquired languages. Fodor, 1981b, p.258, appears to express the same thought with his claim that speaker/hearers “believe” that “transformations must apply in cycles” (clearly, no particular grammar could state/encode such a claim). This kind of construal depicts UG as a set of meta-instructions on how to form a grammar, which may give way once a grammar is formed that cleaves to the instructions. This is a mistake. UG has long been theorised as a schema, a polymorphic state, whose end-states are parameterised along just a few dimensions, perhaps just the morphological realisation of functional categories as proposed under minimalism. UG is not about particular languages. To see the point here, consider the projection principle (PP), one of Peacocke’s own examples to illustrate his ‘meta-information’ hypothesis. As initially proposed (Chomsky, 1981, pp.29-32), PP says that syntactic structure at all levels is projected from lexical items as a reflection of their selection features. According to Peacocke, then, UG contains meta-information about the form of any particular language in the guise of PP. This is nonsense. PP is our theoretical statement about the relationship between lexical items and syntactic levels as realised in the end-states of the faculty; PP is nowhere encoded.
This is trivial to see. In effect, PP says that if lexical items inherently contain their selection features (as they must, due to idiosyncrasy), then it is redundant for that information to be duplicated in phrasal rules, for any structure that does not respect an item’s selection frame will be illegitimate at any level (i.e., contravene principles of binding, case and θ-role assignment); thus, the system doesn’t require a principle to encode this (e.g., one doesn’t need a rule to tell one that Bill is probable to leave is ill-formed, if probable independently carries the information that it only takes finite complements). Ironically, to construe PP as meta-information is to court the very redundancy PP sweeps away: UG keeps to PP as an effect of the relationship between the lexicon and licensing conditions at the syntactic levels.
Similar reasoning applies to Fodor’s example. The classic idea of Aspects (1965) is that recursive base rules defined over ‘S’ (sentence/clause) allow for non-transformational clausal embedding. This permits an economising of the transformational component that may now just consist of independently required cyclical singular transformations, rather than such transformations and (unordered) generalized transformations that target two or more base generated mono-clausal structures. Thus, on the Aspects model, it would be wholly redundant for UG to encode the meta-information that all transformations apply cyclically; the ‘information’ follows directly from the applicability of singular transformations to any multi-clause base generated D-structure (see Chomsky, 1965, pp.132-6).
In sum, the failure of the tacit knowledge model is its very presumption that the target of explanation is how we may properly ascribe propositional knowledge to a speaker/hearer, whether or not this involves ascribing explicit internal propositional states. This endeavour is no part of linguistic meta-theory: a linguistic theory is an account of the (defining) conditions an internally realised computation meets. We state the theory propositionally, but we don’t attribute the propositions to the speaker/hearer. There is no methodological/psychological split, with ‘what is known’ as a specifiable set of propositions (a theory/grammar) and an account of a relation R which holds between speaker/hearer and theory.
The language faculty is an abstractly specified computational (= intensional function) system of the mind/brain. It is abstract because, while realised in the brain and having no independent existence, it is individuated in terms of sui generis concepts particular to the domain of the computation’s outputs. This characterisation renders the faculty neither as a set of propositions to be known, nor as a mechanism, a part of a causal nexus.
 Ultimately, the point here is simply that modularity is not the same thing as innateness (cf. Khalidi, 2001, Garfield, et al., 2001).
 Fodor has consistently held the view that there is a parser and that it has an encapsulated representation of a grammar. A “representation” of a language is “part of a sentence encoding-decoding system” (Fodor, et al., 1974, p.370); “the production/perception of speech [is] causally mediated by the grammar that the speaker/hearer learns” (Fodor, 1981a, p.201); language is a “psychological mechanism that can be plausibly thought of as functioning to provide information about the distal environment in a format appropriate for central processing” (Fodor, 1983, p.44; cf. p.135, n.28); and “the domains of perceptual modules (like the language processor) can be detected psychophysically… Modules (especially Chomskian modules) are inter alia innate databases” (Fodor, 2000, p.78/96). Fodor, at least early on, acknowledged that this approach was not Chomsky’s: “I shall… propose a somewhat eccentric way of reading the linguistics and psycholinguistics that developed out of Syntactic Structures… this work is best viewed as contributing to a theory of verbal communication” (Fodor, 1975, p.103). Fodor’s (p.c.) general argument for this position is that parsing requires a rich grammatical description of the signal. Thus, unless the mind/brain represents the same grammar twice, it appears that the language faculty is a database for a parser module. Chomsky is certainly at pains to distance himself from such a view. See §5.
 The levels of PF and LF are not essential. In recent work, Chomsky, 2001a, 2001b, among others, raises doubts whether it is appropriate to think that there are any levels of representation. For present purposes, nothing at all hangs on this recent development (but see Collins, 2003). I thus shall keep with ‘PF/LF’ for convenience.
 Chomsky has been consistent in using non-epistemic terms, such as ‘device’, ‘mechanism’, ‘brain area’, etc., to pick out the language faculty. See, e.g., Chomsky, 1965, pp.53-6; 1980, p.28; 1986, pp.12-3; 1994, pp.153-4; 2000a, pp.4/117-8; 2001b, p.1. For the explicit rejection that the language faculty is a Fodorian module, see Chomsky, 2000c, p.20; 2000d, p.140n2.
 For new thinking on this familiar model of UG, see Yang, 2002.
 In particular, the conditions do not have to be ‘linguistic’ in the sense of only being specifiable in terms of noun or subject or some such. Indeed, such conditions would amount to a departure from principled explanation (Chomsky, 2001b). In line with this thought, Hauser, Chomsky and Fitch, 2002, speculate that the faculty proper might be unadorned recursion.
 This is not to say that a theory-like competence explains creativity; it is only to say that the fact of creativity can be seen to be partly realised by such a competence - an infinitely generative system - whereas the fact appears to be flatly inconsistent with the notion that language is a form of learnt behaviour, a habit.
 Some eight years later, Chomsky, 1967, expressed regret that his positive cognitive proposal was “apologetic and hesitant”, not, of course, that the ’59 review was a failure for not properly supporting the innateness of language - as if.
 Consider: “I do not know why I never realized [it] before, but it seems obvious, when you think about it, that the notion of language is a much more abstract notion than the notion of grammar. The reason is that grammars have to have a real existence, that is, there is something in your brain that corresponds to the grammar… But there is nothing in the real world corresponding to language” (Chomsky, 1982, p.107).
 In effect, the proposal here is precisely that we should “become confused about linguistics” in the sense of George, 1989, 1990. Somewhat like Fodor, 1983, 2000, and Knowles, 2000, George’s argument turns on the spurious exclusive disjunction ‘mechanism or propositional knowledge’, or, perhaps better, ‘representation or represented’: since the faculty is not a representational mechanism, it must be that which is represented.
 E.g., Chomsky, 1968/72, p.191, writes: “In general, it does not seem to me true that the concepts “knowing how” and “knowing that” constitute exhaustive categories for the analysis of knowledge”; and one would “find it difficult to understand” linguistic nativism if one keeps to the distinction.
 Knowles, 2000, pp.325-6, takes ‘cognize’ to differ from ‘knowledge’ just in respect of conscious access. His thought is wholly based on Chomsky’s, 1980, p.70, remark that if we were to be conscious of the faculty, then we would not hesitate to say that the conscious state constitutes what we know. But it patently doesn’t follow that what would be so known would be propositional (consciousness is not typically propositional), still less that what is in fact ‘known’ is propositional. Chomsky’s simple point is that conscious access is an orthogonal issue.
 By Kayne’s, 1994, influential linear correspondence axiom (LCA), the precedence relation of PF is determined by a (hierarchical, antisymmetrical) c-command relation within the syntax, which remains non-spelt-out to meet semantic conditions at LF. Some such arrangement is widely assumed, although we need not be committed to LCA as an axiom - its content can be deduced.
 The first feature reflects the fact that structures generatable by the language faculty are not necessarily useable, parsable; e.g., The boat the sailor the dog bit built sank, appears to be gibberish, yet the language faculty determines its interpretability just as it does that of The boat that the sailor that the dog bit, built, sank. The second feature is exemplified by ‘illusions’ such as No head injury is too trivial to ignore, which is taken to express the perfectly legitimate meaning, No matter how trivial a head injury is, it should not be ignored. The language faculty, however, determines the sentence to be interpretable as No matter how trivial a head injury is, it should be ignored.
 Higginbotham, 1987, like many others after him, insisted that the language faculty is to be understood as represented by the mind. The central part of what Higginbotham means here, however, is simply that the language faculty is not a performance system such as a parser (cf. George, 1989; Knowles, 2000). Higginbotham (e.g., 2001) has since entertained a sort of Platonism about syntactic and semantic structures, where it is the job of psychology to discover how a given speaker/hearer may represent (or realise) such structures. Such Platonism appears to be the kind of reflex mentioned in §3: representations require a represented. Yet, as Higginbotham, 1991, p.556, himself, in an earlier incarnation, put it, the represented is a mere “platonic shadow”, which it is “pedantic” to distinguish. Higginbotham has been concerned with the extent to which the shadow may be more substantial than a conceptual reflex. Internalism, we may say, rules against all species of the representation of the external, even the notional external (cf. Chomsky, 2000a, p.73).
 The issues raised in the preceding paragraphs relate to the question of the possibility of error: Can a speaker/hearer go wrong in her linguistic judgements, if the basis of how it seems to her is just how she represents her grammar to be (as ‘internalized’)? See George, 1990; Smith, 1998, 2001; Higginbotham, 1991, 1998; and Barber, 2001. Let it suffice to say that the substantive issue here appears to be independent of the question of whether the faculty is propositional or not. Let us assume, as the problem presupposes, that our propositional judgements partially track the structure of the faculty (under some construal or other), but it wouldn’t follow that the tracked is propositional, only that the tracked contributes, perhaps quite opaquely, to the determination of the kind of judgements available to us. Otherwise put, there is an issue about the relation between our judgements and the faculty, but there is no ‘problem of error’, for the relation is not epistemic.
 Suffice it to say, by questioning Fodor’s argument, I don’t intend to endorse the Quine-style argument, which has been variously entertained by Stich, 1972, Soames, 1984, and Devitt and Sterelny, 1987, 1989. The Quine argument, and its progeny, are beyond the pale, so much so that no positive thesis is lent any weight by their trivial refutation. See §7.
 Fodor’s argument appears to be a reading of §6 of chapter 1 of Chomsky, 1965; §II of Chomsky, 1964, might also be relevant. Fodor’s curious use of ‘true of’, however, might hold the key. See n. 19.
 Fodor’s use of ‘true of’ is very curious. It derives from Stich, 1972, as far as I can see, who uses it to suggest that a grammar carries no implication about internal structure. As Chomsky and Katz, 1974, make clear in their response to Stich, the use of the ‘true of’ idiom, so construed, amounts to a mere stipulation that the linguist should restrict his interest so as to exclude developmental and psychological issues; the notion of a grammar G being true of a speaker/hearer has no positive role to play in linguistic theory; a fortiori, it is no part of an argument in defence of a representation/represented distinction.
 Fodor (p.c.) reads ‘I-language’ as idiolectal. Notwithstanding any obscurity which may beset the notion, an I-language cannot be an idiolect, for although individual, an idiolect is not an internal state.
 Contrary to the impression of much pre-standard theory, Chomsky, 1964, p.53, n.4; 1955/1975, p.5/53, n.75, has made it clear that he has never seen weak generative capacity as intrinsically interesting. Whatever the case might be, once the distinction between grammars and languages dissolves, it is difficult to make sense of either weak or strong generative capacity.
 My thanks go to Noam Chomsky, Jerry Fodor, Jonathan Knowles, Guy Longworth, and an anonymous referee for many helpful suggestions and clarifications.
Barber, A. 2001: Idiolectal error. Mind and Language, 16: 263-283.
Chomsky, N. 1955/75: The Logical Structure of Linguistic Theory. Chicago: University of Chicago Press.
Chomsky, N. 1957: Syntactic Structures. The Hague: Mouton.
Chomsky, N. 1959: Review of B. F. Skinner’s Verbal Behaviour. Language, 35: 26-
58. References to the reprint in J. Fodor and J. Katz (eds.), The Structure of
Language: Readings in the Philosophy of Language (pp.547-578). Englewood Cliffs: Prentice-Hall.
Chomsky, N. 1962: Explanatory models in linguistics. In E. Nagel, P. Suppes, and A.
Tarski (eds.), Logic, Methodology and Philosophy of Science (pp.528-550).
Stanford: Stanford University Press.
Chomsky, N. 1964: Current issues in linguistic theory. In J. Fodor and J. Katz (eds.),
The Structure of Language: Readings in the Philosophy of Language (pp.50-118).
Englewood Cliffs: Prentice-Hall.
Chomsky, N. 1965: Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, N. 1967: Preface to "A Review of B. F. Skinner's Verbal Behavior". In L.
A. Jakobovits and M. S. Miron (eds.), Readings in the Psychology of Language
(pp.142-143). Englewood Cliffs: Prentice-Hall.
Chomsky, N. 1968/72: Language and Mind (enlarged edition). New York: Harcourt Brace Jovanovich.
Chomsky, N. 1975: Reflections on Language. London: Fontana.
Chomsky, N. 1980: Rules and Representations. New York: Columbia University Press.
Chomsky, N. 1981: Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, N. 1982: The Generative Enterprise: A Discussion with Riny Huybregts
and Henk van Riemsdijk. Dordrecht: Foris.
Chomsky, N. 1986: Knowledge of Language: Its Nature, Origin, and Use. Westport: Praeger.
Chomsky, N. 1991: Linguistics and adjacent fields: a personal view. In A. Kasher
(ed.), The Chomskyan Turn (pp.3-25). Oxford: Blackwell.
Chomsky, N. 1994: Chomsky, Noam. In S. Guttenplan (ed.), A Companion to the
Philosophy of Mind (pp.153-167). Oxford: Blackwell.
Chomsky, N. 1995: The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, N. 1996: Powers and Prospects: Reflections on Human Nature and the
Social Order. London: Pluto Press.
Chomsky, N. 2000a: New Horizons in the Study of Language and Mind. Cambridge:
Cambridge University Press.
Chomsky, N. 2000b: The Architecture of Language. New Delhi: Oxford University Press.
Chomsky, N. 2000c: Linguistics and brain science. In A. Marantz, Y. Miyashita, and
W. O’Neil (eds.), Image, Language, Brain (pp.13-28). Cambridge, MA: MIT Press.
Chomsky, N. 2000d: Minimalist inquiries: the framework. In R. Martin, D. Michaels,
and J. Uriagereka (eds.), Step by Step: Essays on Minimalist Syntax in Honor of
Howard Lasnik (pp.89-155). Cambridge, MA: MIT Press.
Chomsky, N. 2001a: Derivation by phase. In M. Kenstowicz (ed.), Ken Hale: A Life
in Language (pp.1-52). Cambridge, MA: MIT Press.
Chomsky, N. 2001b: Beyond explanatory adequacy. MIT Occasional Papers in
Linguistics, No.20, pp.1-28.
Chomsky, N. 2002: An interview on minimalism. In On Nature and Language (pp.92-
161). Cambridge: Cambridge University Press.
Chomsky, N. and Katz, J. 1974: What the linguist is talking about. Journal of
Philosophy 71: 347-367.
Collins, J. 2003: Expressions, sentences, propositions. Erkenntnis, 59: 233-262.
Collins, J. forthcoming: Nativism: substantial vs. deflationary approaches.
Cowie, F. 1999: What’s Within? Nativism Reconsidered. Oxford: Oxford University Press.
Davies, M. 1986: Tacit knowledge and the structure of thought and language. In C.
Travis (ed.), Meaning and Interpretation (pp.127-158). Oxford: Blackwell.
Davies, M. 1989: Tacit knowledge and subdoxastic states. In A. George (ed.),
Reflections on Chomsky (pp. 131-152). Oxford: Blackwell.
Devitt, M. and Sterelny, K. 1987: Language and Reality. Cambridge, MA: MIT Press.
Devitt, M. and Sterelny, K. 1989: Linguistics: what’s wrong with ‘The Right View’.
In J. Tomberlin (ed.), Philosophical Perspectives 3: Philosophy of Mind and
Action Theory (pp. 495-531). Atascadero: Ridgeway.
Evans, G. 1981: Semantic theory and tacit knowledge. In S. Holtzman and C. Leich
(eds.), Wittgenstein: To Follow a Rule (pp. 118-137). London: Routledge.
Fodor, J. 1975: The Language of Thought. Cambridge, MA: MIT Press.
Fodor, J. 1981a: Some notes on what linguistics is about. In N. Block (ed.), Readings
in the Philosophy of Psychology, Vol. II (pp.197-207). Cambridge, MA: Harvard University Press.
Fodor, J. 1981b: On the present status of the innateness controversy. In
Representations: Philosophical Essays on the Foundations of Cognitive Science
(pp.257-316). Cambridge, MA: MIT Press.
Fodor, J. 1983: The Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, J. 2000: The Mind Doesn’t Work That Way: The Scope and Limits of
Computational Psychology. Cambridge, MA: MIT Press.
Fodor, J. 2001: Doing without What’s Within: Fiona Cowie’s critique of nativism.
Mind, 110: 99-148.
Fodor, J., Bever, T., and Garrett, M. 1974: The Psychology of Language: an
Introduction to Psycholinguistics and Generative Grammar. New York: McGraw-Hill.
Garfield, J., Peterson, C., and Perry, T. 2001: Social cognition, language
acquisition and the development of the theory of mind. Mind and Language, 16.
George, A. 1989: How not to become confused about linguistics. In A. George (ed.),
Reflections on Chomsky (pp.90-110). Oxford: Blackwell.
George, A. 1990: Whose language is it anyway? Some notes on idiolects.
Philosophical Quarterly, 40: 275-298.
Hauser, M., Chomsky, N., and Fitch, W. T. 2002: The faculty of language: what is it,
who has it, and how did it evolve? Science 298: 1569-1579.
Higginbotham, J. 1987: The autonomy of syntax and semantics. In J. Garfield
(ed.), Modularity in Knowledge Representation and Natural Language
Understanding (pp.119-131). Cambridge, MA: MIT Press.
Higginbotham, J. 1991: Remarks on the metaphysics of linguistics. Linguistics and
Philosophy, 14: 555-566.
Higginbotham, J. 1998: On knowing one’s own language. In C. Wright, B. C. Smith,
and C. McDonald (eds.), Knowing Our Own Minds (pp.429-441). Oxford:
Oxford University Press.
Higginbotham, J. 2001: On referential semantics and cognitive science. In J.
Branquinho (ed.), The Foundations of Cognitive Science (pp.145-156). Oxford:
Oxford University Press.
Hornstein, N. 2001: Move! A Minimalist Theory of Construal. Oxford: Blackwell.
Jackendoff, R. 1993: Patterns in Mind: Language and Human Nature. Harvester Wheatsheaf.
Jackendoff, R. 2002: Foundations of Language: Brain, Meaning, Grammar,
Evolution. Oxford: Oxford University Press.
Katz, J. 1966: The Philosophy of Language. New York: Harper and Row.
Kayne, R. 1994: The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Kayne, R. 2002: Pronouns and their antecedents. In S. Epstein and T. D. Seely (eds.),
Derivation and Explanation in the Minimalist Program (pp.133-166). Oxford: Blackwell.
Khalidi, M. A. 2001: Innateness and domain specificity. Philosophical Studies, 105:
Knowles, J. 2000: Knowledge of grammar as a propositional attitude. Philosophical
Psychology, 13: 325-353.
Matthews, R. 1991: Psychological reality of grammars. In A. Kasher (ed.), The
Chomskyan Turn (pp.182-199). Oxford: Blackwell.
Prinz, J. 2002: Furnishing the Mind: Concepts and Their Perceptual Basis.
Cambridge, MA: MIT Press.
Quine, W. V. O. 1969: Linguistics and philosophy. In S. Hook (ed.), Language and
Philosophy (pp.95-98). New York: NYU Press.
Quine, W. V. O. 1972: Methodological reflections on current linguistic theory. In D.
Davidson and G. Harman (eds.), Semantics of Natural Language (pp.442-454). Dordrecht: Reidel.
Samuels, R. 2002: Nativism in cognitive science. Mind and Language, 17: 233-265.
Segal, G. 1996: The modularity of theory of mind. In P. Carruthers and P. Smith
(eds.), Theories of Theories of Mind (pp.141-157). Cambridge: Cambridge University Press.
Smith, B. C. 1998: On knowing one’s own language. In C. Wright, B. C. Smith, and
C. McDonald (eds.), Knowing Our Own Minds (pp.391-428). Oxford: Oxford University Press.
Smith, B. C. 2001: Idiolects and understanding: comments on Barber. Mind and
Language, 16: 284-289.
Smith, N. 1999: Chomsky: Ideas and Ideals. Cambridge: Cambridge University Press.
Soames, S. 1984: Linguistics and psychology. Linguistics and Philosophy, 7: 155-179.
Stemmer, B. 1999: An on-line interview with Noam Chomsky: on the nature of
pragmatics and related issues. Brain and Language, 68: 393-401.
Stich, S. 1972: Grammars, psychology, and indeterminacy. Journal of Philosophy, 69: 799-818.
Stich, S. 1978: Beliefs and subdoxastic states. Philosophy of Science, 45: 499-518.
Weinberg, A. 1999: A minimalist theory of human sentence processing. In S. Epstein
and N. Hornstein (eds.), Working Minimalism (pp.283-315). Cambridge, MA: MIT Press.
Yang, C. 2002: Natural Language and Language Learning. Oxford: Oxford University Press.
Zwart, J-W. 2002: Issues relating to a derivational theory of binding. In S. Epstein and
T. D. Seely (eds.), Derivation and Explanation in the Minimalist Program
(pp.269-304). Oxford: Blackwell.