On the Input Problem for Massive Modularity

Abstract

Jerry Fodor argues that the massive modularity thesis - the claim that (human) cognition is wholly served by domain specific, autonomous computational devices, i.e., modules - is a priori incoherent, self-defeating. The thesis suffers from what Fodor dubs the ‘input problem’: the function of a given module (proprietarily understood) in a wholly modular system presupposes non-modular processes. It will be argued that massive modularity suffers from no such a priori problem. Fodor, however, also offers what he describes as a “really real” input problem (i.e., an empirical one). It will be suggested that this problem is real enough, but it does not selectively strike down massive modularity - it is a problem for everyone.

Keywords: Fodor, language faculty, input problem, massive modularity, theory of mind, Sperber.

 

1: Introduction

A prevailing hypothesis in contemporary cognitive science is that the mind is legion: the shape of human cognition is not the product of general computational procedures that freely apply to many different sources of information; rather, cognition in toto is served by a host of autonomous, innately structured, domain-specific computational components or ‘modules’. Jerry Fodor, 2000, after others (e.g., Samuels, 1998), dubs this position massive modularity and claims that it is a priori incoherent.[1] Fodor does not insist that there are general computational procedures; indeed, he doubts whether computations, qua syntactic processes, can be general in the sense denied by massive modularists. His claim is only that modularity cannot be the whole story, notwithstanding our current inability to provide a coherent, let alone tenable, alternative. The heart of Fodor’s argument is the input problem. The problem takes the form of a dilemma under which massive modularity either self-defeatingly concedes the necessity for non-modular processes or else steps onto a regress leading to an infinity of modules. Since modules are hypothesised to be real components of the finite mind/brain, this just means that cognition cannot be wholly modular. Fodor draws the moral that we should try to make progress where we can, while realising that, without new conceptual insights, the structure of most of the cognitive mind remains opaque.

      The brief of the sequel is to argue that there is no a priori input problem for massive modularity. Fodor’s dilemma will be diagnosed as resting upon an unsupported claim about the relationship between the concepts of a given module’s domain (the area it can output answers about) and the representations that may potentially excite or activate the module. There does remain, however, a genuine - a posteriori, empirical - input problem which Fodor highlights, but this problem does not selectively apply to massive modularity - it is a problem for everyone. Thus, we shall agree with Fodor that there are a number of outstanding deep problems to do with massive modularity; we shall disagree with him in holding that these problems are solely empirical, not a priori.

2: Fodor’s Assumptions

In The Modularity of Mind (1983), Fodor proposed a broad architectural bisection of cognition: on the one hand, ‘peripheral’ input/output cognition, such as olfaction, speech recognition, various visual competencies, motor control, etc., is served by modules; on the other hand, central cognition, essentially, rational belief fixation, is non-modular, much like the holistic confirmation of hypotheses in science. Fodor, 2000, also adopts this general framework.

     In his latest presentation, Fodor characterises modules in terms of an intersection of the properties of domain specificity and informational encapsulation: a module is a set of computational processes that output answers just about a particular domain, be it language, mental states, faces, etc.[2] This is what makes a module domain specific. A module is informationally encapsulated in that the processes that output such domain specific answers are sensitive to just the information or concepts represented in the module’s database and, of course, any input it receives. Thus, modules (typically) do not consult each other, nor receive information from any higher order resources (e.g., general memory); they simply operate automatically and quickly on what they are given as input, and they can be so quick precisely because their processes are defined over their particular databases. This seems to give the right results for many species of ‘peripheral’ cognition such as our super-fast linguistic parsing and our illusion-susceptible visual systems - the Müller-Lyer illusion has its effect even when we know that the lines are equal in length. (One may think of peripheral cognition as that information processing that is causally associated with particular input channels, such as vision, hearing, etc.)[3]

        Fodor also understands modularity to be a synchronic property of the mind/brain in the sense that to claim that a capacity or competence is modular is to make a claim about the (causal) architecture of human cognition rather than a diachronic claim about the development of the capacity or competence. It bears emphasis, though, that synchronic modularity does not amount to the absurd idea that children are born with all capacities intact and on-line. Modules might develop in a variety of ways; Fodor’s claim is merely that ‘modularity’, as he defines it, pertains to mature architecture rather than its development. Of course, the nature of the mature architecture will impose certain constraints on how we may understand its development, but, in principle at least, there are various options available (cf., Fodor, 1992, Segal, 1996, and Scholl and Leslie, 1999.)

     This characterisation of modularity is in fact not shared by some of those who appear to endorse some version of massive modularity. We shall highlight some of the differences in §6. For the purposes of our argument, however, we may follow Fodor, for our claims against his a priori input argument do not rest upon an alternative conception of modularity. However one understands modularity, the input problem need not be cause for concern.

       Fodor’s background theory of mental processes is the computational theory of mind (CTM). According to CTM, mental processes are syntactic/formal computations over a symbolic format or code physically realised in the brain. The good of CTM is that it provides states which face two ways, as it were: on the one hand, the syntax or form of a mental state is constituted by the physical properties (shape, say) of the format in which it is encoded; on the other hand, we know that, while such computations are specifiable non-semantically, they may be designed to preserve semantic properties of truth, reference, etc., just as rational mental processes tend to do. Thus, CTM allows us to understand how mental processes may simultaneously enter into psychological generalisations that both rationalise our actions and delineate their aetiology. Otherwise put, content bearing mental states are subsumed by causal generalisations to the extent that such states are computational ones: syntax interfaces between physical and semantic properties. For Fodor, CTM is simply the only story we have of how intentional states may be causally efficacious.

      Now syntax (as presently being understood) is a local property, i.e., representations R and R* are of the same syntactic type just if they consist of components of the same ‘shape’ in the same arrangement. By CTM, mental processes are computations - formal transformations of symbolic items - just if the contents (information) they preserve across their transformations depend on the representations’ local properties, i.e., their syntactic properties. If content preservation were to depend on the representations’ non-local properties (their relations to all other representations, say, a la some brands of connectionism), then the syntax of R would not tie R’s causal properties to its semantic ones and we would thus lack an explanation of how intentional psychological laws could be instantiated by computational states per CTM.[4]

      CTM and modularity are mutually supporting. Modular processes operate on a restricted database, unaffected by all other information in the system. Thus, if the mind is massively modular, and modules are computational devices, then all computations would be similarly restricted, i.e., all computations would be definable over a fixed range of syntactic properties - those that realise the encapsulated sets of concepts particular to modules. Thus, all computation would be local in the sense required by CTM, which says that all computations are local qua syntactic. If computations were non-local, then causal properties would separate from semantic ones, and we would lack a story of how intentional laws may be physically realised. It is not that CTM entails massive modularity or vice versa; rather, if mental processes are computations, as understood via CTM, then computations are conceptually restricted, just as massive modularity says. So, CTM + modularity is a natural synthesis. But the input problem precisely tells us that when we put the thesis that the mind consists of nothing other than modules together with CTM, then incoherence results.
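
      To fix ideas, here is a minimal sketch of the CTM picture just described; the rule and the toy representations are my own illustrative inventions, not Fodor's example. A rule defined purely over the ‘shape’ of representations can nonetheless preserve a semantic property (truth) across its transformations:

    # Minimal sketch of the CTM idea: a purely formal rule, defined over the
    # 'shape' of representations (here, tuples), that happens to preserve truth
    # without ever consulting what the symbols are about.
    def modus_ponens(rep1, rep2):
        # From ('IF', P, Q) and P, derive Q, by shape alone.
        if isinstance(rep1, tuple) and len(rep1) == 3 and rep1[0] == "IF" and rep1[1] == rep2:
            return rep1[2]
        return None

    # Under interpretation, the derived token is true whenever the inputs are,
    # even though the rule is blind to the semantics.
    print(modus_ponens(("IF", "it_rains", "streets_wet"), "it_rains"))  # -> 'streets_wet'

The rule’s causal profile is fixed by local, syntactic properties alone, yet it respects semantic relations; that is the two-way facing that Fodor’s CTM trades upon.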

 

3: The A Priori Input Problem

In this section, Fodor’s a priori argument (2000, pp.71-5) will be presented. The presentation will be somewhat abstract, as is Fodor’s own, although I hope to clarify issues that are somewhat elliptical in Fodor’s discussion. The dialectical payoff of the temporary abstractness is that, as we move towards putative ‘real’ cases of central modules (e.g., language and theory of mind), we shall see that the abstract model upon which Fodor’s argument rests is highly misleading.

       An initial point bears emphasis. Modules are not necessarily peripheral devices; it is Fodor’s additional conjecture based on his conception of a ‘module’ that they in fact are. Now when Fodor speaks of massive modularity it is crucial to understand that he has in mind what we may call massive Fodorian modularity. That is, for Fodor, the massive modularity hypothesis does not introduce a sense of ‘module’ distinct from that which is appropriate for theorising about input/output cognition; rather, it says, ‘more of the same’.

       It appears that Fodor’s argument against massive modularity was directly inspired by Sperber’s, 1994, argument for massive modularity (what Sperber refers to as the modularity of thought). Sperber, like Fodor, takes input/output cognition to be modular. He also recognises, again with Fodor, that central thought is unlike peripheral cognition; it is characterised by the “free flow of information”. How, then, might a wholly modular architecture support such apparently unencapsulated cognition? Here is Sperber’s proposal (ibid., p.49):

[The freedom of thought] implies that one particular modular picture cannot be right: Imagine a single layer of a few large mutually unconnected modules; then an information [sic] treated by one module won’t find its way to another. If, on the other hand, the output of one conceptual module can serve as input to another one, modules can each be informationally encapsulated while chains of inference can take a conceptual premise from one module to the next and therefore integrate the contribution of each in some final conclusion.

Fodor may be usefully understood as assuming that Sperber is speaking for all, and that massive modularity seeks to extend Fodorian modularity into central thought. The input problem says that this avenue is self-defeating.

      The problem has the form of a dilemma. For vividness, imagine a system S with two initially identifiable modules: M1 is for thinking about squares and M2 is for thinking about triangles, i.e., they output answers just about squares and triangles respectively. M1 “applies to representations of squares but not to representations of triangles” and M2 “applies to representations of triangles but not to representations of squares”, i.e., the modules have exclusive inputs and are encapsulated with respect to each other (Fodor, 2000, pp.71-2). CTM is not in question, so the input representations (Sperber’s premises) proper to each module have, respectively, syntactic (non-semantic) properties P1 and P2 such that, under interpretation, P1 representations are about squares (have squares in their extension) and P2 representations are about triangles (have triangles in their extension). So, what representations might excite a module can be read off the domain of the module.

     Fodor (p.c.) acknowledges that we (potentially) employ a whole battery of concepts to recognise ‘objects’ of a type that fall within the domain of a given module. For example, assume that we have a car module C. One surely employs some representation of the shape of a typical car to recognise cars and so activate C, even though cars come in all shapes and sizes, and the shape representation does not have the set of cars as its extension: a car-shape facade on one’s horizon is not a car. Put baldly, it might be that if we lacked the right shape concepts, we would never get to recognise a car, but when we think about cars we are not merely thinking about shapes (or, generally, the concepts employed in recognition); we could possess the car concept without the shape concept. Let us accept this. The massive modularist must now depict the mind/brain as being organised so that there are further modules which sort between representations to send just the right representations to the right modules so that we get to think about cars rather than boats or bikes when cars are abroad in our environment. Further, per CTM, these representations must be encoded via syntactic properties.

         To return to M1 and M2, since we are assuming massive modularity, it follows that representations with properties P1 and P2, respectively, must be the output representations of some modular process(es), i.e., “P1 and P2 are somehow assigned to representations prior to the activation of M1 and M2” (Fodor, 2000, p.72). There appear to be two possibilities for how this assignment might be realised. On the first, there is one module M+ that takes in “representations at large” and respectively assigns P1 and P2 to just those representations that input to M1 and M2, i.e., those just about squares and triangles respectively. Thus, the domain of M+, in effect, covers both SQUARE and TRIANGLE precisely because it can divide between representations in those terms. The other alternative is that there are two inputting modules, M1+ and M2+, the former assigning P1 representations to M1 and the latter assigning P2 representations to M2. Neither option is available.

         Under the first option, M+ is less domain specific than either M1 or M2, for it sorts between all representations, including both ones about squares and ones about triangles, i.e., M+ selects between “representations at large” such that it assigns P1 to its inputs to M1 and P2 to its inputs to M2. Why is this a problem? Well, a module is posited to answer all and only questions about a particular domain - modular cognition is exclusive. A non-exclusive module is one which is not domain specific relative to the mind at large, which just means that it is not a (Fodorian) module: the domain of M+ is at least as inclusive as the union of the domains of M1 and M2. Hence, M1 and M2 are not domain specific with respect to M+. The argument, of course, applies equally if we were to posit a hundred central modules: there would be an M+ whose domain is inclusive of the domains of all modules to which it inputs. In short, under the first option, any central module presupposes a lack of domain-specificity within the mind, which just means that the mind is not wholly modular, for modules by definition are domain specific.

          This exclusivity problem appears not to arise under the second option, where each of our two modules has its proprietary inputting module, M1+ and M2+ respectively. But our initial question iterates: Is there a single module - M++ - that assigns P1 and P2 representations to M1+ and M2+, respectively, or does each have its own proprietary inputting module? Note, the urgency here does not rest on the thought that the same concepts - represented by P1 and P2 - that excite M1 and M2 are involved in the excitation of M1+ and M2+. The point is that, say, “P1 representation of square” is extensional for the position of ‘square’ (Fodor, 2000, p.115, n.16). So, while M1 is activated by P1, M1+ might be activated by some distinct (formal) representation - P1+, say. The crucial factor is that M++ must sort between representations so that the conceptual/domain difference between M1+ and M2+ is respected, which in turn means the difference between M1 and M2 is also respected by the sorting of M++. So, it is not that only SQUARE could turn on M1+, say, but that whatever does excite the module must be filtered so that the representations are about squares as opposed to triangles or anything else. This makes perfect sense, for the posited job of M1+ and M2+ is to ‘ask questions’ of a module which require SQUARE and TRIANGLE answers, as it were. So, the question iterates: What activates M1+ and M2+ such that they may have such outputs? On the option being considered, there is a single further module - M++ - whose outputs excite both M1+ and M2+. But this just means that M++ must sort between “representations at large” to make a selection which respects the difference between squares and triangles. Thus, we land again in the exclusivity problem, where a single module M++ is “at a minimum” less domain specific than either M1+ or M2+, a fortiori, less domain specific than either M1 or M2. The very recourse to the second option collapses onto the first. If, on the other hand, M1+ and M2+ have proprietary inputting modules, then we ask the same question again: Do they get their inputs from proprietary modules or from a single module? Infinite regress beckons.

        Fodor (p.c.) says that one way to think of the argument is to view modules as filters which act as sieves for ‘representations at large’ so that the right information goes to the right module. But the very notion of a filter implies processes which apply to a less exclusive set of representations than pass through it: a filter sorts between whatever passes through it; a filter whose output was the same as its input just wouldn’t be a filter. So, the cycle of the input argument is that to keep to equal domain specificity for each central module (i.e., to eschew a module that subsumes “representations at large”), one requires a further module that selectively inputs to it. But then we are pushed further back to ask how these modules are selectively excited. In sum: given a finite mind, “each modular computational mechanism presupposes computational mechanisms less modular than itself [read: some filter], so there’s a sense in which the idea of a massively modular architecture is self-defeating” (Fodor, 2000, p.73).
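
      The shape of the dilemma can be made vivid with a toy sketch. Nothing here is Fodor’s own formalism; the names (Rep, make_module, m_plus) and the tagging scheme are illustrative assumptions. The point is just that, on the filter model, routing ‘representations at large’ to M1 and M2 requires a device whose domain spans both of theirs:

    # Toy sketch of the dilemma; the names and tagging scheme are illustrative
    # assumptions, not Fodor's own formulation.
    from dataclasses import dataclass

    @dataclass
    class Rep:
        tag: str      # stand-in for a syntactic property (P1, P2, ...)
        content: str  # what the representation is 'about', under interpretation

    def make_module(accepted_tag, answer):
        # A Fodorian module: it outputs answers only about its proprietary domain.
        def module(rep):
            if rep.tag == accepted_tag:
                return answer + "(" + rep.content + ")"
            return None   # not in this module's domain
        return module

    M1 = make_module("P1", "SQUARE-answer")    # thinks just about squares
    M2 = make_module("P2", "TRIANGLE-answer")  # thinks just about triangles

    def m_plus(rep):
        # The problematic M+: to assign P1 and P2 correctly it must already
        # discriminate squares from triangles among representations at large,
        # so its domain includes the union of M1's and M2's domains.
        if "square" in rep.content:
            return M1(Rep("P1", rep.content))
        if "triangle" in rep.content:
            return M2(Rep("P2", rep.content))
        return None

    print(m_plus(Rep(tag="?", content="a red square ahead")))  # SQUARE-answer(...)

Replacing m_plus with a dedicated router for each module only pushes the same question one step upstream; that is the regress.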

         How might one seek to extricate massive modularity from this dilemma? Sperber, who, it was suggested, is Fodor’s prime target, suggests that “the presence of specific concepts in a representation determines what modules will be activated and what inferential processes will take place”; but “they are otherwise blind to the other conceptual properties of the representations they process” (Sperber, 1994, p.49). We shall see that Sperber’s proposal holds the key to dissolving Fodor’s problem. Still, to wear Fodor’s hat pro tem, such disciplined myopia appears not to ameliorate the input problem, but to compound it. The input problem is not about how, say, a SQUARE module might be blind to NON-SQUARE; after all, Fodor, as noted above, readily admits that all sorts of concepts might serve in our recognition of the instances of a given concept’s extension. The problem is how the module may see SQUARE, which it must do if it is to be a SQUARE module. But being the module it is raises the question of how the inputting representations are sorted into ones which would activate the module. If they are sorted into an inclusive module, then equal domain specificity is foregone; if they are not so sorted, then infinite regress beckons.

      Fodor considers an empiricist response. In essence, such a  response is to halt the infinite regress of inputting modules at a sensorium: by definition, under empiricism, a sensorium is less domain specific than anything else in the mind, for it encodes all inputs into a phenomenal vocabulary. Thus, if S has a sensorium, then, by necessity, the sequence of inputting mechanisms begins at that point and there is no infinite regress. So, if S can distinguish between squares and triangles, then S does so via the difference being a sensory one encoded in the sensorium. This proposal, however, is somewhat of a rhetorical reductio. No-one believes that all conceptual distinctions are encoded by phenomenal features; modularists certainly do not understand their architectural commitments to translate into a commitment to empiricism! Thus, the thought that phenomenal reductionism is the only escape route from the input problem amounts to the thought that there is no escape route.[5]

       Let us agree that no massive modularist wants to end up defending phenomenalism. Is there another avenue of escape? Well, why couldn’t a module operate on representations conceptually marked (not in terms of sensory features)? Gigerenzer and Hug, 1992, for example, depict a cheater detection module (CDM) as operating on representations marked as social exchanges, i.e., just those situations where cheating could take place. But, for Fodor, this helps us not a jot. The question naturally arises: What tells the mind that there is a social exchange before it? If, as we are assuming, the mind is massively modular, then there would need to be a module for the detection of social exchanges - something which differentiates representations in terms of social exchange - which in turn activates the CDM. But what activates the ‘social exchange detection module’? Again, we have stepped onto the infinite regress, on pain of admitting a non-modular aspect to cognition which may sort between representations at large.

 

4: Rethinking the A Priori Input Problem

In presenting the input problem, we have followed Fodor and described merely two modules, leaving the rest of the putative mind under the banner of “representations at large”. The input problem arises because a given central module M can only be activated after all other representations have, some way or other, been filtered so that M may be activated by its proprietary representations. The question then arises as to the module that outputs representations which activate M. Unless we posit an endless sequence of modules that output one to another with M as their terminus, we must posit, at some stage, a module that outputs representations to M and also to other modules. That is, there must be some place where representations are sorted or selected into those that do and do not activate given downstream (more central, less peripheral) modules. But this entails that the system will not be uniformly domain specific: there will be a ‘module’ whose domain “at a minimum” is inclusive of the domains of the modules which it activates.

        What is unclear is why we should think that, if there is a central module M, all upstream representations must be differentiated between those that do and those that do not excite it in terms of the concept(s) that cover the domain of M. If we resist this thought, then the input problem is spiked. Further, it seems to me that this thought is one a massive modularist would deny on principle.

       In §3, we entertained Sperber’s, 1994, thought that a module is “blind” to all non-relevant concepts encoded in an input representation. On Fodor’s behalf, we offered the rejoinder that this doesn’t help resolve the input problem because we need to know how a module can see what is proprietary to it. So, if a module is to answer questions about, say, squares - its domain is about squares - then it must be triggered by representations that have squares in their extension; otherwise, the module’s activation would never get to be about squares. Now the problem arises as to how the antecedent representations are sorted such that only the ‘relevant’ ones may trigger the module. The input problem beckons. But this initial characterisation somewhat misinterprets Sperber. Sperber, 1994, understands modules to come in a plethora of species, with domains that “vary in character and size”, the modules being “interconnected in ways that would make an engineer cringe” (ibid., p.46). Also, “the range of stimuli causing [a] module to react will end up being such an awful medley as to” preclude the description of a module’s domain “in terms of a specific category” (ibid., p.53). In other words, there are no independent filters on what gets to excite a module; rather, we should think of a module as self-filtering: if it can make use of an input, something happens (an output is delivered to another system); if it can’t make use of a representation, a computation internal to the module crashes - there’s no output.

       So, on this model, the inter-module connections are “awful” in the sense that there is no prior sense to be made of them: a module isn’t the module it is because of the representations it can receive; it gets its identity from what it does to whatever it does receive, i.e., the kind of answers it offers. The main consequence of this picture is that we cannot stipulate that there must be a relation of (extensional) identity between the concepts that trigger a module and the concepts that feature in the module’s output. The relation between input and output is not a priori underwritten: we have to discover what turns on a module. Of course, a very real problem remains here. But, if we cannot determine a priori what concepts excite a given module (even if we know the module’s domain), then, to keep with the toy example, to identify a square module does not ipso facto involve an identification of all and only those upstream representations about squares. Hence, the a priori input problem does not arise, for we are not inexorably led to posit more and more modules that output representations just proper to the receiving module.
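
      In the same toy terms as before, the self-filtering picture can be sketched as follows; again, the function and feature names are my illustrative assumptions, not Sperber’s own model. The module imposes no prior condition on what reaches it; whether a given input was ‘for’ the module shows up only in whether its internal computation delivers anything:

    # Toy sketch of a self-filtering module; no upstream sorter whose domain
    # must include SQUARE is presupposed.
    def square_module(rep):
        # Outputs answers about squares, but accepts whatever it is handed.
        try:
            side = rep["side_length"]     # whatever representation happens to carry this
            return {"about": "SQUARE", "area": side * side}
        except (KeyError, TypeError):
            return None                   # the computation 'crashes': no output

    # Anything at all may be thrown at the module.
    for rep in [{"side_length": 3}, {"angles": 3}, "ambient noise"]:
        print(square_module(rep))

On this arrangement, which representations in fact excite the module is something to be discovered, not read off the module’s domain in advance.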

         Fodor, it seems, does not so much as consider this sort of manoeuvre; that is, he assumes that a central module can only be excited by representations filtered to it. Why should Fodor take this claim to be unquestionably true?

      It might seem that we have too easily disposed of the input problem - that we have presented a straw man. I think not. The force of the problem for Fodor arises, I suspect, from two related sources. Firstly, Fodor’s version of massive modularity rests upon an essentially perceptual model of a module: just as a perceptual device must sort between sensory inputs to identify language or a face, say, so a putative central module must somehow sort between all the representations it possesses to send the right ‘questions’ to the right downstream modules. Secondly, Fodor distinguishes from the above problem a real, non-a priori input problem, and this problem is serious for everyone. However, the massive modularist is depicted as being especially susceptible to this real problem. It appears, in other words, as if Fodor thinks the real problem just is the ‘philosophical/a priori’ one in the context of massive modularity. It will be suggested below that this is not so. I’ll come to this second problem in the following section; before that, let us pick up on the first issue.

        Perceptual modules are essentially devices of recognition that map proprietary concepts onto ‘stimuli’. But, of course, the mapping is selective. We don’t attempt to parse visual input, and we don’t estimate depth of acoustic input. In other words, perceptual modules have their inputs psychophysically filtered so that the right stimuli go to the right module. This is done by our sense organs (inter alia). The result is that modules’ outputs get to be about the distal properties which caused the stimuli for the rest of the mind. In short, a module whose inputs weren’t filtered wouldn’t be perceptual. Central cognition, of course, is not perceptual or recognitional; Fodor would be the last to think it was. But then the very idea that what works for perception will work for central cognition looks doomed from the off. In other words, we just shouldn’t even entertain the idea that perception-like modules comprise central cognition. That is, we ought not to think of central modules as being in the business of recognising or filtering certain kinds of inputs. Qua central, a module will be potentially open to lots of different kinds of information, but will also be self-filtering in the sense that one may or may not get an answer. So, like perception, a central module is not an open resource, but, unlike perception, its potential inputs are not antecedently filtered.

     One problem that might sway one back to the perceptual model of modularity is that of aboutness. If there isn’t an informational chain running through a series of modules/filters from perception to the centre, then in what sense could we say that a central module’s answers/outputs are about anything? This query, I think, is a red herring.

        The question of aboutness - the intentionality of mental states - is orthogonal to the essentially architectural issue of massive modularity. Indeed, it might be that cognitive science just shouldn’t be in the business of trying to understand intentionality. Following this avenue, however, will take us far from our primary concerns. All I want to show here is that one’s claims about intentionality - pro or con - ought not to constrain one’s claims about modularity.

     In broad terms, it is perfectly consistent with modularity to view the categories the mind employs as not individuated by corresponding properties in the world, but by their role in determining the internal organisation of the mind and how this responds to any input (cf., Jackendoff, 1992, and Chomsky, 2000a.) The supposed extensions of our concepts are thus simply the reflection of internal mental organisation, rather than constraints upon it. Fodor rejects any such cognitive ‘constructionism’, and the present space precludes a proper challenge to his arguments. Let me, then, just motivate this approach with some examples.

      The concepts of the language faculty, for instance, have no extensions at all, or rather, any extensions they have do not enter into the individuation or explanatory role of the concepts. The concept of NOUN (i.e., the feature +N) does not designate any set of acoustic or orthographic patterns. The notion is internally individuated in terms of its role in determining the properties of the language faculty’s interfaces with other mechanisms of the mind. More vividly, the empty categories - e.g., PRO and wh-trace - have no possible extension.[6] It is useful, in fact, to think of all concepts of linguistic theory as being just like empty categories: we posit them in line with structural principles that account for observed linguistic competence. The ‘representations’ of the faculty account, in part, for ‘external’ features, but they do not represent them.[7] The same reasoning applies throughout the mind. For example, BELIEF and FACE are employed to organise our dealings with conspecifics; obviously, we can, with perfect legitimacy, say that people have beliefs and faces just as they utter nouns, but there are no independently realised properties here our minds are constrained to represent. By somewhat speculatively considering such cases, we may give a more substantial answer to the question of how a module is excited.

        If there is a ToM module (ToMM) that mandatorily applies intentional concepts to selective objects or situations, then it surely does not itself pick out those objects qua believers, desirers, etc. from transduced input. But then how might ToMM be activated? Well, perhaps ToMM is linked to a more peripheral face recogniser or an ‘intentionality detector’ based on gaze direction, as well as a parallel language faculty to supply potential content attributions. None of these other systems need supply ToMM with representations filtered so as to encode BELIEF. It seems clear, for instance, that mature face recognition activates the ascription of mental states, but one cannot recognise a belief as such. At other times, of course, ToMM is activated when one is not dealing directly with a conspecific. It would appear that there is nothing in particular that is required to activate ToMM, but ToMM, in the normal mind/brain, nevertheless functions appropriately, issuing ‘answers’ in terms of belief and desire to ‘questions’ posed by the distal scene and distinct internal cogitations.

        On this model, the input to ToMM has not been filtered in terms of BELIEF, DESIRE, etc.; rather, the arrangement supports what seems to be a mandatory effect of human cognition: one understands a thing with a face in terms of beliefs, desires, etc. Of course, lots of things have faces without beliefs, and one could lack a face but still be a believer. However, our minds are organised in such a way that ‘recognition’ of faces default-triggers attribution of mental states. (We may presume that the attribution is blocked when other modules, say an ‘intentionality detector’, are not also triggered.)
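
      A schematic rendering of this speculative arrangement, in the same toy idiom used earlier - the module names and the blocking rule are illustrative assumptions, not established findings - might look as follows:

    # Toy sketch of default triggering with blocking; purely illustrative.
    def tomm(face_detected, intentionality_detected):
        # ToMM: attributes mental states by default when a triggering module fires,
        # unless a parallel module's silence blocks the attribution.
        if face_detected and not intentionality_detected:
            return None                        # attribution blocked
        if face_detected or intentionality_detected:
            return "attribute BELIEF/DESIRE"   # the mandatory-looking default
        return None                            # nothing posed a 'question'

    print(tomm(face_detected=True, intentionality_detected=True))   # attributes
    print(tomm(face_detected=True, intentionality_detected=False))  # blocked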

         This proposal, for sure, is speculative. Two points: Firstly, Fodor is seeking an a priori refutation. Thus, the mere coherence of the proposal suffices to spike Fodor’s a priori input argument. The real issue is empirical: something similar to the proposal might be the case; we have to look and see; an answer is not forthcoming from the armchair, as it were. Secondly, one way to test whether such an arrangement obtains is to see what cognitive dissociations are possible. In this regard, there is some data from Williams syndrome that might support the hypothesis in that we find ToM, language and face recognition spared amidst great cognitive dysfunction. This prima facie supports the view that language and face recognition are necessary conditions for the normal development of ToM (not sufficient, singly or jointly - Autistics typically do have face recognition, but severely degraded ToM) (Segal, 1998, and Author, 2). It has also been proposed that ToM is ontogenetically associated with mechanisms of shared attention and self-movement detection (e.g., Leslie, 1994, and Baron-Cohen, 1995). If we think of these mechanisms as being autonomous in the sense that they can exist (diachronically and synchronically) without a full-blown ToM, then we might well have a picture of a fixed architectural arrangement where no a priori input problem for a central ToMM arises (see Author, 2.) Again, there will probably be pathological cases which demand serious complications to this proposal. At the moment, we are simply depicting the kind of empirical research that might be employed to support the above speculation.

        The proposal about ToM applies mutatis mutandis to language. One’s language faculty cannot, by hypothesis, be turned on by, say, an LF structure, for the properties that make up such a structure are peculiar to the faculty; rather, we may presume, the mind makes an initial segmentation of acoustical properties, with the architecture of the interfaces being such that only such properties are mapped to phonological/morphological properties so that the faculty proper may make something of it.[8] This ‘something’ might be a relevant LF chain which, in turn, constrains the possible interpretations we may map to our interlocutor’s words. Some such arrangement is precisely what Chomsky, 2000a, p.117, means when he says that the language faculty is accessed by performance systems but is not one itself. Here, the input problem appears to be inapplicable: there is a central language faculty that is excited by the representation of features neither proper to it nor filtered in its terms; what features these are is an empirical problem of discovery, not a priori stipulation.

 

5: The Really Real Input Problem

If we are right that massive modularity is not a priori struck down, it by no means follows that the theory is empirically credible.  Fodor writes:

unlike [the a priori input problem], the question about how the mind manages to represent things in ways that determine which modules get excited is not just “philosophical” but really real (Fodor, 2000, p.77).

That is, the a priori problem is not the kind of problem actual cognitive scientists worry about, for they are already working on the assumption that minds can represent their environment in ways which activate the right modules. The really real problem, on the other hand, directly bears on the empirical issue of how the mind can so much as be in a position to entertain the excitatory concepts on the basis of external triggering - the “things” which the mind represents. The problem arises where we have distinctions to which our minds are attuned, such as language or not language (Fodor’s own example), that are not marked, at least not apparently so, by sensory features, i.e., those which may be psychophysically detected by a transducer and fed to a peripheral module. If all concepts involved in linguistic representation were empirical ones, then there would be no problem at all. In point of fact, however, they most surely are not. Thus, what is apparently required is a set of sensory features whose detection alerts the mind that language is going on. Once a language is acquired, the problem has been solved, for thereupon we have in place a lexicon which associates phonological features (or gestures, say, if the language is ASL) with grammatical and semantic ones, with this whole bundle of features reacting to some range of acoustical properties we can produce and detect. We don’t know how this arrangement works; still, it is clear that the lexicon and the principles defined over it are not representations of acoustical properties. At its simplest, then, being able to perceive language is, given transduction, being able to match the perceived features with one’s stored features.[9] What alerts the neonate to attend to language as opposed to any other ambient noise is a much more difficult question. The relevant distal features might be certain prosodic properties, perhaps in the context of being emitted from human faces; then again, probably not: the blind acquire language with the same alacrity as the sighted; mutatis mutandis, the deaf and ASL. Again, however, the child’s mind imposes its own concepts; they are not constructed from the features perceived. This, at least, is something with which one cannot imagine Fodor disagreeing.

            The real input problem, then, is serious indeed, but it is not an a priori problem for any position. Fodor, though, thinks that the real problem is especially troublesome for the massive modularist. While the problem arises for all modules, with perceptual modules (the only kind of modules Fodor thinks there are), the problem is domesticated, as it were, i.e., “it’s in some part because it’s plausible that the domains of perceptual modules (like language processing) can be detected psychophysically that mechanisms of perceptual analysis are prima facie good candidates for modularity” (Fodor, 2000, p.78). So, we know that the (real) input problem has been resolved as regards such processes, even if, as just observed, it is very difficult to identify the sensory features that do the triggering. In other words, we have a real phenomenon, but, as yet, no explanation. This is not so with central modules. Because they are not perceptual, they are not psychophysically triggered, and so they don’t act mandatorily. Thus, they are “not good candidates for modularity” (op. cit.). It seems that some thinking must take place for the identification of the things to which the central modules proprietarily apply, i.e., an inferential process from sensory stimuli to non-phenomenal concepts. The example Fodor, 2000, pp.74-8, jumps up and down on is the putative cheater detection module (CDM). Let us look at this example in a little detail.

        Fodor’s point is simple: What possible sensory cues could turn on a CDM, could let it know that the distal array contains a cheater? How can the real input problem be solved for the CDM? So put, it is obvious that there are no such cues; cheaters don’t come in brown and yellow stripes, as Fodor says with typical vividness. No-one, however, need suppose the contrary; to think there must be sensory cues for cheater detection would just be to think of the CDM as a perceptual module. A more likely understanding of CDM is that it is centrally situated, linked with, say, a ToM module, and receiving input from modules which tell it that it is dealing with conspecifics. In short, the CDM need not induce the presence of potential cheating from sensory cues, but rather from a battery of inputs from other modules, some peripheral, some parallel, whose domains concern conspecifics. Again, Fodor is blind to this option because he insists that the input to any module must be filtered so as to code the concepts proper to it; thus, there must be an input module that detects cheaters (not necessarily by employing CHEATER, but some other concept that has cheaters in its extension). We have, however, seen no reason to share Fodor’s assumption.

         Still, it might be thought, the CDM, qua central, does presuppose processes less modular than itself. That is, where central modules are concerned, the real input problem turns into a variation on the a priori input problem. Fodor does not argue in this way, although such a thought might explain, as we are entertaining, Fodor’s apparent blindness to the solution to the input problem offered above. For example, the thought might be that the mind must first detect and represent, say, conspecifics or social relations (friend, buying-selling, etc.), with these representations being then sorted into those which do and do not involve the likelihood of cheating, with only the former being fed into the CDM. Yet since cheaters just are conspecifics and cheating is precisely a social relation, whatever ‘modular’ processes input to the CDM are less domain specific than CDM itself, which just means that a mind with a CDM is not equally domain-specific, not wholly modular. This is the argument Fodor, 2000, p.75, offers against Gigerenzer and Hug, 1992, who suggest that recognition (or at least priming) of a social situation may trigger the CDM (see above).

       But this argument is no good, or, rather, defenders of the CDM, such as Gigerenzer and Hug, need not presume the existence of modules of the kind the argument supposes. As remarked above, the relevant inputting modules may be ones that detect faces or self-motion, say. Thus, there is no implication that these modules will be less modular than CDM. So, the Gigerenzer and Hug case of social priming doesn’t entail a ‘social’ module. Indeed, the idea that there might be a ‘conspecific module’ or a ‘social situation module’ and a CDM is self-defeating precisely because it is logically impossible to identify a cheater without identifying a conspecific and a social situation, but to identify a face is not to identify a cheater, just as to identify a face is not to identify a belief, or to identify an acoustic sequence is not to identify syntactic structure. It is, per massive modularity, the architecture of the mind that associates faces with believers, sounds with meanings and cheaters with conspecifics; the associations are not independent facts the mind represents.

        All that said, Fodor’s basic point remains: one has to think about cheating, i.e., it is obviously not mandatory for humans to know whether they are being cheated or not - they must perform some inference. The greater the information relevant to one’s thinking about a domain, the less modularised is the database upon which the thinking relies. In the limit case, if global considerations impinge on one’s judgement, then it appears that the processes are not modular at all; nothing is excluded, everything is potentially relevant.

           This concession, however, does not qualify the substantial points made above. Whether this or that competence is modular based is an empirical question; it is not determined merely by our registering a competence as a competence: modular architecture cannot be read off competence identification.[10] If it turns out that some competence is a high-level conscious ability that can freely exploit all information otherwise available to the system, then Fodor is correct: the competence is not modular based. So, there is nothing in the massive modularity hypothesis that insists that because detecting cheaters is a competence, there must be a CDM; it is perfectly consistent for a modularist to reject CDM on straightforward empirical grounds. For example, as Fodor is at pains to stress, our detection of situations involving cheating is hardly mandatory; we need to weigh the evidence. This, though, sharply contrasts with certain features of ToM: even where we would consciously deny mental states to an organism, such as a pet dog, we still unreflectively attribute them. Consider also watching a theatrical performance. We know that X doesn’t want to kill Y because X believes that Y killed X’s father and married X’s mother, but we cannot help but attribute such beliefs to X (Segal, 1996). Fodor’s claims against CDM, then, might well be correct, but the complaint does not generalise without some argument that global reasoning is constitutive of all proposed central modules. As we have just noted, ToMM is a clear case of a central module, along with the language faculty, that does not succumb to the kind of considerations that probably militate against CDM.

 

6: By Way of Conclusion…Thinking and Perceiving (Again)

The foregoing has only dealt with one aspect of Fodor’s rich battery of arguments against massive modularity. The input argument is the most interesting to explore, however, because it brings into greatest relief an assumption of Fodor’s which co-ordinates his other stances on modularity, viz. the clean separation of perceiving and thinking. The issue here is not whether there is such a separation in general, but whether, granting that there is such a separation, it vitiates massive modularity. Fodor thinks it does. We have already seen how this assumption perhaps led to what, on reflection, turned out to be only a fallacious a priori argument. To conclude, we may see how the assumption sends much of cognition into a black hole.

       Fodor, 2000, p.115, n.18, claims that where thinking as opposed to perceiving is at issue, then there just is no evidence for modularity. Yet, as well as ToM and language, many other areas of cognition have been hypothesised to have modular characteristics “to some interesting extent”, which is all Fodor, 1983, p.37, claims for his own modularity hypothesis. Thus, as regards innateness, domain specificity and (to some degree) encapsulation, many competencies have been hypothesised to be modular: mathematics (e.g., Dehaene, 1997); various ‘folk sciences’ (e.g., Keil, 1989); music (e.g., Sloboda, 1985); religion (e.g., Boyer, 1993); empathy/concern (Nichols, 2001); social relations (Gigerenzer, 1997); et al.[11]  The thing to note is that these ‘modules’ putatively support thinking, not perceiving; they are obviously central, i.e., they don’t receive their inputs direct from transduction. Yet what Fodor means by ‘thought’ is essentially the determination of answers to problems that is not mandatory, but is, rather, free and global. The extent to which, then, ToM, language, etc. are free is the extent to which they are cases of thought, perforce non-modular. Now ToM, language and the other mooted competencies are patently not fully mandatory, but nor, of course, are they simply determined by free conscious judgement. In other words, these competencies appear to have modular features without being perceptual.

        For example, we can figure out that ‘garden paths’ (e.g., The horse [which was] raced past the barn fell) and centrally embedded relatives (e.g., The boat the sailor the dog bit built sank [The boat, which the sailor, who the dog bit, built, sank]) are interpretable, even though we cannot automatically parse them. I’m not here suggesting that Fodor’s parser ‘module’ must be automatically successful (parse, in this context, is not an achievement verb). Clearly, a parser, like any mechanism, will have a variable range of success. The point, rather, is that such cases show that our linguistic competence is not a recognitional or perceptual competence, for the cited cases are structures generatable by the language faculty but not accessible to performance. Chomsky, 1996, pp.14-5, for one, even doubts whether there is a universal human parser (cf. Chomsky, 1986, p.14, n.10; 1991, pp.19-20; 2000a, pp.117-8; Stemmer, 1999; Higginbotham, 1987). Otherwise put, working out what such apparently deviant structures mean requires conscious effort, but once done, we see that they fall under the same principles as the rest of our language, which requires no such effort to comprehend; we thus learn something about our competence: it admits n-degree relative embedding. But the structures remain non-parsable. In itself, this fact does not refute the hypothesis of an inaccessible/encapsulated parser, but it does show that language is not restricted to such a device.

       Likewise, we appear to know about ToM just fine, and spend a lot of our time revising what we think of other minds. But, and this is the point, to show that these competencies are not simply perceptual is not to show that they are non-modular. By Fodor’s very lights, we certainly have access to the output of the language faculty and ToMM, but there is no reason to think therefore that we have access to all the intramodular information. We surely don’t; why else do we have linguistics and developmental psychology? (cf. Fodor, 1998a, chp.11.) On the other hand, ToM and language are not simply perceptual. So, perception/thought appears not to map onto modular/non-modular. To buttress this point, we may briefly look at what appears to be Fodor’s own ambivalent attitude towards language and ToM.

        Fodor, 1983, pp.9-10, while viewing the ‘Chomskyan’ language faculty as a set of propositional structures, also holds that such structures require an implementing mechanism. But what sort of mechanism is this? Fodor has been understood as treating the language faculty as the database for a parser. Fodor does indeed appear to have some such view (see Fodor, 1983, p.135, n.28). For the reasons given above, however, this is not Chomsky’s view and seems to be quite mistaken anyhow, e.g., ‘garden paths’ and other constructions are products of the language faculty but not products of a parser. More recently, Fodor, 2000, chp.4, simply speaks of a language module, with the ‘faculty’ as its database. Yet if this module is not a parser, then it is patently a central system. It would seem, then, that the input problem as regards language is somehow solved: there are representations that are not ‘about’ anything linguistic but do excite the language faculty. Fodor himself appears to recognise this but dubs it, as mentioned earlier, a “really real” problem (2000, p.77). But it is unclear what real difference there is between this genuine problem and the input problem. The mind, Fodor agrees, somehow represents X (presumably, acoustical or orthographic properties, i.e., not linguistic properties) which in turn excites the language module. Well, why cannot the mind represent all sorts of other features to excite all sorts of other modules? Why should this arrangement be unique, rather than typical? If there is parity here, then there just is not an a priori input problem; there is just the real empirical problem.

         Fodor is in similar difficulties on the architectural status of ToM. Fodor, 2000, p.97, does refer to ToM as being modular, but leaves it unclear whether he has in mind the epistemic (i.e., innate database) or architectural sense. Fodor’s, 1987a, ‘Creation Myth’ also suggests modularity, but is again unclear on details. Fodor, 1992, p.284, appears to make his intentions clear when he refers to ToM as “an innate, modularized database” (my emphasis). This seems to say that there is a ToM Fodorian module, or at least  that ToM is a component of such a module. If so, we appear to have a central module, for believers and desirers, as it were, are not identifiable via transduction. But this just means that the input problem has at least been solved for this particular module: on the assumption that BELIEF is not represented transductively, the ToM module must be ‘turned-on’ by upstream (more peripheral) representations that are not ‘about’ beliefs. Fodor’s problem here is precisely that language and ToM certainly look like cases of thinking as opposed to perception, but also appear not to be species of global cognition. So much the worse, perhaps, for thinking that thinking = non-modularity.

       The moral is that there is a lacuna in Fodor’s thought. We may suspect that an awful lot of cognition will fall into the black hole between perception and (global) thinking. Indeed, any mandatory cognition that is not perception will emit no light.

      The brief of the above considerations is modest. We have only attempted to rescue massive modularity from Fodor’s a priori input problem. There is a real problem about how the mind selectively responds intelligently to external stimuli, but this is an empirical quandary not an a priori one. No solutions were proposed; still, none have been a priori knocked out of the ring. The mystery remains, but so does massive modularity.

 

Notes



[1] In Fodor’s sights are Tooby and Cosmides, 1992, Sperber, 1994, Plotkin, 1997, and Pinker, 1997, as well as many others.

[2] In his 1983, Fodor characterised modules simply in terms of a set of diagnostics for their identification (innateness, domain specificity, speed, etc.). The new neater ‘definition’ is made in order to clarify what Fodor sees as essentially a misunderstanding of his earlier work. See §6.

[3] Coltheart, 1999, proposes that ‘modularity’ just means ‘domain-specificity’; whether encapsulation, innateness, etc. hold of a module is an independent empirical question. In (partial) agreement with Coltheart, Garfield et al., 2001, pp.502-3, in apparent opposition to Fodor, 1983, suggest that it is “absolutely wrong” to think that modules must be innate (cf. Khalidi, 2001, who argues for a general dissociation of innateness from domain specificity/modularity.) Understood as a criticism of Fodor, this seems to me to be an insubstantial complaint, albeit correct as a characterisation of the typical use of ‘module’ within the literature. Fodor thinks that the whole issue is empirical; there is no independent notion of a module that requires definition. The question for Fodor is whether certain styles of cognition can “‘to some interesting extent’” be explanatorily isolated as domain specific, encapsulated, etc. (Fodor, 1983, p.37). For example, Fodor reasons on poverty of stimulus grounds that linguistic parsing works on an innate database; its speed, inter alia, gives him reason to think it is encapsulated, and so on. Fodor does not simply argue that, because parsing is modular, it is innate.

[4] It is important here to speak about ‘content preservation’ depending on syntax as opposed to content tout court. The latter would amount to a functional role theory of content. Fodor’s long standing view is that content (extension) is determined externally, not internally. Thus,  content itself doesn’t depend on syntax, but our capacity to reason over contents does. See Fodor, 1987, 1990, 1998b.

[5] Prinz, 2002, presents an updated version of Lockean concept empiricism under which everything in the mind is first in (copied from) the senses. ‘Senses’ here means dedicated perceptual/recognitional faculties. Suffice it to say that Fodor would marshal a good number of arguments against Prinz’s position independent of any issue to do with massive modularity. For problems with Prinz’s treatment of linguistic cognition in particular, see Author 1.

[6] Empty categories are elements of syntactic representations that have both semantic and syntactic features but do not have any phonological/morphological realisation. Far from being exotica, the categories have been the stock in trade of linguistic theory for about thirty years.

[7] As understood in generative linguistics, the notion of a representation does not imply a represented in any substantive sense. See Higginbotham, 1991, Jackendoff, 1992, 2002, Chomsky, 2000a, 2000b, 2003, and Author 2.

[8] Fodor (1983, 2000, 2001) has long understood Chomsky’s faculty hypothesis as an epistemic thesis about what we know - a database - rather than a modular thesis about the causal architecture of the brain. Fodor also thinks that the language faculty is essentially at the service of a language perception device. This reading does not in the least correspond to Chomsky’s own understanding (Author 2). Either way, the present point remains, for however we understand the notion of a language faculty, we still ‘detect’ or ‘perceive’ language, i.e., put the faculty to use in interpreting others.

[9] Obviously, this is only so for the recognition of a language sufficiently similar to one’s own. Adults, of course, cannot discriminate any language qua language from mere noise. Neonates can, the ability being lost once the child fixates on a particular set of phonological features to map onto acoustic input. See Mehler and Dupoux, 1994.

[10]  Oddly, Fodor appears to disagree, insofar as he takes massive modularity to amount to the claim that “there is a more or less encapsulated processor for each kind of problem [a mind] can solve” (Fodor, 2000, p.64). It is difficult to imagine anyone who would believe this. By the modularist hypothesis, how many modules there are and how they are arranged are empirical problems. The answers are not given as a metaphysical reflex of how we might arbitrarily carve our categories of problems.  

[11] Although Fodor does not discuss any of these cases, we may presume that he would treat them as epistemic modules/databases, and so as not germane to massive modularity, i.e., poverty of stimulus arguments don’t demonstrate architectural theses. While this negative claim is sound, if universal, autonomous structures of knowledge are identified, it would be perverse to think that a single general learning mechanism operates over them, for their very autonomy means that they are relatively developmentally isolated; that is, we at least need a mechanism that works for each, though not one that works for all. It doesn’t follow, of course, that each competence has a supporting module. The interesting question for the modularist is what core knowledge underlies these competencies and what modular ensemble likely supports their mature existence.

 

References

Author 1, 2.

Baron-Cohen, S. (1995), Mindblindness: An Essay on Autism and Theory of Mind, Cambridge, MA: MIT Press.

Boyer, P. (1993), The Naturalness of Religious Ideas: Outline of a Cognitive Theory of Religion, Berkeley: University of California Press.

Chomsky, N. (1986), Knowledge of Language: Its Nature, Origin, and Use, Westport: Praeger.

Chomsky, N. (1996), Powers and Prospects: Reflections on Human Nature and the Social Order, London: Pluto Press.

Chomsky, N. (2000a), New Horizons in the Study of Language and Mind, Cambridge: Cambridge University Press.

Chomsky, N. (2000b), ‘Minimalist Inquiries: The Framework’, in R. Martin, D. Michaels, and J. Uriagereka (eds.), Step by Step: Essays on Minimalist Syntax in Honor of Howard Lasnik, Cambridge, MA: MIT Press, pp. 89-155.

Chomsky, N. (2003), ‘Reply to Rey’, in L. M. Antony and N. Hornstein (eds.), Chomsky and His Critics, Oxford: Blackwell, pp. 274-287.

Coltheart, M. (1999), ‘Modularity and Cognition’, Trends in Cognitive Sciences 3: 115-120.

Dehaene, S. (1997), The Number Sense: How the Mind Creates Mathematics, Oxford: Oxford University Press.

Fodor, J. (1975), The Language of Thought, Cambridge, MA: Harvard University Press.

Fodor, J. (1983), The Modularity of Mind, Cambridge, MA: MIT Press.

Fodor, J. (1987a), Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT Press.

Fodor, J. (1987b), ‘Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres’, in J. Garfield (ed.), Modularity in Knowledge Representation and Natural-Language Understanding, Cambridge, MA: MIT Press, pp. 25-36.

Fodor, J. (1990), A Theory of Content and Other Essays, Cambridge, MA: MIT Press.

Fodor, J. (1992), ‘A Theory of the Child’s Theory of Mind’, Cognition 44: 283-296.

Fodor, J. (1998a), In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind, Cambridge, MA: MIT Press.

Fodor, J. (1998b), Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press.

Fodor, J. (2000), The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology, Cambridge, MA: MIT Press.

Fodor, J. (2001), ‘Doing without What’s Within: Fiona Cowie’s critique of nativism’, Mind 110: 99-148.

Garfield, J., Peterson, C., and Perry, T. (2001), ‘Social Cognition, Language Acquisition and the Development of the Theory of Mind’, Mind and Language 16: 494-541.

Gigerenzer, G. (1997), ‘The Modularity of Social Intelligence’, in A. Whiten and R. Byrne (eds.), Machiavellian Intelligence II: Extensions and Evaluations, Cambridge: Cambridge University Press, pp. 264-288.

Gigerenzer, G. and Hug, K. (1992), ‘Domain specific reasoning: social contracts, cheating, and perspective change’, Cognition 43: 127-171.

Higginbotham, J. (1987), ‘The autonomy of syntax and semantics’, in J. Garfield (ed.), Modularity in Knowledge Representation and Natural Language Understanding, Cambridge, MA: MIT Press, pp. 119-131.

Higginbotham, J. (1991), ‘Remarks on the metaphysics of linguistics’, Linguistics and Philosophy 14: 555-566.

Jackendoff, R. (1992), Languages of the Mind: Essays on Mental Representation, Cambridge, MA: MIT Press.

Jackendoff, R. (2002), Foundations of Language: Brain, Meaning, Grammar, Evolution, Oxford: Oxford University Press.

Keil, F. (1989), Concepts, Kinds, and Cognitive Development, Cambridge, MA: MIT Press.

Khalidi, M. (2001), ‘Innateness and Domain Specificity’, Philosophical Studies 105: 191-210.

Leslie, A. (1994), ‘ToMM, ToBy, and Agency: Core Architecture and Domain Specificity’, in L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture, Cambridge: Cambridge University Press, pp. 119-148.

Mehler, J. and Dupoux, E. (1994), What Infants Know, Oxford: Blackwell.

Nichols, S. (2001), ‘Mindreading and the Cognitive Architecture Underlying Altruistic Motivation’, Mind and Language 16: 425-455.

Pinker, S. (1997), How the Mind Works, London: Penguin.

Plotkin, H. (1997), Evolution in Mind, London: Allen Lane.

Prinz, J. (2002), Furnishing the Mind, Cambridge, MA: MIT Press.

Samuels, R. (1998), ‘Evolutionary Psychology and the Massive Modularity Hypothesis’, British Journal for the Philosophy of Science 49: 575-602.

Scholl, B. and Leslie, A. (1999), ‘Modularity, Development and “Theory of Mind”’, Mind and Language 14: 131-153.

Segal, G. (1996), ‘The Modularity of Theory of Mind’, in P. Carruthers and P. Smith (eds.), Theories of Theories of Mind, Cambridge: Cambridge University Press, pp. 141-157.

Segal, G. (1998), ‘Representing Representations’, in P. Carruthers and J. Boucher (eds.), Language and Thought: Interdisciplinary Themes, Cambridge: Cambridge University Press, pp. 146-161.

Sloboda, J. (1985), The Musical Mind: The Cognitive Psychology of Music, Oxford: Clarendon Press.

Sperber, D. (1994), ‘The Modularity of Thought and the Epidemiology of Representations’, in L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture, Cambridge: Cambridge University Press, pp. 39-67.

Stemmer, B. (1999), ‘An on-line interview with Noam Chomsky: on the nature of pragmatics and related issues’, Brain and Language 68: 393-401.

Tooby, J. and Cosmides, L. (1992), ‘The Psychological Foundations of Culture’, in J. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture, Oxford: Oxford University Press, pp. 19-136.