
Intelligence, Animality, and Machinery
by Stephen Boydstun


In the first section, I articulate what definitely does not possess intelligence. In the second, what definitely does possess intelligence and what intelligence, if any, might be reasonably ascribed to animals and machines. (This paper was written in 2000.)

 

I. Plants and Animals

 

Natural intelligence is among the animals only. Plants are not intelligent. This I take as given. Any notion of intelligence that would cast plant activities or behaviors as realizations of intelligence or as instances of intelligence, however meager, surely fails to capture only intelligent behavior. Ditto for single living cells, whether bacterial, archaeal, protistic, plant, or animal. Single cells are not intelligent. This, too, I take as given.

            Adopting Daniel Dennett's design-engineering stance toward the living cell, we may say that the two basic designs of the cells composing plants and animals are two resolutions of a profound problem for life: the macromolecules of life inside the cell tend to carry many excess negative charges, which get balanced by positive ions dissolved in the water within the cell. The presence of these ions means that water tends to be drawn into the cell by osmosis (because of nature's strong tendency to reduce concentration gradients of particle species). The problem is to keep the cell from swelling and finally bursting as more and more water is drawn in. The resolution in plant cells is to make the wall of the cell strong enough that the inside pressure can exceed the surrounding pressure and oppose the osmotic flow of water into the plant's cells.1 The passivity of this resolution, as compared to the animal-cell resolution (below), is probably the root cause of the total lack of intelligent behavior in plants to this day.
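
            To put the plant-cell resolution in rough quantitative terms (this gloss is mine, drawing only on the standard van 't Hoff relation for dilute solutions, not on a formula from Hoppensteadt and Peskin), the osmotic pressure exerted by dissolved particles at concentration c is

    \Pi = c\,R\,T

where R is the gas constant and T the absolute temperature. Net water influx into a walled cell ceases when the hydrostatic (turgor) pressure difference borne by the wall balances the osmotic pressure difference:

    \Delta P_{\mathrm{wall}} = \Delta\Pi = (c_{\mathrm{in}} - c_{\mathrm{out}})\,R\,T

The plant cell can satisfy this condition because its wall bears the pressure; the animal-cell membrane, as described below and in note 8, cannot.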

            Plants do behave and respond. The gravitropic response of the roots of certain plants is a good example of a plant behavioral response. Within about half an hour of being uprooted, such plants are able to redirect the growth of their roots in the new direction of gravity. The rate of growth of the uprooted root is reduced on both the upper and lower surfaces of the elongation zone of the root, a growth zone five-to-six millimeters long, near the root cap. However, growth rate is reduced most on the lower surface two-to-three millimeters behind the cap. This slower growth rate along the lower side of the root causes the downward curving growth of the root.

            Gravitropic roots have a detector, in the terminal half-millimeter of the root, of the new direction of gravity with respect to the uprooted root's orientation.2 Within seconds of uprooting, amyloplasts in columella cells of the root cap fall and settle along the new lower wall of each cell. This detection step is the only step of the gravitropic response in which gravity directly pulls down a component (amyloplasts) of the root system.

            It is thought that the amyloplasts within the columella cells fall onto the endoplasmic reticulum, a complex of calcium-rich membranes and vesicles. Calcium ions escape from the complex, elevating calcium levels along the lower side of the cells. Beyond a threshold concentration of calcium, the protein calmodulin is activated, which then turns on calcium pumps in the cell wall, thereby allowing the eventual accumulation of calcium ions along the lower side of the root cap. Calmodulin also activates auxin pumps. Calcium seems to then facilitate the movement of auxin from the lower side of the cap to the lower side of the elongation zone. Auxin is a hormone that inhibits growth; hence the extra slowing of growth along the lower side of the uprooted gravitropic root.
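
            As a purely illustrative caricature of this cascade (the function, variable names, thresholds, and magnitudes below are my inventions, not quantities from Evans, Moore, and Hasenstein 1986), the chain of condition-triggered steps might be sketched in Python as follows:

    # Toy caricature of the gravitropic signal cascade described above.
    # All names, thresholds, and magnitudes are invented for illustration.

    def gravitropic_response(root_tilted: bool) -> dict:
        """Trace the cascade from amyloplast settling to differential growth."""
        state = {"calcium_lower_side": 0.0, "auxin_lower_side": 1.0,
                 "growth_lower": 1.0, "growth_upper": 1.0}
        if not root_tilted:
            return state                      # no detection step, no response

        # 1. Detection: amyloplasts settle onto the lower wall of the columella
        #    cells (the only step in which gravity directly pulls a component
        #    down), and calcium is released from the endoplasmic-reticulum complex.
        state["calcium_lower_side"] = 2.0

        # 2. Beyond a threshold concentration, calmodulin is activated, turning
        #    on the calcium and auxin pumps, so auxin accumulates on the lower side.
        CALCIUM_THRESHOLD = 1.0
        if state["calcium_lower_side"] > CALCIUM_THRESHOLD:
            state["auxin_lower_side"] = 3.0

        # 3. Auxin inhibits growth; extra auxin on the lower side means extra
        #    slowing there, so the root curves downward.
        state["growth_lower"] = 1.0 / state["auxin_lower_side"]
        return state

    print(gravitropic_response(root_tilted=True))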

            With plants and living things generally, we are rightly talking behavior and response. The living thing behaves in response to certain circumstances. The living thing is a highly structured system; one capable of internally actuated responses, one poised to make transitions toward valuable states of itself or its kin. In the constitution3 of the living system, there inhere4 potential, valuable states G, and in the constitution of the living system, there are subsystems that are activated by differences between G and an actual state A so as to bring A to G (Minsky 1986, 78; Rosenblueth, Wiener, and Bigelow 1943; Schaffner 1993, 365–68). The gravitropic root made horizontal will make itself vertical. In contrast, falling water or pebbles do not actively attain lower levels on the earth; they do not detect and respond to gravity (cf. Dretske 1988, 1–11, 44–50).

            For understanding inanimate natural entities and processes, as for understanding life, there is real advantage in taking a systems view. However, in taking the systems view of inanimate natural systems, we should be wary of a couple of ontological errors, exemplified by the following: (i) Attractors in the dynamical phase space of a system seem to attract the states of the system. That is an illusion. The causes of the tendency toward an attractor are other than the attractor. (ii) In the Lagrangian and Hamiltonian formulations of mechanics, it seems that the system is teleological, more particularly, that the system is working to keep certain quantities at minimum possible values. That, too, is an illusion, a convenience of linguistic expression. A less convenient expression gives the truth and fully so.
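
            For instance, the Lagrangian formulation says, in its seemingly teleological dress, that the actual trajectory q(t) makes the action stationary (often loosely, "minimal"); but that global-sounding statement is exactly equivalent to a purely local condition holding at each instant, with no quantity being "worked toward":

    S[q] = \int_{t_1}^{t_2} L(q,\dot{q},t)\,dt, \qquad \delta S = 0 \;\Longleftrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0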

            Inanimate natural systems (and animate systems in their inanimate character) do not have the G-A organization of living cells and multi-cellular life (and G-A machines). To view inanimate natural systems as having G-A organization is feasible (these systems succumb to such a stance taken toward them), but to view them as having G-A organization is unjustified. Nothing is gained in predictive or explanatory power by taking such a view of them; and such taking flirts with faulty metaphysics (e.g., Minsky 1986, 79).

            I think the important qualitative difference between living behavior and inanimate natural-system behavior, between G-A systems and inanimate natural systems, warrants institution of a stance appropriate to G-A systems, a stance we might call, after Marvin Minsky's coinage, the difference-engine stance. My description, above, of a G-A system coincides with Minsky's description of a difference engine, except that I speak of subsystems activated by differences between G and A, whereas Minsky speaks of subagents aroused by differences between G and A.5 I am barring "subagents" (and, of course, "rational subagents"—the intentional stance) from the most elementary G-A systems: roots in gravitropic response, bacteria in chemotactic response, bacteria in regulation of their level of ribosome synthesis, and machines with only elementary feedback regulations, such as refrigerators with thermostats and engines with governors.
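
As a minimal sketch of such an elementary G-A system (a thermostat-regulated room; the numbers and names are arbitrary illustrations, not drawn from any of the cited authors), the difference-engine organization amounts to this:

    # Minimal G-A (difference-engine) loop: a subsystem is activated by the
    # difference between goal state G and actual state A so as to bring A toward G.
    # Thermostat illustration; all numbers are arbitrary.

    def thermostat_step(goal_temp: float, actual_temp: float) -> float:
        """Return the corrective actuation for one control step."""
        difference = goal_temp - actual_temp
        if abs(difference) < 0.5:                # within tolerance: nothing activated
            return 0.0
        return 1.0 if difference > 0 else -1.0   # heat or cool

    # One simulated run: the actual state A drifts toward the goal state G.
    G, A = 20.0, 15.0
    for _ in range(12):
        A += 0.6 * thermostat_step(G, A) - 0.1   # actuation plus slight heat loss
    print(round(A, 2))

No subagents and no aboutness here; only a difference detected and a subsystem switched on by it.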

Even if subsystems of an elementary G-A system do not qualify as subagents, might a whole elementary G-A system, such as a plant or a single-cell organism, itself be rightly taken as an agent? The sense of agent at issue here is not the sense of a cause of change (e.g., a chemical agent) nor the sense of a means or mode by which something is done (i.e., an instrument), but the sense of a being who acts, or does. Among organism behaviors, I should affix agent behavior to multi-cellular animal behaviors. In their elaboration of the concept of biological agents, which informs recent work on artificial agents, Rolf Pfeifer and Christian Scheier set out as agent-criteria: self-sufficiency, autonomy, situatedness, embodiment, adaptivity, and suitedness to ecological niche. On the surface, it seems that plants and single-cell organisms would satisfy these criteria, but when the particulars of these criteria are examined, it emerges that Pfeifer and Scheier have in mind, as biological agents, only certain multi-cellular animals and humans (1999, 25, 99). In concordance, Rodney Brooks speaks of the "acting and reacting" that is "part of intelligence" and that is "a necessary basis for the development of true intelligence" as entailing: "the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction" (1991, 396–97). That sounds like animality to me.

The difference-engine stance properly ranges over only a portion of the full range of Dennett's design-engineering stance, but it properly ranges more widely than the intentional stance properly ranges.6 Dennett vacillates on whether all G-A behaviors fall within the proper range of the intentional stance (1981, 65–66).

            G-states as G-states are goals in a generalized sense of the term goals, the generalization being from purposeful, deliberate G-A systems (such as us, with prefrontal cortex engaged; cf. Rosenblueth, Wiener, and Bigelow 1943, 18–19) to G-A systems in general. In that generalization, the aboutness of goals evaporates; only their towardness remains (cf. Dretske 1988, 122–27). That G-A systems, the ones barely so, have goals in this generalized sense does not license our taking Dennett's intentional stance (definitely an aboutness-intentional stance) toward them. In talking behavior and response in plants, we need not be taking Dennett's intentional stance. Nor should we, for there is nothing—no predictive or explanatory power—to be gained by that taking, as distinct from a mere G-A taking.

There is, however, something further to be gained from taking an information-forming-and-information-depending view of plants, of their cells, indeed of any cell (Küppers 1990, 8–21, 31–33, 40–56, 130–37; Bray 1995). Plants and single cells evidently contain sequencers for information processing, temporary encodings of recent environmental conditions, token types7 for conveyance of information, and, of course, the adaptive response executed as the culmination of information-processing sequences. In all this, I should say, we have no platform for ascribing any aboutness intentionality. Adaptive responses ensuing from the information processes of the plant or cell are coordinate with vegetative G-states, both bespeaking towardness, but no aboutness. Presumably such responses are the primordial evolutionary precursors of semantics; definitely not instances of the latter.

Moreover, as it seems to me, the token types in the information processes within plants and single cells are molecules (or subunits of molecules) that can be readily construed as having merely indication functions in the system (cf. Dretske 1988, 52–59; Haugeland 1998, 308–9). Token types having only indication functions do not amount to symbols in, for important example, the physical symbol system of Newell and Simon (1976, 85–91); they are too inflexible and limited to satisfy completeness and closure requirements of physical symbol systems. So, if one were favorably inclined toward the conjecture that physical symbol systems are a necessary and sufficient condition for intelligence, one should take pleasing note of the absence of that condition in plants and single cells.

Let the difference-engine stance consist of G-A taking rightly entwined with attendant information-processing taking. G-A behavior and information processing go hand in hand; plainly in the case of the refrigerator regulated by thermostat, more intricately in living systems and in computers and robots.

            I do not mean to preclude, or set up for censure, all takings of the intentional stance toward computers and robots. I say to myself as I key my choices and numerical values into the modern treadmill at my gym that I am lying to the machine about my age so that it will let me have a higher target heart rate. Calling it lying is a convenient way of speaking, but it is shallow and frivolous. Adopting genuinely my intentional stance toward the treadmill machine would be unwarranted. I know perfectly well that all the relevant behavior of the machine can be captured by thinking of the machine as simply a designed machine with a difference-engine control system having user inputs. There is no serious reason for taking the intentional stance toward the "not-so-smart" treadmill. Such a taking is a joke or retrogressive metaphysics. In stark contrast to interaction with the treadmill control system, we have: playing chess against Deep Blue. It is impossible for any human whatsoever to play against Deep Blue with any success at all (measured, say, by number of moves before losing to the machine) without taking the intentional stance toward it.

Natural intelligence is a spectacular spin-off from the resolution for animal cells of the cellular design problem that stems from the excess negative charge of the macromolecules of life. The plant-cell resolution is passive; the animal-cell resolution is radically dynamic.8 In the animal-cell resolution, the cell membrane ends up having an electrical potential difference across it. The momentous spin-off, of course, is that this membrane potential, in some animal cells, could be briefly changed by adjustments in the membrane conductances with respect to sodium ions and with respect to potassium ions. Thus did the animal-cell resolution to the design problem arising from the excess electrical charge of the macromolecules of life make possible the essential signaling mechanism (brief change in membrane potential) for muscle cells and neurons.

Some animal behaviors, whether of invertebrates, arthropods, or vertebrates, are as surely devoid of intelligence as plant behaviors. The most direct responses (taxes and kineses) of animals to environmental changes are certainly as without intelligence as plant tropisms. Likewise, reflexes (more and less direct in their neural pathways), their nonassociative modulations (habituation, sensitization, and inhibition), and fixed-action patterns are fully explicable using only the design-engineering and difference-engine stances: zero intelligence in these behaviors. However, when we come to the orchestrations of, and associative (classical- or operant-conditioning) elaborations of, reflex behaviors into whole-animal behaviors (or into like machine behaviors; Anderson 1995, 276–77; Pfeifer and Scheier 1999, 488–91), I think we have then come to agents,9 at least minimally so, and so to at least the anteroom of intelligence.

 

II. Animals and Machines

 

The human, "the paragon of animals," is a possessor of intelligence for sure. That we take as given. But which humans? I should like to include as surely and fully intelligent those of my ancestors who were "here to greet the Mayflower."

These people were Upper Neolithic hunter-gatherers,10 who decorated, made graphic representations, made music, played games, and spoke a language.11 They were illiterate, and that likely means that they had no concepts for thinking about the structure of spoken language; no explicit knowledge of the syntactical structure of their spoken language; no concept of words as linguistic entities, rather than as names or the things meant by those names; and no concept of a sentence as such (Olson 1994, 67–77, 87, 123).12 Literacy and its cognitive wake are not essential to intelligence (cf. Deacon 1997, 366–67). Peoples illiterate, but speaking a natural language, are fully intelligent (by age eight, say).

Peoples illiterate, but speaking natural language, have powers of symbolic thought, enough for full intelligence.13 The symbols of symbolic, linguistic thought, I should say, are in the understanding user. From brain imaging during language processes and from performance deficits of brain-damaged persons, it appears that typical distinct and interacting areas of brain activities underlie the exercise of concepts, the exercise of words and sentences, and the coordinated exercise of concepts with words and sentences (Damasio and Damasio 1992).14 Symbols in symbolic, linguistic thought would appear to be engaged broadly (in some yet unknown ways)15 upon these brain activities (Deacon 1997, 298–309, 331–34).

A bear track indicates a bear whether or not any smart animal comes along and becomes cognizant of that indication (cf. Dretske 1988, 54). When one takes a bear track as an indication of bear, the track is for one a natural sign, an index. Similarly, when one sees in a bear-claw necklace its natural indication of bear (and of human), the necklace is for one an index. The basis of index is causality or other natural correlation, which is to say, natural indication.16

When one imitates the vocalizations of birds, in entertaining the children, say (not in calling birds, and not as a code signal to a fellow hunter), or imitates other sounds of nature, or when one imitates motions of some animal or draws or sculpts its likeness, one makes an icon. An icon is a sign natural by likeness, but artificially produced (shades of ANNs, artificial neural networks) and interpreted as such.

A symbol such as the word bear is an artificially produced sign having an indication function given it by human social convention; and having a syntactic character, also conventionally determined, in the natural language in which it is a word. Indications of words (or other token types) are made precise by their use in sentences (or in other syntactically structured complexes of token types). What is more, sentences can indicate new, fantastically many, and fairly complicated things, using a fairly fixed vocabulary,17 even without writing and reading. The exquisite indicating function of linguistic symbols is surely the reason for their being in life (see also Deacon 1997, 79–101, 449–50).

            All nervous systems have iconic and indexical indication functions brought about by evolution and development. As Terrence Deacon observes, these "are the basic ingredients for adaptation" for animals with nervous systems (ibid., 449). Now I discern some animals with nervous systems that are no more intelligent than plants.

            Structuring causes (evolutionary and developmental history) explain why root-curving in gravitropic plants exists. A triggering cause (uprooting) explains why a particular root-curving occurred when it did occur. The sequence of internal chemical events I related above explains how root-curving is effected.18 That makes two coupled whys and one how. Some organic why-explanations, some structuring-with-triggering causes, can become reasons for actions in some animals (Dretske 1988, 50, 116–21; cf. Rosenberg 1990, 304–5; Haugeland 1994, 302–4). Animals devoid of such reasons are as devoid of intelligence as plants.

A properly functioning columella region of a gravitropic root indicates a change in the direction of gravity (because it got uprooted and) because that indication function contributed to the survival and propagation of the present plant's ancestors. The present plant has the gravitropic response because of its genetic inheritance and a favorable environment during its development. The root of the present plant does not curve downward on account of what its columella region is now indicating: the downward direction (where the root may more likely encounter nutrients and water). Rather, the present plant root curves downward in response to the columella indication only because its ancestors did reach adequate nutrients and water by that behavior.

So it goes with the zero-intelligence animal behaviors I have registered already: taxes and kineses, reflexes and their nonassociative modulations, and fixed-action patterns (such as gaits or saccades). But so it goes also for whole-animal behaviors, indeed for the animal's whole repertoire of behaviors, where those behaviors result, in their individual-learning aspects, only from classical conditioning. In such whole-animal behaviors, the orchestration of reflex behaviors, and of concomitant iconic and indexical indication functions, in the present animal is in operation only (because of the present stimulus) and because its ancestors survived (long enough, statistically) by those whole-animal behaviors.

            Animals whose only associative modulations of their reflex behaviors are by classical conditioning are animals capable of only what Eric Kandel and Robert Hawkins call implicit learning: nonassociative learning and classical conditioning. We are capable of that sort of learning too. It is crucial for us and complements our explicit learning, which, in contrast to implicit learning, "is fast and may take place after only one training trial. It often involves association of simultaneous stimuli and permits storage of information about a single event that happens in a particular time and place; it therefore affords a sense of familiarity about previous events" (Kandel and Hawkins 1992, 80). Explicit learning evidently occurs only in vertebrates.
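
            The contrast can be put in crude computational terms. The sketch below is mine, not a model from Kandel and Hawkins; the incremental update rule and all names are merely illustrative of the difference between slow associative strengthening and one-trial storage of a particular event:

    # Crude contrast between implicit (incremental, associative) learning and
    # explicit (one-trial, event-storing) learning. Illustrative only.

    class ImplicitLearner:
        """Associative strength creeps up over many paired trials."""
        def __init__(self, rate: float = 0.2):
            self.strength = 0.0
            self.rate = rate

        def train(self, paired: bool) -> None:
            target = 1.0 if paired else 0.0
            self.strength += self.rate * (target - self.strength)

    class ExplicitLearner:
        """A single event, with its time and place, is stored after one trial."""
        def __init__(self):
            self.episodes = []

        def train(self, event: str, time: str, place: str) -> None:
            self.episodes.append((event, time, place))

        def familiar(self, event: str) -> bool:
            return any(e == event for e, _, _ in self.episodes)

    implicit = ImplicitLearner()
    for _ in range(10):                     # many trials for a strong association
        implicit.train(paired=True)

    explicit = ExplicitLearner()
    explicit.train("bear at the river", "dusk", "north bank")   # one trial suffices
    print(round(implicit.strength, 2), explicit.familiar("bear at the river"))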

            It is reasonable to suppose that animals—specifically, invertebrates and arthropods—incapable of modulating their reflex behaviors by explicit learning are incapable of having reasons, in even a paltry way, rather than having only structuring-with-triggering causes, for their whole-animal behaviors. Still, they do have whole-animal behaviors. Toward these animals, and corresponding machines, it would seem appropriate for us to take the agent stance,19 but not the intentional stance (rational-agent stance); the former stance is sufficient. So, I am concluding that invertebrates and arthropods have no more intelligence than plants, which is to say, none. Artificial autonomous agents with zero intelligence on this account: Distributed Adaptive Control robots and Cairngorn, a robot that (without supervision) learns locations and builds up a map of its environment, based on information from motor signals (Pfeifer and Scheier 1999, 488–90). These machines, though themselves without intelligence, may well become crucial parts of intelligent artificial agents in the future; just as implicit learning is crucial to us.

Hans Moravec and colleagues envision, for about the year 2010, an intelligent robot with the gross information processing power of a reptile, a robot programmable for doing many different chores for us (1999a, 95–98). He speaks of this robot, like a lizard, being "instinct-ruled, . . . unable to adapt to changing circumstances" (Moravec 1999b, 135).

Traditionally, instincts have been contrasted with intelligence. Schopenhauer thought of instincts as requiring will, perception, and some apprehension of elementary causal relations; as supplanted by reason in man; and as not intelligence because not guided by knowledge of the end toward which the animal works (1969, 23, 34–39, 84–87, 114–17, 150–52). "The one-year old bird has no notion of the eggs for which it builds a nest" (ibid., 114). Bergson also contrasted instincts with intelligence. The distinctive mark of intelligence, in Bergson's assessment, is extensive tool making, the sort that requires inference (1983, 146–56).

On balance I should say the traditional sharp distinction between instinct and intelligence has proven incorrect. Instinct seems now rather complementary to, indeed supportive of, higher intelligence. Some bird species, though having an instinct toward nest building, have to learn by trial and error what materials are suitable for building a nest. Trial-and-error learning is explicit learning. We, too, seem to have instincts, in a sense formulated by Ronald de Sousa (a sense derivative from Freud's): instinct for us is simply that which determines emotional dispositions, which then motivate, but do not determine, behavior. Such instincts determine not the response, but the desired outcome. They are unconscious, but manifest in feelings (De Sousa 1987, 78–86, 91–105). Consider also the role of emotion in our own full-bodied and creative intelligence (Damasio 1994; also, Dennett 1975, 74–76). Instinct in De Sousa's sense would seem a smart thing to try to put into Moravec's robot 2010 or robots beyond.

Any of the vertebrates and any of the corresponding artificial autonomous agents exhibiting some explicit learning have some intelligence, and these are just the agents towards whom the intentional stance is warranted. Animals capable of some explicit learning are capable of responding to a natural indicator because of some of what it is indicating, rather than only because of success of ancestral response to occasions of similar indicators. Some vertebrate species, of course, are capable of more such learning and more sensitive, nuanced responding than are other vertebrate species. Explicit responding to, explicit interpretation of, indicators comes in levels of explicitness beyond the bare threshold of explicitness (Dretske 1988, 116–21; Clark 1992, 384–86; Deacon 1997, 73–83). More explicit and more innovative (Pfeifer and Scheier 1999, 20–21, 632–33; Haugeland 1989, 180–81) responsiveness to indicators seems to coincide with more of what we call intelligence; very highly accomplished in animals having at work in mind the indicating functions of linguistic symbols.

Can the solid intelligence possessed by (even illiterate) humans be had by a nonliving machine of our making in the future? I remain unsure, but we have, from AI today, some promising subsystems for such a creature. And we have indications of what more is to be pursued.

The most salient shadow of doubt I have over being able to make such a creature is that explicitness of responses to natural indicators may require, to degrees in step with the various animals capable of such responses, that the machine be alive in the world and apprehend that and love that. The paragon of animals, for extreme example, says of his kind: "We exist, and we know that we exist, and we love that fact and our knowledge of it" (Augustine). I am unsure but what this sort of occasion of know and love is basic to any mind and but what the exist they entail is being alive.

 

Notes

 

1.     My portrayal of the plant- and animal-cell resolutions of the design problem stemming from the excess negative charge of the macromolecules of life is abstracted largely from Hoppensteadt and Peskin 1992, chp. 7.

2.     My portrayal of the operation of gravitropism follows Evans, Moore, and Hasenstein 1986. In some gravitropic plants, there may be additional detection farther back along the root; Ridge and Sack 1992.

3.     More specifically, in internal constraints of its dynamics; Pattee 1973, 75–101.

4.     G-states inhere in the G-A system, but are evidently often (always?) specified by relations with aspects of the external environment. The G-state of the gravitropic root inheres in the constitution of the root, but is specified by a relation to an aspect of the "external" environment: alignment with gravity.

5.     Minsky's difference engine (merely homonymous with the difference engine of Charles Babbage) is a very broad-brush rendition of GPS (Newell, Shaw, and Simon 1960, 256–60; Haugeland 1985, 179–80). The appropriateness of the intentional stance evaporates in the passage from GPS to the difference engine, at least until the difference engine is embellished with further gizmos (e.g., Minsky 1986, 175). It is the no-frills model of Minsky's difference engine, with its subagents replaced by subsystems, that I am putting to work in the present study.

6.     Between the difference-engine stance and the intentional stance (rational-agent stance), there are at work today the agent stance (in embodied cognitive science; Pfeifer and Scheier 1999) and the schema stance (in robotics and neurobiology; Arbib, Érdi, and Szentágothai 1998, chp.3). The behavior ranges of the agent and schema stances are evidently coincident. I take the warrantable ranges of the four stances to be as follows: the intentional stance is a proper subset of the agent/schema stance is a proper subset of the difference-engine stance is a proper subset of the design-engineering stance. On working relations of the difference-engine stance with the agent/schema stance, within the warranted behavior range of the latter, see Simmons and Young 1999, 15–17. On inter-workings of the agent/schema stance and the intentional stance, within the warranted behavior range of the latter, see Brooks et al. 1999, 64–67, 75–80, and Scassellati 1999.

7.     There would seem to be no formal system at hand; actuations of steps in gravitropism or of steps in a bacterium's regulation of its level of ribosome synthesis (Nomura 1984) depend on concentrations of the tokens; analog, not digital. Moreover, the types of the tokens are of the wrong genre for digital fidelities; the types at hand are simply of chemical species, not types defined by roles within the informational system. (See further, Haugeland 1985, 52–58, 105; 1997, 8–10; 1981, 84–86.) The preeminent bearers of biological information, nucleotide-chain token types, are defined by role in the informational system. Still, it is my understanding that concentrations will be crucial to any of the life-making and life-maintaining activities and behaviors of any cell and any multi-cellular organism.

8.     The membrane forming the boundary of an animal cell is hardly a wall. It can withstand no pressure difference across it. The membrane is permeable to water, potassium, sodium, . . . but not to chloride. The membrane is more permeable to potassium than to sodium. Ion pumps in the membrane pump sodium ions out of the cell and, in the same process, pump potassium ions in from outside the cell (fewer potassium ions in than sodium ions out). The pump maintains a higher concentration of sodium ions outside than inside and a lower concentration of potassium ions outside than inside. So sodium ions outside will be diffusing back in across the membrane; potassium ions inside will be diffusing back out. Since the membrane allows the diffusion of potassium out more freely than it allows the diffusion of sodium in, the net effect of the pump will be to increase the concentration of nonwater particles on the outside, thereby reducing water osmosis into the cell that would otherwise burst the cell membrane. Since the pumping is decreasing the overall concentration of positive ions on the inside of the cell, the excess negative charge on the inside due to the macromolecules of life (and chloride ions) will not be entirely cancelled out by the dissolved positive ions inside. Then the cell membrane will have an electrical potential difference across it. The cell can live with that provided the pump speed is restricted to a certain range implicated by the membrane's electrical conductance with respect to sodium ions relative to its electrical conductance with respect to potassium ions. A very dynamic resolution!
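
One standard way to express the resting potential that results from such differential permeabilities is the Goldman-Hodgkin-Katz voltage equation; I give it as general background, not as the particular formulation in Hoppensteadt and Peskin 1992:

    V_m = \frac{RT}{F}\,\ln\frac{P_{\mathrm{K}}[\mathrm{K}^+]_{\mathrm{out}} + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\mathrm{out}} + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\mathrm{in}}}{P_{\mathrm{K}}[\mathrm{K}^+]_{\mathrm{in}} + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\mathrm{in}} + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\mathrm{out}}}

Because the potassium permeability dominates and potassium is concentrated inside, V_m sits well below zero; a brief rise of the sodium permeability relative to the potassium permeability drives V_m sharply upward, which is just the signaling mechanism credited in the main text to the animal-cell resolution.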

9.     See note 6.

10.  My own tribe, the Choctaws, dwelt in thatched-roof cabins. They raised corn, beans, and pumpkins; gathered nuts and fruits; hunted deer and bear; and fished. Like all the aborigines of North America, they were illiterate.

11.  Presumably, they had some notions of numbers; Butterworth 1999, 25–56.

12.  North American Indian languages were typically polysynthetic. Words were formed from bound elements (elements used only in combination with other such elements), which elements served as nouns, verbs, adjectives, and adverbs. A single word of this sort may carry the meaning of an entire sentence (i.e., of what would be a sentence in a language such as English).

13.  And perhaps for fully wiping out H. neanderthalensis; Tattersall 2000, 61–62; but see Deacon 1997, 370–73.

14.  On brain-area activations in elementary numerical thought, and their distinctness from activations in linguistic thought, see Butterworth 1999, chps. 4 & 5.

15.  Recent work on the ways: Hilario 1997 (survey); Blank, Meeden, and Marshall 1992; Plate 1997; Omori, Mochizuki, Mizutani, and Nishizaki 1999.

16.  As I understand it, the basis of index is natural indication, but we extend index into artificial indication in our use of words (not as symbols in the full-bodied sense, but) as explicatives and demonstratives or (supported by concepts and symbolic thought) as proper names or personal pronouns (Macnamara 1986, chps. 3–5).

17.  Of course, one does not comprehend the words or the sentences, even though one has full syntactic competence in the language (e.g., patient Boswell, in Damasio and Damasio 1992), unless one knows what they indicate, that is, unless one has the concepts that should be evoked by the substantive words in their sentence.

18.  The causal and explanatory notions here are from Fred Dretske (1988, 40–50); the application to gravitropism is my own.

19.  See note 6.

References

 

Anderson, J.A. 1995. An Introduction to Neural Networks. Cambridge, MA: MIT Press.

Arbib, M.A., Érdi, P., and J. Szentágothai 1998. Neural Organization: Structure, Function, and Dynamics. Cambridge, MA: MIT Press.

Bergson, H. 1983 [1907, 1911]. Creative Evolution. A. Mitchell, translator. Lanham, MD: University Press of America.

Blank, D.S., Meeden, L.A., and J.B. Marshall 1992. Exploring the Symbolic-Subsymbolic Continuum: A Case Study of RAAM. In The Symbolic and Connectionist Paradigms. J. Dinsmore, editor. Hillsdale, NJ: Lawrence Erlbaum.

Bray, D. 1995. Protein Molecules as Computational Elements in Living Cells. Nature 376:307–12.

Brooks, R.A. 1991. Intelligence without Representation. In Haugeland 1997.

Brooks, R.A., Breazeal, C., Marjanovic, M., Scassellati, B., and M.W. Williamson 1999. The Cog Project: Building a Humanoid Robot. In Nehaniv 1999.

Butterworth, B. 1999. What Counts: How Every Brain Is Hardwired for Math. New York: The Free Press.

Clark, A. 1992. The Presence of a Symbol. In Haugeland 1997.

Damasio, A.R. 1994. Descartes' Error. New York: Avon.

Damasio, A.R., and H. Damasio 1992. Brain and Language. Sci. Amer. (Sep):89–95.

Dennett, D.C. 1975. Why the Law of Effect Will Not Go Away. In Mind and Cognition. 1990. W. Lycan, editor. Cambridge, MA: Basil Blackwell.

——. 1981. True Believers: The Intentional Stance and Why It Works. In Haugeland 1997.

De Sousa, R. 1987. The Rationality of Emotion. Cambridge, MA: MIT Press.

Dretske, F. 1988. Explaining Behavior: Reasons in a World of Causes. Cambridge, MA: MIT Press.

Evans, M.L., Moore, R., and K. Hasenstein 1986. How Roots Respond to Gravity. Sci. Amer. (Feb):112–19.

Haugeland, J. 1981. Analog and Analog. In Haugeland 1998.

——. 1985. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.

——. 1989. Representational Genera. In Haugeland 1998.

——. 1994. Understanding: Dennett and Searle. In Haugeland 1998.

——. 1998. Having Thought. Cambridge, MA: Harvard University Press.

Haugeland, J., editor, 1997. Mind Design II. Cambridge, MA: MIT Press.

Hilario, M. 1997. An Overview of Strategies for Neurosymbolic Integration. In Sun and Alexandre 1997.

Hoppensteadt, F.L., and C.S. Peskin 1992. Mathematics in Medicine and the Life Sciences. New York: Springer-Verlag.

Küppers, B. 1990. Information and the Origin of Life. Cambridge, MA: MIT Press.

Macnamara, J. 1986. A Border Dispute: The Place of Logic in Psychology. Cambridge, MA: MIT Press.

Minsky, M. 1986. The Society of Mind. New York: Simon & Schuster.

Moravec, H. 1999a. Robot: Mere Machine to Transcendental Mind. New York: Oxford University Press.

——. 1999b. Rise of the Robots. Sci. Amer. (Dec):124–35.

Nehaniv, C.L., editor, 1999. Computation for Metaphors, Analogy, and Agents. Berlin: Springer-Verlag.

Newell, A., Shaw, J.C., and H.A. Simon 1960. Report on a General Problem-Solving Program. In Information Processing: Proceedings of the International Conference on Information Processing, 15–20 June 1959. Paris: UNESCO.

Newell, A., and H.A. Simon 1976. Computer Science as Empirical Inquiry: Symbols and Search. In Haugeland 1997.

Nomura, M. 1984. The Control of Ribosome Synthesis. Sci. Amer. (Jan):102–14.

Olson, D.R. 1994. The World on Paper: The Conceptual and Cognitive Implications of Writing and Reading. Cambridge: University Press.

Omori, T., Mochizuki, A., Mizutani, K., and M. Nishizaki 1999. Emergence of Symbolic Behavior from Brain-Like Memory with Dynamic Attention. Neural Networks 12(7–8):1157–72.

Pattee, H.H. 1973. Physical Basis and Origin of Control. In Hierarchy Theory. New York: George Braziller.

Pfeifer, R., and C. Scheier 1999. Understanding Intelligence. Cambridge, MA: MIT Press.

Plate, T.A. 1997. Structure Matching and Transformation with Distributed Representations. In Sun and Alexandre 1997.

Ridge, R.W., and F.D. Sack 1992. Cortical and Cap Sedimentation in Gravitropic Equisetum Roots. Amer. J. of Botany 79(3):328–34.

Rosenberg, J.F. 1990. Connectionism and Cognition. In Haugeland 1997.

Rosenblueth, A., Wiener, N., and J. Bigelow 1943. Behavior, Purpose, and Teleology. Philosophy of Science 10:18–24.

Scassellati, B. 1999. Imitation and Mechanisms of Joint Attention: A Developmental Structure for Building Social Skills on a Humanoid Robot. In Nehaniv 1999.

Schaffner, K.F. 1993. Discovery and Explanation in Biology and Medicine. Chicago: University Press.

Schopenhauer, A. 1969 [1818, 1844]. The World as Will and Representation. E.F.J. Payne, translator. New York: Dover.

Simmons, P.J., and D. Young 1999. Nerve Cells and Animal Behavior. 2nd ed. Cambridge: University Press.

Sun, R., and F. Alexandre, editors, 1997. Connectionist-Symbolic Integration. Mahwah, NJ: Lawrence Erlbaum.

Tattersall, I. 2000. Once We Were Not Alone. Sci. Amer. (Jan):56–62.