Sunday 8 June 2014

What is Cognition, and How Could it be Extended?

ROBERT RUPERT
U Colorado & U Edinburgh
Department of Philosophy

VIDEO



OVERVIEW: Cognition is the overarching natural kind or property that distinctively contributes to the production of the proprietary phenomenon investigated by cognitive science, that is, intelligent behavior. On the ground, cognitive-scientific practice relies most fundamentally on modeling. Taken together, these two observations suggest a way to identify what it is for a process or state to be cognitive: abstract from the variety of forms of successful cognitive-scientific modeling. The central theoretical construct of cognitive science, the one common to all successful forms of cognitive-scientific modeling, is the relatively persisting, integrated system that moves through the world managing the agent's interaction with the environment when the agent behaves intelligently. I characterize the relevant form of integration more precisely, then ask (1) whether humans currently function as components in cognitive systems that include more than individual humans and (2) whether the idea of an integrated system can help us to decide whether to count as cognitive the processes occurring in creatures other than humans.

READINGS:
    Rupert, R. D. (2011). Cognitive systems and the supersized mind. Philosophical Studies, 152(3), 427–436.
    Rupert, R. D. (2009). Cognitive Systems and the Extended Mind. Oxford University Press.
    Rupert, R. D. (2013). Memory, natural kinds, and cognitive extension; or, Martians don't remember, and cognitive science is not about cognition. Review of Philosophy and Psychology, 4(1), 25–47.


32 comments:

  1. The definition of cognition seems somewhat circular, because the identification of intelligent behavior requires cognitive competence, so only a cognitive system can identify what would count as a cognitive system. This does not render the definition useless, of course, but it perhaps requires some additional constraints, for example: what would be the most primitive cognitive activity? My guess is that such a primitive activity would be selection. Selection stands at the basis of all intelligent behaviors, but I am not sure that this is not a trivial assertion. So I would propose selection for relevance, where relevance is derived from the structure and the history of the cognitive agent. What do you think about this path of reasoning?

    Replies
    1. I see your point. It is strange that we have to use cognition to theorize about or study cognition. But, there doesn’t seem to be an alternative. Any theory of mind or cognition that we propose will be one that is proposed by a mind or a cognitive system.

      I like your ultimate path of reasoning. The most fundamental cognitive mechanisms are differentially sensitive to some properties, in contrast to others, and this could be seen as a kind of selection. I think the set of mechanisms that constitute the human cognitive system contains many mechanisms that display this kind of differential selectivity. I'm a little uncomfortable talking about relevance, because that term might suggest that the subject somehow sees (as a mental or cognitive act) that the properties in question (in the environment) are relevant. And as an account of the roots of cognition, that would be circular! But, there are other ways to understand relevance, which you may have in mind. For example, mechanisms that are components of cognitive systems typically have their sensitivity altered by experience – by failure, for example – which is, metaphorically speaking, a way for the world to tell the organism that it was detecting and responding to something not sufficiently relevant (given the circumstances).

    2. Thanks. Indeed, what I meant by "relevance" was not the product of the said cognitive system but, more generally, what confers increased fitness in evolutionary terms. In this sense relevance is what defines a viable cognitive system, and not the other way around. Cognitive systems which fail to extract relevance in terms of increased fitness will not survive for long.

  2. What do you mean by intelligent behavior? Is any non-human animal capable of intelligent behavior, of cognition?

    Replies
    1. Animals surely seem to exhibit intelligent behaviour to me, which, according to the definition given, means that they have cognition (?).

    2. I agree with you. But according to the definition Robert Rupert gave, animals need to have cognitive states that co-contribute to intelligent behavior in order to have cognition. So the answer must depend on the definition of intelligent behavior I guess.

    3. Like vveitas, I think that animals have intelligent behavior, but they have limited cognition relative to their evolution (we can make an analogy with a human child).

    4. It's unclear to me whether nonhuman animals exhibit intelligent behavior, but here are some thoughts about how we could, and eventually probably will, answer that question. Cognitive science begins with the hunch that a certain range of observed phenomena (call these ‘forms of intelligent behavior', if you like, but that's merely a convenient label) are produced by the same kind of process – that there is a unified explanation of these phenomena. What are the phenomena? The list is long, but it includes such things as experimental manipulation of the environment and the creation of technologies, language use, the curing of diseases, the construction of large structures that bear a striking resemblance to the drawings that guided the production of those structures, and many more. Then, cognitive scientists try to give a unified explanation of these phenomena. This is not guaranteed to work, in which case, there may be no single kind Cognition (and thus no unified family of phenomena, the forms of intelligent behavior; instead there would be nothing more than a list or catalogue).

The phenomena to be explained (the explananda of cognitive science) don’t, in the first instance, include behavior of nonhuman animals. But, there’s lots of room for revision along the way, just as there is in standard scientific enterprises. We might discover that some phenomenon we’d been ignoring is actually best modeled using the same framework we were using to model the phenomena we were paying attention to – in which case we would regroup the phenomena – perhaps counting something as intelligent behavior that we hadn’t previously thought of as intelligent behavior. Perhaps we’ll find out that the best models of the production of paradigmatic cases of intelligent behavior also apply to the case of nonhuman animals. If so, then the animals are cognizing. If the best models of nonhuman animal behavior turn out to be quite different (how much difference between models is relevant is an interesting question), then the animals are doing a different kind of thing (whatever you call it) from what humans are doing when they build theories that guide experimental manipulation and technological applications, use language, cure disease, design and construct buildings, etc.

  3. On a entendu les comportements intelligents d'une personne sont influencés par des contributeurs. Ces contributeurs peuvent être cognitif ou non-cognitif. Je ne crois pas avoir compris comment on différencie un contributeur cognitif et non-cognitif.

    Translation:
    We were shown that the intelligent behaviour of a person is influenced by contributors. Those contributors could be cognitive or non-cognitive. I don't think I understood how we differentiate a cognitive contributor from a non-cognitive one.

    Replies
    1. Yes, I think that's the crux of the issue. There are lots of contributors (the sun, Aristotle, the birds, the trees, our computers, our friends, and so on), and the puzzle is how to divide them, in a principled way, into two categories. If we can't do that, then, when the proponent of extended mind or extended cognition says that cognition extends into the world, s/he's simply pointing out that many causes of intelligent behavior are out in the world; but that would not be very interesting as a general claim, because that's been agreed to by all parties, all along.

  4. Do you believe that cognitive systems develop in time? If so, humans as cognitive systems may have become more distributed, because of sociotechnological development (Internet, web, Google, mobile phones, newspapers, etc.), which kind of indicates that P#1 may not be relevant. The fact that CogSci methods were successful historically does not mean that they will be successful in the future. But again, it does not mean that CogSci was taking a wrong approach to things all along. In any case, P#1 may not be a good basis for arguing about extended cognition...

    Replies
    1. It depends on the role of P#1. I claimed that its purpose is just to show that there's AT LEAST ONE internal cognitive system. Seen that way, the technologically aided introduction of other, extended ones wouldn't affect the argument. After all, there being at least one internal cognitive system is consistent with there being loads of extended cognitive systems as well.

      But, perhaps what you're thinking is that, as technological wonders have appeared, the integrity of the internal system has been lost, in which case, yes, that would be a problem for my argument. I don't think that's happened, though, as a matter of fact. People lose their phones, and their laptops run out of batteries, and they continue to exhibit flexible intelligence and problem-solving capacity (for example, the capacity to figure out a way to find an outlet to recharge the laptop battery!). So it looks as if, even in the present day and age, the positing of an internal cognitive system will still do lots of explanatory work. The later parts of the argument are supposed to show that adding any more systems is gratuitous, from a causal-explanatory standpoint.

  5. Dear Robert, Thank you very much for your very interesting presentation ! With Cognition and Science, you questioned the idea of “extended mind” where a conception of distance is a component. I would like to ask you where you can localize epistemology and metaphysics in this dynamic and topological discussion about knowledge(s).

    Replies
    1. Say more about the notion of a conception of distance being a component. I don't think I understand that idea.

  6. So, if external factors that contribute to the production of intelligent behaviour are not a part of cognition, what are they in your opinion? How can we define them?
    Let's forget for a second the suggestion of new technological devices and the web as an extension of our cognitive abilities. What about the cultural and social network in which humans have been embedded since we began to exist? Isn't it true that this also shapes the way we behave and makes important changes in our brain connections? Can that be seen or has it been defined before as a type of extended cognition?

    Replies
    1. Interesting question Fhernandhah. Say someone comes from a culture where everybody is illiterate. They'll never learn to read, whereas someone from a different culture will learn to read. Since reading is clearly a cognitive ability and the culture, or environment, that one grew up in decides the fate of one's literacy, is culture part of cognition?

    2. I do think those external social factors are important, as are external nonsocial factors (the tree in front of me that causes my perceptual experience). But I don't see the logic in going from

      B caused A
      to
      therefore, B has distinctively A-type properties.

      The big bang caused everything, but that doesn't give the big bang cognitive properties.

      I want to make loads of room in our theorizing about cognition for the contributions of society. I don't know of anyone who denies (or did deny at any point in the history of cognitive science) that, e.g., your family upbringing causes you to have many of the kinds of thoughts you have. But since that's always been acknowledged, I'm not sure what the innovation is to say now that families are cognitive entities. Why isn't the status "causally affects cognition" good enough? Why should we want to say also that the thing doing the causing IS ITSELF cognitive?

    3. Is it not the case that the combination of contributions of each individual member in a family creates a dynamic that has its own emergent properties? How does this differ from any other kind of network to which we ascribe intelligence? Moreover, internal family systems theory proposes that individuals internalize this family dynamic, which influences thoughts, feelings, and behaviours (personality). So, according to this framework, not only does the family unit have a causal role in intelligent behaviour, but the unit itself is intelligent in much the same way that we as individuals are.

  7. You use the concept "intelligent behavior" many times. I think this concept is subjective and depends on individual conditions. Yeah! It's true that machines will predict intelligent behavior. However, your concept is not clear to me. Could you give me more details on how you describe it (intelligent behavior)? What is it for you, in a few words? Thank you ROBERT RUPERT.

    Replies
    1. In a nutshell, it is, at least at the outset, just a list of observable phenomena on which I place an empirical bet. I think those phenomena will, when our scientific work is through, be best accounted for using the same sorts of models. In other words, there's a range of observed behavior (people engaging in conversation, curing diseases, building large buildings the structure of which matches the structure of the drawings with which they began, etc.) that I'd bet will have a unified explanation (or most of which will have a unified explanation; see my remarks above about the flexible nature of the scientific process). If not, then there is no such thing as intelligent behavior in any deep sense (instead, just a disunified collection of phenomena we place under the heading 'intelligent behavior'). In that case, too, we would find out that there is no unified kind cognition.

  8. I found Professor Rupert’s talk quite interesting. I have two questions. First, what specifically is meant by “intelligent behavior”? Second, what is a body? If a body has been augmented by technology (e.g., replacing lost eyesight with a camera), do we have two cognitive systems or only one?

    Replies
    1. I agree with Max that the concept of intelligent behaviour needs a precise definition if a definition of cognition relying heavily on the term is to be of use. Does domain specific intelligent behaviour count? In that case, could not many a robot already be considered a "cognizer"?

    2. I disagree. I don't think this is how scientific investigation works. We don't have to define, for example, electrical phenomena to develop a theory of electricity. At the outset of the inquiry, we simply say that here's a list of phenomena -- lightning in the sky, hair standing up by curtains, sparks from your brush (whatever) -- and we call them 'electrical'. Then we bet that they'll have a unified explanation, and we do some experimental manipulation, produce some data, and model the data. If we find that all or most of the phenomena we began with have a unified explanation (one kind of model or family of models accounts for all of them or most of them), then we say that we've discovered what electricity is.

      It's the same thing in the case of cognitive science. All you need at the outset is a list of observable phenomena (people having conversations, navigating their environments, building architectural marvels, writing books, and so on) that you suspect will all be susceptible to the same sort of explanation. Then you experiment, manipulate, produce data, model the data, and IF (that's a big if) all or most of those observable phenomena are best explained by a certain kind of model or family of models, then you've found out what cognition is. No need in either case (electricity or cognition) to define something that's common to all of the observable phenomena taken as explanatory targets. And this is the standard case in science, I would maintain.

  9. Thanks for the great talk Robert. I was happy that you confronted the problem of defining cognition head on.

    Has anyone attempted to define a cognizer yet? I feel this is an alternate angle on the same question. This may, however, translate to ‘what constitutes a mind’ and ‘how far is our mind distributed’ (the extended-mind hypothesis).

    I would argue that a cognizer is a body plus all the environmental data it processes. It’s hard to draw the cognitive/non-cognitive line between environment and body because we could equally draw this line between bodily sensory receptors and their initial brain projections. Or, between the primary sensory cortices and association cortices (involved in integrating data from multiple senses and memory).

    We need to draw a similar cognitive/non-cognitive line for cognitive output. Clearly our body is cognitive because it carries out intelligent behaviour. Is the book we write in, or the computer we search with, part of our cognitive output as well?

  10. I liked your definition of cognition. It reminded me of the way that claims are constructed in patents - you left it general enough to accommodate many potential forms of cognition, but it was still precise enough that it captures the essence of what cognition might be.

  11. I get the impression you have a peculiar idea of the role of natural kinds and ontologies in science. I tend to think they are tools for researchers – you seem to think they are constraints they should respect. Did I get it wrong?

    Replies
    1. Yes. The slide about the mark of the cognitive was meant to try to push the discussion in the other direction. I agree with you that researchers use whatever they choose to get the job done, produce interesting effects, etc., whereas the theorizing I'm engaged in is what we do "after the fact." It's a matter of looking at what the researchers have done and trying to explain why they were successful – for example, which natural kinds are such that their existence would account for the researchers' successes. But, that's no prior constraint on the researchers (or if it is, it's a constraint the world is placing on them, assuming there is an external world, which I do -- not a constraint that philosophers are placing on them).

    2. This comment has been removed by the author.

    3. Yes, but keep in mind that any interpretation of a current body of scientific work could do this kind of damage. Maybe 100 years from now people will look back and see that the extended picture was a complete dead end and that the people arguing for an extended view of cognition caused others to waste an enormous amount of their time pursuing it. Anytime someone tries to interpret existing science, this kind of risk exists. But in my case, I want to emphasize the difference between saying, "here's an ontology derived from our best interpretation of current science, which may change, thereby rendering this ontology obsolete," and saying "here's an ontology derived from our best interpretation of current science, and everyone must accept it, regardless of what kind of future work is done in the sciences." The second view represents a confusion about what it means to do philosophy of science, in my opinion (or maybe just a confusion about the dynamic nature of science itself). Keep in mind that the people I'm arguing with claim (at least some of the time) to be doing the same sort of thing I am: interpreting cognitive science as it now exists (after 25 years of dynamical-systems-based modeling, situated robotics, and the like).

    4. Ok, thank you very much for the clarification!

      ————

      Concerning the deleted comment: I apologize, I wanted to edit what I said and didn't notice you had answered. For other readers, what I removed was:

      "This doesn't necessarly makes it harmless. Lakatos said that research spawns from research programmes, which usually come with their own ontology. If you constraint the making of post-hoc explanation, you're also constraining research programme development, and therefore future research."

      and it was followed with an apology about the word "peculiar" in the introduction note (I didn't mean the pejorative undertone, I meant something like "specific").

  12. How about: A mental state is a felt state. The only “components” that are components of a mental state are those that, if you unplug them, there is no mental state: Not that something else is felt, but that nothing is felt.

    So far, this only happens with components that are in the head, not outside.

    Whether that works for a “cognitive” state depends on whether you mean “mental” (felt) by cognitive, or you mean merely internal. If the latter, then, ironically, the boundary between the internal and the external is completely arbitrary: Internalism alone is incoherent.

    Replies
    1. I think the better we understand introspection and how little access it gives us to the theoretically relevant properties of our mental states, the less confidence we should have in our ability to tell which things do or could feel. We might be pretty reliable at telling when we feel things and when we don't, but we shouldn't mistake this for reliability in our judgments about the nature of those felt states (whether they have intrinsic qualitative character, for example, or irreducible phenomenal properties). Maybe we now know, say, that sedating my thalamus heavily knocks me out, but maybe turning off the computer knocks out the computer-me cognitive system (if there is one). How would we know? Now, if in our own experience we were able to introspect the very nature of the feltness of our mental states (not just that we feel something, but the very essence of it as a kind of state), then perhaps we could make these judgments (about various systems) with confidence, but I don’t think we can. Introspection is too weak and easily tricked a tool.

      In some ways, I like the knockout test for being a component, but I think it runs the risk of leaving too much out of the cognitive domain, if it were applied there. Either too few things get to count as cognitive (very little in the brain would count as cognitive – because so many bits of the brain can be removed without their removal's rendering you unconscious). Or, if we broaden our scope to certain forms of intelligent behavior (leaving the restriction to felt states behind), things go in the opposite direction: lots of external stuff will count as cognitive components by the "knockout" criterion. Smash my computer and suddenly I stop typing my response. Knocking out the computer completely shuts down the form of behavior. That would seem to show, by the knockout criterion, anyway, that the computer contains cognitive processing. But I think the knockout criterion is untenable on independent grounds. Generally speaking, it’s not the case that if A depends on B (A would completely stop if B were taken away), then B has properties associated with A. My cognition would completely stop if there were a huge change in gravitational fields, but it’s pretty implausible that gravitational fields thereby have cognitive properties.
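      The knockout test discussed in this exchange is essentially an ablation criterion, and the overgeneration worry can be made concrete with a toy sketch. Everything below is illustrative and hypothetical – the part names and the stand-in "behavior" are assumptions for the example, not anything from the talk:

      ```python
      def exhibits_behavior(parts):
          """Toy stand-in for 'the system produces the target behavior'.
          In this toy, the behavior requires both 'thalamus' and 'cortex'."""
          return "thalamus" in parts and "cortex" in parts

      def knockout_components(parts):
          """Knockout criterion: a part counts as a component iff removing
          it (alone) abolishes the behavior."""
          return {p for p in parts if not exhibits_behavior(parts - {p})}

      system = {"thalamus", "cortex", "laptop", "gravity_field"}
      # The toy behavior survives the loss of the laptop or the gravitational
      # field, so only the internal parts pass the knockout test here.
      print(sorted(knockout_components(system)))  # ['cortex', 'thalamus']
      ```

      Note that the verdict depends entirely on how the target behavior is specified: redefine `exhibits_behavior` so that the behavior (say, typing a reply) requires the laptop, and the laptop passes the test – which is exactly the overgeneration worry raised above.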

      I think you’re presenting a false dilemma in the second part of your comment, Stevan. Why are “cognitive = felt” and “cognitive = internal” the only two options? Why not my approach, according to which being cognitive is, broadly speaking, being whatever kind of thing (property, kind, state, process) distinctively accounts for intelligent behavior; after all, that's the explanandum of cognitive science. Even if I'm right about the distinctive contribution of integrated systems, those systems COULD turn out to span the boundary of the organism. I don't equate being cognitive with being felt, but there's nothing about my approach that makes it an arbitrary definition that cognition is internal. It simply looks like for most people most of the time, that’s how it is. And that judgment isn’t based on a definition of cognition as internal; it depends on what theoretical property plays the central dividing role in cognitive science AND where instances of that property are instantiated.
