Sunday, 8 June 2014

Web Semantics

University of Edinburgh
School of Informatics




OVERVIEW: Under what conditions does the Web count as a part of your own mind? We discuss the conditions under which cognitive extension and integration can be upheld, and inspect these in light of the Web. We also argue that this ability to integrate the mind into media such as the Web is inherently social, insofar as it involves interaction with both technological scaffolding and other humans. There are also many cases where external media like the Web are not actually integrated cognitively, but simply serve as a way to co-ordinate intelligent problem-solving via distributed cognition. Yet distributed cognition should not be underestimated, as it can serve as a stepping stone to a wider kind of cognitive integration: collective intelligence. Finally, we inspect the impact of the Web — via phenomena like tagging, social media, and search engines — on traditional notions of language and semantics.


READINGS:
    Hui, Y., & Halpin, H. (2013). Collective individuation: the future of the social web. The Unlike Us Reader, 103-116.
    Halpin, H., Robu, V., & Shepherd, H. (2007, May). The complex dynamics of collaborative tagging. In Proceedings of the 16th international conference on World Wide Web (pp. 211-220). ACM.
    Halpin, H. (2013). Does the web extend the mind? In Proceedings of the 5th Annual ACM Web Science Conference (WebSci '13). ACM, New York, NY, USA, 139-147.




41 comments:

  1. Dror, I. and Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror, I. and Harnad, S. (Eds) (2009): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins

    1. Thanks for posting this citation!

    2. Yes, thanks a lot!

  2. I don't think that any tool referred to as an "extension of the mind" is different from any other, much simpler tool (e.g. glasses). Such tools only change the quantity of data that we have access to, without any qualitative impact on the "mind". Even very advanced and complex tools, such as a bionic eye, only provide data to the mind but are not part of it. This even applies to a real eye. Would the loss of an eye make you lose a part of your mind?

    1. 1) I believe glasses do have a qualitative impact on the mind (or on our cognition). If we can see something properly we will act differently towards it, in turn altering our cognitive output.

      2) You argue that an eye only provides data to our mind. This argument can become homuncular. The eye provides data to the primary visual cortex, which in turn provides data to secondary visual areas, tertiary areas, and finally brain regions known as associative cortex. Where is the divide between simply providing data and contributing to the mind?

      This argument, however, can go the other direction too. If the light reflected from a tree provides my eye with data, are the tree and the light part of my mind? They might as well be. Perhaps our mind is our body plus everything we perceive in our environment. Under these circumstances, losing an eye would affect your mind (although the terminology of losing part of the mind is debatable).


    2. Harry Halpin: The example of the bionic eye was deployed in order to help prime us for the Google Glass example, as a mid-point between the "dispositional beliefs" of the notebook and the possible cognitive extension proposed by Google Glass. Normal glasses and bionic eyes are "extended perceptual devices" insofar as they help reveal new "information" to our optic array, but lack the "mark of the cognitive" insofar as they do not involve language and memory (i.e. representational information of non-present objects). However, the stream of information provided by Google Glass seems much more likely to qualify as cognitive - and so part of an Extended Mind!

  3. Some arguments strongly defend the idea of "the Web as an extension of our mind", but there is a 'but': my question is about the accessibility and availability of this extension. Are we sure we can consider the Web a readily accessible extension whenever we need it, wherever we happen to be?
    Another point is usability: if I can't, or don't know how to, use these external tools, that makes the extension available only to a part of the population…

    1. I think Harry argued that the web is not quite an extension of the mind yet, because it is not always available and reliable. If only some people used the Web (under the 4 conditions Harry talked about) and others didn't, then yes, only some people would have 'extended' their mind to the Web.

    2. Harry Halpin: Yes, cognitive extension in current Web-based cases would likely involve training. The Web is not just a wire, but a set of embodied capabilities. I'd look at the work of Engelbart on this. However, the general trajectory seems to be to make these kinds of cognitive extensions over the Web more accessible.

  4. An emphasis on the integrated external cognitive apparatus being required for life or survival is very interesting in light of the concept of autopoiesis. An integrated system required for the sustenance of life and the demands of a metabolism constitutes not only a living system, but also a cognitive system. As it stands, few of the technologies that I carry in my pocket seem essential to my project of living. The best they do is facilitate the tasks of living. I'm skeptical about Google Glass being a true cognitive extension, as it stands, but a future invention of Google's, a contact lens that tracks blood sugar levels to inform diabetics about their metabolic state, seems to me more like a cognitive extension in the survival sense. So if truly integrated and essential cognitive extensions aren't here yet, they may soon be.

    For the lenses see http://www.theglobeandmail.com/report-on-business/international-business/us-business/novartis-google-to-make-blood-sugar-tracking-contact-lens/article19609320/

    1. Harry Halpin:

      Hard to say - I'm not sure I'd endorse a "Robinson Crusoe" version of life. In general, it seems we are becoming more technologically embedded for living. For example, people are less likely to know how to grow food, and rely on technically-mediated logistical food chains. With "smart cities" and "Just in time" production, this may increase. If someone took away your technical scaffolding, could you survive?

      Good point re contact lens, and I generally agree that we haven't reached collective intelligence or widespread cognitive extension *yet*, but we could.

  5. Could we say that, depending on how the Web is used, we could or could not consider it an extended mind?

    1. Harry Halpin:

      Yes, the point of the talk and the "Does the Web Extend the Mind?" talk is that the *entire* Web isn't a cognitive extension, but only portions of it under fairly hard-to-attain conditions.

  6. Dear Harry, thank you very much for your great presentation! Two questions: (1) Is it possible to give a referential or Fregean definition of the usefulness of information in the resolution of a problem? (2) Is it possible to have a formal version of it in description logic and ontologies?

    1. Harry Halpin:

      1) "Fregean" in terms of truth values where the truth values are indexed to problems? Perhaps, and a variant of this could be claimed to be done in search engine notions of "relevance", without using such a heavy-duty notion as truth. I'd look at the work of Dan Sperber on "Relevance" for an attempt to work on relevance in a Fregean context.

      2) No-one has formalized relevance in a Sperber-sense in terms of description logic and ontologies, but it seems notions of Fregean "truth" apply to the *semantic model* of any language, and so meta-modelling the truth conditions of a model gets a bit recursive. See the work of Brian Cantwell Smith on LISP.

  7. You said that the Semantic Web is like ordinary language, that it is a social construction. What do you mean? It doesn't seem to me that the Semantic Web has been built by people; instead it has been created by engineers and computer scientists.

    1. Harry Halpin:

      Are you implying engineers and computer scientists aren't people? :)

      Indeed, most ontologies are not automatically constructed, but constructed via some process of negotiation and discussion. For example, the popular ontology FOAF was created by Dan Brickley, but has a mailing list for comments. Dan Brickley now manages the ontologies in schema.org for Google, which are also produced by discussion.
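      For illustration, a FOAF description is at bottom just a set of subject–predicate–object triples. Here is a minimal sketch in Python using plain tuples rather than an RDF library; the person URIs are made up for the example, while foaf:name and foaf:knows are real FOAF terms:

```python
# A toy triple store: FOAF-style data as (subject, predicate, object) tuples.
# The example.org person URIs are hypothetical; foaf:name and foaf:knows
# are actual properties from the FOAF vocabulary.
FOAF = "http://xmlns.com/foaf/0.1/"

triples = [
    ("http://example.org/alice", FOAF + "name", "Alice"),
    ("http://example.org/alice", FOAF + "knows", "http://example.org/bob"),
    ("http://example.org/bob",   FOAF + "name", "Bob"),
]

def objects(subject, predicate):
    """Return all objects matching a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("http://example.org/alice", FOAF + "knows"))
# → ['http://example.org/bob']
```

      The point is that the vocabulary (which predicates exist and what they mean) is the socially negotiated part; the data model itself is just triples.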

  8. Did I understand the slide about the 'ontological turn' correctly, that 'collective intelligence' is not something new to the world, since we already live as part of a collective intelligence (considering the collective construction of concepts and of every human artefact ever created)? Or is that my biased thinking, because this is what I think is true?

    1. Harry Halpin:

      Yes, we already are, and always have been, collectively intelligent - even humans are collectively intelligent ensembles of biological cells. However, I argue that with the Web we will get *new* collective forms of intelligence we may not recognize, because we lack the cognitive capabilities to recognize them. That's the ontological turn: the intuition that there is a collective intelligence is just another way to view a new ontological being that is just as real as humans, though we don't quite grasp the details yet.

  9. Thank you for an inspiring talk!
    One way to understand cognition is as operations of selection made by the cognitive agent. More specifically, it is selecting for relevance, where only information relevant to the agent (based on its internal representations and value system) is selected. In this sense, the Web clearly extends our cognition, as it not only provides us with additional information but is deeply integrated into the process of extracting relevant information. A trivial example is of course the search engine, but this is definitely not the only one. Do your reflections on Web-generated ontologies support this approach?

    1. Harry Halpin:

      I agree that cognition has as part of it the selection of relevant information from the environment, so I agree with what you said for Google. For Semantic Web ontologies, it's much harder - most have been shown to be not very useful. However, for a real success look at schema.org. It seems interesting that the schemas that are successful are information-dense and "low-level" - think music songs rather than the study of Being/Time/Space.
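      To make the "low-level, information-dense" point concrete, here is a sketch of what schema.org markup for a single song looks like, built with Python's json module. The song and artist names are invented; MusicRecording, byArtist, and duration are real schema.org terms:

```python
import json

# A hypothetical song described with schema.org's MusicRecording type,
# serialized as JSON-LD. Concrete, low-level facts (title, artist, length)
# are exactly what such schemas capture well.
song = {
    "@context": "https://schema.org",
    "@type": "MusicRecording",
    "name": "Example Song",
    "byArtist": {"@type": "MusicGroup", "name": "Example Band"},
    "duration": "PT3M45S",  # ISO 8601 duration: 3 minutes 45 seconds
}

print(json.dumps(song, indent=2))
```

      Contrast this with an ontology of Being or Time: there is no comparably crisp set of machine-checkable facts to encode, which may be why the "low-level" schemas succeed.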

  10. I'm not sure I caught your refutation of the second objection you listed (the Heideggerian/Rowlandian one, that these are cognitive extensions but we need some sort of human-ness and intentionality at their core). You say in the overview of your paper that "this ability to integrate the mind into media such as the Web is inherently social, insofar as it involves interaction with both technological scaffolding and other humans." You talked about Web semantics beyond and without the human, so is it only the intentionality that you disagree with?

    1. Harry Halpin:

      I was arguing (all too briefly) that collective intelligence currently as we know it involves humans and technology. Does it necessarily involve humans? I'm not sure. For example, we often attribute intentionality to animals. If we think that they have genuine intentionality (which I do), I don't see why intentionality - and even "experience" and "feeling" - have to be only attributed to humans. Other non-humans might have them, we don't know and might not be able to know.

  11. Cette présentation me fait penser que l’esprit ne peut plus être située seulement dans la tête d’une personne; elle serait également étendue dans le monde externe. Si la esprit peut s’étendre à l’extérieur, alors je crois qu’il peut également extraire l’information externe et l’intégrer dans lui-même. Avec ça, nos pensées peuvent être grandement affectées par ces facteurs externes (même si c’est notre cerveau qui prend les décisions). Ceci peut avoir un impact très négatif dans la société.... si le WEB contient des informations néfastes, alors elle peut grandement affectée la personne. On remarque déjà des exemples de cette conséquence.

    Translation :
    This presentation makes me think that the mind can no longer be located only in a person's head; it would also extend into the external world. If the mind can extend outside, then I believe it can also extract external information and integrate it into itself. With that, our thoughts can be greatly affected by these external factors (even if it is our brain that makes the decisions). This can have a very negative impact on society... if the Web contains harmful information, it can greatly affect a person. We already see examples of this consequence.

    1. Harry Halpin:

      Yes, I agree a feedback loop between the mind and world exists, and the Web can have a negative effect. Stiegler has written well on this; I'd suggest looking at his work. The essence is most likely neither good nor bad, but dialectical.

  12. Cognition is the umbrella for problem-solving, memory, and other processes. How does consciousness come into play with respect to distributed cognition and collective intelligence?

    1. Robert Thibault just asked a very similar question (I want to give credit, I don't mean to plagiarize)

    2. Harry Halpin:

      I see no fundamental problems with extended phenomenology, or even with consciousness being heavily external. Examples include shared emotional states in music concerts, or collectively intelligent mechanisms "making conscious" new groups or clusters via the use of machine-learning visualization.

    3. Our only example of consciousness is our own experience. There is no causal explanation for consciousness and thus consciousness is out of reach for the scientific method. Unless something is acting very much like us, why would we call it conscious? A crowd at a music concert does not look like a human. If we call a crowd conscious, surely we must assign consciousness to many other entities. This becomes a philosophical debate.

  13. What's the point of stipulating a condition for portability if you also state that a human and their cognitive extension need only be temporarily bounded?

    1. Harry Halpin:

      Portability can be thought of in a "wide" sense, not a spatial sense. For example, accessing Google via my smartphone or Google Glass is fine in terms of cognitive extension even though Google's servers are in California and you are in Montreal, because the low latency via the smartphone makes those capabilities of the Google server portable to a particular human agent.

  14. In your paper "collective individuation" I saw you reference the work of G. Simondon. Can you say more on collective individuation and how you reflect on Simondon's work on the issue?

    1. Harry Halpin:

      Simondon deals with technological and human (what he calls "psychical" in translation, though it would be better translated "psychological") individuation. Individuation is how we determine what ontologically counts as an individual. So it's quite relevant work, although pre-Internet. Read Stiegler for a more up-to-date analysis.

  15. Someone asked about autopoiesis and you said collective intelligence was fleeting and unstable. An unstable state usually tends toward stability. You also said collective intelligence was not some sort of Singularity. Will collective intelligence tend away from the Singularity then, toward the separation of man and machine?

  16. Great talk, quite interesting! My question for Dr. Halpin concerns his use of the notion of autopoiesis to account for collective intelligence. This notion is traditionally associated with a very strong notion of boundary. Maturana and Varela defined an autopoietic system as a system that dynamically produces itself, notably by producing a boundary that constitutes a relatively stable and robust differentiation between the system and its environment. The enactivist literature has recently moved to concepts like autonomy and adaptivity, notably under the impetus of such thinkers as Di Paolo, concepts which seem more flexible. My question for Dr. Halpin is whether or not he thinks that the notion of autopoiesis is flexible enough to do the heavy lifting required to explain collective intelligence.

    1. Harry Halpin:

      Personally, I think the concept of autopoiesis is more fully developed than Di Paolo's work on autonomy (as shown by its deployment by Luhmann for social systems), and it's much older - the key is that autopoiesis deals with creating the boundaries needed by (perhaps non-natural) kinds in science. Thinking through the connections between, say, Di Paolo's work on autonomy and Yochai Benkler's work on autonomy still very much needs to be done.

  17. Outstanding talk. Thanks!

    In the case of web-extension, I think we get a good feel for how it enables new domination dynamics. On one hand, the potential of cognitive extension gives its users a definite advantage over those who can't afford it – that's the so-called technological gap. On the other, this advantage (or the danger of not embracing it, for one's career, etc.) creates strong incentives that can make a population captive to a technology (a bit like farmers becoming captives of Monsanto crops).

    The dangers of the Web for users and non-users alike are linked to the limits of its universal access – it opens the possibility of abuse, both through the alienation of the technologically extended population by those who provide the extensions, and through the domination of those without access by the technologically extended.

    1. Harry Halpin:

      Agreed, that's why we're running the "Web We Want" campaign, in order to maximize access and training:

      https://webwewant.org/

      In terms of becoming captive to the technology, the point of the Web is that it's an open platform not owned by a single entity (unlike, say, Google, Facebook, or the older pre-Web AOL/CompuServe).

  18. This comment has been removed by the author.

    1. Harry Halpin:

      IRC is still very popular in technical circles like W3C.

      We did some work on the PhiloWeb conference series:
      http://web-and-philosophy.org/conferences-philoweb/

      There's also a mailing list, although no IRC yet :)
      http://www.w3.org/community/philoweb/

  19. I think Menary's point about the constitutive version of the extended mind hypothesis is helpful here: functions external to the skin and skull can constitute a process of mind. It seems obvious to me that the web extends the mind. One can accomplish tasks that one could not without the web, which points to a synergy with social machines.

    One cited counter-argument is that the feeling is different (e.g. it does not feel like recollection when we use a notebook to access information). But I don't see this as a problem. They are both memory processes, but they have functional differences, which leaves room for qualitative differences in feeling. For example, because a notebook is not embedded within an associative memory network with automatic fast access, we must use meta-memory to determine the physical location of the notebook and initiate the goal of retrieving it, followed by a transduction of the information through vision.

    I agree that the four criteria proposed by Clark and Chalmers may be more than sufficient for an extended mind. Extensions of mind are constitutive even if they are transient and the experience is not exactly the same.
