Sunday 8 June 2014


Towards a Global Brain: 
The Web as a Self-organizing, Distributed Intelligence

Vrije Universiteit Brussel, ECCO - Evolution, Complexity and Cognition research group

VIDEO



OVERVIEW: Distributed intelligence is an ability to solve problems and process information that is not localized inside a single person or computer, but that emerges from the coordinated interactions between a large number of people and their technological extensions. The Internet and in particular the World-Wide Web form a nearly ideal substrate for the emergence of a distributed intelligence that spans the planet, integrating the knowledge, skills and intuitions of billions of people supported by billions of information-processing devices. This intelligence becomes increasingly powerful through a process of self-organization in which people and devices selectively reinforce useful links, while rejecting useless ones. This process can be modeled mathematically and computationally by representing individuals and devices as agents, connected by a weighted directed network along which "challenges" propagate. Challenges represent problems, opportunities or questions that must be processed by the agents to extract benefits and avoid penalties. Link weights are increased whenever agents extract benefit from the challenges propagated along them. My research group is developing such a large-scale simulation environment in order to better understand how the web may boost our collective intelligence. The anticipated outcome of that process is a "global brain", i.e. a nervous system for the planet that would be able to tackle both global and personal problems.
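
As a rough illustration, the sketch below shows the kind of agent-based model described above in strongly simplified form: agents are nodes in a weighted directed network, challenges propagate along links chosen in proportion to their weight, and a link is strengthened when the receiving agent extracts benefit from the challenge. All class names, parameters and update rules here are my own illustrative simplifications, not the actual ECCO / Global Brain Institute simulation code.

    import random

    class Agent:
        def __init__(self, agent_id, skill):
            self.id = agent_id
            self.skill = skill      # which challenges this agent handles well (0..1)
            self.links = {}         # agent_id -> weight (trust) of the outgoing link

        def benefit(self, challenge):
            # Benefit extracted from a challenge: high if it matches the agent's skill.
            return max(0.0, 1.0 - abs(self.skill - challenge))

    def propagate(agents, challenge, start, steps=10, learning_rate=0.1):
        # Propagate one challenge along weighted links, adjusting the links it travels.
        current = agents[start]
        for _ in range(steps):
            if not current.links:
                break
            targets, weights = zip(*current.links.items())
            nxt = agents[random.choices(targets, weights=weights)[0]]
            b = nxt.benefit(challenge)
            # Strengthen the link in proportion to the benefit the receiver extracted,
            # weaken it when the benefit is low (an arbitrary 0.5 threshold here).
            current.links[nxt.id] = max(0.01, current.links[nxt.id] + learning_rate * (b - 0.5))
            current = nxt

    # Toy network: 20 agents with random skills, each with 5 random outgoing links.
    agents = {i: Agent(i, random.random()) for i in range(20)}
    for a in agents.values():
        for j in random.sample([k for k in agents if k != a.id], 5):
            a.links[j] = 1.0

    for _ in range(1000):   # repeated challenges let the link weights self-organize
        propagate(agents, challenge=random.random(), start=random.randrange(20))

Over many such runs, links leading to agents that reliably extract benefit end up with high weights, which is a toy version of the self-organization the overview refers to.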


READINGS:

    Heylighen, F. (2014). Return to Eden? Promises and Perils on the Road to a Global Superintelligence. In The End of the Beginning: Life, Society and Economy on the Brink of the Singularity, B. Goertzel and T. Goertzel, Eds.
    Heylighen, F. (2013). Self-organization in Communicating Groups: the emergence of coordination, shared references and collective intelligence. In Complexity Perspectives on Language, Communication and Society (pp. 117-149). Springer Berlin Heidelberg.

48 comments:

  1. Intuition Pump: Yes, things can be done by a collectivity that cannot be done by individuals, and the collectivity is in that sense more "intelligent" than the individuals -- but does that make the collectivity a mind? Can a collectivity have a migraine? If not, then in what sense is it a mind?

    ReplyDelete
    Replies
    1. This is related to one of my main doubts during the talk. Can the global brain actually perceive?? What does it perceive and how? Is the "collective perception of challenges" actually comparable to the perception capacity of a human mind?

      Delete
  2. The boundaries of a body are much less problematic than the boundaries of a mind. Bodies do things; minds feel.

    ReplyDelete
    Replies
    1. Within the framework of "Enactivism" borrowed by Dr. Heylighen, the distinction between "doing things" and "feeling" falls away because feeling is tied to doing things. The absence of a distinction between mind and body within the context of "Embodied/Situated Cognition" makes the boundaries of a body equally as problematic as the boundaries of a mind.

      In light of your comment, I'd be interested in your criticism of Enactivism or Embodied Cognition.

      Delete
    2. You can't do away with the difference between doing and feeling by just not making the distinction. And of course they're tied together: They're correlated! But the substantive question is how and why. "Enactivism" does not answer that question: it begs it (and it's a bit of a cult, like Gibsonian "direct realism" and Peircean "semiotics"). It's a combination of the blisteringly obvious with a lot of hand-waving, all wrapped in faddish words...

      Delete
  3. "Agents" have "needs" -- but are the "needs" felt, or are they just dynamics (states and state-changes)?

    ReplyDelete
    Replies
    1. If they can't talk and we can't know for sure, but we have reason to think they may feel (i.e., if they behave as if they feel or possess C-fibre analogues), should we not behave towards them as if they have de facto feelings?

      Delete
    2. Only if they pass the (robotic) Turing Test.

      Delete
  4. One can even compare this discussion with technologies such as Apache Hadoop, whose purpose is to create applications distributed over several computers. If one computer fails, another can continue its work. The same holds in everyday life: if one programmer drops out, another can take his place in the work. Nowadays in the workplace, the collective is always favoured over the individual.

    Perhaps all of this can be compared to the collectivity of the brain, though in a simplistic way.

    ReplyDelete
    Replies
    1. Yes, but isn't a mind a conscious entity? And is a collectivity conscious? And if not, in what sense would it be a mind?

      Delete
  5. When it comes to the modeling of social networks as a neural network, some questions arise in my head:

    1. The first is related to what one of the attendees already asked:
    Can the specific needs of agents (people) be actually reduced to mathematical paradigms? What happens with subjectivity and social/cultural constraints? What is important or relevant for one group of agents may not be so for another. I believe this makes the "global needs" very hard to assess.


    2. How can we affirm that the network actually reinforces USEFUL links and rejects USELESS ones? What happens with the enormous amount of spam and incorrect or unverified information that is widely distributed on the web every day, as in the Chinese-whispers effect? All of those are also reinforced by many individual agents. Can we then speak about "collective stupidity" as well? What if the "global brain" is not always as intelligent as it seems?

    ReplyDelete
    Replies
    1. 1. In the model, needs are different for every agent, although they tend to be variations around the most common default. Therefore, they are subjective. The distributed intelligence is successful if all agents manage to satisfy their personal needs. Thus, the collective need is to maximally satisfy individual needs, "the greatest happiness for the greatest number".

      Delete
    2. 2. In the model, a link is reinforced only if the receiver of a challenge propagated along that link derives benefit from it. Since you don't derive benefit from spam, you will not reinforce but rather decrease your trust in the sender of that spam.
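
      A minimal illustrative version of this reinforcement rule (the learning rate, processing cost and numbers below are hypothetical, not the actual model parameters): a challenge whose benefit exceeds the cost of processing it increases the weight of, i.e. the trust in, the link, while spam, which yields no benefit, decreases it.

        def update_trust(trust, benefit, learning_rate=0.1, cost=0.2):
            # Benefit above the processing cost reinforces the link; spam (benefit ~ 0) weakens it.
            return max(0.0, trust + learning_rate * (benefit - cost))

        trust = 1.0
        trust = update_trust(trust, benefit=0.9)   # useful challenge: trust in the sender grows
        trust = update_trust(trust, benefit=0.0)   # spam: trust in the sender decays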

      Delete
  6. Q 1: If we shut down a part of this global brain, do we still have a coordinated network? In our mind, if an area is damaged, the brain moves that function to another area. Could we imagine this ability in the global brain?

    Redha Eltaani - UQAM student

    ReplyDelete
    Replies
    1. Likely, although there will be some nodes that are more "hub-like" and essential for network or subnetwork integrity.

      Here's an article about the property of the brain you mention, called hodotopy, that makes it easy to consider other networks behaving similarly.

      Brain hodotopy: from esoteric concept to practical surgical applications.
      De Benedictis, Duffau H.
      http://www.ncbi.nlm.nih.gov/pubmed/21346655

      Delete
    2. Self-organizing networks, which include both the brain and the global brain, are intrinsically very robust: they tend to recover from a wide variety of local damage by having other parts take over the function of the damaged parts. This is in sharp contrast with most artificial, designed systems, such as individual computers.

      Delete
  7. Q2: Instead of being a powerful tool to solve world problems, the global brain could be used in a bad way: countries could use it to spy, to create conflicts, etc. How can we avoid this kind of use?
    Redha Eltaani - UQAM student

    ReplyDelete
  8. Professor Heylighen has defined a mind as an emergent entity that is goal-directed or intentional, able to tackle challenges, and dynamically emerges from the interaction of different coupled, simpler agents. Does this imply that artefacts like ordinary laptop computers have minds? If so, how do we avoid the problems associated with the famous Chinese Room argument?

    ReplyDelete
    Replies
    1. Good question! I hope Prof. Heylighen will reply!

      Delete
    2. Does your laptop consistently and independently exhibit behaviour that might cause you to take an "intentional stance"? Dr. Heylighen highlighted the importance of enactive capacity for the simpler agents–an ability to act in such a fashion as to cause some change in the environment that we could conceive of as goal-directed or intentional. While laptops alone generally do not behave with such capacity as to constitute an intentional agent, as I understand it, a human using a laptop or running a certain kind of program on the laptop could constitute such an agent.

      The global brain is not dependent on computation alone. It is also in part a dynamical system with appropriate connections to the outside world (with sensorimotor symbol grounding). At least, that's how Dr. Heylighen described it. For this reason, it is not susceptible to the Chinese room argument.

      Delete
    3. I essentially agree with Ishan here. I would add that a laptop does not have autonomous goals or values, therefore it does not have a mind of its own, but merely extends our mind the way a telescope extends our eyesight.

      Delete
    4. Thank you for responding, Professor Heylighen!
      I generally agree with the point you are making here about extended cognition. However, I still have an important reservation. It seems to me that the point you just made (viz., that artefacts merely extend human minds, rather than have a mind of their own) undermines the thesis that the Global Brain has, or dynamically constitutes, a Global Mind. If artefacts such as laptop computers merely extend our mind, much the way instruments such as telescopes do, what prevents us from asserting that the various kinds of artefacts that are components of the Global Brain (e.g., the Web, etc.) merely extend the minds of the human agents that are component parts of that network?

      Delete
    5. Ishan—interesting points as well.
      You mention Dennett’s intentional stance. Recall that he and Haugeland also defend the idea that the intentionality of systems appraised in the intentional stance can be either “intrinsic” or “derived.” A thing like a soda machine is intentional. It responds adequately to my inserting $2 by providing me with a sweet caffeinated beverage (just what I needed!). But this intentionality is derived. The system is only intentional, about something in the world (in this case, about a can of soda), in virtue of various human cognitive systems that are inherently intentional and that are using the machine for their own purposes. Symbolic language is similar to the soda machine case, in that for symbols to mean anything at all, for them to be intentional, there must be another system to interpret those symbols as meaningful. Suppose (for the sake of argument) that we accept cognitive extension; then the problem is just displaced rather than solved, because the whole system (soda machine & human) is intentional only in virtue of one kind of component of the extended system (the person). It is the inherently intentional system that grounds the intentionality of the symbols and the soda machines (and in general of all intentional artefacts)!
      And so, I am not sure much is gained by appealing to the intentional stance. Even if the whole cognitive system (person & laptop) is intentional, the question of whence this intentionality originates, in virtue of what components the system is intentional, is not answered thereby.

      Delete
    6. Having "autonomous" goal-directed behavior is trivial. You can generate it with a toy Khepera robot. Brains can do a lot more than that; and having a mind means feeling, not just having autonomous goal-directed behavior.

      Delete
    7. I agree with Stevan that goal-directed behavior is very easy to implement. Intelligent goal-directed behavior with the emphasis on the subsymbolic, subconscious, intuitive intelligence that determines most of our reactions, on the other hand, requires a very sophisticated subjective experience that implicitly anticipates, associates and values the myriad of potential events, actions or reactions that could follow. That is why all robots up to now behave essentially like stupid zombies. A zombie in the sense of Chalmers could never pass the Turing test, because it would lack this intuitive, affective ability that drives most of our actions...

      Delete
  9. I also agree with the person who pointed out that, in the "global brain", individual agents have minds of their own, in contrast to the human brain. This makes it very complex to analyze as a whole, considering that these "complex agents" have desires and will of their own. They can withdraw or re-enter the network whenever they want. Neurons can not.
    Therefore... are social networks (with this relative "freedom" of individual agents) really comparable to neural networks?

    ReplyDelete
    Replies
    1. Neuron networks can be suppressed or activated. There are also "winner-takes-all" models of individual human cognition that suppose contradicting stances. I agree the brain metaphor is a coarse tool, however: it's likely to miss important aspects of global-sized cognition.

      Delete
    2. As I noted in an earlier reply, self-organizing networks such as the (global) brain are intrinsically very robust or adaptive: they continue functioning even if a sizable part of their components stops responding in the usual way. The same has always applied to social systems, which the global brain still is.

      A good way to understand how such a system can function smoothly even when the agents it is composed of change their mind or become unavailable is the concept of stigmergy: agents contribute to a shared medium (e.g. Wikipedia). When one agent does not do what it is expected to do, others will typically take its place. As long as there are enough agents willing to contribute, the stigmergic system functions very efficiently....
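
      A toy sketch of this stigmergic takeover (the task list, drop-out rate and agent names below are made up purely for illustration): agents act only on a shared medium of open tasks, never on each other, so the work still gets completed even though different agents are unavailable at different times.

        import random

        shared_medium = {f"task{i}": "open" for i in range(50)}   # e.g. wiki pages needing work
        agents = [f"agent{i}" for i in range(10)]

        for step in range(200):
            available = [a for a in agents if random.random() > 0.3]   # some agents drop out each step
            for agent in available:
                open_tasks = [t for t, state in shared_medium.items() if state == "open"]
                if open_tasks:
                    shared_medium[random.choice(open_tasks)] = "done"   # whoever is present takes it up

        done = sum(state == "done" for state in shared_medium.values())
        print(done, "of", len(shared_medium), "tasks completed")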

      Delete
  10. How do subjectivity, human connection, and individual experience come into account in the global brain?

    ReplyDelete
  11. Comments: He used FLOPS in order to connect agents.
    Thank you Stevan Harnad and Petko Valtchev for this interesting summer school.

    ReplyDelete
  12. I'm wondering if anyone got the "empirical" part, with the simulation and the text analysis project (I was still thinking over the things he'd said before that point). It seemed to me that the simulation backed a relatively unproblematic assumption, namely that coordination yields increased benefits. I couldn't get what the hypothesis was for the text analysis project, however.

    ReplyDelete
  13. • Global coordination and global identity are not warranted.
    • In the evolution of life, ecologies comprise a huge diversity of agents. Unity and diversity are fairly balanced across scales.
    • Perception everywhere? Is it good? Should we become completely transparent?
    • What does “beneficent” mean? To whom? And what is “more beneficent”?
    • Neurons are not aware of the brain at large, but human agents can become aware of the Global Brain because of their linguistic and cognitive competence to use higher levels of abstraction. Therefore the analogy of the GB to the human brain is unwarranted! This is an interesting question from the standpoint of cognitive development as a general method. At different scales, agents might present completely different behaviours and interactions. There ARE significant interactions between agents at different scales (see for example: cancer cells that can exploit global body mechanisms to proliferate).
    • Values cannot possibly be aligned in all cases. Conflicts must be accommodated in future GB scenarios.
    • Formalization of challenges will require an elaborate semantic platform and semantic processing.

    ReplyDelete
  14. Talking with Dr. Heylighen after his presentation, he explained to me that ‘Global Brain’ was a term invented before the Web existed. Moreover, he said that the GB has existed for a long time, and that the Web has only increased the abilities of the GB. I found this an important point because human communication is not a new thing. The Web has drastically increased the speed with which information is transmitted; however, I would not expect that speed of communication was the last element missing before a new mind could emerge.

    ReplyDelete
  15. A few people have drawn a separation between humans and the GB, compared to neurons and humans, by stating that neurons cannot feel (i.e., cannot be conscious, have a mind, have intentionality, or whatever other words we want to use). I suggest we define what it means to have a mind before we make this assumption.

    When we talk about the mind of a GB, do we expect it to be anything like our human minds? Do we expect the GB to feel a migraine? I think not. Yet, I don’t think this rules out the possibility of a GB feeling at all. Perhaps a GB can only feel things that we humans cannot fathom. If this were the case, the GB would feel and thus would have a mind.

    We can also apply this logic in the reverse direction. Perhaps neurons can feel feelings that neither humans nor a GB can. This post may seem a little too sci-fi for some, but there is no extant evidence that either a neuron or a GB could feel. Why are we keen to assign mind-status to a GB while stating, as if it were fact, that neurons cannot feel?

    Dr. Heylighen said he believes that different entities (bacteria, humans, GB) have different levels of a mind. Because having a mind is all-or-none (we either feel or we don’t), I prefer the notion that perhaps different entities have fundamentally different minds (they can all feel, but are feeling things incomprehensible to our human feeling).

    ReplyDelete
  16. If consciousness is a continuum, which Dr. Heylighen suggests, then it would be interesting to have a measure of consciousness incorporated into the mathematical model, so that we could see the global mind become more conscious as it develops. That being said, I don’t think that any network which has sufficiently many agents and connections will necessarily become a mind. I think the right kinds of actions have to be involved too. For example, we know that the network formed by neurons in the human brain is able to categorize, and perhaps this is the crucial function that allows a mind to develop, since categorization is a general enough function that the system can start to categorize its own behaviours too (but I’m just speculating, here).

    But I definitely agree with Dr. Heylighen that a distributed intelligence that displays omnipresence, omnipotence and omnibenevolence would be great for humanity. I think the web could be a good host for this intelligence.

    ReplyDelete
  17. Sorry for this late comment, but I found this presentation very interesting. I particularly appreciated the analogy between God and Google. It seems that there is a common ideology that lasts through time and that humans wish to realise.

    I also think that there are two presuppositions underlying the global brain: (1) people act as in a win-win situation, but that is not always the case; (2) the aim of every human is to work, but what about people who are just looking to relax?

    ReplyDelete
  18. Many of the comments are about whether the global brain needs to have some form of feeling or subjective experience in order to be considered a mind. I conceive of feelings as a combination of preparedness or expectation for what might follow the present situation with an evaluation of these potential follow-ups as positive or negative. In that sense, we can see the global brain developing some kind of feeling.

    There already exist very coarse measures of this collective feeling, for example in Twitter mood measurements. According to the research of my former PhD student Johan Bollen, a positive mood is predictive of an increase in the value of the stock exchange. In that sense, it can be seen as the global brain being prepared to take more risks, and therefore being more willing to invest more in stocks when it feels good.

    This is of course very preliminary, but illustrates what distributed feelings might mean in practice. On the other hand, many people, including Stevan, would object on grounds of principle that something that is not a human or animal could really have feelings, and argue that functional explanations such as preparedness cannot grasp what a feeling really is. My reply is that it is the predicament of all science and knowledge in general that you make simplified models of things but can never reach the things-in-themselves. Therefore, the map is not the territory and my explanation of a feeling is not a feeling. But feeling is in that respect just like any other phenomenon for which we try to formulate an explanation. My subjective experience of the weight of the cup I am holding in my hand is not the same as the scientific explanation of weight as the effect of gravity. But most people are willing to accept that the theory of gravity satisfactorily explains weight.

    With feeling, consciousness or mind, however, people are much less willing to grant a practical equivalence between their subjective experience and any scientific model of that phenomenon. I'm arguing that there is no essential difference: subjective experience necessarily is different from a scientific explanation because it looks at the same phenomenon from a first-person perspective, while science uses a third-person, "objective" perspective. Obviously, these two perspectives give very different results, but that does not mean that the one cannot be translated to some degree into the other.

    If my scientific model of feeling accurately explains or predicts my subjective experience, that is all I need. If that same model can be extended to predict or explain the activities of the global brain, then I am satisfied that the global brain has the equivalent of feelings. Of course, I will personally not be able to experience those feelings, because I am not the global brain. But neither can I experience the feelings of any other person, animal or things besides myself. I merely induce that they have feelings like me because they react similarly to me in situations where I would experience a feeling. The more different they are from myself, the more difficulty I will have to empathize with those feelings. That is why I cannot really expect to empathize with the global brain's feeling. But that does not mean that the global brain cannot have feelings...

    Francis Heylighen
    http://pcp.vub.ac.be/HEYL.html

    ReplyDelete
    Replies
    1. I guess the problem of feelings that is addressed here is not that they can or cannot be reliably predicted by an objective model. The problem is rather the existence of someone (the subjective self) to whom these feelings are occurring in the form of subjective experiences. The existence of such subjective agency cannot be predicted by any third person model or perspective, and this is where the conundrum lies. Those who side with the theory that such subjective agency is merely an epiphenomenon arising from the way information is organized in the brain will agree that behavioral third person models pretty much cover what needs to be explained about feelings. Those who object to this view will argue that there is something extra to be explained, i.e. the existence of the feeling agent. The argument is that this subjective agency must confer some evolutionary advantage and therefore it is not an epiphenomenon. If it is not, subjective feelings must be accounted for, as well as the existence of a subjective agency that is not trivially explained by third person models. In other words, the simplified models mentioned above fail to capture something significant about minds that requires an extra explanation. Whether such subjective agency is present or not in non-human, non-animal agents then becomes a question that cannot be dismissed by current models. Moreover, some philosophers of mind claim that third person models are fundamentally inadequate to deal with this particular problem. I do not side with such a radical approach, but I do acknowledge that there is something that needs to be explained about the existence of subjective states occurring to subjective agents.

      Delete
    2. What about the difference between preparedness & valence and felt preparedness & valence? It's the latter that you need for a mind -- and it is obviously not explained by preparedness & valence itself...

      Delete
    3. For me, the core of the problem is not in the difference between "felt" valence and "functional" valence, but, as Spaceweaver proposes, in the difference between first-person (how do *I* experience the phenomenon) and third-person (how does some other agent experience the phenomenon) perspectives on valence/preparedness/feeling.

      The worldview I subscribe to is evolutionary-cybernetic, and based on an ontology of action. That means that I see action as the irreducible essence of all phenomena, and agents as the stable aspects of such action. This stability is the result of the natural selection for "survival" (= invariance) of certain patterns. Cybernetics teaches us that for more complex systems, stability is achieved via the mechanisms of regulation or control, in which the agent counteracts disturbances that threaten its survival, and exploits opportunities that strengthen its chances of survival.

      This requires that the agent be able to sense all phenomena that challenge its survival (fitness), and to evaluate them in terms of danger (negative valence) or opportunity (positive valence). Therefore, a rudimentary form of feeling is a necessary attribute for a cybernetic agent, and will as such be promoted by evolution.

      Stevan has objected that you can build simple cybernetic robots that don’t have any "feeling" in the traditional way that we imagine it. My answer is twofold:

      1) as cybernetic systems become more intelligent, their feeling (as a combination of preparedness, association, intuition, valence, etc.) will become so subtle and sophisticated that it is no longer distinguishable from our human feelings;
      2) most crucially, we ourselves are cybernetic systems, and therefore our experience of the world is intrinsically subjective or "first-person", and as such different from the experience we imagine a robot to have.

      The crucial difference here is between a cybernetic agent viewed from the outside (3rd person) and from the inside (1st person). From the inside, the rest of the world is merely a source of disturbances and affordances, i.e. phenomena that have an intrinsic valence, and this valence is about my personal survival. Therefore, it is the most important thing that exists for me, and my reaction to it (first as a feeling, later perhaps as a thought or action induced by that feeling) is the most important thing I could be busy with. Therefore, feeling is the primary phenomenon for me, as a first person. It cannot be compared to anything else: it determines who I am, what I do, what I think, whether I will be successful or fail, whether I will survive...

      The reason we find it difficult to grasp this most powerful force of all in our mental life is that we have been trained (mostly by science) to express everything in terms of "objective", third-person accounts. In such accounts, there is no room for this most subjective and most powerful of all experiences.

      Non-scientific approaches, on the other hand, such as phenomenology, Buddhism or "mindfulness", have more or less successfully attempted to make people grasp that their experiences cannot be fully objectified. I personally try to combine my scientific, third-person understanding with my subjective, "mindful", first-person experience, and have in general little difficulty reconciling both...

      Delete
    4. I see this discussion as also relevant and informative for the second week's debate around the existence and nature of collective intelligence / distributed cognition and the extended mind (July 15-17). In one of those talks the 'modelling' approach was briefly mentioned, which can be summarized as: understanding = modelling the reality -> the shorter the description/model, the better it is (in terms of information compression/predictive power) -> this implies a minimum set of assumptions (Occam's razor principle) -> all assumptions/premises that do not make a model better should be abandoned (unless we want to create fairy tales).

      (1) The relevance of this approach to the discussion about the difference between 'felt' valence and 'functional' valence, as Francis has put it, is that introducing the notion of 'feeling' into the description of how mind works is justified only if it leads to a better model (more informative / shorter, etc). I personally do not see much more value in the statement 'having a mind means to be able to feel' (ref. Stevan, just a few comments above) as compared to 'having a mind means having a mind', which is of course true but does not inform us about anything. I guess I would like to get a description of the concept of feeling for a start.

      (2) The relevance to collective intelligence and distributed / extended cognition is that positing these concepts helps us model certain phenomena better than we could without them. E.g. concepts of national character, collective sentiment, behaviour of markets, web, society, etc. can sometimes be grasped better by looking at these collectives as having some sort of identity. After all, I believe that modelling efficiency is the underlying reason and cause for treating ourselves as persistent individuals rather than a bunch of cells (which is also true)...

      Delete
    5. I think both first-person experience and third-person experience are pretty similar modelling acts, as I tried to describe above, except that in the case of first-person experience there may be more information available (something not accessible to a third person). On the other hand, it seems to me that Libet's experiments (http://en.wikipedia.org/wiki/Benjamin_Libet) demonstrate that this is not the case...

      Delete
  19. I take issue with one of the propositions made by Heylighen (2014) where he states that, "the default assumption that resources obey a zero-sum logic only applies to matter and energy, because these are subjected to a conservation law. It does not apply to information, which can be reproduced, and thus shared, without limitations." As Jim Hendler mentioned in the discussion today, big data requires big storage. If we acknowledge that cognition is embodied and thus, information is stored and transacted through a physical medium, we have to acknowledge that information is, to a degree, bound by the laws of the physical universe, like conservation.

    ReplyDelete
    Replies
    1. In the case of the GB, this may take the form of band-width issues, in which access to and the ability to share data is restricted, at least temporally, by the band-width of the network.

      Delete
    2. Sharing without any limitations whatsoever is of course an unreachable limit. Even information storage and transmission have certain physical limits that cannot be overcome, due e.g. to quantum and thermodynamical principles. But that limit is so far away from the traditional limits we associate with scarce resources (e.g. gold, or caviar) that in practice most information is (nearly) free. Therefore, we can share it with others without having to think twice about the physical cost of that sharing...

      Delete
  20. Your presentation described a beautiful and ambitious project. If the priority is to evaluate some of these questions empirically, I think that Wikipedia is a good place to simulate and conceptualize your ideas. Wikipedia is a big project, but one suffering from semantic inconsistency. It is a fascinating example of a collective project whose unity and unification pose a very interesting challenge for deduction strategies on the relations between individual/collectivity and homogeneity/heterogeneity. Don't you think it would be a good place to test the feasibility of the conceptualisation and the realisation of the global brain?

    ReplyDelete
  21. Wikipedia is indeed a nearly ideal example of what the Global Brain can do in practice. Its data are moreover publicly available for analysis, and many people have already been doing such analyses to uncover the dynamics of collectively developed knowledge systems. However, in the Global Brain Institute, we still have not found an elegant way to interpret the data in such a way that they can be used to confirm or disconfirm our "challenge propagation" model of GB self-organization. But we continue looking and would be grateful for any concrete suggestions in this regard...

    ReplyDelete