Sunday 8 June 2014


Socio-Technical Epistemology

Institut für Technikfolgenabschätzung und Systemanalyse (ITAS)

VIDEO


    OVERVIEW: The increasing pervasiveness of technologies of computation, information and communication not only affects our culture, economy and politics; it also shapes our epistemic practices: increasing amounts of personal data are used for profiling, information gets personalized in more or less transparent ways, and we use crowd-sourced or collaboratively created content in our daily quests for knowledge. These technologies offer new possibilities and challenges, both in research and in our everyday lives. My talk will try to shed some light on the impact of the computational on epistemic practices and on the challenges this raises for philosophy. We need to develop a socio-technical epistemology, harvesting insights from disciplines beyond philosophy, such as science and technology studies, cognitive science and web science, to provide frameworks for evaluating socio-technical epistemic systems and practices as well as for guiding their design and governance. Socio-technical epistemology can, in turn, have positive repercussions for these neighbouring disciplines.

READINGS:
    Simon, J. (2010). The entanglement of trust and knowledge on the Web. Ethics and Information Technology, 12(4), 343-355.
    Simon, J. (2010). A socio-epistemological framework for scientific publishing. Social Epistemology, 24(3), 201-218.

20 comments:

  1. Dear Judith, thank you very much for your great presentation! I have one question: for many of us today, knowledge is mainly knowledge by instruction, or knowledge of knowledge of knowledge... Early work in epistemic logic modelled such chains of inference by defining the epistemic operator K (a notational sketch follows this exchange). From a computational point of view, and with the goal of good expressivity, do you think it is more important to formalize the relevant inferences with a minimum of epistemic operators (maybe only K), or to enrich the representation with new, better-adapted epistemic operators for specific sciences, such as web science and the foundation of a global brain?

    Replies
    1. Dear Ludovic,

      many thanks for your question and your kind feedback. As I already told you yesterday, I am not really an expert in epistemic logic. Nonetheless, I would probably rather opt for your second proposal.

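    For readers unfamiliar with the notation in this exchange, a minimal sketch in standard epistemic logic (the agents a, b and the proposition p are illustrative, not from the talk): a single operator K already expresses arbitrarily deep chains of "knowledge of knowledge", while the second option above would add further operators (e.g. for group or distributed knowledge) tailored to a particular science.

        % K_a reads "agent a knows that ..."
        \begin{align*}
          & K_a\, p         && a \text{ knows } p \\
          & K_a K_b\, p     && a \text{ knows that } b \text{ knows } p \\
          & K_a K_b K_a\, p && \text{knowledge of knowledge of knowledge} \\
          % Kripke semantics of the single operator K_a:
          & M,w \models K_a \varphi \iff \forall v\, (w \mathrel{R_a} v \Rightarrow M,v \models \varphi)
        \end{align*}
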
  2. How could feminist theory help us move from social epistemology to socio-technical epistemology? I didn't get this point.

    Replies
    1. As far as I understood, feminist theory relates to the power-knowledge relationship in socio-technical epistemology, which is arguably connected to the emergence of nation states (Alain Desrosières, 1998(?)). It seems to be a very interesting relation.

    2. Now it's a little bit clearer to me.

    3. Dear all, yes, it was meant to be related to the power-knowledge link, which is prevalent not only in the debates around the emergence of nation states, but also within feminist social epistemology (e.g. Fricker's book on Epistemic Injustice).

  3. Extremely interesting talk by Professor Simon. I am highly sympathetic to the framework proposed in this talk. My (naive) question concerns the neglect of technology and power structures in social epistemology. What do you think are the main reasons why these crucial and pressing issues have not been extensively discussed in the literature?

    Replies
    1. Dear Maxwell, good question. I should note, however, that there are exceptions: some social epistemologists are now turning towards a recognition of technology (e.g. Don Fallis, Alvin Goldman, etc.), and feminist (social) epistemologists, such as Miranda Fricker, address the power issues in knowing. I think the reluctance to address the power issue is partly due to the strong focus on an a-social knowing subject in many (mostly analytical) epistemological accounts.

  4. Your concept of socio-technical epistemology is very interesting. Do you think it will be necessary to create rules within this framework? Thank you, Judith Simon.

    Replies
    1. Dear Albert, thanks for your kind feedback. I am not sure to what extent rules will be necessary, but of course socio-technical epistemology is also inherently normative.

  5. In the presentation, I read that "patterns and relationships within Big Data are inherently meaningful and truthful". How do we know that the data in Big Data are truthful? Can we measure that? I think the problem of trust is still present, even in the data of Big Data. The Semantic Web has the same kind of problem of trust in its data (hence its trust layer).

    Replies
    1. Dear Konstantinos, yes, you are right of course. This was a quote from a text by Rob Kitchin, in which he presented the claims of the new empiricists, only to then critically discard them.

  6. A very general question: From your perspective, do you think there's an opportunity for nations to use big data effectively for governance? Or do you think this would have the effect of concentrating too much power and compromising privacy?

    Replies
    1. Dear Nicole,

      I think both your claims are right: there are both opportunities and dangers in using big data for governance. The really difficult task will be to balance the two.

  7. I'm sorry, my questions are messy, but they reflect the state of my reflections on this matter.

    1) There is an issue about the situatedness of computers. They certainly aren't the "view from nowhere", but neither are they mere repositories of the situation of the researcher using them: some of the biases revolve around the formalisms used, statistical-linguistic assumptions, and even the kinds of calculation that are easy to make on a Turing-like binary computer (a toy illustration follows this thread). As a result, when working with them, we have the impression of being in conversation with a hybrid human-machine subject. Indeed, the resulting dynamics are interesting: (a) unlike what we often see in social science, most of our students, myself included, don't start with a hypothesis; (b) we often find completely unexpected things.

    What position does a machine hold in these power dynamics? Does it even have one? Is that relevant?

    2) One of the things text mining enables is a "scientification" of testimony studies, and thus, hopefully, greater visibility for discourses that have been invisibilized. My feeling is that this is a good thing, because it makes it easier to valorize testimony and to formulate consensuses. Furthermore, making the methodology explicit opens up the possibility of controlling for some of the bias researchers may introduce in their analyses.

    I am a bit of an activist, or at least activist goals motivate a lot of my activity in academia. Am I justified in thinking that developing such techniques will promote marginalized voices, or am I opening a Pandora's box with this kind of work?

    Replies
    1. By the way, I was so glad you persisted in taking part in the conference, despite the difficulties. It was enormously interesting, and enormously important, to have you here (albeit virtually). Thanks a lot!

    2. Dear Louis,

      thanks for your kind words. As for your questions: Ad 1) my take on this is based on some earlier thoughts on distributed epistemic agency between human and non-human agents: I think both humans and some machines, e.g. self-learning algorithms, can have agency, and it is the interaction that matters. I tried to tackle this issue briefly in a talk, which you can find here in case you are interested: https://www.youtube.com/watch?v=MmJ8lkS0-qk
      In a longer paper on distributed epistemic responsibility, I draw heavily on Karen Barad's agential realism; you can find the paper and the references here: https://www.academia.edu/3180684/Distributed_Epistemic_Responsibility_in_a_Hyperconnected_Era

      Ad 2) I would agree that there is some hope that marginalized voices can at least be made more visible through these new techniques. Whether or not they will be heard and taken into account (in the sense of Helen Longino 2002) unfortunately remains open. Nonetheless, I think it is important to use and develop such techniques for critical discourse, while remaining vigilant about the biases one may inadvertently introduce oneself.

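    As a toy illustration of the bias-through-formalism point in question 1 above (the mini-corpus and both tokenizers are invented for this sketch, not drawn from the talk or the discussion), two equally reasonable tokenization rules yield different "findings" from the same text, before any hypothesis has been stated:

        import re
        from collections import Counter

        # Invented mini-corpus; the accented and hyphenated words are the point.
        corpus = "Les témoins se re-racontent l'événement, encore et encore."

        # Formalism A: ASCII letters only -- silently fragments accented words.
        tokens_a = re.findall(r"[A-Za-z]+", corpus)

        # Formalism B: full Unicode word characters -- keeps accents, but
        # still splits on apostrophes and hyphens.
        tokens_b = re.findall(r"\w+", corpus)

        print(Counter(tokens_a).most_common(3))
        print(Counter(tokens_b).most_common(3))
        # Different token lists, hence different "patterns", from the same
        # data: the chosen formalism co-determines what can be found.
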
  8. While Google may have a near-complete monopoly on our online information, it's comforting to know that their mission is to "organize the world's information and make it universally accessible and useful" (http://www.google.com/intl/en/about/company/).

    The duopoly (Google and Facebook) owning our online information is surely concerning, and some lawsuits are beginning to surface. I think we need to regulate what companies can do with our information. The alternative option, breaking this duopoly up into many different companies, seems much less feasible. While this method worked via competition law in the past (e.g., the courts forced Standard Oil to split into 34 companies because it was intentionally monopolizing the oil industry), it is less viable now: Google serves so many functions for us precisely because it is so big and interconnected. Can we do anything as individuals to ensure our data is not used without our consent? Or are government regulations the key?

    Replies
    1. Dear Robert,
      I think government regulations are needed, to some degree, for data protection and the like. However, I am not very confident that informational monopolies can be broken in the same way as other monopolies. One reason is that these are user-created quasi-monopolies, i.e. monopolies from below, which users could also abandon. Such a monopoly is, of course, self-reinforcing: if the sheer amount of data becomes ever more important than the smartness of algorithms, the dominance of the larger corporations may increase even further, making it harder for new players to enter the market. (We also have to keep in mind that there are many more big players in big data operating in the background, which are just not as visible as Facebook and Google.)

  9. I think we should be especially critical when others are modelling, predicting, and trying to control the evolution of a system. We should take each piece of information as somewhat independent, evaluating not only its content but the whole of it; its origin (human, non-human, ...) is not of essential concern, and it should be evaluated equally regardless. We are placing more trust in algorithms than in humans, which Daniel Kahneman seems to endorse: algorithms do not have the biases and heuristics that humans do, and their performance can also be assessed more easily by statistics. It may be a good idea to create new meta-information about the quality of information. Agents could also be assigned a reputation; the only problem is that you don't always know that it is the same individual, so we would still need to be critical. It also might not be a good idea to always listen to the agent with the greatest reputation, because that could lead to a premature convergence in the solution space (a toy sketch follows).
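
    A toy sketch of that last point (the agent names and reputation values are invented): instead of always consulting the highest-reputation agent, one can sample agents in proportion to their reputation, which keeps lower-reputation voices audible and counteracts premature convergence in the solution space.

        import math
        import random

        # Invented reputations; higher means more trusted.
        reputations = {"alice": 3.0, "bob": 2.0, "carol": 0.5}

        def pick_agent(reps, temperature=1.0):
            """Sample one agent with probability softmax(reputation / temperature).

            A higher temperature flattens the distribution (more exploration);
            as temperature -> 0 this approaches always picking the top agent.
            """
            agents = list(reps)
            weights = [math.exp(reps[a] / temperature) for a in agents]
            return random.choices(agents, weights=weights, k=1)[0]

        print(pick_agent(reputations))       # usually "alice", but not always
        print(pick_agent(reputations, 5.0))  # flatter distribution: more exploration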
