Elena Esposito

  • Home
  • Books
  • Research
    • Algorithms
    • Systems Theory
    • Finance
    • Others
  • Projects
    • PREDICT
  • Media
    • Lectures
    • Videos
    • Press
  • About
    • CV
    • Publications List
    • Presentations List
  • Contact


Selected Research on Algorithms

Performance without understanding: How ChatGPT relies on humans to repair conversational trouble

LLM-based chatbots’ ability to generate contextually appropriate and informative texts can be taken as an indication that they are also able to understand text. We argue instead that the separation of the two competences, generating and understanding text, is the key to their performance in dialog with human users. This argument requires a shift in perspective from a concern with machine intelligence to a concern with communicative competence. We illustrate our argument with empirical examples of what conversation analysis calls ‘repair’, showing that the management of trouble by chatbots is not based on an underlying understanding of what is going on but rather on their use of feedback from human conversational partners. In the conclusion we suggest that strategies for the interaction between chatbots and users should not aim to improve computational skills but to develop a new communicative competence.


Pütz, O., & Esposito, E. (2024). Performance without understanding: How ChatGPT relies on humans to repair conversational trouble. Discourse & Communication, 18(6), 859-868. https://doi.org/10.1177/17504813241271492 


>>> Download paper


Algorithmic crime prevention. From abstract police to precision policing

The growing digitisation in our society also affects policing, which tends to make use of increasingly refined algorithmic tools based on abstract technologies. But the abstraction of technology, we argue, does not necessarily entail an increase in abstraction of police work. This paper contrasts the ‘abstract police’ debate with an analysis of police practices that use digital technologies to achieve greater precision. While the notion of abstract police assumes that computerisation distances police officers from their community, our empirical investigation of a geo-analysis unit in a German Land Office of Criminal Investigation shows that the adoption of abstract procedures does not by itself imply a detachment from local reference and community contact. What we call contextual reference can be productively combined with the impersonality and anonymity of algorithmic procedures, leading also to more effective and focused forms of collaboration with local entities. On the basis of our empirical results, we suggest a more nuanced understanding of the digitalisation of police work. Rather than leading to a progressive estrangement from the community of reference, the use of digital techniques can enable experimentation with innovative forms of ‘precision policing’, particularly in the field of crime prevention.


Egbert, S., & Esposito, E. (2024). Algorithmic crime prevention. From abstract police to precision policing. Policing and Society, 34(6), 521–534. https://doi.org/10.1080/10439463.2024.2326516 


>>> Download paper


Can a predicted future still be an open future? Algorithmic forecasts and actionability in precision medicine

The openness of the future is rightly considered one of the qualifying aspects of the temporality of modern society. The open future, which does not yet exist in the present, implies radical unpredictability. This article discusses how, in the last few centuries, the resulting uncertainty has been managed with probabilistic tools that compute present information about the future in a controlled way. The probabilistic approach has always been plagued by three fundamental problems: performativity, the need for individualization, and the opacity of predictions. We contrast this approach with recent forms of algorithmic forecasting, which seem to turn these problems into resources and produce an innovative form of prediction. But can a predicted future still be an open future? We explore this specific contemporary modality of historical futures by examining the recent debate about the notion of actionability in precision medicine, which focuses on a form of individualized prediction that enables direct intervention in the future it predicts.


Esposito, E., Hofmann, D., & Coloni, C. (2024). Can a Predicted Future Still Be an Open Future? Algorithmic Forecasts and Actionability in Precision Medicine. History and Theory, 63, 4-24. https://doi.org/10.1111/hith.12327


>>> Download paper


Explaining machines: social management of incomprehensible algorithms. Introduction

This short introduction presents the symposium ‘Explaining Machines’. It locates the debate about Explainable AI in the history of reflection on AI and outlines the issues discussed in the contributions.


Esposito, E. (2022). Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction. Sociologica, 16(3), 1–4. https://doi.org/10.6092/issn.1971-8853/16265 


>>> Download paper


Does explainability require transparency?

Dealing with opaque algorithms, the frequent overlap between transparency and explainability produces seemingly unsolvable dilemmas, such as the much-discussed trade-off between model performance and model transparency. Referring to Niklas Luhmann's notion of communication, the paper argues that explainability does not necessarily require transparency and proposes an alternative approach. Explanations as communicative processes do not imply any disclosure of thoughts or neural processes, but only reformulations that provide the partners with additional elements and enable them to understand (from their perspective) what has been done and why. Recent computational approaches aiming at post-hoc explainability reproduce what happens in communication, producing explanations of the working of algorithms that can differ from the processes of the algorithms themselves.


Esposito, E. (2022). Does Explainability Require Transparency? Sociologica, 16(3), 17–27. https://doi.org/10.6092/issn.1971-8853/15804 


>>> Download paper


From pool to profile: Social consequences of algorithmic prediction in insurance

The use of algorithmic prediction in insurance is regarded as the beginning of a new era, because it promises to personalise insurance policies and premiums on the basis of individual behaviour and level of risk. The core idea is that the price of the policy would no longer refer to the calculated uncertainty of a pool of policyholders, with the consequence that everyone would have to pay only for her real exposure to risk. For insurance, however, uncertainty is not only a problem – shared uncertainty is a resource. The availability of individual risk information could undermine the principle of risk-pooling and risk-spreading on which insurance is based. The article examines this disruptive change first by exploring the possible consequences of the use of predictive algorithms to set insurance premiums. Will it endanger the principle of mutualisation of risks, producing new forms of discrimination and exclusion from coverage? In a second step, we analyse how the relationship between the insurer and the policyholder changes when the customer knows that the company has voluminous, and continuously updated, data about her real behaviour.


Cevolini, A., & Esposito, E. (2020). From pool to profile: Social consequences of algorithmic prediction in insurance. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720939228 


>>> Download paper


Artificial communication? The production of contingency by algorithms.

Discourse about smart algorithms and digital social agents still refers primarily to the construction of artificial intelligence that reproduces the faculties of individuals. Recent developments, however, show that algorithms are more efficient when they abandon this goal and try instead to reproduce the ability to communicate. Algorithms that do not “think” like people can affect the ability to obtain and process information in society. Referring to the concept of communication in Niklas Luhmann’s theory of social systems, this paper critically reconstructs the debate on the computational turn of big data as the artificial reproduction not of intelligence but of communication. Self-learning algorithms parasitically take advantage, whether consciously or not, of the contribution of web users to a “virtual double contingency.” This provides society with information that is not part of the thoughts of anyone but nevertheless enters the communication circuit and raises its complexity. The concept of communication should be reconsidered to take account of these developments, including (or not) the possibility of communicating with algorithms.


Esposito, E. (2017). Artificial Communication? The Production of Contingency by Algorithms. Zeitschrift für Soziologie, 46(4), 249-265. https://doi.org/10.1515/zfsoz-2017-1014


>>> Download paper


Der Computer als Medium und Maschine

Against the background of research on how communication technologies affect forms of communication, the paper examines the characteristics of the introduction of computing technology, in which a machine is used to disseminate and process communication. The fundamental question is how to deal with a communication that becomes increasingly abstract with respect to the extra-communicative context and increasingly independent of the utterance event: the meaning of communication depends less and less on the intention of the utterer. The hypothesis is discussed that in the individual use of the computer the machine need not be regarded as a communication partner, but serves instead to generate a virtual contingency as support in the processing of information. This virtual contingency can also be found in communicative use: the machine serves to process information that depends on the fact that an utterance has taken place, but less and less on the meaning of the utterance itself.


Esposito, E. (1993). Der Computer als Medium und Maschine. Zeitschrift für Soziologie, 22(5), 338-354. https://doi.org/10.1515/zfsoz-1993-0502


>>> Download paper



Copyright © 2025 Elena Esposito. All rights reserved.

  • Privacy Statement
