Luciana’s presentation was an absolute tempest of ideas, provocations and inspiration. As one of the leading critical thinkers of our time, this was only to be expected, but in trying to capture some of what she generated, this summary can only be a poor semblance of the vibrancy of her talk. The work she presented comes partly from a forthcoming article for Critical Inquiry called ‘Recursive Philosophy and Negative Machines’ and partly from her new book project Automating Philosophy: Instrumentality and Critique – so she prefaced her talk by caveating that some thoughts are still in formation! The key questions that undergird her current research are ‘what is philosophy after computation?’ or ‘what does philosophy become after computation?’ She posits that there has been a demise of philosophy and philosophical thinking in the face of increasingly automated decision-making, leading to a poverty of philosophy and of critique. Due to the fallacies present in automation, which are then cemented through the use of algorithms, we can no longer be sure that thinking equals truth, because all the algorithms do is optimize probabilities.
As Parisi says, this critique is not a new one, although it necessarily takes a new form given the recent proliferation and extended application of algorithms. She traced it back to Heidegger, who complained a century ago that the demise of philosophy had turned thinking into a network of exchange and communication, and clarified that now it is not just that machine learning has impoverished thinking, as Alex Galloway’s article ‘The Poverty of Philosophy’ suggests, but also that the overreliance of machinic intelligence on biases “repeats a logic of exclusion. In short, the poverty of philosophy coincides with the poverty of critique, and with the regime of capital obstruction of cognition and automated decision making.” This relates to how algorithms reduce data to categories, and are therefore designed to make assumptions about things like gender, race and class – for Parisi, this leads us to ask: how is meaning given in data? What structural power pertains to algorithms, and to our reliance on them, in terms of demands for accessibility and transparency?
Here, Parisi referred to Denise Ferreira Da Silva’s ‘The Transparency Thesis’ to help unpack these ideas. The thesis rests on how the universality of the subject “lies behind” arbitrary categories of difference which perpetuate the brutalities of racism and anti-blackness. The transparent ‘other’ then exists within these structures, denying and resisting the racist classification that determines how difference is categorized. The subject, knowledge and consciousness are identified as ‘one’, and the reflections that Ferreira Da Silva makes about the constitution of power in terms of knowledge, and of being in terms of temporality and interiority, are very important for Parisi. She quotes from Ferreira Da Silva that “the transparent subject institutes itself as separate from the world, and creates the problem of how it is related to the world.” In this way, self-determination is constantly challenged and gives way to an exteriority instead, which changes how we relate to the world, and can also lead to “the end of the world as we know it,” in another quote from Ferreira Da Silva. Fundamentally, the tension is between bodies and the mind, and transparency theory offers a way of seeing the body outside of the racial category in which it is placed. This can also be traced through the history of pre- and post-Enlightenment understandings of the categorization of people through the body, which were written down and recorded, making categories of difference possible. The Transparency Thesis is the moment of identification for, and of, the racialized body and racist brutality. Arguing against a theory for racist behaviour, Ferreira Da Silva points to what she calls the “horizon of death” of the inanimate of the human, which is always there for being killed.
In considering the metaphor of death here, in relation to structures of power and knowledge, Parisi turns to Achille Mbembe’s ideas on necropolitics, extending them to necroentropolitics and necroalgopolitics, and claimed that the necropolitical situation as described by him is at the heart of the structure of racial brutalism in science, mathematics and philosophy. In Parisi’s words, “necro-entro-political systems locate the tension between signal and noise in terms of the transformation of energy into usable information whereas dissipative energy demarcates blackness as the negative horizon of human life”. Necroalgopolitics, in comparison, considers randomness, or incomputable information; but this randomness cannot, as might seem logical, be turned into negentropic information. For Parisi, incomputable randomness is already another kind of information. It seems we can never go back to a moment when this was not the case – what she calls a ‘pristine moment’ before algorithms were programmed to enact the brutality. So machine learning processing does not just repeat the biases that are programmed into it; instead it “tells us the story about the brutality of social capital, that sustains the pillars of knowledge and the category of the human”, in a similar way to racial capitalism, where the relation between blackness and machines (and here she referred to Louis Chude-Sokei) is inherently linked to colonialism.
For Parisi, the relationship between the demise of philosophy and of critique relies on the “demarcation of the inanimate, unreason and unliving state of what is called elsewhere, the flesh machine.” She referred to François Laruelle’s explanation that the authority of philosophy is based on the scientific explanation of reason, which becomes problematic when that reason is applied to machine learning and computation. She described the conflict between the human and automation as a “mirror game” which allows philosophy to maintain its “undisputed metaphysical power of knowing the world” and creates affordances for the inhuman, as well as the human, to exist and to be important. From this standpoint, necropolitics cannot be wholly relied upon to expose the faults within the system, but it helps us to understand the inhuman effect of the current systems of control.
Here, Parisi moved to discuss Bernard Stiegler’s recent article on ‘Noosology’ – the re-founding of theoretical computer science in an effort to recuperate critique in the age of automation. In an age where ‘thought thinks itself’, it is necessary to re-employ these techniques to maintain control of critique. To quote Parisi’s interpretation: “the generalized digitalization of existences coincides with the one-sided entropy which Stiegler traces through a world Adorno and Horkheimer called barbarism, or to the cybernetic nuclear age that appropriated the hopes of the potential contained in the decentralization of networks and the editorialization of the world wide web taken over by feedback loops and the computation regime of recursivity.” For Stiegler, the decline of hope began when smartphones and platforms put an end to the social web and the nature of the web itself, also known as “platform nihilism” (Lovink) and “net blues” (Ars Industrialis). He uses the term noodiversity, from the Greek noesis, or the Aristotelian noos: the basic intellect or intelligence that allows humans to think rationally.
In contrast to Aristotle, for Stiegler the origin of noesis is not teleological but technological, bound up with the “artifacts, supplements and prostheses which together make up the technical milieu.” This understanding of a ‘noetic’ individuation thus needs the vectors of difference and differentiation, meaning that techno-diversity stems from noodiversity, or, that techno-diversity stems from a diversity of the mind. It is acknowledged, however, that the “capitalist episteme” we are in renders this challenging, if not impossible, because of the demands of the regime. From her slides:
In contrast with what Katherine Hayles would see here as “non-conscious cognition”, Parisi claims that Stiegler would instead understand this as the “proletarization of intelligence and the demise of reason”. Parisi clarifies that Stiegler continues his claim about techno-diversity by expanding on the notion of the neganthropocene, a break from the Anthropocene resulting from the impact of technology on the body. This coincides with a new kind of body memory, totally different from that of pre-digital technology – ergo, it is argued, a fundamental shift has occurred. Parisi says that “for Stiegler the challenge is therefore to reintroduce the conditions for vulnerabilities that constitute noodiversity, where noodiversity is also techno-diversity, because it is able to align with biodiversity and prevent the technosphere from destroying the biosphere.” Overall, what Parisi is challenging is Stiegler’s appeal to a new theory of the mind to counteract the impact that machine learning and automation are having on philosophy. For Stiegler, employing noesis in this way gives a new foundation for computer science, which he calls a “hypomnesic tertiary retention – an updated form of anamnesis”, or what I understood as material artificial memory that has been co-constructed with the human mind.
Parisi posited that the need for a new understanding of philosophy and critique in the age of computational capitalism in terms of hypomnesic retention seems yet again to justify the authority of philosophy and allow it “to enter through the back door.” The process of noesis would mean that intelligent machines are only allowed to be passive receptors of data, and thereby anti-black, anti-feminist and anti-queer categories are removed from the epistemology of the capitalist regime. Returning to Ferreira Da Silva in considering a means to actually achieve this, Ferreira Da Silva suggests that a “poethical or compositional thinking brings forward a 4th dimensional image of the global open to the cosmos, and as such, challenging the transparent ontic and ontological horizon for thinking.” Da Silva asks:
While this does not directly address the problem of technology, Parisi sees it as central to the discussion about how philosophy can function after the reliance on automation and computational algorithms. Ferreira Da Silva shows how the “binaries between philosophy and automation mainly appears as another strategy of othering rooted in the servo-mechanic medium of transcendental philosophy and cybernetics.”
In concluding, Parisi also wanted to mention some new ideas on machine philosophy, something she discusses at greater length in an interview with William Morgan in Qui Parle, but which stems from the same concerns over the question of critique. She demands that philosophies of machine learning must respond to how the limits of knowledge are already systematically prescribed by an “onto-epistemological structure of power, anti-blackness, anti-femininity and anti-queerness.” She suggests we “need a theory of media and mediation that refuses Promethean [making the machine like man] cosmogony and sides with improperness of know-hows, the elaboration of heretic propositions and weird formalisms that can be founded in the non-linguistically oriented language of computation.” Machine philosophy, she demands, must account for problems like these that are currently inherent in the systems produced. She seeks to challenge the assumption that machines can only process the physical world without really understanding it, and asks why this cannot be otherwise, using the methods and critiques suggested above. I will quote Parisi directly here, as she suggests that “machine philosophy must side with the inhuman as that which operates through the radically excluded from the noetic origin of thinking (pace Stiegler). That said, in order to gather collective efforts in addressing the problem of techno-diversity in terms of difference without separation (Ferreira Da Silva) one must refuse the Promethean promise of transforming the inhuman into the image of man. Instead, one cannot stop but become even more concerned with finding, looking for, inventing techno-cultural practices of studying, thinking, living as practices of inhuman existence outside the decisional structure of philosophy.”
Thank you so very much to Luciana for providing such a dense, challenging and fascinating talk. The ideas she is grappling with suggest ever more necessary and crucial investigations into the role of machine learning, algorithms, bias and the preconceptions we encounter every day. Luciana, we very much look forward to further conversations and to seeing how this wonderful work continues.