For Work / Against Work
Debates on the centrality of work

"The Nooscope manifested: AI as instrument of knowledge extractivism"

by Pasquinelli, Matteo; Joler, Vladan (2020)

Abstract

Some enlightenment regarding the project to mechanise reason. The assembly line of machine learning: data, algorithm, model. The training dataset: the social origins of machine intelligence. The history of AI as the automation of perception. The learning algorithm: compressing the world into a statistical model. All models are wrong, but some are useful. World to vector: the society of classification and prediction bots. Faults of a statistical instrument: the undetection of the new. Adversarial intelligence vs. statistical intelligence: labour in the age of AI.

Key Passage

Rather than studying only how technology works, critical inquiry studies also how it breaks, how subjects rebel against its normative control and workers sabotage its gears. In this sense, a way to sound the limits of AI is to look at hacking practices. Hacking is an important method of knowledge production, a crucial epistemic probe into the obscurity of AI. Deep learning systems for face recognition have triggered, for instance, forms of counter-surveillance activism. Through techniques of face obfuscation, humans have decided to become unintelligible to artificial intelligence: that is to become, themselves, black boxes. The traditional techniques of obfuscation against surveillance immediately acquire a mathematical dimension in the age of machine learning. For example, AI artist and researcher Adam Harvey has invented a camouflage textile called HyperFace that fools computer vision algorithms to see multiple human faces where there is none (Harvey 2016). Harvey’s work provokes the question: what constitutes a face for a human eye, on the one hand, and a computer vision algorithm, on the other? The neural glitches of HyperFace exploit such a cognitive gap and reveal what a human face looks like to a machine. This gap between human and machine perception helps to introduce the growing field of adversarial attacks. Adversarial attacks exploit blind spots and weak regions in the statistical model of a neural network, usually to fool a classifier and make it perceive something that is not there. (p. 15)
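
The passage invokes adversarial attacks without showing one. As an illustration only, and not anything specified in the paper itself, the following minimal Python/PyTorch sketch implements the Fast Gradient Sign Method (Goodfellow et al. 2014), a canonical attack of the kind described: it perturbs an input along the sign of the loss gradient so that a classifier "perceives something that is not there". The names model, x, label, and epsilon are assumed placeholders.

    # Fast Gradient Sign Method (FGSM): a canonical adversarial attack.
    # Illustrative sketch only; `model`, `x`, `label`, and `epsilon`
    # are assumptions, not anything given by Pasquinelli and Joler.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        """Return a copy of image batch x perturbed to fool `model`."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)  # loss at the true label
        loss.backward()
        # Step in the direction that increases the loss, probing the
        # "blind spots and weak regions" the passage refers to.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

    # Usage: the perturbed input often flips the prediction while
    # looking, to a human eye, nearly identical to the original.
    # pred_before = model(x).argmax(dim=1)
    # pred_after  = model(fgsm_attack(model, x, label)).argmax(dim=1)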

Keywords

Ethical Machine Learning, Information Compression, Mechanised Knowledge, Nooscope, Political Economy

Themes

Digital Labour, Automation
