How Does Thinking Look Like

Lecture performance, 28 minutes. 2021, 416 ppm

A lecture performance about the importance of body, gesture, and diagrams in AI research.

The piece draws on a range of archival material and on the artist's two-year residency in Yann LeCun's machine learning research group at New York University. The performance explores how researchers rely on embodied, situated knowledge even as their field attempts to separate intelligence from embodiment. It also shows how researchers use diagrams and their imagination to visualize and understand the invisible aspects of AI.

Choreography in collaboration with Sarah Dahnke.

‘How Does Thinking Look Like’ premiered as a live performance at The Neuroverse live arts festival at ONX Studio in New York (Nov 2021).
The recording of the overhead camera feed was screened as an essay film at Dortmunder U Cinema by Hartware MedienKunstVerein as part of the exhibition 'House of Mirrors: Artificial Intelligence as Phantasm'.

“The artist demonstrates the extent to which thinking is always bound to physicality by placing a wide variety of materials in relation to his own body, especially his hands. The hands can be seen as pointing and thinking organs, while Schmitt talks about historical, technological and social aspects of artificial ‘intelligence’.” — Event program, Hartware MedienKunstVerein

Credits

Movement Director: Sarah Dahnke. Produced by Media Art Xploration in collaboration with New York Live Arts. This piece draws in part on a residency at NYU Center for Data Science awarded by the Berggruen Institute ToftH fellowship program.

The text in this lecture performance is part original writing, part montage. It quotes — in prose and imagery — from artificial intelligence research publications, philosophy, fiction, and text generated by a neural network. Scroll to the bottom of this page for a detailed bibliography.

A photo of a man sitting at an illuminated desk. The artist's hands are projected onto a large wall on his left, on top of two handprints belonging to Albert Einstein.
Photo: MAX / Alycia Kravitz
A picture of two hands above an old, grainy image of the Mark I Perceptron, a machine for pattern recognition.
A picture of two hands, one holding a red marker, hovering over a page of a computer science research paper with a diagram and some text.
Two hands hovering over a diagram of numbers and text.
Two hands aligning themselves with an abstract diagram of lines and shapes.

Bibliography

Publications used (quotes or images), in order of appearance:

Handprints of Albert Einstein.

American Automobile Association. (1958). "National system of interstate and defense highways: as of June." Washington, D.C.: The Association. The Library of Congress.

Hay, J. C., Lynch, B. E., & Smith, D. R. (1960). Mark I Perceptron Operators' Manual. Cornell Aeronautical Laboratory, Buffalo, NY.

Picture of Frank Rosenblatt, source unknown, found on Reddit.

Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6), 386.

Hinton, C. H. (1912). The fourth dimension.

Vector space model from "A vector space model for automatic indexing" (Salton, Wong, and Yang 1975).

Newcomb, S. (1898). The philosophy of hyper-space. Science, 7(158), 1-7.

Mackenzie, A. (2017). Machine learners: archaeology of a data practice. MIT Press.

Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10), 78-87.

Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of machine learning research, 9(11).

Lewis-Kraus, G. (2016). The great AI awakening. The New York Times Magazine, 14(12), 2016.

Pater, R. (2016). The politics of design: A (not so) global design manual for visual communication. BIS Publishers B.V.

Levin, E., & Fleisher, M. (1988). Accelerated learning in layered neural networks. Complex Systems, 2(3), 625-640.

LeCun, Y., Bottou, L., Orr, G. B., & Müller, K.-R. (1998). Efficient BackProp. In Orr, G. B., & Müller, K.-R. (Eds.), Neural Networks: Tricks of the Trade. Springer.

Selfridge, O. G. (1988). Pandemonium: A paradigm for learning. In Neurocomputing: Foundations of research (pp. 115-122).

Schmidhuber, J. (1990). Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning and Planning in Non-Stationary Environments.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.

Arendt, H. (1981). The life of the mind: The groundbreaking investigation on how we think. HMH.

Elman, J. L. (1991). Distributed representations, simple recurrent networks, and grammatical structure. Machine learning, 7(2), 195-225.

Jafari, R., & Javidi, M. M. (2020). Solving the protein folding problem in hydrophobic-polar model using deep reinforcement learning. SN Applied Sciences, 2(2), 1-13.

Vertesi, J. (2012). Seeing like a Rover: Visualization, embodiment, and interaction on the Mars Exploration Rover Mission. Social Studies of Science, 42(3), 393-414.

Cartmill, E. A., Beilock, S., & Goldin-Meadow, S. (2012). A word in the hand: action, gesture and mental representation in humans and non-human primates. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1585), 129-143.

Ping, R., & Goldin‐Meadow, S. (2010). Gesturing saves cognitive resources when talking about nonpresent objects. Cognitive Science, 34(4), 602-619.

Pétervári, J., Osman, M., & Bhattacharya, J. (2016). The role of intuition in the generation and evaluation stages of creativity. Frontiers in psychology, 7, 1420.

Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1-127.

Bélanger, G. (2018). "Concrete Complexity: When Data Visualization Gets Put to the Test of Materiality", ISCP

Schmitt, P. (2018). "On being a vector inside a neural network".