Introduction to ‘Blueprints for Intelligence’

by Philipp Schmitt

Artificial Intelligence (AI) is often illustrated with sci-fi characters. That's misleading. ‘Blueprints for Intelligence’ is a visual history of AI told through a collection of diagrams from machine learning research publications published between 1943 and 2020.

Looking at the history of AI through its diagrams lets us trace key tendencies in the technology’s evolution. Unconcerned with what these figures might tell a researcher, this project explores what they say about the researcher. It draws connections between the visual representations of neural networks and the researchers’ conception of cognition.

Why metaphors matter

Artificial neural networks are the dominant technique deployed in machine learning, a programming approach loosely inspired by the human brain and often used synonymously with the blanket term “artificial intelligence.” At the end of the day, an artificial neural net is a computer program like any other; say, one that computes whether an image contains a person or a car. But, crucially, neural networks are not programmed with step-by-step instructions to accomplish their task. They use an optimization process to learn how to complete the task, given a number of examples. In domains like computer vision, this approach works much better than manually coded algorithms. To learn, and to predict, are two abilities commonly associated with intelligence.
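The learning-by-optimization idea can be sketched in a few lines of Python. This is a deliberately minimal, hypothetical example — two made-up feature values standing in for an image, a single sigmoid neuron instead of a deep network — but the principle is the one described above: no hand-written rules for the task, only repeated adjustments to weights that reduce prediction error.

```python
import math

# Toy stand-in for an image-classification task: each example is two
# invented feature values plus a 0/1 label. Data and names are illustrative.
examples = [((0.9, 0.2), 1), ((0.8, 0.1), 1),
            ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

w1, w2, b = 0.0, 0.0, 0.0   # learnable parameters
lr = 0.5                    # learning rate

def predict(x1, x2):
    # A single sigmoid neuron: weighted sum squashed into (0, 1).
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Optimization loop: no step-by-step instructions for the task itself,
# just repeated corrections that shrink the prediction error.
for _ in range(200):
    for (x1, x2), label in examples:
        err = predict(x1, x2) - label
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

print([round(predict(x1, x2)) for (x1, x2), _ in examples])  # learned labels
```

After training, the program classifies the examples correctly even though no rule distinguishing the two classes was ever written down.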

In European and U.S. popular media, articles about AI and machine learning are commonly illustrated with humanoid characters from classic science fiction movies like “The Terminator,” Ava from “Ex Machina” or Hal 9000 from “2001: A Space Odyssey.” Sci-fi frequently associates the technology with hyper-masculine or hyper-feminine, dystopian or utopian imaginaries, and White features, even when the overlord in question doesn’t have a body. 1
Another theme is circuit boards or binary code collaged with illustrations of brains, often in hues of blue. Here, intelligence is attributed to a mechanistically understood, isolated human brain. Freed from the “burdens” of a body, these images seem to suggest, intelligence reaches its full potential running on bits instead of oxygen.

Screenshot: A Google Search for images of artificial intelligence

These images are purely decorative and often unrelated to the article. Worse, they are harmful for the public imagination, because metaphors matter. As Maya Indira Ganesh and others have written, they influence how we develop and design policy for emerging technologies, just as they have for nuclear power or stem cell research in the past. 2 3
While technology narratives can be helpful, popular images of AI taint our understanding and agency. Instead of inviting public discourse around urgent questions like bias in machine learning systems or opportunities to question our understanding of our own intelligence, these images suggest that readers should run from robot overlords or thrust their heads into a blue binary code utopia.

Blueprints for Intelligence

In journal articles, conference posters, textbooks and in the daily life of the New York University lab that I was a part of from 2019 to 2021, figurative AI imagery is largely absent. Yet there are plenty of images: diagrams of neural networks — figures visualizing systems at work that cannot be witnessed otherwise. These diagrams may seem cryptic to the uninitiated, but to experts they read like architectural blueprints. They depict how neurons are interconnected and how information flows through the network.

As I have explored in a previous project, How Does Thinking Look Like, these figures are more than communication tools. When speaking about technical details and even general concepts like “learning” or “intelligence”, the researchers I met at NYU would frequently trace diagrams in hand gestures, suggesting that these diagrams play a role in conceiving new algorithmic architectures. One could say the diagram shapes the neural network it depicts.

If there is a picture of contemporary artificial intelligence, I'd argue it is here: in neural network architecture diagrams.

What is at stake with present-day AI is not a robot invasion, but which concepts of intelligence get prioritized and how they relate to and frame the world at large. Looking at the history of artificial neural networks through its diagrams lets us trace key tendencies in the technology’s evolution. Unconcerned with what a diagram might tell a researcher, this book asks what it says about them. It is an archaeological speculation of sorts, drawing connections between the visual representations of neural networks and AI researchers’ conception of cognition. 4

The diagrams included in this book were collected by the author in a somewhat serendipitous process, favoring the earliest appearance of each diagram style and prioritizing historically underrepresented authors. It makes for an incomplete history — one that is subjective, Anglo-centric and reflects the underrepresentation of women and researchers of color in the history of AI. Today, the AI community still has a long way to go toward equity and inclusion. A growing number of organizations have called attention to the need for structural transformation in the field, such as Women in Machine Learning, Black in AI, LatinX in AI, Queer in AI, {dis}ability in AI, and the Indigenous Protocol and Artificial Intelligence Working Group. The selection process for this book involved surveying the membership lists and major convenings of these groups.

Ideas in Nervous Activity

This history begins with the artificial neuron, invented by neurophysiologist and cybernetician Warren McCulloch together with the logician Walter Pitts. Their 1943 paper introduced an artificial neuron, inspired by the biological example. McCulloch and Pitts provided a mathematical model intended to demonstrate that neuronal function could be modeled through pure logic. 5
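Their model reduces the neuron to a threshold over binary inputs. A minimal sketch in Python (omitting the inhibitory inputs of the full 1943 model) shows how logical operations fall out of thresholds alone:

```python
def mcp_neuron(inputs, threshold):
    # McCulloch-Pitts unit: fires (1) if enough binary inputs are active.
    # The full 1943 model also includes inhibitory inputs, omitted here.
    return 1 if sum(inputs) >= threshold else 0

# Boolean logic from thresholds alone:
AND = lambda a, b: mcp_neuron([a, b], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

With units like these wired into circuits, any expression of propositional logic can be computed — which is what allowed McCulloch and Pitts to argue that neuronal function could be captured by logic alone.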

In the 1940s, figures in research papers were often drawn by hand. They were then glued onto typeset and printed papers before photocopies were mailed to colleagues and reviewers. In most cases we don't know who actually drew these diagrams. In this case, we do. The drawing originated at the McCullochs' kitchen table, during “endless evenings [...] trying to sort out how the brain worked, with the McCullochs' daughter Taffy sketching little pictures which later illustrated ‘A Logical Calculus of the Ideas Immanent in Nervous Activity.’” 6

In the publication, each neuron is hand-drawn and no two are quite alike, no line perfectly straight. Neuronal connections have slightly enlarged endpoints, as if they were to be plugged into a corresponding physical dent rather than an abstract, solely logical connection. Others wrap around the spike of connected neurons in a loop, as if information had to be tied to the receiver for it not to slip off at the slightest pull.

Taffy McCulloch, then about 17 years old, drew neurons as triangular shapes, reminiscent of the intricate drawings of neural anatomy by the “father of neuroanatomy,” Santiago Ramón y Cajal. Abstracting from Ramón y Cajal’s anatomical rendering toward a modular, diagrammatic representation, the McCullochs and Pitts depicted neurons as building blocks that can be freely recombined to create neural circuits. As the historian of science and technology Orit Halpern writes in her book “Beautiful Data,” their work rendered “reason, cognitive functioning, those things labeled ‘mental,’ algorithmically derivable from the seemingly basic and mechanical actions of the neurons.” 7 Laying the foundations of cybernetics and artificial intelligence, McCulloch and Pitts pioneered a mechanistic understanding of thought and biology, suggesting that brains and computers work alike. 8

McCulloch and Pitts’ neuron delivered a biologically inspired basic building block for modern artificial neural networks. Other early neural network designs also borrow from anatomical concepts and terminology. Frank Rosenblatt’s drawings of the Perceptron (1958), a simple visual classification network, show a “retina” onto which an image of a shape is projected. 9 The image is then fragmented into 400 “pixels” whose light intensity flows as information through connections in the network. Given a few examples, the system could learn which connections indicate characteristic patterns of information and thereby tell different shapes apart without being explicitly programmed.
Rosenblatt notably thought of his invention primarily as a “brain model, not an invention for pattern recognition” 10 and was aware of its limitations. Nonetheless, the Perceptron illustrates a vision of intelligence based on abstractions and simplifications necessary to represent cognition both on paper and in code.
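Rosenblatt's learning procedure can likewise be sketched at toy scale. The example below shrinks the 400-pixel retina to four hypothetical pixels and two invented patterns; the update rule — strengthen connections that should have fired, weaken those that fired wrongly — is the perceptron rule in miniature.

```python
# A 2x2 "retina" (four pixels) standing in for the Perceptron's 400.
# Patterns and labels are invented for illustration.
patterns = [([1, 1, 0, 0], 1),   # bar across the top    -> class 1
            ([0, 0, 1, 1], 0)]   # bar across the bottom -> class 0

w = [0.0] * 4   # one weight per pixel-to-output connection
b = 0.0

def predict(x):
    # Fire if the weighted sum of active pixels crosses zero.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

# Perceptron rule: after each mistake, shift the weights toward the
# correct answer. What a "top bar" looks like is never programmed in.
for _ in range(10):
    for x, label in patterns:
        err = label - predict(x)   # -1, 0, or +1
        for i in range(4):
            w[i] += err * x[i]
        b += err

print([predict(x) for x, _ in patterns])  # learned classes
```

The learned weights end up positive for pixels characteristic of one class and negative for the other — the "characteristic patterns of information" that the text describes, discovered rather than specified.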

Rosenblatt’s work is richly illustrated with beautiful drawings in organic strokes, often reminiscent of cell-like structures. Yet, the perceptron’s retina is square, unlike any other eyed organism. And the hand-drawn connections between elements might not allude to living organisms but rather resemble the jumble of wires 11 connecting electronic components in the physical Perceptron, a room-sized apparatus for research experiments with the Perceptron algorithm. In other publications of the Perceptron research project, we find experiments with different types of diagrams, illustrating perhaps a desire to overcome intricate, pseudo-anatomical references for a more diagrammatic way of representing cognitive functions.

Oliver Selfridge’s Pandemonium, a pattern learning system that learned to translate Morse code into text, is another creatural take on neural networks. Here, demons have replaced analogies of human vision. These creatures recognize and understand, identify with one particular letter and, when their assigned letter is called, shout into the ear of a “decision demon.”

Demons are supernatural beings that can be human or non-human, or spirits that have never inhabited a body. The disembodied demon may be a better stand-in for artificial intelligence than most other creatures, since it has no fixed appearance to distract us. Nonetheless, the figure of the demon is subject to the same problems of visualization as other concepts we fail to imagine except in our own image. It seems we lack the vocabulary and imagery to think and talk about intelligence without invoking either the animate or the spiritual.

In spite of the demonic allegory, the Pandemonium itself was terrestrially mathematical. Across the history of science, demons have been invoked from time to time as thought experiments, or “placeholders of sorts for laws or theories or concepts not yet understood.” 12

Abstract machines

Over the decades, neural networks became increasingly complex. Multiple interconnected layers of neurons were introduced, allowing for more intricate computations and overcoming some of the limitations of early neural nets. Better learning algorithms and increasing computing power allowed for larger, deeper networks devouring ever-growing amounts of data.

This development can be traced visually in the disappearance of the neuron. In early network diagrams, every single neuron was denoted with a mark on the page. Soon, only a few were drawn, with an ellipsis … as a stand-in for what was perhaps too tedious, or too complex, or simply unnecessary to reproduce.

Later still, the neuronal representation was replaced by the layer. In machine learning, this is commonly denoted as a rectangle, and subsequently as a cube, its dimensions annotated (a rectangle labeled 32x32 contains 1,024 neurons).

The diagram of AlexNet, from an influential paper in computer vision, is a telling depiction of modern machine learning — less for what it shows than for what it omits. One step further abstracted from previous depictions, the authors embraced isometric projection. They render layers and convolutions as nested three-dimensional boxes in what are essentially optical illusions: the diagram on this page is entirely flat, yet hints at a high-dimensional plasticity of which just two dimensions are ‘real’, the third an optical illusion, and all others imperceptible. The structure is also cropped at the top, occluding even more of the system than necessary from the viewer. What was perhaps an accident, or deemed irrelevant by the authors, aptly illustrates how the neural network has over time slipped out of focus, out of view, and into abstraction.

Non-Human Intelligence

Modern deep learning neural networks are capable, in some cases, of doing things that humans do, and in a few cases even surpass our abilities. At the same time, the field has at times distanced itself from the human example. We see this distancing reflected in recent diagrams: modern neural network architectures are rendered in an increasingly abstract, flowchart-like style familiar from engineering.

Earlier biomorphic connotations have disappeared from both the linguistic and visual vocabulary. The inaccuracy of the human stroke and the friction of pen on paper have succumbed to precisely generated computer graphics. Neural networks are now painted as domain-independent, modular building blocks for learning and intelligent behavior, just as cyberneticians such as McCulloch and Pitts had imagined.

At the same time, diagrams now often contain depictions of the complex data they process — dogs and people, proteins, prose, x-rays, and the earth — making for a contrast between generalized architectures and specific instantiations of artificial intelligence. And AI explainability research is finding more and more ways to turn the complex mathematical processes into interpretable images, descriptions, and graphs. 13 These examples represent neural networks as fundamentally intertwined with the specific datasets they are trained on.

Spearheaded by outspoken computer scientists such as Timnit Gebru, more and more researchers are including impact statements, ‘model cards’ 14 and ‘nutrition labels’ 15 in their publications, reminding us that neural networks are socio-technical systems: they are conditioned by their creators and have societal effects. And a diagrammatic research project by Kate Crawford and Vladan Joler re-draws the neural network diagram in a whole-earth context: their Anatomy of an AI System traces the planetary resources, data, and human labor that underwrite artificial intelligence today. 16

So, do neural network diagrams make for a better portrait of AI than sci-fi tropes? Most people can’t understand them or relate to them the way they can to a robot. But what if this is their best feature? Instead of projecting onto a metal humanoid, a poetic reading of metaphor and symbolism in AI diagrams prompts us to consider how their creators think about cognition. This may lead to more productive conversations about the risks and opportunities of artificial intelligence research than we could have amidst sci-fi tropes. And it raises the question: What other conceptions of intelligence are desirable, and how can we represent them?


An abbreviated version of this essay was published in NOEMA in March 2021


This project was supported in part by the Berggruen Institute ToftH Fellowship program.

I would like to thank Yann LeCun and Jake Browning for their feedback and suggestions; Maya Indira Ganesh for working with me on the first version of this project which, unfortunately, didn't make it to print; Mashinka Firunts Hakopian for her thoughtful editing; and Shannon Mattern for helping me navigate the publishing world.


  1. Stephen Cave and Kanta Dihal. "The Whiteness of AI." Philosophy & Technology 33.4 (2020): 685-703.
  2. Nils Gilman and Maya Indira Ganesh. “Making Sense of the Unknown.” The Rockefeller Foundation (7 July 2020)
  3. Stephen Cave, Claire Craig, Kanta Dihal, Sarah Dillon, Jessica Montgomery, Beth Singler, and Lindsay Taylor. “Portrayals and perceptions of AI and why they matter.” The Royal Society (2018)
  4. Adrian Mackenzie’s “Machine Learners: Archaeology of a Data Practice” has been an important inspiration and encouragement to look closely at the material practices of AI research.
  5. Warren S. McCulloch and Walter Pitts. "A logical calculus of the ideas immanent in nervous activity." The Bulletin of Mathematical Biophysics 5.4 (1943): 115-133.
  6. I would like to thank Anna Munster for telling me about this story which a student discovered in Arbib, Michael A. "Warren McCulloch's search for the logic of the nervous system." Perspectives in biology and medicine 43, no. 2 (2000): 193-216.
    I will update this page once the student's dissertation is published.
  7. Orit Halpern. “Beautiful Data: A History of Vision and Reason Since 1945.” Duke University Press (2015): 162
  8. Amanda Gefter. “The Man Who Tried to Redeem the World with Logic.” Nautilus (29 January 2015)
  9. John C. Hay, Ben E. Lynch, and David R. Smith. “Mark I Perceptron Operators' Manual No. VG-1196-G-5.” Cornell Aeronautical Lab (1960)
  10. Frank Rosenblatt. “Principles of neurodynamics. perceptrons and the theory of brain mechanisms. No. VG-1196-G-8.” Cornell Aeronautical Lab (1961)
  11. Cornell University News Service records, #4-3-15. Division of Rare and Manuscript Collections, Cornell University Library.
  12. Casey Cep. “Science’s Demons, from Descartes to Darwin and Beyond.” The New Yorker (8 January 2021)
  13. Carter, et al., "Activation Atlas", Distill, 2019.
  14. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. “Model Cards for Model Reporting.” In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery (2019): 220-229
  15. Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. “The dataset nutrition label: A framework to drive higher data quality standards.” arXiv preprint arXiv:1805.03677 (2018)
  16. Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources,” AI Now Institute and Share Lab, (September 7, 2018)