Task-Transforming Representations

March 20, 2017

Cognition in the Wild, which I read last year, is one of those books that can change the way you see the world. In one of my favourite passages, Hutchins talks about the slide rule:

The slide rule spatially juxtaposes logarithmic scales and implements addition and subtraction of stretches of space that represent logarithmic magnitudes. In this way, multiplication and division are implemented as simple additions and subtractions of spatial displacements.

According to Hutchins, the slide rule — and other tools like maps and navigational charts — are representational systems that have certain computational properties. You can think of the work that goes into building these tools as a kind of precomputation that can be leveraged when the tool is used. They are “task-transforming” representations, because the tasks involved in using them (e.g., physically manipulating the slide rule and reading off the scales) are fundamentally different from the tasks required to do the computation from scratch.
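To make the trick concrete, here is a toy sketch of the arithmetic a slide rule embodies, written in Python rather than wood and celluloid. The function names and numbers are my own illustration, not anything from the book: a number x sits at position log10(x) on the scale, and sliding one scale along the other adds those positions.

```python
import math

def position(x):
    """Where the number x sits on a logarithmic scale."""
    return math.log10(x)

def read_off(pos):
    """Read the number printed at a given position on the scale."""
    return 10 ** pos

def slide_rule_multiply(a, b):
    # Physically: align the start of the sliding scale with a on the
    # fixed scale, find b on the sliding scale, and read the answer
    # beneath it. Computationally: add two spatial displacements.
    return read_off(position(a) + position(b))

print(slide_rule_multiply(3, 7))  # ~21.0 (a real slide rule gives ~3 digits)
```

All of the hard work, computing the logarithms, was done once, when the scales were engraved; the user only adds lengths.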

Later, Hutchins writes:

These tools thus implement computations as simple manipulation of physical objects and implement conceptual judgements as perceptual inferences. But perhaps this refinement will be lacking from the next generation of tools. By failing to understand the source of computational power in our interactions with simple “unintelligent” physical devices, we position ourselves well to squander opportunities with so-called intelligent computers. The synergy of psychology and artificial intelligence may lead us to attempt to create more and more artificial agents rather than more powerful task-transforming representations.

When I read this yesterday, it occurred to me that most of what we call “AI” today — the stuff that used to be called machine learning — could be seen as another class of task-transforming representation. We start with a bunch of training data, usually a particular representation of something else that we care about. The result of the learning process — say, a classifier — is analogous to a map: it’s a representation of the training data with certain computational properties. It makes some tasks easy (“is this spam?”) and others hard (“why was this marked as spam?”). Like a map, it’s not neutral: it’s a tool designed for a particular purpose, biased in certain ways and by necessity a simplification of a much messier and more nuanced “real world”.
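To make that asymmetry concrete, here is a minimal sketch using scikit-learn, with a toy dataset I invented for the occasion. The point is not the model but the shape of the tool: the question the classifier was built for takes one call, while the question it wasn’t built for has no operation at all.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented training set; a real spam filter learns from millions.
emails = [
    "win cash now", "free prize claim now", "cheap pills online",
    "meeting at noon tomorrow", "draft attached for review", "lunch on friday?",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Training "precomputes" a representation of the data: per-class word
# statistics baked into log-probabilities.
clf = MultinomialNB().fit(X, labels)

# The easy task: "is this spam?" is a single lookup against the
# precomputed representation.
print(clf.predict(vectorizer.transform(["claim your free cash"])))  # [1]

# The hard task: "why?" has no direct operation. The closest we can get
# is to poke at the learned parameters word by word.
print(dict(zip(vectorizer.get_feature_names_out(), clf.feature_log_prob_[1])))
```

Even then, the second print is not an explanation; it is us reading the representation’s internals, a task the tool was never designed to support.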

And just as with maps, it’s easy to forget that the representation is not the real thing.