A Material History of Bits

Jean-François Blanchette, Assistant Professor
Dept. of Information Studies, UCLA

In both the popular press and scholarly research, the trope of digital information as “immaterial” is invoked with remarkable persistence. In this characterization, the digital derives its power from its nature as a mere collection of 0s and 1s, wholly independent of the particular media on which it resides (hard drive, network wires, optical disk, etc.) and of the particular signal carriers that encode bits (magnetic polarities, voltages, or pulses of light). This purported immateriality endows bits with considerable advantages: they are immune from the economics and logistics of analog media, and from the corruption, degradation, and decay that necessarily result from the handling of material carriers of information. The result, as Negroponte put it, is a worldwide shift “from atoms to bits.” This is problematic: however immaterial it might appear, information cannot exist outside of given instantiations in material forms. But what might it mean to talk of bits as material objects?
Building on previous work by Kirschenbaum (2008) and Agre (1997), this project proposes a framework for discussing the material foundation of digital information. It suggests that various factors, including the trope of immateriality, have obscured the physical constraints that bear on the storage, circulation, and processing of digital information, resulting in inadequate theorization of this fundamental dimension of information systems. In fact, computing systems are suffused through and through with the constraints of materiality, and the computing professions devote much of their activity to the management of these constraints, as manifested in infrastructure software.
While applications provide services to users, infrastructure software provides services to applications, by mediating their access to computing resources: the physical devices that provide processing power, storage, and networking. Infrastructure software is most commonly encountered in the form of operating systems, but it is also found embedded in hardware (the firmware in a hard drive) or in specialized computers (e.g., web servers or routers). Whatever its specific form, the role of infrastructure software is to provide a series of transformations whereby the signals that encode bits on some physical medium (optical fiber, magnetic drive, electrical wires) become accessible for symbolic manipulation by applications. Infrastructure software must be able to accommodate growth in size and traffic, technical evolution and decay, diversity of implementations, integration of new services to answer unanticipated needs, and emergent behaviors, among other things. It must provide programmers with stable interfaces to system resources in the face of continuously evolving computing hardware: processors, storage devices, networking technologies, etc.
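A minimal sketch can make this layering concrete. The toy Python below stands in for no particular operating system; every class and method name is hypothetical. An application writes and reads a named file, while a filesystem layer translates those calls into sector operations on a simulated device, keeping the material details out of the application's sight:

```python
# Illustrative sketch of infrastructure layering (hypothetical names throughout):
# the application sees only filenames and bytes; the filesystem layer sees
# sectors; the device layer stands in for the physical medium.

class MagneticDrive:
    """Stand-in for the physical device: stores raw 512-byte sectors."""
    SECTOR_SIZE = 512

    def __init__(self, sector_count):
        self._sectors = [bytes(self.SECTOR_SIZE)] * sector_count

    def read_sector(self, lba):
        return self._sectors[lba]

    def write_sector(self, lba, data):
        assert len(data) == self.SECTOR_SIZE
        self._sectors[lba] = data


class FileSystem:
    """Infrastructure layer: maps named files onto sectors of some device."""

    def __init__(self, device):
        self._device = device
        self._index = {}                                  # name -> (sectors, length)
        self._free = list(range(len(device._sectors)))    # naive free-space list

    def write_file(self, name, payload):
        sectors = []
        for offset in range(0, len(payload), self._device.SECTOR_SIZE):
            chunk = payload[offset:offset + self._device.SECTOR_SIZE]
            chunk = chunk.ljust(self._device.SECTOR_SIZE, b"\x00")  # pad last sector
            lba = self._free.pop(0)
            self._device.write_sector(lba, chunk)
            sectors.append(lba)
        self._index[name] = (sectors, len(payload))

    def read_file(self, name):
        sectors, length = self._index[name]
        data = b"".join(self._device.read_sector(lba) for lba in sectors)
        return data[:length]


# Application code: names and bytes only. Sector sizes, layout, and
# free-space management are the infrastructure's problem.
fs = FileSystem(MagneticDrive(sector_count=1024))
fs.write_file("report.txt", b"bits are material")
print(fs.read_file("report.txt"))
```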
The computing industry accomplishes this feat through the design strategy of modularity, whereby a module’s implementation can be designed and revised without knowledge of other modules’ implementations. Modularity performs this magic by decoupling functional specification from implementation: operating systems, for example, enable applications to open, write to, and delete files without any knowledge of the specific storage devices on which these files reside. This decoupling provides the required freedom and flexibility for the management, coordination, and evolution of complex technical systems. However, in abstracting from specific implementations of physical resources, such decoupling necessarily involves efficiency trade-offs. The TCP/IP protocols, for example, provide abstractions of networks that favor resilience (the network can survive nuclear attacks) over quality of service (the network guarantees no bounds on packet delivery delay). Applications sensitive to such delays (e.g., IP telephony or streaming media) must thus overcome the infrastructural bias of the protocols to secure the quality of service they require.
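The decoupling of specification from implementation can also be sketched in a few lines. In this hypothetical example (again, no real driver model is implied), application code is written once against an abstract interface, and the storage module underneath can be revised or replaced without the caller noticing:

```python
# A minimal sketch of modularity: a functional specification (BlockDevice)
# and two interchangeable implementations. The names are hypothetical.

from abc import ABC, abstractmethod


class BlockDevice(ABC):
    """Functional specification: what any storage module must provide."""

    @abstractmethod
    def read_block(self, n: int) -> bytes: ...

    @abstractmethod
    def write_block(self, n: int, data: bytes) -> None: ...


class SpinningDisk(BlockDevice):
    """One implementation: blocks stored in place."""
    def __init__(self):
        self._blocks = {}

    def read_block(self, n):
        return self._blocks.get(n, bytes(4096))

    def write_block(self, n, data):
        self._blocks[n] = data


class SolidStateDrive(BlockDevice):
    """Another implementation: a trivial stand-in for wear-levelling,
    remapping logical blocks to fresh physical cells. The interface it
    exposes upward is identical, so callers never notice."""
    def __init__(self):
        self._map = {}      # logical block -> physical cell
        self._cells = []

    def write_block(self, n, data):
        self._map[n] = len(self._cells)   # always write to a fresh cell
        self._cells.append(data)

    def read_block(self, n):
        return self._cells[self._map[n]] if n in self._map else bytes(4096)


def save_document(device: BlockDevice, text: str) -> None:
    # Application-level code: written once, against the specification only.
    device.write_block(0, text.encode())


for device in (SpinningDisk(), SolidStateDrive()):
    save_document(device, "same call, different material substrate")
    print(device.read_block(0).decode())
```

The trade-off discussed above lives precisely in such interfaces: whatever the specification omits (seek times, wear, congestion) is invisible to the application, whether it would like to know about it or not.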
An important point is that efficiency trade-offs (or biases) embedded in a given modular organization become entrenched through their institutionalization in a variety of domains: standards, material infrastructure (e.g., routers), and social practices (e.g., technical training) may all provide for the endurance of particular sets of abstractions. This entrenchment is further enabled by the economies of scale such institutionalization affords. An immediate consequence is that the computing infrastructure, like all infrastructures, is fundamentally conservative in character. Yet it is also constantly under pressure from the need to integrate changes in the material basis of computing: multi-core, cloud-based, and mobile computing are three emerging material changes that will register at almost every level of the infrastructure.
Computing, it turns out, is material through and through. But this materiality is diffuse, parceled out and distributed throughout the entire computing ecosystem. It is always in subtle flux, structured by the persistence of modular decomposition, yet pressured to evolve as new materials emerge, requiring new trade-offs. This project thus argues that, in a very literal and fundamental sense, materiality is a key entry point for reading infrastructural change, for identifying opportunities for innovation that leverage such change, and for acquiring a deep understanding of the possibilities and constraints of computing. Such an understanding is not provided by exposure to programming languages alone. Rather, it requires familiarity with the conflicts and compromises of standardization, with the principles of modularity and layering, and with a material history of computing that largely remains to be written.
A full version of this paper is available at http://polaris.gseis.ucla.edu/blanchette/papers/materiality.pdf
