Talk & Text
»Self-Driving Futures« is a talk I gave at the conference »non-machines« at Bauhaus University in July 2023. In it I add a perspective on the term non-machines based on Gilbert Simondon's On the Mode of Existence of Technical Objects.
Let us begin with universal machines, i.e. Turing machines, most commonly in the form of computers and similar devices. In principle, they create a space of potential machines limited only by the limits of computability (both in terms of algorithms and hardware), the limits of formalization, and the imagination of the machine's author.
Programming in this sense means to reduce the potential of the universal machine to such an extent that a specific machine is developed. This (non-universal) machine is freed from any non-functional (surplus) potential.
One of the main purposes of machines is to be able to control the future. By reducing the potential of the universal machine to the exact desired functionality, the future actions of this specific machine are under control.
Reducing the surplus potential of machines
Means gaining control
Means reducing the openness of futures.1
On the Mode of Existence of Technical Objects
In 1958, the same year in which one of the first high-level programming languages (ALGOL 58) appeared, the book »On the Mode of Existence of Technical Objects« by the French philosopher Gilbert Simondon was published. In it Simondon thinks about machines in terms of their degree of technicity. Automation requires a reduction of a machine's functionality, which equals a low degree of technicity. A high degree of technicity, on the other hand, requires a leeway of vagueness within the machine. Simondon criticizes excessive specialization of the machine, which causes it to lose its adaptability.
Let's illuminate this by means of a traffic light system.2 A simple traffic light system executing a fixed routine is an example of a complete automaton. In Simondon's view it is a closed machine. It acts within a closed environment, independent of the outside world. Its degree of technicity is zero.
This can lead to absurd situations that we have all seen: a single agent (a car driver, a pedestrian, etc.) standing completely alone at an intersection in front of a red light, waiting for the system to turn green. The human agent is obviously inferior to the machine and the formal system here, so this is a bad human-machine interaction and a bad integration of technology into the world.
Simondon on this topic:
"Thus, the first condition for the incorporation of technical objects into culture would be that humans would be neither inferior nor superior to technical objects, that they would be able to approach and get to know them by maintaining a relation of equality, of reciprocal exchange with them: a social relation, so to speak."3
According to Simondon, a machine is open if it is possible to modify its operation from the outside.
The first step in achieving this is to enable information from the outside world to enter the machine and change its output. A simple version of this is a traffic light system that uses sensors to detect an agent waiting in front of a red light, which can influence the operation of the system.
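The difference between the closed and the open machine can be sketched in a few lines of code. This is a minimal illustration, not a real traffic controller: the phase lengths and the sensor mechanism are invented for the example.

```python
# A closed automaton cycles through a fixed routine and ignores the
# outside world; an "open" variant lets a sensor signal (an agent
# detected during red) end the red phase early.

def closed_cycle(steps, red_len=3, green_len=3):
    """Fixed routine: the outside world cannot enter the machine."""
    phases = ["red"] * red_len + ["green"] * green_len
    return [phases[t % len(phases)] for t in range(steps)]

def open_cycle(steps, waiting_at, red_len=3, green_len=3):
    """Sensor-actuated: information from outside modifies the operation."""
    out, phase, timer = [], "red", 0
    for t in range(steps):
        out.append(phase)
        timer += 1
        demand = phase == "red" and t in waiting_at
        if (phase == "red" and (timer >= red_len or demand)) \
           or (phase == "green" and timer >= green_len):
            phase = "green" if phase == "red" else "red"
            timer = 0
    return out

# The closed machine keeps its rhythm no matter what; the open one
# reacts to an agent detected at time step 0.
print(closed_cycle(6))
print(open_cycle(6, waiting_at={0}))
```

The two functions are structurally almost identical; what makes the second one "open" in Simondon's sense is solely the channel through which the outside world can change its behavior.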
AI Traffic Light
Recently, a more sophisticated version was installed at an intersection in the city of Hamm. It is being promoted as the first AI traffic light system in Germany. Seven cameras monitor the entire intersection and detect agents of eight different types: ["pedestrian", "cyclist", "motor bike", "car", "truck", "bus", "tram", "train"]. Depending on the type, number and the speed of the agents, the system adjusts the green and red phases of all traffic lights. For example, if there is a large group of pedestrians, the green phase can be extended to allow everyone to cross the street at once. The AI traffic light is an open machine in the sense that it uses information from the external world.
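The phase-adjustment idea can be sketched as follows. The eight agent types come from the text above; the weights, the base duration, and the extension rule are hypothetical, invented for illustration, and not the actual logic of the system in Hamm.

```python
# Hypothetical sketch: extend the green phase depending on the type
# and number of detected agents. All numbers are assumptions.

AGENT_TYPES = ["pedestrian", "cyclist", "motor bike", "car",
               "truck", "bus", "tram", "train"]

def green_phase_seconds(detections, base=10, max_extension=20):
    """Return the green phase length for a dict of {agent type: count}."""
    # Slower agents get extra crossing time per individual (assumed weights).
    seconds_per_agent = {"pedestrian": 1.5, "cyclist": 1.0}
    extra = sum(seconds_per_agent.get(kind, 0) * count
                for kind, count in detections.items())
    return base + min(extra, max_extension)

# A large group of pedestrians extends the green phase so that
# everyone can cross the street at once:
print(green_phase_seconds({"pedestrian": 8}))   # 10 + 8 * 1.5 = 22.0
print(green_phase_seconds({"car": 3}))          # cars get no extension here
```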
Ensembles of Technical Objects
The next step is a machine that uses information from another technical object. In Simondon's terms, this would be an ensemble of technical objects. One example currently in development is cars that communicate with each other and with other agents such as traffic lights. The main goal, of course, is to gain even more control over the future. We all know the popular question of whether a self-driving car should aim for the grandmother or the child. In the idealized, technically driven world, this problem does not exist: the technical objects have the situation under control, and accidents do not happen.
⤷ The main purpose of interacting machines is to control the future.
Indeterminacy and Vagueness in Neural Nets
Simondon argues for a margin of indeterminacy within a machine to enable it to adapt to the world. Let us consider this in regard to artificial neural networks. What is called learning could also be called adaptation. During the iterative learning process, parameters inside the neural network are adjusted to produce the desired output. The computational steps inside an artificial neural network are determined; the indeterminacy is located on the side of the data and of the parameters that are adjusted by the data. A dataset has its own vagueness due to the differences among its individual samples. When the data is transferred into the parameters, some particularities of the data are lost through abstraction. It may seem that abstraction sharpens the net by removing particularities, but it also introduces a new vagueness compared to the concrete object or data sample with its particularities.
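The point that the computation is fully determined while the indeterminacy lies in the data can be shown with a deliberately tiny sketch, using a single parameter instead of a real network. The data values and learning rate are invented for the example.

```python
# "Learning as adaptation": one parameter w is iteratively adjusted
# by the data. The update rule itself is completely determined; only
# the data carries the vagueness.

def fit(samples, lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):
        for x in samples:
            w += lr * (x - w)   # nudge the parameter toward each sample
    return w

data = [1.9, 2.0, 2.1, 2.0]     # the dataset's vagueness: slightly
                                # differing individual samples
w = fit(data)
print(round(w, 2))              # one abstract parameter remains; the
                                # particularities of the samples are gone
```

The trained parameter settles near the average of the samples: the dataset has been transferred into the parameter, and the individual differences between the samples are no longer recoverable from it.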
Conversely, Brian Cantwell Smith suggests in his book »The Promise of Artificial Intelligence. Reckoning and Judgment« that vectors and parameters might contain more information and detail than we get out of them when we eventually reduce them to objects and concepts with sharp boundaries. Machines could instead perform decision making at a sub-conceptual level, i.e., at the level of lists of floating-point numbers.4
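A small sketch of this contrast, with invented scores: reducing a network's raw float outputs to a single concept with a sharp boundary discards detail that a sub-conceptual treatment could still use.

```python
# Hypothetical classifier output: a list of floating-point scores.
scores = {"pedestrian": 0.48, "cyclist": 0.47, "car": 0.05}

# Conceptual level: collapse to one label with a sharp boundary.
label = max(scores, key=scores.get)
print(label)                     # "pedestrian": the near-tie is lost

# Sub-conceptual level: act on the floats themselves, e.g. keep both
# hypotheses alive when the margin between the top two scores is small.
top_two = sorted(scores.values(), reverse=True)[:2]
ambiguous = top_two[0] - top_two[1] < 0.1
print(ambiguous)                 # the vagueness is preserved
```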
Although it sounds contradictory, the neural network seems to contain more vagueness and more detail. But ultimately both are sharpened and reduced to an output that fits our ontology of the world. This can indeed be problematic if we take vagueness and probabilities as facts.
VUCA (volatility, uncertainty, complexity, ambiguity) is a model to describe the world we live in.
(Of course this is an example of our pleasure in reducing the richness of the world to simple concepts. Nonetheless ...) A major feature of our intelligence is the ability to deal with these factors. I argue that the main branch of technological development attempts to dominate these factors with a simple approach of control.
How about a traffic light system like the one created by the artist Iván Navarro? The light installation »Traffic Light« shows a mobile of seven traffic lights synchronized so that they all show the same phase at the same time. We can easily imagine a situation in which all agents have to stop together at an intersection and are then allowed to move, and collide, at the same time. This illuminates our dependence on technology and the aspect of control over the agents' actions.5 On the other hand, it would be interesting to see how human agents would adapt to this situation, as their own decision making is more involved.
Besides control, we6 aim for a reduction of individual responsibility. The inherently better traffic circle opens up much more leeway of vagueness for all agents, and we can better sense that we are responsible for our own judgment and actions. But it is of course wrong to conclude, conversely, that we are less responsible when machines suggest our actions.
Leeway of Vagueness
Simondon prefers to keep the human in the loop. He envisions a leeway of vagueness in machines that keeps them open for a human operator to modify their actions through interaction with them.7
More from Simondon:
"The technical activity differs from the mere labor and from the alienated labor in that the technical activity includes not only the use of the machine, but also a certain coefficient of attention to the technical functioning, maintenance, adjustment, improvement of the machine, in which the activity of invention and construction is continued."8
Much of the technological development on an industrial scale is heading in the opposite direction: technological objects are made for unknowing users, without the ability or possibility to encounter them in the mode of technical activity.
Technical Activity with Neural Nets
Let us look at possibilities of technical activity in dealing with artificial neural networks:
- Fine-Tuning a pre-trained model
- Training a given model architecture with a custom dataset from scratch
- Continuing the invention
Nowadays, the most widely used method is to fine-tune a pre-trained model, because it is the fastest and easiest way to create usable applications. Training a model from scratch has become increasingly exclusive due to the amount of resources required. The same is true for continuing the invention or construction: machine learning models have become so large, and their code is so rarely open source, that it is practically impossible to continue the construction.9
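The difference between the first two modes of technical activity can be made concrete with a toy model whose parameters are plain numbers; no real ML framework is assumed, and the data, learning rate, and parameter names are invented for illustration.

```python
# Toy linear model y = base * x + head. "Fine-tuning" freezes the
# pre-trained base and adapts only the head; "from scratch" lets the
# custom dataset shape every parameter.

def train(params, trainable, data, lr=0.5, steps=50):
    """Adjust only the listed trainable parameters to fit the data."""
    for _ in range(steps):
        for x, y in data:
            err = y - (params["base"] * x + params["head"])
            for name in trainable:
                grad = x if name == "base" else 1.0
                params[name] += lr * err * grad * 0.1
    return params

custom_data = [(1.0, 3.0), (2.0, 5.0)]   # fits y = 2x + 1

# From scratch: every parameter is shaped by the custom dataset.
scratch = train({"base": 0.0, "head": 0.0}, ["base", "head"], custom_data)

# Fine-tuning: the pre-trained base stays frozen, only the head adapts.
pretrained = {"base": 2.0, "head": 0.0}
tuned = train(dict(pretrained), ["head"], custom_data)
print(tuned["base"] == pretrained["base"])   # True: base untouched
```

Fine-tuning touches only a small part of the machine, which is precisely why it is the cheapest and the least inventive of the three modes: the frozen base remains someone else's construction.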
Let us return to the initial definition of a machine.
If a machine is defined by the reduction of surplus potential
a machine which contains surplus potential
can be seen as a non-machine.
Surplus potential means a leeway of vagueness within the machine and the possibility to approach it in a mode of technical activity.
Some key words about what self-driving futures could mean: first of all, appreciating that we live in a VUCA world (one more complex than the model suggests); then, practicing technical activity that leads to more autonomy, agency, and responsibility.10
In his book »I am a strange loop«, Douglas Hofstadter describes us human beings as
"unpredictable self-writing-poems --- vague, metaphorical, ambiguous"11
It seems like we are using machines to reduce these qualities in the world, maybe even in ourselves. Instead, we could use them for the development of non-machines.
Of course, this applies not only to the construction of the machines themselves but also to our mindset when interacting with machines. See, for example, the chapter »Keine Experimente. Über künstlerische Künstliche Intelligenz« (»No Experiments. On artistic Artificial Intelligence«) in Hannes Bajohr, Schreibenlassen. Texte Zur Literatur Im Digitalen (Berlin: August Verlag, 2022). ↩
In general, traffic rules are a good example of a formal system. It is defined by abstractions and is independent of the individuality of the agents who apply it. In order to function, it depends on our consent. It is not biologically inscribed in us to wait in front of a red light and start moving when the light turns green, etc. ↩
Gilbert Simondon, Die Existenzweise technischer Objekte (Zürich: diaphanes, 2012), 81. Translated by MK. ↩
Brian Cantwell Smith, The Promise of Artificial Intelligence. Reckoning and Judgment (Cambridge, Massachusetts: MIT Press, 2019), 60--63, https://doi.org/10.7551/mitpress/12385.001.0001. ↩
Felix Hofmann-Wissner brought to my attention that the (obvious) traffic light system is also used in Ted Kaczynski's manifesto »Industrial Society and Its Future« as an example of the restriction of freedom by technology. What he is basically arguing about at this point: "When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it." Theodore John Kaczynski, 'Industrial Society and Its Future' (The Washington Post, 1995), para. 127, https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm. This is indeed the case for many technological artifacts. But of course Kaczynski drew unacceptable conclusions from his views. ↩
Of course the question of who is we matters here. ↩
The interaction (or entanglement) between machines and other agents is a major force against controllability of machines and thus futures. See for example from an artistic/aesthetic perspective Mattis Kuhn, 'Unbestimmtheitsspielräume Algorithmischer Geflechte in Zeitgenössischer Kunst' (Diploma Thesis, Offenbach, Hochschule für Gestaltung, 2016), https://archive.mattiskuhn.com/MattisKuhn_2016_Unbestimmtheitsspielraeume-algorithmischer-Geflechte.pdf. ↩
Simondon, Die Existenzweise technischer Objekte, 231. Translation and italics by MK. ↩
I myself have followed this path through several projects in my artistic practice. First, I modified an existing architecture (which can be seen as a continuation of the construction) and trained it with a custom dataset. Next, I used an existing architecture and just trained it with my own data. Then I used an already pre-trained model and fine-tuned it with my own data. And finally, I used a trained model without any customization of my own. So over the last years I can trace a decay of technical activity in my own artistic practice with neural networks. ↩
Now I think I prefer the term Self-Driving Presents. Thinking and acting into the future may often be helpful and useful, but on the other hand many of our problems arise from thinking too much about the future instead of being in the present. ↩
Douglas R. Hofstadter, I Am a Strange Loop (New York: Basic Books, 2007), 363. ↩