Big Questions
“What are the important questions in your field?”
Since I first heard this question, I have been thinking about it regularly. If we are to do something, we might as well do it well; and what I do is research. Doing research means producing understandings of phenomena in order to accomplish things in reality, and to understand something means to be able to answer a question about it. So we might as well go for the good questions, in order to get the good answers, and to do the good things.
For now, and for me, it comes down to three things, revolving around software, technology, norms, and the breaking of norms. What are the limits of technology? What shape does technology impose? And what does it erase?
Where are the limits?
Are there inherent limits to technology? Are these systems bound to only ever grow? How does a system (a technology, a population, a crop) get stabilized?
Believing in a large-language-model revolution, corporations are opening nuclear power plants to help us write our emails. The academically restrained feeling says that this makes very uncertain sense. The gut feeling says it’s fucking stupid, and disrespectful to all humans and non-humans.
Nuclear might be the most dramatic example, but technology does seem to grow rather than regress. There always seems to be more and more technology. When do we get to less and less? Can we ever get to less and less? Is that a problem with technological solutions, or a problem with exclusively non-technological solutions? Does technology/progress ever stop? If so, when, and then what? If not, how bad could the consequences be? At the core of this lie tensions between such technological expansion and material, planetary limits.
Intrinsically, there seems to be some sort of inertia to technology (which we can see in the convenience it creates). So we need to involve other systems in controlling technological growth. I would tend to consider that, since functional artefacts (forms of technology) do provide affordances but are also mostly intent-driven (they are about doing something), they should be servants and vehicles of social, economic, cultural and political environments.
For instance, among the systems that surround technology, orienting or bolstering it, there is productivity; its comrade-in-arms, consumerism; and its siren, advertisement. I like the way Ellul puts it: “The advertisement industry is the propaganda machine of the technical system”. So maybe this is where we set a limit: by redefining the values of such peripheral systems (advertisement being a machine that manipulates desires according to values), within which the technical system itself exists.
This is related to the “can/shall” question of ethical restraint[^1], and with this question of “ought” comes a secondary one: is there any technological component to ethical behaviour? Or do we just give up on technology altogether? Even if we were to restrain technology through technological means, even if those means enabled a vision in which limitation is valued, is it wise to make technology the guardian of technology?
What are the styles?
How much does technology force us into modes of thinking and acting? How ordering can a system be? What different kinds of modes of influence exist? How far can technological arrangements make people do things? How does software get embedded into situations, and how much agency does it retain? How does technology affect our perceptions (e.g., of time and space)?
The technological determinism in our modes of representation and action is not to be underestimated, since we are the most technological of all animals. One of the things at stake here would be that changing those technologies would change things for the better (in the case of a strong techno-deterministic view)[^2].
That also has to do with the social, economic, and cultural bias of artefacts, and whether that bias is active in enabling some behaviours over others. I would say that a technology embodies, more or less intensely, certain behaviours and certain actions (some sort of potential performance, in this sense), but clarifying how is an important question. Upstream, it can be oriented by values, by intent. Downstream, it is oriented by the material configuration of things, enabling or preventing. And yet, at the same time, a circle is a fucking circle, so where does it even start? At which point, at which threshold, does it cross into what we humans perceive as an influence, one which might or might not be aligned with our desires?
To investigate this, we could start from the fact that there might be different modes of determination, in the way that an expressive system (interface-rich) induces behaviour differently than a non-expressive one (interface-poor). There is also the adaptability and customizability of the system; widely available versus rare. Open versus closed is another of those modalities, and we could develop the list further.
It might seem at first that technology only operates rationally. But if we say it is expressive, then maybe it has some relation with emotion as well? Because, what is an expressive system? One that deliberately communicates an emotion? Or one that focuses on imposing a general impression, like Richter or Pollock? The design?
What is clear is that this question demands a finer typology of the properties of technology. For instance, we should look at the formatting abilities of technologies.
A style is a form, and a form can be regularized into a format, which in turn is meant to be perceived. Technological style is a specific (industrial) way of formalizing the world (i.e., making things to DIN standards). Formats then enable activities through readability[^3] and, again, seem to combine quite well with productivism, as they lower certain costs (see the incredible cost reduction enabled by shipping containers).
Tools shape us and we shape tools? What about the alternative? What if technologies were completely neutral? Then a technique might be re-purposed, circumvented. But I suspect that a lot of people are still affected by software structures; so the technology could be neutral, but the design is not (and what is the difference between those two? That design always assumes a user, and a use-case?). Here, we see the EME extension again, so the use is a factor.[^4]
What are the losses?
What is lost when we introduce a technology? Do these losses affect those who use the technology, or rather those on whom the technology is used? Are there any patterns in the duration of such losses? How predictable are these losses? How contextual are they? Are they offset by gains, if the zero-sum framing is even valid?
These are the most recent questions, but they are motivated by the observation that technology is only ever sold to us (see the note on advertisement above) through progress, gains, and ameliorations. Yet, like all advertisement, that’s a lie. So how can we know what we have lost, what we are losing, and what we will lose?
This involves setting up a comparative analysis, a set of pros and cons. What do we always gain? Speed. What do we always lose? Sloth?
[^1]: Just because we can, does it mean we should?

[^2]: Such as getting rid of short-form video.

[^3]: I feel like there is something to be further explored with digital maps. Jameson’s analysis of postmodernism already considered a loss of orientation, but I have the feeling that the GPS system allows the Google Maps UI to re-center us in a world that escapes grounding.

[^4]: The legal-technical connection is a strong and radical one: security guards get replaced with locks and machines.