
Performing Tools

There are a lot of discourses around large language models. Some are messianic, preaching that this form of mechanical intelligence will save us all; some are doomsayers, prophesying the end of the human; others are confused, not sure whether they’re missing out on the revolution of a species or merely falling for the newest hype. Others do not care at all1.

Personally, I have very mixed feelings about the whole thing. I think the engineering is cool (for instance, turning words into vectors), but I think the hype is a waste of everyone’s energy (particularly because it is mostly agenda-setting by stakeholders aiming to sell a product). What I do find interesting is that LLMs offer a new medium through which to revisit old questions. In particular, I’m curious about the propensity of tools to help us perform, and to force us to perform.

Networks of dependencies

Well aware of the pitfalls of generational judgment (in which older people think it was better before), I am trying to disentangle the roles and responsibilities of humans and machines: what can and should each do?

When you don’t make that distinction correctly, you run the risk of creating a bond of dependency. Using these tools embeds us in a network that may demand an altogether different price. The subscription model sustains the shift from ownership to access: this used to apply to products, and now it is being applied to tools. At the same time, this servile relationship is hidden behind friendly interfaces and metaphors (‘chatting’ sounds better than ‘initiating a word-vector stochastic traversal’).

We should pay attention to the difference between offloading (to transfer, to get rid of) and delegating (to entrust, to have another act on one’s behalf). Which tasks are worth delegating, which are worth offloading, and which should be neither? Unsurprisingly, we might want to consider the issues of memory and of cognitive lock-in.

My problem with LLMs is that most of what they do, we can do ourselves (writing emails, sending reminders, figuring out simple problems), given some time. It’s different for other kinds of machines (cranes, CNC mills). We use LLMs merely to glean a few seconds, so that we have something to offer at the altar of the gods of productivity2. Those who still think these technologies will bring betterment for most forget Parkinson’s law: a task expands to fill the time allocated to it. LLMs won’t help with our leisure unless they come with a legally binding reduction of working time.

Creativity and productivity

The question of whether LLMs can create art is interesting because it touches on the particularly human, something to do with an essence. It’s not so much whether they can create art, but whether we can create art with them. The answer is yes, but what kind of art? Disentangling this idea might yield some leads for thought: what is the relationship between how we create art and what we create? What does it do to the creative process, understood broadly as the creation of anything? To art as the act of making?

When we start using a technology, there is always a benefit and there is always a drawback. One of my favorite pieces of technology is Zotero, a bibliographic reference management system. It does great things to facilitate the import, export, and formatting of references, and it has saved me from countless afternoons of menial labor when writing up my citations. But, on the other hand, I have lost any overarching feeling for what my library of texts looks like, the feeling one rekindles whenever glancing over a bookshelf.

So one way of gauging this dependency is to remember the difference between a tool and a machine (not so easy in the case of LLMs, and this blurring already started with Adobe). I feel that central to this is the question of adapting: we adapt a tool to our desires, whereas we adapt ourselves to a machine. It’s hard to be overwhelmed by a tool; when you are, it has usually started to feel like a machine3.

As to whether we can create genuine things, I’ll leave the final word to an anonymous online commenter:

“remix culture” required skill and talent. Not everyone could be Girl Talk or make The Grey Album or Wugazi. The artists creating those projects clearly have hundreds if not thousands of hours of practice differentiating them from someone who just started pasting MP3s together in a DAW yesterday.

If this is “just another tool” then my question is: does the output of someone who has used this tool for one thousand hours display a meaningful difference in quality to someone who just picked it up?

I have not seen any evidence that it does.


  1. Usually, these are the people who do not really deal with reading or writing natural languages in their everyday activities. ↩︎

  2. In the meantime, human relationships get replaced, and new ones are nowhere to be found. I remember mentioning to someone, on the radio, that I was stuck on a frustrating bug. In the end, we realized the PR took forever to be merged because the project maintainer, Jakub, was a 19-year-old kid from Poland who was busy with his first-year university finals. Someone told me I could have asked GPT to help with that bug; had I done so, I would never have learned the name of one of the humans on whom I rely. ↩︎

  3. Hito Steyerl said that a machine uses you as a tool. ↩︎