Beep, beep. Don’t obstruct progress: co-produce it.

Not long ago I had a conversation with someone who had been in a coma for several months. He finally came to after hearing a discussion at his bedside about whether or not to switch off his life support, a conversation that penetrated his consciousness deeply enough to jolt him back to wakefulness. I asked him what he remembered of being in the coma. He said he had felt the presence of close friends, including a former lover who was dead, and was aware of other conversations. He had felt quite happy, believing himself to be in a beautiful garden, populated by singing birds. After returning to full consciousness he realised that the sounds his brain had interpreted as birdsong were in fact the beepings of the medical instruments that surrounded him. This left him none the wiser about whether or not there had been a ‘near death experience’. After all, it is common to dream about former friends and lovers, and for our dreams, porous as they are, to be invaded by external sounds, in my own case by stray items of news from BBC Radio 4 that weave themselves into the plotlines.

The ubiquity of beeps is now an essential feature of the 21st-century soundscape, not just in hospitals. In my kitchen, for example, a beep might be telling me that the fridge door has been left open, that something has finished cooking, that a droplet of water has strayed onto my neurasthenic electric hob, that the dishwasher has finished its cycle or that a new communication has been received on my phone. On the street it might indicate a reversing vehicle or a pedestrian light about to change or (if the window has been left open) that someone in a passing car has failed to fasten their seatbelt. Beeps provoke anxiety, sometimes leading to a condition of prolonged distress, especially if the cause can’t be identified or (as is often the case in hospitals) if one is helpless to address it. For some workers this stress must be chronic. Most of us do not have the luxury, if such it can be called, of sufficient passivity to allow our brains to transmute these sounds into something relaxing and beautiful. Beeps are designed to alert. They demand obedience.

It would be rational to presume that any irritation they cause is more than outweighed by their benefits in keeping us safe. But there is another way of looking at beeps: as part of the mutually shaping process by which people learn to adapt to technology, just as technology (through the use of machine learning) is taught to adapt to us.

As appliances become more complex there is an increasing need for them to be installed and managed (often via procedures so lengthy that many of us never bother to complete them, leaving the default settings intact and never using the vast array of different programmes that are in principle available to us on our smart TVs or fancy ovens). A beep may form a crucial part of the signalling system that tells you whether you have done so correctly. With the spread of speech recognition and touch-sensitive devices it is of course only one of many different kinds of interfaces, its role often little more than to say ‘mission accomplished’ or ‘watch out!’ while you grapple with some of the others.

These interfaces are often quite clumsy. In order to use them you have to learn how to enunciate words in such a way that they will be understood by Alexa or Siri or the bot your bank might use to route you to the right bit of the contact centre. You have to learn how hard to press the touch-screen and for how long before it allows you to buy your ticket or check out your groceries or access an app on your iPad, or how lightly to swipe to communicate with your potential lover or get rid of an unwanted ad. As with other new languages, two-year-olds can learn these things more easily than their grandparents (which becomes very scary when you think of what kinds of images can be inadvertently accessed from a smartphone or tablet).

We are in effect being trained to respond to machines, unable to access their use values until we have learned the correct way to interact with them, devoting our time and patience to this learning in a laborious process of trial and error. This is one side of a story, the other side of which concerns the ways that the machines are taught to respond to and anticipate human behaviour in ever-more sophisticated ways. To a considerable degree this converse process is also achieved by the investment of human effort. It might be through the labour of a dispersed global army of clickworkers, paid by the task. Or by our own unpaid labour, as we select which, from a grid of images, is the one containing a traffic light or a tree, to prove we are ‘not a robot’ in order to access some digital service. It might also be gleaned, without our knowledge, by harvesting and analysing the data captured from the tracking of our physical movements or online activities. In this mutual learning and adaptation, the boundary between our selves and the technology becomes ever more blurred. On the one hand, we seem to become technology’s servants, reducing our communications to those that it can accommodate and unthinkingly coerced into following its directions. On the other, it feels increasingly like a prosthetic extension of ourselves, enabling us to do things our childhood selves could hardly have dreamed of. Simultaneously constraining and holding out a promise of liberation.

I have just finished correcting the proofs of the latest issue of Work Organisation, Labour and Globalisation, which I edit. It is on a theme that provides interesting insights into this process of sociotechnical mutual adaptation. The guest editors (Manuel Nicklich and Sabine Pfeiffer) are based in Germany and the key concept they use is that of Verselbständigung. Well known to German-speaking readers from its use by Marx, Weber and Adorno, among other theorists, this concept is little known or understood by English-language audiences, not least because it has been translated in multiple ways depending on its use in different contexts, and few non-German-speaking authors seem to have grasped its importance. Using the term ‘self-perpetuation’ to convey its meaning in English, the contributors make a compelling case for its usefulness for grasping the ways in which (like other aspects of capitalism) artificial intelligence and algorithmic management take on a life of their own, setting in motion dynamics that exaggerate pre-existing trends and take them forward in ways that develop their own momentum (but also open up new contradictions). Like a runaway truck. Beep beep. Out of my way! Or else….