Thoughts on anti-Semitism

It is a long time since I wrote in this blog. There is a repeating pattern whereby a period of illness (in this case recovering from some surgery just before Christmas) sets me back in my ‘proper’ writing, rendering me too guilty to indulge in non-commissioned work until I have cleared the backlog. But eventually I feel so strongly about something that I have to break the self-imposed taboo. Something similar happened a couple of years ago, when I found myself compelled to break the silence during the build-up to the Brexit referendum because of concerns about how it was reported. Today’s impulse also comes from a kind of horror at what is going on around me politically, but this time the context is the whisked-up media attention currently being paid to anti-Semitism in the Labour Party, which, it appears to me, wearing my Cassandra hat, may be leading us towards something that is profoundly harmful, politically and socially (even though it might in the long run have some positive impacts by bringing what is hidden or taken for granted to the surface).

Despite (or perhaps because of) the huge media coverage, there seem to be important questions that are not being asked, or answered, adequately. There is of course a lot of discussion about where the attacks are coming from and why now: smoking guns aplenty for conspiracy theorists. The Tories, looking for any ammunition to use against Labour in the run-up to the local elections; the Israeli Government, happy to have pro-Palestinian voices silenced while they shoot unarmed civilians in Gaza; Blairites in the Labour Party who seize on any opportunity to attack Corbyn, regardless of its impact on Labour’s election chances; the mainstream mass media, drifting ever further rightwards, with the BBC (fearful of the axe) rivalling the Murdoch press in its pandering to the Tories. This morning, on the Today programme, the fact that Corbyn had attended a meeting of Jewdas (‘a left-wing Jewish group critical of more mainstream Jewish organisations’, as the BBC put it) was treated as evidence that he was failing to address anti-Semitism in the Party, in a piece of doublethink worthy of Orwell. The question that, it seems to me, is not being addressed sufficiently is how and why this particular charge is so difficult for serious socialists to counter. How is it that the energetic, resilient left, which has successfully fought back over the last couple of years against so many anti-Corbyn smears, can be so easily silenced when accused of something which that very socialist left has, over the years, done more than any other political grouping in this country to counter? What notions of good and bad, what intersections of fear and shame, what confusions, what extremes of guilt-by-association, have brought us to this pass?

So here I am again, adding yet another voice to the conversation.

Like last time, when I wrote about Brexit, I have been very hesitant about putting my thoughts online. On both topics there are many people much better-informed than I am. I feel a bit like the fool stomping in, in my muddy wellies, onto a polished parquet floor where even the angelic dancers hesitate before taking a tentative step. Not only is it likely that my generalisations will be quibbled with and my attitudes questioned as old-fashioned, badly informed or politically incorrect, but, in the case of anti-Semitism, there is also the undeniable fact that I am not Jewish, and therefore, perhaps genuinely insensitive to what is going on.

So I will start with a self-interrogatory personal narrative of what Jewishness means to me. I grew up with a strong awareness that anti-Semitism existed and was not easy to fight. My father was a student in Vienna in the 1930s and witnessed extreme forms of it first hand. He was a close friend of Muriel Gardiner (who became Muriel Buttinger after her marriage to Joseph Buttinger, the leader of the Austrian Socialist Party). Taking advantage of her family’s wealth and her US passport, Muriel played an active role in sheltering socialists wanted by the police from the Nazis and helping find ways for Jews to get out of the country safely, a role that was lightly fictionalised in the 1977 film, Julia, where her part was played by Vanessa Redgrave. My father played a small role in this (including helping to acquire British passports that could be used to get people across the border) and Muriel remained a life-long friend of his and, later, an inspiration to me. I have vivid memories of the last time I met her, over dinner in an Italian restaurant in Bloomsbury in the 1970s, in the company of my father and my friend Nick Redgrave, where she talked about her American socialist youth, collecting money for the defence of Sacco and Vanzetti in 1920. It was on this occasion that, when I admired the chunky Venetian baroque pearl necklace she was wearing, with typical impulsive generosity, she immediately took it off and said, ‘Here, have it!’.

Anyway, suffice it to say, this experience cemented for me an association between Jewishness and socialism. Like the rest of their generation (my father was born in 1902, my mother in 1907) my parents were by no means free of racial stereotypes. For them, Jews were clever, sensitive, musical and studious. They were good fathers but unlikely to be interested in sport or the consumption of alcohol. Some of these stereotypes were challenged when applied to the complex personalities of Jewish people I knew personally but many were not. I really did meet a lot of Jewish psychoanalysts, writers, university lecturers, publishers, musicians and artists. And an awful lot of them really were socialists. Most were also part of a shared culture which was secular and humanist and their Jewishness did not seem particularly important.

During the 1960s and 1970s, Jewishness, like other identities, felt to me like part of a historical legacy that would become less and less important with the progressive advance of universal education, and the spread of democracy and egalitarian welfare regimes. Religions, it seemed then, formed part of superstitious heritages designed to bolster hierarchies (including patriarchal ones), reinforce obedience to power, provide hope to people facing intolerable adversity and give them spiritual sustenance in their sorrow. While of course their practices should be tolerated and their rituals celebrated, their roles would increasingly be taken over by other communitarian agencies in the brave socialist future we imagined.

But I was sometimes reminded of the strength of Jewish identity, even among secular Jews. I remember a conversation with Michael Kidron (then my editor at Pluto Press) in which I was describing how international my family had become (my siblings’ spouses were, variously, Polish-German, Japanese, French and Palestinian – which extended in the next generation to Indian, Chinese and US spouses) and his immediate reaction was ‘Yes, but you’re all Goy’. I was also struck, living in Yorkshire in the 1970s, by the bonds that stretched across the Jewish community there, crossing party-political boundaries. When the National Front wanted to march through Leeds City Centre, this was stopped, so I was told, by Irwin Bellow, the Tory leader of the City Council, former owner of a company that made sewing machines (subsequently knighted for his services selling council houses as a minister in the Thatcher government) after a phone call from Lou Baruch, the communist leader of Bradford Trades Council (‘the textile workers’ champ’) – the anti-union capitalist and the trade union leader cheerfully colluding to thwart anti-Semitism.

In the early 2000s, when organisations like Jews For Justice in Palestine were formed in the UK, it came as quite a surprise to me how many friends I had up to then simply thought of as fellow socialists and feminists decided to identify themselves publicly as Jews. It had never even occurred to me in many cases that that’s what they were. I actually found it quite difficult to place myself, as a non-Jew, in relation to such campaigns, which seemed to construct people like me as outsiders. If one was campaigning on the basis that everybody was equal and religious distinctions should not matter, then it seemed on one level contradictory to insist on such distinctions. On another level, of course, it was very understandable. In a world where every non-Jew runs the risk of being (consciously or unconsciously) anti-Semitic, just as all white people run the risk of being (consciously or unconsciously) racist, then Jewish voices that stand against Israeli state policy have a unique chance of being heard out. Nevertheless, it left me, and perhaps others, with a sense of having nowhere to put my solidarity, a silencing of sorts.

This is not the whole story, of course. The atrocity of the Holocaust hung like a pall over my childhood, as I suppose it did for most of my generation. I didn’t even realise how deeply it affected me until I became a mother and found myself haunted by detailed and very concrete imaginings of the experience of deportation and the death-camps. What did you do, I kept wondering, crammed, standing, into a cattle-truck with nothing but the clothes on your back, when your baby needed a nappy change? When it cried? When it wanted to crawl? How did you cope with the leaking breast-milk when that baby had been snatched from you? Such questions, I realised, had formed part of the mental sound-track of my life since childhood, breaking their way into consciousness only at moments of emotional stress or vulnerability and playing who knows what convulsive riffs while they remained unconscious.  I can still remember, with great vividness, an experience at my North Wales primary school in the mid-1950s. Two boys were approaching kids in the playground with knowing ‘I’ve got a secret’ smirks, asking if anyone wanted to see their pictures. One by one, we were shown two well-thumbed black-and-white photographs, cut from a magazine, of the piled-up emaciated bodies that were found at the liberation of Belsen. It is hard to exaggerate the shock of this – not just the obscene reality that was represented in those pictures but also the voyeuristic frisson that these two boys seemed to experience, as if it were pornography, and the air of secrecy, as though these images revealed something so shameful that children were forbidden to view them. I am sure I am not the only person whose nightmares were invaded by these images. Such experiences confirm the idea of the Holocaust as something uniquely awful, incommensurate with other atrocities. Even to mention it in the same breath as other genocidal massacres can feel like somehow trying to diminish its importance. 
Small wonder that denying it is a criminal offence in many countries.

And then there is Israel. My early view of Israel was partly shaped by second-hand accounts of kibbutzim, which provided gap-year experiences to many more or less idealistic kids in the years after National Service was abolished for young British men. They seemed like a foretaste of socialism – sexual freedom and communal living amongst the orange groves. Israel was, in this view, the happy ending that awaited those who were lucky enough to have survived the horror and brave enough to fight for freedom, a view that was reinforced by the 1960 film Exodus, directed by Otto Preminger, one of the first I ever saw on a wide screen.

Since then, I have become only too aware of how necessary it is to unpick such facile narratives and explore their contradictions, not least through my first-hand contact with the descendants of Arabs for whom the foundation of Israel meant being turfed off their ancestral lands. But such unpicking is extraordinarily difficult to do when the narratives are so highly-charged, both emotionally and morally.

I became acutely aware of this when I visited the United States Holocaust Memorial Museum. By coincidence, this, my first (and so far only) visit to Washington, took place only a couple of weeks after the 9/11 attack on the Pentagon. I was on one of the first flights into a city where at least one of the other airports was still closed. There was a strange jumpy hysteria in the atmosphere, with military aircraft zipping overhead and people with their heads down hurrying to get home. Flags fluttered everywhere. Streets were almost empty. My academic hosts had arranged for me to have dinner with their Dean in a fashionable restaurant that served up expensive versions of poor people’s foods (things I had only ever read about, like hominy grits). The restaurant was almost deserted and the Dean wanted to spend the minimum possible time there, departing in the middle of the main course after asking the waiter to box up his food so he could take it home with him, ratcheting up the already high level of awkwardness for those of us who had to sit it out until dessert had been consumed.

Rejecting polite offers to entertain me, I decided, in the free day before my seminar, to visit the Holocaust Museum, about which I had heard a great deal. The experience was  immersive. As you entered, you were given a card with the details of a Holocaust victim with whom you were encouraged to identify (I am using the past tense here because things may have changed in the sixteen years that have elapsed since this visit). The first spaces that greeted you gave a historical account of Hitler’s rise to power, with photographs of mass rallies, the swastikaed flags in the photographs uncannily echoing all those stars and stripes waving outside. It was made clear that the Nazis attacked socialists and trade unionists, as well as Jews but the main story was about anti-Semitism. The next section of the museum reinforced this, with a historical account of anti-Semitism in Europe and lots of artefacts showing the rich cultural heritage of European Judaism. Then you were taken, step by harrowing step, through the detail of the Holocaust – the roundings up, the transport, the conditions in the camps, the death chambers. Incidental mention was made of  non-Jewish victims (the gypsies, the gays, the mentally handicapped, the socialists) but overwhelmingly the story was about Jews, and hatred of Jews, and the unspeakable consequences of that hatred. It was all made concrete and vivid, not just through identification with the avatar-victim on one’s personal card but also by the volume of material evidence. Everyone I have ever spoken to who has visited that museum remembers the enormous pile of worn, discarded shoes, heartbreaking in its very banality. Emerging, trembling from the emotional impact of all this, you entered the final part of the permanent exhibition, intended, I suppose, to be uplifting, covering the liberation of the camps and the resistance. The last room celebrated Israel.

I came out into the glare of the Washington sunlight feeling shaken and moved. But also, confusingly, a little bit tricked. It took a lot of thought to unravel this feeling and I ended up concluding that it was the result of the slow elision of oversimplified dualistic oppositions, a slippery my-enemy’s-enemy-is-my-friend/if-you-are-not-with-me-you-are-against-me logic that, when extended, led one along a path that was too narrow, too exclusive and not quite where one intended to go. This is a logic that conflates the political and the moral and, by virtue of the power of that morality, creates a stage in which everyone must be a victim, a villain or a hero (not unlike Stephen Karpman’s victim-rescuer-persecutor ‘drama triangle’). It is a world of goodies and baddies with very little scope either for shades of grey or for personal change. The logic goes something like this: Hitler = evil; Jews = victims; Allied troops = heroes. Since Hitler was bad, Jews must be good, therefore Israel must also be good. Anybody who is against Israel must therefore be bad (like Hitler) – including Arabs. Socialists fit very awkwardly into this logic. According to the Nazi logic, they are as bad as Jews (indeed they are often assumed to be Jews, or manipulated by them) who must be stamped out, which makes them, by anti-Nazi logic, victims and/or heroes. Their historical role as opponents of anti-Semitism and racism in many European countries also renders them good. However, if they use the same reasoning that enabled them to identify dispossessed Jews as victims to recognise dispossessed Arabs as victims too, that makes them anti-Israel, which renders them bad. They become like those optical illusions of which the eye can only see one version at a time, toggling wildly between good and evil.

There is a sense in which we all want to be heroes of our own biographies, casting others as fellow victims or persecutors, allies or opponents. But in a political landscape so shot-through with moral righteousness and outrage it is extraordinarily difficult to step forward with confident conviction of one’s own heroism, especially if one is not a central protagonist in the story. Indeed, the greater one’s self-awareness and knowledge of history, the more difficult this becomes.

I had a Catholic upbringing which impressed on me the importance of the nightly ‘examination of conscience’ in which you reflected on everything you had done that day and, if any of it was bad, resolved how you would put it right tomorrow: a sort of memory-scan for shame. This kind of self-examination is of course not unique to Catholics. Variants of it can be found in the practices of psychotherapy, for example, or the consciousness-raising that went on in women’s groups in the 1970s.

Many of us, perhaps especially on the white left, are acutely aware of our own inadequacies. When it comes to racism and anti-Semitism, there are few, I suspect, who can put their hands on their hearts and proclaim themselves entirely not guilty. My generation was brought up in a culture that was profoundly racist and homophobic. Did we really never snigger at the camp gay stereotype played by John Inman in ‘Are You Being Served?’ or laugh at the jokes in ‘It Ain’t Half Hot Mum’ or ‘Love Thy Neighbour’? And how much of it washed off on us? We have had to admit that, even if we never consciously discriminated against a black person ourselves, we probably owe our relatively advantaged social positions in the British middle class at least in part to the history of slavery and imperialism. Even as we try to uncover our own hidden racism, we become more and more aware of how our society is steeped in it, how it takes myriad forms and changes over time, how difficult it is to disentangle concepts of cultural difference from those of discrimination, how complicated are the interactions between the past and the present in the formation of identities.

As with so many things in life, the more you know, the more you understand how complicated something is, and the more hesitant you may become about laying down the law to others. Yet alongside this growing comprehension of the complexity of human group inter-relationships, also comes an increasing awareness of how much unfairness and suffering and injustice there is out there. The impulse to remain silent is countered by an equally important impulse to do something about it (that’s what makes people join organisations like the Labour Party). But the interplay between these two impulses might create a sort of paralysis, or at least wrong-foot those who try to enter the public debate without having thought out their position carefully.

Justice is not a card game in which one kind of victimhood trumps another, rendering it irrelevant. We need a broader moral frame that recognises the co-existence of different forms of oppression, even the possibility that the same person, or group of people, might be simultaneously both an oppressor and a victim.

But articulating such a programme requires a degree of nuance that is beyond the binary logic of the mass media to cope with. And which of us, we might ask, has the right to propose such a programme? We seem to have arrived at a situation where non-Jewish socialists feel both unentitled to do so and held back by their very awareness of their own imperfections. I am not sure I am right about this but am wondering how much this might be the explanation for the diffidence (or perhaps even cowardice?) which non-Jews on the left feel about speaking out in the current debate. But speak out, I believe, we must. Somehow.




Posted in Britain, personal memoir, political reflection, Politics

with all best wishes for happy holidays and a peaceful 2018


Olive tree surviving the snow on my Dalston roof terrace. Signifying that peace is still possible in this grim world? I hope so. Here’s wishing a peaceful 2018 to all readers.

Posted in Greetings

Not such good work, Matthew Taylor

The long-awaited Taylor Review of Modern Working Practices is now published, under the title Good Work, and it is, I am afraid, very disappointing indeed. In terms of its concrete recommendations it goes beyond being a missed opportunity, out of kilter with its times, to posing an active threat to workers’ rights and undoing past advances.

As might be expected from a lead author who was appointed head of Tony Blair’s Number 10 Policy Unit in 2005, it is not short on spin. It speaks repeatedly of ‘enduring principles of fairness’, nods often to the idea of good work as an essential ingredient of happiness and wellbeing and claims to be focusing ‘not just on new forms of labour such as gig work but on good work in general’. Pious mission statements, such as ‘We believe work should provide us all with the opportunity to fulfil our own needs and potential in ways that suit our situations throughout our lives’ sit alongside nods to the inevitability (and benignity) of technological progress. In the classic contradictory formula of centre-left neoliberalism it manages simultaneously to say that ‘Good work is something for which Government needs to be held accountable’ and ‘The best way to achieve better work is not national regulation but responsible corporate governance’!

Why was it no surprise to discover this morning that Taylor’s co-investigator, Greg Marsh, was a former investor in that most visible of gig economy companies, Deliveroo?

Out of kilter with the time

In light of recent events, the report seems oddly old-fashioned. It is little more than six months since the Inquiry was established (in October 2016) but during that period there have been unprecedented developments on the ground, with an upsurge in organising by casual workers in the UK (and elsewhere). New trade union organisations, such as the UPHD (United Private Hire Drivers) and the IWGB (Independent Workers’ Union of Great Britain), have sprung up to represent drivers for platforms like Uber and delivery workers for companies like Deliveroo, as well as casualised workers in other sectors, such as outsourced cleaning workers, porters and foster carers. A series of test cases brought by these organisations, sometimes with the support of traditional trade unions like the GMB, have established in case after case that workers for companies like City Sprint, Uber and Pimlico Plumbers are not the ‘independent contractors’ these companies claimed they were but ‘workers’, entitled to such rights as the minimum wage and paid holidays. As a result of these, and other well-publicised cases of exploitation of low-wage workers, such as Sports Direct, there has been a sea-change in public attitudes to fairness at work, evidenced by the popularity of the demand for an end to zero-hour contracts in the Labour Party Manifesto.

The British public seems, at last, to have seen beyond the rhetoric that elides what is ‘flexible’ for the employer (in the form of a just-in-time workforce, waiting to be summoned at short notice by an app) with the older demands raised fifty years ago by the Women’s Movement for a ‘flexibility’ that responds to the unpredictable demands of family. Having lived it in their own lives, or watched their kids do so, most people now see only too well that being available on demand makes it very hard indeed to manage your own life, especially when childcare is involved. But the report shows no awareness that workers and employers may have different interests, merely stating vacuously that ‘Encouraging flexible work is good for everyone and has been shown to have a positive impact on productivity, worker retention and quality of work’.

While public opinion seems to have been saying ‘enough is enough’, the court judgements have been saying, in the words of Jason Moyer-Lee, General Secretary of the IWGB, ‘“gig workers” already have rights – all we need to do is enforce them’.

A rational response to this situation – the opportunity that this report misses – would take the existing principles as a starting point and work to ensure that there are clear guidelines for their implementation, putting the onus of proof not onto vulnerable workers but onto those who dictate their working conditions and profit from their services. But this is very far from the Taylor approach.

Missed opportunity

The report quite rightly recognises that the employment status of casual workers is confusing and poorly understood. This is partly because it is dealt with separately under the tax system and in employment law. Under the tax system, unless you have some other legal status such as being a limited company or a partnership, you are either an employee or self-employed. Many workers living hand-to-mouth think it is preferable to be self-employed because that way they can defer the payment of income tax and set expenses against it. Under employment law, being an employee brings a range of rights and protections, including such things as maternity and paternity pay, sick pay, parental leave and pensions coverage. These are probably worth much more to most workers in real terms than whatever tax savings they make by being self-employed, but of course can only be claimed if your employer actually agrees that you are indeed an employee and fulfils his or her part of the bargain. There are, however, some rights guaranteed under employment law to all workers, regardless of whether they are formally classed as employees. These include the right to the minimum wage and to paid holidays.

The difficulty of establishing employee status is not new. Back in the 1970s and 1980s, when I was doing research on homeworking, this issue came up again and again. Frightened women, unaware of their rights, were told firmly that they were not employees and so accepted that they had no rights, often believing – usually wrongly – that what they were doing was not quite legal and that if found out they would become liable for tax or national insurance payments and fined for being in breach of health and safety or tenancy regulations. The law then had no single test for being ‘genuinely self-employed’. Tribunals or courts were supposed to weigh up a lot of different factors, such as who determined what work should be done and what should be paid for it, whether or not the worker had the right to employ somebody else to do it, how continuous it was, who paid for the materials and so on. Little has changed since then, although the case law has moved on. The most crucial principle is whether a relationship of subordination can be said to apply.

In the case of most platform companies, there is little doubt that the workers are indeed subordinate. Although practices vary from company to company, workers are usually told precisely what to do, with each ‘task’ well defined and costed. Not only are their pay and work process laid down, there are also typically detailed rules about quality standards to be met. While there may be some limited right to turn a few jobs down, there are usually strong penalties for doing so repeatedly. They do not have the right to pass the work on to others. And in some cases (Deliveroo being a case in point) they are even required to wear uniforms or sport company logos.

The report could have laid out clear guidelines for defining genuine self-employment and spelled out the obligations of employers of subordinate workers. But what it has done instead is muddied the waters still further by proposing exceptions to the existing principles which could be detrimental not only to workers who are currently working casually but also to other workers, including those currently defined as employees.

 How could its recommendations make matters worse?

  1. Establishing a new intermediate kind of employment status – the ‘dependent contractor’

The report proposes setting up ‘an intermediate category covering casual, independent relationships, with a more limited set of key employment rights applying’. Although this approach has been rightly resisted by British legislators in the past, it is not a particularly original response. Indeed, it is something of a knee-jerk reaction by neo-liberal ‘modernisers’ to the development of new forms of work. It was, for example, strongly promoted in Europe in the 1980s and 1990s (for example by the Belgian labour lawyer Roger Blanpain) as a way of encouraging teleworking without bringing it completely within the scope of existing employment protection laws. Italy provides a particularly extreme example of the ways in which different forms of ‘parasubordinate’ status and sub-categories of self-employment have been created to cover workers, such as call centre workers, who fall outside traditional sectoral agreements and regulatory categories. The overwhelming evidence is that when such new kinds of status are established they do not just result in reduced coverage for the ‘new’ kinds of workers who fall under them but, even more importantly, are then extended across the workforce to bring other more traditional forms within their scope, resulting in a worsening of conditions across the board. In other words, what they do is provide employers with a new tool for casualisation and the erosion of existing rights, whatever well-intentioned language is used that purports to prevent this.

  2. Undermining the minimum wage

The report also proposes a change in the way that the National Minimum Wage (NMW) is applied: ‘In re-defining ‘dependent contractor’ status, Government should adapt the piece rates legislation to ensure those working in the gig economy are still able to enjoy maximum flexibility whilst also being able to earn the NMW’. What it proposes is complex, and difficult to summarise here. At the headline level it looks like a proposal to increase the NMW by a modest amount for workers with the proposed new ‘dependent contractor’ status. However, the report also wags a stern finger at those who think that workers should be paid for all the time they spend waiting for jobs to come up, which is, they say, unreasonable and open to abuse. Given that many workers in the gig economy spend half their time or more logged on in the hope of work that does not arrive, this could in practice lead to a fall in the time eligible for payment.

There is more in the report. I have only scratched the surface here. But am about to board a flight for China so will postpone further discussion for another day.

Posted in Britain, Labour in the 21st century

and more on the future of work

In the new spirit of reblogging here things I have already posted elsewhere, here is a piece that appeared today on the LSE blog at

(their headline not mine).

Future of Work: taking the blinkers off to see new possibilities

Anybody relying for their information on the current headlines would find it hard to make sense of what is happening in the labour market. On the one hand, the news media are awash with apocalyptic forecasts, often backed up by studies from reputable organisations such as the US National Bureau of Economic Research, the Oxford Martin School or the Bruegel think tank, that robots, machine learning, drones, 3D printers, driverless cars and other applications of Artificial Intelligence are going to eliminate very large numbers of jobs, not just in manufacturing but also in service industries, ranging from low-skill tasks like picking and packing in warehouses and home delivery right up to high-skill professional tasks like legal research or stockbroking.

On the other hand, the employment rate in the UK is at an all-time high of 74.6 per cent, with the unemployment rate, which averaged over 7 per cent from 1971 to 2016, having fallen to just 4.7 per cent in January 2017.

So, are we facing mass unemployment or not? Here we are, nearly a decade after a major financial crisis that led to job losses, austerity and waves of corporate restructuring including bankruptcies, mergers and acquisitions, seeing the emergence of new winners, with new business models and the birth of new industries, with new technological applications playing a key role. If we take a broad historical view, this is actually quite a familiar story.

We could look, for example, at the development of new industries based on the spread of electrical power and mass entertainment after the 1929 crash, or of computerisation after the 1973 energy crisis, or the explosive growth of the Internet in the decade after the infamous 1987 Black Monday. Each of these technologies was also, of course, instrumental in displacing large numbers of jobs in older industries. And with each wave, livelihoods were irrevocably damaged, because the new jobs were not created in the same areas, or for the same people, as the old ones.

The elderly look on in amazement at the desirable new labour-saving appliances their grandchildren buy, remembering the back-breaking drudgery of the old methods. But for every gleaming new factory in one part of the world, there are piles of rusting machinery in others, along with devastated lives and communities. Such ‘creative destruction’, as Schumpeter called it, is, surely, part and parcel of capitalism as usual.

So why, in the second decade of the 21st century, are so many commentators, on so many different parts of the political spectrum, convinced that this time things will be different: that we are, in Paul Mason’s phrase, moving into a period that could be described as ‘postcapitalist’?

Part of the explanation might lie in the way that capitalism is often seen, especially by the young, as a single, monolithic system that embraces all aspects of life. Perhaps a more useful way of understanding it is as a somewhat messy assemblage of different capitalists competing with each other, scrabbling for market share, experimenting with new business models and often failing. In times of crisis, when many are going to the wall, technologies (including some that have been around for a while) may be seized on, not as part of an orchestrated general plan, but in much more piecemeal ways, by particular firms looking for means to restore profitability: to reduce labour costs, develop new products or services or enter new markets.

Obvious first targets for automation are processes where labour costs are high, usually because they require scarce skills or workers are well organised. So it is not surprising that skilled print workers were first in the firing line for digitisation, or auto factories for robots. The first companies to introduce innovations can make a killing – getting ahead of their competitors with a step change in increased productivity.

But such advantages do not last long. Once the technology is generally available, it is open to any competitor to buy it at the lowest market price and copy these production methods. A race to the bottom is started, which can only be sidestepped by firms that continue to innovate. It is fanciful to imagine that it would be possible to populate the world’s factories with 2017 state-of-the-art robots and then just leave them to get on with production. Leaving aside the question of how these robots are to be assembled and maintained, there is no conceivable business model that would make this profitable over any sustained period of time.

A much more likely scenario is that vast new industries will grow up to manufacture these new means of production which, like today’s laptops and mobile phones, will rapidly become obsolete and need replacing. These industries will also give birth to new service jobs, involved in their design, distribution, maintenance and in dealing with the unintended consequences of their widespread adoption (such as cyber-crime and new safety hazards).

Current technologies do not just create new kinds of jobs, they also change the way work is organised, managed and controlled. My research has shown that 2.5 per cent of workers already get more than half their income from online platforms. These new organisational models do not just change the way existing jobs are managed but also bring new areas of economic activity within the direct orbit of capitalism, for instance by drawing into the formal economy the kinds of cash-in-hand work done by window-cleaners, dog-walkers, baby-sitters or gardeners. They may not be jobs in the traditional sense, but they are work, with the potential to be organised differently in the future, that can form the basis of profitable new industries.

Another factor that blinkers thinking about the future of work is a failure to see beyond the boundaries of the existing industrial structure and imagine where other new industries will emerge from. Whether it’s the DNA of plants, the human needs for entertainment, sociality and health or outer space, the universe is full of new opportunities for commodification. The question is, can the planet sustain them?

Posted in commodification of knowledge work, Cybertariat, Labour in the 21st century, Political theory, The world | Leave a comment

The future of work

And here is the original version of a blog post that was published on January 25th in the Huffington Post, a shorter version of an article published in German in Volksstimme.


Each time there is a wave of technological change, similar questions are raised about the future of work. Pessimists fear that robots will take all the jobs, leading to mass unemployment and a population too poor to buy the products of the new automated factories. Meanwhile optimists hold out seductive visions of a world with leisure and plenty for all, where automation frees us from routine chores, so everybody can release their creativity.

The pessimist view comes easily to victims of change. If your income depended on looking after horses then you would have seen the coming of the automobile in the early 20th century as a direct threat. Even if you had a crystal ball that enabled you to see how many jobs would be created in the auto industry in the future, you might have still thought: ‘So what? How does that help my family?’

History shows us that each new wave of technological innovation both destroys and creates jobs. The trouble is that the new ones tend to be created for different people, in different parts of the world, and under very different working conditions from the old ones. The job of an assembly-line worker in Detroit in the 1920s was very different from that of a rural stable-hand in Somerset, just as work in a washing-machine factory was different from that of a laundry-maid.

New machines may eliminate some old jobs but new ones are needed to mine the raw materials, make and assemble the components, and maintain them, as well as designing the next generation of robots, drones or 3-D printers. But there are also unintended consequences. Who would have guessed that some of the earliest adopters of mobile phones would be drug dealers, fraudsters and pimps, that one of the earliest commercial uses of the Internet would be for pornography, or that drones would be used for smuggling contraband into prisons? As society struggles to keep up with these new forms of technology-enabled crime, more new jobs are created – to deal with cyber-fraud, remove unwanted content from social media sites and other functions our grandparents would never have dreamed of.

The pessimists may be right about the disappearance of many familiar jobs, but they are surely wrong to speculate that robots will bring permanent mass unemployment. Might the optimists, then, be right in thinking that artificial intelligence can take the back-breaking toil out of mundane tasks, releasing time for more satisfying activities?

The evidence suggests that technology is failing to deliver these benefits: it is being used not to shorten the working day but to lengthen it, with expectations of round-the-clock availability.

Neither are machines taking over the boring and repetitive activities, leaving the more creative and satisfying ones for human beings to carry out. Often it is cheaper to use human labour for the most mundane tasks, as evidenced by the growth of online platforms like Amazon Mechanical Turk and Clickworker that enable a dispersed human workforce to carry out micro-tasks deemed not worth automating, such as labelling colours, verifying fuzzily-scanned numbers or clicking ‘like’ on corporate websites.

Human labour is also used in warehouses, with workers instructed via headsets where to run, with every action timed and monitored. A visitor from another planet watching them at work might think that humans are servants of the technology, rather than technology serving the people.

Being monitored and paced digitally is not unique to manual workers or casual ‘click workers’. Nurses, teachers, truck drivers and software developers are just a few of the workers who have to work to numerical ‘performance targets’ and log their working time using online ‘apps’.

How is it that apparently liberating technologies seem to enslave workers ever more tightly to the demands and rhythms of the global economy? We must ask who is developing them and for what purpose.

The corporations that dominate that global economy have somewhat contradictory needs. They need a stream of new ideas to help them stay one step ahead – and to provide these ideas they need bright, motivated, well-educated creative workers. But once these ideas have been implemented, then the best way to stay competitive is to cut costs to the bone, minimise responsibilities to a permanent workforce and find workers who can be deployed efficiently to provide only the tasks that are needed.

Digital technologies make it ever easier to manage these ‘just-in-time’ processes. But the flexibility they offer is all too often just for the employers. For workers, it may mean being unable to plan ahead because you never know when that smartphone will ping, summoning you to the next task.

Are we entering an era when the majority of workers will be ‘on call’ in this way? Or is there still time to harness the new technologies for the benefit of people rather than profit?

Posted in Uncategorized | Leave a comment

Universal basic income and women’s liberation

Here is a blog post I wrote for Compass, originally published on their site on January 13th, 2017.


From 1971 to 1978, the UK women’s liberation movement held ten national conferences at which it formally adopted a total of seven key demands. The fifth of these demands, added in 1974, was for financial and legal independence for women, accepted with widespread support across all wings of the movement. It is indeed difficult to imagine a form of feminism which does not, in a money-based society, insist that women have their own means of financial support as a way of avoiding being trapped by economic dependence in coercive, perhaps abusive, relationships.

It is therefore perhaps ironic that the question of how this financial independence should be achieved played a role in the bitter disputes within the movement that led to the decision that the 1978 conference would be the last. Although it was by no means the only reason for the split, the espousal by many radical feminists of the demand for ‘wages for housework’ (developed in 1972 by the International Feminist Collective, which included Selma James, Mariarosa Dalla Costa, Brigitte Galtier and Silvia Federici) was for many socialist feminists the last straw. Housework, they said, should not be institutionalised as women’s responsibility. It should be shared equally with men or, better, socialised, in the form of state-provided nurseries, laundries and canteens. And what would happen if women refused to do the housework for which they were paid? Would the husband, the father, or the state, taking on the role of the employer, discipline them and decide that they should not be paid? Meanwhile radical feminists argued that reproductive work was for the benefit of all society and, since women did most of it, they should be rewarded for this. Why should they be forced into the labour market just for economic survival, when this important caring work took up so much of their time?

More than forty years later, some of the wounds inflicted in those debates still fester. But could it be that the demand for a universal basic income might be a way of healing them? Instead of presenting women with a choice between, on the one hand, aiming for full participation in paid work supported by public (or market) services and, on the other, an income for staying at home and taking responsibility for caring, could it, perhaps, offer them a basis for greater choice and autonomy, substituting a form of ‘both/and’ for a false ‘either/or’ dichotomy?

In order to do so, several important preconditions need to be in place. First, it is crucial that the basic income should be provided not just to women, or, more specifically, to people carrying out reproductive labour (parents or carers) but to everybody, regardless of gender or social status. Second, it should be provided as a right of citizenship or residency and not as a reward for carrying out work that would otherwise be unpaid. And third, it should not be seen as a substitute for the provision of public services.

With all these conditions in place, a universal basic income could become a means of offering both women and men freedom to choose how to divide their time between reproductive work, paid work in the labour market and other activities, and decide what proportion of their income to spend on buying in services rather than providing them in kind, in the knowledge that the welfare state is available as a safety net when things go wrong.

It could also go some way towards addressing the less obvious but more deeply corrosive personal effects of economic dependence: the way that it can lead to ‘breadwinners’ feeling trapped in their roles and resentful of their dependents’ apparent freedom from the constraints of the employer’s clock, as well as leaving their dependents struggling with guilt, obligations to show gratitude or feelings of pressure to engage in coercive sex, or even to put up with violence or abuse to keep a roof over their heads and those of their children.

A universal basic income might, in other words, actually lead to better relationships as well as a more equitable society, providing a genuine basis for liberation.

Forty years after the demand for financial independence was first raised, have we at least reached a moment when mass support might be available for actually achieving it?

Posted in gender, political reflection, Politics, Uncategorized | Tagged , , , , , | Leave a comment

The key criticisms of basic income, and how to overcome them

I quite often write blogs for sites other than my own. It has been suggested to me that I should post them here too, to make life easier for followers who like to see things in one place, so here is one that was published on the Open Democracy website on 14th December.


How can a universal basic income be made compatible with socialist principles and avoid inadvertently furthering a neoliberal agenda?

More than one in five UK workers, over seven million people, are now in precarious employment according to this analysis of official figures by John Philpott. Since 2006, the number of people on zero-hours contracts has grown by three-quarters of a million, and over 200,000 more are working on temporary contracts. My own recent research has found that some two and a half million adults in the UK may be working for online platforms like Uber, Taskrabbit or Upwork at least once a month, with about 1.2 million people earning more than half their income from this kind of work. A growing proportion of the population is piecing together an income from multiple sources, in many cases making even the concept of a fixed occupation anomalous.

Large numbers of workers do not know, from one day – or even hour – to the next if and when they will next be working. Yet we still have an anachronistic benefit system based on the principle that any fit adult (and, under the current regime, many who are less than fit) must either be ‘in work’ or ‘seeking work’. The old Beveridgean welfare state model is, in short, bust. What is left of the old welfare safety net is fundamentally incompatible with a globalised just-in-time labour market in which workers are increasingly paid by the task.

The victims of these incompatibilities are among the most vulnerable in our society – forced to take any work that is going but often unable to claim benefit when none is available. They are caught between the rock of harsh sanctions regimes and the hard place of capricious and unreliable employers, often with no dependable source of income whatsoever. And the numbers of these people missed by the safety net keep growing. The use of food banks has increased more than forty-fold since 2008, the estimated number of rough sleepers has risen by 55% since 2010 and the number of children in poverty rose from 3.7 million in 2014-2015 to 3.9 million a year later – an increase of 200,000 in just one year. Something is clearly terribly wrong and the increasingly urgent question is how to fix it.

This is part of the problem to which, for an expanding range of analysts, the concept of a universal basic income (UBI) now presents itself as a solution. UBI is not only promoted as a way to update the benefit system to bring it into line with new labour market realities. It is also seen as a way to reward carers and others who carry out unpaid reproductive work in the home, to support artists, enable lifelong learning or give more autonomy to disabled people. This once-marginal idea is now seriously espoused in the UK by the Green Party, the Scottish National Party, some trade unions, sections of the Labour and Liberal Democrat parties, and Plaid Cymru. Further afield it is also actively promoted (including through experimental schemes) in Finland, the Netherlands, India, South Africa and, at the neoliberal end of the spectrum, by high-tech entrepreneurs in Silicon Valley.

At the headline level, indeed, UBI can seem to represent some sort of magic bullet that will solve all these problems simultaneously, and is often promoted as such. But a closer examination of the various models proposed reveals considerable differences between them. If these are not recognised, attempts to operationalise it could lead at best to risks of unintended consequences and at worst to deep political fissures that could even exacerbate some of the problems UBI is intended to address. Most attempts to model how UBI could be implemented in practice in the UK (for example by Howard Reed and Stewart Lansley, Malcolm Torry and Gareth Morgan) have looked at it in what might be called a policy-neutral context, in which all other features of the economy and the tax system remain unaltered. But of course the reality is that any change in government policy that could lead to the introduction of UBI would be part of a much broader political upheaval that would transform many of these other features. Abstracting UBI from its broader setting in this way makes it harder to see such potential hazards.

For people who believe that the world’s sixth largest economy should be able to protect its citizens from penury, and are committed to (re)developing a welfare state that reduces social inequality and enhances choice and opportunity for its citizens, perhaps the time has now come for a serious debate, not just about the pros and cons of UBI in the abstract, but about which other policies it should be linked with to ensure that these objectives are met. This involves grappling with some difficult questions. Here I look at four of the risks that could arise if a UBI is introduced without such policy safeguards.

The risk of driving down wages

In the abstract, the relationship between a UBI and wage levels can be argued to be either positive or negative. Some argue, quite plausibly, that a guaranteed minimum income would enable people to be much choosier about which jobs they accept, giving them options to turn down really exploitative wage rates and perhaps even providing them with the equivalent of strike pay to enable them to negotiate more effectively with employers without their dependents suffering.

An alternative view draws on the experience of tax credits (and now, universal credit) to point out that providing an income top-up is, in effect, a subsidy to employers who pay below-subsistence wages. In 2015-2016, this subsidy was estimated at about £30 billion. Had this been paid out by employers as part of their wage bill then this would also have led to an increase in national insurance and tax revenues. These credits therefore represent a factor which, whether inadvertently or not, increases inequalities between those who rely on their wages for their livelihood and those who derive their incomes, directly or indirectly, from corporate profits.

If a UBI is not to exacerbate this state of affairs, it is imperative that it is linked to a high minimum wage and one, moreover, that can be linked to systems where workers are paid by the task, not just to hourly rates.

The risk of undermining collective bargaining for employer-provided benefits

An important argument against UBI comes from social democratic parties and trade unions, especially in parts of continental Europe with a strong tradition of sector-level bargaining, who argue that its introduction would undermine their efforts to make employers pay into schemes that provide negotiated benefits, such as pensions, health insurance or childcare. A UBI provided by the state would, they contend, shift the burden of paying for it from employers to the general taxpayer. As Richard Murphy has shown, ‘the poorest 20% of households in the UK have both the highest overall tax burden of any quintile and the highest VAT burden’. This shift would therefore exacerbate inequalities, rather than reducing them, at a societal level.

To avoid this risk, it is therefore important that the introduction of UBI should be accompanied by measures that support trade unions’ abilities to bargain with employers at company and sector levels for benefits for their members, by protection for existing company pensions schemes and by other measures that ensure that employers continue to contribute their share of the cost, for instance through employers’ contributions to National Insurance.

The risk of undermining collectively-provided public services

By giving everyone cash, neoliberal models of UBI play along with the grain of an increasingly marketised economy in which services are individually purchased from private providers. There is therefore a risk that UBI could become a sort of glorified voucher system, undermining collectively provided public services that are designed by bodies democratically answerable to the communities they serve, under the guise of offering individual choice. Quite apart from the considerable risks that this poses to democracy, social cohesion and the quality of services, this could disadvantage individuals with special needs who require more expensive and/or specialised services than the average, exacerbating inequalities even while purporting to offer everybody the same.

It is therefore imperative that the introduction of a UBI should be embedded with policies that protect the scope and quality of public services and their collective and universal character.

The risk of creating racist definitions of citizenship

If a UBI is defined as a right of citizenship, then this raises the question of entitlement: who is, or is not, a citizen? And on what basis is their right to UBI established? A final serious risk associated with the introduction of UBI is that it could become linked to a narrow definition of citizenship from which some people (for example refugees, asylum-seekers or residents who do not hold UK passports) are excluded. In addition to the support this could give to racism and xenophobia, it could also lead to a two-tier labour market in which people who are not entitled to UBI become an exploited underclass.

The introduction of UBI must therefore be integrated with humane and well-thought-out policies on immigration and citizenship, perhaps by linking entitlement to the place of residence, rather than nationality.


I have highlighted here what I see as four major challenges that need to be confronted if UBI is to be introduced as a genuinely progressive initiative that can restore some dignity and security to the most vulnerable members of our society, enable a flexible labour market to function in ways that avoid exploitation while encouraging entrepreneurship and creativity and reduce social inequality. In doing so, I do not wish to pour cold water on the very idea. On the contrary, I think that, at this moment in history, it is crucially important – so important that what is needed now is a debate, not about the abstract idea of a UBI, but about how it could be introduced in the real world in a way that is genuinely compatible with social-democratic and feminist ideals and starts to rebuild the train-wreck that is currently all we have left of the 20th century welfare state that so many people worked so hard to create.

Posted in Britain, Cybertariat, political reflection, Politics | Tagged , , , , , , , , | Leave a comment

Not in a shy way

It was entirely predictable that Trump’s first dance as president of the United States would be performed (with some cartoonish mouthing of the words) to the tune of ‘My Way’, playing out in a manner beyond irony the triumph of braggadocio in 21st century public life.

It is hard for anyone with any degree of self-awareness to believe that this is entirely serious. Surely, we think, that degree of ostentatious and clichéd vulgarity must be enacted with a tongue lodged firmly somewhere in a jowly cheek: two tiny fingers raised to the good taste of those who manage the world; the jester releasing his evil-smelling trump (in the colloquial British sense of the word) in the deodorised boudoir of the establishment.

Then comes the awful realisation that this is absolutely for real. The foot-stamping toddler really does want his own way. The occupant of the gilded throne-room really does believe he has a right to rule and annihilate what stands in his path.

What has happened to the world in which modesty is a virtue, lights are hidden under bushels and, whatever you’ve got, it’s unladylike to flaunt it? Even to ask such questions, for someone on the left, is difficult. It puts us on the side of gentility, privilege, convention. It aligns us with that very establishment we thought we were critiquing. And it makes us vulnerable to accusations of snobbery – of being, Heaven help us, ‘North London intellectuals’, deploring the vulgarity of the working class (to a soundtrack of classical music) even while we purport to be placing its interests first.

This conflicted relationship to popular culture is, perhaps, one of the factors that has contributed most to the intellectual paralysis that seems to have overtaken many on the British left in the aftermath of the Brexit vote. Vulnerable to accusations of elitism, many are uncomfortable talking about the cultural pleasures of cosmopolitan connectedness. They would rather parade a connoisseurship of punk music than of Baroque ceilings, of real ale than of wine, just as it is easier to write a PhD on Eastenders than on Jane Austen if you want to keep your socialist credentials. While some are happy to subject aspects of popular culture to detailed deconstruction (often in impenetrable language), others are afraid of losing touch, or seeming pretentious, anxiously submerging themselves in activities that reconnect them with their roots, from football to rock and roll. But even such immersion can be accompanied, as the late, lamented, Mark Fisher described so eloquently, by a haunting sense of inauthenticity – of being a fraud who has ‘somehow faked his way through’.

In these days of social media, there is perhaps, no innocence left when it comes to the experience of culture: no experience that is unmediated by the thought – even if resisted – of how it can be captured, reproduced, tweeted, misrepresented, mashed up. In a representational world in which just about everything can be both aligned and opposed to just about everything else, the logic of ‘my enemy’s enemy is my friend’ comes adrift. This makes even the sense of belonging ambivalent, and fraught with risk.

It may no longer be possible to recreate the kinds of social spaces that were available for earlier generations of critical misfits to occupy – the Bohemias (whether in the form of physical districts or literary circles) where intercourse took place between artistic, political and sexual transgression and it was equally OK to criticise the ruling class and consumerism. But perhaps new ones will emerge. In the meanwhile, if we want to communicate beyond our own small circles we have to shout, over the cacophony of social media in which everybody else is doing the same, in the hope that somewhere out there will be another voice that responds to ours, in doing so breaking all those taboos against showing off and opening ourselves up to the accusation of not listening properly to others. We have, in short, to engage in precisely the sort of trumpet-blowing our democratic instincts (not to mention our desire to be liked) warn us against.

The question facing us is how to emerge from this paralysis and start moving again. This requires not only putting weight on limbs we may not entirely trust (and letting go with others) but also deciding whose hand to hold and in what direction to move: to find a way of substituting ‘our’ way for ‘my’ way. And even, maybe, finding some way to do it in a shy way.

Posted in Britain, political reflection | Tagged , | Leave a comment

Best wishes for 2017

2016 was a year in which the world turned upside down in so many ways. And nothing seems to sum it up better than this shop window display I spotted this morning in Belgrade with its astonishing juxtaposition of icons.  Hoping 2017 will be better for us all. Onward and upward!

Posted in Greetings, things visual, Uncategorized | Tagged | 1 Comment

How global IT companies screw up your daily life – another example


I have been seriously thinking for the last six weeks or so that I am developing dementia, after repeatedly finding that entries I had made in the diary feature on my iphone (on which I have relied for years) were appearing on the wrong day. I now discover that this is caused by a horrible redesign – made with no warning to users whatsoever. Before the last (unasked for) upgrade if you were trying to fix an appointment you could see (in calendar mode) which days did – or did not – have some activity in them. You could then click on any given day to see what appointment was already there (suppressing the minor annoyance that Apple might have chosen to mark something like St Andrew’s Day, or Valentine’s Day and that it was in fact free even when it didn’t look like it) or you could add a new appointment. The software, in other words, took you straight from the month view to the day view via a click on the date. There used also to be an intervening week view that showed each day consecutively so you could see details of what was on for each day. Since the last upgrade they have introduced a quite different intervening view that does not list all the days consecutively but lists every diary entry. If there is more than one thing on any given day, each item is given its own entry, but if there are days with nothing entered it simply skips them. I thought this was just a visual change but now realise that the functionality has also completely altered.
Yesterday I was trying to make an appointment in January. I looked at the month view, found that there was nothing on from the 11th to the 15th, and clicked on the 11th to add the new appointment. But the software didn’t take me to the 11th – the page it opened was the 16th – the first day on which I had another appointment already entered. The only way to add the new appointment was to enter the new date manually as a changed start time. It has clearly been doing this ever since the last upgrade, which explains why at least four appointments I have made in the last month have ended up appearing on the wrong dates. There are many more set for the future and I can see that I am going to have to go through them all, checking each one to make sure it is entered for the right date. Hours of my time wasted, all because some little geek working for Apple (probably in dreadful conditions in Bangalore) didn’t think this thing through, and nobody bothered to offer customers a choice. This same upgrade, I may say, also unilaterally took it upon itself to assume that an appointment I made in Toronto needed to be adjusted by 5 hours to bring it into line with UK time – resulting in another huge diary disruption.
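For the technically curious, that five-hour shift comes down to whether software stores an appointment as an absolute instant in time or as ‘floating’ local time. A minimal sketch of the difference in Python (the date, times and zones here are invented for illustration, not taken from my actual diary):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

toronto = ZoneInfo("America/Toronto")
london = ZoneInfo("Europe/London")

# An appointment jotted down at 14:00 local time while in Toronto.
appt = datetime(2017, 1, 20, 14, 0, tzinfo=toronto)

# A calendar that stores the absolute instant will redisplay it in the
# viewer's current zone: 19:00 back in London, five hours 'later'.
print(appt.astimezone(london).strftime("%H:%M"))  # 19:00

# A 'floating' appointment drops the zone entirely, so it reads as
# 14:00 wherever in the world it is viewed.
floating = appt.replace(tzinfo=None)
print(floating.strftime("%H:%M"))  # 14:00
```

Neither convention is wrong in itself; the disruption comes when software silently picks one on the user’s behalf.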
I could manage my diary just fine on a Nokia communicator 20 years ago. But now we are in an era where our every labour process, paid or unpaid, is determined by these global corporations. An activity as simple as jotting a note in a diary electronically, rather than on paper, now involves effectively filling in a form. And this form is not designed to enable independent individuals to manage their lives autonomously but to facilitate corporate control of time management and maximise rental incomes to software companies, telecommunications suppliers and their ilk.
In the last four or five years I have been struck by the spread of those practices whereby messages are sent directly to your diary by other people using Outlook. An alert will suddenly pop up asking you to accept or reject a meeting request from someone you may or may not know. At first these came from other people in the university I work for, and were, I assumed, linked to the fact that we were all on the same email system, but now they come from all directions – neighbours, people I have agreed to do talks for, and even, the other day, somebody inviting me to a party that way. Intrusion into other people’s time management has been appified and normalised. If you fail to ‘accept’ or ‘reject’ or, worse, fumblingly press the wrong button, which has intercepted your urgent attempt to do something else, there will be social consequences, as well as potential financial ones (like those that occur when you do not realise that, lurking in a website from which you have purchased something, there is a hidden area where you are supposed to deactivate automatic renewal).
Last night I spoke at a book launch in Oxford for this remarkable book by Bob Hughes, and the audience discussion turned to the question of what to do about it (‘it’ being the toxic effects of technology more generally). Two ‘solutions’ stand out as the most obvious.
The first of these is to resist the new technology and go back to the old. In this particular case this would mean going back to lugging around a heavy address book and diary and pen wherever I go. With my low haemoglobin and bad shoulder this would be an increasingly painful solution, as well as doing little to reduce the world’s consumption of paper. It would additionally, in these days when arrangements are made by text and email, require a lot of cross-referencing with other sources of information. There is also the reality that my handwriting is not the most legible, and a note made, for example, on a moving bus is liable to be open to several alternative interpretations. And there is the ever-increasing risk of physical loss or damage: absent-mindedly leaving it behind somewhere, having the bag stolen, or spilling coffee over it.
The second ‘solution’ – the one that, over the years, I have heard proposed by more (usually young and male) techies than I could count – is to develop alternative applications, using open source software. This means having to invest a huge amount of personal time and effort (unpaid of course) in learning how to use this software and, if you are not a denizen of any hackerspace, simply swapping dependence on one lot of techies (poorly paid by global corporations) for another (apparently working for free but actually, of course, with their time subsidised by rich parents or spouses, day jobs paid for by others or some form of rent or taxpayer subsidy).
In the here and now neither of these is an attractive option for me. So I guess that, until the workers of the world unite to build a better society, I am just going to have to grit my teeth and keep learning the new codes and filling in the forms and installing the new apps at the diktat of these global corporations, rendered dumber (and angrier) by the day by their Taylorisation of my daily life.
Posted in Autobiography, Labour in the 21st century | 4 Comments