London mob, where are you?

From Charles I to Margaret Thatcher, when British governments have got above themselves and tried to do what is contrary to most public opinion, they have been brought to their senses by the same powerful force: the London mob.

Taking to the streets is the last weapon available to an infuriated populace – visible as I write on the streets of Hong Kong and Puerto Rico and in the past in so many other places: the 2011 Arab Spring; the 1989 ‘Autumn of Nations’; and of course the 1848 ‘Spring of Nations’ referenced in that name.

Yet here we are, faced with an unrepresentative British government with less legitimacy than any in living memory, making decisions without a popular mandate that have the potential to affect many more people, much more seriously, than Thatcher’s poll tax – and there is no mob to be seen on the streets of London.

There have been some polite demonstrations against a no-deal Brexit, which have been boycotted by many on the pro-Brexit left on the grounds that they are middle-class, neo-liberal and Blairite. There have also been some demonstrations calling for a general election but, so deep is the rift among former Labour Party supporters, in a kind of mirroring, many on the anti-Brexit left have failed to support them because they are ‘too cross with Jeremy Corbyn’. This polarisation on the left has been accentuated by the unpleasantly engineered charges of antisemitism, in which the BBC has played a disgraceful role – matched only in political irresponsibility by the way it has conferred the ‘oxygen of publicity’ on the likes of Nigel Farage, Ann Widdecombe and Boris Johnson over the years, presenting them to the British public as entertaining eccentrics rather than the dangerous threats to democracy they actually constitute.

One of the most depressing aspects of these divisions is the claim by both sides in the debate on the left that they speak for the working class. If we look at voting patterns in the 2016 referendum it is clear that the urban population (where most of the working class, especially its black and ethnic minority members, resides) was largely in favour of remaining in the EU. The average majority for ‘remain’ was 55.2% in the 30 largest cities. All the largest UK cities (apart from Birmingham) – London, Manchester, Liverpool, Glasgow, Edinburgh, Newcastle, Bristol, Cardiff, Belfast – voted with substantial margins to remain in the EU. Other than in Birmingham and nearby Coventry and Wolverhampton, the urban ‘leave’ majority was concentrated in what Americans call ‘rustbelt’ areas – where local populations have been hit hard by deindustrialisation, such as Sheffield, Bradford, Wakefield, Sunderland, Nottingham, Derby, Stoke-on-Trent and Hull. Sadly these areas are likely to be among those hit hardest by any no-deal Brexit.

Surely, a future historian might think, it is precisely in these large cities that one might expect people to take to the streets. Look at Manchester, with its brave tradition of popular protest going back to Peterloo in 1819. Or London, where in 1641 the London apprentices and their supporters took to the streets to prevent the bishops from entering the Houses of Parliament to thwart Charles I, and the Gordon rioters shook the establishment to its core in 1780. Not to mention the 1990 poll tax riots that are often credited with bringing Thatcher down. But so far, no sign.

What can explain this? Perhaps it has something to do with race and racism? Certainly a reaction to racism played a major part in triggering some of the most recent urban rioting, for example in London’s Brixton, Liverpool’s Toxteth and Leeds’s Chapeltown in the summer of 1981, and the so-called London riots of August 2011. But racism has, if anything, intensified considerably in recent years, fuelled by May’s ‘hostile environment for immigrants’ regime in the Home Office and the sense of entitlement of far-right racist parties in the aftermath of the Brexit vote. Have people become too afraid to protest? Or lost faith in the solidarity of the white left? Have they succumbed to the kind of social paralysis that affects people with long-term depression? Or are they simply too busy scratching a living in the precarious gig economy to be able to take time off for the comparative luxury of political self-expression?

Of course I do not want to place on any members of the urban proletariat, black or white, the responsibility for leading us out of a mess that is obviously not of their making. In puzzling over why they are not taking to the streets there are also other factors to be taken into account. Is it a question of culture? I seem to remember that some of the impetus behind the poll tax riots came from the anarchist group Class War, linked to a kind of punk culture that was consciously anti-racist, including mixed punk/reggae bands like UB40. I am no expert on popular music these days but it would seem to me that Stormzy’s popularity among Labour Party members follows in such a tradition. Nevertheless, it is one thing to have middle-class Glastonbury-goers applauding the music and quite another to have them out on the streets in solidarity with victims of racism in Lewisham or Moss Side. Or, for that matter, with food bank users in Brent or South Shields. Is it a question of leadership – on the principle that a mob is not a mob until it is mobilised? Who knows? What seems clear is that there is now in London, as in other cities across the land, a bubbling cauldron of anger that seems overdue for an overflow. But where will it go? And who will be scalded in the process?





Yesterday I delivered the final manuscript of a book to the publisher. It represented quite an important moment for me, bringing together the insights gleaned from half a century of research on labour.

The last time I remember finishing a complete book in this way was back in 1981, in that burst of energy that high blood pressure produces in mid pregnancy. That one was never intended as a magnum opus, with a title (‘Your Job in the Eighties’) that screamed that it had a sell-by date as well as, of course, a write-by date imposed by the impending birth.

Little did I realise then how pressured the ensuing decades would be. I published edited collections and wrote an awful lot of book-length reports but the only really serious writing I did was in the form of relatively short essays, produced in the brief intervals between the pressing demands of meeting the deadlines for the work that paid the bills.

Thanks to Monthly Review Press, some of these essays, originally written for very different audiences, and with the first dating back to 1978, were published together as a book in 2003, and another collection followed in 2014. But the essay format does not really allow you to build an argument slowly from the beginning and follow it through. On the one hand, you cannot presuppose that the reader has read anything else you have written, so you have to go back to square one to explain certain things each time (leading to repetition if the essays are read sequentially); on the other, the length limit means you cannot go into as much depth and detail as would be ideal.

I was constantly urged by friends to ‘write a proper book’ and, indeed, told that I only had myself to blame for any lack of recognition or acknowledgement because I had not done so.

So at last I bit the bullet and decided to write one, hoping that it might be my last word on this subject that has occupied so much of my time and allow me to move on to other things. I found it quite hard to write in some ways. Partly because, as ever, there were other demands on my time (among them the need to babysit my granddaughter) but mainly because of the difficulty of avoiding self-plagiarism. If you have been saying something for fifty years (even if this is to very small or uncomprehending audiences) it does not feel fresh when you repeat it. As John Berger memorably said, ‘the first time you say something, you’re discovering a truth. The next time, it’s a little less true’. I would spend hours trying to find a new way to write something only to discover that I had put it much more succinctly, years ago.

Nevertheless, and despite a little bit of (duly acknowledged) recycling here and there, I did, I think, manage at least to build a coherent argument, starting in Chapter 1 and ending in Chapter 8, with a clear conceptual framework that I hope will be useful to other researchers and students (and maybe even some general readers) in years to come.

But oh, as they say, the irony.

There was one thing, however, for which I was not prepared. Even after a working lifetime of playing Cassandra, I was taken by surprise by the way this ‘book’ is going to be published. In a particularly ironic twist, this provides one of the most vivid (and cruel) examples of precisely the kind of fragmentation (of thought processes, of labour processes, of social interaction…) that I have been writing about all these years and that, indeed, forms part of the book’s subject matter.

Palgrave Macmillan, the publishers, who are now part of the Springer empire, are in the process of introducing a new way of publishing books online, one that integrates them with the way that academic journals are increasingly published. While hard-copy ‘proper books’ printed on paper will no doubt remain, albeit increasingly expensive, they expect the majority of readers to purchase their contents online. And with that in mind they are putting together packages that enable subscribers to pick and mix from a suite of content. Instead of buying a whole book, readers will be able to download chapters one at a time and bundle them together with chapters from other books – thus, at a stroke, destroying the coherence it has taken me so long to craft and introducing all sorts of new scope for incomprehension for the reader who comes in at, say, Chapter 5.

In this new environment, I suppose, that old derided essay format, so criticised by my friends and blamed for my relative invisibility in the academic world, at least in the UK, will turn out to be the best way to communicate after all. And that is assuming readers are even credited with the attention span to read 6,000 words consecutively. I fear that the future may be worse still: a literature made up of individual nuggets, each with an abstract that will be all most people read, arranged interchangeably in a two-dimensional mosaic in which the genealogy of ideas, the logical sequence of an argument, deep scholarship and, yes, even the quality of writing are flattened out of existence.

It will be a world where the relationship between reader and writer – that sharing of ideas which matters so much to me, in both capacities – is reduced to a purely instrumental one. Writers are expected to produce a series of discrete, easily explained ‘contributions to knowledge’ (as the reviewers for academic journals like to put it) which can be harvested as quickly as possible by readers whose only interest is in assembling them, along with others, like so many Lego bricks, to produce their own, equally simplified, ‘contributions’ – a process that resembles nothing so much as a dating website, something I wrote about, as it happens, only a couple of weeks ago in my last blog entry.

Researchers, be warned. The fragmentation fairy is waving her wand and you are about to be transported to Academic Tinder. Where, if you have done your homework, you will know that the only successful swipes are those that go to the right.



Not such good work, Matthew Taylor

The long-awaited Taylor Review of Modern Working Practices has now been published, under the title Good Work, and it is, I am afraid, very disappointing indeed. In terms of its concrete recommendations it goes beyond being a missed opportunity, out of kilter with its times, to posing an active threat to workers’ rights and undoing past advances.

As might be expected from a lead author who was appointed head of Tony Blair’s Number 10 Policy Unit in 2005, it is not short on spin. It speaks repeatedly of ‘enduring principles of fairness’, nods often to the idea of good work as an essential ingredient of happiness and wellbeing and claims to be focusing ‘not just on new forms of labour such as gig work but on good work in general’. Pious mission statements, such as ‘We believe work should provide us all with the opportunity to fulfil our own needs and potential in ways that suit our situations throughout our lives’ sit alongside nods to the inevitability (and benignity) of technological progress. In the classic contradictory formula of centre-left neoliberalism it manages simultaneously to say that ‘Good work is something for which Government needs to be held accountable’ and ‘The best way to achieve better work is not national regulation but responsible corporate governance’!

Why was it no surprise to discover this morning that Taylor’s co-investigator, Greg Marsh, was a former investor in that most visible of gig economy companies, Deliveroo?

Out of kilter with the times

In light of recent events, the report seems oddly old-fashioned. It is little more than six months since the Inquiry was established (in October 2016) but during that period there have been unprecedented developments on the ground, with an upsurge in organising by casual workers in the UK (and elsewhere). New trade union organisations, such as the UPHD (United Private Hire Drivers) and the IWGB (Independent Workers Union of Great Britain), have sprung up to represent drivers for platforms like Uber and delivery workers for companies like Deliveroo, as well as casualised workers in other sectors, such as outsourced cleaning workers, porters and foster carers. A series of test cases brought by these organisations, sometimes with the support of traditional trade unions like the GMB, have established in case after case that workers for companies like CitySprint, Uber and Pimlico Plumbers are not the ‘independent contractors’ these companies claimed they were but ‘workers’, entitled to such rights as the minimum wage and paid holidays. As a result of these, and of other well-publicised cases of exploitation of low-wage workers, such as that of Sports Direct, there has been a sea-change in public attitudes to fairness at work, evidenced by the popularity of the demand in the Labour Party manifesto for an end to zero-hours contracts.

The British public seems, at last, to have seen beyond the rhetoric that elides what is ‘flexible’ for the employer (in the form of a just-in-time workforce, waiting to be summoned at short notice by an app) with the older demands raised fifty years ago by the Women’s Movement for a ‘flexibility’ that responds to the unpredictable demands of family. Having lived it in their own lives, or watched their kids do so, most people now see only too well that being available on demand makes it very hard indeed to manage your own life, especially when childcare is involved. But the report shows no awareness that workers and employers may have different interests, merely stating vacuously that ‘Encouraging flexible work is good for everyone and has been shown to have a positive impact on productivity, worker retention and quality of work’.

While public opinion seems to have been saying ‘enough is enough’, the court judgements have been saying, in the words of Jason Moyer-Lee, General Secretary of the IWGB, ‘“gig workers” already have rights – all we need to do is enforce them’.

A rational response to this situation – the opportunity that this report misses – would take the existing principles as a starting point and work to ensure that there are clear guidelines for their implementation, putting the onus of proof not onto vulnerable workers but onto those who dictate their working conditions and profit from their services. But this is very far from the Taylor approach.

Missed opportunity

The report quite rightly recognises that the employment status of casual workers is confusing and poorly understood. This is partly because it is dealt with separately under the tax system and in employment law. Under the tax system, unless you have some other legal status such as being a limited company or a partnership, you are either an employee or self-employed. Many workers living hand-to-mouth think it is preferable to be self-employed because that way they can defer the payment of income tax and set expenses against it. Under employment law, being an employee brings a range of rights and protections, including such things as maternity and paternity pay, sick pay, parental leave and pensions coverage. These are probably worth much more to most workers in real terms than whatever tax savings they make by being self-employed, but of course can only be claimed if your employer actually agrees that you are indeed an employee and fulfils his or her part of the bargain. There are, however, some rights guaranteed under employment law to all workers, regardless of whether they are formally classed as employees. These include the right to the minimum wage and to paid holidays.

The difficulty of establishing employee status is not new. Back in the 1970s and 1980s, when I was doing research on homeworking, this issue came up again and again. Frightened women, unaware of their rights, were told firmly that they were not employees – often believing (usually wrongly) that what they were doing was not quite legal and that, if found out, they would become liable for tax or national insurance payments and fined for being in breach of health and safety or tenancy regulations – and so they accepted that they had no rights. The law then had no single test for being ‘genuinely self-employed’. Tribunals or courts were supposed to weigh up a number of different factors: who determined what work should be done and what should be paid for it, whether or not the worker had the right to employ somebody else to do it, how continuous it was, who paid for the materials, and so on. Little has changed since then, although the case law has moved on. The most crucial principle is whether a relationship of subordination can be said to apply.

In the case of most platform companies, there is little doubt that the workers are indeed subordinate. Although practices vary from company to company, workers are usually told precisely what to do, with each ‘task’ well defined and costed. Not only are their pay and work process laid down, there are also typically detailed rules about quality standards to be met. While there may be some limited right to turn a few jobs down, there are usually strong penalties for doing so repeatedly. They do not have the right to pass the work on to others. And in some cases (Deliveroo being a case in point) they are even required to wear uniforms or sport company logos.

The report could have laid out clear guidelines for defining genuine self-employment and spelled out the obligations of employers of subordinate workers. But what it has done instead is muddied the waters still further by proposing exceptions to the existing principles which could be detrimental not only to workers who are currently working casually but also to other workers, including those currently defined as employees.

 How could its recommendations make matters worse?

  1. Establishing a new intermediate kind of employment status – the ‘dependent contractor’

The report proposes setting up ‘an intermediate category covering casual, independent relationships, with a more limited set of key employment rights applying’. Although this approach has been rightly resisted by British legislators in the past, it is not a particularly original response. Indeed it is something of a knee-jerk reaction by neo-liberal ‘modernisers’ to the development of new forms of work. It was, for example, strongly promoted in Europe in the 1980s and 1990s (for example by the Belgian labour lawyer Roger Blanpain) as a way of encouraging teleworking without bringing it completely within the scope of existing employment protection laws. Italy provides a particularly extreme example of the ways in which different forms of ‘parasubordinate’ status and sub-categories of self-employment have been created to cover workers, such as call centre workers, who fall outside traditional sectoral agreements and regulatory categories. The overwhelming evidence is that when such new kinds of status are established they do not just result in reduced coverage for the ‘new’ kinds of workers who fall under them but, even more importantly, are then extended across the workforce to bring other, more traditional forms within their scope, resulting in a worsening of conditions across the board. In other words, what they do is provide employers with a new tool for casualisation and the erosion of existing rights, whatever well-intentioned language is used that purports to prevent this.

  2. Undermining the minimum wage

The report also proposes a change in the way that the National Minimum Wage (NMW) is applied: ‘In re-defining “dependent contractor” status, Government should adapt the piece rates legislation to ensure those working in the gig economy are still able to enjoy maximum flexibility whilst also being able to earn the NMW’. What it proposes is complex and difficult to summarise here. At the headline level it looks like a proposal to increase the NMW by a modest amount for workers with the proposed new ‘dependent contractor’ status. However, the report also wags a stern finger at those who think that workers should be paid for all the time they spend waiting for jobs to come up, which, it says, is unreasonable and open to abuse. Given that many workers in the gig economy spend half their time or more logged on in the hope of work that does not arrive, this could in practice lead to a fall in the time eligible for payment.
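The arithmetic behind that worry can be made concrete with a toy calculation. Everything here is hypothetical and purely illustrative (the wage figure, the function and its parameters are my own invention; the report’s actual piece-rate formula is more complicated than this):

```python
# Illustrative only: how excluding waiting time from the hours eligible
# for the minimum wage changes a gig worker's effective hourly pay.
# The rate below is an assumed figure, not the actual statutory NMW.
ASSUMED_NMW = 7.50  # pounds per hour, hypothetical

def effective_hourly_pay(hours_logged_on, hours_on_jobs,
                         rate=ASSUMED_NMW, pay_waiting_time=True):
    """Pay per hour actually spent at the platform's disposal.

    If pay_waiting_time is True, all logged-on hours count as eligible;
    if False, only hours spent on actual jobs are paid.
    """
    eligible = hours_logged_on if pay_waiting_time else hours_on_jobs
    return (eligible * rate) / hours_logged_on

# A courier logged on for 10 hours, with paid jobs filling only 5 of them:
print(effective_hourly_pay(10, 5, pay_waiting_time=True))   # 7.5
print(effective_hourly_pay(10, 5, pay_waiting_time=False))  # 3.75
```

On these (invented) numbers, a worker whose waiting time does not count is effectively earning half the headline minimum wage for the hours they are actually at the platform’s beck and call.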

There is more in the report. I have only scratched the surface here. But I am about to board a flight for China, so will postpone further discussion for another day.

And more on the future of work

In the new spirit of reblogging here things I have already blogged elsewhere, here is a piece that appeared today on the LSE blog (their headline, not mine).

Future of Work: taking the blinkers off to see new possibilities

Anybody relying for their information on the current headlines would find it hard to make sense of what is happening in the labour market. On the one hand, the news media are awash with apocalyptic forecasts, often backed up by studies from reputable organisations such as the US National Bureau of Economic Research, the Oxford Martin School or the Bruegel think tank, that robots, machine learning, drones, 3D printers, driverless cars and other applications of Artificial Intelligence are going to eliminate very large numbers of jobs, not just in manufacturing but also in service industries, ranging from low-skill tasks like picking and packing in warehouses and home delivery right up to high-skill professional tasks like legal research or stockbroking.

On the other hand, the UK employment rate is at an all-time high of 74.6 per cent, with the unemployment rate, which averaged over 7 per cent from 1971 to 2016, having fallen to just 4.7 per cent in January 2017.

So, are we facing mass unemployment or not? Here we are, nearly a decade after a major financial crisis that led to job losses, austerity and waves of corporate restructuring including bankruptcies, mergers and acquisitions, seeing the emergence of new winners, with new business models and the birth of new industries, with new technological applications playing a key role. If we take a broad historical view, this is actually quite a familiar story.

We could look, for example, at the development of new industries based on the spread of electrical power and mass entertainment after the 1929 crash, or of computerisation after the 1973 energy crisis, or the explosive growth of the Internet in the decade after the infamous 1987 Black Monday. Each of these technologies was also, of course, instrumental in displacing large numbers of jobs in older industries. And with each wave, livelihoods were irrevocably damaged, because the new jobs were not created in the same areas, or for the same people, as the old ones.

The elderly look on in amazement at the desirable new labour-saving appliances their grandchildren buy, remembering the back-breaking drudgery of the old methods. But for every gleaming new factory in one part of the world, there are piles of rusting machinery in others, along with devastated lives and communities. Such ‘creative destruction’, as Schumpeter called it, is, surely, part and parcel of capitalism as usual.

So why, in the second decade of the 21st century, are so many commentators, on so many different parts of the political spectrum, convinced that this time things will be different: that we are, in Paul Mason’s phrase, moving into a period that could be described as ‘postcapitalist’?

Part of the explanation might lie in the way that capitalism is often seen, especially by the young, as a single, monolithic system that embraces all aspects of life. Perhaps a more useful way of understanding it is as a somewhat messy assemblage of different capitalists competing with each other, scrabbling for market share, experimenting with new business models and often failing. In times of crisis, when many are going to the wall, technologies (including some that have been around for a while) may be seized on, not as part of an orchestrated general plan, but in much more piecemeal ways, by particular firms looking for means to restore profitability: to reduce labour costs, develop new products or services or enter new markets.

Obvious first targets for automation are processes where labour costs are high, usually because they require scarce skills or workers are well organised. So it is not surprising that skilled print workers were first in the firing line for digitisation, or auto factories for robots. The first companies to introduce innovations can make a killing – getting ahead of their competitors with a step change in increased productivity.

But such advantages do not last long. Once the technology is generally available, it is open to any competitor to buy it at the lowest market price and copy these production methods. A race to the bottom is started, which can only be sidestepped by firms that continue to innovate. It is fanciful to imagine that it would be possible to populate the world’s factories with 2017 state-of-the-art robots and then just leave them to get on with production. Leaving aside the question of how these robots are to be assembled and maintained, there is no conceivable business model that would make this profitable over any sustained period of time.

A much more likely scenario is that vast new industries will grow up to manufacture these new means of production which, like today’s laptops and mobile phones, will rapidly become obsolete and need replacing. These industries will also give birth to new service jobs, involved in their design, distribution, maintenance and in dealing with the unintended consequences of their widespread adoption (such as cyber-crime and new safety hazards).

Current technologies do not just create new kinds of jobs, they also change the way work is organised, managed and controlled. My research has shown that 2.5 per cent of workers already get more than half their income from online platforms. These new organisational models do not just change the way existing jobs are managed but also bring new areas of economic activity within the direct orbit of capitalism, for instance by drawing into the formal economy the kinds of cash-in-hand work done by window-cleaners, dog-walkers, baby-sitters or gardeners. They may not be jobs in the traditional sense, but they are work, with the potential to be organised differently in the future, that can form the basis of profitable new industries.

Another factor that blinkers thinking about the future of work is a failure to see beyond the boundaries of the existing industrial structure and imagine where other new industries will emerge from. Whether it’s the DNA of plants, the human needs for entertainment, sociality and health or outer space, the universe is full of new opportunities for commodification. The question is, can the planet sustain them?

How global IT companies screw up your daily life – another example


I have been seriously thinking for the last six weeks or so that I am developing dementia, after repeatedly finding that entries I had made in the diary feature on my iPhone (on which I have relied for years) were appearing on the wrong day. I now discover that this is caused by a horrible redesign – made with no warning to users whatsoever.

Before the last (unasked-for) upgrade, if you were trying to fix an appointment you could see, in calendar mode, which days did – or did not – have some activity in them. You could then click on any given day to see what appointment was already there (suppressing the minor annoyance that Apple might have chosen to mark something like St Andrew’s Day or Valentine’s Day, making a day that was in fact free look occupied) or you could add a new appointment. The software, in other words, took you straight from the month view to the day view via a click on the date. There also used to be an intervening week view that showed each day consecutively, so you could see details of what was on for each day. Since the last upgrade they have introduced a quite different intervening view that does not list all the days consecutively but lists every diary entry: if there is more than one thing on any given day, each item is given its own entry, but if there are days with nothing entered it simply skips them. I thought this was just a visual change but now realise that the functionality has also completely altered.
Yesterday I was trying to make an appointment in January. I looked at the month view, found that there was nothing on from the 11th to the 15th, and clicked on the 11th to add the new appointment. But the software didn’t take me to the 11th – the page it opened was the 16th, the first day on which I already had another appointment entered. The only way to add the new appointment was to enter the new date manually as a changed start time. The software has clearly been doing this ever since the last upgrade, which explains why at least four appointments I have made in the last month have ended up on the wrong dates. There are many more set for the future, and I can see that I am going to have to go through them all, checking each one to make sure it is entered for the right date. Hours of my time wasted, all because some little geek working for Apple (probably in dreadful conditions in Bangalore) didn’t think this thing through, and nobody bothered to offer customers a choice. The same upgrade, I may say, also unilaterally took it upon itself to assume that an appointment I made in Toronto needed to be adjusted by five hours to bring it into line with UK time – resulting in another huge diary disruption.
I could manage my diary just fine on a Nokia communicator 20 years ago. But now we are in an era where our every labour process, paid or unpaid, is determined by these global corporations. An activity as simple as jotting a note in a diary electronically, rather than on paper, now involves effectively filling in a form. And this form is not designed to enable independent individuals to manage their lives autonomously but to facilitate corporate control of time management and maximise rental incomes to software companies, telecommunications suppliers and their ilk.
In the last four or five years I have been struck by the spread of those practices whereby messages are sent directly to your diary by other people using Outlook. An alert will suddenly pop up asking you to accept or reject a meeting request from someone you may or may not know. At first these came from other people in the university I work for and were, I assumed, linked to the fact that we were all on the same email system, but now they come from all directions – neighbours, people I have agreed to give talks for and even, the other day, somebody inviting me to a party that way. Intrusion into other people’s time management has been appified and normalised. If you fail to ‘accept’ or ‘reject’ or, worse, fumblingly press the wrong button on a pop-up that has intercepted your urgent attempt to do something else, there will be social consequences, as well as potential financial ones (like those that occur when you do not realise that, lurking in a website from which you have purchased something, there is a hidden area where you are supposed to deactivate automatic renewal).
Last night I spoke at a book launch in Oxford for this remarkable book by Bob Hughes, and the audience discussion turned to the question of what to do about it (‘it’ being the toxic effects of technology more generally). Two ‘solutions’ stand out as the most obvious.
The first of these is to resist the new technology and go back to the old. In this particular case this would mean going back to lugging around a heavy address book and diary and pen wherever I go. With my low haemoglobin and bad shoulder this would be an increasingly painful solution, as well as doing little to reduce the world’s consumption of paper. It would additionally, in these days when arrangements are made by text and email, require a lot of cross-referencing with other sources of information. There is also the reality that my handwriting is not the most legible, and a note made, for example, on a moving bus is liable to be open to several alternative interpretations. And there is the ever-increasing risk of physical loss or damage – absent-mindedly leaving it behind somewhere, having the bag stolen, or spilling coffee over it.
The second ‘solution’ – the one that, over the years, I have heard proposed by more (usually young and male) techies than I could count – is to develop alternative applications, using open source software. This means having to invest a huge amount of personal time and effort (unpaid of course) in learning how to use this software and, if you are not a denizen of any hackerspace, simply swapping dependence on one lot of techies (poorly paid by global corporations) for another (apparently working for free but actually, of course, with their time subsidised by rich parents or spouses, day jobs paid for by others or some form of rent or taxpayer subsidy).
In the here and now neither of these is an attractive option for me. So I guess that, until the workers of the world unite to build a better society, I am just going to have to grit my teeth and keep learning the new codes and filling in the forms and installing the new apps at the diktat of these global corporations, rendered dumber (and angrier) by the day by their Taylorisation of my daily life.

All that suffering. For what?

I cannot have been alone in my reaction to yesterday’s Autumn Budget announcements from Philip Hammond, in which the promises that underpinned the government’s austerity agenda for the last six years were at last pronounced officially dead. What I couldn’t stop thinking about was the huge toll of human sacrifice those false promises had brought about: the elderly people hounded out of their council homes because there was one bedroom too many, the dying people deprived of benefits because they turned up a few moments late for a Jobcentre appointment, the disabled people put through humiliating and painful tests, the defeated expressions on the faces of proud people forced into demeaning make-work jobs, the shame of having to turn to a foodbank to feed one’s kids. So much pain. Then, all unbidden, the words came into my head from that Stanley Holloway comic monologue, so often requested on the radio in my childhood, called Albert and the Lion, in which the mother of Albert (who has been eaten by a lion at the zoo) is consoled by a magistrate with the thought that she can always have more sons and replies, indignantly, ‘What, spend all our lives raising children, to feed ruddy lions? Not me!’.

Whether those lions are seen as stand-ins for war or for capitalism, the joke, certainly understood by most people in the self-deprecating 1950s when I first heard it, hinged on the fact that, of course, people always DO go on raising children, whatever the cost, whatever the sacrifice. In fact for most people, having children is the best and most altruistic thing they ever do in their lives. Having children, or grandchildren, or nephews and nieces, or loving the children of others, gives you a stake in the future, in peace, in public order, in a society that values more than just making money. It is actually society’s main protection from nihilist destructive rage, crime and greed gone mad.

Against all rational self-interest, in the knowledge that it will make them poorer, deprive them of sleep, of chances to go out in the evening, of holidays, people just go on having babies, drinking in their smiles, saving up to buy them treats, then later worrying themselves silly every time they fail to come home on time, trying desperately to protect them from pain and, yes, putting up uncomplainingly with horrible jobs just to try to assure them a secure future.

It was reported at the end of June this year in the Guardian that the number of children being brought up in poverty in the UK had risen from 3.7 million in 2014-2015 to 3.9 million – an increase of 200,000 in just one year of austerity programmes. If you listen to the way the parents of these children are described in the right-wing media, or see how they are treated by the Tory state, you would think that choosing to procreate is an act of pure selfishness, embarked on to jump the queue for social housing, or claim a bit more benefit. Rarely is it recognised that what parents are actually doing, often at great cost to their finances and their own bodily wellbeing, is bringing up the next generation of workers and taxpayers on whom the economy depends. Instead of being rewarded and praised for this, they are demonised.

If there is one single argument, above all others, for the need for a universal basic income it is this: to secure a future for our children – social reproduction – that does not have to be bought with such suffering (I was going to write ‘needless suffering’ but of course in this unequal world we know that there are those who benefit from it).



The end of the middle

There was a sudden moment yesterday morning when I was hit (it felt bodily, like a punch) by the realisation that it was really likely that Trump would win the US presidential election. I was half-watching the morning news on the BBC while preparing breakfast and they showed clips of the final rallies of the two candidates: Clinton in Philadelphia, embracing Bruce Springsteen, and Trump addressing a crowd in New Hampshire. What jolted my attention was Trump’s language: ‘Tomorrow’, he said, with complete confidence, ‘the American working class will strike back’. Wow, I thought, he actually said it; he actually used that phrase ‘working class’ which has always seemed so inexplicably taboo among mainstream Democrats. And I felt a deep conviction that he understood precisely what he was doing when he used it.

For years, I, and no doubt other people on the European left, have been puzzled by the way, across the Atlantic, workers have been persistently described as ‘middle class’. It could perhaps be explained in several ways: negatively, as a way of disassociating from any hint of communist leanings; more positively as an appeal to the aspirations of the poor in a society that has grown through upward mobility, particularly of second-generation immigrants; as a way of fudging class differences in an electoral system in which victory can only be won by broad alliances between what Marxists would regard as proletarians and elements of the petit bourgeoisie.

One of its many effects has been to make it difficult to speak clearly of class at all. People are analysed in their capacities as consumers, or in relation to their ethnicity or other demographic variables, but rarely in relation to their role in the economic division of labour. Although the industrial working class may be romanticised nostalgically (interestingly enough not least by Democrat supporters like Bruce Springsteen), it is marginalised in general discourse. An increasingly fictionalised idea of a centre ground made up of ‘hard-working families’ is substituted for it in the mainstream (echoing the language of 1990s centre-left politics, which presumed a fuzzy middle ground in which ‘third way’ politics would work).

Tragically, this dissolution of clear class analysis has been echoed on the left as well as in this centre ground, where concepts like ‘the 99%’, the ‘multitude’ and the ‘precariat’ have been substituted for that of the working class.

By not daring to speak the working class’s name, these deniers have opened the door to a reframing of working-class identity. If the people (workers or former workers) who perceive themselves to be the losers of neo-liberal globalisation policies – and who know very well, and rightly reject, the designation ‘middle class’ – feel that they are not being ‘seen’ by social democratic parties, then they will look for other leaders who recognise them for who they are and seem to care about what they fear. And what we are seeing, in the USA as in the UK and other parts of the world, is that those who do so are rabble-rousing, xenophobic populists.

Working class people who have been told they are middle class know that they have been lied to, and will not trust the politicians they believe are liars. The unfolding tragedy we are living through shows that they may then become open to believing other liars, who persuade them to deflect their rage against fellow members of the working class, whom they do not recognise as such, having been deprived of the analytical tools to do so.

To paraphrase G.K. Chesterton (on God): when men choose not to believe in socialism they do not thereafter believe in nothing, they then become capable of believing in anything.

But I will try to end with a glimmer of hope, this time from Hegel.