
The Human Rights Act as a constitution of liberty

Guy Herbert, best known as the general secretary of NO2ID but writing in a personal capacity, defends the Human Rights Act as a necessary bulwark against the state now that so many of the traditional defences have been eroded.

I am here to defend the Human Rights Act. It is not an idealistic defence but a pragmatic defence, rooted in historical context. Should classical liberals support the Human Rights Act against repeal? Do we need it? My answer is yes.

Our reactions to phrases become readily conditioned. And so it has been with "human rights". Let us remember for a moment that the full title of the agreement under siege here is the Convention for the Protection of Human Rights and Fundamental Freedoms. If it were called the Fundamental Freedoms Act, would it be as easy to undermine?

Sad to say, human rights do have a bad name, and they have that bad name for good reasons. Their strongest proponents often do the most harm to their reputation - not because of the legal content of what they say, but because of their approach to the law.

This comes in two forms which sometimes overlap: the rarer is soft revolutionism from the far left—human rights as a transitional demand. This makes human rights a movement more than a doctrine—a means to control the terms of any political debate.

More common is a not entirely conscious belief that human rights and the Human Rights Act in particular embody the truth, the whole truth and nothing but the truth of how states should treat people. It's a sort of human-rights fundamentalism, a desire for revealed wisdom in which "but that is contrary to Art 6" is a morally conclusive statement.

It's bound up with humanism, and instantiates the felt values of the bien pensant left. There's a parallel here with common US attitudes to their constitution, treated as Holy Writ, though those attitudes are found more on the right than the left.

I fear that in particular the venerable National Council for Civil Liberties, now Liberty, has become something like the Church of the Human Rights Act. All its activity (most of it still valuable) is now predicated on the overriding importance and superordinate moral power of the Human Rights Act – taken to be proof that the social assumptions of the liberal left are correct.

There are also those for whom the Act was a crowning achievement of Tony Blair, showing how the Labour party in government was committed to the freedom of the people, unlike the brutish Thatcher and Major regimes. These last need not detain us long, though their tribal pique may well spur some Conservative opponents.

The enemies of the Act have a point in their disdain for its claque. But they go rather further than that, and repudiate the law itself. Moreover most reject not just the domestic law (such as it is) but the underlying convention, of which Britain has been part since 1950. The Act for them is merely an enabler for the unwelcome jurisprudence of Strasbourg to get into English law (Scots law not much considered). A British Bill of Rights, they say, will stop all such civilian nonsense.

What, then, do they say is wrong with it?

They say it is foreign.

They say the wrong people have rights.

They say we need ‘a better balance between rights and responsibilities’.

They say it is concerned with trivia - or with too many things - and gets in the way of common sense.

They say it allows 'activist judges' to make law.

They say it tramples on the sovereignty of parliament.

Most interestingly for my argument, they say that Britain doesn't need it because of our own much deeper constitutional freedoms.

I have my own complaints against the Act and the convention that it embodies.

Neither is really strong enough in its protection of the individual from the state. The Act doesn't do some things one might hope for from the point of view of the rule of law: it does not clearly override other legislation, let alone strike down incompatible law; it doesn't make Strasbourg an appeal court within our system, or its decisions directly applicable here.

It does, on the other hand, extend the scope of human rights actions way beyond bodies exercising state power to all those dealing with the public—and thereby hands some new power to the state, allowing human rights to be used as a sword as well as a shield. [i]

The Convention is riddled with state get-outs - restrictions "necessary in a democratic society" - which have given rise to the canting doctrine of proportionality that pervades all legal discussion of human rights questions. And it fails to mention some critical liberties at all (which would matter less if that did not mean the Church of Human Rights has now forgotten them).

It's fairly feeble.

But it is something. I'd argue that we need to strengthen and clarify the application of the Human Rights Act.  What are we to make, as believers in a liberal rule of law, of those who want to dispose of it?

Other commentators have said that Conservative plans (such as they are) are "legally illiterate". That may be so. But such commentators are inevitably communicant members of the Church, preaching to the choir. To the right, press and public alike, that just sounds like technical waffle covering special interest. Technical problems can usually be fixed. My differences stem from first principles.

Foreign? So what?

A good thing is a good thing, and a bad one a bad one, wherever it comes from. Hatred of the European Court of Human Rights does seem to be tied up in some people's minds with dislike of the EU and a nationalist resentment of international institutions generally. But I say, take institutions on their merits. Nationalism is unprincipled: it alights arbitrarily on what it calls familiar, and calls 'foreign' what it doesn't like. Actually, Great British institutions, from the royal family to fish and chips, often turn out to have foreign origins when you look closely.

The wrong people have rights?

If that is your objection you have missed the point of rights, which is the defence of the individual in the face of attack. If you can strip someone of defences by declaring they are a wrong'un, then none of us is safe.

A balance of rights and responsibilities?

That is if anything worse. It misunderstands what freedom is: not a privilege to be granted on condition of good behaviour, but something that can only be taken away - if at all - on condition.

Against common sense?

This is a variant of "the wrong people have rights". It says some have the privilege to prejudge what is important, and to subordinate some people's priorities to others'.

Judges making law?

Well, I trust the same people will be discarding that Great British gift to the world, the common law system. Such 'activist judges' as Sir Edward Coke and Lord Mansfield, with their extrastatutory decisions, had no business interpreting parliament or precedent. Let's, while we are at it, chuck out the Appeal Court and House of Lords judgments that found ministers' or officials' behaviour unreasonable. "Activist" in this context is just a boo-word. Judges decide the cases brought before them - what else does one expect them to do? If they make decisions you don't like, then challenge the reasoning and distinguish your case. Otherwise, tough.

Trampling on sovereignty? I should bloody well hope so!

The whole point of a constitution, of the rule of law, is to constrain absolutism. And absolutism is what is meant by sovereignty here - be in no doubt about it. The complaint is that parliament - or the state speaking through parliament - ought to be able to do anything at all, however destructive of individual liberties.

With the greatest respect... no, with no respect at all - I don't agree. "Absolutism begins at home" is not a motto any liberal should support.

**

Look for a minute at the context of our (Britain's and other Western European countries') drafting and adopting of the convention: the point of doing so. This is something both its acolytes and its enemies neglect. It didn't come from nowhere. It was, as constituting documents usually are, an attempt to define and stabilise an order already won by violent struggle. In this specific case, that was the allied victory in the Second World War, and its aftermath. The contents of the convention are a direct reaction to the abuses of the totalitarian states of the middle of the 20th century.

Argumentum ad Hitlerum has a poor record, but it is unavoidable here. One can go through every article of the convention (I won't - I leave it as an exercise for the reader) and see the shadow of the lawless Nazi regime and its abuses. And further in the background, but still in the picture, Joe Stalin, who was still alive and a dominant figure in the world when it was drafted. The high contracting parties were ostentatiously saying: 'we're not fascists, Nazis nor Marxist-Leninists'.

Let's not resist the cheap shot that the Nazis would have had no truck with the Human Rights Act. They would have hated it not only because it was specifically tailored to spite them, but because it represented rootless cosmopolitanism rather than völkisch values; because the asocial should not have rights; because your rights should in any case depend on your contribution to the Fatherland; and because nothing should stand in the way of the supreme Will to Power.

In 1950, Britain may have been less in need of reassurance that it was not a totalitarian state than many of the other signatories. But the most interesting claim of the critics is that Britain is intrinsically a free country, and that this is still the case. At best this is romantic nonsense. Even in the '50s the British state was bulking up hugely. It was then that we saw the beginning of the widespread use of judicial review as a means - from British common law - of restraining the arbitrary official power that before the war Lord Hewart had called "The New Despotism". [ii] "We" supported the ECHR then not merely to rebuke Uncle Joe but because we had seen how fragile liberty was in dozens of civilized European states.

Sixty years later, the interest of states everywhere in the detail of the lives of their citizens has bloated.

Britain is no longer obviously a free country - one in which support for liberty on the part of our fellow citizens and our representatives can be taken for granted. We have reached a state of mind in which "the government should do something" is the first reaction to the most factitious scandal. In which an absolute majority can be found in an opinion poll in favour of banning the almost certainly harmless activity of vaping in public places. [iii]

It is not even, in the same sense, a representative democracy.

And it has accelerated. The British state is certainly not the (domestically) limited creature it was in 1950 - it has far more power even than it had in 1997. Technology gives the state more information, and with it more conviction of Whitehall's omniscience. And the machinery of legislation has changed, too. Parliament has less influence. Much legislation passes as "framework legislation", giving powers in very broad terms to be filled in by regulations. More is very broadly drafted, leaving it to the discretion of police and other enforcers to decide who will be prosecuted when many are technically guilty. And all of it is hustled through under timetabling. Six guillotines in the decade of the aforementioned Thatcher regime were a scandal to her opponents. Everything since Blair has been guillotined and knifed. [iv]

Yet opponents of the fairly feeble Human Rights Act are not proposing to weaken the executive branch, just the law that stands in its way. A ravenous Whitehall beast will have a little less chain, and disproportionately more power, and it is presented as reverting to an earlier constitutional age. This is at best deluded: like suggesting you can book Concorde to New York next Tuesday. At worst the delusion is a perverse rejection of all we know about political institutions. It is a conceit that totalitarian power is fine, because it will be wielded solely by sympathetic thoughtful people like Mr Grayling, who will use it as the public wishes. And the public is always right.

Hayek wrote in The Constitution of Liberty of the development of the understanding of rule of law and the concept of a Rechtsstaat:

In most countries of the European Continent two hundred years of absolute government had, by the middle of the 18th century, destroyed the traditions of liberty... the main impetus for a revival came from across the Channel. But as the new movement grew it encountered a situation different from that which existed in America at the time or which had existed in England a hundred years earlier. 

The new factor was the powerful centralized administrative machinery which absolutism had built, a body of professional administrators who had become the main rulers of the people. This bureaucracy concerned itself much more with the welfare and needs of the people than the limited government of the Anglo-Saxon world either could or was expected to do. [v]

In the last 100 years Britain has developed an absolutist government, and the fading of the culture of liberty and the growth of institutional power have intensified that. The state has become more absolutist in the nearly 20 years since the Act was passed. We may not want the Human Rights Act. But we do need it. It is not sufficient; the moral certainty of its fans may be irritating; but we do need it.

———

[i] s. 6(3)(b) says that a "public authority" includes "any person certain of whose functions are functions of a public nature"… this has been interpreted to create duties for business-owners, for example.

[ii] See Lord Hewart, The New Despotism (London: Ernest Benn Limited, 1929).

[iii] http://yougov.co.uk/news/2014/10/21/ban-e-cigarettes-indoors-say-public/

[iv] How it is held to “work” is set out by the present government, here: http://www.publications.parliament.uk/pa/cm201213/cmselect/cmproced/writev/programming/m99.pdf

[v] F.A. Hayek, The Constitution of Liberty (London: Routledge & Kegan Paul, 1960), ch. 13.


Why gamergate will lose

Gamergate is one of the most interesting cultural issues that has appeared in years. It is a rare time that the losing side of the culture war has put up a good fight. But the anti-gamergate side will win, because Progress always wins. I'll try and give a concise guide to gamergate, what's at stake, where it came from, and why exactly it is that it will lose.

Corruption and abuse

Pro-gamergate is used to refer to the gamers' side (i.e. those who either think there is a conspiracy in games journalism; that they have been unfairly stigmatised and bullied; those who dislike Zoe Quinn; and/or those who oppose social justice activism being a major part of games journalism); anti-gamergate to the games journalists' side (i.e. those who think gaming culture is misogynist and/or racist and/or transphobic and/or homophobic; and those who think that the conspiracy claim is just a veil for trying to make gaming an unsafe space for women).

Most gamergaters don't want the issue to be about Zoe Quinn, or women in gaming at all, if you look at their forum postings and discussions. They want it to be about supposed corruption in video games journalism. But the movement is full of apparent contradictions: they are against what they view as extreme social justice inroads in their beloved medium, but at the same time they don't think themselves discriminatory to gay, trans, female, non-white gamers.

However, I think the issue of Zoe Quinn is crucial, just as the issue of Michael Brown was crucial in Ferguson (if the police narrative is a lie, then it looks like a very serious case of police brutality/racism). Internet abuse is a terrible thing, yet scarcely anyone complains about the voluminous abuse directed at George Osborne or David Cameron on Twitter. Perhaps this is because they don't read it—but equally Zoe Quinn could avoid her mentions from those she doesn't follow, as I have done under pressure at times.

It is, outside of some philosophical thought experiments, always morally unacceptable to threaten violence, especially sexual violence, against people; but again, violent threats are fairly routine across the internet and almost never actual threats. I could leave my house because an anti-immigration campaigner threatened me with death, but I'm not sure it would be accurate to describe me as having been 'forced' to leave my house.

Zoe Quinn

Gamergate is a movement that arose when Zoe Quinn, who wrote a text adventure called Depression Quest with a tool called Twine, was accused by her boyfriend of having repeatedly lied to him, cheated on him (with five men in his own industry—Quinn confirmed she had sex with at least three out of five) and generally treated him in a way consistent with abuse. Her interactive fiction got rave reviews by video games journalists (who very rarely review interactive fiction). Many gamers took this to mean she had 'slept her way to the top' (she has not been accused of having sex with any of the people under whose bylines the reviews appeared, but she did have a relationship with a judge who awarded her an Indiecade prize).

Quinn had already been controversial in the games industry. When she first released Depression Quest, at least two members of a forum called Wizardchan ('an image board for male virgins') posted rude things about her. They were accused of raiding and doxxing her (releasing personal information about her) but others have claimed the phone numbers and addresses provided were false. I couldn't find any evidence of either being true. There was a media storm whose narrative ran that women can't even get a game out without facing massive abuse.

Perhaps most controversially, she single-handedly torpedoed The Fine Young Capitalists, a 4chan-supported 'radical feminist' scheme to try and get more women into gaming by crowdfunding their video games. Eight percent of the profits of the game would go to the main developer, and the rest would be used to fund future contests. She organised a campaign against TFYC because she judged it 'transphobic'—because it required entrants to have previously identified as being female to stop men from gaming the system. She also claimed it was exploitative of women because the lion's share of the profits went to future contests. Quinn later set up her own version of TFYC's game jam.

When the aforementioned sex scandal came to the fore, the whole issue blew up, and the accusations of ethics breaches (sexual and monetary) melded with general opprobrium toward Zoe Quinn, pent-up irritation over the anti-gamer culture and (what gamers saw as) extreme social justice activism. Gamergaters congregated initially on 4chan, and eventually on 8chan when even 4chan's notoriously uncensorious owner Moot banned them from discussing the issue. As I mention below, this eventually developed to a point where some people (probably, but not definitely, gamergaters) made threats against Quinn and others.

Death threats

The issue had ticked along until it took a turn for the worse recently, with reports of sexual and violent threats against Quinn, another developer, Brianna Wu, and Anita Sarkeesian, a feminist critic of gaming and gamer culture. At times, all three felt threatened enough to leave their homes. One threat, of a mass shooting at Utah State University, led to Sarkeesian cancelling a talk she was planning there. The threats were unverified and there is no suggestion that anything physical has happened so far (as with, e.g., threats that I and my colleagues have received).

Though the pro-gg forces won one battle (a letter writing campaign convinced Intel to drop adverts from anti-gg Gamasutra) the death threats have shifted the debate, and suggest they will lose the war. Most gamergaters seem to condemn the death threats on moral and/or tactical grounds, but of course this is not seen as exculpating the overall movement. The movement is described across all major media as being about misogyny and racism and transphobia (often female or trans or non-white pro-gg people are dismissed as unrepresentative tokens) rather than the alleged conspiracies its members want to focus on.

The real story

But really it seems to me that these conspiracies are much less interesting than the gamer vs. games journalist angle. Games journalists are mainly young, white, smart, literate, college-educated city-dwellers, and all of those traits make you not just likely to lean Left, but to lean towards the modern 'social justice' movement. Consider: the US is about fifty-fifty in Republican to Democratic voters (if not registered members), but about four times as many journalists are Democrats as are Republicans.

Bear in mind that although social justice advocates do care about wealth disparities, this is far from their main concern, at least in terms of how they allocate their time. For example, insufficiently pro-transgender feminists will arouse large campaigns stopping them from giving lectures at many universities, but libertarian capitalists are able to speak freely. This is why I have argued that social justice is (a) a facet of neoliberalism, and (b) an artefact of the cognitive elite's takeover of society. This is what makes the modern social justice movement so different.

How can it be that social justice is a facet of neoliberalism when most social justice advocates are deeply opposed to neoliberalism? Neoliberalism is a centre-right ideology that, unlike most previous centre-right ideologies, fits very well with social progress even though it favours markets (though substantially regulated and taxed markets) to distribute resources. I suspect that neoliberalism is successful because it has a large portion of the left vanguard agitating for one of its planks and a large portion of the stronger elements of the right agitating for another of them. The media tends to have neoliberal views relative to the population at large—more free market economically but more socially liberal.

Gamers look to games journalism to tell them which games are cool and fun and engaging and interesting and, crucially, worth dropping £50 on. Games journalists see their profession partly as a calling to purge the incompletely right-on memes that still exist in gaming. They might not put it exactly this way, but if you see insufficiently feminist games as harmful, then why wouldn't you use the power you had to wipe them out? But gamers don't care nearly as much about their games including sufficiently diverse characters in sufficiently fleshed-out roles; they just want to know which are good games. Given the divide, it's understandable why games writers think them worryingly unreconstructed reactionaries.

Why they'll lose

Because gamers are a late hold-out in the culture war that is raging. As it has won almost every major political battle since the Glorious Revolution (if slowly, sometimes), the left is going to win this one, because it controls the commanding heights of the media, allowing it to bring the mass public on side, and because its adherents follow their faith with a religious zeal. Consider how marginal an issue gay marriage was in the 90s, even (or perhaps especially) among gay people. In 2008 a liberal Democrat felt obliged to declare opposition to it in his presidential campaign. By 2014 it is effectively impossible to hold a prominent job and be an opponent of gay marriage, even if you invented JavaScript! By 2014 film adaptations of your books will be boycotted if you oppose gay marriage.

The point is not necessarily that a particular set of policies is actually a bad thing—I'd bet many gamergate activists are in favour of gay marriage—but the way the victories come about. Gay marriage won not because it convinced the public with arguments and evidence, but because a zealous group of elites shamed, bullied and stigmatised anyone who publicly stated anti-gay marriage views as 'homophobic', particularly people otherwise close to them in political views. Lord Freud's discussion of labour market reforms intended to help disabled people was almost shut down because of an out-of-context sequence of words that sounds bad. Banksy's (inane and boring) satire on UKIP in Clacton was removed for fear of being interpreted as racist. Books mainly about the causes of inequality cause a giant storm if they also mention possible genetic differences between races (even if they are judged by academics in the field to be accurately representing the science). Indeed, it is considered enough to claim that something is dated or racist to dismiss its findings.

Gamers are too disparate and disorganised to defeat the most powerful memeplex of modern times. 'Gamergate', the term itself, is already acquiring a slightly dirty taste in a lot of people's mouths, as a byword for misogyny, abusing women, or apologising for either. In general, these mental shortcuts (gamergate = misogynist) are typical of the social justice movement, and are an extremely powerful conversation-ending weapon. 'Wait a second, you support gamergate? So you're a misogynist?' Gamergate was interesting, but its advocates' zeal will run out, while the articles in the press aiming for a balance between pro- and anti- perspectives are dwindling (and are themselves subject to attacks from more virulently anti- media). Gamergate is one of the most interesting things to happen in years, but I don't think it will win.


Looking at the world through neo-liberal eyes

I spoke at Brighton University as part of their seminar series on neo-liberalism.  The term 'neo-liberal' is usually used in a derogatory sense, though I chose not to use it that way.  I was the only speaker in the series to speak in favour of neo-liberal ideas, and my title was "Looking At the World Through Neo-Liberal Eyes."  I began by quoting an old Chinese proverb: "Never criticize a man until you have walked a mile in his shoes.  That way you are a mile away when you voice your criticism; and you have his shoes."  I invited the audience to see the world briefly as it looks through neo-liberal eyes.  These were the points I made.

1.  Value is in the mind, not within objects.

Value is not a property of objects or a quality they possess.  Although we talk of objects "having value," we mean that we value them.  Value is in the mind of the person contemplating the object, not in the object itself.  If value resided in things, it could theoretically be measured objectively and we would all agree on what it was.  There would then be no trade, for exchange takes place when each person values what the other person has more than they value what they are offering in exchange.  A trade gives each of them something they value more, and thus wealth is created by the exchange.  When people make the mistake of supposing that value resides in objects, they ask how it arrived there, and come up with fallacious ideas such as Marx's labour theory of value.  An object can take masses of labour to produce, but if no-one values it, it will be worthless.
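To make the point concrete, here is a purely illustrative two-person example (the numbers are invented).  Suppose Alice values her book at £5 and Bob's CD at £8, while Bob values his CD at £4 and Alice's book at £9.  If they swap:

$$\Delta W_{\text{Alice}} = £8 - £5 = £3, \qquad \Delta W_{\text{Bob}} = £9 - £4 = £5$$

Each is better off by their own valuation, and £8 of subjective wealth has been created without a single new object being produced.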

2.  Time must be factored into activities.

Time must be factored into economic transactions.  I could read a book today or read it tomorrow.  If I choose to read it tomorrow, I forego the pleasure of a day spent contemplating its wisdom and being stimulated by its insights.  I value the activity less if it is in the future.  If I postpone gratification I should be compensated for doing so.  There is a risk element, too, in that if I postpone a pleasure, circumstances may make it unavailable in future and I will have lost out by delaying my enjoyment.  From this notion that present pleasures are valued more than future ones comes the idea of interest, or being compensated for doing without present enjoyment in return for receiving more enjoyment later.  And from this arises the notion of investment, of setting aside funds that could bring gratification now, and using them instead to bring more gratification later.  This is the essence of capitalism, of using funds as capital, not to bring gratification now, but to increase future rewards.
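The logic of compensation for postponement is exactly what the standard present-value formula captures (the figures here are illustrative only):

$$PV = \frac{FV}{(1+r)^t}, \qquad \text{e.g.} \quad \frac{£110}{(1+0.10)^1} = £100$$

At a 10% annual rate of time preference, £110 a year from now is worth £100 today; the £10 difference is the interest that compensates the saver for doing without present enjoyment.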

3.  Imperfection abounds.

The world of human activity is not characterized by neatness and perfection.  It is not represented by simple, pure principles in action.  On the contrary, it is messy, and it changes from moment to moment.  In the real world there is no perfect competition and no perfect information.  People act rationally sometimes, and not at others.  They sometimes change their minds and behave differently.  Society is not perfectible, and nor is human nature.  People are motivated sometimes by worthy aims, and at other times they show less admirable traits.  This imperfection should be recognized and admitted so that it can be coped with, and so that ways might be found to minimize any baleful effects it might generate.

4.  Compare the present with the past rather than with an imagined and hypothetical future.

Free market economists tend to think the present is better than the past.  People live longer.  Life expectancy, which was about 30 years for millennia of human existence, is now 68, and greater still in advanced countries.  Deaths of mothers in childbirth are a tiny fraction of what they were just over a century ago, and child mortality in infancy is minimal compared to what it was.  Major diseases have been conquered, and fewer die of malnutrition than ever did.  Fewer live lives at or below subsistence, and more have access to healthcare and education than did in previous ages.  By many measures the present is better than the past.  Instead of comparing it with a hypothetical future of an imagined world, neo-liberals compare it with the past, inspect what has made it better, and try to do more of what has worked in order to make the future better still.

5.  The outcome of spontaneous interaction is better than a preconceived goal.

When people make choices, including economic ones, the outcome produced by those millions of interactions will contain more information and allow more different goals to be met than any brought about by planners thinking one up.  The planners are few and they dream up a future that satisfies their own aims.  Those freely interacting are many, and they act to produce a future that allows more of their own goals to be achieved.  The spontaneous society not only results from more minds and more information, it also reacts faster and is quicker to cope with crises.

6.  Poorer peoples become richer by creating wealth, not by redistributing it.

The world's wealth is not a fixed supply to be shared out according to some idea of what is just.  Wealth is created by exchange.  When people trade what they value less for what they value more, wealth is increased.  Development aid redistributes a little wealth, whereas trade creates a great deal of it.  No poor country has become rich through aid, and none has done it without trade.  To help poorer countries on that upward path, richer countries should open their markets and buy what they produce.

7.  Life before the industrial revolution was far from idyllic.

Although some conservatives and environmentalists have a rosy view of Britain's pre-industrial past, and praise what they call "the measured rhythm of rural life," the reality was of abject subsistence for most, accompanied by squalor, disease and early death.  Most people worked on the land from dawn till dusk, doing back-breaking work and living in primitive and filthy conditions.  Most had no margin, so bad harvests or severe winters could bring starvation.  The industrial revolution brought a step up in living standards.  What we now regard as poor and unsanitary housing was an improvement on the rural squalor workers left behind, and wages gradually enabled them to afford decent clothes, china dishes and household items.  The industrial revolution brought mass production and affordable items, and created the wealth that raised living standards.

8.  Economic growth is a good thing and there are no limits to it.

Economic growth brings the opportunities to satisfy our wants.  It enables us to pay for medical care and sanitation, and for education.  It funds art galleries, libraries and concert halls as well as meeting our material needs.  Although some critics say we will run out of resources, our ability to access new sources of them as technology advances is increasing faster than our use of them.  The same is also true of our development of substitutes.  Technology is becoming smarter, too, using fewer resources for its outputs.  Growth can continue to bring more opportunities and greater security to more and more people.

9.  Globalization has brought huge benefits to large numbers across the world.

Globalization has brought millions of people who previously lived fairly isolated, subsistence lives onto world markets.  They have been able to sell their labour and their produce to distant buyers and to join in the wealth-creation process that lifts people above the struggle for survival and into a world of greater opportunities.  Although some praise buying locally, it is buying globally that uses resources efficiently and enriches more people.  As Adam Smith said, we could make wine in Scotland very expensively, but if we buy cheaper French wines it leaves us money over to spend on other things.  Globalization has enriched, and is still enriching, the world.

10.  The world is not reducible.

The real world is immensely complex, ever moving, ever changing.  It cannot be reduced to a few simple equations.  When people attempt to do this, they have to simplify, and in doing so leave out essential information that makes it work as it does.  It is not only complex, but inherently unpredictable.  We do the best we can with limited information, and always with the knowledge that unforeseen events might alter our plans and thwart our intentions with unintended consequences.  The world is not like a clean-lined machine whose mechanism can be studied, but more like a messy mass of interactions whose outcomes are uncertain.  Fortunately, the world is fairly resilient and usually manages to cope eventually with the follies people put upon it.


Currency reform in Ancient Rome

In the Western world, modern civilizations are often thought of in comparison to those of the ancient world. The Roman Empire is typically the first considered, and arguably the most natural reference point owing to its many achievements, complexity and durability. It stands in history, widely considered the high water mark of the ancient world; one against which contemporary political, economic and social questions can be posed. Much of the world is still living with the consequences of Roman policy choices in a very real sense, in matters ranging from the location of cities to commercial and legal practices to customs.

The global economic downturn of 2008, in particular its monetary facet, readily invites comparison between the troubles of the modern world and those of the Roman Empire; just as Western currencies have declined precipitously in value since their commodity backing was removed in stages starting roughly a century ago, Roman currencies were also troubled, and present a cautionary tale.

The Roman coin in use through most of the empire was the denarius, which demonstrated a persistent decline in value, starting from the time of transition from Republic to Empire, and continuing until its decimation during the Crisis of the Third Century AD. Although efforts by Diocletian taken after the monetary collapse are commonly associated with Roman economic reform, there were other efforts by earlier, lesser known emperors that suddenly and unexpectedly improved the silver content and value of the denarius. Firsthand accounts and archeological findings provide sufficient detail to allow examination of these short, if noteworthy, periods of voluntary restorative policies – and their architects.

While popular interest typically fixes on such well known emperors as Julius Caesar, Nero, and Augustus, I seek to direct attention toward four lesser known emperors who undertook the improvement of the denarius.  These initiatives essentially constitute rare, temporary episodes of qualitative tightening, in contrast to the more common – and, in recent history, ubiquitous - policy of quantitative easing. The reformers were Domitian, Pertinax, Macrinus, and Severus Alexander. A necessarily concise summary of each one’s initiatives follows, with a brief review of the circumstances surrounding their administration and decisions.

Domitian (September 81 AD – September 96 AD)

The first noteworthy Roman currency reformer was Domitian, son of Vespasian and brother of Titus. He ascended in 81 AD, inheriting the problems associated with his forebears' costly projects. Vespasian had undertaken large-scale construction projects he thought necessary to repair the damage done to many structures during the recent civil wars.  In addition, he paid lofty financial incentives to regime-supporting historical writers and awarded pensions of up to 1,000 gold coins annually to a coterie of court intellectuals. Titus, his son and successor, was known for the initiation of lavish and elaborate games as well as for recompensing individuals struck by unexpected natural disasters, including the eruption of the volcano Vesuvius, the Great Fire of Rome, and the outbreak of war in Britannia. Together, Vespasian and Titus committed vast resources to the construction of the Coliseum; and, consequently, during the 12 years of their emperorship, the silver content of the denarius was reduced from roughly 94% to 90% purity.

Despite that precedent, “Domitian was apparently very sensitive to the importance of capital and the benefits of stability derived from a credible and dependable money supply.”[1] Thus early in his reign, in a “dramatic and entirely unexpected” move that coincided with the end of hostilities in Britain and Chatti (Germany), he fired the Empire’s financial secretary. Historians speculate that this was either because the secretary considered Domitian’s plans to improve the currency quality “unwise” or because he’d allowed such “slackness to permeate the mint” in the first place.[2] Shortly thereafter, “in 82 – 84 AD Domitian improved the silver standard, and older coins averaging 88 to 92 percent silver were reminted into purer denarii (98 percent fine)”.[3]
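The fiscal cost of such qualitative tightening can be sketched from the figures just quoted (a back-of-the-envelope calculation, assuming coin weight and the mint's silver stock held constant). Reminting coins of roughly 90% fineness into 98% fine denarii gives

$$N_{\text{new}} = N_{\text{old}} \times \frac{0.90}{0.98} \approx 0.92\,N_{\text{old}}$$

so every thousand old denarii melted down yielded only about 920 of the purer coins: the mint surrendered roughly 8% of its nominal money supply each time it tightened.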

However, Domitian’s currency improvement effort was short-lived due to renewed foreign warfare. Between 85 and 89, fighting in Africa, Dacia (Eastern Europe), and Chatti again broke out, such that the “purer coins had scarcely entered the marketplace when [he], in mid-85, pressed for money to pay war bills, again changed the standard, reducing it to 93 percent fine”.[4]

Domitian’s reign eventually devolved into tyranny and massive building projects echoing those of his brother and father, and culminated in a wealth confiscation edict so brutal that “it [became] fatal at th[e] time … to own a spacious house or an attractive property”.[5] In 96 AD, he was assassinated.

Pertinax (January 193 AD – March 193 AD)

For nearly a century after Domitian’s fall, the debasement of the denarius continued apace, and by 148 AD devaluations became ritually associated not only with the start of wars and public projects but with inaugural events and holidays as well.[6]

At the beginning of Domitian’s reign, money supply was 60% of what it had been in 40 [AD], and about 70% of this level at the end of his reign, a range maintained throughout the reigns of the Antonines, until, under Commodus, money supply reached 700-800% above [that] initial level.[7]

Commodus, whose twelve-year regime saw, among other things, the re-introduction of the Plebeian Games - a pricey, nearly month-long festival of religion, art and sports - and an incredible expenditure of state funds on a massive, megalomaniacal campaign of self-indulgent iconography, was succeeded by Pertinax - a man of "propriety [and] economy" - whose 86-day rule starkly depicts the considerable risk that currency reformers undertook - and perhaps still do.

[O]n the day of his accession, he resigned over to his wife and son his whole private fortune, that they might have no pretense to solicit favors at the expense of the state. He refused to flatter the former with the title of Augusta, or to corrupt the inexperienced youth … by the rank of Caesar … g[iving] him no assured prospect of the throne[.][8]

In addition,

[h]e forbad his name to be inscribed on any part of the imperial domains, insisting that they belonged not to him, but to the public. He melted the silver statues which had been raised to Commodus … s[elling] all his concubines, horses, arms, and other instruments of pleasure. With the money thus raised, he abolished all the taxes which Commodus had imposed.[9]

On the heels of that, Pertinax "carried out an extraordinary … coinage reform that returned the denarius to the standard of Vespasian."[10] It revalued the denarius from 74 to 87% silver by weight on new coins, several mintings of which were emblazoned with the motto "MENTI LAVDANDAE" ("noteworthy good sense") and, most significantly, featured not his or another emperor's visage but Ops, the Roman personification of wealth. At the same time, Pertinax embarked upon a fiscal rehabilitation program, the centerpiece of which was large budget cuts targeting the Roman military; he began by cancelling the customary bonus paid to soldiers by newly-seated emperors.

In a remarkably short time, Pertinax gained "the love and esteem of his people."[11] But his cuts to the military were deep and far-reaching, and his urge to settle disputes with enemies rather than fight them out enraged the soldiery. "A hasty zeal to reform the corrupted state … proved fatal" for him.[12] On the 86th day of his rule, his personal guard betrayed him and mutinied, gathering at the imperial domicile. Though other guards urged him to safety, "instead of flying, he boldly addressed them" - and fell beneath their swords.[13] Rome's "Age of Inflation" had thus begun.

Macrinus (April 217AD – June 218AD)

Gibbon describes Caracalla, who ruled for 19 years, as "the common enemy of mankind" for the incredible number of massacres and persecutions, as well as the economic destruction, which occurred during his tenure.[14] He devalued the denarius from 1.81 grams of silver to 1.66 and introduced a new coin, the antoninianus: ostensibly a "double denarius" but actually containing 2.6 grams of silver instead of the implied 3.3 grams. Additionally, he increased tax revenue by making all freemen in the Empire citizens, and commissioned the construction of a number of massive, exorbitant bathhouses. And, unsurprisingly, he raised the pay of the military, granted them new and expanded benefits, and launched a war against the Parthian Empire.
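The sleight of hand in the "double denarius" can be quantified directly from the figures above (the division is a simple check, not drawn from the sources):

$$\frac{2.6\ \text{g}}{2 \times 1.66\ \text{g}} = \frac{2.6}{3.32} \approx 0.78$$

so each antoninianus contained roughly 22% less silver than its face value of two denarii implied - a debasement concealed in the denomination rather than in the coin's appearance.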

Macrinus, a member of Caracalla's staff, became emperor after Caracalla's assassination - in which, by some accounts, he was a co-conspirator. From the start, he made clear his concern with "prioritiz[ing] public faith over the generation of a sufficient amount of cash" for the Roman state.[15] During his short reign, he made the conscious choice to raise the silver content of the denarius from Caracalla's debased 1.66 grams to above the level extant when Caracalla was installed - 1.82 grams - and demonstrated an inclination toward diplomacy over combat. When the Persians challenged the Roman army, Macrinus "tried to make peace with the Persian king" and "s[ent] back [their] prisoners of war voluntarily."[16] Consequently, he incurred the resentment of soldiers, and further enraged them by introducing a system that paid them according to rank and time in service.

In summary,

[t]he increased silver content was clearly beneficial for the state, as it would instill more confidence among its recipients and presumably still inflation … [but] the major problem, of course, was Macrinus’ attempt at military reform … [T]he army would not stand for a curtailment of privileges, even among new recruits. So while Macrinus’ plan was … fiscal responsibility in the state, the strength of the army was too great to allow for it … [and] paved the way for Macrinus’ downfall.[17]

Dissent soon erupted within the ranks, and military forces, in a coup, elevated the 15-year-old Elagabalus as the new emperor; a battle ensued between Elagabalus' supporters and Macrinus' loyalists. Macrinus' forces were routed, and he was captured and executed.

Severus Alexander (March 222AD – March 235AD)

The new Emperor Elagabalus presided until his 18th birthday, profoundly debasing the denarius and squandering monstrous sums from the public treasury. “No fouler…monster” wrote the poet Ausonius, “ever filled the imperial throne of Rome”.[18] After his assassination, his cousin Severus Alexander rose to power. And

[b]y the time that Alexander ascended the throne the question of the coinage, long acute, had become critical. Looking backwards one may see two centuries of fraud that the debasement of money had gradually but surely proceeded; in the future something little short of national bankruptcy awaited the Roman world unless measures were forthwith adopted to ward off [that] evil day.[19]

Yet in a different strategy from Domitian’s, Alexander initially reduced the silver content of the denarius from 1.41 grams to 1.30 grams, and some years later not only raised it back to the old standard, but far beyond that to 1.50 grams; a quality it had not seen in decades. He

restored the tarnished reputation of imperial money by improving the denarius and striking the first substantial numbers of brass sestertii and copper asses in a generation … they were well-engraved, struck on flans of traditional size and weight, and, as money, the equal of their more elegant ancestors.[20]

He reduced taxes and attempted through various means to end the “singular system of annihilating capital and ruining agriculture and industry [which] was so deeply rooted in the Roman administration”.[21] At the same time, though, he subsidized literature, art and science and socialized education.

When invaders from Gaul threatened the Empire, Severus Alexander attempted to buy them off rather than engage in a pitched, costly battle. This, once again, angered the legionnaires, who elevated General Maximinus as the new emperor. The military rebelled and, like Pertinax and Macrinus before him, Severus Alexander was executed.

Over the next three years, Maximinus doubled soldiers' pay and waged nearly continual warfare. Taxes were raised, with tax-collectors empowered to commit acts of violence against delinquent or reluctant payers, and to summarily confiscate the property of citizens in arrears.

Over the next five decades,

[e]mperors … debased the silver currency and raised taxes during what they perceived to be a temporary crisis, expecting windfalls of specie from victory, but war had changed from profitable conquest to a grim defense … The Roman world was treated to the spectacle of imperial mints annually churning out hundreds of millions of silver-clad antoniniani by recycling coins but a few years old [which] removed older coins from circulation and destroyed public confidence in imperial moneys.[22]

Characteristic of all monetary collapses, as the denarius rapidly withered into a billon trinket, Roman citizens developed odd, if essential, skills - the most noteworthy of which were extracting the thin silver coating from otherwise worthless coins and fluency in the social language of monetary failure: barter.

Epilogue

Comparing modern challenges and policy responses to those of remote times is an attractive but precarious enterprise: every generation, let alone culture and era, breathes a unique psychological oxygen. Nevertheless, in this case the exercise yields several potentially valuable insights.

First, what can we say about the reformers? Why did select figures, in an era admitting no formal economic theories and within which the interaction of supply and demand was attributed to superstitious causes or conspiracies, occasionally shore up their currency?

It is notable that all of the reform-minded emperors possessed germane backgrounds and experience: where the majority of Roman emperors had pre-ascension careers in politics or the military, Domitian grew up in a family known for banking; and despite the inglorious end of his incumbency (when maintaining power came to trump sense and experience) his initial

concern with finance, with a stable currency, and an awareness of reciprocity in business and trade dealings as demonstrated in instruments such as his Vine Edict, reflect his continuation of a tradition of financial sensibility, more in keeping with a business house, than with the traditions of [elites] valued by Senators and expected of Emperors.[23]

Most fascinatingly,

[w]hile still a Caesar, Domitian had published a work on coinage … which Pliny the Elder … had cited as a source.[24]

Macrinus was the first non-senatorial emperor, and had years of both financial training and experience; he served as the administrator of the massive Flaminian Road project. Later in his pre-political career, he was personally selected to manage the personal wealth of the imperial family under the emperors Caracalla and Geta. Pertinax had spent years in business as well as teaching, and his time as a merchant led him to the belief that "[e]conomy and industry … [were] the pure and genuine sources of wealth".[25]

Severus Alexander became emperor at 13. It is difficult to hypothesize as to what may have led him to enhance the denarius - and so strongly - but one may speculate that his closest advisors, his mother and grandmother, drew on years of experience of Roman booms and busts.

One also notes that each of the reformers came after particularly egregious debasers: Domitian after Vespasian and Titus; Pertinax after Commodus; Macrinus after Caracalla; and Severus Alexander after Elagabalus. It seems as if each deduced the connection between his predecessor’s profligate monetary (and, indeed, fiscal) policies and the consequent economic crisis, choosing to reverse the afflux.

Summarizing Macrinus’ efforts, but no doubt broadly applicable, is this synopsis:

The exact motivation … for coinage reform [efforts in the Roman Empire] is in general a little hazy … attempts at reforming the coinage standards could reflect the distrust that the … population had for the imperial currency … [a]nother possibility is that [the reformers] wanted to fit into a monetary tradition that was considered responsible … [or] felt a desire to distance themsel[ves] from the policies of [their] predecessor[s] … [as many] were a simple overturning of [prior emperors] destructive economic measures.[26]

In sum, were these episodes of qualitative tightening successful? Obviously, brief respites in the systematic debasement of the denarius delayed the eventual, yet perhaps inevitable, monetary emergency. More analytically, though, one hint lies in the archeological analysis of lost, donated, and hoarded coins, in that

coins lost casually on sites are equivalent to small change lost today [in that] more were lost as their numbers mounted and purchasing value plunged due to debasements … [C]asual losses and hoards, then, can document shifts in patterns of circulation both within and beyond imperial frontiers.[27]

The number and constitution of coin hoards reveal the public propensity to forestall consumption ("saving", in familiar parlance) or represent financial reserves hidden out of fear of future devaluations; in any event, they imply which coins were highly valued. Patterns of similar coins lost or donated, conversely, suggest which coins in circulation were valued appreciably less. While both Pertinax and Macrinus ruled briefly, and their programs were quickly reversed by their successors, under both Domitian and Severus Alexander (who ruled for 15 and 13 years, respectively) the number of discovered, archeologically-dated coin hoards skyrocketed over those dated to their immediate predecessors: from 3 to 10 and from 7 to 18, respectively. Similarly, coins of reduced quality are found with higher frequency at ritual offering sites (e.g., temples) and where losses were common (e.g., river crossing sites) than are those of contemporary circulating issues with higher purity.[28]

But the most forbidding commonality is the thread of continuity running through the fates of the monetary reformers:

Emperors who improved the purity of the denarius, [notably] Pertinax in 193 [and] Macrinus in 217 … found themselves outbid for the loyalties of the army, and … went down in ignominious defeat.[29]

And so we return to the present day. Owing to the deep entrenchment of government in daily life and, consequently, the politically incendiary nature of entitlements, attempts to underpin the value of any currency with a commodity are likely to be met with considerable resistance from the diffuse and deep-seated institutions and social groups benefitting from fiat currency systems. And yet, for the first time in well over a century, the question of what actually backs state-issued money has resurfaced as a political issue. Precious metals are seeing their greatest popular resurgence in decades, while, in tandem, interest in and usage of Bitcoin and other cryptocurrencies - precisely because of their irreproducibility and the consequent quantitative limitations - expand rapidly. This, perhaps, hints at a burgeoning shift in public awareness and sentiment, which may eventually translate into political pressure for a return to sound money, but real progress will likely be an uphill battle - with bouts of 'sticker shock' along the way. As the historian William Warrand Carlile wrote,

[a]ll through history we find that it is the reform, the return to sound money rather than the depreciation itself that first rouses popular discontent. It is only when the mass of the people learns that depreciations must be followed sooner or later by such remedies that they begin to entertain a salutary dread with regard to them.[30]

Perhaps the best conclusion that can be drawn from examining these instances is that in response to the familiar rhetorical query – “Are we going the way of the Romans?” – one can reply, truthfully: “No; they occasionally reformed their currency.”

Endnotes

[1] http://domitian-economicemperor.blogspot.com/2011/01/domitian-emperor-in-broader-economic.html

[2] Brian W. Jones, The Emperor Domitian (London: Routledge, 1992), 76.

[3] Kenneth W. Harl, Coinage in the Roman Economy, 300 B.C. to A.D. 700 (Baltimore: The Johns Hopkins Press, 1996), 14.

[4] Ibid.

[5] Jones, 77.

[6] Susan P. Mattern, Rome and the Enemy: Imperial Strategy in the Principate (Los Angeles: University of California Press), 141.

[7] http://domitian-economicemperor.blogspot.com/2011/01/domitian-emperor-in-broader-economic.html

[8] Edward Gibbon, The History of the Decline and Fall of the Roman Empire, Vol. I (London: W. Strahan, 1776), 101.

[9] John Platts, A New Universal Biography … of Eminent Persons (London: Sherwood, Gilbert and Piper, 1826), 122.

[10] Mattern, 140 – 141.

[11] Gibbon, 102.

[12] Gibbon, 103.

[13] Platts, 122.

[14] Gibbon, 139.

[15] Andrew Scott, “Change and Discontinuity within the Severus Dynasty: The Case of Macrinus”, a dissertation submitted to the Graduate School-New Brunswick, Rutgers, The State University of New Jersey, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Graduate Program in Classics, May 2008, 133.

[16] Henry Jewell Bassett, Macrinus and Diadumenian (Menasha: George Banta Publishing Co., 1920), 33.

[17] Scott, 135.

[18] Martijn Icks, The Crimes of Elagabalus: The Life and Legacy of Rome's Decadent Boy Emperor (London: I.B. Tauris & Co, 2011), 115.

[19] R. V. Nind Hopkins, The Life of Alexander Severus (Cambridge: University Press, 1907), 182.

[20] Harl, 128.

[21] Hopkins, 154.

[22] Harl, 132.

[23] http://domitian-economicemperor.blogspot.com/2011/01/domitian-emperor-in-broader-economic.html

[24] Ibid.

[25] Gibbon, 102 – 103.

[26] Scott, 129–133.

[27] Harl, 17 – 20.

[28] Richard Duncan-Jones, Money and Government in the Roman Empire (Cambridge: Cambridge University Press, 1994), 107.

[29] Harl, 126–127.

[30] William Warrand Carlile, The Evolution of Modern Money (London: Macmillan and Co., 1901), 100.

Thinkpieces, Uncategorized admin

Should we be concerned about the UK's current account deficit?

Every year since 1984 Britain has run a deficit on the current account of her Balance of Payments. In effect, this means that each year British businesses and consumers have bought more goods and services from abroad than they have sold overseas. But is this a problem?

In his repudiation of mercantilism, Adam Smith argued that exchange rates and world trade contain a self-correcting mechanism. A trade deficit precipitates an outflow of sterling, increasing its supply on the foreign exchanges and so naturally devaluing the currency. This in turn makes British goods and services more price competitive, which should return the trade balance to equilibrium.
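To make Smith's mechanism concrete, here is a deliberately crude toy simulation of that feedback loop - every parameter invented for illustration, not drawn from data: a deficit depresses sterling, a cheaper pound lifts export volumes and trims import volumes, and the balance converges back toward equilibrium.

```python
# A minimal sketch of Smith's self-correcting mechanism, with invented numbers.
# A trade deficit weakens sterling; a weaker pound makes exports cheaper and
# imports dearer, nudging the balance back toward zero.

def simulate(exports=95.0, imports=105.0, rate=1.00, years=15,
             rate_sensitivity=0.005, trade_elasticity=0.5):
    for year in range(1, years + 1):
        balance = exports - imports                # negative = trade deficit
        depreciation = rate_sensitivity * balance  # deficit -> sterling falls
        rate *= 1 + depreciation
        # A cheaper pound raises export volumes and trims import volumes.
        exports *= 1 - trade_elasticity * depreciation
        imports *= 1 + trade_elasticity * depreciation
        print(f"year {year:2d}: rate {rate:.3f}, balance {exports - imports:+.2f}")

simulate()  # the balance roughly halves each year, closing on equilibrium
```

In this toy world the correction is automatic; the point of what follows is that post-2008 Britain conspicuously failed to behave like this.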

However, since the 2008 recession, and despite a 25% fall in the value of sterling, exports have increased by just 5%. Britain has been unable to repeat the export boom that followed sterling's exit from the ERM in 1992, which fostered a strong recovery from the early-1990s recession.

On the one hand, the deficit stands at around 4.4% of GDP, and is therefore hardly an unmanageable anchor on the economy. Moreover, Britain has undergone significant restructuring since the end of the Second World War, away from manufacturing goods and toward exporting invisibles such as financial and education services. One effect of this has been that an economic upswing in the domestic economy tends to correlate with a widening trade deficit. British consumers and businesses have a considerable hunger for imports, and as they become wealthier and more prosperous, they buy goods from abroad at a faster rate than they can sell them overseas.

The import hunger is partly encouraged by the UK’s open and liberalised economy. Free trade pushes down the price of imports and provides domestic producers with the added pressure of foreign competition – a strong incentive to become more efficient. Firms such as Dyson, with a strong research and development base in the UK, tend to import manufactures from South East Asia due to the absolute cost advantages these countries offer. Really then, the alternative to the current arrangement would be a return to the 1970s – inefficient, over-manned industries dominating the landscape and import substitution promoted – a prospect at which most contemporaries would baulk.

Perhaps then a current account deficit is merely a reflection of the times we live in. Globalisation and the attendant fall in transport costs have allowed the free movement of goods across national boundaries; the economy is simply adjusting in line with David Ricardo’s theory of ‘comparative advantage.’ Indeed, the UK will not return to building battleships or manufacturing textiles because production costs simply won’t permit it. Far better that we produce the goods we are competitive at producing – in aerospace and pharmaceuticals – and focus on service exports – financial and legal.

The only caveat, perhaps, concerns the underlying message the persistent trade deficit is transmitting about Britain. A weak export base implies a fundamental lack of competitiveness in the economy. Productivity levels in the UK have been low by European standards for decades now. According to the ONS, in 2011 output per worker in the UK trailed the rest of the G7 by 21%. In effect, a British worker must work roughly a quarter as long again to produce the same amount of goods as his or her equivalent in France or Germany. This is a concern.
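A quick back-of-the-envelope check of that arithmetic (the 21% figure is the ONS's; the conversion into working time is mine): since the time required scales with the reciprocal of productivity, a 21% shortfall in output per worker implies roughly 27% more working time, not 21% less.

```python
# If output per worker is a fraction `gap` below the comparator, producing the
# same output takes 1 / (1 - gap) times as long.
gap = 0.21                       # ONS: UK output per worker 21% below rest of G7
extra_time = 1 / (1 - gap) - 1   # additional working time required
print(f"Extra working time required: {extra_time:.1%}")  # -> 26.6%
```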

The government’s investment in British industry – through cutting corporation tax, encouraging inward investment and developing tight, highly skilled labour markets – shows encouraging signs of progress in addressing these deficiencies in the UK’s competitiveness.

Thinkpieces admin

The EU: reform or redundancy

With UKIP set to increase further their share of the vote in the upcoming European elections, and Farage’s triumph over Nick Clegg in the televised debate late last month, the issue of EU membership is as heated and controversial as ever.  We are once more reminded of the strengths and limitations of a Common Market, and the anxieties the union has bred here in Britain. At its best, the European Union has the capacity to foster more harmonious economic relations, encouraging stronger trade, greater efficiency and stimulating investment. When one considers that 100 years ago, Europe was descending into four years of bloody warfare, the change couldn’t be starker. Indeed, regional development funds have been particularly positive, providing countries in Eastern Europe, Southern Italy and the Mediterranean with funds for infrastructure projects and capital spending. Meanwhile, promoting the principle of trade liberalisation has assisted in creating millions of jobs, and has allowed the Ricardian principle of ‘comparative advantage’ to flourish.

For Britain, the Union has provided a huge market for exports, with 500 million customers demanding goods and services on our doorstep. As is often cited by pro-Europeans, 50% of British exports are destined for the European Union. While undoubtedly some of our trade with non-EU countries is diverted by European tariffs, the opportunities available to UK exporters from membership have been strong. According to some measures this trade supports between 3.5 and 4 million jobs in the UK. However, all is not well in the European Union.

Here in Britain, there are frustrations over the lethargy the EU has shown in reforming and adapting. A large and bureaucratic European budget, to which Britain is a net contributor, is increasingly becoming a source of resentment.

The more mercantile aspects of the European Union are a concern too. By discriminating in favour of member countries – through subsidies and external tariffs – the EU effectively excludes non-members, particularly those in the developing world. The Common Agricultural Policy is perhaps the most tired aspect of the EU in this regard. Many of the subsidies made available through the CAP are directed towards wealthy landowners, who can buy entitlements. These landowners do not even have to farm the land to receive the subsidy.

Moreover, the global inequalities created by protectionism are shocking. The developed world injects around $300 billion every year into protecting agriculture, roughly six times the amount it spent on foreign aid in 2003. If we scrapped subsidies and tariffs on food, the subsequent expansion of world trade would do much to raise living standards for the developing world. This would be a sustainable way to alleviate suffering, cut food prices, and reduce trade distortion.

If the European Union does not adapt to the challenges it faces then the prospects going forward appear bleak. Reform must be guided by the principles of trade liberalisation, rolling back bureaucracy where it has become wasteful and meaningless, and driving competitiveness through higher investment and capital spending. These were the principles upon which the union was first established; they must not be neglected or forgotten.

Thinkpieces admin

By George

Tim Lai argues that George Osborne's deficit reduction plan has been painful, but the right choice given the alternatives. The Conservative party’s prospective Chancellor of the Exchequer, George Osborne, faced a series of problems going into the 2010 general election: a deep and protracted global economic slump following the 2008 credit crunch, paralysed money markets, a spiralling budget deficit and rapidly rising national debt.

The credit crunch arose from the United States’ sub-prime mortgage market.  From the mid-1990s, US government policy aimed to promote home ownership amongst middle and low-income groups.  Regulatory lending standards were reduced, enabling riskier loans to be made.  Separately, expansionary monetary policy after the 2001 dot-com crash kept US interest rates low from 2002 to 2004, encouraging borrowers into debt.  As rates ‘normalised’ between 2005 and 2007, mortgage costs rose, house prices softened, over-extended borrowers on highly-leveraged loans became exposed, and mortgage defaults multiplied.

Financial institutions in the globalised secondary mortgage market caught a cold.  Benign capital reserve regulations had allowed them to become over-exposed too, on the back of mortgage-backed securities and collateralized debt obligations funded by short-term inter-bank borrowing.  These complicated bundles of debt were rated as safe by virtue of their diversity, but they proved catastrophically vulnerable to wholesale decline.  Their value plummeted, leaving investment banks with unmanageable debt obligations and nowhere to go.  Unable to judge the survivability of existing or prospective counter-parties, they no longer dared lend to each other, and credit froze from August 2007, plunging the global finance sector into crisis.  Government intervention followed on a massive scale, including in the UK, with a coordinated international response to prevent complete market failure; few institutions were allowed to collapse (Lehman Brothers), but many were taken over by rivals (Bear Stearns), bailed out by tax-payers (RBS, Lloyds-TSB, Northern Rock, AIG) or fully nationalised (Bradford & Bingley).  Central banks also injected huge amounts of liquidity into the markets to restore the flow of credit, and they cut base rates to unprecedented levels – the Bank of England rate fell to a record low of 0.5% in March 2009.
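The failure of diversification deserves a moment's unpacking. Ratings treated defaults as largely independent, so the odds of many loans souring at once looked negligible; a single housing-market factor made them sour together. The Monte Carlo sketch below - all figures invented purely for illustration - shows a 'senior' stake in a 100-loan pool that is essentially impervious to independent defaults yet fails roughly one time in ten once a common slump drives them:

```python
import random

# Toy illustration of why 'diverse' bundles failed together (invented figures).
# A senior investor in a 100-loan pool loses only if more than 20 loans default.

def loss_rate(n_trials=20_000, n_loans=100, threshold=20, correlated=False):
    losses = 0
    for _ in range(n_trials):
        if correlated:
            # One common factor: a 1-in-10 housing slump lifts every loan's
            # default probability at the same time.
            p = 0.40 if random.random() < 0.10 else 0.02
        else:
            p = 0.05  # similar average risk, but each loan independent
        defaults = sum(random.random() < p for _ in range(n_loans))
        losses += defaults > threshold
    return losses / n_trials

print("independent defaults:", loss_rate())                  # effectively zero
print("correlated defaults: ", loss_rate(correlated=True))   # roughly 0.10
```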

Despite these large-scale responses, the crunch in inter-bank lending quickly spread to the real economy. Lenders retrenched, seized by institutional paralysis, reactionary risk aversion, and the need to repair balance sheets and meet new regulatory obligations to recapitalise.  Faced with this credit squeeze, general uncertainty, stock-market volatility akin to the Great Depression, fragile cash-flow, household debt, negative equity, a coincident spike in commodity and food prices (which peaked in Summer 2008), and a growing risk of bankruptcy or unemployment, businesses and consumers reined in as well.

Albeit for different reasons, the UK’s housing market had become dangerously inflated too, and became an important factor in the nation’s economic woes.  Interest rates never dipped as low as in the US, but had nonetheless been attractive to borrowers since the mid-1990s at around 6%, half the 1970s / 80s average.  Liberal lending criteria drew many into the net, tacitly encouraged by successive governments addicted to property-related revenues and the invigorating effect of apparently rising household wealth.  Struck by contagion and credit blight, the UK housing sector crashed (prices fell by 12% from April 2008 to December 2009), followed by domestic consumer demand (spending fell by 4% over the same period). The effect was compounded from early 2009 by events in the Euro-zone, where the banking crash spawned a sovereign debt crisis in some nations, whereby over-spending governments struggled to refinance their debt or bail out their banks.  These economies fared even less well than the UK, with knock-on effects upon trade.  On the supply side, surviving companies turned increasingly inward, delaying new investment or hiring, cutting costs and hoarding capital, with adverse consequences for productivity and output; UK GDP fell by 6.3% between Q1/2008 and Q2/2009.

Predictably, government revenues fell in the economic downturn (from 36.3% of GDP in 2006-7 to 34.5% in 2009-10), whilst expenditures rose sharply (from 44.2% to 51.6%).  Hence, in the run-up to the 6 May election, the budget deficit sat at 11.2% of GDP, the fourth largest amongst OECD countries – smaller than Greece’s and Ireland’s but larger than Spain’s, Portugal’s and Italy’s (the so-called PIIGS) – and the national debt sat at 71.3% of GDP, the highest level since WW2.

Amidst this, the sitting Labour government’s expansionary pre-crisis spending plans had assumed strong economic growth, without which deficit and debt would rise further.  In his April 2008 Budget, Alistair Darling, the Chancellor, was forecasting uninterrupted growth of 1¾–3% between 2008-2010, a rapid fall in inflation to 2% by 2009, a diminishing budget deficit of 2% of GDP by 2010, and national debt of less than 40% of GDP, also by 2010.  Six months later, he acknowledged the extent of the UK’s fiscal challenge, but remained committed to a publicly-funded Keynesian stimulus programme, alongside expansionary monetary policy (cf. quantitative easing), low interest rates and falling commodity prices, to mitigate the effects of the ever-deepening recession confronting him.  He explicitly deferred repairing the public finances to the medium-term.  In his last Pre-Budget Report, in November 2009, he conceded that the economy had, in fact, contracted by 4.75% that year; he predicted above-target inflation, projected an in-year deficit of £178 billion or 12.6% of GDP, of which three quarters was structural, and estimated that national debt would reach £1.3 trillion or 78% of GDP by 2014.  But he only committed to reining in government spending from 2011, and to halving the deficit over 4 years, with scant detail on how it would be achieved.

There was little here to worry the Keynesian devotee, for whom renewed growth would naturally close the deficit and pay down the debt.  But disquiet amongst others was multi-faceted: that bureaucratically driven capital expenditure would allocate resources inefficiently, distort markets and displace private sector activity; that a globalised economy would dissipate the effect of any demand-stimuli on British productivity; that the inevitable prospect of fiscal consolidation, including through higher taxes, would dissuade consumers and businesses from being significantly stimulated; that incurring further debt rather than convincingly addressing the public finances would undermine the UK’s credit-worthiness and compound the crisis with higher borrowing costs for the government and a deeply indebted electorate (interest on the national debt stood at 4% of GDP in 2009/10, the fifth largest item of public expenditure); that a cumulative total of £25 billion in stimulus measures – which was all that even Darling felt was affordable in the circumstances – would make little impact (cf the $940 billion spent by the US federal government, the effect of which is also disputed).  In short, whilst Darling’s ends may have been laudable, his Keynesian ways were questionable to many and the means he devoted were commonly held to be woefully inadequate.

Against this background, and in keeping with the Conservatives’ ‘small government’ instincts, Osborne adopted a more aggressive strategy that attacked the deficit immediately and aimed to eliminate it more quickly.  His headline targets were to reduce government spending from 47% of GDP in 2009-2010 to 41% in 2015-2016, cut borrowing from 11% to 2% and begin bringing public debt down.  A notable objective during the process was to preserve the confidence of bond markets and keep borrowing costs low.  Ahead of the election, the Conservative Party’s manifesto had committed to: macro-economic stability founded upon savings and investment; low interest rates; effective prudential supervision of the financial markets; and, at its core, a credible plan to eliminate the bulk of the structural budget deficit over the course of a single Parliament.  The substance emerged on 22 June, in the new Chancellor’s emergency budget, where he described an ambitious fiscal consolidation of 6.3% of GDP over four years, with more than three quarters coming from spending cuts and the balance from higher taxation.  He also announced most of the major muscle moves for his strategy:

• An immediate in-year cut of £6 billion had been announced soon after the election as a nod to the markets. He supplemented this (whilst also leaving most of Labour’s pre-existing measures in place) with a public sector pay freeze, an accelerated rise in the state pension age, the elimination of middle-class tax credits, a change from the Retail to the Consumer Price Index for calculating welfare benefits, better scrutiny of the Disability Living Allowance and restrictions on Housing Benefit.  In subsequent budgets, he cut Child Benefit for high-earners; increased public sector pension contributions; introduced ‘career average’ rather than final earnings as the basis for defined-benefit public sector pensions; placed an overall cap on welfare spending; and committed to running a balanced budget over an economic cycle.

• On taxation, he increased VAT from 17.5% to 20%, introduced a bank balance-sheet levy, created a 28% Capital Gains Tax band for higher-rate earners (up from 18%) and signalled reviews on tax indexation and financial dealings as sources of additional revenue.  Later, he also ramped up the Revenue’s anti-tax-avoidance operations.

• To support growth, he lifted the threshold for employers’ National Insurance contributions, began a phased reduction in corporation tax from 28% to 24% (later extended and accelerated to reach 20% in 2015-2016), took steps to increase the personal income tax allowance from £6,500 to £10,000 (later raised to £10,500 – a flagship Liberal Democrat measure adopted by the coalition government), resurrected a previous link between the basic pension and earnings, and declared support for various regional infrastructure projects.  This was followed by a suspension of above-inflation rises in petrol duty, the controversial reduction of Labour’s top rate of income tax from 50% to 45%, the introduction of ‘funding for lending’ and Help to Buy schemes aimed at encouraging banks to lend to small businesses and at nudging developers into building new housing, and limited relief from green levies for businesses.

• Finally, by way of oversight, he had already announced the creation of an independent Office for Budget Responsibility.  In the autumn of 2010, he also instigated an overhaul of the regulatory framework for the financial sector under the Bank of England, which gained wide-ranging powers for prudential regulation under its new Governor in 2013.

This extensive catalogue of measures went a long way to addressing the deficit over time, but a gap remained for government departments to bridge during the Comprehensive Spending Review that followed his emergency budget.  Having reaffirmed the party’s commitment to protect the health and overseas aid budgets, which represented almost 20% of all government spending, other areas faced eye-watering average cuts of 25% (up from 14% if nothing had been ring-fenced).  This was later ameliorated by other savings (notably from Child Benefit), but the average nonetheless remained at 19%.  In practice, the pain was unevenly spread, from 7.5% for Defence and 11% for Education, through 25% for the Home Office and Ministry of Justice, to over 60% for Communities & Local Government.

Osborne’s plan for fiscal consolidation was severe, and risky.  Many feared it would kill off a weak recovery, plunge the economy back into recession and prolong the nation’s woes.  Foremost amongst his critics was the Labour opposition.  But his approach did gain the endorsement of the money markets, sovereign rating agencies and international institutions (including, initially, the International Monetary Fund), all key audiences.  Subsequently, with a depreciating pound, stubborn inflation, rising unemployment, weak private sector investment, feeble productivity growth and, in 2011, the prospect of double-dip recession, he fought off strong pressure to rein back on his programme – and retrospective analysis later determined that a double-dip recession did not, in fact, occur.  By the same token, he resisted further fiscal tightening in early 2013, when Moody’s removed the UK’s AAA credit rating, and when it became clear he would miss his key targets, such that the deficit is not now expected to clear until 2018-2019, borrowing is forecast to remain above 2% of GDP until 2017-2018, and the national debt is unlikely to fall until 2016-2017.

But even as the UK’s credit-worthiness was downgraded, economic indicators began to improve, led by rising employment and followed by falling inflation, stronger growth and recovering house prices.  By May 2013, the deficit had, at least, stabilised.  A year later: unemployment is below 7% and record numbers have jobs; inflation is 1.6%, well below the Bank of England’s 2% target; incomes are rising faster than prices; the economy is expected to grow by 2.7% in 2014 (faster than any other major economy) and has all but reached its pre-crisis level; and the budget deficit is down by a third and falling.  In short, notwithstanding the delay in meeting his main targets, the ends to which Osborne committed himself in 2010 are largely realised or firmly in prospect.  But does that make his strategy the right one?

Amongst the foremost strengths of Osborne’s approach was that he maintained the confidence of the money markets and protected borrowing rates.  It is improbable that the UK would ever have been unable to sell its bonds, as those nations bailed out by the troika of the European Commission, European Central Bank and IMF were unable to do during the Eurozone crisis, but a loss of confidence would almost certainly have raised the cost of government and personal borrowing, swallowing up the Exchequer’s scarce resources and compounding the financial difficulties of individual debtors and struggling businesses (the UK’s 10 year bond yield fell from 4.28% in February 2010 to a record low of 1.38% in July 2012, and stood at around 2.67% in late April 2014; by contrast, Spain’s peaked at 7.6% in July 2012 and was 4.29% in April 2014; Greece’s peaked at 48.6% in March 2012 and stood at 8.6% in April 2014).  The creation of an independent OBR and the consolidation of responsibility for financial stability and prudential regulation under the BoE played to the same ‘confidence’ narrative by strengthening fiscal transparency and economic governance.  Theorists (including the OECD and IMF) would also look favourably upon Osborne’s spending cuts as a more effective way to close the budget deficit than higher taxes, and upon his increase in consumption tax as less distorting and constraining on growth than taxes on income, production or investment.  But both spending cuts and higher VAT are regressive and, politically, they demanded to be offset.  Raising the personal income tax allowance (and manipulating the higher rate threshold to limit the benefit to higher earners) did this, whilst also incentivising employment over welfare dependency.  So, too, eliminating middle-class allowances and introducing higher-rate capital gains tax.  Meanwhile, cutting corporation tax and employers’ National Insurance contributions, and moderating rises in fuel duty, will have supported private sector growth, albeit this effect was notably sluggish.  In summary, Osborne’s package of measures seems balanced and well calibrated to deliver effect without breaking voters’ endurance.  Whilst risky, it was also politically astute, underlining the Tories’ reputation for economic competence and forcing Labour toward fiscal conservatism in an attempt to shed their reputation for profligacy.

His approach has not been without its weaknesses, however.  Most notably, the commitment to ring-fence health spending has severely exacerbated cuts elsewhere and distorted service provision across the board.  It has also sheltered 20% of total government spending from the most rigorous examination.  This may have been a political concession the Tories had to make to gain office in 2010, but it was constraining and unlikely to be repeated in 2015.  Another commonly levelled criticism has been that universal pensioner benefits were left untouched.  In view of dramatically improved health and life expectancy, and the less wearing nature of most modern occupations, many have also argued that Osborne should have gone further with raising the state pension age.  But senior citizens are a growing constituency and the group most likely to vote.  They cannot easily be ignored and, indeed, the Tories even reiterated their expensive commitment to protect the value of state pensions via the ‘triple lock’, which assures indexation by the greatest of inflation, wages or 2.5%.  Elsewhere, Labour’s capital investment cuts, amounting to 1.5% of GDP, were left in place.  Granted, new and important capital spending was subsequently announced (much of it using private sector money) but, amongst all else, it could sensibly have been disbursed earlier, helping to create economically conducive conditions for the recovery.  In short, the government’s strategy has had its economic shortcomings, some of them quite serious, but most have been driven by political expedience or necessity; ever will it be thus.

The circumstances facing the government have also presented opportunities, some of which have been embraced more readily than others.  For example, the chance to refashion and shrink government has been clear, but seems largely to have been taken ad hoc, without any central or overarching examination of what the state is for.  The OBR forecasts that public sector employment will have shrunk by up to 1.1 million by 2018, spending has become more targeted and public services have been opened up to other providers; but this does not amount to a fundamental philosophical transformation.  A root-and-branch overhaul of the UK’s complicated and behaviour-distorting tax code is overdue – for example, to remove the increasingly artificial distinction between income tax and National Insurance, to enhance rewards for investment and production, and to eliminate the most harmful forms of economic rent.  Less punishing or stigmatising bankruptcy laws, akin to those of the United States, might encourage enterprise.  And an examination of the energy sector could more faithfully price carbon emissions, evaluate the cost and maturity of emerging green technologies and agree sensible bridging strategies around nuclear generation and shale oil and gas.

Nor should the threats Osborne’s strategy faced be underestimated.  It seems likely that excess productive capacity, scarce credit, general uncertainty, risk aversion, and turmoil in the Euro-zone export market did, indeed, serve to dampen private sector investment and to delay the substitution of public sector spending that Osborne’s plan required.  Faced with multiple challenges, banks also remained stubbornly reluctant to lend to business.  Most indicators came to point strongly in the right direction, but his timetable had already been compromised by this earlier sluggishness.  Inflation could have knocked things off track too; although fuelled by temporary phenomena, the cumulative and persistent effect could have forced an unwelcome rise in interest rates.  Instead, deflation became the greater threat.  Disappointing private-sector investment and a persistently fragile Euro-zone left exports weak, perpetuating a potentially destabilising trade deficit.  A resurgent housing market raised the fear of a new bubble, and saving remained unattractive.

In summary, whilst imperfect, Osborne’s deficit reduction plan withstands scrutiny.  At a time when others were reluctant to face (or publicly admit to) the economic challenge in prospect for the nation, he described it as he saw it and was clear on his objectives from the outset, with respect to government spending, borrowing and debt; he laid out his timetable; and he deployed a strategy to deliver, balancing savings against revenues and using effective measures that, in many instances, had reinforcing secondary effects.  But it is always difficult to prove cause and effect in real-world economics, where innumerable inter-dependent factors are at work, which cannot be evaluated in isolation.  Hence, the extent to which Osborne’s actions were the cause of the positive economic outcomes that followed will always be debatable.  But on the balance of rather extensive circumstantial evidence, it is reasonable to conclude that his strategy was successful.

Nor is it easy to contemplate alternatives in the absence of any meaningful counterfactual.  Some present President Obama’s stimulus package in the US as the sort of Keynesian response advocated by Labour.  But the effect of this federal spending was substantially diminished by the sharp cuts forced upon the states, most of which are required by law to maintain a balanced operating budget.  The US, with the world’s reserve currency, also enjoys very different borrowing terms in the bond markets to the UK.  At the opposite end of the spectrum, the much more severe austerity visited upon the PIIGS by the troika, in order to restore their fiscal credibility, is widely viewed as having prolonged recession and aggravated unemployment well beyond anything the UK endured (Greece, Spain and Italy remain in recession in May 2014, with Greek and Spanish unemployment still exceeding 25%).  In short, George Osborne’s flawed offering may have been about as good as it could reasonably have been.

Thinkpieces Preston Byrne

Bitcoin and the English Legal System, part III: a warm welcome to Cody Wilson

Preston Byrne, in the third of his "Bitcoin and the English Legal System" series, explains why cryptocurrency, technological advances notwithstanding, still cannot do without the law. A few days ago, I had the pleasure of speaking on a panel at the 2014 Liberty League Freedom Forum with City AM's Marc Sidwell, Big Brother Watch's Nick Pickles, and the authors Daniel Ben-Ami and Nick Harkaway; we discussed the implications of advanced technology for liberty. Seeing as most people are not as obsessed with blockchain-based technologies as I am, before the talk began I asked the attendees how many of them used cryptography - such as PGP/GPG or cryptocurrency - in day-to-day life.

Of 100-odd individuals present (perhaps fifty more stumbled in later, having overindulged during the previous night's festivities), only about six hands went up, underscoring a significant problem with technology-as-liberator: adoption. In a room full of activists who oppose state surveillance, only a handful had taken measures to protect themselves from it - measures which, it should be said, may be taken at nil cost. Just as we criticise our philosophical opponents on the political left for denying individual agency in favour of political action, which is rightly viewed as a "convoluted and roundabout" method of accomplishing individual goals, so too should we criticise our own continuing behaviour which makes this surveillance easier to conduct. Though, as the panel discussed, there is a general perception of a "technological arms race" between individuals on the one hand and states on the other, the best technology in the world is utterly useless if it is not employed.

We should nonetheless be grateful that the technology is there, developed and promoted by a handful of brilliant mathematical and political minds. One of these minds belongs to Cody Wilson, designer of the 'Liberator,' the world's first fully-3D-printed firearm (as well as designer of a number of 3D-printed components for the AR-15). More recently, Wilson has been working as a spokesperson for the "Dark Wallet" project, a collaboration of some of the world's leading cryptocurrency developers aimed at augmenting the functionality and independence of the Bitcoin blockchain, as well as adding trustless privacy features. The problem they seek to solve arises from a fundamental aspect of Bitcoin's design, viewed by some as a weakness: each bitcoin (or part thereof) is a chain of digital signatures, and though in aggregate the network behaves like a ledger, this particular feature renders all transactions public - and thus perfectly traceable back in time, all the way to an individual bitcoin's first creation. Because of this, a number of lawyers have crassly taken to calling bitcoins "prosecution futures," and indeed law enforcement has been able to make a number of arrests in the United States based on analyses of these records.
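To see why those records are so legible to investigators, consider a toy ledger - a drastic simplification of Bitcoin's actual data structures, for illustration only - in which each transaction names the transaction whose coin it spends. Walking that chain backwards recovers a coin's entire ownership history:

```python
# Toy public ledger (not Bitcoin's real format): every transaction records
# which earlier transaction it spends, so a coin's history is fully traceable.

LEDGER = {
    # txid: (txid_spent, new_holder)
    "tx0": (None,  "miner"),   # the coin is first created here
    "tx1": ("tx0", "alice"),
    "tx2": ("tx1", "bob"),
    "tx3": ("tx2", "carol"),
}

def trace(txid):
    """Yield a coin's ownership history, newest holder first."""
    while txid is not None:
        spent, holder = LEDGER[txid]
        yield txid, holder
        txid = spent

for txid, holder in trace("tx3"):
    print(f"{txid}: held by {holder}")  # walks back to the coin's creation
```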

Wilson will be speaking to the ASI this evening. Although I do not know exactly what he will say, I think it is fair to presume he will not endorse the expansion of industry cooperation with regulatory authorities. Indeed, "if Bitcoin represents anything to us," he has said, "it’s the ability to forbid the government." The unSystem group of which he is a member has expressed similar sentiments to those of the Freedom Forum panellists, referring also to the idea of an arms race, and arguing their work can "gain a new territory of freedom for several years." "We don't need to cooperate with control freaks," they add; "disobedience is the only way." It is a view with which I sympathise but, despite considerable admiration for their work, respectfully disagree.

When I was younger, it was all too easy to become frustrated with the intransigence of social democracy and the seemingly endless trampling of individual endeavour in the name of collective welfare this system legitimises. Given the widely-publicised abuses of state security apparatuses in democracies everywhere, it is perhaps easier still to look to technology to secure an advantage for liberty outside of legally permissible channels - even if that victory will be fleeting at best.

That notwithstanding, implementation of this technology in full compliance with the law, not civil disobedience, is the way forward. This is not to say that anonymity and privacy are unimportant. Clearly they are, and men like Cody Wilson draw much-needed attention to questions of state overreach at great personal risk to themselves. Where we diverge is that I am of the view that the proper means of accomplishing this change is through democratic consensus.

Bitcoin and its derivations are already challenge enough to state institutions, with their strong cryptography and decentralised character confounding all efforts at state control. No Act of Parliament, no court order, no standing army and arguably not even vast amounts of state-backed computing power are presently thought capable of taking the network offline on their own (at least, not for long).

While Bitcoin is the first cryptocurrency protocol, it will not be the last. Commercially, its most significant achievement is in outsourcing the element of discretion from the unilateral act of payment to an algorithm; industry cooperation with state authorities in respect of this aspect of the technology has resulted in favourable regulatory outcomes in the UK and the United States, with the consequence that hundreds of millions of dollars are flowing into the sector, and mainstream businesses large and small are beginning to enter it.

Successor platforms close to release will extend this functionality in respect of multilateral, two-way instructions, importing the cryptographic security of Bitcoin into self-regulating agreements and other communications. In theory, the range of proposed uses for these second-generation platforms is limitless: decentralised crowdfunding, frictionless microfinance, autonomous peer-to-peer banks, and even decentralised social networks have been proposed, all of which would be run by decentralised mining from which virtually anyone can profit.

Prudence demands restraint when extolling the potential of these platforms. However, the degree of investor and developer attention upon them suggests they may be deployed in practical roles rather sooner than we think; and just like Bitcoin, I suspect they will take many people by surprise.

This will have implications for conceptions of liberty. What could promote a culture of privacy more efficiently than incentivising households to put the world's most advanced cryptographic technology in their living rooms? What better way could there be to convince a man of the value of free enterprise than to allow him to hold his own commercial bank in the palm of his hand? What kind of world will we live in where a shopkeeper in Kibera can safely invest in a property development in Kensington at the push of a button, while paying no fees?

How then, with deployable personal capital at their very fingertips, will people view state interference in markets and human interactions in which, perhaps for the first time in human history, they have a stake of their own? I suspect they will view it very differently, and in a manner which has the potential to give rise to enduring societal change. But the technology must first get to this point, and prove useful, before any of this change will be realised.

I am grateful Mr. Wilson has agreed to speak to the Institute this evening; the world needs more people like him. But so too does it need transactional technology which empowers individuals, rich and poor alike, to easily deploy and accumulate capital, legally, safely, and internationally, so that they might use it in order to improve the quality of their lives.

Men have been campaigning for liberty, however they define it, within the confines of the law for hundreds of years. I for one am happy to continue doing so for at least a few more, and encourage the attendees of tonight's event to do the same.

Thinkpieces Mikko I Arevuo

Capitalism under siege

Mikko Arevuo, a senior lecturer in strategic management and Adam Smith Institute fellow, explains the moral foundations of capitalism and what is causing the current crisis of confidence in it.

Adam Smith taught us that self-interest leads an individual to seek the most “advantageous employment for whatever capital he can command.  It is his own advantage, indeed, and not that of the society, which he has in view.”  Adam Smith’s magnum opus, The Wealth of Nations (1776), or even more importantly his earlier work, The Theory of Moral Sentiments (1759), are no longer required reading in most management schools.  We have forgotten that Adam Smith was first and foremost a moral philosopher.

By omitting the moral foundations of Smith’s work, we have come to equate his concept of self-interest with a greedy disregard for others.  Nothing could be further from the truth: Smith’s self-interest is directly linked with the common good. Smith quickly clarifies the link between the pursuit of individual advantage and the societal benefit by adding:

It is his own advantage, indeed, and not only that of the society, which he has in view.  But the study of his own advantage naturally, or rather necessarily leads him to prefer that employment which is most advantageous to the society.

History has vindicated Smith’s fundamental insight. Capitalism has made the world richer and healthier than previous generations could have imagined.  An average person today lives in infinite luxury compared to the lives of our forebears, which Thomas Hobbes characterised in Leviathan (1651) as “poor, nasty, brutish, and short.”  According to the World Bank, in 1981 1.93 billion people lived on less than $1.25/day, an indicator for extreme poverty.  By 2008, the number of people in abject poverty had decreased by a third to 1.28 billion.

Why, despite the benefits of the capitalist economic system, is it under existential siege?  The explanation for the widespread anger is not difficult to find.  Over the last 30 years there have been serious dislocations which have been exacerbated by the recent economic crisis.  Millions of manufacturing jobs have been moved  from developed market economies to countries where labour costs are lower, and youth unemployment is alarmingly high, particularly in some of the European economies.  Income inequality has radically widened in the US and the UK, and market pressures and management compensation structures have led managers to focus on short-term profits rather than on long-term wealth creation.

Finally, capitalism is suffering from an ethical crisis.  According to the think-tank The Henry Jackson Initiative for Inclusive Capitalism, a broad-based acceptance of basic ethical norms is necessary if capitalism is to regain acceptance.  Otherwise, the system itself will become discredited and ultimately destroyed, whether by internal failures or external pressures.  To rebuild confidence in capitalism, there are two critical areas that need to be addressed: short-termism and business ethics.

Shareholder value maximisation became the norm in the 1980s. There is nothing wrong with maximising returns for risk-takers as long as it is based on the sound strategic principles of long-term value creation.  However, over the last 30 years investment markets have undergone a structural change that has caused capital and equity markets to become focused on short-term returns.  In the 1960s the average stock holding period in the UK and the US was approximately eight years.  Today, it is about four months.  It is difficult to call today’s equity holders risk investors in the traditional sense of the word – investors with an interest in the firm’s long-term survival.  In fact, modern investors are more akin to speculators seeking quick returns.

This structural change in investor behaviour has pressured management to create value on a short-term, and even quarterly, basis.  Companies become fearful that investors will divest at the slightest wobble in the share price or if the firm misses its quarterly performance expectations.  Under these conditions short-term fixes tend to win out over long-term goals. The situation is exacerbated further if management compensation is linked to short-term corporate performance.

Data is beginning to bear out the negative economic impact of short-termism.  There is increasing evidence linking short-termism with a long-term decline in corporate investment as a percentage of GDP.  Moreover, corporate investment activity tends to favour efficiency innovation that releases capital by reducing employment and cutting production costs.  What is particularly worrying is that the released capital is not invested in R&D, which could increase the rate of product innovation and thus have a positive impact on employment and economic growth.  Instead, as the emerging evidence points out, capital is used for share buy-back schemes that artificially increase share prices. The end result is increased volatility, inflated asset prices, and stagnating long-term growth and employment.

Economic policy initiatives such as investment tax credits can be designed to counteract management short-termism. However, these measures alone will not be sufficient to change the prevailing market behaviour.  A more effective means to encourage long-term investing is  for businesses to stop providing quarterly earnings guidance for equity investors.  Unilever, Merck, and GE have already shifted their investor guidance away from quarterly reporting to long-term performance indicators.  Other firms have started programmes that reward equity investors who hold their shares for a longer period of time.  There is also evidence that institutional investors such as pension and sovereign wealth funds, who collectively hold roughly 35 percent of the world’s financial assets, are beginning to redesign the performance and reward systems of their asset managers to reward long-term performance.

Combating short-termism is a start but it will not solve the crisis of confidence in capitalism overnight.  Although we know that people tend to behave better when they are evaluated over the long term, they will not behave ethically unless they work in an environment that fosters ethical behaviour.  We should remember that the behaviours that led to the near collapse of the global financial system in 2008 were, for the most part, not illegal.  The financial crisis was largely precipitated by questionable moral and ethical conduct.  The reaction to the economic crisis has been to increase the level of regulation.  However, the danger of regulation is that rather than asking “Should I do this?”, the question becomes “Can I do this?”

Regulation creates rules-based behaviour that essentially removes an individual’s moral responsibility.  If capitalism is going to survive in the future, we need to bring ethics back to the centre of commercial activity.  Adam Smith would surely agree that commerce cannot be separated from morality.  As we seek solutions to the 21st-century problems of capitalism, I will leave the last word to Adam Smith:

"When the happiness or misery of others depends in any respect upon our conduct, we dare not, as self-love might suggest to us, prefer the interests of one to that of many.  The man within immediately calls to us, that we value ourselves too much and other people too little, and that by doing so, we render ourselves the proper object of the contempt and indignation of our brethren.”

 

Thinkpieces Preston Byrne

Bitcoin and the English legal system, part II

Commercial lawyer and ASI Fellow Preston J. Byrne continues to explain why, despite the cries of his inner libertarian, more government involvement in Bitcoin would be a step forward for the cryptocurrency-cum-payment-system, rather than its end.

I should begin by thanking the numerous individuals who privately provided feedback on my proposition that cryptoledgers need law, and therefore the state.

I am pleased to report that the proposition was overwhelmingly opposed, with a few exceptions.

My position, however, remains unchanged. To set the scene for later discussions, I will provide the primary objections and my responses in outline:

1) Crypto-currency was designed to distribute power from the state and resources from the banks to individuals – what you propose undermines that idea.

I get this. Libertarians started cryptocurrency; this is our party. If this technology was created to get around the state, why invite it back in? Hell, why acknowledge the state at all?

The answer, of course, is a situation with which most libertarians will be familiar: other people have arrived at the party, and – not being as nerdy as we are – they don’t want to talk about politics. Early adopters thus need to start getting comfortable with some uncomfortable facts:

(a) The technology is open-source and the genie is out of the bottle. Anyone can use it and advance it for any purpose.

(b) Bigger players are exploring its potential. One cannot seriously expect banks and payment processors to roll over, surrender, and sacrifice their firstborn at the altar of Ludwig von Mises once a cryptoprotocol presents a threat to their business. Instead – if the technology is as good as its proponents claim –  they will integrate cryptoledgers into their operations and leverage their own resources against whatever “free,” distributed banking system rises to compete with them, as I suggest in Chapter 2 of Tim Swanson’s Great Chain of Numbers. Consumers will benefit as a result.

(c) The lack of a comprehensive legal framework is currently preventing these new actors – and the innovations they might create – from entering the ecosystem. Consequently, the law is coming for cryptocurrency; the technology may be said to be a victim of its own success. While we remain free to flout this process,* we are powerless to stop it.

Whether we like it or not.

This is a thoroughly Austrian state of affairs; it is therefore in our interests to exercise influence rather than deny it is taking place. Plus, if Bitcoin does everything some say it can, this shouldn’t be a problem for those who want to get around the law – it’s distributed and pseudonymous, right?

Maybe.** Without a doubt, Bitcoin – used as intended – doesn’t need the law to be economically effective as a mechanism to store and transfer value. The experience of the last year proves it. Whether this position is commercially practicable is another matter; whether the same will be applicable to Bitcoin’s cryptoledger successors is another still. The law will be written for them. Of necessity, though, it will apply to all.

2) Corporate blockchains? GOVERNMENT blockchains!?!!11one? That’s insane. You’re deliberately crippling the technology!

Correct. This is no bad thing. To say reining in a powerful technology for commercial applications is “crippling” is like suggesting that we’re “crippling” America’s strategic nuclear arsenal by using fissionable isotopes for radiotherapy. Derivative products will change the commercial landscape, for sure, but will do nothing to dilute the potency of the original. They may even improve it, such as the MasterCoin and Colored Coins projects propose to do.

A distributed, pseudonymous/anonymous, public blockchain is fantastic for a revolution but useless to a corporation. The active cryptocurrency development community is minuscule, with individuals numbering in the hundreds, if that. If we are right about crypto’s potential, a future is coming where many blockchains – private, public, regulated, unregulated, or even state-sponsored, all serving different functions – will exist.

To a small extent, that future is already here: there are hundreds, if not thousands, of scrypt blockchains in existence thanks to automated crypto generators (my old university dinner club has even mooted creating one to reward the numerous heroic deeds its members regularly perform). A good friend of mine argues that most of these are Bitcoin/Litecoin clones, and do not represent a genuine improvement of the technology (arguing that in some cases, e.g. with Dogecoin’s one-minute confirmation/block time, these “improvements” present significant security risks). I agree with this view.

He argues, however, that this means it would be prudent to unite behind one market-leading technology – Bitcoin – and take it from there. I do not agree with this view, for both political and practical reasons. The Bitcoin protocol has only been in existence for four years, has a number of non-fatal flaws, and only for twelve months has anything approaching serious attention been paid to it.

Innovation is coming; there will be market demand for regulation to provide additional stability for these new products. The more expeditiously the UK establishes a legal framework for cryptoledgers’ use, the faster UK businesses will be able to benefit from them and outcompete businesses in other jurisdictions. Additionally, the more this technology enters the mainstream, the more legitimate I imagine its unregulated applications will appear to the general public – and the greater its potential will be for changing their views on how government should function.

Plus, we’re libertarians. Who are we to say anyone can’t use this technology, in any way they wish, to any end they can imagine?

3) There can be only one!

Don’t get me started.

4) “‘What I’ll be discussing in coming months.’ Hitchcockian master of suspense, you are!”

Writing for the Adam Smith Institute does not put a roof over my head – it’s a ‘nights and weekends’ thing and I have to prioritise. To ease your anxiety, my next post will deal with the practical benefits of a polycentric, rather than fully decentralised, blockchain for smart property transfers.

* This is not, in my view, a good idea. I’m reminded of Hobbes – “Fear and liberty are consistent: as when a man throweth his goods into the sea for fear the ship should sink, he doth it nevertheless very willingly.” The technology has the potential to change how people calculate these potential risks, but as recent criminal prosecutions initiated in the United States show, Leviathan’s reach is long indeed – and cryptocurrency transactions are no exception. Those who flout the law do so at their peril, as ever.

** Ibid.
