Tim Worstall

The government just lowered women's wages

It's entirely true that this will have only a marginal effect, but the effect will exist, and it will also be in conflict with other expressed desires of our rulers. This will indeed lower women's wages:

Desk fans should be introduced in every workplace to help women through the menopause, a new government report has urged.

Firms must also provide non-synthetic uniforms, access to natural light, places to rest, special absence policies and cold water fountains.

If there is some extra cost imposed upon the employment of a specific group of people then that cost will end up as a reduction in the wages on offer to that group of people.

For example, employers' national insurance falls upon wages, as we all know. For employers look at total compensation costs, not wages, when deciding upon hiring. How that compensation is split between taxation and wages doesn't much worry the employer; it is the total that does.
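
The incidence logic can be put as a toy sketch; every number here is invented for illustration:

```python
# Toy model of incidence: the employer fixes total compensation per hire,
# so any extra per-group employment cost comes out of the wage on offer.
def wage_offer(total_compensation, employment_costs):
    """Wage left over once non-wage employment costs are paid."""
    return total_compensation - employment_costs

base = wage_offer(30_000, employment_costs=3_000)        # no extra requirements
with_rules = wage_offer(30_000, employment_costs=3_500)  # e.g. new workplace rules

assert base - with_rules == 500  # the extra cost lands on the wage
```

The employer's decision variable is the 30,000, not either wage; that is the whole argument.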

This report is urging that there be extra costs imposed upon employing menopausal women. Therefore those wages will be lower. And, of course, we are all being told that closing that gender pay gap is a major preoccupation of public policy.

So, well done there. It's almost as if one part of government has no clue about all the other parts. Even, that the world is a complicated place which is impossible to plan, govern?

Read More
Ben Southwood

A neoliberal framework for intellectual property

When I was younger I was very libertarian, and like many libertarians I was very sceptical of intellectual property. It might seem strange to a non-libertarian—libertarians love property rights!—but it's obvious to a libertarian. Property rights over your body, your land, your house and your tools are in direct conflict with intellectual property: if someone has a right to control how an idea is used, it prevents you from using the things you "really" own in ways that you like.

If Apple has a right to the Apple logo, I can't draw it on my house or car and sell stuff out of them. If Apple has a right over using a type of glass in phones I can't use my factory, my machine tools, my raw materials and indeed my hands and thoughts in ways I very well might want to.

I was convinced by the elegance of Roderick Long's argument:

Information is not a concrete thing an individual can control; it exists in other people's minds and other people's property, and over these the originator has no legitimate sovereignty. You cannot own information without owning other people.

Suppose I write a poem, and you read it and memorize it. By memorizing it, you have in effect created a "software" duplicate of the poem to be stored in your brain. But clearly I can claim no rights over that copy so long as you remain a free and autonomous individual. That copy in your head is yours and no one else's.

But now suppose you proceed to transcribe my poem, to make a "hard copy" of the information stored in your brain. The materials you use — pen and ink — are your own property. The information template which you used — that is, the stored memory of the poem — is also your own property. So how can the hard copy you produce from these materials be anything but yours to publish, sell, adapt, or otherwise treat as you please?

But I've changed my mind. The reason regular property rights are good is not because we have a fundamental moral right to sovereignty over certain objects. Robert Nozick is wrong that "mixing labour" with things makes them morally yours in a way that other considerations can never trump. In fact the reason that property rights are good institutions is that they make us happier and freer, and that they have good consequences: rich societies where individuals feel autonomous under a rule of law.

Though the two sorts of rights conflict, the justification for both is closely analogous. Monopolies generate investment. If fields are owned in common, they produce a lot less. Most people are somewhat selfish, and do not improve fields when they stand to benefit only very little from each marginal improvement. A field that will feed 10 would feed 100 or 1,000 if separated into many privately owned plots. Indeed: an individual can feed themselves off far less land if they own it exclusively than the share they effectively use when it is part of the commons.

Some restrictions on property rights are good. Internet libertarians have arguments over not just redistribution, but even simple questions like whether it's okay to break into someone's mountain hut to get shelter in a blizzard. It's obvious that some restrictions on property rights make the world better. This approach accepts that naturally: property rights are there for human flourishing, and rule-of-law systems build some beneficial restrictions into them.

Things are similar for ideas. If you give people monopoly control of their idea then they may produce—or share—more ideas. If an idea is genuinely new, then its being produced or shared with you makes you better off and freer. It's all well and good to say I am restricted by not being able to make iPhones—but would I really have been able to make them without Apple?

The trade-off is follow-on innovation. Yes, patents may promote innovation. But they also restrict it: you cannot freely improve on the ideas of others if they have patented them. Their patent may encompass uses that you would have come up with, or propagated, but which they never discover or make use of.

But patents also promote follow-on innovation. Isaac Newton discovered the calculus but did not share his discovery for years. When you register a patent you get exclusive rights, but you must also bring the idea into the public domain. Without patents firms would have an incentive to be extremely secretive and keep crucial ingredients from the scientific and research community.

It might also matter what level you are at. Instituting short, clear, restrictive patents may increase innovation, but expanding these into long and fuzzy rights may reduce it. This is the famous Tabarrok Curve. And it may matter what our alternatives are. Even if patents work well, innovation prizes may work similarly with fewer drawbacks and restrictions on freedom.

A neoliberal approach to IP recognises that it may be a necessary evil—but it may not, and we might have too much or too little of it, or be doing it in the wrong ways. This is a question that has to be answered empirically.

Read More
Tim Worstall

Jacinda Ardern and asking women about their childbearing in job interviews

Yes, yes, we know. 

The very idea that you might ask a woman, in a manner you wouldn't a man, about her childbearing plans in a job interview. This coming to a certain prominence as Jacinda Ardern, the Kiwi woman about to lose a NZ election as Jezza did here, was asked what no one would ever dream of asking Mr. Corbyn.

A rising political star in New Zealand received a prominent new role — and was immediately asked whether or not she plans to have children.

Jacinda Ardern, who was appointed the leader of the Labour party on Tuesday, has less than two months before the next round of elections. She's the youngest New Zealand Labour leader ever, the BBC reports.

But enough about that. Multiple men wanted to know: What about her uterus?

Given our age, experience of this life and general wonkiness - even if personal experience might be in the slightest bit lacking - we're rather sure that more of the physique is involved than that. Still, there's a point that really does need to be made here:

Many viewers did not find the question so congenial. At the risk of stating the obvious, male politicians in their late 30s are not typically asked whether they're sacrificing their dreams of a family for their dream career.

The double standard is closely tied to misogynistic assumptions about parenting and ambition. And that's completely aside from questions of rudeness, or the fact that a person without children might be making a choice or struggling with infertility.

That's certainly one way to do it. Whether you think that's the best way is up to you:

"If you're the employer at a company, you need to know that type of thing from the women you're employing, because legally you have to give them maternity leave, so therefore the question is is it OK for a [prime minister] to take maternity leave while in office?"

For the record: In New Zealand (and the U.S.), it is illegal to discriminate against an employee because of current or planned pregnancies, and employers are advised to avoid asking that question altogether.

The basic lesson of economics is that there are no solutions, only trade-offs.

Start with that basic fact. We do indeed insist that - as we should perhaps - employers provide time off for those employees of theirs who give birth. In the UK system that employer must carry some (a small part, 10% last time we looked) of the wages paid during that period of maternity leave. They must also carry the disruption and costs of getting someone else to do that job for that time.

OK.

The price, that is the wages, on offer to someone who is known to be about to do that will be lower than to someone who is known to be not about to do that. No, do not demand that this should not be so, it is going to be so. Reality does not accord with your thoughts upon how the universe should be.

Now institute a system in which it is not permissible to ask about this.

OK.

What happens now? Anyone who might do this is now made that lower offer, perhaps tempered by the likelihood of it happening. Our ban on the asking has moved that lower value from those who actually have that lower value to all of those who might possibly have that lower value.

That is, the lower wages resultant from the costs of maternity leave are now applied to all women who might possibly take maternity leave. Or, looked at the other way, there really is a gender pay gap for all women of likely child-bearing age - these days from around 30 up to around the mid-40s. For never-married women with no children at and past that age there is no gender pay gap; in fact there's usually a small pay premium of 1% or so. For those who do have children there's a significant motherhood pay gap of about 9% per child.
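
The pooling effect can be sketched the same way, with invented numbers:

```python
# If employers may ask, the leave cost is priced only into offers to those
# who will take leave; if they may not, the *expected* cost is priced into
# every offer made to the group. All figures here are hypothetical.
def separating_offer(base_wage, leave_cost, will_take_leave):
    return base_wage - (leave_cost if will_take_leave else 0)

def pooled_offer(base_wage, leave_cost, probability_of_leave):
    return base_wage - probability_of_leave * leave_cost

# Assume a £2,000 employer cost of maternity leave and 40% take-up.
mother = separating_offer(30_000, 2_000, will_take_leave=True)       # 28,000
non_mother = separating_offer(30_000, 2_000, will_take_leave=False)  # 30,000
everyone = pooled_offer(30_000, 2_000, probability_of_leave=0.4)     # 29,200

assert everyone < non_mother  # those who will never take leave still pay
```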

Why do women who have no desire for, or are unable to have, children suffer that gap? Because they're not allowed to point it out, and employers are not allowed to take it into account.

Which is where that trade-offs thing comes into play. If those who do not want children could say so, and employers could take account of it, then that mothers' pay gap would not apply to non-mothers. It is possible to imagine people signing off on an agreement to forgo the maternity benefits if they change their mind.

No, we do not say that would be a better world. Nor that this one is either. We are simply insisting that they are different worlds and one precludes the other. A system which insists employers take no account, cannot do so, of child bearing desires is different from one where they can. The cost of having the world we do is the loss of the other.

It's useful to mull whether we've got the right deal here too. So, mull.....

Read More
Daniel Pryor

How Should We Tax Legal Cannabis?

If we legalise weed, how should we tax it? A new working paper on the taxation of recreational marijuana in Washington State has a number of important insights. The study—released by researchers from the University of Oregon—focuses on the effects of Washington State’s unexpected 2015 switch from a 25% gross receipts tax (collected at every step in the supply chain) to a single 37% excise tax at retail. Three findings particularly stand out:

1) Gross receipts taxes on cannabis are inferior to excise taxes at retail.

The study finds that the tax change was roughly revenue-neutral, but the previous tax regime “discouraged otherwise efficient trades between cultivators and processors, thus creating deadweight loss.” The mechanism by which this occurred was the incentivization of inefficient vertically-integrated transactions: “an inventory lot of marijuana is considered ‘vertically integrated’ marijuana if it was cultivated and processed by the same firm.”

Put simply, if a transaction tax is levied at each stage of the cannabis production process, firms are encouraged to do everything in-house, even though this might not be the most efficient solution overall. This inefficiency is illustrated in the table below, which adjusts for the fact that “it takes roughly six weeks after processors purchase raw material from cultivators before the resulting products are sold to retailers”. After the tax change, “the fraction of vertically integrated sales [fell] by 3.7 percent after the adjustment period (Column 1), which [was] driven by a 42 percent long run increase in non-vertically produced marijuana sold (Column 2)”:
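
The incentive can be shown with a hypothetical three-stage chain: the prices are made up, though the tax rates are Washington's.

```python
# Hypothetical chain: cultivator sells at 100, processor at 200, retail at 300.
GROSS_RECEIPTS, RETAIL_EXCISE = 0.25, 0.37
sales = {"cultivator": 100, "processor": 200, "retail": 300}

def gross_receipts_tax(integrated):
    # An integrated cultivator-processor has no taxable first sale.
    taxed = ["processor", "retail"] if integrated else list(sales)
    return GROSS_RECEIPTS * sum(sales[stage] for stage in taxed)

def retail_excise_tax():
    # A single excise at retail ignores how the chain is organised.
    return RETAIL_EXCISE * sales["retail"]

# The old regime rewards vertical integration; the new one is neutral.
assert gross_receipts_tax(integrated=True) < gross_receipts_tax(integrated=False)
assert round(retail_excise_tax(), 2) == 111.0
```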

Vertical integration can provide a number of useful advantages for cannabis firms, but there are also potential disadvantages. Although vertically integrated transactions continued to dominate the market after the tax change, the significant shift towards non-vertically integrated transactions provided efficiency gains. The study was careful to include new entrants to the market post-tax reform, since few incumbents would de-integrate: they would already have paid the fixed costs associated with vertical integration.

2) Mandatory vertical integration in the cannabis industry may reduce market efficiency.

Since a move away from incentivizing vertical integration led to efficiency improvements, it follows that “requiring vertical integration, as Colorado does, will decrease market efficiency.” Those in favour of cannabis legalisation in the UK should make sure it does not repeat the mistakes of Colorado in this regard. Arguments in favour of mandating vertical integration centre on the idea of easing the burden on regulators, who would only have to deal with one firm instead of several. But it seems implausible that the gains from making things easier for regulators would outweigh the efficiency losses from forced vertical integration.

3) Many U.S. states have set their cannabis taxation levels significantly below revenue-maximizing levels, although this is probably a good thing.

Another finding from the study was that Washington’s comparatively high levels of cannabis taxation were “close to the peak of the Laffer curve.” This is because the authors’ estimates suggested the “medium-run response to a price increase [in cannabis] is elastic.”

In other words, there’s more state revenue on the table for places with lower levels of cannabis taxation. Does this mean that cannabis taxes should be set at Washington’s high levels? Not quite:

While our results suggest significant tax revenue may be left on the table in many jurisdictions, evaluating the impact of marijuana policy (and constructing optimal policy) in a broader social sense requires additional considerations. For one, the public health externalities of marijuana consumption are not well established. Nor is the relationship between legal marijuana consumption and the consumption of other ‘sin’ goods such as alcohol or tobacco. If it is indeed true, as many advocates claim, that marijuana consumption is ‘better’ in a public health sense than alcohol or tobacco consumption, the optimal regulation of marijuana should be designed to take into account responses in these other markets as well.
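
The elasticity point is just the Laffer curve: with elastic demand, revenue has an interior peak. A toy sketch with an assumed constant-elasticity demand curve, not the paper's estimates:

```python
# Toy Laffer curve: revenue = rate x pre-tax price x quantity demanded,
# with an assumed constant-elasticity demand curve (elasticity 1.5).
def revenue(tax_rate, pre_tax_price=10.0, elasticity=1.5):
    consumer_price = pre_tax_price * (1 + tax_rate)
    quantity = consumer_price ** -elasticity  # elastic demand, scale = 1
    return tax_rate * pre_tax_price * quantity

rates = [i / 100 for i in range(1, 300)]
peak = max(rates, key=revenue)

# With elasticity 1.5 the revenue-maximising rate is interior (t = 2 here):
assert abs(peak - 2.0) < 0.05
# Pushing the rate past the peak loses revenue:
assert revenue(2.9) < revenue(peak)
```

Below the peak there really is "revenue on the table"; the quoted caveats are about whether collecting it is worth the other costs.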

The authors of the study do not mention another problem with setting revenue-maximisation as the goal of cannabis taxation policy: potentially preserving the black market in cannabis. As the Adam Smith Institute and Volteface pointed out in The Tide Effect, our report on cannabis legalisation last year:

Revenue from taxation of the legal market will benefit the Treasury, although this benefit must be secondary to ensuring the legal market is placed at a competitive advantage to the illicit alternative.

My colleague Sam Bowman has previously argued that “cannabis will be seen as the test case for further drug reform,” and a poorly designed taxation system will undermine the change in consumption patterns that provides the rationale for legalisation.

Finding the best cannabis taxation policy may not be the most exciting part of legalisation efforts, but getting it right is crucial if we are to ensure that the goals of harm-reduction and market efficiency are met.

Read More
Tim Worstall

Obviously we want unilateral free trade in agriculture but not just there, equally obviously

A good little report out from Policy Exchange makes the same point that we've been making here. Unilateral free trade is a good thing, Brexit allows us to have it again, therefore we should have unilateral free trade post-Brexit. The only slight flaw with this report is that it concentrates upon agriculture rather than adamantly insisting, as we do, that the logic covers everything - not that that's a vice; we, as usual, are just doing a little more of that table thumping.

The Common Agricultural Policy has, at great expense, reduced agricultural productivity by lessening competition and supporting inefficient farmers, and increased costs for consumers. Outside the EU, the UK will be free to abolish tariffs on food products, which will unlock new trade deals, help developing countries and deliver cheaper food for consumers. We can also reform the agricultural subsidies regime so that we reward farmers who deliver public goods like biodiversity and flood prevention, rather than rewarding wealthy landowners.

Policy Exchange recommends that:

After leaving the EU Customs Union, the UK should unilaterally phase out tariffs that increase consumer food prices and complicate new trade deals.

Yes, quite so, why don't we all eat from that cornucopia of the world's food markets? 

We would, and we have here and elsewhere, go further and offer the design of the perfect trade deal:

1. There will be no tariff or non-tariff barriers on imports into the UK.

2. Imports will be regulated in exactly the same manner as domestic production.

3. You can do what you like.

4. Err, that’s it.

Now that we've solved the entirety of Britain's trade stance before breakfast we'll get on with the more difficult things later in the day.

Read More
Sam Dumitriu

Wal-Mart: A progressive success story...

I stumbled upon a fantastic paper the other day from Jason Furman, who served as Chair of the Council of Economic Advisers under Barack Obama (H/T Matt Yglesias at Vox).

In 'Wal-Mart: A Progressive Success Story' Furman defended Wal-Mart against its left-wing critics, arguing that the supermarket chain didn't benefit from corporate welfare and raised real wages.

It's a fun paper and its arguments stretch beyond Wal-Mart. They could easily apply to gig economy firms that have expanded low-paid work and lowered prices at the same time, as well as to recent debates over whether tax credits are a form of corporate welfare (they're not).

Here are some of the best bits.

On prices:

"The most careful economic estimate of the benefits of lower prices and the increased variety of retail establishments is in a paper by MIT economist Jerry Hausman and Ephraim Leibtag (neither researcher received support from Wal-Mart). They estimated that the direct benefit of lower prices at superstores, mass merchandisers and club stores (including but not limited to Wal-Mart) made consumers better off by the equivalent of 20.2 percent of food spending. In addition, the indirect benefit of lower prices at competing supermarkets was worth another 4.8 percent of income. In total, the existence of big box stores makes consumers better off by the equivalent of 25 percent of annual food spending. That is the equivalent of an additional $782 per household in 2003.

"Because moderate-income families spend a higher percentage of their incomes on food than upper-income families, these benefits are distributed very progressively."

On wages:

"The one study that was published in a peer-reviewed economics journal found that “Wal-Mart entry [in a county] increases retail employment by 100 jobs in the year of entry. Half of this gain disappears over the next five years as other retail establishments exit and contract, leaving a long-run statistically significant net gain of 50 jobs.” The paper also found a small negative impact on jobs at wholesalers “due to Wal-Mart’s vertical integration” and no statistically significant effect on other industries.

...

"Neumark et al. and another paper by Dube, Barry Eidlin and Bill Lester also studied the impact of Wal-Mart entry on nominal wages. ... All these declines are less than 1 percentage point. The paper also finds that grocery workers’ wages go down in both urban and rural areas and other workers see no significant change in wages. In total, Dube et al. estimate a $4.7 billion annual reduction in retail earnings.

"Neither paper estimated the impact of Wal-Mart on real wages. Presumably the workers in the retail sector and more broadly also benefit from the lower prices that follow the entry of a Wal-Mart. The nominal wage effects in both papers have to be compared to the 7 to 13 percent retail price effect in the long run found by Basker or the reduction in the broader CPI found by Global Insight. Taken together, the evidence appears to suggest that, even for retail workers, the benefits of lower prices could outweigh any potential cost of lower wages –potentially leading to higher real wages even in the retail sector."

On Corporate Welfare:

"The total tax bill, however, is not the relevant question. Instead the question is whether Wal-Mart and its employees pay their “fair share” in a way that is consistent with businesses and workers in similar circumstances. Dube and Jacobs ask one version of this latter question. They argue that Wal-Mart pays less than comparable employers (as discussed earlier, the evidence suggests this is not the case) and ask the question: how much do Wal-Mart’s low wages cost taxpayers? They estimate that Wal-Mart pays its full-time workers $8,620 less than comparable employers. They further estimate that Wal-Mart workers get $1,952 in public assistance annually (including Medicaid, EITC, food stamps, and other programs), or $551 more than comparable employers. They assert that this difference is a “hidden cost” of Wal-Mart.

"Their analysis, however, is incomplete and as a result features the wrong answer. Assume that the Dube and Jacobs’ numbers are accurate. If Wal-Mart pays the employee $8,620 less, that money has to go somewhere. If this money goes into corporate profits or executive compensation, it will result in an additional $3,017 in taxes at the 35 percent marginal rate. If even one-fifth of Wal-Mart’s lower wages went to corporate profits or top executives, that would be enough to make its low wages – by the Dube-Jacobs estimate – a net revenue increaser for the federal government. Based on the Dube-Jacobs results, it is overwhelmingly likely that if Wal-Mart pays lower wages, then this would improve the government’s fiscal situation.

"But encouraging private-sector companies to distribute their compensation to maximize net government revenues is peculiar and backwards. Who would recommend, for instance, that a corporation cut pay for its middle-income workers in order to raise executive compensation on the theory that this will raise total tax collections because executives are in a higher tax bracket?"
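
Furman's arithmetic in that passage checks out, using only the figures quoted above:

```python
# Figures from the quoted Dube-Jacobs/Furman passage.
lower_wage = 8_620       # claimed pay gap per full-time Wal-Mart worker
extra_assistance = 551   # extra public assistance vs comparable employers
corporate_rate = 0.35    # marginal rate Furman applies

# If the whole gap became profit or executive pay, the extra tax is $3,017:
assert round(lower_wage * corporate_rate) == 3_017

# Even one-fifth of the gap, taxed at 35%, exceeds the $551 of assistance:
assert lower_wage / 5 * corporate_rate > extra_assistance
```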

Read the full paper.

Read More
Tim Worstall

To think that people are complaining about this

It's true that America's Cheesecake Factory is not the sort of gourmet food consumed by refined aesthetes like you and we. But it's perfectly acceptable food for all that, rather better than average in fact. You also get a hefty portion for not all that much money. The puzzle though is that people complain about this:

Watch out, diners: There are serious calories in some restaurant meals.

That was the message of the Center for Science in the Public Interest, a nutrition advocacy group, as it released its annual "Xtreme Eating Award" winners — the most calorie-stuffed dishes and drinks from the country's chain restaurants.

Topping the list were entrees like The Cheesecake Factory's Pasta Napoletana, which the chain describes as a meat lover's pizza in pasta form. The pasta, dressed in a Parmesan cream sauce, is topped with Italian sausage, pepperoni, meatballs, and bacon and clocks in at 2,310 calories, 79 grams of saturated fat, and 4,370 mg of sodium.

We checked the price of this and in the LA area it seems to come in at $14. At which point we really do start to wonder why people are complaining.

Our point being that there has never in human history been a time when the average working guy or gal could go and have a full day's worth of calories of meaty goodness - OK, we know that meatballs and sausage are made of the scrag ends, but still - for two hours of minimum wage labour or, more pertinently, around 30 minutes' work at the US median hourly wage of $25. Cooked in a restaurant for them, no less - there has never been a time before now when this was true.
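
The arithmetic behind that, using the figures in the post:

```python
# Wages as stated in the post: the US federal minimum and a $25 median.
meal_price = 14.0
minimum_wage, median_wage = 7.25, 25.0

hours_at_minimum = meal_price / minimum_wage
minutes_at_median = meal_price / median_wage * 60

assert 1.5 < hours_at_minimum < 2.5  # roughly two hours at the minimum wage
assert 30 < minutes_at_median < 40   # around half an hour at the median
```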

Far from us complaining about this we'll just add it to our list of proofs that the Good Old Days are right now.

Read More
Ben Southwood

RPI is silly, but not completely crazy

Chris Giles, the FT's economics editor, has recently been waging a war on the retail prices index (RPI)—Britain's venerable price statistic used to set rail prices, student loans interest, and repayment of some gilts. I'm a fan of Chris, but I think he's gone a bit far: yes, RPI is a bad index, but no, it's not necessarily unfair and wrongheaded in the way he describes.

Nowadays, the official measure of inflation is the consumer prices index (CPI), which, unlike RPI, is designated a national statistic. It's what the Bank of England uses for the flexible inflation target its monetary policy is based around and it differs from RPI by using a much better aggregation method, a bigger and more broad-based sample, and excluding housing.

It's not a judgement call: the CPI is simply a better index, which is why more or less everything has switched over. But some things haven't. For some of them it's because they predate the switch. For example, the government has long sold RPI-linked gilts—there are £407bn outstanding, according to Giles—from which we impute the market's implied inflation forecast. For others, it's less obvious why they haven't.

Now Giles has one very good point. In 2010 the RPI formula was changed in a way that measures certain goods (especially clothes) less accurately, and this inflates the index. It means that repayments to pre-2010 RPI-linked-gilt-holders are higher than they would otherwise have been. To the extent that the move was expected—or, if unexpected, once it was announced—this was a handout to pre-2010 holders. But after that point it's all priced in. Everyone knows the index will overestimate inflation, and everyone knows by about how much (any error benefits the government as much as the investors). Yes, we shouldn't have done it, and maybe we should even claw the money back—but it was a one-off error. Market pricing means it doesn't compound.

But I don't follow his other points at all. Yes, RPI adds some arbitrary amount onto "true" inflation, so post-2012 RPI-linked student loan interest rates are higher than they would be under CPI. But the interest rate on these student loans is entirely arbitrary anyway. Given their repayment rates (around 55%) and repayment schedules, the government is clearly subsidising their true cost to an astonishing degree. The RPI link is a semi-subtle way of getting a small portion of that back—much as "money illusion" means unexpected inflation is useful during slumps.

The same is true of rail fares. It's good that RPI hides a little bit of extra increase in real fares. Economising on scarce resources through prices is a good thing, and we currently subsidise rail somewhat too much. If we do it through explicit price increases, people might bear a larger psychological burden when we (slightly) reduce how much the government pays for people's rail travel.

Switching to the CPI doesn't magic up money. In both cases it just makes the government pay more, and the users of the service less. Does Giles really think that the baseline is inflation plus the arbitrary number they've currently set by fiat, rather than inflation plus that arbitrary number, plus the arbitrary chunk of measurement error? It's hard to see why.

This isn't to say that we shouldn't switch away from RPI. It's a bad stat, and if Chris is right about the legality of doing so, then it sounds like we could quite easily switch, eventually, without the large reputation costs that go with seeming like we're reneging on obligations. But let's not use motivated reasoning to get there. And is it really necessary to use language like "fleecing", or blame the ONS, who almost certainly are not the ones making the final judgement call?

Read More
Kevin Dowd

Market values and the stress tests

This blog posting is the first in a series on the 2016 Bank of England stress tests. A fuller report, “No Stress III: the Flaws in the Bank of England’s 2016 Stress Tests”, will be published later in the year by the Adam Smith Institute.

Early in January this year, ITN’s Joel Hills approached me about a feature on the stress tests that he was planning to do for News at Ten, and which was broadcast on January 10th. He was going to interview Sir John Vickers on the market values vs. book values issue and he asked me if I would provide the results that showed how using the latest available market values instead of book values would have affected the results of the Bank’s 2016 stress tests. Sir John and I had been arguing for some time that the Bank should pay more attention to market values, especially when they are lower than book values: as of early January 2017, market values were about 2/3 of book values.

The choice of book vs. market values makes a big difference to the results of the stress test: if you use book values in the Bank’s stress test, then only RBS fails the test, but if you replace book values by market values and make no other changes to the test, then only Lloyds passes. So book vs. market values is a big deal.
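
A stylised example, with an invented balance sheet and an assumed hurdle rate rather than the Bank's actual numbers, shows why the choice matters:

```python
# Does capital, less stress losses, still clear the hurdle ratio?
def passes_stress_test(equity, assets, stress_loss, hurdle=0.045):
    return (equity - stress_loss) / assets >= hurdle

# Invented figures: book equity of 60 against assets of 1,000.
book_equity, assets, stress_loss = 60.0, 1_000.0, 12.0
market_equity = book_equity * 2 / 3  # the post's ~2/3 market-to-book ratio

assert passes_stress_test(book_equity, assets, stress_loss)        # passes on book values
assert not passes_stress_test(market_equity, assets, stress_loss)  # fails on market values
```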

Why should we use market values rather than book values? The reason is that market values being less than book values signals that the markets do not believe the book values: the most likely explanation is that the markets believe that there are expected losses coming through that the book values are not picking up.

Vickers had put a similar point to Carney in a letter of December 5th last year:

… market-to-book ratios for some major UK banks are well below 1. That indicates market doubt about the accuracy of book measures. To the extent that such doubts are correct, stress tests based on book values are undermined.

The Bank appears to take the view that low market-to-book ratios are down to dimmed prospects of future profitability rather than problems with current asset books. But such a view is hard to sustain for banks with [price-to-book] ratios below 1. There is, at the very least, a serious possibility that low market-to-book ratios are signalling underlying problems with book values. This certainly cannot be dismissed, especially when one is examining the ability of the system to bear stress – an exercise that calls for prudence. [1]

To me this statement is self-evidently correct, so I was surprised that in his reply letter Governor Carney sought to challenge it: he continued to defend the Bank’s earlier position that low market-to-book is due to low future profitability and dismissed Vickers’ concerns about the possibility that markets might be signalling deeper issues with the book values.

I have to ask myself how the Bank of England can be so sure (and prudently so!) that its interpretation is correct and that Vickers’ is not.

Vickers’ March 3rd response to Carney’s dismissal of his analysis is unanswerable:

The regulation of banks is based on accounting measures of capital. A major source of risk to financial stability is that capital is mis-measured by the accounting standards used in regulation. In that case, bank regulation that allows high (e.g. 25 times) leverage relative to accounting (or ‘book’) measures of capital is more fragile than may appear.

An instance of this point is that stress tests based on book values are themselves vulnerable to erroneous measurement of capital, because those measurements are their starting point. Furthermore, bank regulation nowadays counts convertible debt instruments such as CoCos as akin to equity capital, but the conditions in which they convert to common equity (or are written down) are also dependent on accounting measures of capital. In short, a lot is riding on book values being reasonably accurate. …

None of this is to say that markets necessarily value assets accurately. Rather, the point is that low price-to-book ratios, especially when below one, signal a serious possibility that book values are inaccurate, and hence that the basis for regulation (not just in stress tests) is open to question.

Market values are not always reliable, but

when [market values] are low, systematic attention should be paid to them, and transparently so. [2] (my italics)
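Vickers' 25-times-leverage point can be made concrete with a small calculation. Apart from the 25x figure, the numbers below are hypothetical, assuming for illustration that book equity overstates true equity by a third:

```python
# Hypothetical illustration: if book equity overstates true equity, actual
# leverage is higher than the regulatory (book-based) figure suggests.
def leverage(assets, equity):
    """Assets as a multiple of equity."""
    return assets / equity

assets = 1000.0
book_equity = assets / 25.0   # the 25x book leverage Vickers cites
overstatement = 1.0 / 3.0     # assumed: book equity overstated by a third
true_equity = book_equity * (1 - overstatement)

print(f"book leverage: {leverage(assets, book_equity):.0f}x")   # 25x
print(f"true leverage: {leverage(assets, true_equity):.1f}x")   # 37.5x
```

At high leverage, even a modest error in measured capital translates into a much more fragile bank than the book numbers imply.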

More clutching at straws: further BoE arguments against market values

Let’s consider another objection the Bank has made against using market values:

Low market valuations can reflect a number of things, all of which lead to weak expected profitability. But, crucially, different reasons for weak profitability can have quite different implications for a bank’s resilience. This is because they have different impacts on the value of the bank’s assets if it needed to sell them to pay for losses elsewhere in the business. [3]

The Bank then illustrated this point by comparing two hypothetical banks with the same cash flows – one is efficient but has poor assets, the other is inefficient but has good assets and could sell some if need be.

The Bank’s argument is a distinction without a difference, however. Weak expected profitability – whatever the cause – is a potentially serious financial stability issue and it is as basic as that. As Vickers pointed out in his April 26th letter to Alex Brazier:

A holder of the BoE view, if I may put it that way, can however respond by noting … that the inefficient bank with good assets can sell some. If such a bank alone faced difficulties – so in the absence of systemic stress – this would be a reasonable answer.

But it is harder to see how asset sales could be a satisfactory response in conditions of systemic stress, a typical feature of which is precisely the inability of banks to sell assets except at distressed prices. This is the well-known ‘fire sale’ problem …

The gist of this problem is that a bank that suffers a large loss might be forced to reduce its asset holdings by selling assets at fire-sale prices. If other banks must revalue their assets at these temporarily low market values, then the first sale can set off a cascade of fire sales that inflicts losses on many institutions and thereby creates a systemic problem.

This kind of risk, I suggest, should be central to thinking about financial stability, and to stress tests. Financial stability policy should take a prudent approach as a general matter. In particular, it should not place reliance on banks being able to sell assets in crises at good prices. While that might cope with an idiosyncratic shock affecting one bank, it will not do in a systemic crisis. But systemic crisis risk is the principal risk that regulation should guard against. The prudent stress test question, then, is whether the bank can meet its obligations without resorting to asset sales. It is not whether it can do so on the assumption that assets can be sold at good prices.

In sum, low market valuations imply less resilience even when the possibility of asset sales is allowed for. Tests of resilience that rely on resort to asset sales are flawed because, as experience shows, in a systemic crisis it may well be impossible to realise full value from asset sales.
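The fire-sale mechanism can be illustrated with a toy calculation (all numbers hypothetical):

```python
# Toy fire-sale example with hypothetical numbers. Every bank holds the same
# asset, so when one distressed bank dumps its holdings and depresses the
# price, mark-to-market losses shrink everyone else's capital ratio too.
def capital_ratio(units, debt, price):
    """Equity as a share of (marked-to-market) assets."""
    assets = units * price
    return (assets - debt) / assets

units, debt, min_ratio = 100.0, 94.0, 0.04

print(f"before the fire sale: {capital_ratio(units, debt, 1.00):.1%}")  # 6.0%
# A distressed bank sells, knocking 3% off the market price; the others
# must now mark their identical holdings at the lower price.
print(f"after the fire sale:  {capital_ratio(units, debt, 0.97):.1%}")  # 3.1%
# 3.1% is below the 4% minimum: previously sound banks now breach it and
# are forced to sell as well, depressing prices further. That is the cascade.
```

This is why resilience tests that assume assets can always be sold at good prices miss precisely the systemic risk they are meant to measure.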

Tim Bush also offers a powerful rebuttal:

Essentially, from the perspective of a shareholder providing capital, the Bank’s second example (good current balance sheet, poor future returns) is really an admission that a bank as a whole is one big impaired asset. Nothing resilient about that. Particularly, no incentive to refinance it if it incurs unexpected losses for example. New investment won't achieve an appropriate return. 

The Bank’s line is a bit like saying British Leyland was resilient if the factories were brand new. [4]

Another objection to the use of market values was made by Alex Brazier in his evidence to the Treasury Committee on January 11th 2017:

…if you had [relied on market cap values] before the crisis, you would have been led completely astray … You would have been led to the conclusion that the British banking system was remarkably resilient, and, as forecasting errors go, that would have been quite a good one. [5]

Really? Consider this chart, which shows how the price-to-book (P2B) ratios of international banks fell before the crisis. The P2B ratios for UK banks are similar.

Then consider the next chart, which shows the ratios of market capitalisation to the book value of equity for two sets of international banks, the “crisis” ones that failed, required assistance or were taken over in distressed conditions, and the “non-crisis” ones that weathered the storm.

It is, thus, clear that markets were signalling problems with the banks and they correctly identified the weakest banks too. In the UK case, they also correctly identified in advance the two biggest UK problem banks, HBOS and RBS. [6]

Mr. Brazier omits to mention that the Bank was relying on Basel model-based book values that completely missed the impending meltdown, and he does not offer any alternative that would credibly have worked better.

He also omits to mention the Bank’s own record on this issue. The ‘British banking system is resilient’ is exactly the message that the Bank itself was putting out before the Global Financial Crisis (GFC). Not only did the Bank itself have no inkling of the GFC before it hit, but in the early stages of the GFC and even after the run on Northern Rock, it was still reassuring us that there was little to worry about and that the UK banking system was more than adequately capitalised. These reassurances proved to be as wrong as wrong can be.

The charts above are evidence that market values did provide some warning and there is further evidence too. To quote the Bank’s own chief economist, Andy Haldane:

market-based measures of capital offered clear advance signals of impending distress. … Replacing the book value of capital with its market value lowers errors by a half, often much more. Market values provide both fewer false positives and more reliable advance warnings of future banking distress.

… market-based solvency metrics perform creditably against first principles: they appear to offer the potential for simple, timely and robust control of a complex financial web. [7]

It is also helpful to compare their respective track records at predicting subsequently realised bank failures: markets have sometimes got it right and sometimes got it wrong, but bank regulators have always got it wrong – their failure prediction rate is exactly zero percent. Even chicken entrails would have had a better success rate than whatever model or crystal ball regulators anywhere use to peer into the future, and no rational person would ever believe the forecasts of a group of forecasters with a zero percent success rate.

The former President and CEO of BB&T Bank, John Allison, confirms this point and explains why:

One observation in my 40-year career at BB&T: I don’t know a single time when federal regulators—primarily the FDIC—actually identified a significant bank failure in advance. Regulators are always the last ones to the party after everybody in the market (the other bankers) know something is going on. … regulators have a 100 percent failure rate. Indeed, in my experience, whenever they get involved with a bank that is struggling, they always make it worse—because they don’t know how to run a bank. [8] 

But I digress.

So what it comes down to is that if the Bank does not use market values for the stress tests, then it should have a good reason not to. In terms of a concrete operating criterion, the natural answer is provided by the Principle of Prudence, which suggests that it should value using the lesser of book values and market values – and central bankers are famously prudent.
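That lesser-of-the-two criterion is simple enough to state in code; a minimal sketch with hypothetical numbers:

```python
# The prudence criterion suggested in the text, in code form: value equity
# at the lesser of its book value and its market value (numbers hypothetical).
def prudent_equity(book_value, market_value):
    """Prudent valuation: take whichever measure of equity is lower."""
    return min(book_value, market_value)

print(prudent_equity(50.0, 33.3))  # markets doubt the books: use 33.3
print(prudent_equity(50.0, 60.0))  # markets above book: stay with 50.0
```

Note that the criterion is asymmetric by design: market optimism is ignored, market pessimism is heeded, which is exactly what prudence requires.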

Whilst on the subject of prudence, wouldn’t it be wise for the Bank to acknowledge at least the possibility that outsiders – not just Vickers and I, but also Anat Admati, Tim Bush, James Ferguson and Gordon Kerr, to name a few, and even Mervyn King, who have pointedly failed to endorse the stress tests – might be right or that we might at least have a point?

So answer me this, Bank of England: you say that your stress tests show that the UK banking system is sound. But how can you be confident in such assertions, when your stress tests are based on book-value numbers and when the markets are clearly signalling that something is wrong with those book values?

To cut to the chase, how can you expect the public to believe your narrative when the markets don’t?

The Vickers proposal for parallel market and book value tests

So let me endorse Sir John’s suggestion for a compromise as set out in his December 5th letter. The Bank should present both sets of results and let readers make up their own minds. As he wrote:

[My] proposal is not that market-based tests for such banks should replace tests of the kind that the Bank has run. The request is merely that the Bank supplements its results with market-based results.

That would inform public debate on a matter of great importance for economic policy, and it would enhance the transparency and accountability of the Bank.

Yet the Bank still insists that it should not publish any such results because – to quote Governor Carney in his December 19th reply to Vickers’ letter – to do so might confuse

the Bank’s communication around its stress tests. If we publish two sets of results that give different messages, people might struggle to understand what we are trying to say about the resilience of the banking system.

But as Vickers responded:

A stress test is primarily a test of the resilience of the banks, not a communications exercise. …. Considerations of transparency and accountability should therefore far outweigh the regulator’s communications agenda. [9]

A related problem is that Dr. Carney takes the Bank’s credibility for granted and then focusses on making the message simple for the audience. Such reasoning puts the cart before the horse. Instead, the key to effective communication is credibility and credibility must be earned and maintained, not presumed.

The Bank does not help its own credibility by brushing aside good outside advice, however politely. Publishing market-based results could allay any possible concerns that it might be trying to window-dress the banking system and itself in the best possible light. The Bank would still be able to give its own commentary explaining why it thinks that the book-value results are more credible than the market-value results.

It is also a mistake for the Bank to under-estimate its intended audience, who should be presumed to be capable of making up their own minds when presented with the evidence and should be treated with appropriate respect.

The Bank repeatedly makes the mistake of ‘oversimplifying’ its message and then making claims that turn out later to have been way off the mark, thereby undermining its own credibility again and again. It made that mistake when it reassured us before the financial crisis that the banking system was strong. It made that mistake when it told us during the Brexit referendum campaign that a Leave vote could trigger a recession and that Brexit was the biggest single risk facing the UK economy, and it is making the same mistake again with the stress tests.

To paraphrase Hubert Humphrey on propaganda, a perhaps not entirely unrelated subject: the Bank of England message, to be effective, must be believed. To be believed, it must be credible. At the moment, it is not.

End Notes

[1] “Supplementary market-based stress test results,” letter from Sir John Vickers to Governor Mark Carney, December 5th 2016.

[2] Sir John Vickers, “Response to the Treasury Select Committee’s Capital Inquiry: Recovery and Resolution,” March 3rd 2017, pp. 7, 8 and 12.

[3] Quoted from Sir John Vickers’ letter to Alex Brazier, April 26th 2017, copies of which are available on request from Sir John.

[4] Personal correspondence.

[5] Treasury Committee “Oral evidence: Bank of England Financial Stability Reports,” HC 549, Wednesday 11 January 2017, answer to Q173.

[6] See, e.g., Chart 2.73 on p. 153 of the FCA/PRA report The Failure of HBOS plc.

[7] A. G. Haldane, “Capital discipline,” speech given at the American Economic Association, Denver, January 9th 2011, p. 8.

[8] J. Allison, “Market discipline beats regulatory discipline,” Cato Journal, 24(2), Spring-Summer 2014, p. 345.

[9] Quoted in Vickers’ Capital Inquiry testimony.

Read More
Jessica Searle

Our self-driving future is here... almost

In the past the idea of a driverless car would have featured in a science fiction movie rather than in companies’ plans for anywhere between the next six months and ten years. But automated vehicles are becoming less a product of the imagination and more a reality. By 2020, roads may look completely different.

In a sense, driverless cars are already here. In the summer of 2016, Uber began trialling them in Pittsburgh—although there were still two members of staff in the vehicles to make notes and step in if anything went wrong. Not that they needed to: the cars were mostly capable of navigating the city without human intervention.

We are very much still in trial stages, but some vehicle companies and their founders are confident of making great strides in the immediate future. In April 2017 Elon Musk, Tesla’s chief executive, said that by “November or December of this year, we should be able to go from a parking lot in California to a parking lot in New York, no controls touched at any point during the entire journey”. Although he did clarify that the passenger would need to be able to intervene (so couldn’t fall asleep, for example), he predicted that Level 4 automated Tesla vehicles—where the car is driverless in almost all situations—would be available from 2019.

Google too has been involved in testing automated vehicles, with perhaps even more optimistic early estimates than Tesla’s Elon Musk. In 2012, by which point Google’s driverless vehicles had already covered 300,000 miles in tests, Sergey Brin said that “you can count on one hand the number of years it will take” for Google to have produced automated vehicles for the public. Although the technology behind the vehicles has not yet been finalised, Google spun its automated vehicle project out into an official company, Waymo, in December 2016. Chris Urmson, head of the project, has said that they are aiming to release the product by 2020.

Other car manufacturers are not so ambitious but are still making predictions that would see entirely automated vehicles introduced within the next five years. Nissan-Renault are looking to increase gradually the capacity of their cars to drive independently: they aim for the ability to navigate a multi-lane highway by 2018, and full automation in more complex driving situations, such as urban environments, by 2020. BMW is working with Intel and Mobileye to create fully automated vehicles by 2021.

Even those who are cautious about the technology are working on ambitious timescales. Adrian Lund, president of the Insurance Institute for Highway Safety and the Highway Loss Data Institute, said in 2016 that he thought Level 5 automated vehicles—the top level of automation, which would not require a driver under any circumstances—were a minimum of 10 years away. A Level 4 vehicle could be managed within five, he thought.

Hyundai has also taken a slower approach: they are “targeting for the highway in 2020 and urban driving in 2030.” It is worth bearing in mind, however, that many of the later estimates refer to Level 5 vehicles, while the earlier ones are frequently for vehicles that have reached Level 4 of automation. Naturally, timeframes will be longer when firms are aiming for a higher level of technological advancement.

As things stand, the technology is still very much a work in progress, and accidents can and do happen. In May 2016, a trial drive of a Tesla ended in a fatality when “neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied”. However, Tesla noted that this was the first fatality in over 130 million miles of Autopilot driving, whereas the world average is a fatality every 60 million miles.
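Tesla's comparison boils down to a simple rate calculation, using only the two mileage figures cited above:

```python
# Comparing the two cited figures: one Autopilot fatality per 130 million
# miles against a world average of one per 60 million miles.
autopilot_rate = 1 / 130.0  # fatalities per million miles on Autopilot
world_rate = 1 / 60.0       # world-average fatalities per million miles

print(f"Autopilot rate is {autopilot_rate / world_rate:.0%} of the world average")
# i.e. under half the world-average fatality rate, on these figures
```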

Predicting the future is difficult, and we cannot be sure exactly when autonomous vehicles will finally arrive, but the best guess is that we'll start seeing something serious in the next decade. Since it typically takes fifteen years for the great majority of car owners to replace their vehicles, even if manufacturers switch to only automated vehicles by 2030, it will still be at least 2045 by the time driverless cars completely dominate the road. While that may seem a long way off, it appears that, within many of our lifetimes, driverless cars will have a monopoly on roads across the globe.
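The fleet-turnover arithmetic behind that 2045 date is simply:

```python
# The timeline sketched above, as arithmetic (both inputs are the text's
# own assumptions, not firm forecasts).
switch_year = 2030    # manufacturers sell only automated vehicles from here
turnover_years = 15   # typical time for most owners to replace their cars

print(switch_year + turnover_years)  # prints 2045
```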

Read More