From “Dark Continent” to “Sun Continent”: a story of power  

Africa: a continent full of colours and great potential, yet held back by poor development. One way to see this is to look at the state of the electricity market in Sub-Saharan Africa. While 87% of the world’s population had access to electricity in 2016, only 42% of Africans enjoyed this privilege, ranking the continent lowest in the world, according to the World Bank. Sub-Saharan Africa is particularly affected. This translates into reduced business hours and ineffective health systems that can hardly meet the needs of the population.


Notwithstanding these problems, Africa has great potential in the energy sector because it abounds with natural resources. The continent may become a leader in sustainable energies in the near future, with the potential to drive tremendous change in Africa’s growth trend and set it on a broad development path.

How has Africa’s energy sector come to be in such a crisis? Are sustainable energies a pragmatic strategy or a comforting utopia? What challenges does Africa need to face in order to get out of its energy trap? We explore a few ideas in this article.

The Dark Continent has assets that many could envy. Northern, Western and Eastern African countries possess huge reserves of oil. Some of the biggest reserves of coal are to be found in South Africa. Morocco and Algeria have natural gas. These resources are precious: fossil fuel-based power is the main source of electricity in Africa. Nevertheless, electricity needs are currently not covered. Indeed, most Sub-Saharan countries export those resources. In the case of Nigeria, the biggest African oil producer and Africa’s leading economy, black gold represents 95% of its export revenues, with Asia and Europe as the main recipients. Meanwhile, a lack of infrastructure hinders the processing of oil, and the country even has to import its fuel from Europe. In 2016, 40% of Nigerians had no access to electricity at all. The rest of the population often had only limited access, restricted to short time slots.

Nigeria is not the only country struggling to provide an efficient electricity grid. The lack of refineries, corruption, and poor business schemes for managing these resources explain the current state of the energy sector. Fossil fuel is the costliest source of energy to produce. Moreover, a lack of investment in the energy sector prevents firms from efficiently generating and transporting electricity across the territory. Since 2007, investment in the sector has averaged US$12 billion globally, only 36% of the estimated need for an optimal electricity system. Today, governments still remain the main investors in the energy sector. Unfortunately, massive corruption in some countries has reduced the funds effectively allocated. In the 1990s, many countries privatised the energy sector in order to attract private investment. It worked for some countries like Namibia, which was able to lower energy tariffs and improve efficiency. But for others, high costs of production and transformation led to high prices, creating a gap between the rich and the poor. The latter cannot afford the cost of installing a basic phase circuit in their homes. This observation, made by the Forum of Energy Ministers of Africa in 2000, still holds true for most people in rural areas today, who made up 60% of Sub-Saharan Africa’s population in 2016. Moreover, providing electricity in isolated areas is seldom profitable, offering firms little incentive to expand their networks towards people who will not be able to pay.

Should we encourage investment in better exploitation of coal, natural gas, and oil, which, in many cases, seem more profitable when exported? There is a double problem. First, these resources are infamous for their greenhouse gas emissions. Second, they are in finite supply. In addition, catastrophes like the Probo Koala case in 2006, when a Greek-owned petroleum ship offloaded toxic waste in the port of Abidjan and poisoned thousands of people, have shown the limits of poorly managed resources.

Sustainable energies are becoming an increasingly important part of the conversation. For some, the idea represents a utopia; for others, a fleeting trend. In spite of diverging opinions, it seems to be a reasonable hope for many, and the only possible escape from the energy crisis. Do we have reasons to believe them?

One could say oil and coal are being dethroned by sun and water.

Sometimes called the sun continent, Africa has the highest solar irradiance on earth, with the Sahara desert breaking records in daily sunshine hours. Countries like Morocco have already begun to exploit this major asset. Near the city of Ouarzazate, a giant solar farm has emerged, covering 1.4 million square metres of yellow sand in 2016. The kingdom aims to meet 52% of its electricity needs from renewable sources by 2030.

Africa is also home to some of the most powerful rivers in the world. One of them, the Congo River, is the second largest river in the world by discharge volume, after the Amazon. Its Inga Falls, situated in the Democratic Republic of Congo, currently host several projects that are all part of the Grand Inga project. The dam aims to provide cheaper energy to a large part of Africa, allowing industries to take off. For instance, a significant share of the energy produced is destined to be sent to South Africa. Initiated in the early 2010s, the project received a joint bid from a Chinese and a Spanish company in 2018 to cover the estimated US$13 billion cost of the recent project expansion.

Less covered by the media, the wind power market is also expanding. Mainly developed in South Africa thanks to the REIPPPP programme launched in 2011, this source of energy promises to be increasingly exploited in the future, as there are strong coastal winds in Eastern Africa.

There are many other examples of this kind, such as geothermal energy in the East African Rift. According to the World Bank, renewable energies represented 70% of total final energy consumption in Sub-Saharan Africa in 2015, mainly owing to improvised mini-grids in rural areas; this is still not sufficient to provide round-the-clock access to electricity.

Potential is not enough. Business schemes have to be improved for Africa to get out of poverty. Public-private partnerships, as recommended by a World Bank report in 2017, have to be implemented to better regulate the sector, incentivise firms, serve the public interest, and encourage private investment. While oil-producing countries need oil exports to keep their trade balances from sinking into deficit, governments have to favour alternative energies to provide electricity to their populations. Regional cooperation can also be a viable option: an example is the West African Power Pool (WAPP), founded in 2000, which brings together 14 countries with the common goal of building a common market for electricity. This creates a bigger and thus more attractive market for investors. The Central Africa Power Pool, Eastern Africa Power Pool, Southern Africa Power Pool and COMELEC – in Northern Africa – have also been implemented, dividing Africa into five main markets.

Investment in both non-renewable and renewable energies is already increasing, driven in particular by China’s recent foreign policy towards Africa. Electrification is on an upward trend, as public policies are increasingly directed towards the common good. Having said that, much remains to be done. The Dark Continent could become greener, more active, and healthier – the “Sun Continent” could one day be a more fitting name.

by Rose Mba

The winners and losers of the French 2008 feebate policy

In 2008, the French government introduced a policy taxing cars with high carbon emissions and rebating cars with low carbon emissions, better known as a feebate policy or bonus-malus écologique. This type of policy is appealing for two reasons: first, it provides incentives to purchase less polluting cars; second, it can be designed to be revenue neutral, since the revenue collected through the taxes subsidises the rebates.

In a recent paper, I conduct a quantitative evaluation of this policy, with a particular focus on its distributional effects: it is particularly relevant in this case to identify the winners and losers of the policy. I also analyse the effect of this policy, which is based on carbon emissions, on other local pollutants like particulate matter and nitrogen oxides. By nature, a policy that targets carbon emissions favours diesel cars, which emit more nitrogen oxides and particulate matter than petrol cars. Particulate matter and nitrogen oxides are known to have a direct impact on air quality and hazardous effects on health. While carbon emissions have a global impact, emissions of these local pollutants raise the question of the feebate policy’s distributional impacts in terms of health effects.

To measure these effects, I build a structural model of market equilibrium for the automobile industry. This implies estimating the supply and demand for the different car models using data on car characteristics and sales, which can then be used to simulate what the market would have looked like had there been no feebate policy in 2008. Comparing the observed market equilibrium with the counterfactual one, I can thus deduce the policy’s causal effect. Relying on a structural model is especially useful because some outcomes of interest cannot be observed directly, but can be expressed in terms of the model parameters. This is the case for car manufacturers’ profits and consumer surplus.

A notable challenge in modelling this market, and in being able to distinguish the winners and losers of this policy, is to incorporate rich heterogeneity in individuals’ preferences for cars and their attributes. I assume that individual heterogeneity in preferences is related to observable demographic characteristics, and leverage the correlation between the composition of car sales and demographic characteristics at the municipality level. For instance, observing that cars purchased in rural areas tend to be more fuel efficient than those in urban areas reveals that individuals in rural areas tend to drive more, and are thus likely to be more sensitive to fuel costs than those living in urban areas. I also find a positive correlation between horsepower and income, which can be observed from sales in wealthier municipalities.
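As a stylised illustration of this identification idea, the sketch below (in Python, with made-up numbers; the paper itself estimates a full structural demand model) regresses the attribute composition of local car sales on local demographics. A negative slope on the rural share stands in for the pattern described above: rural buyers choosing more fuel-efficient cars.

```python
import numpy as np

# Hypothetical municipality-level data (for illustration only):
# share of rural population and the average fuel consumption
# (litres/100 km, lower = more efficient) of cars sold locally.
rng = np.random.default_rng(0)
n = 200
rural_share = rng.uniform(0, 1, n)
avg_consumption = 6.0 - 1.2 * rural_share + rng.normal(0, 0.3, n)

# OLS of sales composition on demographics: a negative slope means
# rural municipalities buy more fuel-efficient cars, which the model
# maps into a higher sensitivity to fuel costs for rural households.
X = np.column_stack([np.ones(n), rural_share])
beta, *_ = np.linalg.lstsq(X, avg_consumption, rcond=None)
print(f"slope on rural share: {beta[1]:.2f} l/100km")
```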

On the supply side, I model the competition between car manufacturers and their pricing strategies, with and without the feebate policy. I do not model the choice of car characteristics, and take them to be identical regardless of the regulatory environment. The marginal cost of each car model is estimated under the assumption that the car prices observed in 2008 are the optimal prices under the feebate policy. In the simulation of the market equilibrium absent the feebate policy, I predict prices and sales for each car model, since both are jointly determined by demand and supply.

What is important here is that, when setting its prices, a firm anticipates that consumers receive a rebate or pay a tax, and it absorbs part of that rebate or tax. How much is passed on to the consumer depends on the intensity of competition and on the market power of car manufacturers.
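To make these mechanics concrete, here is a minimal sketch of the pricing logic under a simple logit demand with single-product firms; all numbers (`alpha`, `delta`, prices, and feebate amounts) are invented for illustration, and the paper itself uses a much richer random-coefficients model. It follows the two steps described above: back out marginal costs from observed prices under the feebate, then re-solve the pricing equilibrium without it.

```python
import numpy as np

def shares(p, delta, alpha, fee):
    """Logit market shares when consumers face price p plus the
    feebate fee (positive = tax, negative = rebate)."""
    u = np.exp(delta - alpha * (p + fee))
    return u / (1.0 + u.sum())

def consumer_surplus(p, delta, alpha, fee):
    """Logit log-sum formula: expected surplus per consumer."""
    return np.log1p(np.exp(delta - alpha * (p + fee)).sum()) / alpha

# Toy data: three car models, each sold by a single-product firm.
alpha = 0.8                          # assumed price sensitivity
delta = np.array([1.0, 0.5, 1.5])    # assumed mean utilities
p_obs = np.array([15.0, 12.0, 20.0]) # observed prices (thousand euros)
fee   = np.array([-1.0, -0.7, 2.0])  # rebates on clean cars, tax on the dirty one

# Step 1: invert the Bertrand first-order condition
# p = c + 1/(alpha * (1 - s)), taking observed prices as optimal
# under the feebate, to recover marginal costs.
s_obs = shares(p_obs, delta, alpha, fee)
cost = p_obs - 1.0 / (alpha * (1.0 - s_obs))

# Step 2: simulate the counterfactual equilibrium without the feebate
# by iterating on the first-order condition; prices and sales adjust
# jointly, which is exactly how pass-through arises.
no_fee = np.zeros_like(fee)
p = p_obs.copy()
for _ in range(1000):
    p_new = cost + 1.0 / (alpha * (1.0 - shares(p, delta, alpha, no_fee)))
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new

print("implied marginal costs:", cost.round(2))
print("counterfactual prices :", p.round(2))
print("consumer surplus change from the feebate:",
      round(consumer_surplus(p_obs, delta, alpha, fee)
            - consumer_surplus(p, delta, alpha, no_fee), 3))
```

In this toy version, the gap between the price change and the feebate itself measures how much of the rebate or tax the firm absorbs rather than passing on.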

In the end, the feebate policy improved consumer surplus and firms’ profits by more than the 223 million euros it cost in 2008. I find that the feebate reduced average carbon emissions by 1.56%, while average emissions of local pollutants – carbon monoxide, hydrocarbons, NOx, and PM – all increased. Once converted into annual tons, emissions of both the local pollutants and carbon dioxide increased. The increase in annual carbon emissions can be explained not only by the higher share of diesel cars, which implies more kilometres driven, but also by the increase in the number of cars purchased. Indeed, the cars with low carbon emissions, which are already cheap, become even cheaper because of the rebates. This means that some individuals who would not otherwise have bought a car now do so, at least in my model. Nonetheless, including the cost of carbon and local pollutant emissions at standard levels still implies that the policy is globally welfare improving, with an estimated net benefit of 124 million euros.

Shifting the focus to the impact on income distribution, the main insight is that the feebate favoured the middle-income category at the expense of the low- and high-income classes. Moreover, given that the policy was not revenue neutral and contributed to a net deficit, the feebate could have been made redistributive had it been financed by a tax proportional to income.

Clear winners and losers also appear among the car manufacturers, who are typically very specialised in different car segments: French manufacturers specialise in small, fuel-efficient cars, whereas bigger cars are the mainstay of the German manufacturers. It comes as no surprise that the model points to PSA and Renault, the two French manufacturers, as the winners of the feebate policy. The policy increased their profits by 3.4% and 4% respectively, a considerably higher gain than the 2.1% increase in profits for the industry as a whole. The Fiat group, the Italian manufacturer, increased its profits by 6.2%, while Volkswagen, a German manufacturer very active in the compact car segment, increased its profits by only 0.3%. The other German manufacturers, such as Porsche, BMW, and Mercedes-Daimler, were all severely hurt by this policy.

Finally, looking at the heterogeneity of the policy’s effects in terms of emissions of local pollutants, I find that average emissions increased the most in low-emission municipalities. The policy generated a decrease in average emissions of local pollutants in some areas, but a high degree of heterogeneity can be observed across the country.

The analysis concludes with an evaluation of the feebate in terms of redistribution and limitation of local pollutant emissions. The idea is to ask whether it would have been possible to improve consumer surplus, achieve more redistribution across individuals, or limit the increase in emissions of local pollutants with the same budget and the same effect on average carbon emissions. In this exercise, I restrict the set of alternative policies to simple linear feebates with different slopes for rebates and taxes. Interestingly, I find that average consumer surplus cannot be further improved, while there are large potential gains in terms of profits. Alternative feebate schemes could limit the rise in emissions of local pollutants, but the gains are not very large, and the best outcomes for the different pollutants cannot be achieved with a single feebate scheme: this reveals that there is a trade-off to be made between the various pollutants.

by Isis Durrmeyer

An interview with Daron Acemoglu on artificial intelligence, institutions, and the future of work

The recipient of the 2018 Jean-Jacques Laffont prize, Daron Acemoglu, is the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology. The Turkish-American economist has published extensively on political economy, development, and labour economics, and has won multiple awards for his two books, Economic Origins of Dictatorship and Democracy (2006) and Why Nations Fail (2012), both co-authored with James A. Robinson of the University of Chicago.

The Jean-Jacques Laffont prize is the latest addition to the well-deserved recognition the economist has received for his work, which includes the John Bates Clark Medal from the American Economic Association in 2005 and the BBVA Frontiers of Knowledge Award in Economics in 2017. Despite a schedule heavy with seminars and conferences, Daron kindly set aside some time to offer the TSEconomist his insights on topics ranging from the impact of artificial intelligence on our societies to the role an academic ought to take in public political affairs.


  1. Congratulations on winning the Jean-Jacques Laffont prize. What does this prize represent to you?

I’m incredibly honoured. Jean-Jacques Laffont was a pioneering economist in both theory and the application of theory to major economic problems. I think this tradition is really important for the relevance of economics and its flourishing over the last two decades or so. I think it’s a fantastic way of honouring his influence, and I feel very privileged to have been chosen for it.

  2. Thanks to you and other scholars working on economics and institutions, we now know that the way institutions regulate economic life and create incentives is of great importance for the development of a nation. New players such as Google now possess both the technology and the data needed to efficiently solve the optimisation problems institutions face. This raises a debate on government access to, purchase, and use of this data, especially in terms of efficiency versus possible harm to democracy due to the centralisation of political power. What is your take on this?

I think you are raising several difficult and important issues. Let me break them into two parts.

One is about whether the advances in technology, including AI and computational power, will change the trade-off between different political regimes. I think the jury’s out and we do not know the answer to that, but my sense would be that it would not change the trade-off so much as it changes the feasibility of different regimes surviving even if they are not optimal. What I mean is that you can start thinking about the problem of what was wrong with the Soviet Union in the same way that Hayek did: there are problems to be solved that are just too complex, the government can’t handle them, so let’s hope the market solves them.

Then, if you think about it that way, you may say that the government is getting better at solving such problems, so perhaps we could have a more successful Soviet Union. I think that this is wrong, for two reasons that highlight why Hayek’s way of thinking was limited, despite being revolutionary and innovative. One reason is that the problem is not static but dynamic, so new algorithms and capabilities create just as many new problems, which we do not even know how to articulate yet. It is therefore naive to think that, in such a changing world, we can delegate decision-making to an algorithm and hope that it will do better than the decentralised workings of individuals in groups, markets, communities, and so on.

The second reason is that Hayek’s analysis did not sufficiently emphasise a point that I think he was aware of and stressed in other settings: it is not just about the capabilities of governments, but about their incentives. It is not simply that governments and rulers cannot do the right thing, but that they do not have the incentives to do so. Even if they wanted to do the right thing, they do not have the trust of the people and thus cannot get the information and implement it. For that reason, I don’t think that the trade-off between dictatorship and democracy, or central planning versus some sort of market economy, is majorly affected by new technology.

On the other hand, we know that the equilibrium feasibility of a dictatorship may be affected. The ability to control information, the Internet, social media, and other things may eventually give much greater repressive capability to dictatorships. Most of the fruitful applications of AI lie in the future and remain to be seen, the exception being surveillance, which is already present and will only expand over the next ten years, in China and other countries. This will have major effects on how countries are organised, even if it may not be optimal for them to be organised that way.

To answer the second part of your question, I think that Google is not only expanding technology, but also posing new problems, because we are not used to companies being as large and dominant as Google, Facebook, Microsoft, or Amazon are. Think of when people were up in arms about the power of companies – the robber barons – at the beginning of the 20th century, leading to the whole Progressive Era sequence of reforms, antitrust and other political reforms: as a fraction of GDP, those companies were about one quarter as big as the ones we have today. I therefore think that the modern field of industrial organisation is doing us a huge disservice by not updating its way of thinking about antitrust and market dominance, with huge effects on the legal framework, among other things. I don’t know the answers, but I know that the answers don’t lie in thinking “Herfindahl is not a good measure of competition, so therefore we might have Google dominate everything, but perhaps we are ok” – I think that this is not a particularly good way of going about things.

  3. Some fear that the dominance of these companies could lead to the growth of inequality. Do you think that AI could play a role in this?

I am convinced that automation in general has already played a major role in the rise of inequality, through changes in the wage structure and employment patterns. Industrial robots are part of that, as are numerically controlled machinery and other automation technologies. Software has been a contributing factor, but probably not the driver in the sense that people initially thought. Projecting from that, one might think that AI will play a similar role, and I think that this is not a crazy projection, although I don’t have much confidence that we can predict what AI will do. The reason is that industrial robotics is a complex but narrow technology. It uses software and, increasingly, artificial intelligence, but it isn’t rocket science. The main challenge is developing robots that can interact with and manipulate the physical world.

AI is a much broader technological platform. You can use it in healthcare and education in very different ways than in voice, speech, and image recognition. Therefore, it is not clear how AI will develop and which applications will be more important, and that’s actually one of the places where I worry about the dominance of companies like Google, Amazon, and Facebook: they are actually shaping how AI is developing. Their business model and their priorities may be pushing AI to develop in ways that are not advantageous for society, and certainly not for creating jobs and demand for labour.

We are very much at the beginning of the process of AI and we definitely have to be alert to the possibility that AI will have potentially destructive effects on the labour market. However, I don’t think that it is a foregone conclusion, and I actually believe there are ways of using AI that will be more conducive to higher wages and higher employment.

  4. Regarding the potential polarisation between high and low-skilled labour, do you think that the government could address this issue with universal basic income?

There is a danger – not a certainty, but a danger – that it will polarise, and that even if we use AI in a way that simplifies certain tasks, it may still require some numeracy and some social skills that not all workers have, resulting in probable inequality and displacement effects.

That being said, I believe that universal basic income is a bad idea, because it does not solve the right problem. If the problem is one of redistribution, we have much better tools to address it. Progressive income taxation, coupled with something like earned income tax credits or negative taxation at the bottom, would be much better for redistributing wealth, without wasting resources on people who don’t need the transfer. Universal basic income is extremely blunt and wasteful, because it hands transfers to many people who shouldn’t get them, whereas taxation can do much better.

On the one side, I fear that a lot of people who support universal basic income are coming from the part of the spectrum which includes many libertarian ideas on reducing transfers, and I would worry that universal basic income would actually reduce transfers and misdirect them. On the other side, they may be coming from the extreme left, which doesn’t take the budget constraints into account, and again, some of the objectives of redistributing could be achieved more efficiently with tools like progressive income taxation.

Even more importantly, there is another central problem that basic income not only fails to deal with, but actually worsens: I think a society which doesn’t generate employment for people would be a very sad society and would have lots of political and social problems. This fantasy of people not working while enjoying a good living standard is not a good fantasy. Whatever policy we use should be one that encourages people to obtain a job, and universal basic income would discourage people from doing so, as opposed to tax credits on earned income, for example.

  5. In a scenario where individuals are substituted and fewer people work, how could governments obtain the revenue they are not getting from income taxation? Could taxing robots be a possibility?

I think that this is a bad way of approaching the problem, because when you look at labour income, there is certainly enough to allow more redistributive taxation, and no clear need to tax robots. However, we should also think about capital income taxation more generally: there may be reasons for taxing robots, but these relate more to production efficiency and excessive automation. I think that singling out robots as a revenue source distinct from other capital stock would be a bad idea. If, for example, you want taxes to raise revenue, then land taxes would be a much better option than robot taxes – which does not mean we should dismiss the idea of taxing robots. The discussion is confusing because there are efficiency reasons (giving the right incentives to firms) and revenue-raising reasons for taxing, and public debate, shaped by Bill Gates and others, is not helping to dispel this confusion.

In terms of sharing wealth, I think that robots do not create new problems compared to other forms of capital. I think it was a confusion of Marx to think of the marginal product of capital in very complex ways – that everything that goes to capital is somehow theft – and if neoclassical economics has one contribution, it is to clarify that. I personally believe there are legitimate reasons for thinking that there is excessive automation. And if there is excessive automation, there are Pigouvian reasons for taxing robots, or actually for removing subsidies to robots, of which there are many. But that is the discussion we need to have.

  6. There has recently been optimism with regard to the future of AI and the role it could play, for example, in detecting corruption or improving education. You have made the distinction between replacing and enabling technologies. Where does one draw the line between the two?

That is a great question. In reality, of course, automation and replacing technologies merge with technologies that improve productivity. A great example would be computer-assisted design. Literally interpreted, that would be a labour-augmenting technology, because it makes the workers who are working in design more productive. At the same time, however, it may have the same features as automation technology, because with computer-assisted design, some of the tasks a draughtsman would do are automated. If you do it once, you can do it repeatedly.

So that is a grey area, but it’s okay, because the conceptually important point to recognise is that different types of technologies have very different effects. Recognising this is an antidote to the argument that improving productivity through technology will always benefit labour; we actually need to think about what new technologies do and how the increase in productivity will affect labour.

But it is also very important for the discussion regarding AI to point out that AI, as opposed to industrial robot automation, is not necessarily – and does not have to be – labour replacing. There are ways in which you can use it to create new tasks for labour or increase productivity. This is what I think will play out in real time in the future of AI.

  7. In 2017, you wrote an article for Foreign Policy, “We are the last defence against Trump”, which questioned the belief that institutions are strong enough to prevent a man like Donald Trump from overriding the rule of law. In your view, should economists come off the fence on current affairs? Is it possible to express an opinion without sacrificing the intellectual rigour one expects from a researcher?

I think so. First, there are important personal responsibilities that are crosscutting. Second, there is a danger of having the perfect be the enemy of the good.

On the first one, I think that people have to make their own choices as to what is acceptable and what is not. Some things are just within the realm of “I prefer high taxes, you prefer low taxes”, and that is quite a reasonable thing. But some other issues may be a real threat to democracy, to other aspects of institutions, and to minorities that are disempowered. From there, it is important to recognise that there are some lines that should not be crossed, or, if they are crossed, that some people need to defend them vocally. Any analogy to the Nazi period is fraught with danger, but it bears saying that, of course, in hindsight, every academic should have walked out of the universities that were managed by Nazis, that were firing Jewish scholars, or that were teaching jurisprudence according to National Socialism. That has nothing to do with whether you have evidence of one thing versus another – I think that there are some lines. Similarly, and without saying anything as provocative as drawing parallels between Trump and the Nazis, I think that it is important for people, in general, to defend democracy against the onslaught it is receiving from Trump’s administration and the circle of people around him. I will say openly to everybody that it is wrong for any economist or academic to go and work for Trump; I would certainly never consider doing so, and would consider distancing myself from anybody who does.

But that is on the personal side. On the social science side, there is a lot we do not know. Everything we know is subject to standard errors and external validity constraints, but to refuse to act or to condemn would be to let the perfect be the enemy of the good. On the basis of what we know, we know how democracies fail, we know how certain aspects of American institutions are actually weaker than people think, and we know how changes in policies against minorities would have terrible effects for certain groups. On that basis, articulating criticism of certain policies and certain politicians is also a good use of the knowledge we have accumulated.

by Valérie Furio, Gökçe Gökkoca, Konrad Lucke, Paula Navarro, and Rémi Perrichon

The price of two percent inflation

Though it is still considered a rather unconventional form of monetary policy, quantitative easing (QE) has become a commonly used tool of central banks around the world. It involves the central bank purchasing government bonds and other financial assets from the market. QE was first used in Japan in the 2000s and later in the US in the wake of the 2008 financial crisis, when the Federal Reserve launched three consecutive QE programmes to stimulate the economy. Other central banks, such as the Bank of England and the European Central Bank (ECB), soon followed.

Quantitative easing is an expansionary monetary policy in which the central bank buys vast amounts of debt to increase liquidity and stimulate the economy. When a country faces the threat of deflation, a common response of the central bank is to decrease the key interest rate and thus raise inflation. However, when interest rates are already close to zero, the central bank is caught in a liquidity trap, where standard monetary policy becomes ineffective. It then resorts to QE to prevent deflation. On top of increasing liquidity, buying large quantities of government debt allows the central bank to influence long-term interest rates, on which its standard monetary policy tools have no effect.

One of the primary tasks of a central bank is to provide price stability, which means keeping the inflation rate low and steady. In practice, central banks usually aim for inflation of around two per cent: low enough not to harm the economy, yet high enough to keep the immediate risk of deflation at bay.

As a response to the European debt crisis, the ECB lowered its key interest rates to close to zero. Thus, when inflation in the euro area dropped in 2014, with core inflation just above 0.5% and the consumer price index even turning negative, the European System of Central Banks (ESCB) started the asset purchase programme (APP), a QE programme in which each eurozone state’s government debt is acquired by its respective national central bank. The programme started in March 2015 with average monthly net purchases of 60 billion euros. As the graph shows, this amount varied over the following years, peaking at 80 billion euros and being reduced to a minimum of 15 billion in September 2018, as the programme supposedly comes to a gradual end. In total, the ESCB currently holds around 2.5 trillion euros of assets as a result of the APP.

[Chart: Inflation in the euro area]

An intervention of such magnitude in financial markets does not come without unintended effects. Let us start by stating the seemingly obvious: inflation in the euro area increased throughout the programme (as can be seen in the graph), most probably as a direct result of the APP, and is now at approximately the targeted level of two per cent. Thus far, QE has achieved its purpose.

Curiously, the effect of the APP on the inflation rate turned out considerably smaller than anticipated. Apart from intriguing macroeconomists around the globe, this phenomenon poses a potential risk to the European economy. Could the effect on the inflation rate simply be delayed, resulting in excessive and harmful inflation in the coming years? Critics of QE often name this as a risk of the policy. However, the central bank usually has the option of counterbalancing this effect by reducing its holdings, which would tend to diminish inflation. This strategy has risks of its own: applied too drastically, it could lead to deflation, the very phenomenon QE is trying to prevent. Between 2012 and 2014, the ECB reduced its holdings by around one trillion euros while core inflation simultaneously dropped by one percentage point, which arguably pushed the ECB to create the current QE programme. It seems that preventing either too much or too little inflation is a matter of finding the right balance. It is important to note that our understanding of inflation is at best incomplete, and thus policymakers should be on guard for potential unforeseen effects.

Another side effect of QE is the inevitable decrease in the profitability of saving. Whether you are putting money aside for your retirement or for your children’s education, the growth of your savings is dramatically reduced by low interest rates. A less commonly cited but nonetheless significant cause of the growing support for right-wing populist parties is voters’ feeling of being cheated out of their savings yields. In addition, low interest rates disadvantage not only private savers, but also any institution that finances itself through returns on a capital fund.

What differentiates the APP from previous QE programmes in Japan or the US is the fact that, because of the common currency, it has to be applied simultaneously in all eurozone states in order to be effective. This can create several major complications.

First, in any country applying QE, there is a risk that the debt bought by the central bank is not repaid. In the case of private debt, this is a minor problem, since the amounts bought from any one institution are relatively small. Further, as long as the state’s debt situation is “healthy”, purchases in the public sector bear little risk, since they effectively consist of money transfers between two components of public authority.

However, the situation becomes significantly more complicated in a multinational construct of states. Let us take a closer look at the way the European QE programme is constructed. APP is the umbrella term comprising several different programmes, the largest of which is the public sector purchase programme (PSPP), making up around 80% of the APP.

[Chart: APP monthly net purchases, by programme]

The government bonds are acquired by the ESCB in such a way that each national central bank buys exclusively its own government’s bonds. This rule was created to prevent one country (or its central bank) from having to step in to avert the bankruptcy of another state. In principle, this seems consistent with article 125 of the Lisbon Treaty (TFEU), better known as the “no bail-out” clause, which states that the union and the member states “shall not be liable for or assume the commitments of central governments”. However, the PSPP resolution empowers the ECB Council to distribute the liabilities for the acquired bonds across all eurozone states in exceptional circumstances. In such an event, would a redistribution of liabilities be in accordance with the treaties, and what would be its consequences for the eurozone states?

In July 2012, Mario Draghi announced that “the ECB is ready to do whatever it takes to preserve the euro.” This statement was given substance through the Outright Monetary Transactions (OMT) programme announced in September 2012. Yet the programme was never actually executed, since the mere announcement was sufficient to calm financial markets: investors no longer feared that a European state would be unable to repay its debts.

The significance of Mario Draghi’s statement is twofold. On the one hand, it succeeded in averting a possible crisis. On the other hand, it shows the ECB’s determination to prevent a state bankruptcy even if this means making other member states liable for the commitments of the insolvent state, the legal status of which is at least questionable.

However, the problem has another, more structurally profound component. The assurance that a state facing bankruptcy will be saved leads to a moral hazard: no state has a sufficient incentive to reduce its debt. Why restrict yourself to a balanced budget, or at least one leading to a sustainable debt situation, when you can have the benefits of an expanded budget without suffering its negative consequences?

In the eurozone, this problem of moral hazard is not only embedded in the possibility of OMTs; it is already in practice through quantitative easing. Firstly, states are almost encouraged to incur more debt by the ECB keeping interest rates artificially low. Secondly, the ECB is practically providing the money for them to incur more debt by buying their bonds. Now, article 123 of the TFEU prohibits “direct purchases” of member states’ government debt by the ESCB, but since the QE purchases are conducted exclusively on secondary markets, it is not clear whether they actually count as “direct purchases”. This question, along with the possible breach of article 125 TFEU explained above, is currently being debated before the European Court of Justice.

Putting aside the legal controversies surrounding QE in Europe, the question arises whether the risks connected to this policy outweigh its benefits. Is it beneficial to condone low saving yields, uncertainty about future inflation, and the moral hazard of incentivising the issuance of debt, just to raise inflation to two per cent? Or is the financing of indebted states part of an unofficial purpose of the APP? Both questions are hard to answer. Independent of any precise answer or opinion, it is important to understand the workings and effects of the APP that distinguish it from preceding QE programmes in other countries.

by Konrad Lucke

 

Is your internet service provider throttling you?

Net neutrality can be defined as the principle that an internet service provider (ISP) such as AT&T or Verizon treats all legal data online equally, regardless of its sender and receiver. Thus, under net neutrality, ISPs cannot slow down, block, or charge consumers extra fees for certain content. This is how the internet has always worked.

There has been a strong debate on net neutrality for a while, and the parties involved have strong arguments on whether it should or should not be abolished.

On the one hand, we have ISPs claiming that without net neutrality they would be able to manage congestion more efficiently. Moreover, abolishing net neutrality would provide them more incentives to invest in capacity, thus leading to faster overall service.

On the other hand, we have content providers (CPs) such as Google and Netflix arguing that the net neutrality regime has been one of the main drivers of growth and innovation on the internet. Without net neutrality, it would be very difficult for new companies to thrive, thus limiting innovation on the web.

In December 2017, the FCC voted to repeal net neutrality, and the repeal took effect in June 2018.

David Choffnes, an assistant professor of computer science at Northeastern University, developed the mobile application “Wehe”, which allows you to observe whether your data is being throttled by your ISP. Throttling occurs when web content is deliberately slowed, such as a streaming service delivering low quality instead of HD. The Wehe app has been downloaded by more than 100,000 consumers all over the globe. Results published in News@Northeastern, a platform with the latest news, updates, and announcements from Northeastern University, show that almost every ISP in the USA is throttling data.

The most concerning observation is that ISPs did not even wait for the repeal to take effect: throttling started as early as January 2018. Also, from January to May, the app detected differentiation in throttling. Differentiation occurs when a certain type of traffic is throttled more than others. Indeed, CPs such as Netflix, YouTube, and Amazon (video) have reported that their traffic has been performing poorly.
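To give a flavour of how such detection works: tools of this family typically replay a recorded traffic trace twice – once as-is, so the ISP can recognise the content, and once with the payload bytes randomised – and compare the two throughput distributions statistically. The toy sketch below (made-up numbers and a simplified test; not Wehe’s actual protocol or code) illustrates the comparison step.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_differentiation(original_kbps, control_kbps, significance=0.05):
    """Compare throughput samples of a replay that looks like real app
    traffic against a control replay with randomised payload bytes.
    Significantly lower throughput for the recognisable traffic
    suggests content-based throttling."""
    _, p_value = ks_2samp(original_kbps, control_kbps)
    throttled = (p_value < significance
                 and np.mean(original_kbps) < np.mean(control_kbps))
    return throttled, p_value

# Hypothetical throughput traces (kbps) from two back-to-back replays.
rng = np.random.default_rng(42)
original = rng.normal(1500, 100, 50)  # streaming-like traffic, capped by the ISP
control = rng.normal(4000, 300, 50)   # same bytes, unrecognisable payload
print(detect_differentiation(original, control))
```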

Early economic research on net neutrality, prior to the repeal, did not reach a consensus on the correct way to design policies for its end. However, a few studies, such as Peitz et al (2015) and Choi et al (2014, 2015), suggest that there exist some gains from rationing data, i.e. ending net neutrality. For example, throttling certain sites might lead to better performance on time-sensitive sites such as Skype. Delaying less time-sensitive sites has little social cost, since the delay is merely an inconvenience: the content is still delivered.
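As a stylised illustration of this congestion argument (a toy queueing simulation, not taken from the cited papers), the sketch below compares a first-come-first-served link with one that prioritises time-sensitive “call” packets over “bulk” transfers. Prioritisation cuts the delay of the time-sensitive traffic sharply, while the bulk traffic is merely delayed, not lost.

```python
import random
from collections import deque

def simulate(priority, steps=100_000, seed=1):
    """Discrete-time single-link queue with two traffic classes.
    With priority=True the link always serves time-sensitive 'call'
    packets first (a stylised end of net neutrality); otherwise it
    serves the oldest packet regardless of class (FIFO)."""
    rng = random.Random(seed)
    q_call, q_bulk = deque(), deque()
    delays = {"call": [], "bulk": []}
    for t in range(steps):
        # Arrivals keep the link moderately congested (load ~0.9).
        if rng.random() < 0.3:
            q_call.append(t)
        if rng.random() < 0.6:
            q_bulk.append(t)
        # Serve one packet per time step.
        if priority:
            q = q_call if q_call else q_bulk
        elif q_call and q_bulk:
            q = q_call if q_call[0] <= q_bulk[0] else q_bulk
        else:
            q = q_call or q_bulk
        if q:
            delays["call" if q is q_call else "bulk"].append(t - q.popleft())
    return {k: round(sum(v) / len(v), 1) for k, v in delays.items()}

print("FIFO    :", simulate(priority=False))
print("priority:", simulate(priority=True))
```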

Whether users are better off under this setup, i.e. without net neutrality, will depend on whether the gains outweigh the distortions created by throttling. An important point is determining who will end up paying for “prioritised delivery”: users or CPs. Consequently, it is important to determine how ISPs will adjust their fees, which is particularly difficult since we are dealing with a two-sided market.

A major concern, according to Choi (2010), is that ISPs could manage congestion so as to extract rents. For example, in peak hours they could offer a slow delivery service for “free” and a paid premium fast delivery, thereby engaging in price discrimination. Moreover, according to Musso and Rossen (1978), ISPs could degrade the free service on purpose to force users or CPs to subscribe to the premium service. If everyone subscribes to the premium service, then, unless ISPs invest in capacity, the premium service would not be any faster than the regular one.

However, Choffnes’s results show that ISPs are currently throttling 24/7. For the moment, it does not seem that they will engage in on-peak/off-peak pricing. Nevertheless, there is clearly differentiation, and streaming sites seem to be targeted more than others. Concerning fees, there have not been any modifications since early 2017, when ISPs increased their fees.

In conclusion, for the moment the end of net neutrality does not seem to have had a positive impact on either consumers or CPs. ISPs have started to distort their service; yet we do not have enough evidence to determine their next steps. The only certainty is that more changes are coming.

by Saí Bravo