The growth of world air traffic and its impact on climate change


Driven by economic growth in East Asia, world air traffic has increased at a very rapid pace in recent decades. As more and more people join the middle class in emerging countries, the market for domestic flights is expanding rapidly, which in turn boosts demand for commercial flights across the world. As a result, air traffic has quadrupled in 30 years and continues to grow. This is good news for cities like Toulouse, as aircraft production should keep rising steadily. However, because the environmental impact of the transport sector is already high, growth in this sector raises the question of how the trend will affect climate change. While it may seem that we are heading toward an environmental catastrophe, progress in technology and fuel consumption may lessen the ecological footprint of the aeronautical industry.

Air Traffic Around the World

According to the International Civil Aviation Organization (ICAO), the UN specialised agency responsible for international civil aviation standards, 4.3 billion passengers embarked on scheduled commercial flights in 2018, a 6.4% increase over the previous year. To put this in perspective, that is about twice the growth rate of the world’s real GDP. This impressive increase is nothing new, however: air traffic growth has been remarkably stable over the past decades, doubling every 15 years, and has proven resilient to external shocks such as recessions or the 9/11 terrorist attacks.
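
As a quick back-of-the-envelope check of these figures (our arithmetic, not the ICAO’s): with a constant annual growth rate g, traffic doubles every

\[
t_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \text{ years}, \qquad \frac{\ln 2}{\ln 1.047} \approx 15, \qquad \frac{\ln 2}{\ln 1.064} \approx 11,
\]

so the long-run pattern of doubling every 15 years corresponds to roughly 4.7% annual growth – consistent with traffic quadrupling in 30 years – while 2018’s 6.4% rate would, if sustained, double traffic in about 11 years.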


Paul Chiambaretto, Professor of Marketing and Strategy at Montpellier Business School, argued in The Conversation in 2019 that the rapid expansion of air traffic results from both demand-side and supply-side factors. On the demand side, he stresses the tight link between a country’s level of development and its consumption of air transport. The International Air Transport Association estimates the income elasticity of demand for airplane tickets at between 1.5 and 2, meaning that a 1% increase in national income translates into 1.5 to 2% more tickets purchased. Asia offers a prime example of this relationship between economic growth and air transport demand: passenger traffic in the region, expressed in revenue passenger-kilometres (RPK) – the number of passengers multiplied by the distance travelled, a standard measure of air traffic in the industry – grew by 9.5%, and Asia now accounts for 34.8% of world traffic. Furthermore, planes are used more than ever for freight transport, which pushes demand for commercial flights even further.
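
In formula terms (a standard definition, not specific to the IATA study), the income elasticity of demand is the percentage change in quantity demanded per one-percent change in income:

\[
\varepsilon_Y \;=\; \frac{\%\,\Delta Q}{\%\,\Delta Y} \;\approx\; \frac{\partial Q}{\partial Y}\cdot\frac{Y}{Q},
\]

so an elasticity between 1.5 and 2 means a 1% rise in national income raises ticket demand by 1.5 to 2%, and a 3% rise by roughly 4.5 to 6%.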

On the supply side, the emergence of low-cost carriers, especially in Europe, has forced other airlines to lower their prices. For prices to fall, economic theory requires that supply increase faster than demand, which says a lot about the weight that companies such as Ryanair and easyJet carry in the market.

Airplanes and Global Warming

The growth of air transport seems a priori incompatible with the international community’s objective of limiting global warming. Transoceanic flights require tens of thousands of litres of jet fuel, for an ever-increasing number of departures. In 2017, the transport sector was responsible for 25% of the European Union’s greenhouse gas emissions. Even though air transport generates only a small share of these transport emissions, their absolute level remains very high.

However, airplane manufacturers have implemented new technologies to reduce fuel consumption and thereby mitigate the environmental footprint of aviation. According to the ICAO, aircraft operations are now 70% more fuel-efficient than they were in the 1970s, and the organisation states that reducing aircraft noise and emissions is one of its main priorities. Airlines and manufacturers are committed to deploying new systems that limit greenhouse gas emissions, focusing mainly on three fields: improving airport infrastructure, adapting aircraft technology, and increasing the use of sustainable fuels.

In particular, the CORSIA program, adopted in October 2016 in Montreal, Canada, is one of the first binding international environmental agreements in history. This ambitious program aims to cap the carbon emissions of international aviation at their 2020 level. Under the agreement, 65 countries representing 87% of world air traffic committed, for the first six years, to halting the increase in air transport emissions from 2020 to 2026. From 2027, all 191 member countries – with some exceptions for less-developed and isolated countries – will be bound by the agreement. As a result, a carbon market will be created, forcing companies that pollute more to buy credits from less-polluting companies to offset their emissions.

At a time when progress on reducing carbon emissions is scarce, the progress made by the aviation industry shows that there is hope. Not only has the sector succeeded in slowing the growth of its greenhouse gas emissions despite rapidly rising demand, but it has also set ambitious targets for the coming decade. Hopefully, many other international initiatives will follow to curb worldwide greenhouse gas emissions. That being said, it was nice to see Greta Thunberg in Montreal with about half a million people for the Global Climate Strike on 27 September 2019. Greta, may you inspire all of us to fight climate change together!

by Sébastien Montpetit


References:

  1. International Civil Aviation Organization (2019). The World of Air Transport in 2018. https://www.icao.int/annual-report-2018/Pages/the-world-of-air-transport-in-2018.aspx
  2. Schulz, E. (2018). Global Networks, Global Citizens: Global Market Forecast 2018-2037. Airbus GMF 2018.
  3. Chiambaretto, P. (2019). Trafic aérien mondial, une croissance fulgurante pas prête de s’arrêter. The Conversation, 8 May 2019. https://theconversation.com/trafic-aerien-mondial-une-croissance-fulgurante-pas-prete-de-sarreter-116107
  4. International Air Transport Association (2008). Air Travel Demand. IATA Economics Briefing No 9.
  5. Eurostat (2019). Greenhouse gas emission statistics – emission inventories. https://ec.europa.eu/eurostat/web/environment/air-emissions
  6. Représentation permanente de la France auprès de l’Organisation de l’Aviation Civile Internationale (2019). L’Assemblée de l’OACI adopte une résolution historique relative à un mécanisme mondial pour la compensation des émissions de CO2 de l’aviation internationale. https://oaci.delegfrance.org/L-Assemblee-de-l-OACI-adopte-une-resolution-historique-relative-a-un-mecanisme

Should we break up Big Tech?

In recent years, digital technologies have profoundly changed many aspects of our daily lives, from e-commerce to internet search, travel, communication, and entertainment. While these changes have mostly benefited consumers, some voices have started to speak up against the power and influence of the Big Tech companies – Google, Amazon, Facebook, and Apple in particular – accusing them of stifling innovation, dealing unfairly with their suppliers, and violating our privacy, among other things. Elizabeth Warren, one of the most prominent candidates for the Democratic nomination in the U.S., recently called for a much tougher policy approach towards Big Tech, proposing in particular to dismantle some of these companies – a call that has found a certain echo in the press and among politicians.

To understand whether we should break up – some of – the big tech companies, it is important to understand why they have become so big, whether their size actually harms consumers, and whether a break-up is an appropriate remedy.


Many digital markets are characterised by economies of scale and network effects (see Shapiro and Varian, 1998). The former corresponds to the idea that average cost falls with the number of units sold, which is typical of information goods: their production entails a large fixed cost, but they can be reproduced at a small marginal cost. For instance, once a search engine algorithm has been developed – at considerable cost – answering an individual query is virtually costless.
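
A textbook formalisation makes this concrete (our illustration, not taken from Shapiro and Varian): with a fixed development cost F and a constant marginal cost c, producing q units costs F + cq, so the average cost

\[
AC(q) \;=\; \frac{F}{q} + c
\]

declines monotonically towards c as output grows – exactly the search-engine case, where F is large and c is near zero.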

Network effects are the demand-side equivalent of economies of scale: a product is more valuable the more users it has. While a social network like Facebook is a natural example of direct network effects, other platforms exhibit indirect network effects: Android users exert a positive externality on each other, not because communication is easier between Android devices, but because more Android users attract more application developers to the platform (see Caillaud and Jullien, 2003).
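
One common way to formalise direct network effects (again our illustration, not the cited papers’ model) is to let each user’s utility rise with the network size n:

\[
u_i(n) = v + \gamma n \;\; (\gamma > 0), \qquad \text{total value} \;\approx\; n\,(v + \gamma n) \;=\; vn + \gamma n^2,
\]

so total network value grows roughly quadratically in users, which is why a small lead in users can snowball into a large lead in value.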

The use of data by technology companies is a particularly important source of returns to scale and network effects: as firms accumulate data, they can offer better products or services, or produce them more cheaply. Big data also allows firms to realise economies of scope, that is, to enter new markets thanks to the insights generated on their primary market – having access to your email data, for example, makes it possible to offer a better calendar app.

By giving an advantage to larger firms, economies of scale and network effects can result in market tipping, that is, in one firm becoming dominant as a natural outcome of the competitive process. The prospect of monopoly is worrying, but two forces push in the opposite direction. First, while possible, tipping is not guaranteed even in the presence of network effects. When these effects are of intermediate strength, they can even intensify competition, as the fight for additional users becomes more intense. Second, even when they lead to monopoly, network effects and economies of scale can induce firms to compete harder to become the early leader: competition for the market, rather than in the market.

Breaking up a monopolist in such a market, by creating several smaller networks, could result in increased competition. For instance, competing social networks might be induced to offer better privacy protection in order to attract consumers. But breaking up a network fragments the market, leaving some groups of consumers unable to interact with others. This could push consumers to switch networks in order to enjoy more interactions, eventually leading back to market tipping and undoing the break-up.

The big technology firms have not passively enjoyed the rents of their position as natural monopolists; they have instead used a variety of strategies to protect or extend it, some of which have been deemed anticompetitive. Google, for instance, has been fined three times by the European Commission. One set of practices consisted of imposing restrictive clauses – exclusivity, tying – on its trading partners, thereby preventing rivals from competing on the merits. For instance, a rival search engine would have had to develop its own application store – or pay a lot of money – to convince a device manufacturer to choose it over Google and its very popular app store, Google Play (see De Cornière and Taylor, 2018).

Another practice consisted of systematically favoring Google Shopping at the expense of other comparison shopping services on Google’s search engine. This issue of “own-content bias” has taken on a new dimension with the emergence of internet gatekeepers such as Google or Amazon, the latter having also been accused – but not yet fined – of favoring its own brands on its platform. Own-content bias may also take other forms, as when Spotify is required to pay Apple a fee when consumers subscribe through iOS, putting it at a disadvantage against Apple Music. Platforms leveraging their dominant position into complementary markets is a key motivation for the proponents of breaking up these firms.


Despite these legitimate concerns over exclusionary practices by multiproduct incumbents, it is not clear that a break-up – say, separating the search and shopping activities of Google – would be desirable. First, in the presence of complementary products, common ownership enables firms to better coordinate their production decisions and achieve superior outcomes, which is why competition authorities view vertical mergers more favorably than horizontal ones. Second, being able to use the data acquired on their dominant market in another market gives these firms further incentives to improve their core product: forcing, say, Amazon to divest its personal assistant business would probably weaken, at the margin, its incentives to offer cheap products on its platform. Third, a break-up in itself would not be enough to ensure the neutrality of a platform, since it could still sign contracts with some participants granting preferential treatment in exchange for a commission, a common practice in many industries (see De Cornière and Taylor, forthcoming).

A more sensible course of action consists in monitoring the behavior of dominant platforms more closely and intervening more quickly. At the moment, antitrust actions take too long to be carried out, and by the time they conclude the markets have changed, usually to the detriment of smaller rivals. Several recent reports make related arguments, advocating a more responsive competition policy or the creation of a sectoral regulator (see the UK report “Unlocking digital competition: report from the digital competition expert panel”, or Crémer, de Montjoye and Schweitzer, 2019).

Tech giants have also been accused of using acquisitions to cement their market power, buying out start-ups that could potentially threaten their dominant position. The typical illustration is Facebook, with its acquisitions of Instagram and WhatsApp – and its failed bid for Snapchat. Google and Amazon have also been very active acquirers: over the past ten years, these three firms have bought around 300 companies, often relatively young ones. Most of these acquisitions have not been reviewed by competition authorities because they fall below the various turnover thresholds.

One concern is that some of these acquisitions are “killer acquisitions”, i.e. made only to shut down potential competition, a phenomenon recently studied in the pharmaceutical sector (see Cunningham et al., 2018). Things look different in the tech sector, as many of the targets offer products complementary to the incumbents’, and the prospect of being bought out by a big firm is a strong incentive to innovate. At the same time, economies of scope might turn a firm that offers a complementary product today into a rival tomorrow, and it is hard to predict when this will be the case.

In markets such as these, with young firms and rapidly evolving technologies, competition authorities are bound to make errors, either of type I – blocking a pro-competitive merger – or of type II – approving an anticompetitive one. The current situation is very asymmetric: none of the reviewed acquisitions by the Big Tech firms has been blocked. This is certainly suboptimal, especially since the cost of a type II error, namely the elimination of competition, is probably much larger than that of a type I error. While recognising that predicting the effects of a merger is especially difficult in innovative markets, moving the needle towards a stricter approach to mergers in the digital sector seems warranted.

As I have tried to show in this brief essay, ensuring effective competition in technology markets will require a more elaborate answer than a break-up, the efficacy of which is highly doubtful. Several approaches have been proposed, and the debate is still raging. These are exciting times to be an industrial economist!

by Alexandre de Cornière


References

Caillaud, Bernard, and Bruno Jullien. “Chicken & egg: Competition among intermediation service providers.” RAND Journal of Economics (2003): 309-328.

Crémer, Jacques, Yves-Alexandre de Montjoye, and Heike Schweitzer. “Competition policy for the digital era.” European Commission, 2019.

Cunningham, Colleen, Florian Ederer, and Song Ma. “Killer acquisitions.” Working paper (2018).

De Cornière, Alexandre, and Greg Taylor. “Upstream Bundling and Leverage of Market Power.” CEPR working paper, 2018.

De Cornière, Alexandre, and Greg Taylor. “A Model of Biased Intermediation.” RAND Journal of Economics, forthcoming.

Shapiro, Carl, and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

UK Digital Competition Expert Panel. “Unlocking digital competition: report from the digital competition expert panel.” 2019.

The winners and losers of the French 2008 feebate policy

In 2008, the French government introduced a policy taxing cars with high carbon emissions and rebating cars with low carbon emissions, better known as a feebate policy or bonus-malus écologique. This type of policy is appealing for two reasons: first, it provides incentives to purchase less-polluting cars; second, it can be designed to be revenue neutral, since the revenue collected through the taxes subsidises the rebates.

In a recent paper, I conduct a quantitative evaluation of this policy, with a particular focus on its distributional effects: it is particularly relevant in this case to identify the winners and losers of the policy. I also analyse the effect of this policy, which is based on carbon emissions, on other local pollutants such as particulate matter and nitrogen oxides. By construction, a policy that targets carbon emissions favours diesel cars, which emit more nitrogen oxides and particulate matter than petrol cars. These local pollutants are known to have a direct impact on air quality and hazardous effects on health. While carbon emissions have a global impact, local pollutant emissions raise the question of the distributional impacts of the feebate policy in terms of health effects.

To measure these effects, I build a structural model of market equilibrium for the automobile industry. This involves estimating the supply of and demand for the different car models using data on car characteristics and sales, which can then be used to simulate what the market would have looked like had there been no feebate policy in 2008. By comparing the observed market equilibrium with the counterfactual one, I can deduce the policy’s causal effect. Relying on a structural model is especially useful because some outcomes of interest cannot be observed directly but can be expressed in terms of the model parameters. This is the case for car manufacturers’ profits and consumer surplus.
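
To make the counterfactual logic concrete, here is a deliberately minimal sketch of the approach – a toy logit-demand model with single-product Bertrand competition and made-up numbers, not the paper’s richer specification – showing how equilibrium prices and sales can be re-solved with and without a feebate:

```python
import numpy as np

# Toy illustration: three car models, logit demand, one firm per model.
# delta = perceived quality, alpha = price sensitivity; feebate > 0 is a
# rebate (lowers what the consumer pays), feebate < 0 is a tax.

def shares(prices, delta, alpha, feebate):
    """Logit market shares; the outside option (no purchase) has utility 0."""
    u = delta - alpha * (prices - feebate)
    expu = np.exp(u)
    return expu / (1.0 + expu.sum())

def equilibrium(delta, alpha, cost, feebate, iters=1000):
    """Solve the Bertrand first-order conditions by fixed-point iteration.
    For single-product logit firms: p_j = c_j + 1 / (alpha * (1 - s_j))."""
    p = cost.copy()
    for _ in range(iters):
        s = shares(p, delta, alpha, feebate)
        p = cost + 1.0 / (alpha * (1.0 - s))
    return p

delta   = np.array([1.0, 1.5, 2.0])   # made-up qualities
cost    = np.array([1.0, 1.2, 1.8])   # made-up marginal costs
alpha   = 1.0
feebate = np.array([0.2, 0.0, -0.3])  # rebate on the clean car, tax on the dirty one

p_policy = equilibrium(delta, alpha, cost, feebate)
p_nopol  = equilibrium(delta, alpha, cost, np.zeros(3))
print("prices with feebate:  ", p_policy.round(3))
print("counterfactual prices:", p_nopol.round(3))
print("sales shift:", (shares(p_policy, delta, alpha, feebate)
                       - shares(p_nopol, delta, alpha, np.zeros(3))).round(3))
```

Note how firms’ equilibrium prices themselves respond to the feebate – the pass-through question taken up below.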

A notable challenge in modelling this market, and in distinguishing the winners from the losers, is to incorporate a rich dimension of heterogeneity in individuals’ preferences for cars and their attributes. I assume that individual heterogeneity in preferences is related to observable demographic characteristics, and I leverage the correlation between the composition of car sales and demographic characteristics at the municipality level. For instance, observing that cars purchased in rural areas tend to be more fuel-efficient than those purchased in urban areas reveals that individuals in rural areas tend to drive more, and are thus likely to be more sensitive to fuel costs than those living in urban areas. Similarly, the sales observed in wealthier municipalities reveal a positive correlation between horsepower and income.

On the supply side, I model the competition between car manufacturers and their pricing strategies, with and without the feebate policy. I do not model the choice of car characteristics and take them as given regardless of the regulatory environment. The marginal cost of each car model is estimated under the assumption that the observed 2008 prices are the optimal prices under the feebate policy. When simulating the market equilibrium absent the feebate, I predict prices and sales for each car model, since both are jointly determined by demand and supply.

What is important here is that, when setting its prices, a firm anticipates that consumers will receive a rebate or pay a tax, and it captures part of that rebate or tax through its prices. How much is left to the consumer depends on the intensity of competition in the market and the market power of car manufacturers.

In the end, the feebate policy improved consumer surplus and firms’ profits by more than the 223 million euros it cost in 2008. I find that the feebate reduced average carbon emissions by 1.56%, while average emissions of local pollutants – carbon monoxide, hydrocarbons, NOx, and PM – all increased. Converted into total annual tons, however, emissions of both carbon dioxide and local pollutants increased. The rise in annual carbon emissions is explained not only by the higher share of diesel cars, which implies more kilometres driven, but also by the increase in the number of cars purchased. Indeed, cars with low carbon emissions, which are already cheap, become even cheaper because of the rebates, so that individuals who would not otherwise have bought a car do buy one, at least in my model. Nonetheless, putting a standard price on carbon and local pollutant emissions still implies that the policy is globally welfare-improving, with an estimated net benefit of 124 million euros.

Shifting the focus to the impact on income distribution, the main insight is that the feebate favoured the middle-income category at the expense of the low- and high-income classes. Moreover, given that the policy was not revenue neutral and generated a net deficit, the feebate could have been made redistributive had it been financed by a tax proportional to income.

Clear winners and losers also appear among car manufacturers, which are typically highly specialised in particular segments: French manufacturers specialise in small, fuel-efficient cars, whereas bigger cars are the mainstay of the German manufacturers. It comes as no surprise that the model points to PSA and Renault, the two French manufacturers, as the winners of the feebate policy: it increased their profits by 3.4% and 4% respectively, considerably more than the 2.1% increase for the industry as a whole. The Fiat group, the Italian manufacturer, increased its profits by 6.2%, while Volkswagen, a German manufacturer very active in the compact car segment, increased its profits by only 0.3%. The other German manufacturers, such as Porsche, BMW, and Mercedes-Daimler, were all severely hurt by the policy.

Finally, looking at the heterogeneity of the policy’s effects on emissions of local pollutants, I find that average emissions increased the most in low-emission municipalities. The policy reduced average emissions of local pollutants in some areas, but a high degree of heterogeneity can be observed across the country.

The analysis concludes with an evaluation of the feebate in terms of redistribution and the limitation of local pollutant emissions. The idea is to ask whether it would have been possible to improve consumer surplus, achieve more redistribution across individuals, or limit the increase in emissions of local pollutants with the same budget and the same effect on average carbon emissions. In this exercise, I restrict the set of alternative policies to simple linear feebates with different slopes for rebates and taxes, as formalised below. Interestingly, I find that average consumer surplus cannot be further improved, while there are large potential gains in terms of profits. Alternative feebate schemes could limit the rise in emissions of local pollutants, but the gains are not very large, and the best outcomes for the different pollutants cannot be achieved with a single feebate scheme: there is a trade-off to be made between the various pollutants.
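
One natural way to write such a scheme (our notation, not necessarily the paper’s) makes the rebate and the tax linear in a car’s CO2 emissions e around a pivot level e0, with separate slopes r and t:

\[
\text{feebate}(e) \;=\;
\begin{cases}
\;r\,(e_0 - e) & \text{if } e \le e_0 \quad (\text{rebate}),\\
\;-\,t\,(e - e_0) & \text{if } e > e_0 \quad (\text{tax}),
\end{cases}
\]

so that varying the pair (r, t) traces out the family of alternative policies compared in the exercise.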

by Isis Durrmeyer

An interview with Daron Acemoglu on artificial intelligence, institutions, and the future of work

The recipient of the 2018 Jean-Jacques Laffont prize, Daron Acemoglu, is the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology. The Turkish-American economist has published extensively on political economy, development, and labour economics, and has won multiple awards for his two books, Economic Origins of Dictatorship and Democracy (2006) and Why Nations Fail (2012), both co-authored with James A. Robinson of the University of Chicago.

The Jean-Jacques Laffont prize is the latest addition to the well-deserved recognition the economist has received for his work, which includes the John Bates Clark Medal from the American Economic Association in 2005 and the BBVA Frontiers of Knowledge Award in Economics in 2017. Despite a schedule heavy with seminars and conferences, Daron kindly set aside some time to offer the TSEconomist his insights on topics ranging from the impact of artificial intelligence on our societies to the role an academic ought to take in public political affairs.


  1. Congratulations on winning the Jean-Jacques Laffont prize. What does this prize represent to you?

I’m incredibly honoured. Jean-Jacques Laffont was a pioneering economist in both theory and the application of theory to major economic problems. I think this tradition is really important for the relevance of economics and for its flourishing over the last two decades or so. The prize is a fantastic way of honouring his influence, and I feel very privileged to have been chosen for it.

  2. Thanks to you and other scholars working on economics and institutions, we now know that the way institutions regulate economic life and create incentives is of great importance for the development of a nation. New players such as Google now possess both the technology and the data needed to efficiently solve the optimisation problems institutions face. This raises a debate on government access to, purchase, and use of this data, especially in terms of efficiency versus possible harm to democracy from the centralisation of political power. What is your take on this?

I think you are raising several difficult and important issues. Let me break them into two parts.

One is whether the advances in technology, including AI and computational power, will change the trade-off between different political regimes. I think the jury’s out and we do not know the answer, but my sense is that it will not change the trade-off so much as it changes the ability of different regimes to survive even when they are not optimal. What I mean is that you can start thinking about the problem of what was wrong with the Soviet Union in the same way that Hayek did: there are problems to be solved, they are just too complex, the government can’t handle them, so let’s hope the market solves them.

Then, if you think about it that way, you may say that governments are getting better at solving such problems, so perhaps we could have a more successful Soviet Union. I think this is wrong for two reasons, which highlight why Hayek’s way of thinking was limited, despite being revolutionary and innovative. One reason is that the problem is not static but dynamic: new algorithms and capabilities create as many new problems, problems we do not even know how to articulate yet. It is therefore naive to think that, in such a changing world, we can delegate decision-making to an algorithm and hope it will do better than the decentralised workings of individuals in groups, markets, communities, and so on.

The second reason is that Hayek’s analysis did not sufficiently emphasise a point that he was aware of and stressed in other settings: it is not just about the capabilities of governments, but about their incentives. It is not simply that governments and rulers cannot do the right thing; they do not have the incentives to do so. And even if they wanted to do the right thing, they lack the trust of the people, and thus cannot obtain the information or implement it. For these reasons, I don’t think that the trade-off between dictatorship and democracy, or between central planning and some sort of market economy, is majorly affected by new technology.

On the other hand, we know that the equilibrium feasibility of dictatorship may be affected. The ability to control information, the Internet, and social media may eventually give much greater repressive capability to dictatorships. Most of the fruitful applications of AI lie in the future and remain to be seen; the exception is surveillance, which is already present and will only expand over the next ten years, in China and other countries. This will have major effects on how countries are organised, even if it may not be optimal for them to be organised that way.

To answer the second part of your question, I think that Google is not only expanding technology, but also posing new problems, because we are not used to companies being as large and dominant as Google, Facebook, Microsoft, or Amazon are. Think of when people were up in arms about the power of companies – the robber barons – at the beginning of the 20th century, leading to the whole Progressive-era sequence of reforms, antitrust and other political reforms: as a fraction of GDP, those companies were about one quarter the size of the ones we have today. I therefore think that the modern field of industrial organisation is doing us a huge disservice by not updating its way of thinking about antitrust and market dominance, with huge effects on the legal framework, among other things. I don’t know the answers, but I know that they don’t lie in reasoning like “the Herfindahl index is not a good measure of competition, so Google may come to dominate everything, but perhaps we are okay” – I don’t think that is a particularly good way of going about things.

  3. Some fear that the dominance of these companies could lead to the growth of inequality. Do you think that AI could play a role in this?

I am convinced that automation in general has already played a major role in the rise of inequality, through changes in the wage structure and employment patterns. Industrial robots are part of that, as are numerically controlled machinery and other automation technologies. Software has been a contributing factor, but probably not the driver in the sense that people initially thought. Projecting from that, one might think that AI will play a similar role, and that is not a crazy projection, although I don’t have much confidence that we can predict what AI will do. The reason is that industrial robotics is a complex but narrow technology: it uses software and, increasingly, artificial intelligence, but it isn’t rocket science. The main challenge is developing robots that can interact with and manipulate the physical world.

AI is a much broader technological platform. You can use it in healthcare and education in very different ways than in voice, speech, and image recognition. It is therefore not clear how AI will develop and which applications will be most important, and that is actually one of the places where I worry about the dominance of companies like Google, Amazon, and Facebook: they are shaping how AI develops. Their business models and priorities may be pushing AI to develop in ways that are not advantageous for society, and certainly not for creating jobs and demand for labour.

We are very much at the beginning of the process of AI and we definitely have to be alert to the possibility that AI will have potentially destructive effects on the labour market. However, I don’t think that it is a foregone conclusion, and I actually believe there are ways of using AI that will be more conducive to higher wages and higher employment.

  4. Regarding the potential polarisation between high- and low-skilled labour, do you think that the government could address this issue with universal basic income?

There is a danger – not a certainty, but a danger – that it will polarise. Even if we use AI in ways that simplify certain tasks, those tasks may still require some numeracy and social skills that not all workers have, resulting in likely inequality and displacement effects.

That being said, I believe that universal basic income is a bad idea, because it does not solve the right problem. If the problem is one of redistribution, we have much better tools to address it. Progressive income taxation, coupled with something like an earned income tax credit or negative taxation at the bottom, would be much better for redistributing wealth, without wasting resources on people who don’t need the transfer. Universal basic income is extremely blunt and wasteful, because it hands transfers to many people who shouldn’t get them, whereas taxation can do much better.

On one side, I fear that many people who support universal basic income come from the part of the spectrum that includes many libertarian ideas about reducing transfers, and I worry that universal basic income would actually reduce transfers and misdirect them. On the other side, supporters may come from the extreme left, which doesn’t take budget constraints into account; again, some of the redistributive objectives could be achieved more efficiently with tools like progressive income taxation.

Even more importantly, there is another central problem that basic income not only fails to address, but actually worsens: I think a society which doesn’t generate employment for people would be a very sad society, with lots of political and social problems. This fantasy of people not working and yet enjoying a good living standard is not a good fantasy. Whatever policy we use should encourage people to obtain a job, and universal basic income would discourage them from doing so, as opposed to tax credits on earned income, for example.

  5. In a scenario where individuals are substituted by machines and fewer people work, how could governments obtain the revenue they are not getting from income taxation? Could taxing robots be a possibility?

I think that this is a bad way of approaching the problem, because when you look at labour income, there is certainly enough of it to support more redistributive taxation, with no particular need to tax robots. However, we should also think about capital income taxation more generally: there may be reasons for taxing robots, but they relate more to production efficiency and excessive automation. Singling out robots as a revenue source, distinct from other capital stock, would be a bad idea. If, for example, you want taxes to raise revenue, then land taxes would be a much better option than robot taxes – which does not mean we should dismiss the idea of taxing robots. The discussion is confusing because there are both efficiency reasons (giving the right incentives to firms) and revenue-raising reasons for taxing, and the public debate – fuelled by Bill Gates and others – is not helping to clear up this confusion.

In terms of sharing wealth, I think that robots do not create new problems compared to other forms of capital. It was a confusion of Marx to think of the marginal product of capital in very convoluted ways – as if everything that goes to capital were somehow theft – and if neoclassical economics has made one contribution, it is to clarify that. I personally believe there are legitimate reasons for thinking that there is excessive automation. And if there is excessive automation, there are Pigouvian reasons for taxing robots, or rather for removing the many existing subsidies to robots. But that is the discussion we need to have.

  6. There has recently been optimism with regard to the future of AI and the role it could play, for example, in detecting corruption or improving education. You have drawn a distinction between replacing and enabling technologies. Where does one draw the line between the two?

That is a great question. In reality, of course, automation and replacing technologies blend into technologies that improve productivity. A great example is computer-assisted design. Literally interpreted, it is a labour-augmenting technology, because it makes the workers who work in design more productive. At the same time, however, it may share features with automation technology, because with computer-assisted design, part of the tasks a draughtsman would perform are automated. If you do it once, you can do it repeatedly.

So there is a grey area, but that’s okay, because the conceptually important point is that different types of technologies have very different effects. Recognising this is an antidote against the argument that improving productivity through technology will always benefit labour; we actually need to think about what new technologies do and how the increase in productivity will affect labour.

It is also very important, in the discussion of AI, to point out that AI, as opposed to industrial robot automation, is not necessarily – and does not have to be – labour replacing. There are ways of using it to create new tasks for labour or to increase productivity. This is what I think will play out in real time in the future of AI.

  7. In 2017, you wrote an article for Foreign Policy, “We are the last defence against Trump”, which questioned the belief that institutions are strong enough to prevent a man like Donald Trump from overriding the rule of law. In your view, should economists come off the fence on current affairs? Is it possible to express an opinion without sacrificing some of the intellectual rigour one expects from a researcher?

I think so. First, there are important personal responsibilities that are crosscutting. Second, there is a danger of letting the perfect be the enemy of the good.

On the first one, I think that people have to make their own choices as to what is acceptable and what is not. Some things are just within the realm of “I prefer high taxes, you prefer low taxes”, and that is quite reasonable. But other issues may pose a real threat to democracy, to other aspects of institutions, and to minorities that are disempowered. From there, it is important to recognise that there are some lines that should not be crossed – or, if they are crossed, that some people need to defend them vocally. Any analogy to the Nazi period is fraught with danger, but it bears saying that, in hindsight, every academic should of course have walked out of the universities that were managed by Nazis, that were firing Jewish scholars, or that were teaching jurisprudence according to National Socialism. That has nothing to do with whether you have evidence of one thing versus another – I think that there are some lines. Similarly, and without saying anything as provocative as drawing parallels between Trump and the Nazis, I think it is important for people in general to defend democracy against the onslaught it is receiving from Trump’s administration and the circle of people around him. I will say openly to everybody that it is wrong for any economist or academic to go and work for Trump; I would certainly never consider doing so, and would consider distancing myself from anybody who does.

But that is on the personal ground. On the social science side, there is a lot we do not know. Everything we know is subject to standard errors and external validity constraints, but to refuse to act or to condemn would be to let the perfect be the enemy of the good. On the basis of what we know, we know how democracies fail, we know how certain aspects of American institutions are actually weaker than people think, and we know how changes in policies against minorities would have terrible effects on certain groups. On that basis, articulating criticism of certain policies and certain politicians is also a good use of the knowledge we have accumulated.

by Valérie Furio, Gökçe Gökkoca, Konrad Lucke, Paula Navarro, and Rémi Perrichon

Fake news: Can there be too much information?

In today’s world, we receive a staggering amount of information in an instant, via Facebook, Twitter, or any other social media we choose to follow. At first glance, this looks like good news: most simple economic models assume perfect information, so more information should be welcome. The catch is that one must be able to tell the true from the fake.

With so many people sharing and spreading whatever information they want, one might think that the term “fake news” is relatively new. In reality, fake news has been part of mankind ever since people understood how powerful information is – extortion and gossip are simple examples. What has changed is that the way information is communicated now transforms the underlying message to such an extent that, in the end, one no longer knows how authentic it is.

Think, for example, of the well-known children’s game in which one player whispers a statement to another, who passes it on to the next, and so on down the line – by the end, the statement is completely different. Now transpose this game to a world in which people can simply click and share information on Facebook, or forward it on WhatsApp. The information arrives complete, but the problem is knowing how faithful it is to the original.

You may think that you are smart enough to tell true from false, but is the rest of your country able to do so? Can everyone filter, well enough, the mass of information they receive day after day? This is one possible reason for the success of fake news in important events like elections. One could argue, however, that there is more to the use of fake news. By election day, people will only remember the most recent news, no matter who communicated it. The political tool, then, is perhaps not to make people believe fake news but to undermine the value of information, or of its provider, as Donald Trump did with the American news provider CNN. The question then concerns the authenticity of the source rather than the information itself.

Beyond the political impact of fake news lies a social one. Professor Seabright discussed the methods recurrently used by politicians to distract people’s attention from the so-called truth, one of which is to multiply the amount of information shared and received, as mentioned above. Will this not erode trust in our societies, the basis of all human relations? There are, of course, many angles from which to consider this question. The first is the growing difficulty of believing others: as we become more cautious about every piece of information we receive, and increasingly aware of the lies that surround it, we tend to trust others less, including our closest friends. The second is the amount of time one may dedicate to the “search for the truth”. Consider the share of the day spent reading information published by others: this increasing allocation of our time to “information consumption”, as opposed to interacting directly with others, leads us to wonder how this new habit will affect the social cohesion of our societies.

In the end, information is a tricky matter, owing not only to how much of it we receive, but also to its degree of truth. The new ways in which information is managed have implications for the economic, political, and social dimensions of our lives. The boom of “fake news”, especially around elections, has influenced the way people think and act. As economists, we might expect the increasing amount of information to lead individuals to act more rationally – but will that rationality be formed on the basis of fake news?

by José Alfonso Muñoz Alvarado, Joël Bréhin, and Aicha Esaad

With thanks to Professor Paul Seabright for sharing his insights on this topic with us