Knowledge for all – Open access to scientific research

Scientific papers are at the very heart of our student lives. They cause nightmares as they feature on the seemingly endless reading lists for our seminars, and they inspire dreams as we strive to see our own names among the lists of authors. Still, few students spare a thought for the business side of scientific publishing. Unjustly so, as the field may undergo radical changes in the coming years, with far-reaching consequences for academia.

The source of the potential upheaval is a European initiative for open-access science publishing. Under the code name “Plan S,” the European Commission and the national research organisations of twelve European countries demand that all work resulting from publicly funded research be made accessible free of charge by 2021. In concrete terms, the plan stipulates that research worth €7.6 billion must be published in open-access journals. This demand pits them against publishing houses, which fear a severe disruption to their existing business model.

A monopoly on knowledge

As the bankrollers of most research in their countries, national research organisations take a reasonable interest in reforming a system that absurdly overcharges them for bringing the results of that research to the public. In the current system, publishing houses receive the manuscripts of publicly financed researchers free of charge. The manuscripts are in turn checked by peer reviewers – most of whom are also employed at universities. At the end of the production chain, publishers sell the resulting journals to university libraries. Collectively, publicly funded institutions therefore buy back the fruits of their own labour.

Of course, publishers also incur certain costs, such as for administrative tasks, marketing, layout, printing and, perhaps most importantly, the administration of the peer-review process. But these costs can by no means explain the immense increases in journal prices observed over the last decades. From 1984 to 2005, the average price charged for academic periodicals in the US increased sixfold while the overall price level rose by a factor of less than two – in other words, journal prices roughly tripled in real terms (see Figure).

University libraries are increasingly unwilling or unable to pay. Couperin, a consortium representing 250 French education institutions, announced last year that its negotiations with Springer had come to nought and that it would no longer subscribe to the publisher’s journals. However, giving up access to top journals is hardly an option for universities. Researchers must stay up to date with the latest findings in their fields, and students, whether they like it or not, need to get through their reading lists.

It follows that publishing houses enjoy a quasi-monopoly position with nearly unrestricted pricing power. This is evident not only from the price increases for journals, but also from the profits that the three biggest publishers – Springer, Elsevier and Wiley-Blackwell – regularly amass. Elsevier, for example, chalked up a profit margin of 37% in 2018. In comparison, the average listed company in the S&P 500 index had a margin of only 10% that year.

[Figure: Average price of US academic periodicals versus the overall price level, 1984–2005]

Science without borders

The deficiencies of the current system raise the question of an alternative model. One answer is provided by open access, meaning the free provision of research results online. This can take two forms. The first is “green open access,” where an article continues to be published in a paid journal but, after an embargo period of six to twelve months, the authors self-archive it on their institution’s website. The second is “golden open access” and refers to publication in journals that are themselves accessible free of charge. The main difference between the two concerns how the journal covers its publication costs. In the green model, the reader continues to pay the journal for the privilege of early access. In the golden model, the costs are covered by “publication fees” settled by the authors, who usually pass them on to their funder – e.g. their university or grant provider.

With the advent of open access at the beginning of the century, many predicted the end of the existing payment model. And indeed, open access has made some inroads – including the Public Library of Science and BioMed Central journals, as well as arXiv, an online repository for scientific manuscripts. Many students will also be familiar with Sci-Hub, a website hosting papers without regard to copyright. Within the bounds of the law, however, the expected open-access revolution never fully materialised. Today, only a quarter of scientific articles are made freely available, most of them via green open access.

Now Plan S intends to radically accelerate the transition. It responds to calls for greater transparency and cost efficiency in the use of public money. Further, it is expected to accelerate the pace of discovery. As science advances through cross-fertilisation between projects, any barriers such as paywalls or embargo periods necessarily slow it down. Instantly uploading manuscripts, even before the protracted peer-review process, could serve as a catalyst of scientific progress.

Moreover, extending the diffusion of scientific knowledge to a less affluent audience renders science more equitable and encourages diverse thinking in academia. Finally, open access may shift the focus away from publishing exclusively significant results and give the research community insight into “failed” studies that may be equally instructive. One study claims that the results of half of all clinical trials in the US go unpublished (Riveros et al., 2013). Without knowing about these, researchers may end up pursuing dead ends that have already been explored by their colleagues.

S for Short-Sighted?

In the eyes of sceptics, however, the sweeping changes of Plan S risk undermining the quality of research by severely hurting high-class journals. A particularly contentious demand of Plan S is a proposed cap on publication fees. This would be particularly hard to meet for journals with high rejection rates. Since they also incur expenses for the peer review of rejected articles, they face significantly higher costs for every published article. Nature, for example, estimates its publication costs at $40,000 per article – many times the limit contemplated by the backers of Plan S.

Renowned journals pride themselves on their selectivity, as it grants their articles a quality seal that open-access journals could struggle to replicate. Critics fear that, in the extreme case, open access could end in the practice of “predatory journals,” which accept any article for the sole purpose of collecting the authors’ publication fees. In a survey by the Nature Publishing Group, almost half of the authors expressed doubts about the quality of open-access journals.

The main worry about Plan S is therefore that, rather than reforming the publishing system worldwide, it could create a parallel system for European research. If the top journals do not go along with the proposed changes, nationally funded researchers would be restricted to less reputable open-access outfits. In the worst case, this could even lead to an exodus of scientific talent to countries or funders without open-access requirements. Recognising the risks of an abrupt implementation, the consortium behind Plan S has postponed its introduction by a year – it was initially supposed to start in 2020 – and suggested a two-year transition period. Even after that delay, it remains far from clear whether the plan will indeed materialise or remain the pipe dream of open-access advocates.

Conclusion

In the current system, publishers use monopoly power to charge university libraries exaggerated prices without compensating those who contributed to the research. Open access promises to upend this practice and extend the insights of scientific research to a much broader range of people, free of financial limitations. But as its advancement has stalled, new political support is required to maintain the momentum. Plan S could provide this boost. Its success, however, depends on whether it can create mechanisms that preserve rigorous peer review and uphold quality. If it does, the plan could inspire other countries to pursue open-access initiatives. Otherwise, it will founder as a quixotic undertaking, aspiring to a world with free, unlimited knowledge for all.

By Stefan Preuss

 

References

CSI Market, 2019. https://csimarket.com/Industry/industry_Profitability_Ratios.php?sp5

Couperin, 2018. https://www.couperin.org/breves/1333-couperin-ne-renouvelle-pas-l-accord-national-passe-avec-springer

Dingley, B., 2005. US Periodical Price Index 2005.

Kimball, M.S., 2017. https://blog.supplysideliberal.com/post/2017/7/11/does-the-journal-system-distort-scientific-research

RELX, 2019. Annual Results. https://www.relx.com/investors/results/2019

Riveros, C., Dechartres, A., Perrodeau, E., Haneef, R., Boutron, I., Ravaud, P., 2013. https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001566

The Economist, 2018. https://www.economist.com/science-and-technology/2018/09/15/european-countries-demand-that-publicly-funded-research-be-free

Interview with Julien Grenet from PSE


Julien Grenet is a researcher at the CNRS, an Associate Professor at the Paris School of Economics, and one of the founders of the Institut des Politiques Publiques. He specialises in education economics, public economics and market design. He is known to the general public for his participation in the public debate and his popularisation of economic concepts in media outlets such as France Culture.

 

He agreed to talk to the magazine about his work as a researcher, the importance of economists being involved in the public debate, and the issues that the French educational system faces today.

 

Why did you create l’Institut des Politiques Publiques? What are its specificities?

We created l’Institut des Politiques Publiques – IPP – with Antoine Bozio in 2011. It followed a six-year period that Antoine spent in London working for the Institute for Fiscal Studies – IFS – which is our main inspiration for the IPP. What was lacking in France was an institute that evaluates public policy, puts together the insights of academic research, and translates them into policy briefs targeting a broader audience of policymakers, journalists and citizens. We felt that very good academic research on public policy existed in France, but most of the results were not really conveyed to the general debate, which is, in my opinion, quite unfortunate. The IFS was a good model to import to France. We started small but we have grown ever since, trying to cover a broad range of topics that are of interest for the public debate, such as tax policy, education, housing, pensions, and the environment. We also work on health issues.

What is your opinion, as a researcher, on the role of economists in the public debate?

I do not want to be judgmental about what we should or should not do. There are different ways to contribute to the public debate. From my point of view, you do so through the academic output you produce, which then spills over into the public debate. You should also try to meet policymakers. The important thing is to participate in the debate on topics that you know, and only on those. Unfortunately, that is not always the case, and that sort of attitude may damage the reputation of economists. I personally try to restrict my interventions to questions on education or housing, since those are the topics I have worked on.

Why did you choose to study education, and more specifically social segregation and selection processes, as your main topic?

I started to study education because it was the topic of my Master’s thesis. What drove me to it is that I come from a family of teachers whose upward social mobility was entirely due to school. I was struck by the fact that, through the education system, my family managed to climb up the social ladder. Today, we sometimes have the impression that the system no longer plays this role, and we wonder what is wrong with it. I think the tools of economists have a lot to offer here: what we can learn with economics is how to improve the efficiency of the educational system.

I went into it for personal reasons; the topics that I have addressed since are more random. I started working on the returns to education, which is a very classic question. Then, since I shared an office with Gabrielle Fack, who was working on housing, we thought about working on something in between those two fields of interest. We started working on the effect of school zoning (“la carte scolaire” in French) on housing prices. We thought that this system was one way to assign students to schools, but we actually found out there were many others. We started reading about school choice mechanisms and got interested in them. It is a very dynamic field in economics: how to assign students to schools? How to assign teachers to schools? How to assign students to higher-education programmes?

In France, there has been a lot going on on the subject lately, and this is important for the public debate. We heard a lot about Admission Post Bac and Parcoursup; those are, in my opinion, important technical tools with real policy effects. Empirically, we know quite little about their effects in the real world. I think this is where we, as economists, can contribute: by improving these tools.

According to the OECD, France is one of the countries where climbing up the social ladder is hardest. What is your analysis?

I think that there are many reasons for it, yet we can hardly identify them. What the OECD has shown is that, at the age of 15, your performance is more determined by your social background in France than in almost any other country. France is typically in the top three countries where social determinism is the strongest at school.

One reason is that our educational system, especially the middle-school system – for pupils between 11 and 15 years old – is highly segregated. From research, we know that ghetto schools harm the students who study there beyond the effect of social background. This segregation in the school system increases inequalities. It might be due to different things: the level of residential segregation is very high in France, and the way we assign students to schools is far from optimal. Since we assign students to their local school, if the neighbourhood is segregated, then the school is going to be segregated too.

There are many other ways to assign students that we could use. For instance, there is what we call “controlled school choice”, which tries to achieve a balance in the social composition of schools. We could also redesign the school boundaries, or “school catchment areas”, so that they would be more diverse in their student intake. That is one important topic to be addressed: can we reduce segregation in schools by using different methods of assignment?

There is also a problem with how teachers are assigned to schools. Typically, young, inexperienced teachers are assigned to the most deprived schools in France, which is obviously a problem. We know that teachers improve most during their first few years of teaching. Hence, students in deprived schools are less likely to benefit from the most effective teaching.

There is also an issue with the educational system itself. The French system is very good at selecting an elite, and the whole system is designed to detect the students who go all the way up to “classes préparatoires”, “grandes écoles” and so on. However, it is not as good at helping as many students as possible succeed. We have a very strong elite but, in the meantime, we are losing a lot of students along the way. France has a high drop-out rate: many students quit school with no qualification. Another problem with the system is that vocational courses are seen as a personal failure, unlike in many other countries. Therefore, a lot of students who follow this path feel that they have failed their studies.

Your research focuses on assignment algorithms. What consequences did you find these algorithms have on students’ choices?

France is a very centralised country; hence, it is more inclined than other countries to use algorithms to assign students and teachers. Yet there has been very little involvement of researchers and economists in designing these algorithms. In fact, a lot of the research on assignment mechanisms comes from the U.S. It is a branch of mechanism design theory which received a lot of visibility thanks to the Nobel Prize awarded to Alvin Roth and Lloyd Shapley in 2012. They really transformed the landscape in many dimensions: for example, the assignment of students to schools in the U.S. has been completely redesigned in many cities using these algorithms. Kidney exchanges now rely on them, and there are many new applications, such as social housing allocation.

In France, in my opinion, the main problem is that there is not enough transparency about these algorithms. They exist in order to produce the best possible matching between students and schools – to try to maximise satisfaction while respecting several priority rules. The problem is that the way the algorithms and the priority rules work is not well known. This has led many people to reject the whole idea of assigning people with algorithms, because they feel that there is a black box, like a lottery, when in fact an algorithm is just a tool.

What really matters is the way you design priorities. If two students apply to a school and there is only one seat left, which student has priority over the other is a political decision that depends on which criteria you promote – students with better grades, students who live closer to the school, students from a lower social background, … This is not sufficiently explained and democratically decided. The issue today is to bring research into these algorithms, so that there are more discussions and a better understanding of the way they work.
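To make the separation between the algorithm and the priority rules concrete, here is a minimal sketch of student-proposing deferred acceptance – the Gale-Shapley mechanism from the Roth and Shapley line of work mentioned above, which underlies many school-choice systems. Everything in it (students, grades, the low-income bonus) is a hypothetical illustration, not a detail of any actual French procedure.

# Minimal sketch of student-proposing deferred acceptance for school choice.
# Purely illustrative: the students, grades, and low-income bonus below are
# hypothetical, not details of any actual French assignment procedure.

def deferred_acceptance(preferences, capacities, priority):
    """preferences: student -> ordered list of schools (most preferred first).
    capacities: school -> number of seats.
    priority: (school, student) -> score; higher scores are admitted first."""
    next_choice = {student: 0 for student in preferences}
    holds = {school: [] for school in capacities}  # tentatively admitted
    unassigned = set(preferences)
    while unassigned:
        student = unassigned.pop()
        if next_choice[student] >= len(preferences[student]):
            continue  # list exhausted: the student stays unmatched
        school = preferences[student][next_choice[student]]
        next_choice[student] += 1
        # Tentatively admit, keep the highest-priority students up to
        # capacity, and send the rest back to apply to their next choice.
        holds[school].append(student)
        holds[school].sort(key=lambda s: priority(school, s), reverse=True)
        for rejected in holds[school][capacities[school]:]:
            unassigned.add(rejected)
        holds[school] = holds[school][:capacities[school]]
    return {student: school
            for school, admitted in holds.items() for student in admitted}

# The political choice lives entirely in the priority rule: here, a
# hypothetical bonus for low-income students, in the spirit of the Paris
# reform discussed later in this interview.
GRADES = {"ana": 14, "bea": 12, "carl": 11}
LOW_INCOME = {"carl"}

def priority(school, student):
    return GRADES[student] + (5 if student in LOW_INCOME else 0)

prefs = {"ana": ["A", "B"], "bea": ["A", "B"], "carl": ["A", "B"]}
print(deferred_acceptance(prefs, {"A": 1, "B": 2}, priority))
# carl (11 + 5 = 16) outranks ana (14) for the single seat at school A;
# ana and bea end up at school B.

Swapping in a different priority function – removing the bonus, or ranking by distance to the school – changes who gets the scarce seat without touching the algorithm itself. That is exactly the political choice described above.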

You are currently working on a project on social mix. Why is it a topic of interest? What are your preliminary results and your analysis?

We have already said that the lack of social mix is one of the reasons why there is so little upward mobility in France. The question is how to address this problem. There are several potential ways of doing it: we could use the assignment algorithms, we could redesign the school catchment areas, or we could close some schools and send their students elsewhere – to the city centre rather than a suburban area, for example.

We do not have many empirical results telling us in which case we should use this or that tool, nor do we know the actual effect of some tools on segregation. Moreover, these effects are mitigated by the behaviour of parents: if they decide to send their child to a private school, we might not get as much social mix as we initially wanted. Therefore, we are trying to evaluate different ways of assigning students to schools in order to create social mix, and to evaluate their effects. To do so, we use several experiments that were launched across the country, and we compare their effects on social mix.

The reason why we want to increase social mix is that we believe it is going to reduce inequalities. We are interested in the effect of social mixing on both students’ performance and their non-cognitive aptitudes: their self-confidence, their social fatalism, and the way they perceive others – the perception of difference. What we are trying to exploit here is the fact that some of these experiments had a large effect on social mix.

We evaluate this through surveys conducted in schools. We are now conducting the second wave; two more waves are coming. What we try to evaluate is how changes in a school’s social composition affect individual students, through their performance in school and their non-cognitive outcomes. If we look at the literature, there is no evidence on this, especially on non-cognitive aptitudes, because we cannot really measure them with administrative data. We need to go into the schools and ask students questions directly. That is our contribution to the literature: trying to answer one of these questions.

Finally, which results of your research surprised you?

I did not anticipate that these student assignment mechanisms would have such a big impact on the composition of schools. I started to work on these mechanisms by looking at several high schools in Paris. In 2013, the educational authority of Paris adopted an algorithm to replace the manual procedure. As part of the algorithm, it created a bonus for low-income students. This bonus would increase their priority, and as a result, social segregation across high schools in Paris went down by 30% in only two years, which is huge. This had not been anticipated by the local education authority, which did not realise how large the bonus it had created would be: it almost automatically gave low-income students their first choice. This completely changed the landscape of Paris, which used to be the most segregated area in France. This is no longer the case.

By working on these data, I realised that these tools are in fact more powerful than many reforms. For instance, the “assouplissement de la carte scolaire” relaxed school catchment areas so that students could apply to schools further from their homes. In reality, this had very little effect on the social composition of schools, whereas school choice algorithms like the one implemented in Paris had a huge impact with very little coverage in the media. The numbers speak for themselves: low-income students now have a bigger set of choices than before. This is one of the surprises of research in economics: just because something is not examined by researchers or does not get any attention does not mean it does not exist. You can be like an archaeologist: you can dig up results that were unknown until now, and they can change the way you see and understand the educational system.

 

By Thomas Séron

Should we use new economic methods to assess the impact of collusion on welfare in vertical markets? The example of the “Yoghurt case”

 

Céline Bonnet is a Director of Research at INRAE, within TSE.

 

While the literature has widely covered collusion in horizontal markets, it has paid less attention to collusion in vertical markets, and more precisely to how to properly evaluate the impact of cartels on total welfare. As prominent manufacturers are convicted of collusion, economists are trying to advise authorities on new approaches that better account for the strategies of retailers, and that better assess the impact of collusion on manufacturers and retailers as well as on consumers.

A concentrated market which has become the scene of anti-competitive practices

Over the past 30 years in France, the retail sector has seen successive mergers that strengthened the bargaining power of big retailers against manufacturers. The food retail sector, for example, is dominated by eight major groups, including Carrefour and Leclerc, which represent about 40% of total sales. To counteract this concentration trend, manufacturers in the food industry also engaged in a consolidation movement in the early 2000s. The increase in concentration among both retailers and manufacturers has led to higher prices for consumers.

Despite that trend, retailers have kept searching for innovative strategies to differentiate themselves and be more competitive in the market. Big retailers have adopted the strategy of Private Labels – PLs: they sell store-owned brands, such as la Marque Repère at Leclerc. PLs are then sold alongside National Brands – NBs, established manufacturer brands – giving retailers advantages in both horizontal and vertical markets. They can differentiate themselves from other retailers, who might sell the same NBs, and they gain bargaining power against NB manufacturers, who will lose market share to the benefit of PL manufacturers if they charge prices that are too high. Indeed, PL products can be substitutes for NB products, and are often sold at a relatively low price.

The concentration of manufacturers, along with increasing selling prices, also facilitated collusion and other anti-competitive practices. This can be illustrated by the “yoghurt case”.

In 2015, the French competition authority charged 10 major PL producers in the French dairy-dessert sector – including Yoplait and Lactalis – with having colluded from 2006 to 2012. Indeed, even though PLs are retailer-owned brands, one PL manufacturer may produce for several retailers at the same time. This gives PL producers an incentive to collude: if the price proposed by a retailer is too low, they can reduce their sales through that retailer and sell elsewhere. Retailers suffer from this strategy, as they need PL products to differentiate themselves and to bargain. Hence, the bargaining power of PL producers increases with collusion.

 

A traditional method for estimating the effects of collusion has become outdated

To assess the variation in welfare caused by the collusion, the French competition authority used a traditional economic approach, which focuses mainly on the horizontal collusion and holds the retailers’ response fixed. The flaw of this method is that it does not take into account the vertical relations between PL producers and retailers, and hence neglects the strategic response of the retailers. It also ignores the potential “umbrella effect”, which arises when an increase in the wholesale prices of PL products diverts demand to the substitute products (NBs) and thus distorts the wholesale prices and market share of NB products. A forthcoming paper (C. Bonnet, Z. Bouamra-Mechemache, Empirical methodology for the evaluation of collusive behaviour in vertically-related markets: an application to the “yogurt cartel” in France) addresses this issue and applies a new methodology to the “yoghurt case”.

 

A new economic approach to assessing the impact of a cartel on welfare, applied to the “yoghurt case”

The idea is to model a competitive setting – or non-collusive counterfactual – to obtain the prices and quantities that would have been observed in such an environment, and then compare them with the prices and quantities currently observed in the market. This new method differs from the traditional one in that the negotiation of wholesale prices is modelled as a Nash bargaining game, rather than as a unilateral decision by the manufacturers that retailers have to accept. The paper concludes that there was profitable collusion among PL manufacturers. It also shows that the profit variation for retailers was quite ambiguous, and that PL producers were not necessarily the only winners from the cartel.
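For intuition, the wholesale-price negotiation in this class of models is typically written as a Nash product. The generic textbook form below is a sketch; the paper’s exact specification may differ. Each manufacturer–retailer pair chooses the wholesale price $w$ to solve

$$\max_{w}\;\left(\Pi^{M}(w)-d^{M}\right)^{\lambda}\left(\Pi^{R}(w)-d^{R}\right)^{1-\lambda},$$

where $\Pi^{M}$ and $\Pi^{R}$ are the manufacturer’s and the retailer’s profits under agreement, $d^{M}$ and $d^{R}$ their disagreement payoffs, and $\lambda\in[0,1]$ the manufacturer’s bargaining weight. Collusion matters in this framework because it improves the PL producers’ disagreement payoff $d^{M}$ – they can credibly divert supply to other retailers – shifting negotiated wholesale prices upward even without unilateral price-setting power.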


In the competitive setting, by decreasing the wholesale prices of PL products, we would expect the market share – and hence the wholesale and retail prices – of NB products to decrease, owing to a drop in NB demand. Indeed, in the yoghurt market, we observe an asymmetric substitution between the two types of products: NB products are more sensitive to a change in the prices of PL products than the other way around. Strikingly, the simulation showed a decrease in market share and wholesale prices for NB products, but no decrease in retail prices. In fact, the “umbrella effect” causes a decrease in the wholesale prices of NB products following the decrease in the wholesale prices of PL products, and NB and PL manufacturers clearly lose profit in the competitive setting compared with collusion. The novelty is to take into account the optimal strategy of the retailer, which is actually to slightly increase the retail prices of NB products: customers are attracted by the low prices of PL products, and the retailers extract maximum surplus from consumers who still want to buy NB products. Each retailer thus gains on PL products but loses on NB products because of the asymmetric substitution. The overall result varies from one retailer to another: for some, the negative effect on NB products exceeds the positive effect on PL products; for others it does not.

Hence, both PL and NB manufacturers are better off with collusion, while the results for retailers are mixed. The study also found that consumers are worse off with collusion, but the loss is relatively small – less than 1% of consumer surplus. Overall, total welfare increased in the yoghurt market.

 

The “yoghurt case” is an example of how variations in welfare can be wrongly estimated when the strategies of all players of the game are not taken into account. With this new methodology – which considers both inter- and intra-brand competition, as well as a supply model that includes the vertical linkages between manufacturers and retailers – competition authorities can better evaluate profit sharing between providers and sellers. In the “yoghurt case”, more precise information on the providers of each seller would have made it possible to estimate the exact impact of the collusion on each provider.

 

By Céline Bonnet

 

 

An interview with Daron Acemoglu on artificial intelligence, institutions, and the future of work

The recipient of the 2018 Jean-Jacques Laffont prize, Daron Acemoglu, is the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology. The Turkish-American economist has published extensively on political economy, development, and labour economics, and has won multiple awards for his two books, Economic Origins of Dictatorship and Democracy (2006) and Why Nations Fail (2012), which he co-authored with James A. Robinson from the University of Chicago.

The Jean-Jacques Laffont prize is the latest addition to the well-deserved recognition the economist has received for his work, which includes the John Bates Clark Medal from the American Economic Association in 2005 and the BBVA Frontiers of Knowledge Award in Economics in 2017. Despite a schedule heavy with seminars and conferences, Daron kindly set aside some time to offer the TSEconomist his insights on topics ranging from the impact of artificial intelligence on our societies to the role an academic ought to take in public political affairs.


  1. Congratulations on winning the Jean-Jacques Laffont prize. What does this prize represent to you?

I’m incredibly honoured. Jean-Jacques Laffont was a pioneering economist in both theory and the application of theory to major economic problems. I think this tradition is really important for the relevance of economics and for its flourishing over the last two decades or so. I think the prize is a fantastic way of honouring his influence, and I feel very privileged to have been chosen for it.

  2. Thanks to you and other scholars working on economics and institutions, we now know that the way institutions regulate economic life and create incentives is of great importance for the development of a nation. New players such as Google now possess both the technology and the data needed to efficiently solve the optimisation problems institutions face. This raises a debate on government access to, purchase, and use of these data, especially in terms of efficiency versus possible harm to democracy through the centralisation of political power. What is your take on this?

I think you are raising several difficult and important issues. Let me break them into two parts.

One is about whether the advances in technology, including AI and computational power, will change the trade-off between different political regimes. I think the jury’s out and we do not know the answer, but my sense is that it will not change the trade-off as much as it changes the feasibility of different regimes surviving even when they are not optimal. What I mean is that you can start thinking about the problem of what was wrong with the Soviet Union in the same way that Hayek did: there are problems to be solved, they are just too complex, the government can’t handle them, so let’s hope that the market solves them.

Then, if you think about it that way, you may say that governments are getting better at solving such problems, so perhaps we could have a more successful Soviet Union. I think that this is wrong for two reasons, which highlight why Hayek’s way of thinking was limited, despite being revolutionary and innovative. One reason is that the problem is not static but dynamic, so new algorithms and capabilities create just as many new problems – ones we don’t even know how to articulate yet. It is therefore naive to think that, in such a changing world, we can delegate decision-making to an algorithm and hope that it will do better than the decentralised workings of individuals in groups, markets, communities, and so on.

The second reason is that Hayek’s analysis did not sufficiently emphasise a point that I think he was aware of and stressed in other settings: it is not just about the capabilities of governments, but about their incentives. It is not simply that governments and rulers cannot do the right thing, but that they do not have the incentives to do so. Even if they wanted to do the right thing, they do not have the trust of the people, and thus cannot get the information and implement it. For that reason, I don’t think that the trade-off between dictatorship and democracy, or central planning versus some sort of market economy, is majorly affected by new technology.

On the other hand, we know that the equilibrium feasibility of a dictatorship may be affected. The ability to control information, the Internet, social media, and other things, may eventually give much greater repressive capability to dictatorships. Most of the fruitful applications of AI are in the future and to be seen, the exception being surveillance, which is already present and will only expand in the next ten years, in China and other countries. This will have major effects on how countries are organised, even if it may not be optimal for them to be organised that way.

To answer the second part of your question, I think that Google is not only expanding technology, but also posing new problems, because we are not used to companies being as large and dominant as Google, Facebook, Microsoft, or Amazon are. Think of when people were up in arms about the power of companies – the robber barons – at the beginning of the 20th century, leading to the whole progressive sequence of antitrust and other political reforms: as a fraction of GDP, those companies were about one quarter as big as the ones we have today. I therefore think that the modern field of industrial organisation is doing us a huge disservice by not updating its way of thinking about antitrust and market dominance, with huge effects on the legal framework, among other things. I don’t know the answers, but I know that the answers don’t lie in thinking along the lines of “the Herfindahl index is not a good measure of competition, so we might let Google dominate everything, but perhaps we are ok” – I think that this is not a particularly good way of going about things.

  3. Some fear that the dominance of these companies could lead to the growth of inequality. Do you think that AI could play a role in this?

I am convinced that automation in general has already played a major role in the rise of inequality, through changes in the wage structure and employment patterns. Industrial robots are part of that, as are numerically controlled machinery and other automation technologies. Software has been a contributing factor, but probably not the driver in the sense that people initially thought. Projecting from that, one might think that AI will play a similar role, and I think that this is not a crazy projection, although I don’t have much confidence that we can predict what AI will do. The reason is that industrial robotics is a complex but narrow technology. It uses software and, increasingly, artificial intelligence, but it isn’t rocket science. The main challenge is developing robots that can interact with and manipulate the physical world.

AI is a much broader technological platform. You can use it in healthcare and education in very different ways than in voice, speech, and image recognition. Therefore, it is not clear how AI will develop and which applications will be more important, and that’s actually one of the places where I worry about the dominance of companies like Google, Amazon, Facebook: they are actually shaping how AI is developing. Their business model and their priorities may be pushing AI to develop in ways that are not advantageous for society and certainly for creating jobs and demand for labour.

We are very much at the beginning of the process of AI and we definitely have to be alert to the possibility that AI will have potentially destructive effects on the labour market. However, I don’t think that it is a foregone conclusion, and I actually believe there are ways of using AI that will be more conducive to higher wages and higher employment.

  4. Regarding the potential polarisation between high- and low-skilled labour, do you think that the government could address this issue with universal basic income?

There is a danger – not a certainty, but a danger – that it will polarise, and that even if we use AI in a way that simplifies certain tasks, it may still require some numeracy and some social skills that not all workers have, resulting in probable inequality and displacement effects.

That being said, I believe that universal basic income is a bad idea, because it does not solve the right problem. If the problem is one of redistribution, we have much better tools to address it. Progressive income taxation coupled with something like an earned income tax credit or negative taxation at the bottom would be much better for redistributing wealth, without wasting resources on people who don’t need the transfer. Universal basic income is extremely blunt and wasteful, because it gives transfers to many people who shouldn’t get them, whereas taxation can target much better.

On the one side, I fear that a lot of people who support universal basic income are coming from the part of the spectrum which includes many libertarian ideas on reducing transfers, and I would worry that universal basic income would actually reduce transfers and misdirect them. On the other side, they may be coming from the extreme left, which doesn’t take the budget constraints into account, and again, some of the objectives of redistributing could be achieved more efficiently with tools like progressive income taxation.

Even more importantly, there is another central problem that basic income not only fails to deal with, but actually worsens: I think a society which doesn’t generate employment for people would be a very sad society, and one with lots of political and social problems. This fantasy of people not working while enjoying a good living standard is not a good fantasy. Whatever policy we use should be one that encourages people to obtain a job, and universal basic income would discourage people from doing so, as opposed to tax credits on earned income, for example.
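As a purely numerical illustration of the targeting argument above – the figures are invented, not Acemoglu’s – compare a universal grant with a negative income tax that guarantees the same income floor but phases out as earnings rise:

# Hypothetical illustration of targeting: a universal basic income (UBI)
# pays everyone the same grant, while a negative income tax (NIT) with the
# same income floor phases the transfer out as earnings rise.

UBI_GRANT = 6000      # annual grant paid to everyone (invented number)
NIT_FLOOR = 6000      # guaranteed minimum income under the NIT
NIT_PHASE_OUT = 0.5   # transfer shrinks by 50 cents per euro earned

def ubi_transfer(income):
    return UBI_GRANT

def nit_transfer(income):
    return max(0.0, NIT_FLOOR - NIT_PHASE_OUT * income)

incomes = [0, 4000, 12000, 30000, 80000]
for y in incomes:
    print(f"income {y:>6}: UBI {ubi_transfer(y):6.0f}  NIT {nit_transfer(y):6.0f}")

# Both schemes guarantee 6000 to someone with no earnings, but the UBI also
# pays the top earner, so its outlay over this sample is three times larger:
print("UBI outlay:", sum(map(ubi_transfer, incomes)))  # 30000
print("NIT outlay:", sum(map(nit_transfer, incomes)))  # 6000 + 4000 = 10000

In this toy example the negative income tax reaches the same floor at a third of the cost, which is the sense in which a universal grant is “blunt and wasteful” while taxation-based tools can target.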

  5. In a scenario where individuals are substituted and fewer people work, how could governments obtain the revenue they are no longer getting from income taxation? Could taxing robots be a possibility?

I think that this is a bad way of approaching the problem, because when you look at labour income, there is certainly enough to allow more redistributive taxation, and no particular need to tax robots. However, we should also think about capital income taxation more generally: there may be reasons for taxing robots, but they have to be related to production efficiency and excessive automation. I think that singling out robots as a revenue source, distinct from other capital stock, would be a bad idea. If, for example, you want taxes to raise revenue, then land taxes would be a much better option than robot taxes – which does not mean that we should dismiss the idea of taxing robots. The discussion is confusing because there are efficiency reasons (giving the right incentives to firms) and revenue-raising reasons for taxing, and, because of Bill Gates and other people, public discussions are not helping to clear up this confusion.

In terms of sharing wealth, I think that robots do not create new problems compared to other forms of capital. I think it was a confusion of Marx to think of the marginal product of capital in very complex ways – that everything that goes to capital is somehow theft – and if neoclassical economics has one contribution, it is to clarify that. I personally believe there are legitimate reasons for thinking that there is excessive automation. And if there is excessive automation, there are Pigouvian reasons for taxing robots, or actually for removing subsidies to robots, of which there are many. But that is the discussion we need to have.

  6. There has recently been optimism in regard to the future of AI and the role it could play, for example, in detecting corruption or improving education. You have made the distinction between replacing and enabling technologies. Where does one draw the line between the two?

That is a great question. In reality, of course, automation and replacing technologies merge with technologies that improve productivity. A great example is computer-assisted design. Literally interpreted, it is a labour-augmenting technology, because it makes the workers who work in design more productive. At the same time, however, it may have some of the features of an automation technology, because with computer-assisted design, part of the tasks that a draughtsman would do are automated. If you do it once, you can do it repeatedly.

So that is a grey area, but it’s okay because the conceptually important point to recognise is that different types of technologies have very different effects. Recognising this is an antidote against the argument that improving productivity through technology will always benefit labour; we actually need to think about what new technologies do and how the increase in productivity will affect labour.

But it is also very important for the discussion regarding AI to point out that AI, as opposed to industrial robot automation, is not necessarily – and does not have to be – labour replacing.  There are ways in which you can use it to create new tasks for labour or increase productivity. This is what I think will play out in real time in the future of AI.

  7. In 2017, you wrote an article for Foreign Policy, “We are the last defence against Trump”, which questioned the belief that institutions are strong enough to prevent a man like Donald Trump from overriding the rule of law. In your view, should economists come off the fence on current affairs? Is it possible to express an opinion without sacrificing the intellectual rigour one expects from a researcher?

I think so. First, there are important personal responsibilities that are crosscutting. Second, there is a danger of having the perfect be the enemy of the good.

On the first one, I think that people have to make their own choices as to what is acceptable and what is not. Some things are just within the realm of “I prefer high taxes, you prefer low taxes”, and that is quite a reasonable thing. But other issues may pose a real threat to democracy, to other aspects of institutions, and to minorities that are disempowered. From there, it is important to recognise that there are some lines that should not be crossed – or, if they are crossed, that some people need to defend them vocally. Any analogy to the Nazi period is fraught with danger, but it bears saying that, in hindsight, every academic should of course have walked out of the universities that were managed by Nazis, that were firing Jewish scholars, or that were teaching jurisprudence according to National Socialism. That has nothing to do with whether you have evidence of one thing versus another – I think that there are some lines. Similarly, and without saying anything as provocative as drawing parallels between Trump and the Nazis, I think that it is important for people, in general, to defend democracy against the onslaught it is receiving from Trump’s administration and the circle of people around him. I will say openly to everybody that it is wrong for any economist or academic to go and work for Trump; I would certainly never consider doing so, and would consider distancing myself from anybody who does.

But that is on the private ground. On the social science side, there is a lot we do not know. Everything we know is subject to standard errors and external validity constraints, but to refuse to act or to condemn would be to have the perfect be the enemy of the good. On the basis of what we know, we know how democracies fail, we know how certain aspects of American institutions are actually weaker than what people think, and we know how changes in policies against minorities would have terrible effects for certain groups. I think that on the basis of that, to articulate criticism of certain policies and certain politicians is also a good use of the knowledge we have accumulated.

by Valérie Furio, Gökçe Gökkoca, Konrad Lucke, Paula Navarro, and Rémi Perrichon

Internship Report: ACTeon Environment

Annie Krautkraemer
  1. Where did you do your internship and what was your role?

I did my internship at ACTeon Environment in Colmar, France. I contributed to a research project on aquatic biodiversity in the European Union called AQUACROSS. Within AQUACROSS, there were eight case studies in different geographical regions facing threats to biodiversity. I worked on the Danube River case study, where the main threat to biodiversity is hydromorphological alteration of the river – put simply, changes made to the shape of the river (hydroelectric dams, dykes, etc.). These changes affect how the river is connected to wetlands, which in turn affects the ecosystem services that these water bodies provide, such as water filtration and flood prevention. ACTeon was responsible for conducting the efficiency analysis of ecosystem-based management policies aimed at preserving aquatic biodiversity, and I contributed to this analysis.

  2. How did your studies at TSE help you during the internship?

The courses in ERNA were helpful because we learned about environmental valuation and cost-benefit analysis, both of which were pertinent to the project I worked on. Environmental valuation was useful because we learned about different techniques to value non-market goods like ecosystem services, and about their advantages and drawbacks. To carry out an efficiency analysis of the policy, we will conduct a cost-benefit or cost-effectiveness analysis, depending on what information is available, so it was very useful to have a course on cost-benefit analysis.

  3. How did you get the internship? What would be your advice for students looking for a similar internship?

I had seen an internship offer from ACTeon on the alumni network site, then went to their website and found a couple more offers, including one in English that interested me. I applied by sending a CV and cover letter, and then had a Skype interview. I would recommend checking ACTeon’s website around December or January to see what internships they are offering. Another tip is to use the alumni network to see where other people are already working: for example, those interested in environmental economics can use the directory to see where past ERNA alumni work, if they have updated their profiles.

by Annie Krautkraemer