Interview with Julien Grenet from PSE


Julien Grenet is a researcher at the CNRS, an Associate Professor at the Paris School of Economics, and one of the founders of the Institut des Politiques Publiques. He specialises in the economics of education, public economics and market design. He is known to the general public for his participation in the public debate and his popularisation of economic concepts in media outlets such as France Culture.

 

He agreed to talk to the magazine about his work as a researcher, the importance for economists of being involved in the public debate, and the issues that the French educational system is facing today.

 

Why did you create l’Institut des Politiques Publiques? What are its specificities?

We created l’Institut des Politiques Publiques – IPP – with Antoine Bozio in 2011. It followed a six-year period that Antoine spent in London working for the Institute for Fiscal Studies – IFS – which is our main inspiration for IPP. What was lacking in France was an institute that evaluates public policy, brings together the insights of academic research and translates them into policy briefs targeting a broader audience of policymakers, journalists and citizens. We felt that there was very good academic research on public policy in France, but most of its results were not really conveyed to the general debate, which is, in my opinion, quite unfortunate. The IFS was a good model to import into France. We started small but we have grown ever since, trying to cover a broad range of topics that matter for the public debate, such as tax policy, education, housing, pensions and the environment. We also work on health issues.

What is your opinion, as a researcher, on the role of economists in the public debate?

I do not want to be judgmental about what we should or should not do. There are different ways to contribute to the public debate. From my point of view, you do so through the academic output you produce, which then spills over into the public debate. You should also try to meet policymakers. The important thing is to participate in the debate on topics that you know, and only on those. Unfortunately, this is not always the case, and that sort of attitude may damage the reputation of economists. I personally try to restrict my interventions to questions on education or housing, since those are the areas I have worked on.

Why did you choose to study education, and more specifically social segregation and selection processes, as your main topic?

I started to study education because it was the topic of my Master's thesis. What drove me to it is that I come from a family of teachers whose upward social mobility was entirely due to school. I was struck by the fact that, through the education system, my family managed to climb the social ladder. Today, we sometimes have the impression that the system no longer plays this role, and we wonder what is wrong with it. I think the tools of economists have a lot to say here: what economics can offer is ways to improve the efficiency of the educational system.

I went into it for personal reasons; the topics I have addressed since are more random. I started working on the returns to education, which is a very classic question. Then, since I shared an office with Gabrielle Fack, who was working on housing, we thought about working on something in between our two fields of interest. We started working on the effect of school zoning (“la carte scolaire” in French) on housing prices. We thought of this system as one way to assign students to schools, but we found out there were many others. We started reading about school choice mechanisms and got interested in them. It is a very dynamic field in economics: how to assign students to schools? How to assign teachers to schools? How to match students to higher education programs?

In France, there has been a lot going on on this subject lately, and it matters for the public debate. We heard a lot about Admission Post Bac and Parcoursup; those are, in my opinion, important technical tools with real policy effects. Yet we know empirically quite little about their effects in the real world. I think this is where we, as economists, can contribute: by improving these tools.

According to the OECD, France is one of the most unequal countries in terms of climbing up the social ladder. What is your analysis?

I think there are many reasons for it; yet we can hardly identify them. What the OECD has shown is that, at the age of 15, your performance is more strongly determined by your social background in France than in almost any other country. France is typically in the top three countries where social determinism at school is the strongest.

One reason is that our educational system, especially middle school – between ages 11 and 15 – is highly segregated. From research, we know that ghetto schools harm the students who study there, beyond the effect of their social background. This segregation in the school system increases inequalities. It may be due to several things: the level of residential segregation is very high in France, and the way we assign students to schools is far from optimal. Since we assign students to their local school, if the neighborhood is segregated, then the school is going to be segregated too.

There are many other assignment methods we could use. For instance, there is what we call “controlled school choice”, which tries to achieve a balance in the social composition of schools. We could also redesign school boundaries, or catchment areas, so that their student intake would be more diverse. That is one important topic to address: can we reduce school segregation by using different methods of assignment?

There is also a problem with how teachers are assigned to schools. Typically, young, inexperienced teachers are assigned to the most deprived schools in France, which is obviously a problem. We know that teachers make their biggest efficiency gains during their first few years of teaching. Hence, students in deprived schools are less likely to benefit from the most effective teaching.

There is also an issue with the educational system itself. The French system is very good at selecting an elite, and the whole system is built to detect the students who will make their way up to “classes préparatoires”, “grandes écoles” and so on. However, it is not as good at helping as many students as possible succeed. We have a very strong elite but, in the meantime, we lose a lot of students along the way. France has a high drop-out rate: many students leave school with no qualification. Another problem is that, unlike in many other countries, vocational tracks are seen as a personal failure. As a result, many students who follow this path feel that they have failed their studies.

Your research focuses on assignment algorithms. What consequences did you find of such algorithms on students’ choices?

France is a very centralised country; hence, it is more inclined than others to use algorithms to assign students and teachers. Yet there has been very little involvement of researchers and economists in designing these algorithms. In fact, a lot of the research on assignment mechanisms comes from the U.S. It is a branch of mechanism design theory that gained a lot of visibility thanks to the Nobel prize awarded to Alvin Roth and Lloyd Shapley in 2012. They really transformed the landscape in many dimensions: for example, the assignment of students to schools has been completely redesigned in many U.S. cities using these algorithms. Kidney exchanges now rely on them, and there are many new applications, such as social housing allocation.

In France, in my opinion, the main problem is that there is not enough transparency about these algorithms. They exist to produce the best possible matching between students and schools, trying to maximise satisfaction while respecting several priority rules. The problem is that the way the algorithms and the priority rules work is not well known. This has led many people to reject the whole idea of assigning people with algorithms, because they feel there is a black box, like a lottery, when in fact an algorithm is just a tool.

What really matters is the way you design priorities. If two students apply to a school and there is only one seat left, which student has priority over the other is a political decision that depends on which criteria you promote – students with better grades, students who live closer to the school, students from a lower social background, … This is not sufficiently explained or democratically decided. The issue today is to bring research into these algorithms, so that there are more discussions and a better understanding of the way they work.
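The mechanisms discussed here are typically variants of the Gale–Shapley deferred-acceptance algorithm. The sketch below (all student and school names are hypothetical, chosen purely for illustration) shows how the priority ordering – the political choice described above – determines who gets the last seat:

```python
# Student-proposing deferred acceptance (Gale–Shapley).
# All names, preferences and priorities below are hypothetical.

def deferred_acceptance(student_prefs, school_priority, capacity):
    """student_prefs: {student: schools in order of preference}
    school_priority: {school: students, highest priority first}
    capacity: {school: number of seats}"""
    rank = {sc: {st: i for i, st in enumerate(order)}
            for sc, order in school_priority.items()}
    next_choice = {st: 0 for st in student_prefs}    # next school to try
    held = {sc: [] for sc in school_priority}        # tentative admissions
    unmatched = list(student_prefs)
    while unmatched:
        st = unmatched.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                 # list exhausted: stays unassigned
        sc = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[sc].append(st)
        held[sc].sort(key=lambda s: rank[sc][s])     # best priority first
        while len(held[sc]) > capacity[sc]:
            unmatched.append(held[sc].pop())         # bump the lowest priority
    return {st: sc for sc, admitted in held.items() for st in admitted}

# A low-income bonus amounts to moving a student up a school's priority list.
prefs = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["X", "Y"]}
priority = {"X": ["b", "a", "c"], "Y": ["a", "b", "c"]}
seats = {"X": 1, "Y": 2}
match = deferred_acceptance(prefs, priority, seats)
# "b" has top priority at X and gets its single seat; "a" and "c" go to Y.
```

Changing only the `priority` dictionary – the criteria the interview calls a political decision – changes who is admitted, while the algorithm itself stays the same.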

You are currently working on a project on social mix. Why is it a topic of interest? What are your preliminary results and your analysis?

We have already said that the lack of social mix is one of the reasons why there is so little upward mobility in France. The question is how to address this problem. There are several potential ways of doing it: we could use controlled school choice, we could redesign school catchment areas, and we could also close some schools and send their students elsewhere – to the city center, for instance, rather than a suburban area.

We do not have many empirical results telling us when to use this or that tool, nor do we know the actual effect of some tools on segregation. Moreover, these effects are mitigated by the behavior of parents: if they decide to send their child to a private school, we might not get as much social mix as initially intended. We are therefore trying to evaluate different ways of assigning students to schools so as to create social mix, and to measure their effects. To do so, we use several experiments that were launched across the country, and we compare their effects on social mix.

The reason why we want to increase social mix is that we believe it will reduce inequalities. We are interested in the effect of social mixing on both students’ performance and their non-cognitive aptitudes: their self-confidence, their social fatalism and the way they perceive others – the perception of difference. What we are trying to exploit here is the fact that some of these experiments had a large effect on social mix.

We evaluate this through surveys conducted in schools. We are now carrying out the second wave, and two more waves are coming. What we try to evaluate is how changes in a school's social composition affect individual students, both in their school performance and in their non-cognitive outcomes. If we look at the literature, there is little evidence on this, especially on non-cognitive aptitudes, because they cannot really be measured with administrative data. We need to go into schools and ask students questions directly. That is our contribution to the literature: trying to answer one of these questions.

Finally, which results of your research surprised you?

I did not anticipate that these student assignment mechanisms would have such a big impact on the composition of schools. I started to work on them by looking at several high schools in Paris. In 2013, the educational authority of Paris adopted an algorithm to replace the manual procedure. As part of the algorithm, they created a bonus for low-income students that increased their priority. As a result, social segregation across high schools in Paris went down by 30% in only two years, which is huge. The local education authority had not anticipated this, because they did not realise how large the bonus they had designed would be: it almost automatically gave low-income students their first choice. This completely changed the landscape of Paris, which used to be the most segregated area in France. It no longer is.

By working on these data, I realised that these tools are in fact more powerful than many reforms. For instance, the “assouplissement de la carte scolaire” relaxed school catchment areas so that students could apply to schools farther from their homes. In reality, this had very little effect on social composition, whereas school choice algorithms, like the one implemented in Paris, had a huge impact with very little media coverage. The numbers shown in the graph are telling: low-income students now have a bigger set of choices than before. This is one of the surprises of research in economics: just because something is not examined by researchers or gets no attention does not mean it does not exist. You can be like an archeologist: you can dig up results that were unknown until now, and they can change the way you see and understand the educational system.

 

By Thomas Séron

Should we use new economic methods to assess the impact of collusion on welfare in vertical markets? The example of the “Yoghurt case”

 

Céline Bonnet is a Director of Research at INRAE within TSE.

 

While the literature has widely covered collusion in horizontal markets, it has paid less attention to collusion in vertical markets, and more precisely to how to properly evaluate the impact of cartels on total welfare. As prominent manufacturers are convicted of collusion, economists are advising authorities on new approaches that better account for the strategies of retailers and better assess the impact of collusion on manufacturers and retailers, as well as on consumers.

 

 

 

A concentrated market which has become the scene of anti-competitive practices

Over the past 30 years in France, the retail sector has seen successive mergers that strengthened the bargaining power of big retailers against manufacturers. The food retail sector, for example, is dominated by eight major groups, including Carrefour and Leclerc, which together account for about 40% of total sales. To counteract this concentration trend, food manufacturers also engaged in a consolidation movement in the early 2000s. The increased concentration of both retailers and manufacturers has led to higher prices for consumers.

Despite that trend, retailers have kept searching for innovative strategies to differentiate themselves and be more competitive in the market. Big retailers have pursued a Private Label – PL – strategy: they sell store-owned brands, such as la Marque Repère at Leclerc. PLs are sold alongside National Brands – NBs, established manufacturer brands – giving retailers advantages in both horizontal and vertical markets. They can differentiate themselves from other retailers who might sell the same NBs, and they gain bargaining power against NB manufacturers, who risk losing market share to PLs if they charge prices that are too high. Indeed, PL products are substitutes for NB products, and are often sold at a relatively low price.

The concentration of manufacturers, along with increasing selling prices, also facilitated collusion and other anti-competitive practices. This can be illustrated by the “yoghurt case”.

In 2015, French authorities charged 10 major PL producers in the French dairy desserts sector – such as Yoplait and Lactalis – with having colluded from 2006 to 2012. Indeed, even though PLs are retailer-owned brands, one PL manufacturer may produce for several retailers at the same time. This gives PL producers incentives to collude: if the price proposed by a retailer is too low, they can reduce the volume supplied to that retailer's stores and sell elsewhere. Retailers suffer from this strategy, as they need PL products to differentiate themselves and to bargain. Hence, collusion increases the bargaining power of PL producers.

 

A traditional estimation method of collusion effects has become outdated

To assess the welfare variation caused by the collusion, the French competition authorities used a traditional economic approach, which mainly focuses on the horizontal collusion and holds the retailers' response fixed. The flaw of this method is that it does not take into account the vertical relations between PL producers and retailers, and hence neglects the strategic response of the retailers. It also ignores the potential “umbrella effect”, which arises when an increase in the wholesale prices of PL products diverts demand to the substitute NB products and thus distorts NB products' wholesale prices and market shares. A forthcoming paper (C. Bonnet and Z. Bouamra-Mechemache, “Empirical methodology for the evaluation of collusive behaviour in vertically-related markets: an application to the ‘yogurt cartel’ in France”) addresses this issue and applies a new methodology to the “yoghurt case”.

 

A new economic approach for assessing the impact of a cartel on welfare, applied to the “yoghurt case”

The idea is to model a competitive setting – a non-collusive counterfactual – to obtain the prices and quantities that would have been observed in such an environment, and then compare them with the prices and quantities currently observed in the market. This method differs from the traditional one in that the negotiation of wholesale prices is modelled as a Nash bargaining game, rather than as a unilateral decision by the manufacturers that retailers have to accept. The paper concludes that there was profitable collusion among PL manufacturers. It also shows that the profit variation for retailers was quite ambiguous, and that PL producers were not necessarily the only winners from the cartel.
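In this class of models, each wholesale price is the outcome of a bilateral negotiation between a manufacturer and a retailer. A generic formulation (the notation here is illustrative and not necessarily the paper's exact specification) chooses the wholesale price $w$ to maximise the Nash product of the two parties' gains from trade:

```latex
\max_{w}\;\left(\Pi^{M}(w) - d^{M}\right)^{\lambda}\,\left(\Pi^{R}(w) - d^{R}\right)^{1-\lambda}
```

where $\Pi^{M}$ and $\Pi^{R}$ are the manufacturer's and retailer's profits, $d^{M}$ and $d^{R}$ their disagreement payoffs if the negotiation fails, and $\lambda \in [0,1]$ the manufacturer's bargaining weight. The traditional approach corresponds to the limit $\lambda = 1$, in which the manufacturer makes a take-it-or-leave-it offer.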


In the competitive setting, with lower wholesale prices for PL products, we would expect the market share – and hence the wholesale and retail prices – of NB products to decrease, due to a drop in NB demand. Indeed, in the yoghurt market we observe an asymmetric substitution between the two types of products: NB products are more sensitive to a change in the prices of PL products than the other way around. The simulation does show a decrease in market share and wholesale prices for NB products – the “umbrella effect” passes the fall in PL wholesale prices through to NB wholesale prices – but, surprisingly, not a decrease in their retail prices. NB and PL manufacturers clearly lose profit in the competitive setting compared to collusion.

The novelty is to take into account the optimal strategy of the retailer, which is actually to slightly increase the retail price of NB products: customers are attracted by the low prices of PL products, and retailers extract the maximum surplus from consumers who still want to buy NB products. A retailer thus gains on PL products but, because of the asymmetric substitution, loses on the increase in NB prices. The overall result varies from one retailer to another: for some, the negative effect on NB products exceeds the positive effect on PL products; for others it does not.

Hence, both PL and NB manufacturers are better off under collusion, while the results for retailers are mixed. The study also finds that consumers are worse off under collusion, but the loss is relatively small – less than 1% of consumer surplus. Overall, total welfare increased in the yoghurt market under collusion.

 

The “yoghurt case” is an example of how welfare variations can be wrongly estimated when the strategies of all the players of the game are not taken into account. With this new methodology – considering both inter- and intra-brand competition, as well as a supply model that includes the vertical linkages between manufacturers and retailers – competition authorities can better evaluate profit sharing between providers and sellers. In the “yoghurt case”, more precise information on the providers of each seller would have allowed authorities to estimate the exact impact of the collusion on each provider.

 

By Céline Bonnet

 

 

Should we break-up Big Tech?

In recent years, digital technologies have profoundly changed many aspects of our daily lives, from e-commerce to internet search, travel, communication and entertainment. While these changes have mostly benefited consumers, voices have started to speak up against the power and influence of the Big Tech companies – Google, Amazon, Facebook and Apple in particular – accusing them of, among other things, stifling innovation, dealing unfairly with their suppliers, and violating our privacy. Elizabeth Warren, one of the most prominent candidates for the Democratic nomination in the U.S., recently called for a much tougher policy approach towards Big Tech, proposing in particular to dismantle some of these companies – a call that has found a certain echo in the press and among politicians.

To understand whether we should break up – some of – the big tech companies, it is important to understand why they have become so big, whether such a situation actually harms consumers, and whether a break-up is an appropriate remedy.


Many digital markets are characterised by economies of scale and network effects (see Shapiro and Varian, 1998). The former corresponds to the idea that the average cost goes down with the number of units sold, which is typical of information goods: their production entails a large fixed cost, but they can be reproduced at a small marginal cost. For instance, once a search engine algorithm has been developed – at considerable cost – answering an individual query is virtually costless.
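In symbols – a textbook formulation rather than anything specific to the works cited here – with fixed cost $F$ and marginal cost $c$, the average cost of serving $q$ units is

```latex
AC(q) = \frac{F}{q} + c,
```

which falls toward $c$ as $q$ grows: for the search engine example, $F$ is the cost of developing the algorithm and $c \approx 0$ is the cost of answering one more query.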

Network effects are the demand-side equivalent of economies of scale: a product is more valuable the more users it has. While a social network like Facebook is a natural example of direct network effects, other platforms exhibit indirect network effects: Android users exert a positive externality on each other, not because communication is easier between Android devices, but because more Android users attract more application developers to the platform (see Caillaud and Jullien, 2003).

The use of data by technology companies is a particularly important source of returns to scale and network effects: as firms get more data, they can offer better products or services, or produce them more cheaply. Big Data also allows firms to realise economies of scope, that is, to enter new markets thanks to the insights generated on their primary market – having access to your email data, for example, makes it possible to offer a better calendar app.

By giving an advantage to larger firms, economies of scale and network effects can result in market tipping, that is, in one firm becoming dominant as a natural outcome of the competitive process. The prospect of monopoly is worrying, but two forces push in the opposite direction. First, while possible, tipping is not guaranteed even in the presence of network effects. When these effects are of intermediate strength, they can even intensify competition, as the fight for additional users becomes more intense. Second, even when they lead to monopoly, network effects and economies of scale can induce firms to compete harder to be the early leader: competition for the market, rather than in the market.
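The tipping logic can be illustrated with a toy simulation – entirely my own illustration, not drawn from the literature cited here – in which each user repeatedly picks the network with the higher utility, taken to be a stand-alone quality plus a network effect proportional to the network's current market share:

```python
# Toy best-response dynamics in a two-network market.
# Entirely illustrative: parameters and functional form are assumptions.
import random

def final_share(theta, n_users=999, quality=(1.0, 1.0), steps=50, seed=0):
    """Returns network 0's market share after repeated best responses;
    theta is the strength of the network effect."""
    rng = random.Random(seed)
    choice = [rng.randint(0, 1) for _ in range(n_users)]   # random start
    for _ in range(steps):
        share = [choice.count(k) / n_users for k in (0, 1)]
        for i in range(n_users):
            # utility = stand-alone quality + network effect * current share
            u = [quality[k] + theta * share[k] for k in (0, 1)]
            if u[0] != u[1]:                # switch only if strictly better
                choice[i] = 0 if u[0] > u[1] else 1
    return choice.count(0) / n_users

# With a strong network effect the market tips to whichever network starts
# slightly ahead; with theta = 0 the initial near-even split persists.
```

With `theta` large, one step of best responses sends every user to the slightly larger network; with `theta = 0` and equal qualities, no one has a strict incentive to switch and the fragmented outcome survives, which is the contrast the paragraph above describes.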

Breaking up a monopolist in such a market, by creating several smaller networks, could result in increased competition. For instance, competing social networks could be induced to offer better privacy protection in order to attract consumers. But breaking up a network fragments the market, leaving some groups of consumers unable to interact with others. This could make consumers switch networks in order to enjoy more interactions, eventually leading back to market tipping and undoing the break-up.

The big technology firms have not passively enjoyed the rents of their position as natural monopolists; they have instead used a variety of strategies to protect or extend it, some of which have been deemed anticompetitive. Google, for instance, has been fined three times by the European Commission. One set of practices consisted of imposing restrictive clauses – exclusivity, tying – on its trading partners, thereby preventing rivals from competing on the merits. For instance, a rival search engine would have had to develop its own application store – or pay a lot of money – to convince a device manufacturer to choose it over Google and its very popular app store, Google Play (see De Cornière and Taylor, 2018).

Another practice consisted in systematically favoring Google Shopping at the expense of other comparison shopping services on Google’s search engine. This issue of “own-content bias” has taken a new dimension with the emergence of internet gatekeepers such as Google or Amazon, the latter having also been accused – but not yet fined – of favoring its own brands on its platform. Own-content bias may also take other forms, such as when Spotify is required to pay Apple a fee when consumers subscribe through iOS, which puts it at a disadvantage compared to Apple Music. Platforms leveraging their dominant position on complementary markets is a key motivation for the proponents of breaking-up these firms.


Despite these legitimate concerns over exclusionary practices by multiproduct incumbents, it is not clear that a break-up – say, separating the search and shopping activities of Google – would be desirable. First, in the presence of complementary products, common ownership enables firms to better coordinate their production decisions and achieve superior outcomes, which is why competition authorities view vertical mergers more favorably than horizontal ones. Second, being able to use the data acquired on their dominant market in another market gives these firms further incentives to improve their core product. Forcing, say, Amazon to divest its personal assistant business would probably weaken, at the margin, its incentives to offer cheap products on its platform. Third, a break-up in itself would not be sufficient to ensure the neutrality of a platform, since it could use other contracts granting some participants preferential treatment in exchange for a commission, a common practice in many industries (see De Cornière and Taylor, forthcoming).

A more sensible course of action consists in monitoring the behavior of dominant platforms more closely, and intervening more quickly. At the moment, antitrust actions take too long to be carried out, and by the time they are, the markets have changed, usually to the detriment of smaller rivals. Several recent reports make related arguments, advocating a more responsive competition policy or the creation of a sectoral regulator (see the UK report “Unlocking digital competition: report from the digital competition expert panel”, or Crémer, de Montjoye and Schweitzer, 2019).

Tech giants have also been accused of using acquisitions to cement their market power, buying out the start-ups that could potentially represent a threat to their dominant position. The typical illustration of this phenomenon is Facebook, with its acquisitions of Instagram and WhatsApp – and failed bid for SnapChat.  Google and Amazon have also been very active acquiring start-ups: over the past ten years, these three firms have bought around 300 companies, often relatively young. Most of these acquisitions have not been reviewed by competition authorities because they do not meet the various turnover thresholds.

One concern is that some of these acquisitions are “killer acquisitions”, i.e. made only to shut down potential competition, a phenomenon recently studied in the pharmaceutical sector (see Cunningham et al., 2018). Things look different in the tech sector, as many of the targets offer products that are complementary to the incumbents', and the prospect of being bought out by a big firm is a strong incentive to innovate. At the same time, economies of scope might turn a firm that offers a complementary product today into a rival tomorrow, and it is hard to predict when this will be the case.

In markets such as these, with young firms and rapidly evolving technologies, competition authorities are bound to make errors, either of type I – blocking a pro-competitive merger – or of type II – approving an anticompetitive one. The current situation is very asymmetric: none of the reviewed acquisitions by the Big Tech firms have been blocked. This is certainly suboptimal, especially since the cost of a type II error – the elimination of competition – is probably much larger than that of a type I error. While predicting the effects of a merger is especially difficult in innovative markets, moving the needle towards a stricter approach to mergers in the digital sector seems warranted.

As I tried to show in this brief essay, ensuring effective competition in the technological markets will require a more elaborate answer than a break-up, the efficacy of which is highly doubtful. Several approaches have been proposed, and the debate is still raging. These are exciting times to be an industrial economist!

By Alexandre de Cornière

 

References

Caillaud, Bernard, and Bruno Jullien. “Chicken & egg: Competition among intermediation service providers.” RAND Journal of Economics (2003): 309-328.

Crémer, Jacques, Yves-Alexandre de Montjoye, and Heike Schweitzer. “Competition Policy for the Digital Era.” European Commission, 2019.

Cunningham, Colleen, Florian Ederer, and Song Ma. “Killer acquisitions.” Working Paper (2018).

De Cornière, Alexandre, and Greg Taylor. “Upstream Bundling and Leverage of Market Power.” CEPR Working Paper, 2018.

De Cornière, Alexandre, and Greg Taylor. “A Model of Biased Intermediation.” RAND Journal of Economics, forthcoming.

Shapiro, Carl, and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

UK report, “Unlocking digital competition: report from the digital competition expert panel”, 2019.

From “Dark Continent” to “Sun Continent”: a story of power  

Africa: a continent full of colours and great potential, yet poorly developed. One way to see this is to look at the state of the electricity market in Sub-Saharan Africa. While 87% of the world's population had access to electricity in 2016, only 42% of Africans had this privilege – the lowest rate in the world, according to the World Bank. Sub-Saharan Africa is particularly affected. This translates into reduced business hours and ineffective health systems that hardly meet the needs of the population.


Notwithstanding these problems, Africa has great potential in the energy sector because it abounds in natural resources. The continent may become a leader in sustainable energy in the near future, with the potential to drive a tremendous change in Africa's growth trend and set it on a broad development path.

How did Africa's energy sector come to be in such a crisis? Is sustainable energy a pragmatic strategy or a cherished but temporary utopia? What challenges does Africa need to face to get out of its energy trap? We explore a few ideas in this article.

The Dark Continent has assets that many could envy. Northern, Western and Eastern African countries possess huge reserves of oil. Some of the biggest reserves of coal are found in South Africa. Morocco and Algeria have natural gas. These resources are precious: fossil fuel-based power is the main source of electricity in Africa. Nevertheless, electricity needs are currently not covered. Indeed, most Sub-Saharan countries export these resources. In the case of Nigeria, the biggest African oil producer and Africa's leading economy, black gold represents 95% of export revenues, with Asia and Europe as the main recipients. Meanwhile, a lack of infrastructure hinders the processing of oil, and the country even has to import its fuel from Europe. In 2016, 40% of Nigerians had no access to electricity at all. The rest of the population sometimes had only limited access, during short time slots.

Nigeria is not the only country struggling to provide an efficient electricity grid. The lack of refineries, corruption, and poor business models for managing these resources explain the current state of the energy sector. Fossil fuel is the costliest source of energy to produce. Moreover, the lack of investment in the energy sector prevents firms from efficiently generating and transporting electricity across the territory. Since 2007, investment in the sector has averaged US$12 billion, only 36% of what is estimated to be needed for an optimal electricity system. Today, governments remain the main investors in the energy sector; unfortunately, massive corruption in some countries has reduced the funds effectively allocated. In the 1990s, many countries privatised the energy sector in order to attract private investment. It worked for some countries like Namibia, which was able to lower energy tariffs and improve efficiency. But for others, high production and transformation costs led to high prices, creating a gap between the rich and the poor, the latter being unable to afford the installation cost of a basic single-phase circuit in their home. This observation, made by the Forum of Energy Ministers of Africa in 2000, still holds today for most people in rural areas, who made up 60% of Sub-Saharan Africa’s population in 2016. Moreover, providing electricity in isolated areas is seldom profitable, offering firms little incentive to expand their networks towards people who will not be able to pay.

Should we then encourage investment in the better exploitation of coal, natural gas, and oil, which, in many cases, seem more profitable when exported? There is a double problem: first, these resources are infamous for their greenhouse gas emissions; second, they are in finite supply. In addition, catastrophes like the Probo Koala case in 2006, when a Greek-owned petroleum ship offloaded toxic waste in the port of Abidjan and poisoned thousands of people, have shown the limits of poorly managed resources.

Sustainable energy is becoming an increasingly important part of the conversation. For some, the idea is a utopia; for others, a fleeting trend. In spite of diverging opinions, many see it as a reasonable hope, and the only possible escape from the energy crisis. Do we have reasons to believe them?

One could say oil and coal are being dethroned by sun and water.

Sometimes called the sun continent, Africa has the highest solar irradiance on Earth, with the Sahara desert breaking records in daily sunshine duration. Countries like Morocco have already begun to exploit this major asset: near the city of Ouarzazate, a giant solar farm covering 1.4 million square metres of yellow sand emerged in 2016. The Northern kingdom aims to fulfil 52% of its electricity needs through solar power by 2030.

Africa is also home to some of the most powerful rivers in the world. One of them, the Congo River, is the world’s second largest river by discharge volume, after the Amazon. Its Inga Falls, situated in the Democratic Republic of Congo, currently host several projects that are all part of the Grand Inga project. The dam aims to provide cheaper energy to a large part of Africa, allowing industries to take off; a significant part of the energy produced is intended for South Africa, for instance. Initiated in the early 2010s, the project received a joint bid from a Chinese and a Spanish company in 2018 to cover the estimated US$13 billion cost of the recent project expansion.

Less covered by the media, the wind power market is also expanding. Mainly developed in South Africa thanks to the REIPPPP programme launched in 2011, this source of energy promises to be increasingly exploited in the future, as there are strong coastal winds in Eastern Africa.

There are many other examples of this kind, such as geothermal energy in the East African Rift. According to the World Bank, renewable energy represented 70% of total final energy consumption in Sub-Saharan Africa in 2015, mainly because of improvised mini-grids in rural areas – still not sufficient to provide 24-hour-a-day access to electricity.

Potential is not enough: business models have to be improved for Africa to escape poverty. Public-private partnerships, as recommended by a World Bank report in 2017, should be implemented to better regulate the sector, incentivise firms, serve the public interest, and encourage private investment. While oil-producing countries need oil exports to keep their trade balance from sinking into deficit, governments have to favour alternative energies to provide electricity to their populations. Regional cooperation can also be a viable option: one example is the West African Power Pool (WAPP), founded in 2000, which gathers 14 countries with the shared goal of building a common market for electricity, creating a bigger and thus more attractive market for investors. The Central Africa Power Pool, the Eastern Africa Power Pool, the Southern Africa Power Pool and COMELEC – in Northern Africa – have also been implemented, dividing Africa into five main markets.

Investment in both non-renewable and renewable energy is already increasing, in particular because of recent Chinese foreign policy towards Africa. Electrification is on an upward trend as public policies become more directed towards the common good. That said, much remains to be done. The Dark Continent could become greener, more active, and healthier – and the “Sun Continent” could one day be a more fitting name.

by Rose Mba

The winners and losers of the French 2008 feebate policy

In 2008, the French government introduced a policy taxing cars with high carbon emissions and rebating cars with low carbon emissions, better known as a feebate policy or bonus-malus écologique. This type of policy is appealing for two reasons: first, it provides incentives to purchase less polluting cars; second, it can be designed to be revenue neutral, since the revenue collected through the taxes subsidises the rebates.

In a recent paper, I conduct a quantitative evaluation of this policy, with a particular focus on its distributional effects: it is particularly relevant in this case to identify the winners and losers of the policy. I also analyse the effect of this policy, which is based on carbon emissions, on other local pollutants like particulate matter and nitrogen oxide. By nature, a policy that targets carbon emissions favours diesel cars, which emit higher levels of nitrogen oxide and particulate matter than petrol cars. Particulate matter and nitrogen oxide are known to have a direct impact on air quality and hazardous effects on health. While carbon emissions have a global impact, these local pollutants raise the question of the distributional impacts of the feebate policy in terms of health effects.

To measure these effects, I build a structural model of market equilibrium for the automobile industry. This implies estimating the supply and demand for the different car models using data on car characteristics and sales, which can then be used to simulate what the market would have looked like had there been no feebate policy in 2008. Comparing the observed market equilibrium with the counterfactual one, I can thus deduce the policy’s causal effect. Relying on a structural model is especially useful because some outcomes of interest cannot be observed directly, but can be expressed in terms of the model parameters. This is the case for car manufacturers’ profits and consumer surplus.
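To give a flavour of this counterfactual exercise, here is a deliberately stylised sketch, not the paper’s model: it uses a plain logit demand with three hypothetical car models and single-product firms, whereas the actual model is far richer. All numbers (price sensitivity, qualities, prices, feebate amounts) are made up for illustration. The sketch backs out marginal costs from observed prices, then re-solves the pricing equilibrium without the feebate.

```python
import numpy as np

# Illustrative parameters (all assumed, not from the paper).
alpha = 0.8                              # price sensitivity
quality = np.array([2.0, 1.5, 1.0])      # mean utility net of price
feebate = np.array([-0.7, 0.0, 0.5])     # rebate (<0) or tax (>0) per car

def shares(consumer_prices):
    """Logit market shares given the prices consumers actually pay."""
    e = np.exp(quality - alpha * consumer_prices)
    return e / (1.0 + e.sum())           # outside option normalised to 0

def equilibrium_prices(costs, feebate):
    """Bertrand equilibrium with single-product firms: the first-order
    condition gives p_j = c_j + 1 / (alpha * (1 - s_j))."""
    p = costs + 1.0
    for _ in range(1000):
        s = shares(p + feebate)          # consumers pay price plus tax/rebate
        p_new = costs + 1.0 / (alpha * (1.0 - s))
        if np.max(np.abs(p_new - p)) < 1e-10:
            break
        p = p_new
    return p_new

# Step 1: back out marginal costs from observed prices, assuming observed
# prices are optimal under the feebate (as described in the text).
p_obs = np.array([3.0, 3.5, 4.0])
s_obs = shares(p_obs + feebate)
costs = p_obs - 1.0 / (alpha * (1.0 - s_obs))

# Step 2: simulate the counterfactual equilibrium without the feebate.
p_cf = equilibrium_prices(costs, np.zeros(3))

print("implied marginal costs:", costs.round(3))
print("counterfactual prices: ", p_cf.round(3))
```

Comparing prices and shares across the two equilibria is what allows the policy’s causal effect to be deduced; outcomes such as profits follow from the same objects.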

A notable challenge in modelling this market, and in being able to distinguish the winners and losers of this policy, is to incorporate a large degree of heterogeneity in individuals’ preferences for cars and their attributes. I assume that individual heterogeneity in preferences is related to observable demographic characteristics, and leverage the correlation between the composition of car sales and demographic characteristics at the municipality level. For instance, observing that cars purchased in rural areas tend to be more fuel efficient than those in urban areas reveals that individuals in rural areas tend to drive more, and are thus likely to be more sensitive to fuel costs than those living in urban areas. Similarly, sales in wealthier municipalities reveal a positive correlation between income and the taste for horsepower.

On the supply side, I model the competition between car manufacturers and their pricing strategies, with and without the feebate policy. I do not model the choice of car characteristics and assume they are identical regardless of the regulatory environment. The marginal cost of each car model is estimated under the assumption that car prices in 2008 are the optimal prices under the feebate policy. In the simulation of the market equilibrium absent the feebate policy, I predict prices and sales for each car model, since both are jointly determined by demand and supply.

What is important here is that, when setting its prices, a firm anticipates that consumers receive a rebate or pay a tax, and captures part of that rebate or tax through its pricing. How much is left to the consumer depends on the intensity of competition and the market power of car manufacturers.
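A textbook special case makes this pass-through logic concrete. Consider a hypothetical monopolist facing linear demand q = a - b·p with marginal cost c, where consumers pay the firm’s price plus a per-unit tax t (the numbers below are made up; this is not the paper’s model). Maximising (p - c)(a - b(p + t)) gives p = (a/b + c - t)/2, so the consumer price is (a/b + c + t)/2: exactly half of the tax (or rebate) is passed through, and the firm absorbs the other half.

```python
# Hypothetical linear-demand monopoly: q = a - b * (consumer price).
a, b, c = 100.0, 2.0, 10.0   # made-up demand and cost parameters

def firm_price(t):
    """Profit-maximising producer price when consumers pay p + t."""
    return (a / b + c - t) / 2.0

def consumer_price(t):
    return firm_price(t) + t

# Pass-through: change in consumer price per unit of tax.
passthrough = (consumer_price(1.0) - consumer_price(0.0)) / 1.0
print(passthrough)  # 0.5: consumers bear half the tax, the firm the rest
```

With other demand shapes or stronger competition the pass-through rate differs from one half, which is why estimating the full model matters for splitting the feebate between consumers and manufacturers.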

In the end, the feebate policy improved consumer surplus and firms’ profits by more than the 223 million euros it cost in 2008. I find that the feebate caused a decrease in average carbon emissions of 1.56%, while average emissions of local pollutants – carbon monoxide, hydrocarbon, NOx, and PM – all increased. Emissions of local pollutants and carbon dioxide, however, increased once converted into annual tons. The increase in annual carbon emissions can be explained not only by the higher share of diesel cars, which implies more kilometres driven, but also by the increase in the number of cars purchased. Indeed, cars with low carbon emissions, which are already cheap, become even cheaper because of the rebates, so that individuals who were not initially buying a car do buy one, at least in my model. Nonetheless, including the cost of carbon and local pollutant emissions at standard levels still implies that the policy is globally welfare improving, with an estimated net benefit of 124 million euros.

Shifting the focus to the impact on income distribution, the main insight is that the feebate favoured the middle-income category at the expense of the low and high-income classes. Moreover, given that the policy was not revenue neutral and contributed to a net deficit, the feebate could have been made redistributive if it had been financed by a tax proportional to income.

Clear winners and losers also appear among car manufacturers, which are typically very specialised in different car segments: French manufacturers specialise in small, fuel-efficient cars, whereas bigger cars are the mainstay of the German manufacturers. It comes as no surprise that the model points to PSA and Renault, the two French manufacturers, as winners of the feebate policy: it increased their profits by 3.4% and 4% respectively, considerably more than the 2.1% increase in profits for the industry as a whole. The Fiat group, the Italian manufacturer, increased its profits by 6.2%, while Volkswagen, a German manufacturer very active in the compact car segment, increased its profits by only 0.3%. The other German manufacturers, such as Porsche, BMW, and Mercedes-Daimler, were all severely hurt by the policy.

Finally, looking at the heterogeneity of the policy’s effects on emissions of local pollutants, I find that average emissions increased the most in low-emission municipalities. The policy generated a decrease in average local pollutant emissions in some areas, but a high degree of heterogeneity can be observed across the country.

The analysis concludes with an evaluation of the feebate in terms of redistribution and limitation of local pollutant emissions. The idea is to ask whether it would have been possible to improve consumer surplus, achieve more redistribution across individuals, or limit the increase in emissions of local pollutants with the same budget and the same effect on average carbon emissions. In this exercise, I restrict the set of alternative policies to simple linear feebates with different slopes for rebates and taxes. Interestingly, I find that average consumer surplus cannot be further improved, while there are large potential gains in terms of profits. Alternative feebate schemes could limit the rise in emissions of local pollutants, but the gains are not very large, and the best outcomes for the different pollutants cannot be achieved with a single feebate scheme: this reveals a trade-off to be made between the various pollutants.

by Isis Durrmeyer