The English Language: History and Etymology



Old English – First three lines of the epic Beowulf (composed in the early eighth century): (1) in the “Insular Hand”, the handwriting of the time, which had been adopted from the Irish; (2) the transcription into the Latin alphabet and the translation into modern English (read line by line).


Each of us uses, hears, and reads words every day. And beneath the manifold meanings a word can have in its current usage lies an even richer history, which can span millennia and continents. The study of words, their origins, and their development is called etymology – a branch of linguistics. The purpose of this article is to give a brief overview of the development and etymology of the English language, then to provide some examples of words and their history, and finally to convince you that etymology can be practical in everyday life.

English is a particularly gratifying object of etymological study, as it combines the influences of several language families. Old English (449-1100) was brought to the British Isles by the Germanic Angle, Saxon, and Jute tribes of the northern European mainland. Their language had evolved within the Indo-European language family, whose prehistoric ancestor was the source of most other European and many South Asian languages. In due course, the languages of the British Isles absorbed the influences of Latin, through the spread of Christianity and the alignment with the Roman Catholic Church, and of Scandinavian, through repeated invasions by the Vikings.





Old English (late West Saxon dialect) – Opening verses of Genesis, the first chapter of the Bible, as translated by Ælfric, the greatest prose writer of the Old English period.



The transition to the Middle English period (1100-1500) was marked by an important shift in grammar compared to Old English. Its starting point can be set at the year 1066, when the Norman army invaded and conquered England. The Normans came from Normandy in northern France and were descendants of the Vikings who had settled that area some generations earlier; by the time of the conquest they had become culturally French. They replaced the native English nobility, and Norman French thus became the language of government. Latin remained the language of the clergy, and English the language spoken by the majority of the population – Britain effectively became trilingual. With time, English regained importance as ties with France loosened (e.g. through the loss of the Normandy territory and the Hundred Years’ War between England and France). The power of the English-speaking common people increased, partly because the Black Death killed around a third of England’s population; English-language poetry (e.g. by Chaucer) became popular and the Bible was translated into English. By the end of the 14th century, public documents were written in English and kings made their declarations in English. By that time, Middle English had changed considerably compared to Old English: Latin and Scandinavian had introduced new words into the word-stock, and Old French – the largest influence by far – besides adding words to the vocabulary, had also influenced the grammar.



Late Middle English – Opening verses of Genesis, in the English translation by John Wycliffe from the 1380s.


In the period of Early Modern English (1500-1800), British influence vastly expanded across the world, laying the foundations for English as a world language. This was not only the time of Shakespeare, but also one of transformation for the language. While the transition from Old to Middle English occurred in terms of grammar, the shift from Middle to Early Modern English was driven by a notable change in pronunciation and an expansion of the word-stock.

In part, new words were acquired from foreign languages: the Renaissance led to an influx of Latin and Ancient Greek vocabulary, French remained a strong influence, and Portuguese and Spanish gained in importance due to their role in the colonial conquests in Latin America. Britain itself expanded its influence during that time, founding colonies in America, Asia, and Australia, and through this acquired not only goods but also words.



Early Modern English – Opening verses of Genesis from the
King James Bible published in 1611.


Furthermore, starting in the 15th century, the English language underwent its most important shift in pronunciation, termed the Great Vowel Shift: the phonetics of all of the Middle English long vowels changed, as described in the picture below, and that of many other vowels and consonants as well. For example, the a in name used to be pronounced as in spa, and the double e of feet was pronounced like the vowel in made. The reasons for this shift are essentially unknown. Spelling, however, was not adjusted to reflect the new pronunciation, as the archaic medieval ways of spelling were preferred; this is one of the reasons why spellings do not correspond to pronunciation. Another is that, at the time, scholars studying etymology were fond of introducing – sometimes erroneously – new spellings of words based on their etymological roots. This explains the gap between the writing and the pronunciation of words such as debt or doubt. Those words come from Old French and were spelled det and dout in Middle English, in line with their pronunciation. Today’s b was inserted to reflect the Latin origins debere (to owe, to have to) and dubitare (to doubt). Similar examples are indict, victual, and receipt, all pronounced differently than their spelling suggests.



Early Modern English: The Great Vowel Shift


Today, in the period of Late Modern English (1800-present), English is a world language; the total number of speakers may be as high as two billion, although of varying competence. Algeo (2009) differentiates three circles of English speakers: “an inner circle of native speakers in countries where English is the primary language, an outer circle of second-language speakers in countries where English has wide use alongside native official languages, and an expanding circle of foreign-language speakers in countries where English has no official standing but is used for ever-increasing special purposes.”

To illustrate the concept of etymology, let me present an example. One rather surprising etymology is that of the word muscle: it derives from the Latin word for muscle, musculus, which is literally the diminutive of mus, mouse. Apparently the shape and movement of muscles, in particular the biceps, evoked the image of mice. This image of muscles as little animals moving underneath the skin seems to have been widespread: in Greek, mys is also both mouse and muscle; in Arabic, adalah means muscle and adal field mouse; and the Middle English lacerte meant both muscle and lizard.

How can such knowledge be not only entertaining but also useful? Since we are studying in Toulouse, I want to finish by focusing on the links between English and French, and give you some tricks I accumulated over the years to figure out the meaning of unknown French words. They do not always work perfectly or at all, but are awesome when they do.

English started off as the language of a few Germanic tribes who had settled a small island off the coast of Europe. Over its history it evolved and by some coincidences became a world language with many millions of speakers – in this process collecting and incorporating words and grammar from French, Latin, Scandinavian, Portuguese, Spanish, German, and many other languages around the world. These influences are still visible today – and knowing how languages are interrelated can help us use our knowledge about one language to decipher another.

By Julia Baarck


For those who would like to learn more about languages and etymology, I warmly recommend the “Johnson” column in The Economist, as well as the book “The origins and development of the English language” (the basis for the history part of this text).

Further references

Algeo, John. “The origins and development of the English language.” (2009).

Crystal, David. “Two thousand million?” English Today 24.1 (2008): 3-6.


Merriam Webster Dictionary.

The Economist. Johnson Column.

TSEconomist Coffee Talk #3 : Socioeconomic Inequality across Religious Groups in the Middle East

We are happy to invite you to the 3rd Coffee Talk of the year, taking place on March 27! Our speaker is Prof. Mohamed Saleh, who will give a talk titled “Socioeconomic Inequality across Religious Groups in the Middle East”, followed by a discussion with your participation! Hope to see you on March 27 during the lunch break in MC203! Please find the details of the talk below.
Abstract: “This book project plans to study various questions related to the long-standing socioeconomic inequality across non-Muslims and Muslims in the Middle East. It draws on novel primary data sources including medieval papyri, historical population censuses, and tax registers, in order to document the socioeconomic advantage of non-Muslim minorities in the region and how it evolved over time and varied across groups and territories. It then examines how inter-religion socioeconomic inequality was impacted by European influence and state-led development since 1800. Finally, it explores the historical roots of this inequality and the role of Islamic taxation in its emergence, and how the Islamic tax system itself evolved in response to it. Overall, the planned manuscript is part of a larger project that attempts to write a new evidence-based economic history of the region that draws on the digitization of various primary unexplored data sources at local and European archives, and that combines the quantitative approaches of the social sciences with the historical literature. In doing so, it builds on earlier work of pioneering economic historians of the region, while attempting to go beyond the conceptual and methodological divisions that separate economic historians from historians as well as those that separate nationalist from colonial narratives.”

An interview with Daron Acemoglu on artificial intelligence, institutions, and the future of work

The recipient of the 2018 Jean-Jacques Laffont prize, Daron Acemoglu, is the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology. The Turkish-American economist has published extensively on political economy, development, and labour economics, and has won multiple awards for his two books, Economic Origins of Dictatorship and Democracy (2006) and Why Nations Fail (2012), which he co-authored with James A. Robinson from the University of Chicago.

The Jean-Jacques Laffont prize is the latest addition to the well-deserved recognition the economist has received for his work, which includes the John Bates Clark Medal from the American Economic Association in 2005 and the BBVA Frontiers of Knowledge Award in Economics in 2017. Despite a schedule heavy with seminars and conferences, Daron kindly set aside some time to offer the TSEconomist his insights on topics ranging from the impact of artificial intelligence for our societies to the role an academic ought to take in public political affairs.


  1. Congratulations on winning the Jean-Jacques Laffont prize. What does this prize represent to you?

I’m incredibly honoured. Jean-Jacques Laffont was a pioneering economist in both theory and the application of theory to major economic problems. I think this tradition is really important for the relevance of economics and for its flourishing over the last two decades or so. I think this prize is a fantastic way of honouring his influence, and I feel very privileged to have been chosen for it.

  2. Thanks to you and other scholars working on economics and institutions, we now know that the way institutions regulate economic life and create incentives is of great importance for the development of a nation. New players such as Google now possess both the technology and the data needed to efficiently solve the optimisation problems institutions face. This raises a debate on government access to, purchase, and use of this data, especially in terms of efficiency versus possible harm to democracy through the centralisation of political power. What is your take on this?

I think you are raising several difficult and important issues. Let me break them into two parts.

One is about whether the advances in technology, including AI and computational power, will change the trade-off between different political regimes. I think the jury’s out and we do not know the answer to that, but my sense is that it would not change the trade-off so much as it changes the feasibility of different regimes surviving even if they are not optimal. What I mean is that you can start thinking about the problem of what was wrong with the Soviet Union in the same way that Hayek did: the problems to be solved are just too complex, the government can’t handle them, so let’s hope that the market solves them.

Then, if you think about it that way, you may say that the government is getting better at solving it, so perhaps we can have a more successful Soviet Union. I think that this is wrong for two reasons that highlight why Hayek’s way of thinking was limited, despite being revolutionary and innovative. One reason is that the problem is not static, but dynamic, so the new algorithms and capabilities create as many new problems that we don’t even know how to articulate. It is therefore naive to think that in such a changing world, we can delegate decision-making to an algorithm and hope that it will do better than the decentralised workings of individuals in groups, markets, communities, and so on.

The second reason is that Hayek’s analysis did not sufficiently emphasise a point that I think he was aware of and stressed in other settings: it is not just about the capabilities of the governments, but about their incentives. It is not simply that governments and rulers cannot do the right thing, but that they do not have the incentives to do so. Even if they wanted to do the right thing, they do not have the trust of the people and thus cannot get the information and implement it. For that reason, I don’t think that the trade-offs between dictatorship and democracy, or market planning versus some sort of market economy, is majorly affected by new technology.

On the other hand, we know that the equilibrium feasibility of a dictatorship may be affected. The ability to control information, the Internet, social media, and other things, may eventually give much greater repressive capability to dictatorships. Most of the fruitful applications of AI are in the future and to be seen, the exception being surveillance, which is already present and will only expand in the next ten years, in China and other countries. This will have major effects on how countries are organised, even if it may not be optimal for them to be organised that way.

To answer the second part of your question, I think that Google is not only expanding technology, but also posing new problems, because we are not used to companies being as large and dominant as Google, Facebook, Microsoft, or Amazon are. Think of when people were up in arms about the power of companies, the robber barons, at the beginning of the 20th century, leading to the whole sequence of Progressive Era reforms, antitrust and other political reforms: as a fraction of GDP, those companies were about one quarter as big as the ones we have today. I therefore think that the modern field of industrial organisation is doing us a huge disfavour by not updating its way of thinking about antitrust and market dominance, with huge effects on the legal framework, among other things. I don’t know the answers, but I know that the answers don’t lie in thinking something like “the Herfindahl index is not a good measure of competition, so we might have Google dominate everything, but perhaps we are ok” – I think that this is not a particularly good way of going about things.

  3. Some fear that the dominance of these companies could lead to the growth of inequality. Do you think that AI could play a role in this?

I am convinced that automation in general has already played a major role in the rise of inequality, such as changes in wage structure and employment patterns. Industrial robots are part of that, as well as numerically controlled machinery and other automation technologies. Software has been a contributing factor, but probably not the driver in the same sense that people initially thought about it. Projecting from that, one might think that AI will play a similar role, and I think that this is not a crazy projection, although I don’t have much confidence that we can predict what AI will do. The reason is that industrial robotics is a complex but narrow technology. It uses software and even increasingly artificial intelligence, but it isn’t rocket science. The main challenge is developing robots that can interact with and manipulate the physical world.

AI is a much broader technological platform. You can use it in healthcare and education in very different ways than in voice, speech, and image recognition. Therefore, it is not clear how AI will develop and which applications will be more important, and that’s actually one of the places where I worry about the dominance of companies like Google, Amazon, Facebook: they are actually shaping how AI is developing. Their business model and their priorities may be pushing AI to develop in ways that are not advantageous for society and certainly for creating jobs and demand for labour.

We are very much at the beginning of the process of AI and we definitely have to be alert to the possibility that AI will have potentially destructive effects on the labour market. However, I don’t think that it is a foregone conclusion, and I actually believe there are ways of using AI that will be more conducive to higher wages and higher employment.

  4. Regarding the potential polarisation between high- and low-skilled labour, do you think that the government could address this issue with a universal basic income?

There is a danger – not a certainty, but a danger – that it will polarise, and that even if we use AI in a way that simplifies certain tasks, it may still require some numeracy and some social skills that not all workers have, resulting in probable inequality and displacement effects.

That being said, I believe that universal basic income is a bad idea, because it is not solving the right problem. If the problem is one of redistribution, we have much better tools to address it. Hence, progressive income taxation coupled with something like earned tax credits or negative taxation at the bottom would be much better for redistributing wealth, without wasting resources on people who don’t need to get the transfer. Universal basic income is extremely blunt and wasteful, because it gives many transfers to people who shouldn’t get them, whereas taxation can do much better.

On the one side, I fear that a lot of people who support universal basic income are coming from the part of the spectrum which includes many libertarian ideas on reducing transfers, and I would worry that universal basic income would actually reduce transfers and misdirect them. On the other side, they may be coming from the extreme left, which doesn’t take the budget constraints into account, and again, some of the objectives of redistributing could be achieved more efficiently with tools like progressive income taxation.

Even more importantly, there is another central problem that basic income not only fails to deal with, but actually worsens: I think a society that does not generate employment for its people would be a very sad society and would have many political and social problems. This fantasy of people not working and yet having a good living standard is not a good fantasy. Whatever policy we use should encourage people to obtain a job, and universal basic income discourages people from doing so, as opposed to tax credits on earned income, for example.

  5. In a scenario where individuals are substituted and fewer people work, how could governments obtain the revenue they are not getting from income taxation? Could taxing robots be a possibility?

I think that this is a bad way of approaching the problem, because when you look at labour income, there is certainly enough to allow more redistributive taxation, and no particular need to tax robots. However, we should also think about capital income taxation more generally: there may be reasons for taxing robots, but they relate more to production efficiency and excessive automation. I think that singling out robots as a revenue source distinct from other capital stock would be a bad idea. If, for example, you want taxes to raise revenue, then land taxes would be a much better option than robot taxes – which does not mean that we should dismiss the idea of taxing robots. I think the discussion is confused because there are efficiency reasons (giving the right incentives to firms) and revenue-raising reasons for taxing, and public discussions, driven by Bill Gates and others, are not helping with this confusion.

In terms of sharing wealth, I think that robots do not create new problems compared to other forms of capital. I think it was a confusion of Marx to think of the marginal product of capital in very complex ways – that everything that goes to capital is somehow theft – and if neoclassical economics has one contribution, it is to clarify that. I personally believe there are legitimate reasons for thinking that there is excessive automation. And if there is excessive automation, there are Pigouvian reasons for taxing robots, or actually for removing subsidies to robots, of which there are many. But that is the discussion we need to have.

  6. There has recently been optimism regarding the future of AI and the role it could have, for example, in detecting corruption or improving education. You have made the distinction between replacing and enabling technologies. Where does one draw the line between the two?

That is a great question. In reality, of course, automation and replacing technologies merge with technologies that improve productivity. A great example is computer-assisted design. Literally interpreted, it is a labour-augmenting technology, because it makes the workers who work in design more productive. At the same time, however, it may have the same features as an automation technology, because with computer-assisted design, some of the tasks that a draughtsman would do are automated. If you do it once, you can do it repeatedly.

So that is a grey area, but it’s okay because the conceptually important point to recognise is that different types of technologies have very different effects. Recognising this is an antidote against the argument that improving productivity through technology will always benefit labour; we actually need to think about what new technologies do and how the increase in productivity will affect labour.

But it is also very important for the discussion regarding AI to point out that AI, as opposed to industrial robot automation, is not necessarily – and does not have to be – labour-replacing. There are ways in which you can use it to create new tasks for labour or increase productivity. This is what I think will play out in real time in the future of AI.

  7. In 2017, you wrote an article for Foreign Policy, “We are the last defence against Trump”, which questioned the belief that institutions are strong enough to prevent a man like Donald Trump from overriding the rule of law. In your view, should economists come off the fence on current affairs? Is it possible to express an opinion without sacrificing some of the intellectual rigour one can expect from a researcher?

I think so. First, there are important personal responsibilities that are crosscutting. Secondly, there is a danger of having the perfect be the enemy of the good.

On the first one, I think that people have to make their own choices as to what is acceptable and what is not. Some things are just within the realm of “I prefer high taxes, you prefer low taxes”, and that is quite a reasonable thing. But other issues may be a real threat to democracy, to other aspects of institutions, and to minorities that are disempowered. From there, it is important to recognise that there are some lines that should not be crossed, or that if they are crossed, some people need to defend them vocally. Any analogy to the Nazi period is fraught with danger, but it bears saying that, of course, in hindsight, every academic should have walked out of the universities that were managed by Nazis, that were firing Jewish scholars, or that were teaching jurisprudence according to National Socialism. That has nothing to do with whether you have evidence for one thing versus another – I think that there are some lines. Similarly, and without saying anything as provocative as drawing parallels between Trump and the Nazis, I think that it is important for people, in general, to defend democracy against the onslaught it is receiving from Trump’s administration and the circles of people around him. I will say openly to everybody that it is wrong for any economist or academic to go and work for Trump; I would certainly never consider doing so, and would consider distancing myself from anybody who does.

But that is on the private ground. On the social science side, there is a lot we do not know. Everything we know is subject to standard errors and external validity constraints, but to refuse to act or to condemn would be to have the perfect be the enemy of the good. On the basis of what we know, we know how democracies fail, we know how certain aspects of American institutions are actually weaker than what people think, and we know how changes in policies against minorities would have terrible effects for certain groups. I think that on the basis of that, to articulate criticism of certain policies and certain politicians is also a good use of the knowledge we have accumulated.

by Valérie Furio, Gökçe Gökkoca, Konrad Lucke, Paula Navarro, and Rémi Perrichon

An economic theory of war: the Syrian example beyond religion and ethnicity

When we look at the Middle East today, we immediately think of inter-religious, inter-ethnic, and politically motivated conflicts triggered by some event. Take, for example, the Syrian civil war and the first Arab Spring demonstrations there, which took place in the city of Daraa.

This raises the question: can we create a general theory of war that works for every conflict, rather than thinking of each war as a series of individual circumstances? On the one hand, journalism has a tendency to present war as a single timeline within a context, implying that there are direct causal relationships between events. On the other hand, historians and political scientists have put forward different unifying theories of war: it could be part of a bargaining process between states or factions within a state (the bargaining model, James D. Fearon); wars could occur simply because no one can stop them (the “anarchy” theory, Kenneth Waltz); other researchers present wars as unique events that cannot really be theorised and that depend on the belligerents’ culture (Why Wars Happen, Jeremy Black). The latter is essentially a refinement of the journalistic approach.
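The core insight of the bargaining model – that because war destroys resources, there normally exists a range of peaceful settlements both sides should prefer to fighting – can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not Fearon’s full model:

```python
# Toy sketch of the bargaining model's core insight (all numbers are
# illustrative assumptions, not estimates from any real conflict).
# Two sides contest a prize normalised to 1. Side A wins a war with
# probability p; fighting costs the sides c_a and c_b respectively.

def bargaining_range(p, c_a, c_b):
    """Interval of peaceful splits x (A's share) both sides prefer to war.

    A accepts x if x >= p - c_a (its expected war payoff);
    B accepts x if 1 - x >= 1 - p - c_b, i.e. x <= p + c_b.
    """
    low, high = p - c_a, p + c_b
    return max(0.0, low), min(1.0, high)

low, high = bargaining_range(p=0.6, c_a=0.1, c_b=0.15)
print(f"Any split giving A between {low:.2f} and {high:.2f} beats war for both sides.")
```

As long as the costs c_a and c_b are positive, this interval is non-empty, so in this stylised world war can only occur through frictions such as imperfect information or commitment problems.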

Recently, there has been an interesting trend among theorists of war and civil war to use utility functions, increasingly collaborating with economists to create stylised micro models. They use expected utility, information and commitment problems such as adverse selection and moral hazard, and bargaining theory. This approach is very elegant, but also rather difficult to test. How should we, as young economists, theorise war? Can we validate such a theory using the Syrian conflict?

A warmonger’s objective

Taking a modern war economics perspective, it must be that the initiator of a war has an economic, political, or social interest in starting it, and that the other protagonist benefits from fighting back. Both must also reward their supporters, or at least spare them a very high loss. In reality, wars are not only extremely costly in terms of human life and liquidity, but also in terms of infrastructure, human capital, trade, and international reputation. Yet wars still happen.

From The Toll of War: The Economic and Social Consequences of the Conflict in Syria, the World Bank’s 2017 report on the fallout of the Syrian conflict, we know that a third of Syria’s total residential buildings have been destroyed, that half its population has been displaced, and that around 450,000 people have been killed. Most importantly, Syria’s nominal GDP has contracted by 61% since 2011.

Let us assume that the agents or factions partaking in the war are rational. Then, for a war to start, one of the protagonists must expect at least some long-term benefit from it. This rationality assumption is very plausible because starting a war is not a one-man decision, even in the most despotic regimes. The important point here is that the agents’ expectations on entering the war are positive, but are based only on the information available to them. This information is obviously imperfect and often too optimistic, which explains why countries and factions start wars they cannot win, or wars that leave them worse off in the long run.
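The role of over-optimistic information can be made concrete with a toy expected-utility calculation (all probabilities and values below are assumptions for illustration): a faction acting rationally on its own beliefs may start a war whose expected payoff, at the true odds, is negative.

```python
# Toy illustration of a war started under over-optimistic beliefs
# (all probabilities and values are assumptions for illustration).

def expected_war_payoff(p_win, prize, cost):
    """Expected payoff of fighting: win the prize with probability
    p_win, and pay the cost of the war either way."""
    return p_win * prize - cost

believed = expected_war_payoff(p_win=0.7, prize=100.0, cost=40.0)  # leaders' estimate
actual = expected_war_payoff(p_win=0.3, prize=100.0, cost=40.0)    # true odds

# Positive under the leaders' beliefs, negative at the true odds:
print(f"believed: {believed:+.0f}, actual: {actual:+.0f}")
```

A faction that trusts the optimistic estimate rationally goes to war, even though at the true odds the same war leaves it worse off, which is exactly the pattern the paragraph above describes.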

Take, for example, Saddam Hussein’s invasion of Kuwait in August 1990 and the ensuing operation “Desert Storm” in January 1991: after two conflicts with Iran, Iraq’s finances were depleted, so Saddam decided to invade Kuwait in order to claim its vast oil reserves and lift Iraqis’ spirits to protect his regime. He completely underestimated the probability of a United Nations-backed intervention against him. The invasion destroyed Iraq’s reputation and led to tremendous military losses. It is considered the main reason for Saddam losing his grip on power in the following years, which ultimately led to his removal in 2003.

The factors behind the conflict: resources, information and grievance

Factions engage in a war because, based on their current information, they can get something out of it in the long run. Army size and firepower are both important factors in these decisions. If a faction has no way of defeating another, it could either not rebel at all, or start an insurgency or a low-intensity conflict. This is the case with Afghanistan’s Taliban, and with ISIS in Syria and Iraq now that its standing army has been more or less defeated. As for the other factors, there are major differences of opinion among war economists. Paul Collier is famous for advocating a greed model in opposition to a grievance model. He argues that, ultimately, any group that engages in war is motivated by two main economic variables. The first is the presence of natural resources in a specific region: these resources are vital and at the same time indivisible, because only one group can control them. The second is economic marginalisation, or more precisely, perceived economic losses and lack of opportunities.

Again, let’s take Syria as an example. Alawite-dominated cities, mainly in the western part of the country, have developed considerably in comparison to the east, despite many oil fields and gas deposits being located there. Another big issue is water: the Tigris and the Euphrates have their sources in Turkey and are vital for the less developed areas of Syria, which lie downstream. This is also true for Iraq, which lies even further downstream. The upstream, however, is controlled by Turkey and Assad’s regime, which have built dams and pumped most of the water for themselves. Last but not least, Syria’s population went from 3 million in 1950 to 22.5 million in 2011, with its youth (under 25) representing 56% of the total population. A third of working-age young Syrians are unemployed, and this figure is even higher among the highly educated. From this point of view, the Syrian conflict does not look so much like a conflict caused by ideological differences or by racial and religious hatred between Kurds and Arabs, or Sunnites and Alawites, but mostly like one caused by extreme economic marginalisation and inequality between the Alawite-dominated regions and the rest, and between the old and the young.

Collier’s theory is very useful and has been tested successfully on the Taliban insurgency since 2001 and on Sri Lanka’s civil war from 1983 to 2009. Nevertheless, it can be too simple. David Keen, one of his staunch opponents, believes that to model a conflict, leaders and supporters should have different utilities, and that greed and grievances can be combined and feed off each other. He takes the case of the second Sudanese civil war (1983-2005) and argues that Northern leaders stirred up the resentment and hatred of Arab militias so that they would commit crimes in the South and, in the end, depopulate it. These exactions took place precisely along the current border between North and South Sudan, where all of Sudan’s oil fields are located. Hence, the leaders’ interests were both ideological and practical: attacking the Christian and pagan South, and controlling important natural resources, respectively. As for the militias, they did more than share the ideology of their leaders; that is how they were able to commit atrocities that economic disenfranchisement could never excuse. Similarly, the systematic use of rape and torture by Islamist militias (both Sunni and Shia) in the Iraqi-Syrian conflict cannot be driven by greed alone.

Despite being a young field, war economics, much like political or identity economics, is proving very useful because its theories can be tested. It is so useful that other social sciences are copying its methods and models. This is not just a trend: in all kinds of social sciences, economics is playing an increasingly important role. In my opinion, it is becoming the default quantitative social science because it puts as much emphasis on the empirical side as on theory, and because its theories are tested. It is no longer accepted in economics to “talk the talk” by designing a nice theory without “walking the walk” by seriously proving or disproving it with data.


by Hippolyte Boucher


Featured Image: Before (2015) / After (2017) Al-Nuri Mosque Neighbourhood

History of TSE: the origins

During the spring of 1978, a young researcher fresh from a Harvard PhD had the idea to create a high-level centre of economic research in his hometown, Toulouse. His name was Jean-Jacques Laffont. In 1979, he returned to Toulouse and laid the foundations for what would become the Toulouse School of Economics.

Continue reading “History of TSE: the origins”