2017 Competition Curtain Raiser, Part 1: Excessive Pricing

This is the first in a series of posts highlighting competition issues and cases that are set to drive the debate in Europe this year.

Pfizer and Flynn Pharma: a major decision from the CMA

On 7 December 2016, the United Kingdom’s (UK’s) Competition and Markets Authority (CMA) issued a potentially precedent-setting decision against pharmaceutical producer Pfizer and distributor Flynn Pharma, imposing a fine of nearly £90 million for excessive pricing.

In September 2012, Pfizer sold the distribution rights to its anti-epilepsy drug Epanutin (phenytoin sodium) to Flynn Pharma, which debranded (or “genericised”) the drug, with the effect that it was no longer subject to price regulation. Following the sale, Pfizer increased its price for phenytoin sodium to Flynn Pharma by between 780% and 1,600% relative to the price at which it had previously sold the drug in the UK, and in turn Flynn Pharma increased the wholesale price of the drug to between 2,300% and 2,600% of the former price.

A key feature of phenytoin sodium appears to be that patients taking the drug cannot readily switch to the same drug manufactured by another producer, since even minor differences in production processes could affect the efficacy of the drug in treating epilepsy in individual patients. Therefore, despite the fact that the drug was genericised, the CMA appears to have found that Pfizer and Flynn Pharma retained a de facto monopoly over the sale of the drug to existing patients taking Epanutin. Such a finding would also imply that alternative epilepsy treatments were not viable substitutes for phenytoin sodium in respect of the relevant patients, and were therefore not included in the definition of the relevant market.

The excessive pricing debate

The prohibition of excessive or unfair pricing by dominant firms is a controversial part of UK and European competition law (it has no meaningful counterpart in US legislation or case law). On the one hand, there are strong economic arguments, at least from a static point of view, for preventing a dominant firm from exploiting its monopoly position by charging prices higher than the theoretical competitive price. One of the key results of microeconomic theory (and indeed the foundation of competition policy) is that monopoly pricing lowers overall welfare compared to a competitive market outcome, since the monopoly maximises profits by producing a less-than-efficient quantity of the relevant good and selling it at a higher-than-efficient price.
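As a purely textbook illustration of this result (not part of the CMA’s analysis), consider linear inverse demand and a constant marginal cost: the monopolist restricts output to half the competitive quantity, raises the price above cost, and a deadweight loss appears.

```latex
% Stylised linear-demand example: inverse demand P(Q) = a - bQ, constant
% marginal cost c < a.
\begin{align*}
  \text{Competitive outcome:} \quad & P_c = c, \qquad Q_c = \frac{a - c}{b} \\
  \text{Monopoly problem:} \quad & \max_Q \, (a - bQ)\,Q - cQ
      \;\Rightarrow\; a - 2bQ - c = 0 \\
  \text{Monopoly outcome:} \quad & Q_m = \frac{a - c}{2b} = \tfrac{1}{2} Q_c,
      \qquad P_m = \frac{a + c}{2} > c \\
  \text{Deadweight loss:} \quad & \tfrac{1}{2}\,(P_m - c)\,(Q_c - Q_m)
      = \frac{(a - c)^2}{8b} > 0
\end{align*}
```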

However, enforcing a prohibition against excessive pricing presents various difficulties. One of these is establishing a benchmark price against which the actual price charged by the dominant firm is to be evaluated, and deciding whether the margin above this benchmark is excessive. According to its press release, the CMA appears to have had regard both to the initial regulated price of phenytoin sodium and to the price charged by Pfizer in other European countries in reaching its finding of excessive pricing.

It is important to note, however, that there is no inherent reason why such prices should represent useful comparators. In other words, although a price increase of 2,600% naturally appears alarming at first glance, a range of factors could have resulted in the initial price being very low, especially if it was regulated. In this case, Pfizer and Flynn Pharma argue that the regulated price of Epanutin in the UK prior to September 2012 had been loss-making. How the CMA established a relevant benchmark will only become clear once its non-confidential decision is made public.

A further risk in enforcing a prohibition of excessive pricing, partly related to the issue discussed above, is that it could have a negative impact on firms’ dynamic incentives to invest across the economy. For example, over-enforcement could prevent a firm from earning economic profits where it has innovated in order to gain a temporary competitive advantage. More generally, over-enforcement runs the risk of creating uncertainty, and thereby lowering incentives to invest, if businesses fear that their future profits will be capped by a competition authority at a level which they cannot predict in advance.

For such reasons, economists such as Massimo Motta and Jorge Padilla (both of whom teach in the Competition master’s programme at BGSE) have proposed that excessive pricing provisions should be enforced only in cases where there is little or no prospect of the relevant market eventually correcting itself, and where a sector regulator would not be better placed than the competition authority to intervene (among further restrictive conditions). In this case, the CMA may have concluded that the inability of other phenytoin sodium producers to compete for existing Epanutin patients created precisely such a situation, in which entry is infeasible. Even so, the question remains whether this issue could not better be addressed by amending existing drug price regulation. The release of the CMA’s final decision is likely to shed more light on this issue.

What to look out for in 2017

In the meantime, Flynn Pharma has appealed the CMA’s decision to the Competition Appeal Tribunal (CAT). 2017 could therefore reveal how the CAT views the different considerations surrounding excessive pricing, and to what extent the CMA decision will be applicable to other drugs and industries. The finding of excessive pricing also raises the prospect that Flynn Pharma’s customers, and specifically the UK Department of Health, could sue it for damages resulting from the high price, which would raise further interesting issues in so-called “private” excessive pricing enforcement.

References

Evans, D.S. & Padilla, A.J. 2005. “Excessive Prices: Using Economics to Define Administrable Legal Rules”. Journal of Competition Law and Economics 1(1), pp. 97–122.

Motta, M. & de Streel, A. 2007. “Excessive Pricing in Competition Law: Never Say Never?” In The Pros and Cons of High Prices, pp. 14–46. Swedish Competition Authority.

What economists do: Talking to Philip Wales, UK Office for National Statistics

Wondering what economists get up to in the real world? The BGSE Voice team spoke to Philip Wales, Head of Productivity at the UK Office for National Statistics, about his day-to-day work and what makes a good professional economist.


First of all, could you please tell us about yourself: what is your background, and what do you do at the ONS?

My name is Philip Wales; I’m Head of Productivity at the Office for National Statistics. I’ve been working at ONS for just over five years, and have held a range of different posts over that period. Before I started work here, I was studying for my PhD at the London School of Economics, and prior to that I was a graduate economist at a private sector consultancy.

In my current position, I manage a team of 14 economists with responsibility for the production and analysis of the UK’s core productivity statistics. We produce the UK’s main measures of labour productivity as well as estimates of multi-factor productivity. It’s an interesting area which has got a lot of attention since the onset of the economic downturn and the ‘Productivity Puzzle’. We regularly answer queries from the press, academics and a range of other experts; we often contribute to external seminars and talks at big international conferences, and our statistics help to inform the national policy debate.

The part of my job that I enjoy most is working with the detailed survey data that ONS receives to understand developments in the UK economy. Over the last few years, I’ve analysed the UK’s Labour Force Survey, the Annual Business Survey and the Annual Survey of Hours and Earnings. I’ve worked with micro-level information gathered for the Consumer Prices Index and with survey data on household income and expenditure, and we’ve produced a range of outputs using these resources.

What other things do economists do at the ONS? 

There are more than 100 economists at ONS, engaged in a wide variety of activities. In our central team, our economists scrutinise and sense-check our economic data. They are responsible for building a coherent narrative around ONS’ economic statistics, and help the media, outside experts and other interested users to understand recent developments. ONS economists also conduct in-depth research and analysis on our micro-level datasets to support user understanding. At times when an economic aggregate is behaving unusually, or when there is a demand for more detailed information, a clear understanding of recent developments informed by high-level data handling and analytical skills is critical.

Alongside work on these data-driven questions, our economists are also deployed to work on detailed methodological and measurement questions. How do you go about modelling the depreciation of a firm’s capital stock? What is the difference between a democratically- and a plutocratically-weighted price index? How have changes in workforce composition affected average wage growth? In all of these cases, economists work alongside methodologists and statisticians to draw grounded, intellectually-robust conclusions about our approach.
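(An editorial aside, not part of the interview: a stylised definition of the two price-index concepts mentioned above, where each household h has inflation rate π_h and expenditure e_h.)

```latex
% Stylised definitions (editorial addition). pi_h: household h's inflation
% rate; e_h: household h's expenditure; H: number of households.
\begin{align*}
  \text{Plutocratic index:} \quad & \pi^P = \sum_{h=1}^{H} s_h\, \pi_h,
      \qquad s_h = \frac{e_h}{\sum_{j=1}^{H} e_j} \\
  \text{Democratic index:} \quad & \pi^D = \frac{1}{H} \sum_{h=1}^{H} \pi_h
\end{align*}
% The plutocratic index weights each household by its share of total spending;
% the democratic index gives every household equal weight.
```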

Finally, ONS economists are also involved in supporting the corporate functions at ONS. We work on business cases – seeking to value the costs and benefits of big projects – as well as modelling the implications of changes in pay rates for the department’s budget. In all of these roles, the skill set that economists bring is highly valued, and our numbers have grown as a result.

What are the main challenges you face as an economist at the ONS?

One of the biggest challenges that economists face – both at ONS and elsewhere – is communicating our findings in a clear, concise and accessible form. Even when speaking to other, technically minded colleagues, communicating the findings of a piece of analysis clearly can be a real challenge – especially if it is at the more complicated end of the spectrum. Communicating our findings to a non-technical user-base or – occasionally – members of the media, can be even more difficult.

In my experience, good preparation is central to achieving a good result. I try to pitch my work in a clear, intuitive manner, using thought-through and interesting graphics to help users to understand the questions that I’m posing and the answers that my analysis provides. Above all, I try to weave a narrative through my work – helping the audience to understand what the broader picture is, as well as our detailed findings. Clear communication is especially important when you work for an institution like ONS – where users are looking for clear messages on recent developments. Expert knowledge is certainly very important and the world is a complicated place – but communicating difficult concepts in a clear and intuitive manner is a key skill for economists in all walks of life.

In your opinion, what makes a good professional/public sector economist? What does the ONS look for in economists?

Besides good communication skills, a good economist needs solid data skills, an enquiring and inquisitive mind, and the capability to bring together and synthesise information from a range of different sources within a coherent framework.

The best economists that I’ve worked with at ONS have all of these attributes. Confronted with a dataset, they’re interested in exploring it and visualising it in different ways. They ask questions about what the data can tell us about the way that different agents are working in the economy, and whether that accords with our intuition or conceptual approach. They explore how the dataset was constructed, what limitations this imposes and think about how applicable the survey results are for the population as a whole. They deploy their skills of data-analysis and their economic theory to explain what is going on, and they build intuitive examples and graphics into their work to communicate their meaning to others. They have a real desire to learn about the economy through the data that we collect and they have a clear interest in helping others to understand what is going on.

These economists tend to be enthusiastic, inquiring and curious in their job, with a voracious appetite for detail and knowledge. They have one eye on the datasets, another eye on their methodology and theory, and a clear line of sight to an issue central to economic policy. It’s this blend of economic skills that makes a good economist, and that’s what I’m looking for when I interview prospective ONS economists.

Do you have any advice for the current generation of economics students?

I’m wary of giving advice – as one size rarely fits all – but I think there are a lot of interesting developments taking place in economics that students should look to exploit. Firstly, the economic downturn and the financial crisis created an appetite to revisit what had become established theoretical tenets of the field and the nature of our modelling. A sceptical but fair-minded assessment of the models and approaches that you learn is really important.

Secondly, as the field has progressed, the gulf between theoreticians and empiricists has grown very large, with relatively few economists now able to bridge the gap between the frontier of theory and the frontier of measurement. Understanding the strengths and weaknesses of both is central – and will be more so in the future. As more data becomes available, having both a decent theoretical foundation and a strong set of applied empirical skills will be critical.

Finally, if I had my time again, I’d want to make the very most of the opportunity that studying offers. Go to the talk that you’re interested in; speak to the lecturer afterwards; think critically about how you would extend someone else’s analysis and try and come up with ideas of your own. It’s a great time, enjoy it!

#ICYMI on the BGSE Data Science blog: Interview with Ioannis Kosmidis


In this series we interview professors and contributors to our fields of passion: computational statistics, data science and machine learning. The first interviewee is Dr. Ioannis Kosmidis. He is a Senior Lecturer at University College London, and during our first term he taught a workshop on GLMs and statistical learning in the n >> p setting. Furthermore, his research interests also focus on data-analytic applications in sport science and health… read the full post by Robert Lange ’17 and Nandan Rao ’17 on Barcelona GSE Data Scientists

Convenience Effect on Birth Timing Manipulation: Evidence from Brazil

According to the United Nations Children’s Fund, Brazil ranked first among 139 countries, with the highest cesarean section rate in the world for the period 2007-2012.[1] In 2009, the number of surgical births surpassed vaginal deliveries. During 2012-2014, cesarean delivery (CD) accounted for 57% of all registered births in the country. A second medical intervention, less invasive but still invasive, is labor induction: a technique used to bring on or speed up contractions and thus bring forward vaginal births. For the period 2012-2014, 33% of all registered normal deliveries in the country occurred after induced labor. Therefore, only 29 out of 100 births in Brazil were natural births, that is, spontaneous (non-induced) vaginal deliveries.[2]

Such medical interventions (CD and labor induction) allow for manipulation of the timing of birth. Although birth timing can be altered for medical reasons (e.g., when labor could be dangerously stressful or in the case of post-term pregnancies), the existing evidence suggests that it is also manipulated for reasons other than the health of the fetus or of the mother. Mothers’ incentives to intervene in the timing of their deliveries are usually financial when compensation is involved, such as baby bonuses (Gans and Leigh, 2009) or tax savings (Dickert-Conlin and Chandra, 1999), or even related to cultural issues (Lo, 2003). Doctors’ incentives tend to be driven by risk aversion (Fabbri et al., 2015) or convenience (Gans et al., 2007).

As a CD can be scheduled for medical reasons, a concentration of scheduled CDs at convenient moments does not by itself constitute evidence that deliveries are being scheduled out of convenience. However, since complications during delivery that require an emergency CD should be randomly distributed across time, a concentration of unplanned CDs at convenient times indicates that reasons other than medical protocol are playing a role. Brown (1996) and Lefèvre (2014) provide evidence on this matter. Both papers suggest that physicians induce CDs in the labor room at convenient moments. Thus, physicians’ convenience motivations, as well as other incentives correlated with convenient moments, could be at play.

Convenient times usually coincide with times when it might be safer to deliver: it is during non-leisure days and usual business hours that hospital staffing is at its fullest and medical staff are freshest. If this is the case, then doctors who are risk-averse or altruistic might prefer to allocate complex deliveries to those moments when risk can be minimized. Fabbri et al. (2015) provide evidence of risk-averse attitudes in a sample of women admitted at the onset of labor in a public hospital in Italy.

In my thesis from UFRJ, I tested whether convenience effects play any relevant role in birth-timing manipulation in Brazil. More specifically, I investigated whether births that would have occurred after spontaneous labor during inconvenient times are brought forward to convenient times. I adopted several strategies in order to isolate the convenience effect from potential risk-aversion attitudes.

First, I used a new type of inconvenient day that may attenuate risk-aversion attitudes in manipulating the timing of births: business days in between holidays. As these are business days, hospitals should be fully staffed. However, risk-averse physicians may still manipulate the timing of births in order to eliminate the possibility of women going into labor spontaneously on the surrounding leisure days. Second, I analyzed the results by hospital funding. Publicly funded hospitals provide a context in which women do not actively participate in the decision-making process, which enabled me to attribute the results to physicians. Third, I further investigated the results by level of risk. While birth-timing manipulation motivated by convenience should occur mostly among low-risk births, timing manipulation guided by risk aversion should be concentrated in high-risk births, as in the latter case the goal is to minimize the risk of low-quality hospital services.

Using daily data on birth records, I constructed a daily panel of the number of deliveries by hospital for the period 2012-2014, with information on hospitals, deliveries (e.g. type of birth procedure and nature of labor), pregnancies, mothers and newborns. Having classified births as low-risk or high-risk according to observable variables (e.g. mother’s age below 18 or above 35, multiple pregnancy, newborn with a congenital anomaly), I ended up with daily panels of the number of high- and low-risk deliveries by hospital.

As my goal was to understand whether births that would have occurred after spontaneous labor were brought forward, I ran regressions of the number of births after spontaneous labor on an indicator for days in between holidays. I found a significant negative effect, which suggests that either convenience or risk-aversion motivations were playing a role. I then verified that the results were robust in the restricted sample of publicly funded hospitals, and hence attributed them to physicians’ motivations. Finally, I further restricted the sample to low-risk births and re-estimated the results. The fact that the findings were driven by low-risk deliveries provided further evidence that births were being brought forward because of physicians’ convenience. Moreover, I ran the same regressions for the days preceding the leisure period and found an increase in cesarean sections, which reinforces the previous result that births that would otherwise have happened after spontaneous labor occurred instead through scheduled cesarean sections.
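To make the empirical strategy concrete, here is a minimal sketch of the kind of panel regression described above. It is illustrative only, not the thesis code: the file, the column names (n_spontaneous, between_holidays, hospital_id) and the fixed-effects specification are hypothetical.

```python
# Illustrative sketch only -- not the thesis code. Assumes a hypothetical CSV
# with one row per hospital-day and columns: hospital_id, date, n_spontaneous
# (births after spontaneous labor) and between_holidays (1 for a business day
# falling between holidays, 0 otherwise).
import pandas as pd
import statsmodels.formula.api as smf

births = pd.read_csv("daily_births_panel.csv", parse_dates=["date"])
births["dow"] = births["date"].dt.dayofweek
births["month"] = births["date"].dt.to_period("M").astype(str)

# Regress the daily count of spontaneous-labor births on the in-between-
# holidays dummy, with hospital, day-of-week and month fixed effects.
# A negative coefficient on `between_holidays` suggests that such births
# are shifted away from these inconvenient days.
model = smf.ols(
    "n_spontaneous ~ between_holidays + C(hospital_id) + C(dow) + C(month)",
    data=births,
).fit(cov_type="cluster", cov_kwds={"groups": births["hospital_id"]})

print(model.params["between_holidays"], model.bse["between_holidays"])
```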

 

References

[1] http://data.un.org/Data.aspx?q=cesarean&d=SOWC&f=inID%3a219

[2] CD rates extracted from the Brazilian National System of Information on Birth Records (Datasus/SINASC).

Borra, C., González, L.; Sevilla, A. Birth timing and neonatal health. The American Economic Review, v. 106, n. 5, p. 329-332, 2016.

Borra, C., González, L.; Sevilla, A. The impact of scheduling birth early on infant health. Working Paper presented at Tinbergen Institute, 2016.

Gans, J.S.; Leigh, A. Born on the first of July: An (un)natural experiment in birth timing. Journal of Public Economics, v. 93, n. 1-2, p. 246-263, 2009.

Dickert-Conlin, S.; Chandra A. Taxes and the timing of births. Journal of Political Economy, v. 107, n. 1, p. 161-177, 1999.

Fabbri, D.; Castaldini, I.; Monfardini, C.; Protonotari, A. Caesarean section and the manipulation of exact delivery time. HEDG working paper n. 15, University of York, 2015.

Gans, J.S.; Leigh, A.; Varganova, E. Minding the shop: The case of obstetrics conferences. Social Science and Medicine, v. 6, n. 7, p. 1458-1465, 2007.

Brown, H.S. Physician demand for leisure: Implications for cesarean section rates. Journal of Health Economics, v.15, p. 233-242, 1996.

Lefèvre, M. Physician induced demand for C-sections: does the convenience incentive matter? HEDG working paper n. 14, University of York, 2014.

Why Are Negative Interest Rates Failing? An Analysis of the Swiss Case

Miguel Alquezar and Gabriel Bracons (both Economics ’17 and alumni of UPF Undergraduate Program in Economics) present their Bachelor thesis research work, supervised by Barcelona GSE Affiliated Professor Luca Fornaro.

Standard macroeconomic theory tells us that a reduction in interest rates, even into negative territory, should lead to an increase in consumption and investment through the real interest rate channel, by easing liquidity constraints on firms and households and reducing the rate at which future returns are discounted. On the currency market, such a measure should discourage capital inflows, leading to lower demand for the currency and thus its depreciation. However, the policy also erodes the profitability of banks, as it reduces their net interest margins and hence increases their own default risk. Furthermore, anomalies in the valuation of returns and payment streams usually put pressure on financial institutions to redesign how financial transactions work. Finally, as this policy was unconventional, uncertainty has been a key factor in the short run, increasing the volatility of outcomes and undermining much of the policy’s effectiveness.
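The real interest rate channel mentioned above can be summarised by the textbook consumption Euler equation (a stylised illustration, not the authors’ model):

```latex
% Textbook consumption Euler equation with CRRA utility (coefficient sigma)
% and real interest rate r_t; a stylised illustration only.
\begin{align*}
  u'(c_t) &= \beta\,(1 + r_t)\, u'(c_{t+1}) \\
  \frac{c_{t+1}}{c_t} &= \bigl[\beta\,(1 + r_t)\bigr]^{1/\sigma}
\end{align*}
% A lower (even negative) r_t flattens the desired consumption path, shifting
% consumption towards the present -- the real interest rate channel.
```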

In our attempt to see to what extent theory matched reality, we examined the main economic indicators of the Swiss economy and found that real variables such as real GDP, the unemployment rate, money velocity and loan rates were not affected by the negative interest rate policy (see charts above). But this monetary policy was not neutral either, as other monetary variables such as currency in circulation (M2) and the exchange rate with the euro have been negatively affected. Theory was also proven wrong by savings deposits, which actually increased (see charts below). This adds to the evidence that negative interest rates have failed to achieve their expected benefits, mainly due to the increase in uncertainty generated by this “unconventional” policy. Remarkably, Swiss banks’ balance sheets show that even under negative interest rates, deposits have not sunk and lending has been increasing steadily, especially in mortgages. The only variable that seems to be starting to react is inflation, which is expected to be around 1% in 2017.

Conclusions

According to the theoretical implications presented above, the real economy and inflation should improve, along with the whole set of variables related to them, triggering a vigorous recovery. However, the real economy has not reacted to negative interest rates as expected, and the real variables remain weak by Swiss standards. Moreover, nominal variables and other indicators such as money velocity, the money multiplier and loan growth have also not reacted as predicted. The fact that banks do not charge their clients appears to be the main problem: because they do not pass on the negative interest, they accumulate more risk without further profits. This means that, in the long run, the banking sector has to find a different way to offset the losses that come from negative interest rates. To address this situation, a set of new policies has been proposed, ranging from penalising banks that do not lend by charging them differentiated interest rates, to giving banks strong incentives to lend some share of their assets in exchange for very generous conditions. Additionally, some changes to the basic model have been suggested to reduce the gap between theory and reality. Further work on the adoption of these new policies and changes to the model should therefore be a fruitful line of research.

Want to learn more about this very interesting and topical issue? Consult the full research paper here: Why are negative interest rates failing?

Is the fat tax really effective?

Recently, some European countries, including Spain, have considered introducing new taxes to reduce their budget deficits. Among the proposed measures is a fat tax, which aims not only to increase revenues but also to reduce junk food consumption, and thereby obesity rates and the associated health costs.

Although this is what theory would lead us to expect, the empirical evidence paints a different picture. Firstly, the tax is not fully reflected in the prices of the products that have been targeted. In fact, in the countries where the tax has been imposed, supermarkets and other food stores have adjusted the prices and margins of other products in an attempt to avoid raising the prices of the products subject to taxation. For example, some firms have increased the prices of substitutes (e.g. low-calorie sodas) while spreading the higher cost of taxed food onto other foods or own-branded products that typically have higher profit margins. Another interesting reaction on the supply side has been that producers have replaced taxed products with untaxed products that are just as unhealthy. Additionally, they have reduced the content of some taxed ingredients to just below the legal threshold in order to avoid the fat tax. These tricks have clearly diminished the effectiveness of the tax.

On the demand side, we have also seen interesting reactions. Firstly, it is important to note that people who consume more junk food are less responsive to a price increase than moderate consumers, so the fat tax does not actually reach its most important target audience. Furthermore, there is an important substitution effect: as products have, on the whole, become more expensive, consumers have switched to cheaper goods of lower quality.
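A back-of-the-envelope illustration, with hypothetical elasticities, shows why the first point matters:

```latex
% Hypothetical numbers for illustration only. The proportional change in
% consumption is roughly the own-price elasticity times the proportional
% price change:
\[
  \frac{\Delta q}{q} \;\approx\; \varepsilon \,\frac{\Delta p}{p}
\]
% With a 10% tax-induced price rise, a heavy consumer with elasticity -0.2
% cuts consumption by only about 2%, while a moderate consumer with
% elasticity -1 cuts it by about 10%: the tax bites least where the
% potential health gains are largest.
```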

According to different studies, the fat tax has not achieved its desired outcomes, and there has been no significant reduction in obesity rates. Although some studies claim that a higher tax of at least 20% is needed to produce a sizeable decrease in obesity rates, it is unlikely that this policy alone would be effective enough. Moreover, it is a highly regressive tax, because poorer people spend relatively more on less healthy foods. In view of this, we should instead study the introduction of other interventions, such as healthy food subsidies, campaigns promoting a healthy diet, health education at school to curb obesity through moral suasion from an early age and, albeit a very unpopular one, taxing people according to their body mass index.

References:

A. Leicester and F. Windmeijer. The ‘fat tax’: economic incentives to reduce obesity. Institute for Fiscal Studies.

ECORYS (2014). Food taxes and their impact on competitiveness in the agri-food sector. ECSIP

Frank, S.M. Grandi and M.J. Eisenberg (2013). Taxing junk food to counter obesity. American Journal of Public Health.

Cornelsen, R. Green, A. Dangour and R. Smith (2014). Why fat taxes won’t make us thin. Journal of Public Health.

#ICYMI on the BGSE Data Science blog: Randomized Numerical Linear Algebra (RandNLA) For Least Squares: A Brief Introduction

Dimensionality reduction is a topic that has governed our (the 2017 BGSE Data Science cohort’s) last three months. At the heart of topics such as penalized likelihood estimation (Lasso, Ridge, Elastic Net, etc.), principal component analysis and best subset selection lies the fundamental trade-off between complexity, generalizability and computational feasibility.

David Rossell taught us that even if we have found a methodology to compare across models, there is still the problem of enumerating all models to be compared… read the full post by Robert Lange ’17 on Barcelona GSE Data Scientists
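For readers curious about what randomized numerical linear algebra looks like for least squares, here is a minimal sketch of a Gaussian-sketched solve. It is not code from the post; the problem sizes and the plain Gaussian sketching matrix are illustrative assumptions.

```python
# Minimal illustration of a randomized (sketched) least-squares solve --
# not code from the post. Problem sizes and the Gaussian sketch are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 10_000, 50, 500          # tall-and-skinny problem; sketch size m << n
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Gaussian sketch: compress the n rows into m random linear combinations,
# then solve the much smaller least-squares problem.
S = rng.standard_normal((m, n)) / np.sqrt(m)
beta_sketch, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)

# Compare with the exact solution on the full data.
beta_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
print("approximation error:", np.linalg.norm(beta_sketch - beta_exact))
```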

TEDxGothenburg: Money talks – but where does it come from? (Sascha Buetzer ’11)



Barcelona GSE Macroeconomics and Financial Markets alum Sascha Buetzer ’11 is currently a PhD candidate in Economics at the University of Munich. After starting his career at the European Central Bank, he has been working as an economist in the International Monetary Affairs Division at the Deutsche Bundesbank, primarily in an advisory role for the German executive director at the International Monetary Fund.

In this post, Sascha shares his experience of giving a talk at TEDx Gothenburg in November 2016. The video is included below.



Last November I was given the unique opportunity to present my ideas to an audience that usually doesn’t get to hear much about the inner workings of the financial system and central banking. At the TEDx Conference “Spectrum” in Gothenburg, Sweden, I was one of 10 speakers who were given not more than 20 minutes to convey an “idea worth spreading.”

My talk centered on how money is created in a modern economy and what can be done to improve this process, in particular in the context of the euro crisis. Quite a challenge, as it turned out, given the short amount of time and the need to keep things understandable and entertaining, since most of the audience did not have an economics background.

It was, however, an extremely exciting and rewarding experience.

After getting in touch with the organizers through a chance encounter at the Annual Meeting of the Asian Development Bank (ADB) last year, all speakers got an individually assigned coach who provided excellent feedback and recommendations for the talk.

The day of the conference itself was thoroughly enjoyable (at least after the initial rush of adrenaline had subsided). It was fascinating to interact with people from widely varying backgrounds who all shared a natural curiosity and desire to learn from each other. And at the end of the day it felt great to have been able to contribute to this by providing people with insight into a topic so important, yet so little known.

Watch the talk here:

See also:
– Coverage of the talk by University of Gothenburg
– More coverage from University of Gothenburg
– Other talks from TEDxGothenburg “Spectrum” 2016

#ICYMI on the BGSE Data Science blog: Covariance matrix estimation, a challenging research topic

Covariance matrix estimation is a challenging topic in many research fields. The sample covariance matrix can perform poorly in many circumstances, especially when the number of variables is approximately equal to, or greater than, the number of observations. Moreover, when the precision matrix is the object of interest, the sample covariance matrix might not be positive definite, and more robust estimators must be used.

In this article I will try to give a brief (and non-comprehensive) overview of some of the topics in this research field. In particular, I will describe Steinian shrinkage, covariance matrix selection through penalized likelihood, and the graphical lasso, complementing the description with some potential extensions of these methodologies… Read the full post by Davide Viviano ’17 on Barcelona GSE Data Scientists
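As a quick, illustrative complement (not code from the post), two of the estimators mentioned above are available in scikit-learn; here is a minimal sketch on simulated data:

```python
# Illustrative complement to the post (not its code): Ledoit-Wolf
# (Steinian-type) shrinkage and the graphical lasso via scikit-learn,
# on simulated data with few observations relative to variables.
import numpy as np
from sklearn.covariance import LedoitWolf, GraphicalLasso

rng = np.random.default_rng(0)
n, p = 60, 40
X = rng.standard_normal((n, p))

sample_cov = np.cov(X, rowvar=False)      # tends to be ill-conditioned when p ~ n
lw = LedoitWolf().fit(X)                  # shrinks towards a scaled identity
gl = GraphicalLasso(alpha=0.5).fit(X)     # penalized likelihood, sparse precision

print("Ledoit-Wolf shrinkage intensity:", lw.shrinkage_)
print("condition numbers (sample, LW, GLasso):",
      np.linalg.cond(sample_cov),
      np.linalg.cond(lw.covariance_),
      np.linalg.cond(gl.covariance_))
```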

Optimal density forecast combinations (Unicredit & Universities Job Market Best Paper Award)


Editor’s note: In this post, Greg Ganics (Economics ’12 and PhD candidate at UPF-GPEFM) provides a non-technical summary of his job market paper, “Optimal density forecast combinations,” which has won the 2016 UniCredit & Universities Economics Job Market Best Paper Award.


After the recent Great Recession, major economies found themselves in a situation with low interest rates and fragile economic growth. This combination, along with major political changes in key countries (the US and the UK), makes forecasting more difficult and uncertain. As a consequence, policy makers and researchers have become more interested in density forecasts, which provide a measure of uncertainty around point forecasts (for a non-technical overview of density forecasts, see Rossi (2014)). This facilitates communication between researchers, policy makers, and the wider public. Well-known examples include the fan charts of the Bank of England, and the Surveys of Professional Forecasters of the Philadelphia Fed and the European Central Bank.

BOE fan chart. Source: Bank of England Inflation Report, November 2016

Forecasters often use a variety of models to generate density forecasts. Naturally, these forecasts are different, and therefore researchers face the question: how shall we combine these predictions? While there is an extensive literature on both the theoretical and practical aspects of combinations of point forecasts, our knowledge is rather limited about how density forecasts should be combined.

In my job market paper “Optimal density forecast combinations,” I propose a method that answers this question. My main contribution is a consistent estimator of combination weights, which can be used to produce a combined predictive density that is superior to the individual models’ forecasts. This framework is general enough to include a wide range of forecasting methods, from judgmental forecasts to structural and non-structural models. Furthermore, the estimated weights provide information on the individual models’ performance over time. This time-variation could further enhance researchers’ and policy makers’ understanding of the relevant drivers of key economic variables, such as GDP growth or unemployment.
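The paper’s estimator itself is beyond the scope of a blog post, but the flavour of density forecast combination can be conveyed with a toy example: choose the weight on two Gaussian predictive densities that maximizes the average log score on past data. Everything in the sketch (the data and the two densities in a simple linear pool) is hypothetical and far simpler than the paper’s framework.

```python
# Toy illustration of density forecast combination -- not the estimator in
# the job market paper. Data and the two predictive densities are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=200)        # realized values of the target variable

# Two competing Gaussian predictive densities, evaluated at the realizations.
dens_a = norm.pdf(y, loc=0.0, scale=1.0)
dens_b = norm.pdf(y, loc=1.0, scale=2.0)

def neg_avg_log_score(w):
    # Linear ("opinion pool") combination with weight w on density A.
    return -np.mean(np.log(w * dens_a + (1.0 - w) * dens_b))

res = minimize_scalar(neg_avg_log_score, bounds=(0.0, 1.0), method="bounded")
print("estimated weight on density A:", round(res.x, 3))
```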

Macroeconomists in academia and at central banks often pay special attention to industrial production, as this variable is available at a monthly frequency and can therefore signal booms and busts in a timely manner. In an empirical example of forecasting monthly US industrial production, I demonstrate that my novel methodology delivers density forecasts that outperform well-known benchmarks, such as the equal-weights scheme. Moreover, I show that housing permits had valuable predictive power before and after the Great Recession. Furthermore, stock returns and corporate bond spreads proved to be useful predictors during the recent crisis, suggesting that financial variables help with density forecasting in a highly leveraged economy.

The methodology I propose in my job market paper can be useful in a wide range of applications, for example in macroeconomics and finance, and offers several avenues for further research, both theoretical and applied.

References:

Ganics, G. (2016): Optimal density forecast combinations. Job market paper

Rossi, B. (2014): Density forecasts in economics and policymaking. Els Opuscles del CREI, No. 37