UK News Shocks and Business Cycles

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Authors: 
Jorge Meliveo and Willy Scherrieble

Master’s Program:
Macroeconomic Policy and Financial Markets

Paper Abstract:

In this paper we use a structural Factor Augmented VAR (FAVAR) approach to estimate the effects of news shocks in a new institutional setting: the United Kingdom. We define news shocks as the stock price shock orthogonal to TFP that maximizes the forecast error variance of TFP at the 40-quarter horizon. We find that news shocks account for around 18–45% of the variance in output at business cycle frequencies. Furthermore, the predictions of our estimation are in line with those of standard neoclassical business cycle theories, i.e., following a positive news shock, agents increase both consumption and leisure, hence reducing the number of hours worked. Our contribution is twofold: First, we enlarge the geographical scope of the news shock literature by considering a new dataset for the UK. This is important since all major studies so far have focused exclusively on the US economy. Second, we address the problem of non-fundamentalness by comparing a VAR and a FAVAR approach. We find that adding factors to the VAR changes the results and generates negative co-movement between hours worked and consumption on impact. Furthermore, our results are in line with the findings of Barsky and Sims (2011) and Forni, Gambetti, and Sala (2014) for the US.
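For readers unfamiliar with this identification scheme, here is a sketch in the style of Barsky and Sims (2011); the notation is ours, not the authors'. Writing Ω_TFP(h, γ) for the share of the h-step-ahead forecast error variance of TFP attributed to a candidate shock with impact vector γ, the news shock solves

\[ \gamma^{\ast} = \arg\max_{\gamma}\ \Omega_{\mathrm{TFP}}(40, \gamma) \qquad \text{s.t.} \qquad \gamma'\gamma = 1, \quad \gamma_{1} = 0, \]

where the constraint γ₁ = 0 imposes orthogonality to the contemporaneous TFP shock and the 40-quarter horizon matches the definition in the abstract.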

Presentation Slides:

[slideshare id=50791122&doc=uk-news-business-cycles-150722070243-lva1-app6892]

Predicting gender disparities in attitudes towards intimate partner violence against women: a case study from Rwanda

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Authors: 
Mette Albèr, Ming Yu Wong, Urša Krenk & Stan deRuijter

Master’s Program:
Economics

Paper Abstract:

This paper examines the factors associated with gender disparities in attitudes towards intimate partner violence against women (IPVAW) at the regional and household level using data from the 2010 Demographic and Health Survey (DHS) in Rwanda. An OLS regression model was used at the regional level, while multivariate logistic regression models were fitted at the household level. The results show that women’s education level and women’s TV-viewing frequency are significant and consistent predictors of gender disparities at the household level, with sizeable marginal effects. More generally, many factors beyond national- and regional-level characteristics account for variation in IPVAW acceptance across genders, suggesting that more granular and sophisticated modes of analysis can help to determine the true nature of the relationships between individual- and household-level factors and attitudes towards IPVAW.
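As a rough illustration of the household-level specification, a logistic regression along the lines the abstract describes might look like the sketch below. The variable names are hypothetical placeholders, not the actual DHS codings, and the controls are illustrative only.

    # Sketch: household-level logistic regression for a gender-disparity
    # indicator in IPVAW acceptance. All variable names are hypothetical
    # placeholders, not the 2010 Rwanda DHS codings.
    fit <- glm(disparity ~ wife_educ + wife_tv_freq + wealth + region,
               family = binomial(link = "logit"),
               data = dhs)   # 'dhs' would hold the couples-level survey file
    summary(fit)      # coefficients on the log-odds scale
    exp(coef(fit))    # odds ratios, often easier to interpret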

Presentation Slides:

[slideshare id=50916638&doc=predicting-gender-disparities-violence-150725112951-lva1-app6892]

What can the risk neutral moments tell us about future returns?

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Authors:
Juan Imbet, Nuria Mata

Master’s Program:
Finance

Paper Abstract:

We test whether the first four moments of the risk-neutral distribution implicit in options’ prices predict market returns. We estimate the risk-neutral distribution of the S&P 500 over different frequencies using a nonparametric polynomial fitting, and test whether the first four moments of the distribution predict returns of the S&P 500. Our results suggest that there is no evidence of such predictive power.
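As an illustration of the estimation step, here is a toy version of recovering a risk-neutral density from call prices via polynomial fitting and the Breeden–Litzenberger relation q(K) = e^{rτ} ∂²C/∂K². This is our own sketch under simplifying assumptions (a single maturity, a clean strike grid); the inputs 'strikes', 'calls', 'r' and 'tau' are hypothetical, and the authors' implementation may differ.

    # Toy sketch: risk-neutral density and its first four moments from call
    # prices at one maturity. Inputs ('strikes', 'calls', 'r', 'tau') are
    # hypothetical placeholders.
    fit  <- lm(calls ~ poly(strikes, 6))            # smooth prices in the strike
    grid <- seq(min(strikes), max(strikes), length.out = 500)
    C    <- predict(fit, newdata = data.frame(strikes = grid))
    dK   <- grid[2] - grid[1]
    q    <- exp(r * tau) * diff(C, differences = 2) / dK^2  # discrete 2nd derivative
    q    <- pmax(q, 0); q <- q / sum(q * dK)        # crude cleanup to a density
    K    <- grid[2:(length(grid) - 1)]

    m1 <- sum(K * q * dK)                           # risk-neutral mean
    m2 <- sum((K - m1)^2 * q * dK)                  # variance
    sk <- sum((K - m1)^3 * q * dK) / m2^1.5         # skewness
    ku <- sum((K - m1)^4 * q * dK) / m2^2           # kurtosis

The predictability test would then regress subsequent S&P 500 returns on these four moments.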

Presentation Slides:

[slideshare id=50497458&doc=risk-neutral-future-returns-150714070303-lva1-app6891]

The pass-through of United States monetary policy to emerging markets: evidence at the firm level from India

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.



Authors: 
Ana Arencibia Pareja, Marina Conesa Martínez, Iuliia Litvinenko, and Ruth Llovet Montañés

Master’s Program:
International Trade, Finance and Development

Paper Abstract:

This paper evaluates the reaction of Indian firms’ equity prices to U.S. monetary policy changes during the period from 2005 to 2015, limiting the analysis to the days on which monetary policy announcements took place. We find that a one percentage point permanent increase in the Fed funds rate is associated with a 0.09% drop in equity prices, an association that is large in economic terms. Results also show that the response of Indian companies is not homogeneous. For instance, we find that equity prices of companies with higher capital over total assets react less than those of firms with low capital levels, given the same U.S. interest rate increase. Moreover, larger firms, proxied by the number of employees, are less affected by U.S. monetary tightening. The same conclusions hold when using EBIT over interest expenses, cash flows over sales, and dividends per share. In addition, we show that stocks respond much more strongly to monetary shocks in periods of contractionary interventions and higher global risk aversion. We propose that firms be better capitalized by holding more equity relative to loans and relying less on banks’ short-term external debt. Finally, we also recommend that they hold more liquidity, which goes in line with having a larger EBIT and bigger cash flows. More broadly, we conclude that advanced economies should promote greater international policy cooperation and communicate their monetary policy intentions, which would reduce the risk of large market volatility in emerging economies.
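A stylized version of the announcement-day regression might look like the sketch below. The variable names and controls are our own placeholders; the paper's exact surprise measure, sample construction, and fixed effects may differ.

    # Sketch: announcement-day event study with firm heterogeneity. 'ret' is
    # a firm's equity return on an FOMC announcement day, 'ffr_change' the
    # change in the Fed funds rate, 'capital_ratio' capital over total assets.
    # All names are hypothetical placeholders.
    library(plm)
    est <- plm(ret ~ ffr_change * capital_ratio + ffr_change * log(employees),
               data = events, index = c("firm", "date"), model = "within")
    summary(est)   # positive interactions would mean better-capitalized and
                   # larger firms react less to the same rate increase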

Presentation Slides:

[slideshare id=50496909&doc=us-monetary-policy-emerging-markets-150714064158-lva1-app6892]

What’s behind a number? Information systems and the road to universal health coverage

Adam Aten ’13 (Health Economics and Policy) is a researcher at The Brookings Institution focusing on evidence development and biomedical innovation within the Center for Health Policy. Prior to joining Brookings, he was a civil servant at the U.S. Department of Health and Human Services, developing policy expertise in health insurance for low-income populations, digital information systems and information governance, and cost effectiveness of public health programs.

This week he has written a post for the World Bank’s Investing in Health blog on universal health coverage (UHC). Here are some excerpts:

Decision-makers now have many tools at their disposal to analyze trends and take strategic decisions – increasingly in real-time – thanks to the rapid diffusion and adoption of information and communications technologies. New approaches to collect, manage and analyze data to improve health systems learning, such as how the poor are benefitting (or not) from health care services, are helping to ensure the right care is given to the right patient at the right time, every time – the goal of UHC.

It is relatively easy to agree on public health targets, but actual progress requires a management structure supported by dashboards that can allow monitoring of intermediate outcomes in real-time.

Read the full post on the World Bank’s health blog: What’s behind a number? Information systems and the road to universal health coverage

For those interested in current health policy topics, Mr. Aten is also a chapter co-author of the recently published WB/PAHO book, Toward Universal Health Coverage and Equity in Latin America and the Caribbean: Evidence from Selected Countries.

Local Agency Costs of Political Centralization – A lecture by Roger Myerson

by Genevieve Jeffrey ’15, current student in the Economics of Public Policy program.


Nobel Laureate Roger Myerson presented his latest paper, ‘Local Agency Costs of Political Centralization’, to the Barcelona GSE community on the 3rd of June.

The central focus of the paper is to examine how efficient the widely accepted notion of political centralization really is.

Some existing literature argues that political decentralization and community empowerment may be the key to successful development, showing how autonomous local governments can reduce entry barriers in national politics. Creating space for success on a smaller scale can turn local leaders into strong candidates for higher office, and the decentralization of power also gives them a stake in nation building.

Others argue that a single centralized government could achieve the same effects by applying differentiated policies that account for regional differences.


The need for policies targeted to the differing dynamics of different regions is clear, but the way to achieve this is still being explored. Myerson’s contribution in this paper is to show that achieving efficient local public investment requires accountability, which is best achieved with decentralization of power.

The mechanism he uses to analyze this issue is a model of moral hazard in local public services in which an efficient solution is only feasible when officials are held accountable to local voters.

Political Decentralization can lead to Accountability

If the quality of local public services can only be observed by local residents, then officials will not be accountable unless residents have power over their careers.

If local residents do not have the power to dismiss officials, officials’ careers will depend more on political relationships than on effectively managing local public services. This has a knock-on effect on development: without good public services, private investment will be scarce, since the success of such investments depends on the quality of those services.

In his talk he expounded on the following example to illustrate his findings. Imagine a remote town in which each resident invests to start an enterprise whose probability of success depends on the amount spent on public services in the town. The only observable evidence that the official spent the money on public services is the number of successes among the residents’ enterprises.

This budget is managed by the local official, who could divert any part of it to his personal consumption. What Myerson shows is that a premium can be paid to the official, in addition to his salary, that incentivizes him to stay honest. This premium constitutes the moral hazard rent and allows the efficient amount of investment to be spent.
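The logic can be captured in a back-of-the-envelope incentive constraint (our simplification, not the paper's exact formulation). Suppose an official can divert a budget b once, while an honest official earns a rent r, on top of his reservation salary, in every period he is retained; with discount factor δ, honesty requires

\[ \frac{r}{1-\delta} \ \ge\ b, \]

i.e., the present value of the moral hazard rents must be at least as large as the temptation to steal. The premium described above is what keeps this inequality satisfied at the efficient level of investment.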

Equilibria

Myerson shows that, for a given budget, if the official is paid a premium ‘p’ whenever a specified fraction of residents report success, then the renewal thresholds and officials’ salaries can be set as functions of the local population ‘n’ in a way that yields the efficient budget and the induced level of public investment.

Distrust and Instability

However, if there is distrust, voters would rather replace an incumbent whom they expect to steal funds in the future. In this case, public investment and residents’ benefits are a decreasing function of the political instability parameter ‘q’. Still, as long as the utility from the induced public investment at the given level of instability is positive, residents can benefit.

Autocracy – the Ruler’s Incentive

In an autocracy, the national ruler can commit to allowing the town to elect an autonomous local government in exchange for a tax, which residents would pay up to the point where their gains from better services are exhausted.

Candidates would pay for these offices, and the moral hazard rents they carry, in cash or political support. The autocrat could offer the offices as rewards for support and in this way ward off challengers.

If the national ruler could credibly commit to allowing an autonomous local government after picking the first incumbent local official, the ruler would gain fiscal and political benefits per resident. But for this to happen, the ruler must make a credible constitutional commitment to the division of power, which would be against his interests ex post.

With local accountability, the ruler would not be able to use the offices as rewards, as this would make him vulnerable to voters’ distrust.

Moreover, if successful, local leaders could become serious contenders for national office, competing against him. This may be politically too costly for the incumbent national leader.

Separation of Information from Influence in an Autocracy

Key supporters are important for the success of any leader. A leader can attract more support when key supporters can monitor how he treats the others, since failing to reward one would cause distrust among all. Competitors who offer no such constraint on their own behavior cannot recruit supporters as successfully.

Since an autocratic ruler is accountable only to these courtiers, they must deter wrongful dismissals, because the ruler could otherwise profit by reselling vacated offices together with their moral hazard rents.

Courtiers can impose political costs on the ruler, but these costs cannot depend on information that the ruler is able to manipulate.

Courtiers could impose a penalty on the ruler that depends on the set of dismissed officials, and they should choose this penalty such that the dismissal set is a small fraction of all offices. If the net service value the ruler gets from replacing an official equals his penalty for dismissal, the ruler will pick the dismissal set based on information about public services. On the other hand, he might choose to dismiss those with the best services, as they may prove to be strong contenders and rivals.

Under democracy too, political leaders need a reputation for rewarding patronage to motivate supporters. Under democracy, local autonomy threatens to increase competitive entry into national politics, against the interest of national officials; Pakistan was given as an example of this. National democratic competition also raises the political risk attached to the retention of appointees.

If Local Accountability Leads to Better Public Services, Could Democracy Induce Leaders to Promise It?

An analysis shows that with sequential bids a challenger can win by offering marginally more in most districts and zero where the incumbent spends the most. Contrary to the social optimum, this leads to small offers everywhere.

Endogenous Decentralization in a Unitary Democracy

However, if public budgets are fixed, as in most cases, then candidates can compete on promises of accountability for local officials. An alternative would be for a candidate to sell the office for a political contribution and spend the money obtained on the campaign. This would win over the uninformed voters, while the informed would vote for the best promise. The result thus depends on the proportion of the population that is informed.

Inefficient Centralization in a Unitary Democracy

Citing the example of Ukraine, Myerson also warned that politically neglected regions can be dangerous for a nation’s territorial integrity if neglect gives disaffected regions an opportunity to secede. In an equilibrium of a two-candidate election for national leadership of a unitary state, each candidate would naturally maintain inefficient centralized management of local public services in a fraction of all districts.

In Sum

The key points were that moral hazard in local public investments can be efficiently managed with local accountability. When public services can only be observed by local residents, officials can only be held accountable if their careers depend on the residents’ approval. Political decentralization would guarantee such local power.

However, if the leader can resell offices, then he would not be a neutral judge of local public services. In the specific case of a centralized autocracy, dismissals must be approved by the national elite, and residents cannot communicate their complaints to them; so without a guarantee of local political rights, there can be no credible commitment by the autocratic national government to sustain efficient local investments.

Full Barcelona GSE Lecture by Roger Myerson:

Lucila Mederos ’11 attends Honoris Causa ceremony for Prof. Andreu Mas-Colell at University of Chicago

Alum Lucila Mederos ’11 with Barcelona GSE founder Andreu Mas-Colell in Chicago last week

Last week, I had the pleasure of attending the ceremony where Prof. Andreu Mas-Colell was named Doctor Honoris Causa by the University of Chicago. As an alum of the Barcelona GSE, I felt honored to watch the founder of our school receive this recognition and to represent the alumni community here in Chicago.

After the ceremony I got to join Prof. Mas-Colell, his family, the President Emeritus of the University of Chicago and Chairman of the Barcelona GSE Scientific Council Hugo Sonnenschein, and the other recipients for lunch, where I took this photo. Professor Mas-Colell seemed very grateful to receive the Honoris Causa distinction and to have the GSE represented at the event.


Lucila Mederos ’11 (Master Program in International Trade, Finance, and Development) is Project Manager for Euromonitor International’s LATAM Custom Research Team in Chicago, IL (USA). Follow her on Twitter @lucilamederos

The end of the master’s in two acts

Competition and Market Regulation student Fernando Cota ’15 just finished his master project. Here’s how he celebrated:

https://twitter.com/fer_cota/status/609025041460793345

One of the perks of studying in Barcelona…

Graduation is right around the corner! Stay tuned in July and August for examples of this year’s master projects.

Using H2O for competitive data science

Reposted from the H2O blog


In this special H2O guest blog post, Gaston Besanson and Tim Kreienkamp talk about their experience using H2O for competitive data science. They are both students in the new Master of Data Science Program at the Barcelona Graduate School of Economics and used H2O in an in-class Kaggle competition for their Machine Learning class. Gaston’s team came in second, scoring 0.92838 in overall accuracy, slightly surpassed by Tim’s team with 0.92964, on a subset of the famous “Forest Cover” dataset.

What is your background prior to this challenge?

Tim: We are both students in the Master of Data Science at the Graduate School of Economics in Barcelona. I come from a business background. I took part in a few Kaggle challenges before, but didn’t have a formal machine learning background before this class.

Gaston: I have a mixed background in Economics, Finance and Law, with no prior experience on Kaggle or in machine learning other than Andrew Ng’s online course :).

Could you give a brief introduction to the dataset and the challenges associated with it?

Tim: The good thing about this dataset is that it is relatively “clean” (no missing values, etc.) and small (7 MB of training data). This allows for fast iteration and testing out a couple of different methods and hunches relatively quickly (relatively – a classmate of ours spent $300 on AWS trying to train support vector machines). The main challenge I see is the multiclass nature – this always makes it harder, as basically one has to train 7 models (due to the one-vs-all nature of multiclass classification).

Gaston: Yes, this dataset is a classic on Kaggle: Forest Cover Type Prediction. As Tim said, and adding to it: there are 7 types of trees and 54 features (10 quantitative variables, like Elevation, and 44 binary variables: 4 binary wilderness areas and 40 binary soil type variables). What came to our attention was how highly unbalanced the dataset was: classes 1 and 2 represented 80% of the training data.
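For readers who want to reproduce that check, a minimal sketch using H2O’s R interface; the file path and column name are assumptions based on the standard Kaggle release.

    # Load the training data into H2O and tabulate the response classes.
    # "train.csv" and the Cover_Type column are assumptions based on the
    # standard Kaggle "Forest Cover Type Prediction" release.
    library(h2o)
    h2o.init()
    train <- h2o.importFile("train.csv")
    train$Cover_Type <- as.factor(train$Cover_Type)
    h2o.table(train$Cover_Type)   # classes 1 and 2 dominate the sample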

What feature engineering and preprocessing techniques did you use?

Gaston: Our team added an extra layer to this competition: predicting the type of tree in a region as accurately as possible with the purpose of minimizing fires. Even though we used the same loss for each type of misclassification – in other words, all trees are equally important – we decided to create new features. We created six new variables to try to identify features important to fire risk. And we applied a normalization to all 60 features on both the training and the test sets.

Tim: We included some difference and interaction terms. However, we didn’t scale the numerical features or use any unsupervised dimension reduction techniques. I briefly tried to do supervised feature learning with H2O Deep Learning – it gave me really impressive results in cross-validation, but broke down on the test set.

Editor’s note: L1/L2/dropout regularization or fewer neurons can help avoid overfitting.
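In H2O’s R interface, the regularization the note refers to would look roughly like the sketch below; the hyperparameter values are illustrative, not tuned.

    # Sketch: H2O deep learning with dropout plus L1/L2 penalties to curb
    # the overfitting described above. 'predictors' is a vector of feature
    # names; all hyperparameter values are illustrative assumptions.
    dl <- h2o.deeplearning(x = predictors, y = "Cover_Type",
                           training_frame = train,
                           activation = "RectifierWithDropout",
                           hidden = c(100, 100),   # fewer neurons also helps
                           hidden_dropout_ratios = c(0.2, 0.2),
                           l1 = 1e-5, l2 = 1e-5,
                           nfolds = 5)             # cross-validate the fit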

Which supervised learning algorithms did you try and to what success?

Tim: I tried H2O’s implementations of Gradient Boosting, Random Forest, and Deep Learning (MLP with stochastic gradient descent), and the standard R implementations of SVM and k-NN. k-NN performed poorly, and so did SVM – Deep Learning overfit, as I already mentioned. The tree-based methods both performed very well in our initial tests. We finally settled on Random Forest, since it gave the best results and was faster to train than Gradient Boosting.

Gaston: We tried k-NN, SVM and Random Forest, all from different packages, with not that great results. Finally we used H2O’s implementation of GBM – we ended up using this model because it introduces a lot of freedom into the model design. The model we used had the following attributes: number of trees: 250; maximum depth: 18; minimum rows: 10; shrinkage: 0.1.
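In H2O’s R interface, that model corresponds to roughly the following call. The quoted attributes are from the interview; the surrounding setup reuses the assumed names from the earlier snippet.

    # GBM with the attributes Gaston quotes: 250 trees, depth 18, minimum
    # 10 rows per leaf, shrinkage (learn rate) 0.1. 'predictors' and 'train'
    # are the assumed objects from the earlier loading sketch.
    gbm <- h2o.gbm(x = predictors, y = "Cover_Type",
                   training_frame = train,
                   ntrees = 250, max_depth = 18,
                   min_rows = 10, learn_rate = 0.1)
    pred <- h2o.predict(gbm, test)   # 'test' would be the held-out Kaggle set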

What feature selection techniques did you try?

Tim: We didn’t try anything fancy (like LASSO) for this challenge. Instead, we decided to take advantage of the fact that random forests can compute feature importances. I used this to code my own recursive elimination procedure. At each iteration, a random forest was trained and cross-validated (ten-fold). The feature importances were computed, the worst two features were discarded, and the next iteration began with the remaining features. The resulting cross-validation errors at each stage made up a nice “textbook-like” curve, where the error first decreased with fewer features and at the end increased sharply again. We then chose the set of features that gave the second-best cross-validation error, so as not to overfit through feature selection.
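A compact reconstruction of that loop, again in H2O’s R interface; the helper names are ours, and the error metric is an assumption (any cross-validated error would do).

    # Recursive feature elimination with H2O random forests: cross-validate,
    # record the error, drop the two least important features, repeat.
    # 'predictors' and 'train' are the assumed objects from earlier sketches.
    feats   <- predictors
    history <- list()
    while (length(feats) > 2) {
      rf  <- h2o.randomForest(x = feats, y = "Cover_Type",
                              training_frame = train, nfolds = 10)
      err <- h2o.mean_per_class_error(rf, xval = TRUE)  # ten-fold CV error
      history[[length(history) + 1]] <- list(feats = feats, err = err)
      imp   <- as.data.frame(h2o.varimp(rf))  # sorted by importance
      worst <- tail(imp$variable, 2)          # two least important features
      feats <- setdiff(feats, worst)
    }
    # Then pick the feature set with the second-best CV error, as described
    # above, to avoid overfitting through the selection itself.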

Gaston: Actually, we did not do any feature selection other than removing the variables that did not have any variance – which, if I am not mistaken, was one variable in the original dataset (before feature creation). Nor did we turn the binary variables into categorical ones (one for wilderness areas and one for soil type). We had a naïve approach of sticking with the story of fire risk no matter what; maybe next time we will change the approach.

Why did you use H2O and what were the major benefits?

Tim: We were constrained by our teachers in the sense that we could only use R – that forced me out of my scikit-learn comfort zone. So I looked for something equally accurate and fast. As an occasional Kaggler, I am familiar with Arno’s forum post, so I decided to give H2O a shot – and I didn’t regret it at all. Apart from the nice R interface, the major benefit is the strong parallelization – this way we were able to make the most of our AWS academic grants.

Gaston: I came across H2O just by searching the web and reading about alternatives within R after the GBM package proved really unstable. Just to add to what Tim said, I think H2O will be my weapon of choice in the near future.

For a more detailed description of the methods used and results obtained, see the report of Gaston’s and Tim’s teams.

Attack when the world is not watching? International media and conflicts


By Iacopo Tonini ’15, current student in the International Trade, Finance and Development program. Follow him on Twitter @iacopotoni


On April 8th, Professor Ruben Durante (Sciences Po) visited the Institute of Political Economy and Governance (IPEG) to present his latest research paper, “Attack when the world is not watching? International Media and the Israeli-Palestinian conflict” (coauthored with Ekaterina Zhuravskaya, Paris School of Economics). The title clearly suggests that the article relates to the Israeli-Palestinian conflict; however, the research aims to demonstrate a more general result. Indeed, the objective is to establish a causal relationship between unpopular policy choices and the timing of other “highly relevant” media content.

“I have noticed that my older daughter steals her brother’s toys when we (the parents) are not home,” the Italian professor stated cheerfully while explaining where he had initially gotten the idea for the research. The same logic seems to apply here. In the case of a high-profile conflict such as the Israeli-Palestinian one, it could well be that the factions involved wait for the world to look away before coordinating and ultimately launching a brutal offensive – or interrupting a ceasefire after a period of peace.

Beyond this intuitive example from the professor’s family life, there are more concrete cases. On the very same day that the Italian national team qualified for the World Cup final (July 13, 1994), the Berlusconi government passed an emergency decree – later dubbed the “save-the-thieves” decree – that allowed several politicians accused of corruption to avoid imprisonment and keep working unscathed. Another example is the start of the Russian military operations in Georgia: the Russian army received the order to attack during the opening ceremony of the 2008 Beijing Summer Olympics. Finally, during the 2014 FIFA World Cup, while the German national team was inflicting a historic débâcle on the Seleção at the Mineirão stadium in Belo Horizonte, Israel launched operation “Protective Edge” against Gaza.

News item about an unpopular policy appears below news about the World Cup qualifier in this Italian newspaper (La Stampa).

“Governments are accountable to the extent that the public is informed about their actions. Mass media ensure accountability by informing citizens about government actions (Besley and Prat, 2006; Snyder and Stromberg, 2010),” the professor answered in response to a scholar’s question. “Yet, how effectively mass media inform the public depends, among other things, on the presence of other newsworthy events that may crowd out the news coverage of governments’ actions (Eisensee and Stromberg, 2007).” The assumption on which the study is built – and which the Israeli-Palestinian case seems to demonstrate – is that politicians might exploit the presence of other “newsworthy events” to put forward unpopular actions so that they coincide with major events that distract the audience.

But could this simply be the overly pessimistic vision of the researcher? The data show otherwise. In particular, Professor Durante shows in his work that when the Israeli-Palestinian conflict is given coverage in the media, there is a 12% increase in Google searches related to the topic.

Let us focus on the case studied in the paper: the Israeli-Palestinian conflict.

Since the 1970s, the Israeli government has put great effort into projecting a positive image of Israel and its army abroad. This policy is called Hasbara, which in Hebrew means “explanation”, and entails the production of informative material on issues regarding Israel and the Middle East. Furthermore, Hasbara includes collaborations between international and local journalists, and the use of social media to influence public opinion. Nothing similar is in place in Palestine.

In his article the professor writes: “Most likely, in an armed conflict nothing has a worse impact on the international public opinion than civilian casualties.” The factions involved seem to be perfectly aware of the primary role mass media have in informing, and thus influencing, public opinion. Evidence of this is found, for instance, in the words of Prime Minister Netanyahu who, when interviewed by CNN about the numerous civilian victims, commented: “[Hamas] wants to pile up as many civilian dead as they can…they use telegenically dead Palestinians for their cause.”

By combining daily data on attacks on both sides of the conflict with data on the content of evening news for top U.S. TV networks, the authors show that “Israeli attacks are more likely to be carried out when the U.S. news are expected to be dominated by important (non-Israel-related, such as an election or the Super Bowl) events on the following day.” Specifically, the empirical findings indicate that the strategic timing of the Israeli army matters less in periods of intense fighting than in periods of relative peace. Moreover, attacks are strategically timed only when civilian casualties are possible. The facts documented in the paper thus confirm that the aim of Israel is that of “minimizing negative international publicity”, since graphic or emotional content about civilian victims would have a strong negative impact on American public opinion.

In contrast, the researchers find “no evidence that attacks by Palestinian militant groups are timed to U.S. news pressure.” This might reflect the marked gap between the two sides: unlike Israel, Palestine does not have a structured or well-organized army, nor does it have the same level of resources that would allow it to plan such attacks strategically. Nevertheless, it should be noted that this does not exclude the possibility that Palestinian groups are simply not interested in shaping American public opinion.

Beyond its interesting findings on the role international media play in the Israeli-Palestinian conflict, this study also reminds us of the powerful responsibility the media hold. In particular, mass media are a watchdog powerful enough to influence the actions of governments simply by carrying out their mission of informing citizens. This becomes even more important given the growing interconnectedness of people throughout the globe through technological progress.

References

  1. Besley, Timothy and Andrea Prat, “Handcuffs for the Grabbing Hand? Media Capture and Political Accountability,” American Economic Review, 2006, 96 (3), 720–736.
  2. Eisensee, Thomas and David Stromberg, “News Droughts, News Floods, and U.S. Disaster Relief,” The Quarterly Journal of Economics, 2007, 122 (2), 693–728.
  3. Snyder, James M. and David Stromberg, “Press Coverage and Political Accountability,” Journal of Political Economy, 2010, 118 (2), 355–408.