Finance master project by Alejandro García, Thomas Kelly, and Joan Segui ’20
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Introduction
This paper builds on the stealth trading literature to investigate the relationship between several trade characteristics and price discovery in US equity markets. Our work extends the Weighted Price Contribution (WPC) methodology, which in its simplest form posits that if all trades conveyed the same amount of information, their contribution to market price dynamics over a given time interval should equal their share of total transactions or of total volume traded in that period. Traditionally, the approach has been used, through the estimation of a parsimonious linear specification, to provide evidence that smaller trades convey a disproportionate amount of information in mature equity markets.
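To fix ideas, a stylized version of the WPC decomposition for a trade category s (our notation, not necessarily the paper's exact formulation) is

\[ \mathrm{WPC}_s = \sum_{t} \frac{|\Delta p_t|}{\sum_{\tau} |\Delta p_{\tau}|} \cdot \frac{\Delta p_{s,t}}{\Delta p_t}, \]

where \(\Delta p_t\) is the price change over interval t and \(\Delta p_{s,t}\) is the part of that change occurring on trades in category s. Under the null that all trades are equally informative, \(\mathrm{WPC}_s\) should equal category s's share of trades (or of volume), and the parsimonious linear specification referred to above tests for departures from this proportionality.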
The methodology is flexible enough to accommodate a first set of key extensions in our work, which focus on the relative price contribution of trades initiated by high-frequency traders (HFTs) and on stocks of different market capitalization categories over the daily session. However, previous research has found that short-lived frictions make the WPC methodology ill-suited for analyzing price discovery at under-a-minute frequencies, a key timespan when HFTs are in focus. Therefore, to analyze the information content of trades with different attributes at higher frequencies, we use a Fixed Effects specification that characterizes as price informative those trades that correctly anticipate price trends over under-a-minute windows of varying length.
Key results
At the daily level, our results support prior research that has found statistical evidence of smaller trades impounding a disproportionate amount of information into market prices. This result holds regardless of the type of initiating trader or the market capitalization category of the stock being transacted, suggesting that the type of trader on either side of the transaction does not significantly alter the average information content over the session.
At higher frequencies, trades initiated by HFTs are found to contribute more to price discovery than trades initiated by non-HFTs only when large and mid cap stocks are being traded, consistent with prior empirical findings pointing to HFTs having a strong preference for trading on highly liquid stocks.
Economics master project by Amil Camilo, Doruk Gökalp, Julian Klix, Daniil Iurchenko, and Jeremy Rubinoff ’20
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Around the world, and especially in high-tech economies, the demand and adoption of industrial robots have increased dramatically. The abandonment of robots (referred to as derobotization or, more broadly, deautomation) has, on the other hand, been less discussed. It would seem that the discussion on industrial robots has rarely been about their abandonment because, presumably, the abandonment of industrial robots would be rare. Our investigation, however, shows that the opposite is true: not only do a substantial number of manufacturing firms deautomate, a fact which has been overlooked by the literature, but the reasons for which they deautomate are highly multi-dimensional, suggesting that they depend critically on the productivity of firms and those firms’ beliefs about robotization.
Extending the analysis of Koch et al. (2019), we use data from the SEPI Foundation’s Encuesta sobre Estrategias Empresariales (ESEE), which annually surveys over 2000 Spanish manufacturing firms on business strategies, including whether they adopt robots in their production lines. We document three major facts on derobotization. First, firms that derobotize tend to do so quickly, with over half derobotizing within the first four years after adopting robots. Second, derobotizing firms tend to be smaller than firms that stay automated for longer periods of time. Third, firms that abandon robots demand less labor and increase their capital-to-labor ratios. The prompt abandonment of robots, we believe, is indicative of a learning process in which firms robotize production expecting higher earnings, but later learn information that causes them to derobotize and adjust their production accordingly.
With this in mind, we propose a dynamic model of automation that allows firms to both adopt robots and later derobotize their production. In our setup, firms face a sequence of optimal stopping problems where they consider whether to robotize, then whether to derobotize, then whether to robotize again, and so on. The production technology in our model is micro-founded by the task-based approach from Acemoglu and Autor (2011). In this approach, firms assign tasks to workers of different occupations as well as to robots in order to produce output. For simplicity, we assume two occupations, that of low-skilled and high-skilled workers, where the latter workers are naturally more productive than the former. When firms adopt robots, the firm’s overall productivity (and the relative productivity of high-skilled workers) increases, but the relative productivity of low-skilled workers decreases. At the same time, once firms robotize they learn the total cost of maintaining robots in production, which may exceed their initial expectations. At any point in time, firms can derobotize production with the newfound knowledge of the cost. Likewise, firms can reautomate at a lower cost with the added assumption that firms retain the infrastructure of operating robots in production.
The simulations of our model can accurately explain and reproduce the behavioral distribution of automation across firms in the data (see Figure 1). Indeed, we are able to show that larger and more productive firms are more likely to robotize and, in turn, the firms which derobotize tend to be less productive (referred to as the productivity effect). However, the learning process which reveals the true cost of robotized production (referred to as the revelation effect) also highlights the role of incomplete information as a plausible explanation for prompt abandonment. Most importantly, our simulations suggest that analyses which ignore abandonment can overestimate the effects of automation and, therefore, must be incomplete.
Our project is the first, to our knowledge, to document the pertinent facts on deautomation as well as the productivity effect and the revelation effect. It is apparent to us, based on our investigation, that any research seeking to model automation would benefit from modeling deautomation. From that starting point, there remains plenty of fertile ground for new questions and, consequently, new insights.
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Paper abstract
In venture capital, two or more venture capitalists (VCs) often form syndicates to participate in the same financing rounds. Historically, syndicated investments have been found to have a positive effect on investment performance. The paper provides insight into the effects of syndication on the likelihood of a successful exit for the venture-backed firm. It addresses possible driving components, such as the composition of the syndicates (in particular, internal investment funds being classed as external firms in two of the four models proposed) as well as a relaxation of the definition of an investment round. One of the main conclusions is that, using the chance of exiting and money out minus money in as success factors, syndicated investments are associated with a higher likelihood of exit across all models. This supports the Value-add hypothesis over the alternative, the Selection hypothesis, as it suggests that syndicating VC firms bring varying expertise to the project in order to increase success post-investment. The paper advises caution, however, as the story is not consistent across the analysis.
Main conclusions
The paper aimed to add to the literature on the reasons for syndication, in particular the debate between the Value-add and Selection hypotheses as set out from various points of view. Under the Selection hypothesis, uncertainty around profitability is the reason for syndication; the Value-add hypothesis, by contrast, suggests that VCs syndicate to add additional value to the venture post-investment. This is why we introduced varying definitions of syndication, in order to draw inferences from the data. If the Soft definition of syndication (where syndication can occur across multiple investment rounds) were associated with greater success, this might favour the Value-add hypothesis. However, in the initial test using “exited” as the success measure, the Soft syndication models did not show a significant difference compared to the Hard syndication models.
Using the chance of exiting as a success factor, the syndication coefficients across all models indicated a higher chance of exiting. On this measure, one could argue for the Value-add hypothesis and against the Selection hypothesis, as syndicated investments across all models were associated with a higher chance of exiting the investment. Including the key controls led to similar conclusions, with syndication increasing the log odds of exiting. This supports the conclusion of Brander, Amit and Antweiler (2002) that the Value-add hypothesis dominates.
Using money out minus money in as a success factor, syndicated investments were shown to increase it, which would be in line with the Value-add hypothesis according to Brander, Amit and Antweiler (2002). However, this could be driven by already-successful companies attracting larger investments.
Using exit duration as a success factor, no conclusions could be drawn about syndication, as the syndication coefficients were not significant. A potential reason for this, as Guo, Lou and Pérez-Castrillo (2015) highlight, is that the type of fund behind the investment has an impact on the duration and amount of funding, and therefore on the returns to the VCs. They find that CVC (corporate venture capital) backed startups receive significantly higher investment amounts and stay in the market longer before they exit (Guo, Lou and Pérez-Castrillo, 2015). The data did not allow us to analyse the type of fund, meaning the investment strategy could differ from the outset. As no control variable exists for the type of fund, it is assumed that this does not significantly affect the outcome. Controlling for the type of fund may have shed light on this aspect of the results.
International Trade, Finance, and Development master project by Zhuldyz Ashikbayeva, Marei Fürstenberg, Timo Kapelari, Albert Pierres, Stephan Thies ’19
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Abstract
This thesis studies the impacts of flooding on the income and expenditures of rural households in northeast Thailand. It explores and compares shock coping strategies and identifies household-level differences in flood resilience. Drawing on unique household panel data collected between 2007 and 2016, we exploit random spatio-temporal variation in flood intensities at the village level to identify the causal impacts of flooding on households. Two objective measures of flood intensity are derived from satellite data and employed in the analysis. Both proposed measures rely on the percentage of area inundated in the surroundings of a village, but the second measure is standardized and expressed relative to the median village-level flood exposure. We find that household incomes are negatively affected by floods. However, our results suggest that deviations from median flood exposure, rather than absolute levels of flooding, drive the negative effects on households. This indicates a certain degree of adaptation to floods. Household expenditures for health and especially food rise in the aftermath of flooding. Lastly, we find that education above primary school helps to completely offset the potential negative effects of flooding.
Conclusion
This paper adds to the existing body of literature by employing a satellite-based measure to investigate the long-run effects of recurrent floods on household-level outcomes. We first set out to identify the causal impacts of flooding on the income and expenditures of rural households in Thailand. Next, we explored and compared shock coping strategies and identified potential differences in flood resilience based on household characteristics. For this purpose, we leveraged a detailed household panel data set provided by the Thailand Vietnam Socio Economic Panel. To quantify the severity of flood events, we calculated flood indices based on flood maps collected by the Geo-Informatics and Space Technology Development Agency (GISTDA), measuring the deviation from median levels of flooding within a 5 km radius around each village. The figure below illustrates the construction of the index for a set of exemplary villages in the Nang Rong district of Buri Ram in northeast Thailand.
(a) 2010 flooding in Buri Ram and surrounding provinces. Red lines mark the location of the Nang Rong district.
(b) Detailed overview of the flood index construction. Red dots show the exact location of each village, with the 5 km radius around each village marked by a red circle.
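As a rough formalization of the two indices described above (our notation; the paper’s exact construction may differ in detail), let \(A_{v,t}\) denote the share of the area within a 5 km radius of village v that is inundated in year t. The first measure is \(A_{v,t}\) itself, while the standardized measure expresses exposure relative to the village’s own median, for instance as

\[ \tilde{A}_{v,t} = A_{v,t} - \operatorname{median}_{\tau} A_{v,\tau}, \]

so that positive values capture flooding in excess of what the village typically experiences.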
Our results suggest a negative relationship between floods and per-household-member income, for both total income and income from farming. Per-household-member expenditure, however, does not seem to be affected by flood events at all. The only exceptions are food and health expenditures, which increase after flood events that are among the top 10 percent of the most severe floods. The former is likely driven by the fact that many households in northeastern Thailand live at subsistence level and therefore consume their own farming produce. A loss of production in a given year may lead these households to substitute for it by buying produce from markets. Rising health expenditures may be explained by injuries or diseases contracted during a heavy flood.
Investigating potential risk mitigation strategies revealed that households with better-educated household heads suffer less during flood events. However, this result does not necessarily point to a causal relationship, as better-educated households might settle in locations within the village that are less likely to be flooded. While our data do not allow us to control for such settlement choices at the micro-spatial level, our findings still provide valuable insights for future policy-relevant research on the effects of education on disaster resilience in rural Thailand. Moreover, our data suggest that only very few households are insured against potential disasters. Future research will help to investigate flood impacts and risk mitigation channels in more detail.
Economics master project by Julie Balitrand, Joseph Buss, Ana Monteiro, Jens Oehlen, and Paul Richter ’19
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Abstract
We study the effects of the #BlackLivesMatter movement on the law-abiding behavior of African-Americans. First, we derive a conceptual framework to illustrate changes in risk perceptions across different races. Second, we use data from the Illinois Traffic Study Dataset to investigate race ratios in police stops. For identification, we apply a linear probability OLS regression on media coverage as well as an event study framework built around specific cases. We find that the number of Black people committing traffic law violations is significantly reduced after spikes in media coverage and notable police shootings. In the latter case, we further find that the effect holds for an approximately ten-day period. We argue that these observed changes in driving behavior result from updated risk beliefs.
Game Tree. Balitrand et al.
Conclusions
Beginning with our model, we show that media-related changes in risk perceptions cause a change in the proportion of people committing crimes. Using this model, we further predict that this change differs across racial groups. More specifically, it predicts that Blacks become more cautious in order to decrease the chance of a negative interaction with the police. Whites, on the other hand, are predicted not to change their behavior, since the violence in the media coverage is not relevant to their driving decisions.
In order to test our model, we develop a hypothesis testing strategy that allows us to disentangle police actions from civilian decisions. By considering the proportion of stopped drivers who are Black at nighttime, we completely remove any effect caused by changes in policing intensity and bias. Instead, we create a testable hypothesis that focuses only on the differences in behavior between racial groups.
To test this hypothesis, we use a linear probability model with traffic data from Illinois. We test the hypothesis using both an event study approach and media intensity data from the GDELT Project. Both approaches confirm our model’s predictions at high significance levels. Therefore, we show that Blacks became more cautious in response to these events compared to other racial groups. In addition, our robustness check on the total number of stops supports the claim that non-Blacks do not respond significantly to media coverage of police brutality toward Blacks. This leads to the conclusion that the expected proportion of Blacks breaking traffic laws goes down in response to coverage of these events.
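As an illustration of the estimation strategy (a minimal sketch, not the authors’ actual code; the file and variable names such as is_black, night and media_intensity are hypothetical), a linear probability model of this kind could be run with statsmodels:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stop-level data: one row per traffic stop in Illinois.
# is_black: 1 if the stopped driver is Black, 0 otherwise.
# media_intensity: daily GDELT-style measure of coverage of police brutality.
stops = pd.read_csv("illinois_stops.csv")
night = stops[stops["night"] == 1]  # restrict to nighttime stops

# Linear probability model: does the share of stopped drivers who are Black
# fall when media coverage spikes? The calendar controls are illustrative only.
model = smf.ols("is_black ~ media_intensity + C(day_of_week) + C(month)",
                data=night).fit(cov_type="HC1")
print(model.summary())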
An implicit assumption in our model was that as media coverage goes to zero, Blacks revert back to their original level of caution. To test this, we looked at three-day intervals following each media event. We showed that after approximately ten days the coefficients are no longer significant, indicating that the media only caused a short-term change in behavior. Since this was a robustness check, and not a main focus of our model, we did not investigate it further. It is an interesting result, however, and warrants future analysis.
On a final note, we want to address the type of media we use for our analysis. Our model section considers media in a general sense. This can include, but is not limited to, social media platforms such as Twitter and Facebook, as well as more traditional media such as television and print newspapers. All of these sources cover police brutality cases at similar intensities. We use TV data for media intensity, since it affects the broadest demographic and therefore best represents the average driver’s exposure to the topic. Media with different audience age profiles might affect different demographics more or less; for example, social media may have a greater effect on younger drivers than on older drivers. We believe this topic warrants further analysis, in addition to the topic of the previous paragraph.
Competition and Market Regulation master project by Leandro Benítez and Ádám Torda ’19
Evaluating the performance of merger simulation using different demand systems: Evidence from the Argentinian beer market
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Abstract
This research arises in a context of strong debate on the effectiveness of merger control and on how competition authorities assess the potential anticompetitive effects of mergers. In order to contribute to the discussion, we apply merger simulation (the most sophisticated and most frequently used tool for assessing unilateral effects) to predict the post-merger prices of the AB InBev / SAB-Miller merger in Argentina.
The basic idea of merger simulation is to simulate the post-merger equilibrium from estimated structural parameters of the demand and supply equations. Assuming that firms compete à la Bertrand, we use different discrete choice demand systems (Logit, Nested Logit and Random Coefficients Logit models) in order to test how sensitive the predictions are to changes in the demand specification. Then, to get a measure of the precision of the method, we compare these predictions with actual post-merger prices.
Finally, we point out the importance of post-merger evaluation of merger simulation methods applied in complex cases, as well as the advantages and limitations of using these types of demand models.
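For intuition, the core of the simulation in the simplest (plain Logit, single-product firms) case can be sketched as follows; this is the textbook formulation rather than the project’s exact specification. With mean utility \(\delta_j = x_j\beta - \alpha p_j + \xi_j\), predicted market shares are

\[ s_j(p) = \frac{\exp(x_j\beta - \alpha p_j + \xi_j)}{1 + \sum_k \exp(x_k\beta - \alpha p_k + \xi_k)}, \]

and Bertrand-Nash prices solve the first-order conditions

\[ s(p) + \left( \mathcal{O} \odot \Delta(p) \right) (p - mc) = 0, \]

where \(\Delta_{jk}(p) = \partial s_k / \partial p_j\) and \(\mathcal{O}\) is the ownership matrix (\(\mathcal{O}_{jk} = 1\) if products j and k belong to the same firm). The procedure backs out marginal costs from observed pre-merger prices, rewrites \(\mathcal{O}\) to reflect the merged entity, and solves the system again to obtain simulated post-merger prices.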
Conclusion
Merger simulations yield mixed conclusions on the use of different demand models. The Logit model is considered ex ante to be inappropriate because of its restrictive substitution patterns; however, it performed better than expected. Its predictions were on average close to those of the Random Coefficients Logit model, which should yield the most realistic and precise estimates. Conversely, the Nested Logit model largely overestimated the post-merger prices. However, its poor performance is mainly driven by the nest configuration: the swap of brands generates two near-monopoly positions, in the standard and low-end segments, for AB InBev and CCU respectively. This issue, added to the high correlation of preferences for products in the same nest, generates enhanced price effects.
Regarding the substitution patterns, the Logit, Nested Logit and Random Coefficients Logit models yielded different results. The own-price elasticities are similar for the Logit and Nested Logit models, but for the Random Coefficients Logit model they are almost tripled. This is likely driven by the larger estimated price coefficient as well as by the standard deviations of the product characteristics. As expected, by construction the Random Coefficients Logit model yielded the most realistic cross-price elasticities.
Our question of how different discrete choice demand models affect merger simulation (and, by extension, its policy implications) is hard to answer. For the AB InBev / SAB-Miller merger, the Logit and Random Coefficients Logit models predict almost no change in prices. Conversely, according to the Nested Logit model, both scenarios were equally harmful to consumers in terms of unilateral effects. However, as mentioned above, given the particular post-merger nest configuration, evaluating this model solely by the precision of its predictions might be misleading. We cannot rule out that it would deliver better predictions under different conditions.
As a concluding remark, we must acknowledge the virtues and limitations of merger simulation. Merger simulation is a useful tool for competition policy, as it gives us the possibility to analyze different hypothetical scenarios (approving the merger, imposing conditions, or blocking the operation outright). However, we must take into account that it is still a static analysis framework. By focusing only on current pre-merger market information, merger simulation does not consider dynamic factors such as product repositioning, entry and exit, or other external shocks.
Macroeconomic master project by Ivana Ganeva and Rana Mohie ’19
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Introduction
The question of whether a currency crisis can be predicted beforehand has been discussed in the literature for decades. Economists and econometricians have been trying to develop prediction models that can work as an Early Warning System (EWS) for a currency crisis. The significance of such systems is that they provide policy makers with a valuable tool to aid them in tackling economic issues or speculative pressure, and in taking decisions that would prevent these from turning into a crisis. This topic is especially relevant to Emerging Market Economies, whose exchange rates fluctuate more, translating into greater currency crisis risk.
In this paper, we propose an Early Warning System for predicting currency crises that is based on an Artificial Neural Network (ANN) algorithm. The performance of this EWS is then evaluated both in-sample and out-of-sample using a data set of 17 developed and developing countries over the period 1980-2019. The performance of this neural-network-based EWS is then compared to two other models that are widely used in the literature. The first is the Probit model, which is considered the standard model for predicting currency crises and is based on Berg and Pattillo (1999). The second is a regime-switching prediction model based on the one proposed by Abiad (2006).
Artificial Neural Networks
Artificial Neural Networks (ANNs) are a Machine Learning technique that draws its inspiration from biological nervous systems and the structure of the (human) brain. With recent advances in computing technology, computer scientists have been able to mimic brain functionality with artificial algorithms, which has motivated researchers to use this functionality to design algorithms that can solve complex and non-linear problems. As a result, ANNs have become a source of inspiration for a large number of techniques across a vast variety of fields. The main financial areas where ANNs are utilized include credit authorisation and screening, financial and economic forecasting, fixed income investments, and the prediction of default, bankruptcy, and credit card manipulation (Öztemel, 2003).
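As a toy illustration of the kind of back-propagation network discussed in this section (a minimal sketch under assumed file and column names such as ews_panel.csv, reserves_growth and crisis, not the authors’ implementation), a feed-forward crisis classifier could be set up with scikit-learn:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Hypothetical country-month panel: macro indicators plus a binary crisis label.
data = pd.read_csv("ews_panel.csv")
features = ["reserves_growth", "real_exchange_rate", "export_growth", "m2_to_reserves"]
X, y = data[features], data["crisis"]

# Chronological split mimics out-of-sample evaluation (no shuffling of the time order).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)
scaler = StandardScaler().fit(X_train)

# A small multi-layered feed-forward network trained by back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(8, 4), activation="relu",
                    max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
crisis_prob = clf.predict_proba(scaler.transform(X_test))[:, 1]  # signal when above a cutoff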
Main Contributions
1. Machine Learning Techniques:
(a) Using an Artificial Neural Network predictive model based on the multi-layered feed-forward neural network (MLFN), also known as the “back-propagation network”, which is one of the most widely used architectures in the financial time series neural network literature (Aydin and Savdar, 2015). To the best of our knowledge, this is the first study to use a purely neural network model to forecast currency crises.
(b) Improving the forecast performance of the Neural Network model by allowing the model to be trained on (learn from) the data of other countries in the same cluster, i.e. countries with similar traits and nominal exchange rate depreciation properties. The idea behind this model extension is mainly adopted from the transfer learning technique used in image recognition applications.
2. The Data Set: Comparing models across a large data set of 17 countries in 5 continents, and including both developing and developed economies.
3. Crisis Definition: Adding an extra step to the Early Warning System design by clustering the set of countries into 6 clusters based on their economies’ traits and the behavior of their nominal exchange rate depreciation fluctuations. This allows for a crisis definition that is uniquely based on each set of countries’ properties; we call it the ’middle-ground’ definition. Moreover, it allowed us to test the potential for improving the forecasting performance of the neural network by training the model on data sets of other countries within the same cluster.
4. Reproducible Research: The downloading and cleaning of the data have been automated, so that the results can be easily updated or extended.
Conclusions
We compare models based on two main measures. The Good Signals measure captures the percentage of currency crises predicted out of the total crises that actually occurred in the data set. The second measure used for comparing models is the False Alarms measure: the percentage of false signals out of the total number of crises the EWS predicts, in other words, the percentage of times the EWS predicts a crisis that never happens.
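In confusion-matrix terms (TP = correctly predicted crises, FN = missed crises, FP = false signals), the two measures described above can be written as

\[ \text{Good Signals} = \frac{TP}{TP + FN}, \qquad \text{False Alarms} = \frac{FP}{TP + FP}. \]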
The tables presented below show our findings and how the models perform against each other on our data set of 17 countries. We also provide the relevant findings from the literature as a benchmark for our research.
The results in Table 1 show that Berg & Pattillo’s clustering of all countries together generally works worse than our way of clustering the data. Therefore, we can confirm that the choice of a ’middle-ground’ crisis definition has indeed helped us preserve potentially important country- or cluster-specific traits. In brief, we obtain results comparable to those found in the literature when using conventional methods, as highlighted in the table below.
After introducing the ANN model and its extension, we examine their out-of-sample performance and obtain some of the key results of our research.
Summary of the key results
The proposed Artificial Neural Network model’s crisis predictability is shown to be comparable to that of the standard currency crisis forecasting model on both measures, Good Signals and False Alarms. However, the modified Neural Network model trained on the clustered data set showed performance superior to the standard forecasting model.
The performance of the Artificial Neural Network model improved tangibly when we introduced our method of clustering the data. That is, data from similar countries in the training set of the network can indeed serve as an advantage rather than a distortion. By contrast, using the standard Probit model with the panels of clustered data resulted in lower performance compared to the respective country-by-country measures.
Economics master project by Shaily Bahuguna, Diego Loras Gimeno, Davina Heer, Manuel E. Lago, and Chiara Toietta ’19
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Abstract
This paper aims to find a pattern in the evolution of altruistic and cooperative behaviour whilst distinguishing across different types of schools in Spain. Specifically, we design a controlled laboratory experiment by running the standard dictator game and a public goods game in a public and a private (“concertada”) high school. Using a sample of 180 students, we compare 12- and 16-year-old students to trace the evolutionary pattern and test whether there is a significant change by type of schooling system. In addition, we control for covariates such as parental wealth, religious views and ethical opinions. Interestingly, evidence from our data highlights that altruism levels rise throughout public school education whilst they fall in private schools. On the contrary, cooperation levels are relatively stable in public schools but rise in private schools. The results from this paper can be exploited to understand how education may influence selfish and individualistic behaviour in our society.
Key results
Diff-in-Diff (Altruism (L) & Cooperation (R))
Our results show that at the initial stage, i.e. for first-year students, the level of altruism is higher in public schools, and this prevails throughout the students’ education in a public school. On the other hand, we observe the opposite trend for students attending a private school: over the four years of education, the average level of altruism declines. With regard to cooperation, we find some surprising results. Although students attending a public school initially show higher levels of cooperation than those in private schools, over the course of their education this gap is not only closed but reversed in favour of the private school. Our results are in line with previous research stating that females are more likely to donate and cooperate than males, but contradict the popular view in the literature that income is positively correlated with both dependent variables.
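The comparison summarized in the figure above maps into a simple difference-in-differences specification (our stylized notation, not necessarily the exact regression estimated in the project):

\[ y_{is} = \beta_0 + \beta_1 \mathrm{Private}_s + \beta_2 \mathrm{Senior}_i + \beta_3 \left( \mathrm{Private}_s \times \mathrm{Senior}_i \right) + X_i'\gamma + \varepsilon_{is}, \]

where \(y_{is}\) is the amount given in the dictator game (altruism) or contributed in the public goods game (cooperation), \(\mathrm{Senior}_i\) indicates the older (16-year-old) cohort, and \(X_i\) collects the controls mentioned in the abstract. The sign of \(\beta_3\) then captures whether the public-private gap widens or reverses between cohorts.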
Economics of Public Policy master project by Agrima Sahore, Ah Young Jang, and Marjorie Pang ’19
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Abstract
Using household survey data from rural Bangladesh, we explore the determinants of domestic violence. We propose two hypotheses: first, women suffer more domestic abuse as a result of marrying young; and second, women who are empowered suffer less gender-based violence. We isolate the causal effect of marriage timing using age at first menstruation and extreme weather as instruments, and the effect of empowerment using the number of types of informal credit sources as an instrument. We find robust evidence contrary to our hypotheses. Our findings highlight that mere empowerment or increasing the age at first marriage are insufficient means of combating gender-based violence and can in fact be counterproductive to reducing domestic violence against women if the socio-economic context is not carefully considered.
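Schematically, and purely as an illustration of the strategy described in the abstract (stylized notation; the authors’ exact controls and functional form may differ), the instrumental-variables setup can be written as a first stage

\[ \mathrm{AgeAtMarriage}_i = \pi_0 + \pi_1 \mathrm{AgeAtMenarche}_i + \pi_2 \mathrm{ExtremeWeather}_i + X_i'\pi_3 + u_i, \]

an analogous first stage for empowerment using the number of types of informal credit sources as the instrument, and a second stage

\[ \mathrm{Violence}_i = \beta_0 + \beta_1 \widehat{\mathrm{AgeAtMarriage}}_i + \beta_2 \widehat{\mathrm{Empowerment}}_i + X_i'\gamma + \varepsilon_i, \]

where the positive estimated \(\beta_1\) and \(\beta_2\) correspond to the counterintuitive findings discussed below.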
Conclusion
Interestingly, we find a positive relationship between age at first marriage and domestic violence; and empowerment and domestic violence. This highlights the complexity of the nature of domestic violence against women in a highly conservative setting like rural Bangladesh.
Violence against women continues to be a social and economic problem that Bangladesh struggles with. Although the government had aimed to eliminate gender-based violence in the country by 2015, its efforts have not achieved the desired results. However, if the empowerment of women (an improvement in their economic and social status) and violence against them follow an inverted U-shaped curve, it is possible that Bangladesh is still adjusting to egalitarian gender norms and expectations and is positioned somewhere on the upward-sloping part of the curve, wherein an initial increase in empowerment raises violence against women before reducing it.
In order to design successful policies to combat violence against women, our study highlights the importance of understanding traditional cultural norms – especially prevailing gender norms – economic conditions, and how the interplay of various socio-economic factors contribute to domestic violence against women. Ultimately, actions and practices aimed at improving women’s condition in societies should work towards confronting existing circumstances and environments that underlie women’s risk of experiencing domestic violence.
Economics master project by Eimear Flynn, Florencia Saravia, Josefina Cenzon, Nimisha Gupta, and Selena Tezel ’19
Editor’s note: This post is part of a series showcasing BSE master projects. The project is a required component of all Master’s programs at the Barcelona School of Economics.
Abstract
Is financial globalization beneficial to economies at all levels of development? Or are there certain “threshold” levels of financial, institutional and economic development that a country must first attain in order to realize the growth benefits of globalization? Kose, Prasad and Taylor (2009) develop a unified empirical framework to answer this question. The debate in the literature is ongoing, yet few studies have explored these questions in a post-crisis context. In this paper, we replicate and extend their work, paying close attention to the period 2005-2014. Our analysis yields three key results. First, the financial depth threshold above which countries can benefit from financial globalization increases from 66% to 81% when we consider the extended period. Second, the proportion of countries with depth levels above this threshold declines over time. Finally, the coefficients are smaller in absolute value over the period 1975-2014. Taken together, these results imply a breakdown in the relationship between financial depth, openness and growth since the Great Recession. Financial deepening on its own can no longer ensure positive growth effects of financial integration.
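A stylized version of the threshold specification in this framework (our notation, following the general form of such threshold growth regressions rather than the paper’s exact equation) interacts financial openness with an indicator for financial depth exceeding the estimated threshold:

\[ g_i = \alpha + \beta_1 FO_i + \beta_2 \left( FO_i \times \mathbf{1}\{ FD_i > FD^* \} \right) + X_i'\gamma + \varepsilon_i, \]

where \(g_i\) is average growth, \(FO_i\) is financial openness, \(FD_i\) is financial depth, and the threshold \(FD^*\) is chosen to maximize the fit of the regression. The 66% and 81% figures quoted above refer to the estimated \(FD^*\) for the two sample periods.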
Conclusion
In the paper, we examine two periods, 1975-2004 and 1975-2014, and test for threshold effects in three variables: financial depth, institutional quality and trade openness. This paper is unique in its inclusion of the years immediately before and after the Great Recession. Following a surge in international financial integration between 2005 and 2007, financial openness plummeted with the onset of the crisis in 2007. This effect was most pronounced in advanced economies. As economic growth rates declined, countries turned their backs on financial globalization. Financial flows have since rebounded, albeit not to their pre-crisis levels. The effect of this volatility on the relationship between financial openness and economic growth, however, is not well understood.
Overall Financial Openness Coefficient and Financial Depth in 1975-2014 vs 1975-2004
We analyze changes in the financial depth threshold over time, as well as changes in the proportion of countries with depth levels above this threshold. We present three key findings. First, we document an increase in the threshold level of financial depth from 66% to 81% when we extend the period to 2014. It follows that the proportion of countries with depth levels above this threshold decreases over time. Our estimate of 66% for the period 1975-2004 is remarkably close to that of Kose et al. Second, the coefficient estimates are smaller and less significant, which points to a breakdown in the relationship between financial openness and growth in the post-crisis period. Finally, we identify significant threshold effects of institutional quality.
Together these results point to a weaker relationship between financial openness and growth in the post-crisis period. Our estimates suggest that once a country reaches the threshold level of financial depth, further improvements in depth stop being important quite rapidly. It is now more difficult for countries to attain the benefits of financial integration, not just because the threshold of financial depth is higher but because financial depth alone may no longer be sufficient to ensure growth. The trade-off that further financial deepening can generate between higher growth and a higher risk of crisis needs to be addressed. The Great Recession was a reminder that financial depth and financial stability need not go hand in hand. The risks of financial deepening are more evident than before. Focusing only on the long run growth view overlooks this trade-off. In order to conduct policy relevant research, a new approach that realistically accounts for both the growth and crisis effects of financial deepening is required.