How has public discourse on marriage equality affected change in US institutions?

Giulia Mariani ’12 (International Trade, Finance, and Development)

Analyses of gradual institutional change have allowed scholars to draw a more flexible line between stability and transformation when examining how institutions evolve over time, particularly in the absence of major critical junctures or exogenous shocks. Yet the theory's explanatory power has been undermined by a lack of attention to the overlapping boundaries of the modes of gradual institutional change, a relatively static model of agency, and conceptual confusion about what exactly the modes of change are.

In our recent article “Discursive Strategies and Sequenced Institutional Change: The Case of Marriage Equality in the United States,” published in Political Studies, Tània Verge and I argue that addressing these shortcomings requires investigating the agent-based dynamics underpinning gradual institutional change and bringing the role of ideas to the fore. Indeed, ideas and discourses can have a constitutive impact on the creation, maintenance and reform of institutions, and actors strategically reframe problems and redirect solutions to influence both the process and the outcome of policy reforms.

Employing marriage equality in the United States as a case study, we show that the modes of gradual institutional change can be studied simultaneously as processes that unfold over time, often in a sequential fashion, as outcomes of these processes, and as strategies pursued by actors to steer, impede or undermine policy change.

Our results reveal that proponents and opponents of marriage equality have deployed discursive frames to legitimize institutional change, which has unfolded sequentially: first in a progressive direction, through the modes of “layering” and “displacement,” and then in a regressive direction, through the mode of “conversion.”

Throughout this sequenced process, opposing actors have not only adjusted their discursive strategies to both their rivals and the targeted institutional venues, but have also shifted roles as change and status quo agents. Indeed, our study shows that the actors contesting the institutional status quo in one stage may become the actors defending it in a subsequent phase of the institutional change process, and vice versa. Thus, we argue that traditional, static conceptualizations of agency should be problematized and, rather than as resistance to gender-friendly reforms, opposition to marriage equality should be understood as a proactive mobilization to transform existing institutions.

The recent US Supreme Court decision in Fulton v. City of Philadelphia (2021), which ruled in favor of a Catholic foster care agency that refuses to work with same-sex couples, should thus be understood as a victory for the years-long conservative strategy to undermine LGBT couples’ newly recognized right to marry.

Lastly, our study highlights the role of private actors as ideational entrepreneurs in the adoption and implementation of “morality policies,” such as marriage equality. While morality policy scholars have so far predominantly examined how governmental actors shape policymaking, we show that the discursive strategies deployed by LGBT advocates, religious-conservative organizations and other private actors, such as foster care agencies, florists, and bakers, created new opportunities to influence policy debates and tip the scales toward their preferred policy outcome.

Connect with the author



Giulia Mariani ’12 is a postdoctoral researcher in Political Science at Uppsala University. She is an alum of the Barcelona GSE Master’s in International Trade, Finance and Development.

Labor Markets, Search Frictions and International Trade: Assessing the China Shock

Master project by Marcos Mac Mullen ’18

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2018. The project is a required component of every master program.


Authors:

Marcos Mac Mullen

Master’s Program:

Macroeconomic Policy and Financial Markets

Paper Abstract:

The goal of this paper is to assess quantitatively the impact that the emergence of China in international markets during the 1990s had on the U.S. economy (the so-called China Shock). To do so, I build a model with two sectors producing two final goods, each using as its only input an intermediate good specific to that sector. Final goods are produced in a perfectly competitive environment. The intermediate goods are produced in a frictional environment with labor as the only input. First, I calibrate the closed-economy model to match some salient stylized facts about the U.S. in the 1980s. Then, to assess the China Shock, I introduce a new country (China) onto the international scene. I proceed with two calibration strategies: (i) calibrate China such that it matches the variation in the price of imports relative to the price of exports for the U.S. between the average of the 1980s and the average of 2005-2007, and (ii) calibrate China such that the variation in allocations is close to that observed in the data over the same window of time. I find that under calibration (i) the China Shock in the model explains 26.38% of the variation in the share of employment in the manufacturing sector, 16.28% of the variation in the share of manufacturing production and 27.40% of the variation in the share of wages of the manufacturing sector. Finally, under calibration (ii) I find that the change in relative prices needed to match between 80 and 90 percent of the variation in allocations is around 3.47 times the one observed in the data.
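To give a flavor of the search-friction block that such a model builds on, here is a minimal sketch of a standard matching-function setup (the functional form and parameter values are illustrative assumptions, not the paper's model or calibration):

```python
import numpy as np

# Illustrative search-and-matching block (not the paper's exact model or calibration).
# Matching function: m(u, v) = A * u**eta * v**(1 - eta)

A, eta = 0.6, 0.5      # matching efficiency and elasticity (hypothetical values)
s = 0.03               # exogenous job separation rate (hypothetical)

def finding_rate(theta):
    """Job-finding rate f(theta) for labor market tightness theta = v / u."""
    return A * theta ** (1 - eta)

def steady_state_unemployment(theta):
    """Steady-state unemployment from flow balance: s * (1 - u) = f(theta) * u."""
    f = finding_rate(theta)
    return s / (s + f)

for theta in (0.5, 1.0, 1.5):
    print(f"theta = {theta:.1f}: u* = {steady_state_unemployment(theta):.3f}")
```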

Conclusions and key results:

According to the model, the China Shock explains 26.35% of the variation in the share of manufacturing employment, 16.28% of the variation in the share of manufacturing production and 27.44% of the variation in the share of wages of the manufacturing sector. The first of these results is consistent with the findings of Autor et al. (2013). On the other hand, the variation in the economy's unemployment rate is not matched under either the first or the second calibration of the open economy. I also find that, as a consequence of the China Shock, real wages increase when measured in terms of the price of the import good and decrease when measured in terms of the price of the export good. This result is not in line with the findings of Autor et al. (2013). The optimal unemployment insurance in the open economy is higher than in the closed economy by 6.13% of average wages, because the unemployment rate in the open economy is higher than in the closed economy (a 0.9% difference). Finally, the model generates a non-traditional source of comparative advantage, arising from differences in the relative bargaining power of workers.

Download the full paper [pdf]


More about the Macro Program at the Barcelona Graduate School of Economics

Alum Charlie Thompson (ITFD ’14) uses data science to build a virtual Coachella experience


image credit: musichistoryingifs.com

ITFD alum Charlie Thompson ’14 is an R enthusiast who enjoys “tapping into interesting data sets and creating interactive tools and visualizations.” His latest blog post explains how he used cluster analysis to build a Coachella playlist on Spotify:

“Coachella kicks off today, but since I’m not lucky enough to head off into the California desert this year, I did the next best thing: used R to scrape the lineup from the festival’s website and cluster the attending artists based on audio features of their top ten Spotify tracks!”

source: Charlie Thompson
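Here is a rough Python illustration of the clustering step; the original post works in R with the Spotify Web API, and the artist names, feature values and choice of two clusters below are purely hypothetical:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input: one row per lineup artist with the mean Spotify-style audio
# features of their top tracks (the original post builds this table in R).
artists = pd.DataFrame(
    {
        "danceability": [0.38, 0.42, 0.79, 0.81],
        "energy":       [0.55, 0.60, 0.72, 0.69],
        "valence":      [0.22, 0.25, 0.65, 0.70],
        "tempo":        [120.0, 112.0, 96.0, 101.0],
    },
    index=["Artist A", "Artist B", "Artist C", "Artist D"],
)

# Standardize the features so tempo does not dominate, then cluster the artists.
scaled = StandardScaler().fit_transform(artists)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

print(pd.Series(labels, index=artists.index, name="cluster"))
```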

Read the full blog post on his website

Charlie shares a bit of his background on his website:

Currently an Analytics Specialist at a tech startup called VideoBlocks, I create models of online customer behavior and manage our A/B testing infrastructure. I previously worked as a Senior Data Analyst for Booz Allen Hamilton, where I developed immigration forecasts for the Department of Homeland Security. I also built RShiny applications for various clients to visualize trends in global disease detection, explore NFL play calling, and cluster MLB pitchers. After grad school I worked as a Research Assistant in the Macroeconomics Department of Banc Sabadell in Spain, measuring price bubbles in the Colombian housing market.

I have an MS in International Trade, Finance, and Development from the Barcelona Graduate School of Economics and a BS in Economics from Gonzaga University. For my Master’s thesis I drafted a policy proposal on primary education reform in Argentina, using cluster analysis to determine the optimal regions to implement the program. I also conducted research in behavioral economics and experimental design, using original surveys and statistical modelling to estimate framing effects and the maximization of employee effort.

Read more about Charlie on his website

Vaccine-preventable Childhood Disease and Adult Human Capital: Evidence from the 1967 Measles Eradication Campaign in the United States

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2016. The project is a required component of every master program.


Authors:
Philipp Barteska, Sonja Dobkowitz, Maarit Olkkola, Michael Rieser, Pengfei Zhao

Master’s Program:

Economics

Abstract:

Measles is currently one of the leading causes of death for young children worldwide. We analyze the impact of measles prevention on later-life human capital outcomes by taking advantage of a measles eradication campaign implemented in 1967 in the United States. Using a difference-in-differences design on the 2000 US census micro-sample, we provide evidence for the following statistically significant results: the campaign increased completed years of schooling by two weeks, raised the probability of completing high school by 0.32 per cent, and decreased the probability of being unemployed by 4.26 per cent. Due to the exogenous timing of the eradication campaign, we argue that these results can be interpreted causally. To the best of our knowledge, our paper is the first to document adult human capital impacts of early-life measles exposure using a natural experiment.

Figure 1: Yearly Cases of Measles in the United States

Empirical strategy:

The 1967 measles eradication campaign led to an unprecedented drop in reported measles exposure in the US, as depicted in figure 1.

Our empirical strategy uses the fact that there is variation in measles exposure between states prior to the eradication campaign: the decrease in incidence is highest in those states with the highest incidence rates, as depicted in the first stage relationship in figure 2. This allows for a difference-in-differences design, exploring whether the states that had higher prior exposure to the disease gained more in human capital outcomes than the states with less exposure, controlling for pre-existing state-level linear time trends and state fixed-effects among other controls.
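For concreteness, a stylized version of this kind of difference-in-differences specification is sketched below in Python (the variable names, the data frame `df` and the exact controls are our illustrative assumptions, not necessarily the authors' precise equation):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Stylized difference-in-differences specification: an adult outcome regressed on
# pre-campaign measles incidence in the birth state interacted with being born
# after the 1967 campaign, with state and birth-year fixed effects plus
# state-specific linear trends. `df` is a hypothetical individual-level extract
# from the 2000 census with these columns.
def run_did(df: pd.DataFrame):
    model = smf.ols(
        "years_schooling ~ pre_incidence:post1967"
        " + C(birth_state) + C(birth_year)"
        " + C(birth_state):birth_year",   # state-specific linear time trends
        data=df,
    )
    # Cluster standard errors by birth state.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["birth_state"]})
```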

Figure 2: Decline 1966-1970

We also perform placebo interventions to test the robustness of our results. As depicted in figure 3, the only positive and statistically significant impacts are found for 1967, the actual intervention year. This lends more support to the causal interpretation of our results.

Figure 3: Placebo Interventions Around Cutoff

Conclusion:

In this paper we show suggestive evidence that exposure to a previously common childhood disease can have negative impacts on educational attainment in adulthood, although the effect sizes are not large. This finding strengthens the literature on the early-life origins of human capital.

Our results are most relevant for developing countries, many of which have not yet achieved the vaccination levels required for herd immunity.

Full project available here

Data sources:

IPUMS-USA, University of Minnesota, www.ipums.org.

Project Tycho, University of Pittsburgh, www.tycho.pitt.edu

Monetary policy effects on inequality: A country state-level analysis for the United States

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2016. The project is a required component of every master program.


Authors:
Carola Ebert, Sigurdur Olafson, and Hannah Pfarr

Master’s Program:

Macroeconomic Policy and Financial Markets

Paper Abstract:

Our project focuses on assessing a potential relationship between monetary policy actions and economic inequality, measured by the Gini index. The analysis is conducted for the United States at the country level as well as the state level. The lack of quality and comparability of inequality data is a major empirical challenge in research on inequality in general, so we use different methodologies for the Gini index data to ensure the robustness of the main results at the country level. Moreover, the main contribution of this work is the analysis at a more disaggregated level, i.e. the state and regional level. The state- and region-level analyses provide a further line of investigation into the relationship between monetary policy and inequality. When considering monetary policy effects on inequality at an aggregated country level, one cannot be sure whether potentially heterogeneous reactions of inequality within the country are washed away by the aggregation over states. Therefore, this paper analyses the relation between monetary policy and inequality not only at the aggregate country level but also at the state level. Furthermore, as a first step towards investigating potential transmission channels from monetary policy to inequality, we conduct further tests with regard to initial wealth levels across states.


Main Conclusion:

In this paper we analyzed the implications of a monetary policy shock for inequality at the country, regional and state level. The results are largely consistent across different model specifications and geographic levels, implying that a contractionary monetary policy shock seems to raise inequality on impact. Although the effect is small, it is consistently positive across states and regions. The fact that we find an effect of monetary policy on inequality at all is a contribution in itself.


Our benchmark OLS model leads to the conclusion that a contractionary monetary policy shock leads to an increase in inequality. This finding holds for richer as well as poorer states. The contemporaneous impact stays positive over the considered time horizon. We have run several robustness checks using different model specifications for OLS as well as a VAR approach. These checks confirm the earlier findings.
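As an illustration only, a benchmark regression of this kind could be sketched as follows (the variable names, the shock series and the specification details are assumptions for exposition, not the paper's exact model):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative state-level panel regression in the spirit of the benchmark OLS model:
# the change in the Gini index regressed on an identified contractionary monetary
# policy shock (e.g. a Romer-Romer-style series), with state fixed effects.
# `panel` is a hypothetical DataFrame with columns: state, year, gini, mp_shock.
def estimate_impact(panel: pd.DataFrame):
    panel = panel.sort_values(["state", "year"]).copy()
    panel["d_gini"] = panel.groupby("state")["gini"].diff()
    model = smf.ols("d_gini ~ mp_shock + C(state)", data=panel.dropna())
    return model.fit(cov_type="HC1")   # heteroskedasticity-robust standard errors
```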



Outlook:

However, we have stressed that there is room for further investigation to shed light on this area. One potential way forward would be to expand the analysis by using alternative inequality measures. We conducted our analyses using the Theil and Atkinson indices obtained from Frank’s website. However, since the results obtained with these two measures do not add anything to the analysis, we have not included them in the paper.

Furthermore, some authors stress the relevance of the top and bottom of the income distribution when analyzing inequality. Therefore, one might think about controlling for the share of top incomes within states to assess to what extent the reaction to monetary policy would change. We already tried capturing parts of these potential dynamics using simple mean-comparison tests for wealth dependence.

We find evidence suggesting that monetary policy has a stronger impact on wealthy states and regions than on poorer ones. These results are obtained using simple mean-comparison tests and should be viewed as preliminary, as further research is necessary to conclude whether the initial wealth level of states and regions is relevant for the transmission mechanism from monetary policy to inequality.

Since our results show that there is an effect of monetary policy on inequality, the next step in this line of research could be the investigation of the actual transmission mechanism, including a more elaborate analysis of the potential channels through which monetary policy affects inequality.

As a last remark, we want to point out that we were originally interested in investigating this topic for the European Monetary Union member states. This, however, is not possible due to data limitations. First, across European countries there is no uniform definition and measurement of inequality. Second, the monetary union is fairly new, so the available series of common monetary policy shocks would be too short for our analysis. Nevertheless, we do believe this is an interesting path for future research and, as pointed out in the literature review, the finding of a robust relationship between monetary policy and inequality backs this claim.

Defining data for decision-making

By Benjamin Anderson ’15 (Master’s in Economics of Public Policy).

Ben is a Data Strategist for Made in Durham, a non-profit organization in North Carolina (United States) that works to improve education and career outcomes for local youths.

This article originally appeared on Made in Durham’s website.


In the past few weeks, there has been a barrage of media reports about educational achievement and, more generally, life outcomes for the youth of Durham.

The positive news is that these issues are receiving attention, but the downside is that the reports may be more harmful than helpful. At its best, data optimizes decision-making, but at its worst data can be deceptive and divisive.

Specialized knowledge is required to leverage data for decision-making, whereas selectively reporting figures requires some effort but no expertise. In the latter scenario, the ambiguity of statistical assumptions predisposes the audience to personal as well as framing bias. Those who go to the effort of producing data often have an agenda and therefore have incentives to make claims that imply causes and solutions. Data is dangerous when misused. It can create tension, undermine trust and unity, and result in costly adverse decision-making.

One key characteristic of amateur statistics, aside from lacking an experimental design, is that they do not account for the fact that outcomes are a function of many different variables. For example, schools clearly play a crucial role in influencing academic attainment, but a report drawing relative comparisons between attainment outcomes within or across cities usually implicates unidentified failures of one school district versus another while all but ignoring the effects of transportation, affordable housing, food, healthcare, and social support accessibility, as well as people’s different lived experiences, including traumatic exposure of various kinds.

Reactivity to outcomes is strongly linked to bias and emotion. Making decisions about problems and solutions based exclusively on outcomes is the logical equivalent of going with your gut.

Descriptive statistics alone have a tendency to reinforce what we already think we know rather than helping us to gain an objective understanding of the issues because we often overestimate our understanding of the context. Shards of truth may be buried in our presumptions or between the different storylines, but other times the truth isn’t within sight.

If one wanted to know what public schools are doing right and what positive changes could be made, the reported outcomes would not meaningfully increase understanding. This would be like a college basketball coach using the Ratings Percentage Index (RPI) to make game plans. The RPI is simply a function of outcome variables that are themselves driven by variables with far more influence over a team’s success, such as shot selection, rebounding, ball control and many others.

Similarly, objective inference about the determinants of academic achievement is impossible when we simply have some measure of the output, like grade-level proficiency, graduation rates or achievement gaps. Summarized outcomes do not even begin to untangle the multifaceted causal factors of student achievement, or even point to which factors are within the schools’ control and which are shaped by other institutions that govern infrastructure, real estate development, credit markets and criminal justice.

Good intentions often lead to unintended consequences. Calculating outcomes or deriving slightly new definitions of them does not enhance the cultural or intellectual competence of our community, its citizens or the institutions within it.

This is troubling because the extent of harm done with every report that subjectively frames and selectively reports data will never be known. A symptomatic obsession can enable data to have a negative social impact, leading to the proliferation of economic and racial segregation, adverse selection of people and funds from public schools, victim blaming and the marginalization of objectivity. The focus needs to shift from symptoms to solutions.

Data should be collected and analyzed in a way that enables us to separately identify effects on outcomes, including those determinants within the schools’ control and those outside it, so that all can be addressed in order of impact and feasibility. Robust evaluations should yield insight, pointing out specific causal factors affecting outcomes that schools, nonprofits, policymakers and citizens can address.

Applying a scientific lens to social issues transforms data from punitive to instructive. Careful investigation using valid quantitative methods can help us gain an understanding of the inferences that the data will and will not permit. Through empirical analysis, we have the opportunity to disentangle the effects that different factors have on certain outcomes. This is powerful because it enables us to create informed strategies.

Subsequently, when we know how our potential actions will affect an outcome, a cost-benefit analysis can help decide which evidence should be brought to action. In the public and nonprofit sectors, cost-benefit analysis goes beyond fiscal considerations to examine social returns. Combining these empirical tools puts us in a position to optimize social welfare. Data or analysis lacking these characteristics will result in suboptimal decision-making.

An empirical basis for decision-making that respects the complexity of determinants on outcomes and the tradeoffs between various actions or lack of action should be utilized at all levels – from the systemic to the programmatic. A symptomatic focus and a preoccupation with a single area will not result in systemic improvement. As institutions, organizations and programs, our goal should be to improve, which can only be achieved through learning.

Durham has great potential to grow while enhancing the well-being of all, including the most marginalized. Continuous improvement requires the commitment of people in the public, private, and social sectors to work together.

Part of analytical integrity is the acknowledgement that sometimes our data tells us nothing at all. If we truly care about addressing systemic issues, lack of information is a strong argument for why we should build more robust datasets that incorporate variables across institutions and the socio-economic environment. This requires a willingness to coordinate and to learn. Importantly, these actions imply the willingness to change.

The Made in Durham partnership exists to address issues of the highest importance. The job of data is to increase the role of evidence in the partnership’s decision-making, and because of the gravity of these decisions, I also feel an ethical accountability to this work.

If we aren’t asking the right questions, data can lead to costly decisions that undermine improvement. As members of the community, we should all be able to ask the right questions to hold decision-makers accountable to analytical standards that drive improvement.

Regardless of what the outcomes show now, or at any time in the future, what we should be asking is: what are the causes of these outcomes, what are their magnitudes, and thus, what can we do to improve?

The effect of family income on birth weight

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Authors:
Genevieve Jeffrey, Yi-Ting Kuo, Laura López and Stella Veazey

Master’s Program:
Economics of Public Policy

Paper Abstract:

We examine the effect of income on birth weight by employing two identification strategies using US Vital Statistics Natality data. First, following a study by Hoynes et al. (2015), we take advantage of an exogenous increase in income from the Earned Income Tax Credit using a difference-in-differences methodology. The Earned Income Tax Credit (EITC), enacted in 1975, is a refundable transfer to lower-income working families through the tax system and is one of the primary tools used in the United States to fight poverty. The EITC underwent an expansion in 1993 (Omnibus Budget Reconciliation Act), increasing the maximum credit families with and without children could receive. Following Hoynes et al. (2015), we take advantage of the difference in the maximum credit available to families with different numbers of children. We find that the increase from the EITC reduces the incidence of low birth weight and increases mean birth weight. In addition, we find that maternal smoking and drinking during gestation are reduced.

Next, in order to try to capture the effect of income on birth weight across the population (as opposed to just high-impact groups), we exploit income variation from a policy change in Alaska that allowed payments from oil wealth to be distributed to all Alaska residents. We employ a comparative case study methodology using a synthetic control group, following Abadie et al. (2010). Our comparison group is composed of a combination of North Dakota, Oregon, Delaware, Kentucky and Nevada. The analysis shows a substantial increase in Alaska’s average birth weight over its synthetic counterpart around the onset of the policy. However, we refrain from attributing the divergence to the dividend payments alone, given significant changes in Alaska’s economy that coincide with the policy and are not well mirrored by the control states.
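To make the synthetic control step concrete, here is a minimal sketch of how the donor weights are typically chosen (the numbers and donor rows below are placeholders, not the project's data):

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the synthetic control step (Abadie et al., 2010): choose
# non-negative donor weights summing to one so that the weighted donor states
# track Alaska's pre-policy average birth weight. All values are placeholders.
pre_alaska = np.array([3400., 3410., 3395., 3420.])          # pre-period outcomes
pre_donors = np.array([                                      # rows: donor states
    [3380., 3390., 3385., 3400.],   # e.g. North Dakota
    [3420., 3430., 3410., 3440.],   # e.g. Oregon
    [3350., 3360., 3355., 3370.],   # e.g. Delaware
])

def loss(w):
    """Pre-period fit: squared gap between Alaska and the weighted donor pool."""
    return np.sum((pre_alaska - w @ pre_donors) ** 2)

n = pre_donors.shape[0]
res = minimize(
    loss,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print("donor weights:", np.round(res.x, 3))
```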

Presentation Slides:

[slideshare id=51094757&doc=family-income-birth-weight-150730102428-lva1-app6891]

Systematic Component of Monetary Policy in Open Economy SVAR’s: A New Agnostic Identification Procedure

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Authors: 
Adrian Ifrim and Önundur Páll Ragnarsson

Master’s Program:
Macroeconomic Policy and Financial Markets

Paper Abstract:

We propose a new identification method in open economy models that restricts both the systematic component of monetary policy and the IRFs to a monetary policy shock, while remaining agnostic with respect to the effects of monetary policy shocks on output and open economy variables. We estimate the model for the U.S./U.K. economies and find that a U.S. monetary shock has a significant and permanent effect on output. Quantitatively, a 0.4% annual increase in interest rates causes output to contract by 1.2%. This contradicts the findings of Uhlig (2005) and Scholl and Uhlig (2008). We compute the long-run multipliers implied by the monetary policy reaction function and compare our identification to the ones proposed by Uhlig (2005), Scholl and Uhlig (2008) and Arias et al. (2015). We argue that neither of the above schemes correctly identifies the monetary policy shock, since the latter overestimates the effects of the shock and the former implies counterfactual behavior of monetary policy. We also find that the delayed overshooting puzzle is a robust feature of the data regardless of which identification is chosen.
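For readers less familiar with this identification strategy, the "systematic component" can be illustrated with a stylized interest-rate reaction function of the following form (our notation, not necessarily the exact rule restricted in the paper):

```latex
% Stylized open-economy policy rule: this style of identification restricts the signs
% (and magnitudes) of the systematic responses \psi, while remaining agnostic about
% the responses of output and open-economy variables to the shock \varepsilon^{MP}_t.
i_t = \psi_{\pi}\,\pi_t + \psi_{y}\,y_t + \psi_{e}\,\Delta e_t + \varepsilon^{MP}_t,
\qquad \psi_{\pi} \ge 0,\ \psi_{y} \ge 0
```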

Read the paper or view presentation slides:

[slideshare id=51009241&doc=systematic-component-monetary-policy-open-economy-svars-150728104454-lva1-app6892]

Monetary Policy Uncertainty: does it justify requiring the Fed to follow a Taylor rule?

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Authors:
Jacques Alcabes, Ángelo Gutiérrez, Patrick Mayer, and Hugo Kaminski

Master’s Program:
Economics

Paper Abstract:

In 2014 the “Federal Reserve Accountability and Transparency Act” (FRATA) was introduced in the U.S. Congress, requiring the Fed to adopt a rules-based policy. Supporters of this act argue that uncertainty about economic policy is one of the main explanations for the slow economic recovery witnessed by the U.S. since the 2008 financial crisis. In this article we investigate monetary policy as a specific source of policy uncertainty and propose some novel measures to estimate the effect and magnitude of monetary policy uncertainty on economic activity. We find that, while the effects of monetary policy uncertainty are statistically significant, it is not a large contributor to economic fluctuations.

This project got a shout out from John Taylor himself on Twitter!

Presentation Slides:

[slideshare id=50808034&doc=monetary-policy-uncertainty-150722150209-lva1-app6891]

The pass-through of United States monetary policy to emerging markets: evidence at the firm level from India

Master’s project by Ana Arencibia Pareja, Marina Conesa Martínez, Iuliia Litvinenko, and Ruth Llovet Montañés

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


 

Authors: 
Ana Arencibia Pareja, Marina Conesa Martínez, Iuliia Litvinenko, and Ruth Llovet Montañés

Master’s Program:
International Trade, Finance and Development

Paper Abstract:

This paper evaluates the reaction of Indian firms’ equity prices to U.S. monetary policy changes during the period from 2005 to 2015, limiting the analysis to the days on which monetary policy announcements took place. We find that a one percentage point permanent increase in the Fed funds rate is associated with a 0.09% drop in equity prices, an association that is large in economic terms. Results also show that the response of Indian companies is not homogeneous. For instance, we find that equity prices of companies with higher capital over total assets react less than those of firms with low capital levels, given the same U.S. interest rate increase. Moreover, larger firms, proxied by the number of employees, are less affected by U.S. monetary tightening. The same conclusions are obtained when using EBIT over interest expenses, cash flows over sales, and dividends per share. We also show that stocks respond much more strongly to monetary shocks in periods of contractionary interventions and higher global risk aversion. We propose that firms be better capitalized, holding more equity relative to loans and relying less on banks’ short-term external debt. Finally, we also recommend that they hold more liquidity, which goes in line with having larger EBIT and bigger cash flows. More broadly, we conclude that advanced economies should promote greater international policy cooperation and communicate their monetary policy intentions, which would reduce the risk of large market volatility in emerging economies.
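A stylized version of the kind of announcement-day regression described here is sketched below (our notation, not necessarily the authors’ exact specification):

```latex
% Firm i's equity return on FOMC announcement day t regressed on the change in the
% U.S. policy rate, interacted with firm characteristics X_i (e.g. capital/assets,
% size, EBIT/interest expenses, cash flow/sales, dividends per share).
\Delta \log P_{i,t} = \alpha_i + \beta \,\Delta i^{US}_t
  + \gamma \left( \Delta i^{US}_t \times X_i \right) + \varepsilon_{i,t}
```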

Presentation Slides:

[slideshare id=50496909&doc=us-monetary-policy-emerging-markets-150714064158-lva1-app6892]