Associated Programs

Explorations in Theory and Empirical Analysis

On occasion, scholars at the Levy Institute conduct research that does not fall within a current program or general topic area. Such studies might include the examination of a subject of particular policy interest, empirical research that has grown out of work in a current program area, or an initial exploration of an area being considered for a new research program. Recent studies have included those on Harrodian growth models, the economic consequences of German reunification, and campaign finance reform.

Research Program

Economic Policy for the 21st Century

Program Publications

  • Working Paper No. 811 | July 2014
    An Evaluation Using the Maximum Entropy Bootstrap Method

    This paper challenges two clichés that have dominated the macroeconometric debates in India. The first is the neoclassical view that deficits are detrimental to growth because they increase the rate of interest and thereby displace the interest-rate-sensitive components of private investment. The second is the assumption of “stationarity”—which has long dominated statistical inference in time-series econometrics—together with the emphasis on unit root–type testing, which involves detrending, or differencing, the series to achieve stationarity. The paper examines the determinants of rates of interest in India over the period 1980–81 to 2011–12, using the maximum entropy bootstrap (Meboot) methodology proposed in Vinod 1985 and 2004 (and developed extensively in Vinod 2006, Vinod and Lopez-de-Lacalle 2009, and Vinod 2010 and 2013). The practical appeal of Meboot is that it dispenses with pretests such as structural change and unit root–type testing, whose detrending requirement is problematic for evolving short time series. It also handles situations where stationarity assumptions are difficult to verify—for instance, mixtures of I(0) and nonstationary I(d) series, where the order of integration can differ across series.

    What makes Meboot compelling for Indian interest rate data? Prior to interest rate deregulation in 1992, studies analyzing the determinants of interest rates were rare in India; the oft-cited reason is the analytical and econometric difficulty of conducting meaningful time-series analysis on nonvarying administered rates. Using high-frequency data, existing studies have focused on the recent financially deregulated interest rate regime to establish possible links between interest rates and macroeconomic variables (Chakraborty 2002 and 2012, Dua and Pandit 2002, and Goyal 2004). The Meboot analysis reveals that, contrary to popular belief, the fiscal deficit is not significant for interest rate determination in India. This aligns with existing empirical findings that the interest rate is affected by changes in the reserve currency, expected inflation, and volatility in capital flows, but not by the fiscal deficit. The result has significant policy implications, especially since the central bank has cited the high fiscal deficit as a prime constraint on its flexibility in setting rates.
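
    Though the full algorithm includes refinements such as trimmed-mean tail adjustments (the reference implementation is the R package meboot of Vinod and Lopez-de-Lacalle 2009), the core of the maximum entropy bootstrap can be sketched in a few lines. The code below is an illustrative simplification, not the authors' own implementation:

```python
import numpy as np

def meboot_replicate(x, rng):
    """One maximum entropy bootstrap replicate of series x (simplified).

    1. Sort the series and form intermediate points between order statistics
       (the full algorithm adjusts the tails with trimmed means; here the
       tails are simply mirrored).
    2. Draw sorted uniforms and map them through the piecewise-linear
       quantile function implied by equal mass 1/T per interval.
    3. Reorder the resampled values to the ranks of the original series,
       preserving its time dependence -- no stationarity pretest needed.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    order = np.argsort(x)                  # time positions by ascending value
    xs = x[order]                          # order statistics
    z = np.empty(T + 1)
    z[1:-1] = (xs[:-1] + xs[1:]) / 2.0     # intermediate points
    z[0] = xs[0] - (z[1] - xs[0])          # mirrored lower tail
    z[-1] = xs[-1] + (xs[-1] - z[-2])      # mirrored upper tail
    u = np.sort(rng.uniform(size=T))
    grid = np.linspace(0.0, 1.0, T + 1)    # equal mass 1/T per interval
    resampled = np.interp(u, grid, z)      # piecewise-linear quantile map
    out = np.empty(T)
    out[order] = resampled                 # restore original rank ordering
    return out

rng = np.random.default_rng(0)
x = np.array([4.0, 12.0, 36.0, 20.0, 8.0])
rep = meboot_replicate(x, rng)
print(rep)   # same rank ordering as x, values perturbed around it
```

    Because each replicate keeps the rank structure of the original series, the ensemble retains its time dependence, which is what lets inference proceed without differencing the data.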

  • Working Paper No. 809 | June 2014

    Work and life satisfaction depend on a number of pecuniary and nonpecuniary factors at the workplace, and in turn determine them. We analyze these causal linkages using a structural vector autoregression approach on a sample of the German working populace collected from 1984 to 2008, finding that workplace autonomy plays an important causal role in determining well-being.
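
    The reduced-form step behind a structural VAR is equation-by-equation OLS; identification then adds restrictions on the residual covariance. The sketch below is purely illustrative—simulated data and made-up variable names, not the authors' specification:

```python
import numpy as np

def fit_var1(data):
    """OLS estimate of a reduced-form VAR(1): y_t = c + A @ y_{t-1} + e_t.

    A *structural* VAR adds identifying restrictions on top of this
    reduced form, e.g. a recursive (Cholesky) ordering of the residual
    covariance; only the reduced-form step is estimated here.
    """
    Y = data[1:]
    X = np.column_stack([np.ones(len(Y)), data[:-1]])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    sigma = resid.T @ resid / (len(Y) - X.shape[1])
    return coef[0], coef[1:].T, sigma      # intercept, A, residual covariance

# Simulate a bivariate system in which variable 0 ("autonomy") feeds into
# variable 1 ("satisfaction") but not the other way around, then recover A.
rng = np.random.default_rng(3)
A_true = np.array([[0.5, 0.0],
                   [0.4, 0.3]])
y = np.zeros((3000, 2))
for t in range(1, 3000):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)
c, A_hat, sigma = fit_var1(y)
chol = np.linalg.cholesky(sigma)           # one standard identification step
print(np.round(A_hat, 2))                  # close to A_true
```

    The estimated off-diagonal pattern of A is what lets one speak of a causal direction: the lagged "autonomy" variable helps predict "satisfaction," but not the reverse.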

  • Working Paper No. 808 | June 2014
    A Quantile Approach

    Unemployment has been robustly shown to strongly decrease subjective well-being (or “happiness”). In the present paper, we use panel quantile regression techniques to analyze the extent to which the negative impact of unemployment varies along the subjective well-being distribution. In our analysis of British Household Panel Survey data (1996–2008), we find that, over the quantiles of our subjective well-being variable, individuals with high well-being suffer less from becoming unemployed. A similar but stronger effect of unemployment is found for a broad mental well-being variable (GHQ-12). For happy and mentally stable individuals, higher well-being seems to act as a safety net when they become unemployed. We explore these findings by examining the heterogeneous unemployment effects over the quantiles of satisfaction with various life domains.
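
    Quantile regression minimizes an asymmetric "check" (pinball) loss rather than squared error, which is what lets an estimated effect vary across the outcome distribution. A minimal numerical illustration of that loss (not the paper's panel estimator):

```python
import numpy as np

def pinball_loss(y, c, q):
    """Check (pinball) loss of a constant prediction c at quantile q."""
    e = y - c
    return np.mean(np.maximum(q * e, (q - 1) * e))

# Minimizing the pinball loss over a constant recovers the sample
# q-quantile, which is why quantile regression traces out the whole
# conditional distribution instead of only the conditional mean.
rng = np.random.default_rng(1)
y = rng.normal(size=1000)
grid = np.linspace(-3.0, 3.0, 601)
fitted = {}
for q in (0.1, 0.5, 0.9):
    losses = [pinball_loss(y, c, q) for c in grid]
    fitted[q] = grid[np.argmin(losses)]
print(fitted)   # close to the 0.1, 0.5, and 0.9 sample quantiles of y
```

    With covariates, the same loss is minimized over regression coefficients at each chosen q, yielding one coefficient vector per quantile of the well-being distribution.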

  • Working Paper No. 805 | May 2014
    Measures and Structural Factors

    Economic theory frequently assumes constant factor shares and often treats the topic as secondary. We will show that this is a mistake by deriving the first high-frequency measure of the US labor share for the whole economy. We find that the labor share has held remarkably steady indeed, but that the quasi-stability masks a sizable composition effect that is detrimental to labor. The wage component is falling fast and the stability is achieved by an increasing share of benefits and top incomes. Using NIPA and Piketty-Saez top-income data, we estimate that the US bottom 99 percent labor share has fallen 15 points since 1980. This amounts to a transfer of $1.8 trillion from labor to capital in 2012 alone and brings the US labor share to its 1920s level. The trend is similar in Europe and Japan. The decrease is even larger when the CPI is used instead of the GDP deflator in the calculation of the labor share.
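
    The composition effect described above can be made concrete with a toy calculation. The figures below are hypothetical, chosen only to show how a flat headline share can conceal a falling wage component:

```python
# Hypothetical illustration (not the paper's data) of the composition effect:
# the headline labor share is flat while the bottom-99-percent wage component
# falls, offset by rising benefits and top-1-percent labor incomes.
gdp = 100.0
components_t0 = {"wages_bottom99": 50.0, "benefits": 6.0, "top1_labor": 4.0}
components_t1 = {"wages_bottom99": 44.0, "benefits": 9.0, "top1_labor": 7.0}

for label, c in (("then", components_t0), ("now", components_t1)):
    headline = sum(c.values()) / gdp          # total labor share
    wages = c["wages_bottom99"] / gdp         # wage component only
    print(label, "headline:", round(headline, 2), "wages:", round(wages, 2))
```

    Both periods report the same headline share, yet workers' wage income per unit of GDP has fallen; tracking only the aggregate ratio would miss the transfer entirely.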

  • Working Paper No. 804 | May 2014
    Empirical Studies

    In this second part of our study, we survey the rapidly expanding empirical literature on the determinants of the functional distribution of income. Three major strands emerge: technological change, international trade, and financialization. All three contribute to the fluctuations of the labor share, and there is a significant amount of self-reinforcement among them. For the United States, the factors appear to be listed above in order of increasing importance. We conclude by noting that the falling US wage share cointegrates with rising inequality and a rising top 1 percent income share; thus, all measures of income distribution provide the same picture. Liberalization and financialization worsen economic inequality by raising top incomes, unless institutions are strongly redistributive.

    The labor share has also fallen, for structural reasons and for reasons related to economic policy. Such explanations are left to parts III and IV of our study, respectively. Part I investigated the theories of income distribution.

  • Working Paper No. 803 | May 2014

    This series of working papers explores a theme enjoying a tremendous resurgence: the functional distribution of income—the division of aggregate income by factor share. This first installment surveys some landmark theories of income distribution. Some provide a technology-based account of the relative shares, while others provide a demand-driven explanation (Keynes, Kalecki, Kaldor, Goodwin). Two questions lead to a better understanding of the literature: Is income distribution assumed constant? And is it endogenous or exogenous? Despite their insights, however, these theories alone fail to fully explain the current deterioration of income distribution.

    Subsequent installments are dedicated to analyzing the empirical literature (part II), to the measurement and composition of the relative shares (part III), and to a study of the role of economic policy (part IV).

  • Working Paper No. 800 | May 2014

    Behavioral economics has shown that individuals sometimes make decisions that are not in their best interests. This insight has prompted calls for behaviorally informed policy interventions popularized under the notion of “libertarian paternalism.” This type of “soft” paternalism aims at helping individuals without reducing their freedom of choice. We highlight three problems of libertarian paternalism: the difficulty of detecting what is in the best interest of an individual, the focus on freedom of choice at the expense of a focus on autonomy, and the neglect of the dynamic effects of libertarian-paternalistic policy interventions. We present a form of soft paternalism called “autonomy-enhancing paternalism” that seeks to constructively remedy these problems. Autonomy-enhancing paternalism suggests using insights from subjective well-being research in order to determine what makes individuals better off. It imposes an additional constraint on the set of permissible interventions, highlighting the importance of autonomy in the sense of the capability to make critically reflected (i.e., autonomous) decisions. Finally, it acknowledges that behavioral interventions can change the strength of individual decision-making anomalies over time, as well as influence individual preference learning. We illustrate the differences between libertarian paternalism and autonomy-enhancing paternalism in a simple formal model in the context of optimal sin nudges.

  • Working Paper No. 795 | April 2014

    This paper contributes to the debate on income growth and distribution from a nonmainstream perspective. It looks, in particular, at the role that the degree of capacity utilization plays in the process of growth of an economy that is not perfectly competitive. The distinctive feature of the model presented in the paper is the hypothesis that the rate of capital depreciation is an increasing function of the degree of capacity utilization. This hypothesis implies analytical results that differ somewhat from those yielded by other Kaleckian models. Our model shows that, in a number of cases, the process of growth can be profit-led rather than wage-led. The model also determines the value to which the degree of capacity utilization converges in the long run.
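
    To fix ideas, here is a minimal sketch of how utilization-dependent depreciation enters a canonical neo-Kaleckian closure. This is a generic textbook setup, not the paper's own specification: π is the profit share, v the full-capacity capital-output ratio, s_π the saving rate out of profits, and δ(u) = δ₀ + δ₁u with δ₁ > 0.

```latex
% Illustrative textbook closure with utilization-dependent depreciation
g^s = s_\pi \frac{\pi u}{v} - (\delta_0 + \delta_1 u)
      \qquad \text{(net saving rate)}
g^i = \gamma_0 + \gamma_u u
      \qquad \text{(net accumulation function)}
% Setting g^s = g^i and solving for equilibrium utilization:
u^* = \frac{\gamma_0 + \delta_0}{s_\pi \pi / v - \gamma_u - \delta_1}
```

    With δ₁ > 0 the Keynesian stability condition tightens (s_ππ/v must exceed γ_u + δ₁), and net growth differs from gross accumulation by δ(u*), so conclusions about wage-led versus profit-led growth can depend on whether gross or net magnitudes are tracked—the kind of mechanism the paper explores in a fuller model.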

  • Working Paper No. 786 | January 2014
    An Assessment from Popper’s Philosophy

    The rational expectations hypothesis (REH) is the standard approach to expectations formation in macroeconomics. We discuss its compatibility with two strands of Karl Popper’s philosophy: his theory of knowledge and learning, and his “rationality principle” (RP). First, we show that the REH is utterly incompatible with the former. Second, we argue that the REH can nevertheless be interpreted as a heuristic device that facilitates economic modeling and, consequently, may be justified along the same lines as Popper’s RP. We then argue that, notwithstanding our position on the resolution of this paradox, Popper’s philosophy provides a metatheoretical framework with which we can evaluate the REH. Within this framework, the REH can be viewed as a heuristic device or strategy that fulfils the same function as, for instance, the optimizing assumption. However, we believe that the REH imparts a serious methodological bias: by implying that macroeconomic instability is caused exclusively by “exogenous” shocks that randomly hit the economy, it precludes the analysis of any sources of inherent instability caused by individuals making (nonrandom) errors, and hence it favors the creation of an institutional configuration that may be ill suited to address this type of instability.

  • Working Paper No. 782 | December 2013

    This paper describes an alternative approach to the estimation of linear models with multiple high-order fixed effects. The strategy relies on transforming the data prior to estimating the model. While the approach is computationally intensive, its hardware requirements are minimal, allowing models with more than two high-order fixed effects to be estimated on large datasets. An implementation is illustrated using US Census Bureau Current Population Survey data with four fixed effects.
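
    One common data transformation of this kind is iterated within-group demeaning (the method of alternating projections), which sweeps out each set of fixed effects in turn without ever forming the dummy-variable matrix. The sketch below illustrates the general technique, not the authors' own code:

```python
import numpy as np

def demean_fixed_effects(y, groups, tol=1e-10, max_iter=1000):
    """Sweep out multiple fixed effects by alternating within-group demeaning.

    `groups` is a list of integer arrays, one per fixed-effect dimension,
    giving each observation's group id. Iterating the demeaning over all
    dimensions until convergence projects y off the span of all the group
    dummies, so OLS on the transformed data needs no dummy variables.
    """
    y = np.asarray(y, dtype=float).copy()
    for _ in range(max_iter):
        delta = 0.0
        for g in groups:
            means = np.bincount(g, weights=y) / np.bincount(g)
            adj = means[g]                 # each observation's group mean
            y -= adj
            delta = max(delta, np.abs(adj).max())
        if delta < tol:                    # all group means driven to zero
            break
    return y

# Two-way example: after demeaning both y and x, OLS on the transformed
# data recovers the slope of a model with individual and time effects.
rng = np.random.default_rng(2)
n_i, n_t = 50, 20
i_id = np.repeat(np.arange(n_i), n_t)      # individual ids
t_id = np.tile(np.arange(n_t), n_i)        # time-period ids
alpha, gamma = rng.normal(size=n_i), rng.normal(size=n_t)
x = rng.normal(size=n_i * n_t)
y = 2.0 * x + alpha[i_id] + gamma[t_id] + 0.1 * rng.normal(size=n_i * n_t)
yd = demean_fixed_effects(y, [i_id, t_id])
xd = demean_fixed_effects(x, [i_id, t_id])
beta = (xd @ yd) / (xd @ xd)               # OLS on transformed data
print(round(beta, 2))                      # close to the true slope of 2
```

    Only group sums and counts are ever held in memory, which is why this style of transformation scales to large datasets and several high-order fixed-effect dimensions on modest hardware.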