Associated Programs

Explorations in Theory and Empirical Analysis

On occasion, scholars at the Levy Institute conduct research that does not fall within a current program or general topic area. Such study might include examination of a subject of particular policy interest, empirical research that has grown out of work in a current program area, or initial exploration in an area being considered for a new research program. Recent studies have included those on Harrodian growth models, the economic consequences of German reunification, and campaign finance reform.

Research Program

Economic Policy for the 21st Century



Program Publications

  • Working Paper No. 1036 | January 2024
    For decades, the literature on the estimation of production functions has focused on the elimination of endogeneity biases through different estimation procedures to obtain the correct factor elasticities and other relevant parameters. Theoretical discussions of the problem correctly assume that production functions are relationships among physical inputs and output. However, in practice, they are most often estimated using deflated monetary values for output (value added or gross output) and capital. This introduces two additional problems: an errors-in-variables problem and a tendency to recover the factor shares in value added instead of their elasticities. The latter problem derives from the fact that the series used are connected through the accounting identity that equates value added with the sum of the wage bill and profits. Using simulated data from a cross-sectional Cobb-Douglas production function in physical terms, from which we generate the corresponding series in monetary values, we show that the coefficients of labor and capital derived from the monetary series will be (a) biased relative to the elasticities by simultaneity and by the error that results from proxying physical output and capital with their monetary values; and (b) biased relative to the factor shares in value added as a result of a peculiar form of omitted-variables bias. We show what these biases are and conclude that estimates of production functions obtained using monetary values are likely to be closer to the factor shares than to the factor elasticities. An alternative simulation that does not assume the existence of a physical production function confirms that estimates from the value series will converge to the factor shares when cross-sectional variation in the factor prices is small. This is, again, the result of the fact that the estimated relationship is an approximation to the distributional accounting identity.
    Authors: Jesus Felipe, John McCombie, and Aashish Mehta
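
    The paper's "alternative simulation" is easy to reproduce in stylized form. The sketch below uses invented firm-level data and assumed common factor prices, not the authors' code: value added is generated purely from the accounting identity V = wL + rK, with no production function anywhere, yet a log-linear Cobb-Douglas regression returns the factor shares as if they were elasticities.

```python
import numpy as np

# Sketch of the "alternative simulation" (invented data): value added comes
# straight from the identity V = w*L + r*K, with common factor prices and no
# production function, yet Cobb-Douglas OLS returns the factor shares.
rng = np.random.default_rng(42)
n = 5000
L = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # hypothetical firm-level labor
K = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # hypothetical firm-level capital
w, r = 3.0, 1.0                                 # common factor prices (assumed)
V = w * L + r * K                               # the accounting identity

# The usual estimation: regress log value added on log inputs.
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
beta, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)

labor_share = np.mean(w * L / V)  # average wage-bill share in value added
print(beta[1], labor_share)       # the estimated "elasticity" tracks the share
```

    With small cross-sectional variation in factor prices, the two printed numbers are close and the two slope coefficients sum to roughly one, exactly as the abstract predicts.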

  • Working Paper No. 1006 | April 2022
    This paper argues that the 40-year-old Feldstein-Horioka “puzzle” (i.e., that in a regression of the domestic investment rate on the domestic saving rate, the estimated coefficient is significantly larger than what would be expected in a world characterized by high capital mobility) should have never been labeled as such. First, we show that the investment and saving series typically used in empirical exercises to test the Feldstein-Horioka thesis are not appropriate for testing capital mobility. Second, and complementary to the first point, we show that the Feldstein-Horioka regression is not a model in the econometric sense, i.e., an equation with a proper error term (a random variable). The reason is that by adding the capital account to their regression, one gets the accounting identity that relates the capital account, domestic investment, and domestic saving. This implies that the estimate of the coefficient of the saving rate in the Feldstein-Horioka regression can be thought of as a biased estimate of the same coefficient in the accounting identity, where it has a value of one. Since the omitted variable is known, we call it “pseudo bias.” Given that this (pseudo) bias is known to be negative and less than one in absolute terms, it should come as no surprise that the Feldstein-Horioka regression yields a coefficient between zero and one.
    Authors: Jesus Felipe, Scott Fullwiler, and Al-Habbyel Yusoph
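
    The pseudo-bias argument can be verified in a few lines of simulation. The series below are invented, not actual national accounts: investment, saving, and the capital account are tied by the identity I = S + KA, so the Feldstein-Horioka slope equals 1 plus a known, computable term.

```python
import numpy as np

# Sketch of the "pseudo bias" point with invented country data: I, S, and the
# capital account KA satisfy the identity I = S + KA, so regressing I on S
# while omitting KA gives a slope of exactly 1 + cov(KA, S)/var(S), which lies
# between 0 and 1 whenever KA leans against S -- no capital-mobility puzzle.
rng = np.random.default_rng(7)
n = 200
s = rng.normal(0.22, 0.04, size=n)                    # saving rates (assumed)
ka = -0.5 * (s - s.mean()) + rng.normal(0, 0.01, n)   # KA moves against saving
i = s + ka                                            # the accounting identity

slope = np.cov(i, s)[0, 1] / np.var(s, ddof=1)         # Feldstein-Horioka slope
pseudo_bias = np.cov(ka, s)[0, 1] / np.var(s, ddof=1)  # known "omitted" term
print(slope, 1 + pseudo_bias)                          # equal by construction
```

    Because the omitted variable is known, the "bias" is fully determined by the data, which is why the paper calls it a pseudo bias.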

  • Working Paper No. 1004 | March 2022
    A Theoretical Framework
    Liabilities denominated in foreign currency have established a permanent role on emerging market firms’ balance sheets, which implies that changes both in global liquidity conditions and in the value of the currency may have a long-lasting effect on them. In order to consider the financial conditions that may encourage (or discourage) structural change in a small, open economy, we adopt the framework put forward by the “monetary theory of distribution” (MTD). More specifically, we follow the formulation adopted by Dvoskin and Feldman (2019), whereby the financial system is understood as a basic sector that promotes innovation (Schumpeter 1911). Accordingly, financial conditions are binding only for the innovative entrepreneurs, whose methods of production are not yet dominant and who therefore need to borrow from banks to kick-start their production. Through this device, our model offers an explanation of the technological lock-in experienced by a small, open economy that takes international prices as given.

  • Working Paper No. 1001 | February 2022
    This paper estimates the distribution-led regime of the US economy for the period 1947–2019. We use a time varying parameter model, which allows for changes in the regime over time. To the best of our knowledge this is the first paper that has attempted to do this. This innovation is important, because there is no reason to expect that the regime of the US economy (or any economy for that matter) remains constant over time. On the contrary, there are significant reasons that point to changes in the regime. We find that the US economy became more profit-led in the first postwar decades until the 1970s and has become less profit-led since; it is slightly wage-led over the last fifteen years.
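
    As a crude, illustrative stand-in for the paper's time-varying parameter model (simulated series, rolling OLS instead of a state-space estimator), the sketch below shows how a drifting demand-regime coefficient can be tracked over time.

```python
import numpy as np

# Crude stand-in for a time-varying parameter model, on simulated data: the
# response of a growth proxy to a (demeaned) profit-share proxy drifts from
# positive to negative, and rolling-window OLS tracks the drift.
rng = np.random.default_rng(1)
T = 300
x = rng.normal(0, 1, T)                  # demeaned profit-share proxy
b_true = np.linspace(0.5, -0.3, T)       # regime drifts from profit- to wage-led
y = b_true * x + rng.normal(0, 0.1, T)   # growth proxy

window = 60
b_hat = []
for t in range(T - window):
    xs, ys = x[t:t + window], y[t:t + window]
    b_hat.append(np.sum(xs * ys) / np.sum(xs ** 2))  # no-intercept OLS slope
b_hat = np.array(b_hat)
print(b_hat[0], b_hat[-1])  # early windows profit-led, late windows wage-led
```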

  • Working Paper No. 998 | January 2022
    A Critique of Aggregate Indicators
    Economic analysts have used trends in total factor productivity (TFP) to evaluate the effectiveness with which economies are utilizing advances in technology. However, this measure is problematic on several different dimensions. First, the idea that it is possible to separate out the relative contribution to economic output of labor, capital, and technology requires ignoring their complex interdependence in actual production. Second, since TFP growth has declined in recent decades in all of the developed market societies, there is good reason to believe that the decline is an artifact of the slower rates of economic growth that are linked to austerity policies. Third, reliance on TFP assumes that measures of gross domestic product are accurately capturing changes in economic output, even as the portion of the labor force producing tangible goods has declined substantially. Finally, there are other indicators that suggest that current rates of technological progress might be as strong or stronger than in earlier decades.

  • Working Paper No. 994 | October 2021
    Biased Coefficients and Endogenous Regressors, or a Case of Collective Amnesia?
    The possible endogeneity of labor and capital in production functions, and the consequent bias of the estimated elasticities, has been discussed and addressed in the literature in different ways since the 1940s. This paper revisits an argument first outlined in the 1950s, which questioned production function estimations. This argument is that output, capital, and employment are linked through a distribution accounting identity, a key point that the recent literature has overlooked. This identity can be rewritten as a form that resembles a production function (Cobb-Douglas, CES, translog). We show that this happens because the data used in empirical exercises are value (monetary) data, not physical quantities. The argument has clear predictions about the size of the factor elasticities and about what is commonly interpreted as the bias of the estimated elasticities. To test these predictions, we estimate a typical Cobb-Douglas function using five estimators and show that: (i) the identity is responsible for the fact that the elasticities must be the factor shares; (ii) the bias of the estimated elasticities (i.e., departure from the factor shares) is, in reality, caused by the omission of a term in the identity. However, unlike in the standard omitted-variable bias problem, here the omitted term is known; and (iii) the estimation method is a second-order issue. Estimation methods that theoretically deal with endogeneity, including the most recent ones, cannot solve this problem. We conclude that the use of monetary values rather than physical data poses an insoluble problem for the estimation of production functions. This is, consequently, far more serious than any supposed endogeneity problems.

  • Working Paper No. 993 | September 2021
    Theory and Empirics
    This paper provides a theoretical and empirical reassessment of supermultiplier theory. First, we show that, as a result of the passive role it assigns to investment, the Sraffian supermultiplier (SSM) predicts that the rate of utilization leads the investment share in a dampened cycle or, equivalently, that a convergent cyclical motion in the utilization-investment share plane would be counterclockwise. Second, impulse response functions from standard recursive vector autoregressions (VAR) for postwar US samples strongly indicate that the investment share leads the rate of utilization, or that these cycles are clockwise. These results raise questions about the key mechanism underlying supermultiplier theory.
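
    The direction-of-rotation question reduces to asking which series leads. A toy cross-correlation check on synthetic sine-wave cycles (a caricature of the paper's VAR evidence, with invented series) makes the point:

```python
import numpy as np

# Toy illustration of the clockwise-vs-counterclockwise test: in a
# (utilization, investment-share) cycle, the direction of rotation is set by
# which series leads. Both series below are synthetic sine waves.
t = np.arange(400)
period, lead = 40, 5
invest = np.sin(2 * np.pi * (t + lead) / period)  # investment share leads ...
util = np.sin(2 * np.pi * t / period)             # ... utilization by 5 periods

def best_lag(x, y, max_lag=10):
    """Lag k > 0 means y leads x by k periods (max cross-correlation)."""
    def corr(k):
        a = (x[k:], y[:len(y) - k]) if k >= 0 else (x[:len(x) + k], y[-k:])
        return np.corrcoef(a[0], a[1])[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr)

print(best_lag(util, invest))  # -> 5: investment leads, i.e., a clockwise cycle
```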

  • Working Paper No. 989 | June 2021
    The paper provides an empirical discussion of the national emergency utilization rate (NEUR), which is based on a “national emergency” definition of potential output and is published by the US Census Bureau. Over the peak-to-peak period 1989–2019, the NEUR decreased by 14.2 percent. The paper examines the trajectory of potential determinants of capacity utilization over the same period as specified in the related theory, namely: capital intensity, relative prices of labor and capital, shift differentials, rhythmic variations in demand, industry concentration, and aggregate demand. It shows that most of them have moved in a direction that would lead to an increase in utilization. The main factor that can explain the decrease in the NEUR is aggregate demand, while the increase in industry concentration might have also played a small role.

  • Working Paper No. 986 | March 2021
    Evolution and Contemporary Relevance
    This paper traces the evolution of John Maynard Keynes’s theory of the business cycle from his early writings in 1913 to his policy prescriptions for the control of fluctuations in the early 1940s. The paper identifies six different “theories” of business fluctuations. Although the theoretical frameworks changed across this 30-year span, the driver of fluctuations—namely, cyclical changes in expectations about future returns—remained substantially the same. The banking system also played a pivotal role throughout the different versions, financing and influencing the behavior of return expectations. Four major changes mark the evolution of Keynes’s business cycle theories: a) the saving–investment framework used to understand economic fluctuations; b) the capabilities of the banking system to moderate the business cycle; c) the effectiveness of monetary policy in fine-tuning the business cycle through control of the short-term interest rate or credit conditions; and d) the role of a comprehensive fiscal and investment policy in attenuating fluctuations. Finally, some conclusions are drawn about the present relevance of the policy mix Keynes promoted for ensuring macroeconomic stability.

  • Working Paper No. 974 | October 2020
    Financial Instability and Crises in Keynes’s Monetary Thought
    This paper revisits Keynes’s writings from Indian Currency and Finance (1913) to The General Theory (1936) with a focus on financial instability. The analysis reveals Keynes’s astute concerns about the stability/fragility of the banking system, especially under deflationary conditions. Keynes’s writings during the Great Depression uncover insights into how the Great Depression may have informed his General Theory. In exploring the connection between the experience of the Great Depression and the theoretical framework Keynes presents in The General Theory, the assumption of a constant money stock featured in that work proves central. The analysis underscores the case that The General Theory is not a special case of the (neo-)classical theory that is relevant only to “depression economics”—refuting the interpretation offered by J. R. Hicks (1937) in his seminal paper “Mr. Keynes and the Classics: A Suggested Interpretation.” As a scholar of the Great Depression and Federal Reserve chairman at the time of the modern crisis, Ben Bernanke provides an important intellectual bridge between the historical crisis of the 1930s and the modern crisis of 2007–9. The paper concludes that, while policy practice has changed, the “classical” theory Keynes attacked in 1936 remains hegemonic today. The common (mis-)interpretation of The General Theory as depression economics continues to describe the mainstream’s failure to engage in relevant monetary economics.

  • Working Paper No. 957 | June 2020
    The Long Period Method, Technical Change, and Gender
    This paper presents a critique of Karl Marx’s labor theory of value and his theory of falling profit rates from an intersectional political economy perspective. Specifically, I rely on social reproduction theory to propose that Marx-biased technical change disrupts the social order and leads to competition between workers. The bargaining power of workers cannot be dissociated from class struggle within the working class. I argue that technical change increases social conflict, which can counterbalance the long-run tendency of the profit rate to fall. The conclusion is that class struggle is multilayered and endogenous to the process of accumulation.

  • Working Paper No. 953 | April 2020
    Some Empirical Issues
    The paper makes three contributions. First, following up on Nikiforos (2016), it provides an in-depth examination of the Federal Reserve measure of capacity utilization and shows that it is closer to a cyclical indicator than to a measure of long-run variations in normal utilization. Other measures, such as the average workweek of capital or the national emergency utilization rate, are more appropriate for examining long-run changes in utilization. Second, and related to that, it argues that a relatively stationary measure of utilization is not consistent with any theory of the determination of utilization. Third, based on data on the lifetime of fixed assets, it shows that for the issues around the “utilization controversy” the long run is a period of thirty years or more. This makes it a Platonic Idea for some economic problems.

  • Working Paper No. 952 | April 2020
    Some Theoretical Issues
    This paper discusses some issues related to the triangle between capital accumulation, distribution, and capacity utilization. First, it explains why utilization is a crucial variable for the various theories of growth and distribution—more precisely, with regard to their ability to combine an autonomous role for demand (along Keynesian lines) with an institutionally determined distribution (along classical lines). Second, it responds to some recent criticism by Girardi and Pariboni (2019). I explain that their interpretation of the model in Nikiforos (2013) is misguided and that the results of the model can be extended to the case of a monopolist. Third, it provides some concrete examples of why demand is a determinant of the long-run rate of utilization of capital. Finally, it argues that when it comes to the normal rate of utilization, it is the expected growth rate of demand that matters, not the level of demand.

  • Working Paper No. 949 | February 2020
    This paper extends the empirical stock-flow consistent (SFC) literature through the introduction of distributional features and labor market institutions in a Godley-type empirical SFC model. In particular, labor market institutions, such as the minimum wage and the collective bargaining coverage rate, are considered as determinants of the wage share and, in turn, of the distribution of national income. Thereby, the model is able to examine both the medium-term stability conditions of the economy via the evolution of the sectoral financial balances and the implications of functional income distribution on the growth prospects of the economy at hand. The model is then applied to the Greek economy. The empirical results indicate that the Greek economy has a significant structural competitiveness deficit, while the institutional regime is likely debt-led. The policies implemented in the context of the economic adjustment programs were highly inappropriate, triggering private sector insolvency. A minimum wage increase is projected to have a positive impact on output growth and employment. However, policies that would enhance the productive sector’s structural competitiveness are required in order to ensure the growth prospects of the Greek economy.

  • Working Paper No. 946 | February 2020
    A Comment on Autor and Salomons
    We show that Autor and Salomons’ (2017, 2018) analysis of the impact of technical progress on employment growth is problematic. When they use labor productivity growth as a proxy for technical progress, their regressions are quasi-accounting identities that omit one variable of the identity. Consequently, the coefficient of labor productivity growth suffers from omitted-variable bias, where the omitted variable is known. The use of total factor productivity (TFP) growth as a proxy for technical progress does not solve the problem. Contrary to what the profession has argued for decades, we show that this variable is not a measure of technical progress. This is because TFP growth derived residually from a production function, together with the conditions for producer equilibrium, can also be derived from an accounting identity without any assumption. We interpret TFP growth as a measure of distributional changes. This identity also indicates that Autor and Salomons’ estimates of TFP growth’s impact on employment growth are biased due to the omission of the other variables in the identity. Overall, we conclude that their work does not shed light on the question they address.
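
    The identity argument is easy to reproduce numerically. In the sketch below (all series invented), value added is built directly from V = wL + rK; the conventionally computed Solow residual then coincides, up to discrete-approximation error, with the share-weighted growth of factor prices, i.e., with distributional change rather than technical progress.

```python
import numpy as np

# Sketch of the identity argument: generate value added directly from
# V = w*L + r*K (no technology anywhere), compute the Solow residual the usual
# way, and it reproduces the share-weighted growth of factor prices -- a
# measure of distributional change, not technical progress. Series invented.
rng = np.random.default_rng(3)
T = 50
gL, gK, gw, gr = [rng.normal(0.01, 0.005, T) for _ in range(4)]
L, K = np.exp(np.cumsum(gL)), np.exp(np.cumsum(gK))
w, r = 2 * np.exp(np.cumsum(gw)), np.exp(np.cumsum(gr))
V = w * L + r * K                  # the accounting identity

a = w * L / V                      # labor share each period
abar = 0.5 * (a[1:] + a[:-1])      # Tornqvist-averaged shares

solow_residual = np.diff(np.log(V)) - abar * np.diff(np.log(L)) \
    - (1 - abar) * np.diff(np.log(K))
distribution_term = abar * np.diff(np.log(w)) + (1 - abar) * np.diff(np.log(r))
print(np.max(np.abs(solow_residual - distribution_term)))  # ~0 by the identity
```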

  • Working Paper No. 943 | January 2020
    Whether China’s low fertility rate is the consequence of the country’s strict population control policy is a puzzling question. This paper attempts to disentangle the Chinese population control policy’s impacts on the fertility rate from socioeconomic factors using the synthetic control method proposed by Abadie and Gardeazabal (2003). The results indicate that the population control policy significantly decreased China’s birth rate after the “Later, Longer, and Fewer” policy came into force, but had little effect on the birth rate in the long run. We estimate that between 164.2 million and 268.3 million prevented births from 1971 to 2016 can be attributed to the Chinese population control policy. In addition, we implement a placebo study to check the validity of the method and confirm the robustness of the paper’s conclusions.
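
    As a toy illustration of the synthetic control logic (not the paper's estimation), the sketch below matches a treated unit's pre-treatment birth-rate path with a convex combination of two invented donor paths and reads the policy effect off the post-treatment gap; the actual method optimizes weights over a full donor pool of countries.

```python
import numpy as np

# Minimal two-donor sketch of the synthetic control idea of Abadie and
# Gardeazabal (2003); all series are invented. The treated unit's
# pre-treatment path is matched by a convex combination of donors, and the
# post-treatment gap between actual and synthetic estimates the effect.
rng = np.random.default_rng(9)
pre, post = 20, 10
c1 = 30 + np.cumsum(rng.normal(0, 0.3, pre + post))  # donor birth-rate path 1
c2 = 40 + np.cumsum(rng.normal(0, 0.3, pre + post))  # donor birth-rate path 2
treated = 0.3 * c1 + 0.7 * c2      # treated unit tracks donors pre-policy
treated[pre:] -= 5.0               # the "policy" lowers the rate by 5 points

# Closed-form weight on c1 minimizing pre-treatment distance, clipped to [0, 1].
d = c1[:pre] - c2[:pre]
w = np.clip(np.dot(treated[:pre] - c2[:pre], d) / np.dot(d, d), 0.0, 1.0)

synthetic = w * c1 + (1 - w) * c2                  # counterfactual path
effect = np.mean(treated[pre:] - synthetic[pre:])  # recovers the -5 shift
print(w, effect)
```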

  • Working Paper No. 942 | January 2020
    This paper emphasizes the need for understanding the interdependencies between the real and financial sides of the economy in macroeconomic models. While the real side of the economy is generally well explained in macroeconomic models, the financial side and its interaction with the real economy remains poorly understood. This paper makes an attempt to model the interdependencies between the real and financial sides of the economy in Denmark while adopting a stock-flow consistent approach. The model is estimated using Danish data for the period 1995–2016. The model is simulated to create a baseline scenario for the period 2017–30, against which the effects of two standard shocks (fiscal shocks and interest rate shocks) are analyzed. Overall, our model is able to replicate the stylized facts, as will be discussed. While the model structure is fairly simple due to different constraints, the use of the stock-flow approach makes it possible to explain several transmission mechanisms through which real economic behavior can affect the balance sheets, and at the same time capture the feedback effects from the balance sheets to the real economy. Finally, we discuss certain limitations of our model.

  • Working Paper No. 940 | November 2019
    A Rejoinder and Some Comments
    The critique by Gahn and González (2019) of the conclusions in Nikiforos (2016) regarding what data should be used to evaluate whether capacity utilization is endogenous to demand is weak for the following reasons: (i) The Federal Reserve Board (FRB) measure of utilization is not appropriate for measuring long-run variations of utilization because of the method and purpose of its construction. Even if its difference from the measures of the average workweek of capital (AWW) were trivial, this would still be the case; if anything, it would show that the AWW is also an inappropriate measure. (ii) Gahn and González choose to ignore the longest available estimate of the AWW, produced by Foss, which has a clear long-run trend. (iii) Their econometric results are not robust to more suitable specifications of the unit root tests. Under these specifications, the tests overwhelmingly fail to reject the unit root hypothesis. (iv) Other estimates of the AWW, which were not included in Nikiforos (2016), confirm these conclusions. (v) For the comparison between the AWW series and the FRB series, they construct variables that are not meaningful because they subtract series in different units. When the comparison is done correctly, the results confirm that the difference between the AWW series and the FRB series has a unit root. (vi) A stationary utilization rate is not consistent with any theory of the determination of capacity utilization. Even if demand did not play a role, there is no reason to expect that all the other factors that determine utilization would change in a fashion that would keep utilization constant.

  • Working Paper No. 907 | May 2018
    The paper discusses the Sraffian supermultiplier (SSM) approach to growth and distribution. It makes five points. First, in the short run the role of autonomous expenditure can be appreciated within a standard post-Keynesian framework (Kaleckian, Kaldorian, Robinsonian, etc.). Second, and related to the first, the SSM model is a model of the long run and has to be evaluated as such. Third, in the long run, one way that capacity adjusts to demand is through an endogenous adjustment of the rate of utilization. Fourth, the SSM model is a peculiar way to reach what Garegnani called the “Second Keynesian Position.” Although it respects the letter of the “Keynesian hypothesis,” it makes investment quasi-endogenous and subjects it to the growth of autonomous expenditure. Fifth, in the long run it is unlikely that “autonomous expenditure” is really autonomous. From a stock-flow consistent point of view, this implies unrealistic adjustments after periods of changes in stock-flow ratios. Moreover, if we were to take this kind of adjustment at face value, there would be no space for Minskyan financial cycles. This also creates serious problems for the empirical validation of the model.

  • Working Paper No. 905 | May 2018
    The Vested Interests, Limits to Reform, and the Meaning of Liberal Democracy
    I subject some aspects of Roosevelt’s “New Deal” to critical analysis, with particular attention to what is termed “liberal democracy.” This analysis demonstrates the limits to reform, given the power of “vested interests” as articulated by Thorstein Veblen.
     
    While progressive economists and others are generally favorably disposed toward the New Deal, a critical perspective casts doubt on the progressive nature of the various programs instituted during the Roosevelt administrations. The main constraint that limited the framing and operation of these programs was that of maintaining liberal democracy. The New Deal was shaped by the institutional forces then dominant in the United States, including the segregationist system of the South. In the end, vested interests dictated what transpired, but what did transpire required a modification of the understanding of liberal democracy.

  • Book Series | March 2018
    Edited by Marcella Corsi, Jan Kregel, and Carlo D’Ippoliti
    Edited by Marcella Corsi, Sapienza University of Rome, Levy Institute Director of Research Jan Kregel, and Carlo D’Ippoliti, Sapienza University of Rome, this new collection of 16 essays is dedicated to Alessandro Roncaglia and deals with the themes that “have characterized his work or represent expressions of his personality, his interests and method,” particularly his contributions to the interpretation of classical political economists as a means for informing present-day policy.

    Published by: Anthem Press

  • Working Paper No. 879 | December 2016

    This paper presents a methodological discussion of two recent “endogeneity” critiques of the Kaleckian model and the concept of distribution-led growth. From a neo-Keynesian perspective, and following Kaldor (1955) and Robinson (1956), the model is criticized because it treats distribution as quasi-exogenous, while in Skott (2016) distribution is viewed as endogenously determined by a series of (exogenous) institutional factors and social norms, and therefore one should focus on these instead of the functional distribution of income per se. The paper discusses how abstraction is used in science and economics, and employs the criteria proposed by Lawson (1989) for what constitutes an appropriate abstraction. Based on this discussion, it concludes that the criticisms are not valid, although the issues raised by Skott provide some interesting directions for future work within the Kaleckian framework.

  • Working Paper No. 857 | December 2015

    This paper describes the transformations in federal classification of ethno-racial information since the civil rights era of the 1960s. These changes were introduced in the censuses of 1980 and 2000, and we anticipate another major change in the 2020 Census. The most important changes in 1980 introduced the Hispanic Origin and Ancestry questions and the elimination of two questions on parental birthplace. The latter decision has made it impossible to adequately track the progress of the new second generation. The change in 2000 allowed respondents to declare origins in more than one race; the anticipated change for 2020 will create a single question covering race and Hispanic Origin—or, stated more broadly, race and ethnic origin. We show that the 1980 changes created problems in race and ethnic classification that required a “fix,” and the transformation anticipated for 2020 will be that fix. Creating the unified question in the manner the Census Bureau is testing will accomplish by far the hardest part of what we believe should be done. However, we suggest two additional changes of a much simpler nature: restoring the parental birthplace questions (to the annual American Community Survey) and possibly eliminating the Ancestry question (the information it gathered will apparently now be obtained in the single race-and-ethnicity question). The paper is historical in focus. It surveys how the classification system prior to 1980 dealt with the tension between ethno-racial continuity and assimilation (differently for each major type of group); how the political pressures producing the changes of 1980 and 2000 changed the treatment of that tension; and, finally, the building pressure for a further change.

  • Working Paper No. 854 | November 2015
    Graph Theory and Macroeconomic Regimes in Stock-flow Consistent Modeling

    Standard presentations of stock-flow consistent modeling use specific Post Keynesian closures, even though a given stock-flow accounting structure supports various different economic dynamics. In this paper we separate the dynamic closure from the accounting constraints and cast the latter in the language of graph theory. The graph formulation provides (1) a representation of an economy as a collection of cash flows on a network and (2) a collection of algebraic techniques to identify independent versus dependent cash-flow variables and solve the accounting constraints. The separation into independent and dependent variables is not unique, and we argue that each such separation can be interpreted as an institutional structure or policy regime. Questions about macroeconomic regime change can thus be addressed within this framework.

    We illustrate the graph tools through application of the simple stock-flow consistent model, or “SIM model,” found in Godley and Lavoie (2007). In this model there are eight different possible dynamic closures of the same underlying accounting structure. We classify the possible closures and discuss three of them in detail: the “standard” Godley–Lavoie closure, where government spending is the key policy lever; an “austerity” regime, where government spending adjusts to taxes that depend on private sector decisions; and a “colonial” regime, which is driven by taxation.
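
    The “standard” Godley–Lavoie closure of the SIM model can be reproduced in a few lines. The sketch below uses illustrative parameter values (α1 = 0.6, α2 = 0.4, θ = 0.2, G = 20) and shows only the dynamics, not the paper's graph-theoretic classification; its steady-state income is G/θ.

```python
# A minimal rendition of the Godley-Lavoie SIM model under its "standard"
# closure (government spending G exogenous, taxes at rate theta). Parameter
# values are illustrative. Steady-state income is G/theta, here 20/0.2 = 100.
alpha1, alpha2 = 0.6, 0.4   # propensities to consume out of income and wealth
theta, G = 0.2, 20.0        # tax rate and exogenous government spending
H = 0.0                     # opening stock of household wealth (money)

for _ in range(200):
    # Solve the within-period system Y = C + G, C = a1*(1-theta)*Y + a2*H.
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    YD = (1 - theta) * Y            # disposable income
    C = alpha1 * YD + alpha2 * H    # consumption
    H = H + YD - C                  # wealth accumulates the household surplus

print(Y)  # converges to the steady state G/theta = 100
```

    Swapping which variables are treated as exogenous in the within-period solve yields the other closures the paper classifies, such as the “austerity” and “colonial” regimes.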

  • Working Paper No. 846 | October 2015
    Steindl after Summers

    The current debate on secular stagnation suffers from some vagueness and several shortcomings, and the same is true of its economic policy implications. We therefore provide an alternative view on stagnation tendencies based on Josef Steindl’s contributions. In particular, Steindl (1952) can be viewed as a pioneering work on stagnation in modern capitalism. We hold that this work is not prone to the problems detected in the current debate on secular stagnation: It does not rely on the dubious notion of an equilibrium real interest rate as the force that, in principle, equilibrates saving and investment at full employment, with the adjustment process currently blocked by the infeasibility of a very low or even negative equilibrium rate. It is based on the notion that modern capitalist economies face aggregate demand constraints, and that saving adjusts to investment through income growth and changes in capacity utilization in the long run. It allows for potential growth to become endogenous to actual demand-driven growth. And it seriously considers the role of institutions and power relationships in long-run growth—and in stagnation.

  • Working Paper No. 841 | July 2015

    Marx’s theory of money is critiqued relative to the advent of fiat and electronic currencies and the development of financial markets. Specific topics of concern include (1) today’s identity of the money commodity, (2) possible heterogeneity of the money commodity, (3) the categories of land and rent as they pertain to the financial economy, (4) valuation of derivative securities, and (5) strategies for modeling, predicting, and controlling production and exchange of the money commodity and their interface with the real economy.

  • Working Paper No. 836 | April 2015

    This paper evaluates the presence of heterogeneity, by household type, in the elasticity of substitution between food expenditures and time and in the goods intensity parameter in the household food and eating production functions. We use a synthetic dataset constructed by statistically matching the American Time Use Survey and the Consumer Expenditure Survey. We establish the presence of heterogeneity in the elasticity of substitution and in the intensity parameter. We find that the elasticity of substitution is low for all household types.

  • Working Paper No. 824 | January 2015
    A New Framework for Envisioning and Evaluating a Mission-oriented Public Sector

    Today, countries around the world are seeking “smart” innovation-led growth, and hoping that this growth is also more “inclusive” and “sustainable” than in the past. This paper argues that such a feat requires rethinking the role of government and public policy in the economy—not only funding the “rate” of innovation, but also envisioning its “direction.” It requires a new justification of government intervention that goes beyond the usual one of “fixing market failures.” It also requires the shaping and creating of markets. And to render such growth more “inclusive,” it requires attention to the ensuing distribution of “risks and rewards.”

    To approach the innovation challenge of the future, we must redirect the discussion away from worries about “picking winners” and “crowding out” and toward four key questions for the future:

    1. Directions: how can public policy be understood in terms of setting the direction and route of change; that is, shaping and creating markets rather than just fixing them? What can be learned from the ways in which directions were set in the past, and how can we stimulate more democratic debate about such directionality?
    2. Evaluation: how can an alternative conceptualization of the role of the public sector in the economy (alternative to market failure theory, or MFT) translate into new indicators and assessment tools for evaluating public policies beyond microeconomic cost/benefit analysis? How does this alter the crowding in/out narrative?
    3. Organizational change: how should public organizations be structured so they accommodate the risk-taking and explorative capacity, and the capabilities needed to envision and manage contemporary challenges?
    4. Risks and Rewards: how can this alternative conceptualization be implemented to frame investment tools so that they not only socialize risk, but also have the potential to socialize the rewards that enable “smart growth” to also be “inclusive growth”?

  • Working Paper No. 823 | December 2014
    A Sympathetic Critique

    This paper starts with a review of the literature about National Systems of Innovation (NSI), linking the origin of the concept to the evolutionary theory of the firm and innovation. Our first point reviews the flaws of the NSI concept by looking at the pioneering works of Chris Freeman, Bengt-Åke Lundvall, and Richard Nelson. These authors’ definitions of NSI contain some striking aspects: (1) the definitions are so broad that they can encompass almost everything; (2) although all definitions share the central role played by institutions, the state and its policy are not explicitly mentioned; and (3) it is not clear whether the NSI concept is a descriptive or a normative tool. The second point we would like to make is that, when the role of the financial system was finally recognized by evolutionary traditions, it was simply added as a “new” element within the NSI. The main aim became one of including the financial system within the NSI and looking for the “right” financial system for the “right” type of innovation. After addressing the weaknesses of the conceptualization of the state within the NSI and the difficulty evolutionary theory has in understanding the financialization of the economy, our third and last point refers to a new way to view innovations. As Mariana Mazzucato shows, the state has always been a fundamental, though indirect, actor in the development of certain innovations in certain sectors. Yet this is not enough, especially in a period of crisis. The state should direct innovative activities toward more basic and social needs, thus becoming an “innovator of first resort.”

  • Working Paper No. 811 | July 2014
    An Evaluation Using the Maximum Entropy Bootstrap Method

    This paper challenges two clichés that have dominated the macroeconometric debates in India. One relates to the neoclassical view that deficits are detrimental to growth, as they increase the rate of interest and in turn displace the interest-rate-sensitive components of private investment. The second relates to the assumption of “stationarity”—which has long dominated statistical inference in time-series econometrics—and to the emphasis on unit root–type testing, which involves detrending, or differencing, the series to achieve stationarity in time-series econometric models. The paper examines the determinants of rates of interest in India for the period 1980–81 through 2011–12, using the maximum entropy bootstrap (Meboot) methodology proposed in Vinod 1985 and 2004 (and developed extensively in Vinod 2006, Vinod and Lopez-de-Lacalle 2009, and Vinod 2010 and 2013). The practical appeal of Meboot is that it does not require pretests, such as structural change and unit root–type testing, which involve detrending the series to achieve stationarity—itself problematic for evolutionary short time series. It also handles situations where stationarity assumptions are difficult to verify—for instance, mixtures of I(0) and nonstationary I(d) series, where the order of integration can differ across series.

    What makes Meboot compelling for Indian data on interest rates? Prior to interest rate deregulation in 1992, studies analyzing the determinants of interest rates were rare in India; the oft-cited reason is the analytical and econometric difficulty of conducting meaningful time-series analysis on nonvarying administered rates. Using high-frequency data, the existing attempts have focused on the recent financially deregulated interest rate regime to establish possible links between interest rates and macroeconomic variables (Chakraborty 2002 and 2012, Dua and Pandit 2002, and Goyal 2004). The results from the Meboot analysis revealed that, contrary to popular belief, the fiscal deficit is not significant for interest rate determination in India. This is in alignment with the existing empirical findings, which established that the interest rate is affected by changes in the reserve currency, expected inflation, and volatility in capital flows, but not by the fiscal deficit. This result has significant policy implications for interest rate determination in India, especially since the central bank has cited the high fiscal deficit as one of the prime constraints on flexibility in fixing the rates.
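
    The core of the maximum entropy bootstrap can be sketched compactly. The Python function below is an illustrative reconstruction, not the published routine (R's meboot package is the reference implementation); it omits the mean-preserving and tail refinements but shows the key idea: resample from a piecewise-uniform maximum entropy density built on the order statistics, then restore the original time ranks so the dependence structure survives without any detrending.

```python
import numpy as np

def meboot_replicate(x, rng):
    """One maximum entropy bootstrap replicate (simplified sketch)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    order = np.argsort(x)            # time positions ranked by value
    xs = x[order]                    # order statistics of the series
    # interval limits: midpoints between successive order statistics,
    # extended at both ends by the mean absolute successive difference
    m = np.mean(np.abs(np.diff(xs)))
    z = np.concatenate(([xs[0] - m], (xs[:-1] + xs[1:]) / 2, [xs[-1] + m]))
    # sorted uniforms mapped through the piecewise-linear maximum entropy
    # quantile function (each of the T intervals carries mass 1/T)
    u = np.sort(rng.uniform(size=T))
    q = np.interp(u, np.linspace(0.0, 1.0, T + 1), z)
    # place the new order statistics back at the original time ranks,
    # preserving the rank (hence dependence) structure of the series
    rep = np.empty(T)
    rep[order] = q
    return rep

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200)) + 50.0   # nonstationary toy series
rep = meboot_replicate(x, rng)
```

    Because each replicate keeps the original rank ordering over time, the ensemble can be used for inference on a short, evolving series without differencing it first.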

  • Working Paper No. 809 | June 2014

    Work and life satisfaction depend on a number of pecuniary and nonpecuniary factors at the workplace and, in turn, help determine them. We analyze these causal linkages using a structural vector autoregression approach for a sample of the German working populace collected from 1984 to 2008, finding that workplace autonomy plays an important causal role in determining well-being.

  • Working Paper No. 808 | June 2014
    A Quantile Approach

    Unemployment has been robustly shown to strongly decrease subjective well-being (or “happiness”). In the present paper, we use panel quantile regression techniques in order to analyze to what extent the negative impact of unemployment varies along the subjective well-being distribution. In our analysis of British Household Panel Survey data (1996–2008) we find that, over the quantiles of our subjective well-being variable, individuals with high well-being suffer less from becoming unemployed. A similar but stronger effect of unemployment is found for a broad mental well-being variable (GHQ-12). For happy and mentally stable individuals, their higher well-being seems to act like a safety net when they become unemployed. We explore these findings by examining the heterogeneous unemployment effects over the quantiles of satisfaction with various life domains.

  • Working Paper No. 807 | June 2014

    Recent research stresses the macroeconomic dimension of income distribution, but no theory has yet emerged. In this note, we introduce factor shares into popular growth models to gain insights into the macroeconomic effects of income distribution. The cost of modifying existing models is low compared to the benefits. We find, analytically, that (1) the multiplier is equal to the inverse of the labor share and is about 1.4; (2) income distribution matters mostly in the medium run; (3) output is wage-led in the short run, i.e., as long as unemployment persists; and (4) capacity expansion is profit-led in the full-employment long run, but this is temporary and unstable.
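
    Result (1) is easy to check numerically. Taking a conventional ballpark value of the US labor share (roughly 0.7; this figure is an assumption for illustration, not taken from the paper):

```latex
\text{multiplier} \;=\; \frac{1}{\omega} \;\approx\; \frac{1}{0.7} \;\approx\; 1.43,
```

    where the symbol here denotes the labor share, consistent with the "about 1.4" reported above.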

  • Working Paper No. 805 | May 2014
    Measures and Structural Factors

    Economic theory frequently assumes constant factor shares and often treats the topic as secondary. We show that this is a mistake by deriving the first high-frequency measure of the US labor share for the whole economy. We find that the labor share has indeed held remarkably steady, but that this quasi-stability masks a sizable composition effect that is detrimental to labor. The wage component is falling fast, and the stability is achieved by an increasing share of benefits and top incomes. Using NIPA and Piketty-Saez top-income data, we estimate that the US bottom 99 percent labor share has fallen 15 points since 1980. This amounts to a transfer of $1.8 trillion from labor to capital in 2012 alone and brings the US labor share to its 1920s level. The trend is similar in Europe and Japan. The decrease is even larger when the CPI is used instead of the GDP deflator in the calculation of the labor share.

  • Working Paper No. 804 | May 2014
    Empirical Studies

    In this second part of our study we survey the rapidly expanding empirical literature on the determinants of the functional distribution of income. Three major strands emerge: technological change, international trade, and financialization, which appear, for the case of the United States, to be in order of increasing importance. All contribute to the fluctuations of the labor share, and there is a significant amount of self-reinforcement among these factors. We conclude by noting that the falling US wage share cointegrates with rising inequality and a rising top 1 percent income share; all measures of income distribution thus paint the same picture. Liberalization and financialization worsen economic inequality by raising top incomes, unless institutions are strongly redistributive.

    The labor share has also fallen, for structural reasons and for reasons related to economic policy. Such explanations are left to parts III and IV of our study, respectively. Part I investigated the theories of income distribution.

  • Working Paper No. 803 | May 2014
    Theories

    This series of working papers explores a theme enjoying a tremendous resurgence: the functional distribution of income—the division of aggregate income by factor share. This first installment surveys some landmark theories of income distribution. Some provide a technology-based account of the relative shares, while others provide a demand-driven explanation (Keynes, Kalecki, Kaldor, Goodwin). Two questions lead to a better understanding of the literature: Is income distribution assumed constant? And is it endogenous or exogenous? Despite their insights, however, these theories alone fail to fully explain the current deterioration of income distribution.

    Subsequent installments are dedicated to analyzing the empirical literature (part II), to the measurement and composition of the relative shares (part III), and to a study of the role of economic policy (part IV).

  • Working Paper No. 800 | May 2014

    Behavioral economics has shown that individuals sometimes make decisions that are not in their best interests. This insight has prompted calls for behaviorally informed policy interventions popularized under the notion of “libertarian paternalism.” This type of “soft” paternalism aims at helping individuals without reducing their freedom of choice. We highlight three problems of libertarian paternalism: the difficulty of detecting what is in the best interest of an individual, the focus on freedom of choice at the expense of a focus on autonomy, and the neglect of the dynamic effects of libertarian-paternalistic policy interventions. We present a form of soft paternalism called “autonomy-enhancing paternalism” that seeks to constructively remedy these problems. Autonomy-enhancing paternalism suggests using insights from subjective well-being research in order to determine what makes individuals better off. It imposes an additional constraint on the set of permissible interventions highlighting the importance of autonomy in the sense of the capability to make critically reflected (i.e., autonomous) decisions. Finally, it acknowledges that behavioral interventions can change the strength of individual decision-making anomalies over time as well as influence individual preference learning. We illustrate the differences between libertarian paternalism and autonomy-enhancing paternalism in a simple formal model in the context of optimal sin nudges.

  • Working Paper No. 795 | April 2014

    This paper contributes to the debate on income growth and distribution from a nonmainstream perspective. It looks, in particular, at the role that the degree of capacity utilization plays in the process of growth of an economy that is not perfectly competitive. The distinctive feature of the model presented in the paper is the hypothesis that the rate of capital depreciation is an increasing function of the degree of capacity utilization. This hypothesis implies analytical results that differ somewhat from those yielded by other Kaleckian models. Our model shows that, in a number of cases, the process of growth can be profit-led rather than wage-led. The model also determines the value to which the degree of capacity utilization converges in the long run.

  • Working Paper No. 786 | January 2014
    An Assessment from Popper’s Philosophy

    The rational expectations hypothesis (REH) is the standard approach to expectations formation in macroeconomics. We discuss its compatibility with two strands of Karl Popper’s philosophy: his theory of knowledge and learning, and his “rationality principle” (RP). First, we show that the REH is utterly incompatible with the former. Second, we argue that the REH can nevertheless be interpreted as a heuristic device that facilitates economic modeling and, consequently, it may be justified along the same lines as Popper’s RP. We then argue that, our position as to the resolution of this paradox notwithstanding, Popper’s philosophy provides a metatheoretical framework with which we can evaluate the REH. Within this framework, the REH can be viewed as a heuristic device or strategy that fulfils the same function as, for instance, the optimizing assumption. However, we believe that the REH imparts a serious methodological bias, since, by implying that macroeconomic instability is caused exclusively by “exogenous” shocks that randomly hit the economy, it precludes the analysis of any sources of inherent instability caused by the making of (nonrandom) errors by individuals, and hence it favors the creation of an institutional configuration that may be ill suited to address this type of instability.

  • Working Paper No. 782 | December 2013

    In this paper an alternative approach for the estimation of higher-order linear fixed-effects models is described. The strategy relies on transforming the data prior to estimating the model. While the approach is computationally intensive, the hardware requirements for the estimation process are minimal, allowing for the estimation of models with more than two high-order fixed effects for large datasets. An illustration of the implementation is presented using US Census Bureau Current Population Survey data with four fixed effects.
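
    One common transformation of this kind is iterated within-group demeaning (alternating projections). The sketch below is an illustrative stand-in for the paper's procedure, not its actual code: it sweeps two fixed effects out of a toy dataset and recovers the slope by OLS on the transformed data, with no dummy variables ever materialized.

```python
import numpy as np
import pandas as pd

def sweep_fixed_effects(df, cols, fes, tol=1e-10, max_iter=500):
    """Iteratively demean `cols` within each fixed-effect grouping
    until the data stop changing (alternating projections)."""
    work = df[cols].astype(float).copy()
    for _ in range(max_iter):
        before = work.to_numpy().copy()
        for fe in fes:
            work = work - work.groupby(df[fe]).transform("mean")
        if np.abs(work.to_numpy() - before).max() < tol:
            break
    return work

rng = np.random.default_rng(1)
n = 2000
worker = rng.integers(0, 200, n)
firm = rng.integers(0, 50, n)
weff, feff = rng.normal(size=200), rng.normal(size=50)
x = 0.5 * weff[worker] + rng.normal(size=n)   # regressor correlated with FEs
y = 2.0 * x + weff[worker] + feff[firm] + rng.normal(size=n)
d = pd.DataFrame({"y": y, "x": x, "worker": worker, "firm": firm})

w = sweep_fixed_effects(d, ["y", "x"], ["worker", "firm"])
beta = (w["x"] * w["y"]).sum() / (w["x"] ** 2).sum()  # within estimator, ~2
```

    The memory footprint stays at the size of the data itself, which is why the approach scales to several high-dimensional fixed effects where the dummy-variable design matrix would not fit.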

  • Working Paper No. 781 | December 2013

    The paper seeks to lay out a stock-flow-based theoretical framework that provides a foundation for a general theory of pricing. Contemporary marginalist economics is usually based on the assumption that prices are set in line with the value placed on goods by consumers. It does not take into account expectations, or the fact that real goods are often simultaneously assets. Meanwhile, contemporary theories of asset markets are flawed in that they either rely, implicitly or explicitly, on a market equilibrium framework or provide no framework at all. This paper offers a working alternative that relies, not on a market equilibrium framework, but rather on a stock-flow equilibrium framework. In doing so, we lay out a properly general theory of pricing that can be applied to any market—whether financial, real, or a real market that has been financialized—and which does not require that prices inevitably tend toward some prespecified market equilibrium.

  • Working Paper No. 779 | November 2013
    An Application to the Labor Market

    This paper argues that a hierarchy of ideals exists in market interactions that sets the benchmark on the norm of fairness associated with these interactions, thus affecting pricing decisions associated with market exchange. As norms emerge, an ideal determines the criteria of optimal behavior and serves as a basis for market exchange. Norms homogenize the diversity of commodities in market interactions according to a hierarchy of norms and values. The paper then goes on to illustrate how this hierarchy of ideals works in the labor market, leading to inequality of access to jobs and wages between groups of individuals. Groups socially perceived to be diverging from the context-dependent dominant ideal are likely to suffer most in market interactions.

    Author(s): Aurélie Charles

  • Working Paper No. 777 | October 2013

    This paper presents a small macroeconomic model describing the main mechanisms of the process of credit creation by the private banking system. The model is composed of a core unit—where the dynamics of income, credit, and aggregate demand are determined—and a set of sectoral accounts that ensure its stock-flow consistency. In order to grasp the role of credit and banks in the functioning of the economic system, we make an explicit distinction between planned and realized variables, thanks to which, while maintaining the ex-post accounting consistency, we are able to introduce an ex-ante wedge between current aggregate income and planned expenditure. Private banks are the only economic agents capable of filling this gap through the creation of new credit. Through the use of numerical simulation, we discuss the link between credit creation and the expansion of economic activity, also contributing to a recent academic debate on the relation between income, debt, and aggregate demand.
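
    The ex-ante/ex-post wedge can be illustrated with a deliberately minimal loop (a toy of my own devising, far simpler than the paper's model): each period, banks create exactly enough new credit to bridge the gap between planned expenditure and current income, loans create matching deposits, and income converges to the level at which no new credit is needed.

```python
A, c = 20.0, 0.8          # autonomous demand and propensity to spend
Y = 0.0                   # realized income (ex post)
loans, deposits = 0.0, 0.0   # bank balance sheet stocks
path = []
for t in range(200):
    planned = A + c * Y          # ex-ante planned expenditure
    new_credit = planned - Y     # gap banks fill (negative = repayment)
    loans += new_credit
    deposits += new_credit       # every loan creates a matching deposit
    Y = planned                  # income adjusts to financed spending
    path.append(Y)
```

    Income settles at A divided by (1 minus c), here 100, and by construction loans and deposits stay equal in every period, which is the stock-flow consistency the paper enforces through its full sectoral accounts.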

    Author(s): Giovanni Bernardo and Emanuele Campiglio

  • Working Paper No. 775 | September 2013
    The Limits to Neo-Kaleckian Models and a Kaldorian Proposal

    We argue that a fundamental difference between Post-Keynesian approaches to economic growth lies in their treatment of investment. Kaleckian-Robinsonian models postulate an investment function dependent on the accelerator and profitability. Some of these models rely on the importance of profitability, captured by the profit share, to make the case for profit-led growth. For their part, Kaldorian models place the emphasis on the accelerator. More important, investment is a derived demand; that is, it is ruled by the adjustment of capacity to exogenous demand, which, in turn, determines the normal level of capacity utilization.

    In our view, the Kaldorian approach is better equipped to deal with some of the issues relating income distribution to accumulation with effective demand in the long run. We develop a Kaldorian open-economy model to examine the conditions under which an increase in real wages can produce profit or wage-led growth, showing that the limit to a wage-led expansion is a binding external constraint. The role and limitations of wages as a determinant of growth are further examined through spectral techniques and cycle analysis for a subset of developed economies. The evidence indicates that real wages are positively related to growth, investment, and capacity utilization. It also highlights the role of finance in sustaining expansions, suggesting that debt-led growth should not be identified with profit-led growth.

    Author(s): Esteban Pérez Caldentey and Matías Vernengo

  • Working Paper No. 773 | August 2013

    Keynes had many plausible things to say about unemployment and its causes. His “mercurial mind,” though, relied on intuition, which means that he could not strictly prove his hypotheses. This explains why Keynes’s ideas immediately invited bastardizations. One of them, the Phillips curve synthesis, turned out to be fatal. This paper identifies Keynes’s undifferentiated employment function as a sore spot. It is replaced by the structural employment function, which also supersedes the bastard Phillips curve. The paper demonstrates in a formal and rigorous manner why there is no trade-off between price inflation and unemployment.

    Author(s): Egmont Kakarot-Handtke

  • Working Paper No. 770 | July 2013
    An Essay on the Business Cycle

    This paper presents a discussion of the forces at play behind the economic fluctuations in the medium run and their relation with the short-run macroeconomic equilibrium. The business cycle is the result of two separate phenomena. On the one hand, there is the instability caused by the discrepancy between expected and realized outcomes. On the other hand, this instability is contained by the inherent contradictions of capitalism; the upswing carries within it “the seeds of its own destruction.” The same happens with the downswing. The paper provides a formal exposition of these insights, a discussion of how the formulation of this mechanism resembles the simple harmonic motion of classical mechanics, and an empirical evaluation.
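
    The harmonic-motion analogy is easy to simulate. The sketch below (an illustration, not the paper's formal model) integrates the textbook oscillator, in which a restoring force proportional to the output gap pulls it back toward equilibrium, using a semi-implicit Euler step so the cycle neither explodes nor dies out.

```python
import math

omega, dt = 0.5, 0.01            # frequency and step; period = 2*pi/omega
x, v = 1.0, 0.0                  # output gap and its rate of change
steps = round((2 * math.pi / omega) / dt)   # integrate one full cycle
peak = abs(x)
for _ in range(steps):
    v -= omega ** 2 * x * dt     # restoring force pulls the gap back
    x += v * dt                  # semi-implicit Euler keeps the orbit closed
    peak = max(peak, abs(x))
```

    After one full period the gap returns close to its starting value, with amplitude bounded throughout, which is the self-containing oscillation the abstract describes.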

  • Working Paper No. 766 | June 2013

    Should shocks be part of our macro-modeling tool kit—for example, as a way of modeling discontinuities in fiscal policy or big moves in the financial markets? What are shocks, and how can we best put them to use? In heterodox macroeconomics, shocks tend to come in two broad types, with some exceptions for hybrid cases. What I call Type 1 shocks are one-time exogenous changes in parameters or variables. They are used, for example, to set computer simulations in motion or to pose an analytical question about dynamic behavior outside of equilibrium. On the other hand, Type 2 shocks, by construction, occur at regular time intervals, and are usually drawn at random from a probability distribution of some kind. This paper is an appreciation and a survey of shocks and their admittedly scattered uses in the heterodox macro literature, along with some proposals and thoughts about using shocks to improve models. Since shocks of both types might appear at times to be ad hoc when used in macro models, this paper examines possible justifications for using them.
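
    The two types are easy to contrast in a toy AR(1) model (my own minimal illustration): a Type 1 shock is a single exogenous displacement whose aftermath the model traces out deterministically, while Type 2 shocks are fresh random draws arriving every period.

```python
import numpy as np

rng = np.random.default_rng(7)
rho, T = 0.9, 300

# Type 1: a one-time exogenous displacement, then deterministic dynamics
y1 = np.zeros(T)
y1[50] = 1.0                     # the shock knocks y off equilibrium once
for t in range(51, T):
    y1[t] = rho * y1[t - 1]      # the model traces out the return path

# Type 2: random draws from a distribution at every period
y2 = np.zeros(T)
eps = rng.normal(scale=0.1, size=T)
for t in range(1, T):
    y2[t] = rho * y2[t - 1] + eps[t]
```

    The Type 1 path decays back to equilibrium and answers a question about out-of-equilibrium dynamics; the Type 2 path never settles, generating the persistent fluctuations used in stochastic simulations.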

  • Working Paper No. 764 | May 2013

    Recent episodes of housing bubbles, which occurred in several economies after the bursting of the United States housing market bubble, suggest studying the evolution of housing prices from a global perspective. For the purposes of this contribution, we utilize a theoretical model that identifies the main drivers of housing price appreciation—for example, income, residential investment, financial elements, fiscal policy, and demographics. In the second stage of our analysis, we test our theoretical hypotheses on a sample of 18 Organisation for Economic Co-operation and Development (OECD) countries from 1970 to 2011. We employ the vector error correction econometric technique in our empirical analysis, which allows us to model the long-run equilibrium relationship and the short-run dynamics, and also helps account for endogeneity and reverse-causality problems.

    Author(s): Philip Arestis and Ana Rosa González

  • Working Paper No. 755 | February 2013
    Building an Argument for a Shared Society

    This paper presents a review of the literature on the economics of shared societies. As defined by the Club de Madrid, shared societies are societies in which people hold an equal capacity to participate in and benefit from economic, political, and social opportunities regardless of race, ethnicity, religion, language, gender, or other attributes, and where, as a consequence, relationships between the groups are peaceful. Our review centers on four themes around which economic research addresses concepts outlined by the Club de Madrid: the effects of trust and social cohesion on growth and output, the effect of institutions on development, the costs of fractionalization, and research on the policies of social inclusion around the world.

  • Working Paper No. 754 | February 2013

    Do all types of demand have the same effect on output? To answer this question, I estimate a cointegrated vector autoregressive (VAR) model of consumption, investment, and government spending on US data, 1955–2007. I find that: (1) economic growth can be decomposed into a short-run (transitory) cycle gravitating around a long-run (permanent) trend made of consumption shocks and government spending; (2) the estimated fluctuations are investment dominated, they coincide remarkably with the business cycle, and they are highly correlated with capacity utilization in both labor and capital; and (3) the long-run multipliers point to a large induced-investment phenomenon and to a smaller, but still significantly positive, government spending multiplier, around 1.5. The results cover a lot of theoretical ground: Paul Samuelson’s accelerator principle, John Kenneth Galbraith’s stress on consumption and government spending, Jan Tinbergen's investment-driven business cycle, and Robert Eisner’s inquiries on the investment function. The results are particularly useful to distinguish between economic policies for the short and long runs, albeit no attempt is made at this point to inquire into the effectiveness of specific economic policies.

  • Working Paper No. 749 | January 2013
    A Distinctive Feature of the Business Cycle in Latin America and the Caribbean

    Using two standard cycle methodologies (classical and deviation cycle) and a comprehensive sample of 83 countries worldwide, including all developing regions, we show that the Latin American and Caribbean cycle exhibits two distinctive features. First, and most important, its expansions are shorter and, for the most part, less intense than those of the other regions considered, particularly East Asia and the Pacific, whose expansions last five years longer and whose output gains are 50 percent greater. Second, the Latin American and Caribbean region tends to exhibit contractions that are not significantly different from those of other regions in terms of duration and amplitude. Together, these features imply that the complete Latin American and Caribbean cycle has, overall, the shortest duration and smallest amplitude relative to other regions. The specificities of the Latin American and Caribbean cycle are not confined to the short run. They are also reflected in variables linked to long-run growth, such as productivity and investment: East Asia and the Pacific's cumulative gain in labor productivity during the expansionary phase is twice that of Latin America and the Caribbean. Moreover, the evidence shows that the effects of the contraction in public investment surpass those of the expansion, leading to a declining trend over the entire cycle. In this sense, we suggest that policy analysis needs to focus more on the expansionary phase of the cycle. Improving our knowledge of the differences in the expansionary dynamics of countries and regions can further our understanding of the differences in their rates of growth and levels of development. We also suggest that, while the management of the cycle affects the short-run fluctuations of economic activity and therefore volatility, it is not trend neutral. Hence, the effects of aggregate demand management policies may be more persistent over time, and less transitory, than currently thought.
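
    A bare-bones version of classical cycle dating can be sketched in a few lines (a simplified stand-in for full Bry-Boschan style rules, run on stylized data): locate peaks and troughs as local extrema, then measure each expansion's duration and amplitude from trough to the following peak.

```python
import numpy as np

def turning_points(y, window=2):
    """Peaks/troughs as local extrema within +/- `window` periods --
    a bare-bones stand-in for a full classical dating rule."""
    peaks, troughs = [], []
    for t in range(window, len(y) - window):
        seg = y[t - window : t + window + 1]
        if y[t] == seg.max() and y[t] > seg[0] and y[t] > seg[-1]:
            peaks.append(t)
        elif y[t] == seg.min() and y[t] < seg[0] and y[t] < seg[-1]:
            troughs.append(t)
    return peaks, troughs

t = np.arange(120)
y = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 20)   # stylized cycle, period 20
peaks, troughs = turning_points(y)
# expansion = trough to next peak: (duration, amplitude) per expansion
expansions = [(p - tr, y[p] - y[tr]) for tr, p in zip(troughs, peaks[1:])]
```

    Comparing the resulting duration and amplitude statistics across country groups is exactly the kind of cross-regional contrast the paper draws.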

    Author(s): Esteban Pérez Caldentey, Daniel Titelman, and Pablo Carvallo

  • Working Paper No. 748 | January 2013
    Evidence from India

    The effectiveness of public spending remains a relatively elusive empirical issue. This preliminary analysis is an attempt, using benefit incidence methodology, to define the effectiveness of spending at the subnational government level in India’s health sector. The results reveal that the public health system is “seemingly” more equitable in a few states, while regressivity in the pattern of public health-care utilization is observed in others. Both results are to be considered with caution, as the underdeveloped market for private inpatient care in some states might be a factor in the disproportionate crowding-in of inpatients, making the public health-care system simply appear more equitable. However, patients “voting with their feet” and choosing better, private services seems evident only in the higher-income quintiles. Results also suggest that polarization is distinctly evident in the public provisioning of health-care services, though more related to inpatient, rather than ambulatory, services.
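
    The benefit incidence calculation itself is simple arithmetic: divide total public spending by total utilization to get a unit cost, attribute that cost to each income quintile in proportion to its utilization, and compare the resulting shares with population shares. A sketch with stylized numbers (not the paper's Indian data):

```python
# total public outlay and utilization (visits) by income quintile --
# stylized illustrative numbers, not estimates from the paper
spending = 1000.0
visits = {"Q1": 300, "Q2": 260, "Q3": 220, "Q4": 140, "Q5": 80}

unit_cost = spending / sum(visits.values())              # cost per visit
benefit = {q: unit_cost * v for q, v in visits.items()}  # imputed benefit
shares = {q: b / spending for q, b in benefit.items()}
# pro-poor (progressive) if poorer quintiles' shares exceed 0.20
```

    With these numbers the poorest quintile captures 30 percent of the benefit, a progressive pattern; the regressive states the paper describes would show the shares rising, not falling, with income.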

    Author(s): Lekha S. Chakraborty, Yadawendra Singh, and Jannet Farida Jacob

  • Working Paper No. 746 | January 2013
    A Kaleckian Perspective

    This paper examines a major channel through which financialization or finance-dominated capitalism affects macroeconomic performance: the distribution channel. Empirical data on the following dimensions of redistribution in the period of finance-dominated capitalism since the early 1980s are provided for 15 advanced capitalist economies: functional distribution, personal/household distribution, and the share and composition of top incomes. Based on the Kaleckian approach to the determination of income shares, the effects of financialization on functional income distribution are studied in more detail. Some stylized facts of financialization are integrated into the Kaleckian approach, and by means of reviewing empirical and econometric literature it is found that financialization and neoliberalism have contributed to the falling labor income share since the early 1980s through three main Kaleckian channels: (1) a shift in the sectoral composition of the economy; (2) an increase in management salaries and rising profit claims of the rentiers, and thus in overheads; and (3) weakened trade union bargaining power.

  • Working Paper No. 745 | January 2013

    The aim of the paper is to provide an overview of the current stock-flow consistent (SFC) literature. Indeed, we feel the SFC approach has recently led to a blossoming literature, requiring a new summary after the work of Dos Santos (2006) and, above all, after the publication of the main reference work on the methodology, Godley and Lavoie’s Monetary Economics: An Integrated Approach to Credit, Money, Income, Production and Wealth (2007). The paper is developed along the following lines. First, a brief historical analysis investigates the roots of this class of models, which can be traced as far back as 1949 and the work of Copeland. Second, the competing points of view regarding some of its main controversial aspects are underlined and used to classify the different methodological approaches followed in using these models. Namely, we discuss (1) how the models are solved, (2) the treatment of time and its implications, and (3) the need—or not—for microfoundations. These results are then used in the third section of the paper to develop a bifocal perspective, which allows us to divide the literature reviewed according to both its subject and the methodology. We explore various topics such as financialization, exchange rate modeling, policy implications, the need for a common framework within the post-Keynesian literature, and the empirical use of SFC models. Finally, the conclusions present some hypotheses (and wishes) about the possible lines of development of stock-flow consistent models.
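
    On point (1), how the models are solved: the standard illustration is the simplest model in Godley and Lavoie (2007), known as SIM, solved by within-period Gauss-Seidel iteration. The sketch below uses the textbook's illustrative parameter values; income converges to the analytical steady state of government spending divided by the tax rate.

```python
# Godley and Lavoie's SIM model with their illustrative parameters
alpha1, alpha2, theta, G = 0.6, 0.4, 0.2, 20.0

H = 0.0                         # household money stock (opening balance)
for period in range(100):
    Y = C = YD = 0.0
    for _ in range(200):        # Gauss-Seidel: sweep the simultaneous
        Y = G + C               # equations until the within-period
        T = theta * Y           # solution settles
        YD = Y - T              # disposable income
        C = alpha1 * YD + alpha2 * H
    H = H + YD - C              # household saving accumulates as money
steady_Y = Y                    # analytical steady state is G / theta
```

    The same iterate-until-convergence logic scales to much larger SFC models; the main alternatives discussed in the literature are solving the system analytically or with a simultaneous (matrix) solver.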

  • Working Paper No. 744 | December 2012
    Empirical Evidence on Fiscal Deficit – Interest Rate Linkages and Financial Crowding Out

    Controlling for capital flows using the high-frequency macro data of a financially deregulated regime, this paper examines whether there is any evidence of the fiscal deficit determining the interest rate in the context of India. The period of analysis is FY 2006–07 (April) to FY 2011 (April). Contrary to the debates in policy circles, the paper finds that an increase in the fiscal deficit does not cause a rise in interest rates. Using the asymmetric vector autoregressive model, the paper establishes that the interest rate is affected by changes in the reserve currency, expected inflation, and volatility in capital flows, but not by the fiscal deficit. This result has significant policy implications for interest rate determination in India, especially since the central bank has cited the high fiscal deficit as the prime reason for leaving the rates unchanged in all of its recent policy announcements. The paper analyzes both long- and short-term interest rates to determine the occurrence of financial crowding out, and finds that the fiscal deficit does not appear to be causing movements in either short- or long-term rates. However, a reverse causality is detected, from interest rates to deficits.
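    The kind of directional finding reported here is what a Granger-type causality test delivers: lagged values of one series either do or do not improve the prediction of the other. The sketch below is a simplified bivariate check on simulated data (not the paper's asymmetric VAR, and not the Indian data); the data-generating process is constructed so that `rate` drives `deficit` but not vice versa.

```python
# Simplified one-lag Granger-type causality check on simulated data.
# The simulated process has rates driving deficits (coefficient 0.4),
# mimicking the direction of causality the paper reports.
import numpy as np

rng = np.random.default_rng(1)
T = 400
rate = np.zeros(T)
deficit = np.zeros(T)
for t in range(1, T):
    rate[t] = 0.7 * rate[t - 1] + rng.normal()
    deficit[t] = 0.5 * deficit[t - 1] + 0.4 * rate[t - 1] + rng.normal()

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ b) ** 2).sum()

# Does adding lagged `rate` improve a deficit autoregression?
y = deficit[1:]
X_r = np.column_stack([np.ones(T - 1), deficit[:-1]])              # restricted
X_u = np.column_stack([np.ones(T - 1), deficit[:-1], rate[:-1]])   # unrestricted
F = (rss(y, X_r) - rss(y, X_u)) / (rss(y, X_u) / (T - 4))
print(F > 3.86)   # F(1, large) 5% critical value: reject "no causality"
```

Running the same comparison in the opposite direction (lagged `deficit` added to a `rate` autoregression) would, on this simulated process, fail to reject, which is the mirror image of the paper's empirical result.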

  • Working Paper No. 741 | December 2012

    The analytical starting point determines the course of a theoretical investigation and, ultimately, the productiveness of an approach. The classics took production and accumulation as their point of departure; the neoclassics, exchange. Exchange implies behavioral assumptions and notions like rationality, optimization, and equilibrium. It is widely recognized that this approach has led into a cul-de-sac. To change a theory means to change its premises; or, in Keynes’s words, to “throw over” the axioms. The present paper swaps the standard behavioral axioms for structural axioms and applies the latter to the analysis of the emergence of secondary markets from the flow part of the economy. Real and nominal residuals at first give rise to the accumulation of the stock of money and the stock of commodities. These stocks constitute the demand-and-supply side of secondary markets. The pricing in these markets is different from the pricing in the primary markets. Realized appreciation in the secondary markets is different from income or profit. To treat primary and secondary markets alike is therefore a category mistake. Conversely, to take a set of objective propositions as the analytical starting point yields a comprehensive and consistent theory of market exchange and valuation.

  • Working Paper No. 739 | November 2012
    A Theoretical and Empirical Discussion of the Kaleckian Model of Growth and Distribution

    This paper examines the “utilization controversy” around the Kaleckian model of growth and distribution. We show that the Federal Reserve data on capacity utilization, which have been used by both sides of this debate, are the wrong kind of data for the issue under examination. Instead, a more appropriate measurement can be derived from the data on the Average Workweek of Capital. We argue that the long-run dynamic adjustment proposed by Kaleckian scholars lacks a coherent economic rationale, and provide an alternative path toward the endogeneity of the desired utilization at the micro and macro levels. Finally, we examine the proposed adjustment mechanism econometrically. Our results verify the endogeneity of the normal utilization rate.

  • Working Paper No. 737 | November 2012

    This paper examines the endogeneity (or lack thereof) of the rate of capacity utilization in the long run at the firm level. We provide economic justification for the adjustment of the desired rate of utilization toward the actual rate on behalf of a cost-minimizing firm after examining the factors that determine the utilization of resources. The cost-minimizing firm has an incentive to increase the utilization of its capital if the rate of the returns to scale decreases as its production increases. The theory of economies of scale provides justification for this kind of behavior. In this manner, the desired rate of utilization becomes endogenous.

  • Working Paper No. 733 | October 2012
    An SFC Analysis of Great Surges of Development

    Schumpeter, a century ago, argued that boom-and-bust cycles are intrinsically related to the functioning of a capitalistic economy. These cycles, inherent to the rise of innovation, are an unavoidable consequence of the way in which markets evolve and assimilate successive technological revolutions. Furthermore, Schumpeter’s analysis stressed the fundamental role played by finance in fostering innovation, defining bank credit as the “monetary complement” of innovation. Nevertheless, we feel that the connection between innovation and firm financing has seldom been examined from a theoretical standpoint, not only by economists in general, but even within the Neo-Schumpeterian research line. Our paper aims at analyzing both the long-term structural change process triggered by innovation and the related financial dynamics inside the coherent framework provided by the stock-flow consistent (SFC) approach. The model presents a multisectoral economy composed of consumption and capital goods industries, a banking sector, and two household sectors: capitalists and wage earners. The SFC approach helps us to track the flows of funds resulting from the rise of innovators in the system. The dynamics of prices, employment, and wealth distribution among the different sectors and social groups are analyzed. Above all, the essential role of finance in fostering innovation and its interaction with the real economy is underlined.

    Author(s): Alessandro Caiani, Antoine Godin, and Stefano Lucarelli

  • Working Paper No. 731 | September 2012
    An Essential Rectification of the Accounting Approach

    This paper takes the explanatory superiority of the integrated monetary approach for granted. It will be demonstrated that the accounting approach could do even better, provided it frees itself from theoretically ill-founded notions like GDP and other artifacts of the equilibrium approach. National accounting as such does not provide a model of the economy but is, rather, the numerical reflex of the underlying theory. It is this theory that will be scrutinized, rectified, and ultimately replaced in what follows. The formal point of reference is “the integrated approach to credit, money, income, production and wealth” of Wynne Godley and Marc Lavoie.

  • Working Paper No. 729 | August 2012

    As the heirs to classical political economy and the German historical school, the American institutionalists retained rent theory and its corollary idea of unearned income. More than any other institutionalist, Thorstein Veblen emphasized the dynamics of banks financing real estate speculation and Wall Street maneuvering to organize monopolies and trusts. Yet despite the popularity of his writings with the reading public, his contribution has remained isolated from the academic mainstream, and he did not leave behind a “school.”

    Veblen criticized academic economists for having fallen subject to “trained incapacity” as a result of being turned into factotums to defend rentier interests. Business schools were painting an unrealistic happy-face picture of the economy, teaching financial techniques but leaving out of account the need to reform the economy’s practices and institutions.

    In emphasizing how financial “predation” was hijacking the economy’s technological potential, Veblen’s vision was as materialist and culturally broad as that of the Marxists, and as dismissive of the status quo. Technological innovation was reducing costs but breeding monopolies as the finance, insurance, and real estate (FIRE) sectors joined forces to create a financial symbiosis cemented by political-insider dealings—and a trivialization of economic theory as it seeks to avoid dealing with society’s failure to achieve its technological potential. The fruits of rising productivity were used to finance robber barons who had no better use of their wealth than to reduce great artworks to the status of ownership trophies and achieve leisure-class status by funding business schools and colleges to promote a self-congratulatory but deceptive portrayal of their wealth-grabbing behavior.

  • Working Paper No. 725 | May 2012
    A Caveat Emptor for Regional Scientists

    Over the last 20 years or so, mainstream economists have become more interested in spatial economics and have introduced largely neoclassical economic concepts and tools to explain phenomena that were previously the preserve of economic geographers. One of these concepts is the aggregate production function, which is also central to much of regional growth theory. However, as Franklin Fisher, inter alios, has shown, the conditions necessary to aggregate microproduction functions into an aggregate production function are so stringent that in all probability the aggregate production function does not exist. This paper shows that the good statistical fits commonly found empirically are solely due to the use of value data and an underlying accounting identity. The consequence is that the estimates obtained cannot be regarded as providing evidence of the underlying technological structure of the spatial economy, including the aggregate elasticity of substitution, the degree of returns to scale, and the rate of technical progress.
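    The mechanics of the accounting-identity critique can be illustrated with a small simulation (illustrative only; the parameters and distributions below are not from the paper). Because value added is constructed as V = wL + rK, a regression of log V on log L and log K recovers something close to the factor shares with a near-perfect fit, whether or not any aggregate production function exists.

```python
# Illustrative simulation of the accounting-identity critique: with
# roughly common factor prices w and r, regressing log V on log L and
# log K fits well and returns the factor shares, regardless of any
# underlying technology. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 500
L = rng.lognormal(mean=3.0, sigma=0.5, size=n)   # employment
K = rng.lognormal(mean=4.0, sigma=0.5, size=n)   # capital stock (value)
w, r = 1.0, 0.2                                  # common factor prices
V = w * L + r * K                                # the accounting identity

# OLS of log V on a constant, log L, and log K
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
beta, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)

resid = np.log(V) - X @ beta
r2 = 1 - resid.var() / np.log(V).var()
share_L = np.mean(w * L / V)                     # average labor share
print(beta[1], beta[2], r2, share_L)
```

No production technology was specified anywhere, yet the estimated "elasticities" sum to roughly one and track the average shares in value added, which is the point of the critique.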

  • Working Paper No. 720 | May 2012
    A FAVAR Model for Greece and Ireland

    This paper examines the underlying dynamics of selected euro-area sovereign bonds by employing a factor-augmented vector autoregressive (FAVAR) model for the first time in the literature. This methodology allows for identifying the underlying transmission mechanisms of several factors; in particular, market liquidity and credit risk. Departing from the classical structural vector autoregressive (VAR) models, it allows us to relax limitations regarding the choice of variables that could drive spreads and credit default swaps (CDSs) of euro-area sovereign debts. The results show that liquidity, credit risk, and flight to quality drive both spreads and CDSs of five years’ maturity over swaps for Greece and Ireland in recent years. Greece, in particular, is facing an elastic demand for its sovereign bonds that further stretches liquidity. Moreover, in current illiquid market conditions spreads will continue to follow a steep upward trend, with certain adverse financial stability implications. In addition, we observe a negative feedback effect from counterparty credit risk.

  • Working Paper No. 718 | May 2012
    Further Reflections on Temple’s Criticisms and Misunderstandings

    In a reply to Felipe and McCombie (2010a), Temple (2010) has largely ignored the main arguments that underlie the accounting identity critique of the estimation of production functions using value data. This criticism suggests that estimates of the parameters of aggregate production functions cannot be regarded as reflecting the underlying technology of the industry. While Temple concedes some points, he erroneously believes that the critique holds only under some ad hoc assumptions. As a consequence, he argues that the critique works only “part-time.” This rejoinder discusses Temple’s arguments and demonstrates that the critique works full-time.

  • Working Paper No. 715 | April 2012
    What Is It, Who Is in It, and Why?

    This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950–2010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. 
    We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country’s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.
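    The threshold growth rates quoted above follow from compound growth arithmetic over the paper's income segments (thresholds in 1990 PPP dollars, as in the text):

```python
# Reproducing the paper's threshold growth rates: the constant annual
# per capita growth needed to cross each income segment in the stated
# number of years.
def required_growth(start, target, years):
    """Constant annual growth rate taking `start` to `target` in `years`."""
    return (target / start) ** (1 / years) - 1

lower_middle = required_growth(2000, 7250, 28)    # lower-middle segment
upper_middle = required_growth(7250, 11750, 14)   # upper-middle segment
print(f"{lower_middle:.1%}, {upper_middle:.1%}")  # -> 4.7%, 3.5%
```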

  • Working Paper No. 708 | February 2012

    What is called “capitalism” is best understood as a series of stages. Industrial capitalism has given way to finance capitalism, which has passed through pension fund capitalism since the 1950s and a US-centered monetary imperialism since 1971, when the fiat dollar (created mainly to finance US global military spending) became the world’s monetary base. Fiat dollar credit made possible the bubble economy after 1980, and its substage of casino capitalism. These economically radioactive decay stages resolved into debt deflation after 2008, and are now settling into a leaden debt peonage and the austerity of neo-serfdom.

    The end product of today’s Western capitalism is a neo-rentier economy—precisely what industrial capitalism and classical economists set out to replace during the Progressive Era from the late 19th to early 20th century. A financial class has usurped the role that landlords used to play—a class living off special privilege. Most economic rent is now paid out as interest. This rake-off interrupts the circular flow between production and consumption, causing economic shrinkage—a dynamic that is the opposite of industrial capitalism’s original impulse. The “miracle of compound interest,” reinforced now by fiat credit creation, is cannibalizing industrial capital as well as the returns to labor.

    The political thrust of industrial capitalism was toward democratic parliamentary reform to break the stranglehold of landlords on national tax systems. But today’s finance capital is inherently oligarchic. It seeks to capture the government—first and foremost the treasury, central bank, and courts—to enrich (indeed, to bail out) and untax the banking and financial sector and its major clients: real estate and monopolies. This is why financial “technocrats” (proxies and factotums for high finance) were imposed in Greece, and why Germany opposed a public referendum on the European Central Bank’s austerity program.

  • Working Paper No. 701 | December 2011

    Using data from the Bicol region of the Philippines, we examine why women are more educated than men in a rural, agricultural economy in which women are significantly less likely than men to participate in the labor market. We hypothesize that educational homogamy in the marriage market and cross-productivity effects in the household allow Filipino women to reap substantial benefits from schooling regardless of whether they enter the labor market. Our estimates reveal that the return to schooling for women is approximately 20 percent in both labor and marriage markets. In comparison, men experience a 12 percent return to schooling in the labor market. By using birth order, sibship size, percent of male siblings, and parental education as instruments, we correct for a significant downward bias that is caused by the endogeneity of schooling attainment.

    Author(s): Sanjaya DeSilva and Mohammed Mehrab Bin Bakhtiar

  • Working Paper No. 699 | December 2011

    Ricardian trade theory was based on the cost of labor at a time when grain and other consumer goods accounted for most subsistence spending. But today’s budgets are dominated by payments to the finance, insurance, and real estate (FIRE) sector and to newly privatized monopolies. This has made FIRE the determining factor in trade competitiveness.

    The major elements in US family budgets are housing (with prices bid up on credit), debt service, and health insurance—and wage withholding for financializing Social Security and Medicare. Industrial firms also have been financialized, using debt leverage to increase their return on equity. The effect is for interest to increase as a proportion of cash flow (earnings before interest, taxes, depreciation, and amortization, or EBITDA). Corporate raiders pay their high-interest bondholders, while financial managers also are using EBITDA for stock buybacks to increase share prices (and hence the value of their stock options).

    Shifting taxes off property and onto employment and retail sales spurs the financialization of family and business budgets as tax cuts on property are capitalized into higher bank loans. Payments to government agencies for taxes and presaving for Social Security and Medicare absorb another 30 percent of family budgets. These transfer payments to the FIRE sector and government agencies have transformed international cost structures, absorbing roughly 75 percent of US family budgets. This helps explain the deteriorating US industrial trade balance as the economy has become financialized.

  • Working Paper No. 697 | November 2011
    A Dynamic Kaleckian Approach

    This paper studies the effects of an (exogenous) increase of nominal wages on profits, output, and growth. Inspired by an article by Michał Kalecki (1991), who concentrated on the effects on total profits, the paper develops a model that explicitly considers the dynamics of demand, prices, profits, and investment. The outcomes of the initial wage rise are found to be path dependent and crucially affected by the firms’ initial response to an increase in demand and a decrease in profit margins. The present model, which relates to other Post Keynesian/Kaleckian contributions, can offer an alternative to the mainstream approach to analyzing the effects of wage increases.

    Author(s): Fabrizio Patriarca and Claudio Sardoni

  • Working Paper No. 689 | October 2011

    Immigration is having an increasingly important effect on the social insurance system in the United States. On the one hand, eligible legal immigrants have the right to eventually receive pension benefits but also rely on other aspects of the social insurance system such as health care, disability, unemployment insurance, and welfare programs, while most of their savings have direct positive effects on the domestic economy. On the other hand, most undocumented immigrants contribute to the system through taxed wages but are not eligible for these programs unless they attain legal status, and a large proportion of their savings translates into remittances that have no direct effects on the domestic economy. Moreover, a significant percentage of immigrants migrate back to their countries of origin after a relatively short period of time, and their savings while in the United States are predominantly in the form of remittances. Therefore, any analysis that tries to understand the impact of immigrant workers on the overall system has to take into account the decisions and events these individuals face throughout their lives, as well as the use of the government programs they are entitled to. We propose a life-cycle Overlapping Generations (OLG) model in a general equilibrium framework of legal and undocumented immigrants’ decisions regarding consumption, savings, labor supply, and program participation to analyze their role in the financial sustainability of the system. Our analysis of the effects of potential policy changes, such as giving some undocumented immigrants legal status, shows increases in capital stock, output, consumption, labor productivity, and overall welfare. The effects are relatively small in percentage terms but considerable given the size of our economy.

  • Working Paper No. 677 | July 2011
    A Nonmainstream Perspective

    The global financial crisis has now spread across multiple countries and sectors, affecting both financial and real spheres in the advanced as well as the developing economies. This has been caused by policies based on “rational expectations” models that advocate deregulated finance, with facilities for easy credit and derivatives, along with globalized exposures for financial institutions. The financial crisis has combined with long-term structural changes in the real economy that trend toward underconsumption, generating contractionary effects therein and contributing to further instabilities in the financial sector. The responses so far from US monetary authorities have not been effective, especially in dealing with issues of unemployment and low real growth in the United States, or in other countries. Nor have these been of much use in the context of the lost monetary and fiscal autonomy in both developing countries and the eurozone, especially with the debt-related distress in the latter. Solutions to the current maladies in the global economy include strict control of financial speculation and the institution of an “employer of last resort” policy, both at the initiative of the state.

  • Working Paper No. 672 | May 2011
    A Ricardo-Keynes Synthesis

    The paper provides a novel theory of income distribution and achieves an integration of monetary and value theories along Ricardian lines, extended to a monetary production economy as understood by Keynes. In a monetary economy, capital is a fund that must be maintained. This idea is captured in the circuit of capital as first defined by Marx. We introduce the circuit of fixed capital; this circuit is closed when the present value of prospective returns from employing it is equal to its supply price. In a steady-growth equilibrium with nominal wages and interest rates given, the equation that closes the circuit of fixed capital can be solved for prices, implying a definitive income distribution. Accordingly, the imputation for fixed capital costs is equivalent to that of a money contract of equal length, which is the payment per period that will repay the cost of the fixed asset, together with interest. It follows that if capital assets remain in use for a period longer than is required to amortize them, their earnings beyond that period have an element of pure rent.
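    The per-period imputation described, "the payment per period that will repay the cost of the fixed asset, together with interest," is the standard annuity payment, and the circuit closes when the present value of those payments equals the asset's supply price. A minimal sketch with illustrative numbers (the price, rate, and horizon below are not from the paper):

```python
# The per-period imputation for fixed capital is the standard annuity
# payment: the constant payment whose present value at interest rate i
# over n periods equals the asset's supply price P. Numbers illustrative.
def annuity_payment(price, i, n):
    """Constant payment repaying `price` with interest i over n periods."""
    return price * i / (1 - (1 + i) ** -n)

P, i, n = 1000.0, 0.05, 10
a = annuity_payment(P, i, n)

# The circuit "closes": the present value of the payments equals the
# supply price of the asset.
pv = sum(a / (1 + i) ** t for t in range(1, n + 1))
print(round(a, 2), round(pv, 2))
```

On this logic, an asset that stays in use beyond the n periods needed to amortize it earns, in those extra periods, returns with an element of pure rent, as the abstract notes.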

    Author(s): Nazim Kadri Ekinci

  • Working Paper No. 670 | May 2011
    What Does It Say About the Opportunities for Growth and Structural Transformation of Sub-Saharan Africa?

    In this paper we look at the economic development of Sub-Saharan Africa (SSA) in the context of structural transformation. We use Hidalgo et al.’s (2007) concept of product space to show the evolution of the region’s productive structure, and discuss the opportunities for growth and diversification. The majority of SSA countries are trapped in the export of unsophisticated, highly standard products that are poorly connected in the product space; this makes the process of structural transformation of the region particularly difficult. The products that are close to those they already export share the same characteristics. Therefore, shifting to these products will do little to improve SSA’s growth prospects. To jump-start and sustain growth, governments must implement policies and provide public inputs that will encourage the private sector to invest in new and more sophisticated activities.

  • Working Paper No. 652 | March 2011

    The Queen of England famously asked her economic advisers why none of them had seen “it” (the global financial crisis) coming. Obviously, the answer is complex, but it must include reference to the evolution of macroeconomic theory over the postwar period—from the “Age of Keynes,” through the Friedmanian era and the return of Neoclassical economics in a particularly extreme form, and, finally, on to the New Monetary Consensus, with a new version of fine-tuning. The story cannot leave out the parallel developments in finance theory—with its efficient markets hypothesis—and in approaches to regulation and supervision of financial institutions.

    This paper critically examines these developments and returns to the earlier Keynesian tradition to see what was left out of postwar macro. For example, the synthesis version of Keynes never incorporated true uncertainty or “unknowledge,” and thus deviated substantially from Keynes’s treatment of expectations in chapters 12 and 17 of the General Theory. It essentially reduced Keynes to sticky wages and prices, with nonneutral money only in the case of fooling. The stagflation of the 1970s ended the great debate between “Keynesians” and “Monetarists” in favor of Milton Friedman’s rules, and set the stage for the rise of a succession of increasingly silly theories rooted in pre-Keynesian thought. As Lord Robert Skidelsky (Keynes’s biographer) argues, “Rarely in history can such powerful minds have devoted themselves to such strange ideas.” By returning to Keynes, this paper attempts to provide a new direction forward.

  • Working Paper No. 644 | December 2010
    It’s the Economic Structure . . . Duh!

    Becoming a rich country requires the ability to produce and export commodities that embody certain characteristics. We classify 779 exported commodities according to two dimensions: (1) sophistication (measured by the income content of the products exported); and (2) connectivity to other products (a well-connected export basket is one that allows an easy jump to other potential exports). We identify 352 “good” products and 427 “bad” products. Based on this, we categorize 154 countries into four groups according to these two characteristics. There are 34 countries whose export basket contains a significant share of good products. We find 28 countries in a “middle product” trap. These are countries whose export baskets contain a significant share of products that are in the middle of the sophistication and connectivity spectra. We also find 17 countries that are in a “middle-low” product trap, and 75 countries that are in a difficult and precarious “low product” trap. These are countries whose export baskets contain a significant share of unsophisticated products that are poorly connected to other products. To escape this situation, these countries need to implement policies that would help them accumulate the capabilities needed to manufacture and export more sophisticated and better connected products.

    Author(s): Jesus Felipe, Utsav Kumar, and Arnelyn Abdon

  • Working Paper No. 643 | December 2010
    Some Caveats

    Since the early 1990s, the number of papers estimating econometric models and using other quantitative techniques to try to understand different aspects of the Chinese economy has mushroomed. A common feature of some of these studies is the use of neoclassical theory as the underpinning for the empirical implementations. It is often assumed that factor markets are competitive, that firms are profit maximizers, and that these firms respond to the same incentives that firms in market economies do. Many researchers find that the Chinese economy can be well explained using the tools of neoclassical theory. In this paper, we (1) review two examples of estimation of the rate of technical progress, and (2) discuss one attempt at modeling investment. We identify their shortcomings and the problems with the alleged policy implications derived. We show that econometric estimation of neoclassical models may result in apparently sensible results for misinformed reasons. We conclude that modeling the Chinese economy requires a deeper understanding of its inner workings as both a transitional and a developing economy.

    Author(s): Jesus Felipe and John McCombie

  • Working Paper No. 641 | December 2010
    Is the Curse More Difficult to Dispel in Oil States than in Mineral States?

    The hypothesis of the natural resource curse has captivated the economics profession, and since the mid-1990s has generated a large body of policymaking initiatives aimed at dispelling the curse. In this paper, we evaluate how the effect of resource abundance on economic growth has changed since these policies were first introduced by comparing the periods 1970–89 and 1996–2008. We disaggregate resources into oil, gas, coal, and nonfuel mineral resources, and find that disaggregation unmasks diverse effects of resources on concurrent economic and institutional outcomes, as well as on the ability of countries to transform their economic and institutional infrastructure. We consider resource dependence and institutional quality as two channels linking resource abundance to economic growth in the context of an instrumental variables (IV) model. In addition to exploring these channels, the IV framework enables us to test for the endogeneity of the measures of resource dependence and institutional quality in the growth regressions, paying particular attention to the weakness of the instruments.


  • Working Paper No. 638 | November 2010

    An extensive literature argues that India’s manufacturing sector has underperformed, and that the country has failed to industrialize; in particular, it has failed to take advantage of its labor-abundant comparative advantage. India’s manufacturing sector is smaller as a share of GDP than that of East Asian countries, even after controlling for GDP per capita. Hence, its contribution to overall GDP growth is modest. Without greater participation of the secondary sector, the argument goes, the country will not be able to develop and become a modern economy. Standard arguments blame the “license-permit raj,” the small-scale industrial policy, and the supposedly stringent laws. All these were part of the industrial policy regime instituted after independence, which favored the heavy-machinery subsector. We show that this policy bias negatively affected the development of India’s labor-intensive sector, as the country should export with comparative advantage a larger number of these products, given its income per capita. However, India’s manufacturing sector is relatively well diversified and sophisticated, given also the country’s income per capita. In particular, India’s inroads into machinery, metals, chemicals, and other capital- and skilled labor–intensive products have allowed the country to accumulate a large number of capabilities. This positions India well to expand its exports of other sophisticated products.

  • Working Paper No. 631 | October 2010

    This paper explores the degree of structural change of the Philippine economy using the input-output framework. It examines how linkages among economic sectors evolved over 1979–2000, and identifies which economic sectors exhibited the highest intersectoral linkages. We find that manufacturing is consistently the key sector in the Philippine economy. Specifically, resource-intensive and scale-intensive manufacturing industries exhibit the highest linkages. We also find a growing impact on the economy of private services and transportation, communication, and storage sectors, probably due to the globalization of these activities. Overall, however, the services sector exhibits lower intersectoral linkages than the manufacturing sector. We conclude that the Philippines cannot afford to leapfrog the industrialization stage and largely depend on a service-oriented economy when the potential for growth still lies primarily in manufacturing.

    Author(s): Nedelyn Magtibay-Ramos, Gemma Estrada, and Jesus Felipe

  • Working Paper No. 629 | October 2010

    This paper examines the growth experience of the Central Asian economies after the breakup of the Soviet Union. In particular, it evaluates the impact of being landlocked and resource rich. The main conclusions are: (1) Over the period 1994–2006, the landlocked resource-scarce developing countries of Central Asia grew at a slower pace than other landlocked resource-scarce developing countries; on the other hand, resource-rich developing countries in Central Asia grew at the same pace as other resource-rich developing economies. (2) Having “good” neighbors pays off in the form of growth spillovers; this calls for greater regional cooperation and enhanced regional integration through regional transport infrastructure, improved trade facilitation, and enhanced and coordinated economic policies. And (3) countries with a higher share of manufacturing exports in GDP grow faster, and the more sophisticated a country’s export basket, the higher its future growth; Central Asian countries should, therefore, take a more aggressive stance in supporting export diversification and upgrading.

  • Working Paper No. 628 | October 2010
    A Gravity Model

    With a decrease in formal trade barriers, trade facilitation has come into prominence as a policy tool for promoting trade. In this paper, we use a gravity model to examine the relationship between bilateral trade flows and trade facilitation. We also estimate the gains in trade derived from improvements in trade facilitation for the Central Asian countries. Trade facilitation is measured through the World Bank’s Logistics Performance Index (LPI). Our results show that there are significant gains in trade as a result of improving trade facilitation in these countries. These gains in trade vary from 28 percent in the case of Azerbaijan to as much as 63 percent in the case of Tajikistan. Furthermore, intraregional trade increases by 100 percent. Among the different components of the LPI, we find that the greatest increase in total trade comes from improvement in infrastructure, followed by logistics and the efficiency of customs and other border agencies. Our results also show that an improvement in the exporting country’s LPI raises bilateral trade more in highly sophisticated, more differentiated, and high-technology products than in less sophisticated, less differentiated, and low-technology products. This is particularly important for the Central Asian countries as they try to reduce their dependence on exports of natural resources and diversify their manufacturing base by shifting to more sophisticated goods. As they look for markets beyond their borders, trade facilitation will have an important role to play.
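    The gravity specification can be sketched as a log-linear regression of bilateral trade on the two countries' GDPs, distance, and the exporter's LPI. The example below uses synthetic data with illustrative coefficients (not the paper's estimates) to show how a percentage trade gain per LPI point might be backed out:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400  # hypothetical country pairs

# Synthetic pair-level data (all magnitudes are illustrative assumptions)
log_gdp_i = rng.normal(10, 1, n)
log_gdp_j = rng.normal(10, 1, n)
log_dist = rng.normal(8, 0.5, n)
lpi_j = rng.uniform(1, 5, n)  # exporter's Logistics Performance Index

# Gravity DGP: trade rises with economic mass and LPI, falls with distance
log_trade = (-5 + 1.0 * log_gdp_i + 1.0 * log_gdp_j
             - 1.2 * log_dist + 0.4 * lpi_j + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist, lpi_j])
beta = np.linalg.lstsq(X, log_trade, rcond=None)[0]

# Implied proportional trade gain from raising the exporter's LPI by one point
gain = np.exp(beta[4]) - 1
print(f"estimated trade gain per LPI point: {gain:.1%}")
```

    In practice gravity models also include exporter and importer fixed effects and are often estimated by Poisson pseudo-maximum likelihood to handle zero trade flows; this sketch keeps only the log-linear core.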

  • Working Paper No. 627 | October 2010

    For the past decade, the US economy has been driven not by industrial investment but by a real estate bubble. Although the United States may seem to be the leading example of industrial capitalism, its economy is no longer based mainly on investing in capital goods to employ labor to produce output to sell at a profit. The largest sector remains real estate, whose cash flow (EBITDA, or earnings before interest, taxes, depreciation, and amortization) accounts for over a quarter of national income. Financially, mortgages account for 70 percent of the US economy’s interest payments, reflecting the fact that real estate is the financial system’s major customer.

    As the economy’s largest asset category, real estate generates most of the economy’s capital gains. These gains are the aim of real estate investors, as the sector normally operates without declaring any profit. Investors agree to pay their net rental income to their mortgage banker, hoping to sell the property at a capital gain (mainly a land-price gain).

    The tax system encourages this debt pyramiding. Interest and depreciation absorb most of the cash flow, leaving no income tax due for most of the post-1945 period. States and localities have shifted their tax base off property onto labor via income and sales taxes. Most important, capital gains are taxed at a much lower rate than are current earnings. Investors do not have to pay any capital gains tax at all as long as they invest their gains in the purchase of new property.

    This tax favoritism toward real estate—and behind it, toward bankers as mortgage lenders—has spurred a shift in US investment away from industry and toward speculation, mainly in real estate but also in the stock and bond markets. A postindustrial economy is thus largely a financialized economy that carries its debt burden by borrowing against capital gains to pay the interest and taxes falling due.

  • Working Paper No. 626 | October 2010

    We use the real wage–profit rate schedule to examine the direction of technical change in India’s organized manufacturing sector during 1980–2007. We find that technical change was Marx biased (i.e., declining capital productivity with increasing labor productivity) through the 1980s and 1990s; and Hicks neutral (increasing both capital and labor productivity) post-2000. The historical experience suggests that Hicks-neutral technical change may only be a passing phase before we see a return to the long-term trend of Marx-biased technical change. We also find that the real profit rate has increased from about 30 percent to a very high 45 percent, that the real wage rate increased marginally, and that the share of capital in value added doubled. Overall, technical change in India’s organized manufacturing sector during 1980–2007 favored capital.


  • Working Paper No. 616 | September 2010
    We rank 5,107 products and 124 countries according to the Hidalgo and Hausmann (2009) measures of complexity. We find that: (1) the most complex products are in machinery, chemicals, and metals, while the least complex products are raw materials and commodities, wood, textiles, and agricultural products; (2) the most complex economies in the world are Japan, Germany, and Sweden, and the least complex, Cambodia, Papua New Guinea, and Nigeria; (3) the major exporters of the more complex products are the high-income countries, while the major exporters of the less complex products are the low-income countries; and (4) export shares of the more complex products increase with income, while export shares of the less complex products decrease with income. Finally, we relate the measure of product complexity with the concept of Complex Products and Systems, and find a high degree of conformity between them.
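    The Hidalgo and Hausmann (2009) measures are built by the "method of reflections" on a binary country-product export matrix. A minimal sketch on a toy matrix (illustrative data, not the paper's 124-country, 5,107-product matrix):

```python
import numpy as np

# Toy binary country-product matrix M (rows: countries, columns: products);
# M[c, p] = 1 if country c exports product p with revealed comparative advantage.
M = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 1]], dtype=float)

kc = M.sum(axis=1)  # diversification k_{c,0}
kp = M.sum(axis=0)  # ubiquity k_{p,0}

# Method of reflections: each step averages the partner measure from the
# previous step; even-numbered iterations of kc rank economic complexity.
for _ in range(2):
    kc_next = (M @ kp) / M.sum(axis=1)
    kp_next = (M.T @ kc) / M.sum(axis=0)
    kc, kp = kc_next, kp_next

print(kc)  # country 0 (diversified, exports a non-ubiquitous product) ranks highest
```

    Diversified countries that export rare (non-ubiquitous) products come out as the most complex, which is the intuition behind Japan, Germany, and Sweden topping the ranking.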

  • Working Paper No. 613 | August 2010
    From Capabilities to Opportunities
    We develop an Index of Opportunities for 130 countries based on their capabilities to undergo structural transformation. The Index of Opportunities has four dimensions, all of them characteristic of a country’s export basket: (1) sophistication; (2) diversification; (3) standardness; and (4) possibilities for exporting with comparative advantage over other products. The rationale underlying the index is that, in the long run, a country’s income is determined by the variety and sophistication of the products it makes and exports, which reflect its accumulated capabilities. We find that countries like China, India, Poland, Thailand, Mexico, and Brazil have accumulated a significant number of capabilities that will allow them to do well in the long run. These countries have diversified and increased the level of sophistication of their export structures. At the other extreme, countries like Papua New Guinea, Malawi, Benin, Mauritania, and Haiti score very poorly in the Index of Opportunities because their export structures are neither diversified nor sophisticated, and they have accumulated very few and unsophisticated capabilities. These countries are in urgent need of implementing policies that lead to the accumulation of capabilities.

  • Working Paper No. 609 | August 2010
    We forecast average annual GDP growth for 147 countries for 2010–30. We use a cross-country regression model where the long-run fundamentals are determined by countries’ accumulated capabilities and the capacity to undergo structural transformation.
    Author(s): Jesus Felipe, Utsav Kumar, and Arnelyn Abdon

  • Working Paper No. 596 | May 2010
    The process of constructing impulse-response functions (IRFs) and forecast-error variance decompositions (FEVDs) for a structural vector autoregression (SVAR) usually involves a factorization of an estimate of the error-term variance-covariance matrix V. Examining residuals from a monetary VAR, this paper finds evidence suggesting that all of the variances in V are infinite. Specifically, this study estimates alpha-stable distributions for the reduced-form error terms. The ML estimates of the residuals’ characteristic exponents α range from 1.5504 to 1.7734, with the Gaussian case lying outside 95 percent asymptotic confidence intervals for all six equations of the VAR. Variance-stabilized P-P plots show that the estimated distributions fit the residuals well. Results for subsamples are varied, while GARCH(1,1) filtering yields standardized shocks that are also all likely to be non-Gaussian alpha stable. When one or more error terms have infinite variance, V cannot be factored. Moreover, by Proposition 1, the reduced-form DGP cannot be transformed, using the required nonsingular matrix, into an appropriate system of structural equations with orthogonal, or even finite-variance, shocks. This result holds with arbitrary sets of identifying restrictions, including even the null set. Hence, with one or more infinite-variance error terms, structural interpretation of the reduced-form VAR within the standard SVAR model is impossible.

  • Public Policy Brief No. 110 | March 2010

    The United States has the most expensive health care system in the world, yet its system produces inferior outcomes relative to those in other countries. This brief examines the health care reform debate and argues that the basic structure of the health care system is unlikely to change, because “reform” measures actually promote the status quo. The authors believe that the fundamental problem facing the US health care system is the unhealthy lifestyle of many Americans. They prefer to see a reduced role for private insurers and an increased role for government funding, along with greater public discussion of environmental and lifestyle factors. A Medicare buy-in (“public option”) for people under 65 would provide more cost control (by competing with private insurance), help to solve the problem of treatment denial based on preexisting conditions, expand the risk pool of patients, and enhance the global competitiveness of US corporations—thus bringing the US health care system closer to the “ideal” low-cost, universal (single-payer) insurance plan.

  • Working Paper No. 575 | August 2009

    Utilizing a 2002 household-level World Bank Survey for rural Turkey, this paper explores the link between concentration of land ownership and rural factor markets. We construct a unique index that measures market malfunctioning based on the neoclassical model linking land and labor endowments through factor markets to household income. We further test whether land ownership concentration affects market malfunctioning. Our empirical investigation supports the claim that factor markets are structurally limited in reducing existing inequalities as a result of land ownership concentration. Our findings show that in the presence of land ownership inequality, malfunctioning rural factor markets result in increased land concentration, increased income inequality, and inefficient resource allocation. This work fills an important empirical gap within the development literature and establishes a positive association between asset inequality and factor market failure.

  • Working Paper No. 571 | August 2009


    Self-reported home values are widely used as a measure of housing wealth by researchers; the accuracy of this measure, however, is an open empirical question, and requires some type of market assessment of the values reported. In this study, the authors examine the predictive power of self-reported housing wealth when estimating housing prices, utilizing the portion of the University of Michigan’s Health and Retirement Study covering 1992–2006. They find that homeowners, on average, overestimate the value of their properties by 5–10 percent. More importantly, the authors establish a strong correlation between accuracy and the economic conditions at the time of the property’s purchase. While most individuals overestimate the value of their property, those who buy during more difficult economic times tend to be more accurate; in some cases, they even underestimate the property’s value. The authors find a surprisingly strong, long-lived, and in many cases likely permanent effect of the initial conditions surrounding a property’s purchase on how individuals value it. This cyclicality in the overestimation of house prices helps explain the difficulties currently faced by many homeowners, who were expecting large appreciations in home value to rescue them in the event of interest rate increases that could jeopardize their ability to live up to their financial commitments.


    Author(s): Hugo Benítez-Silva, Selçuk Eren, Frank Heiland, and Sergi Jiménez-Martín

  • Working Paper No. 546 | October 2008

    Since Christopher Sims’s “Macroeconomics and Reality” (1980), macroeconomists have used structural VARs, or vector autoregressions, for policy analysis. Constructing the impulse-response functions and variance decompositions that are central to this literature requires factoring the variance-covariance matrix of innovations from the VAR. This paper presents evidence consistent with the hypothesis that at least some elements of this matrix are infinite for one monetary VAR, as the innovations have stable, non-Gaussian distributions, with characteristic exponents ranging from 1.5504 to 1.7734 according to ML estimates. Hence, Cholesky and other factorizations that would normally be used to identify structural residuals from the VAR are impossible.

  • Working Paper No. 529 | March 2008
    Two Dreadful Models of Money Demand with an Endogenous Probability of Crime

    This paper attempts to explain one version of an empirical puzzle noted by Mankiw (2003): a Baumol-Tobin inventory-theoretic money demand equation predicts that the average adult American should have held approximately $551.05 in currency and coin in 1995, while data show an average of $100. The models in this paper help explain this discrepancy using two assumptions: (1) the probabilities of being robbed or pick-pocketed, or having a purse snatched, depend on the amount of cash held; and (2) there are costs of being robbed other than loss of cash, such as injury, medical bills, lost time at work, and trauma. Two models are presented: a dynamic, stochastic model with both instantaneous and decaying noncash costs of robbery, and a revised version of the inventory-theoretic model that includes one-period noncash costs. The former model yields an easily interpreted first-order condition for money demand involving various marginal costs and benefits of holding cash. The latter model gives quantitative solutions for money demand that come much closer to matching the 1995 data—$75.98 for one plausible set of parameters. This figure implies that consumers held approximately $96 billion less cash in May 1995 than they would have in a world without crime. The modified Baumol-Tobin model predicts a large increase in household money demand in 2005, mostly due to reduced crime rates.
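    The benchmark that generates the puzzle is the Baumol-Tobin square-root rule. A minimal sketch with illustrative parameters (assumptions for exposition, not the paper's calibration) shows how the predicted average cash holding is computed:

```python
from math import sqrt

def baumol_tobin_avg_cash(F, Y, i):
    """Average currency holdings under the Baumol-Tobin square-root rule.

    The optimal withdrawal size is C* = sqrt(2 F Y / i), and average
    holdings are C*/2, where F is the fixed cost per trip to the bank,
    Y is annual cash spending, and i is the nominal interest rate.
    """
    return sqrt(2 * F * Y / i) / 2

# Illustrative parameters (not the paper's): F = $2 per trip,
# Y = $15,000 of annual cash spending, i = 5 percent
print(baumol_tobin_avg_cash(F=2.0, Y=15_000, i=0.05))  # ≈ 547.7
```

    Predicted holdings in the several-hundred-dollar range against the observed $100 average is exactly the gap that the crime-cost extensions in the paper are designed to close.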

  • Working Paper No. 521 | November 2007
    Extending Oaxaca’s Approach

    This paper extends the famous Blinder and Oaxaca (1973) wage discrimination decomposition in several directions. First, the wage difference breakdown is not limited to two groups. Second, a decomposition technique is proposed that allows analysis of the determinants of the overall wage dispersion. The authors’ approach combines two techniques. The first of these is popular in the field of income inequality measurement and concerns the breakdown of inequality by population subgroup. The second technique, very common in the literature of labor economics, uses Mincerian earnings functions to derive a decomposition of wage differences into components measuring group differences in the average values of the explanatory variables, in the coefficients of these variables in the earnings functions, and in the unobservable characteristics. This methodological novelty allows one to determine the exact impact of each of these three elements on the overall wage dispersion, on the dispersion within and between groups, and on the degree of overlap between the wage distributions of the various groups.

    However, this paper goes beyond a static analysis insofar as it succeeds in breaking down the change over time in the overall wage dispersion and its components (both between and within group dispersion and group overlapping) into elements related to changes in the value of the explanatory variables and the coefficients of those variables in the earnings functions, in the unobservable characteristics, and in the relative size of the various groups.
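    The basic two-group Blinder-Oaxaca breakdown that the paper generalizes can be sketched on synthetic data (all values below are illustrative assumptions): the mean wage gap splits exactly into an endowments (explained) part and a coefficients (unexplained) part.

```python
import numpy as np

rng = np.random.default_rng(3)
nA, nB = 1000, 1000

# Synthetic workers: group A has both more schooling and higher returns to it
school_A = rng.normal(14, 2, nA)
school_B = rng.normal(12, 2, nB)
wage_A = 1.0 + 0.10 * school_A + rng.normal(0, 0.2, nA)
wage_B = 1.0 + 0.07 * school_B + rng.normal(0, 0.2, nB)

# Group-specific Mincerian earnings functions estimated by OLS
XA = np.column_stack([np.ones(nA), school_A])
XB = np.column_stack([np.ones(nB), school_B])
bA = np.linalg.lstsq(XA, wage_A, rcond=None)[0]
bB = np.linalg.lstsq(XB, wage_B, rcond=None)[0]

# Mean gap = (Xbar_A - Xbar_B)'b_B  +  Xbar_A'(b_A - b_B)
gap = wage_A.mean() - wage_B.mean()
endowments = (XA.mean(axis=0) - XB.mean(axis=0)) @ bB    # explained part
coefficients = XA.mean(axis=0) @ (bA - bB)               # "discrimination" part
print(gap, endowments, coefficients)  # gap equals the sum of the two parts
```

    Because each OLS fit with an intercept passes through the group means, the decomposition is an exact identity; the paper's contribution is to extend this logic to more than two groups and to the full wage distribution rather than just the mean.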

  • Working Paper No. 520 | November 2007

    Ragnar Nurkse was one of the pioneers in development economics. This paper celebrates the hundredth anniversary of his birth with a critical retrospective of his overall contribution to the field, in particular his views on the importance of employment policy in mobilizing domestic resources and the difficulties surrounding the use of external resources to finance development. It also demonstrates the affinity between Nurkse’s theory of mobilizing domestic resources and employer-of-last-resort proposals.

  • Working Paper No. 514 | September 2007

    This working paper examines the legacy of Keynes’s General Theory of Employment, Interest, and Money (1936) on the occasion of the 70th anniversary of its publication and the 60th anniversary of Keynes’s death. The paper incorporates some of the latest research by prominent followers of Keynes, presented at the 9th International Post Keynesian Conference in September 2006.

  • Working Paper No. 492 | March 2007
    Lucas’s Calculus of Hardship and Chooser-dependent, Non–Expected Utility Preferences

    In his presidential address to the American Economic Association, Robert Lucas claimed that the welfare costs of the business cycle in the United States equaled 0.05 percent of consumption. His calculation compared the utility of a representative consumer receiving actual per-capita consumption each year with that of a similar consumer receiving the expectation of consumption. To a risk-averse person, the latter path of consumption confers more utility, because it is less volatile. Applying Amartya Sen's chooser-dependent preferences to a non-expected utility case, I will counter Lucas's claim by arguing that people have different attitudes toward risk that is imposed and risk that is voluntarily taken on, and that policymakers, in carrying out public duties, must use sorts of reasoning different from those used by the optimizing consumers of neoclassical economic theory.
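    Lucas's figure follows from a standard approximation: for CRRA utility with risk aversion g, the fraction of consumption a consumer would give up to eliminate i.i.d. lognormal consumption risk with standard deviation sigma is roughly (g/2)·sigma². A sketch with conventional illustrative values (log utility and a volatility figure close to the one Lucas used):

```python
# Approximate welfare cost of consumption risk under CRRA utility:
#   lambda ≈ (g / 2) * sigma**2
g = 1.0        # log utility (risk aversion of 1)
sigma = 0.032  # approx. std. dev. of log US per-capita consumption around trend
cost = 0.5 * g * sigma**2
print(f"{cost:.2%} of consumption")  # ≈ 0.05%
```

    The critique in the paper is not about this arithmetic but about whether imposed risk and voluntarily assumed risk can be valued with the same expected-utility calculus.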

  • Working Paper No. 480 | November 2006

    This paper reviews the recently published doctoral thesis of Hyman P. Minsky, summarizing its main contributions to methodology and microeconomics. These were aspects of economics with which Minsky is not usually associated, but which lie at the foundation of his later work. They include critical remarks on Cambridge economics. The paper then draws out some antecedents of Minsky's ideas in the work of Henry Simons, and highlights the Marshallian monetary analysis that he adopted.

    Author(s): Jan Toporowski

  • Working Paper No. 474 | August 2006

    The essential insight Minsky drew from Keynes was that optimistic expectations about the future create a margin, reflected in higher asset prices, which makes it possible for borrowers to access finance in the present. In other words, the capitalized expected future earnings work as the collateral against which firms can borrow in financial markets or from banks. But, then, the value of long-lived assets cannot be assessed on any firm basis, as they are highly sensitive to the degree of confidence that markets have about certain events and circumstances that will unfold in the future. This means that any sustained shortfall in economic performance in relation to the level of expectations that are already capitalized in asset prices may promote the view that asset prices are excessive. Once the view that asset prices are excessive takes hold in financial markets, higher asset prices cease to be a stimulant. Initially debt-led, the economy becomes debt-burdened. In this article, it is argued that Keynes's views on the alternation of "bull" and "bear" sentiment and asset price speculation over the business cycle can explain two of Minsky's central propositions regarding business cycle turning points that have often been found less than fully persuasive: (1) that financial fragility increases gradually over the expansion; and (2) that the interest rate sooner or later increases, setting off a downward spiral that brings the expansion to an end.

    Author(s): Korkut A. Ertürk

  • Working Paper No. 452 | June 2006
    Properties of the Minskyan Analysis and How to Theorize and Model a Monetary Production Economy
    This is the first part of a three-part analysis of the Minskyan framework. Via an extensive review of the literature, this paper looks at 12 essential elements necessary for a good understanding of Minsky's theory, and argues that those elements are central to comprehending how a monetary production economy works. The paper also shows how important these 12 elements are for modeling the Minskyan framework, and how the omission of any one of them may be detrimental to an understanding of the essential dynamics that Minsky put forward: the Financial Instability Hypothesis.

  • Working Paper No. 448 | May 2006
    The Gibson paradox, long observed by economists and named by John Maynard Keynes (1936), is a positive relationship between the interest rate and the price level. This paper explains the relationship by means of cost-push inflation driven by the interest rate. In the model, spending is driven in part by changes in the rate of interest, and the central bank sets the interest rate using a policy rule based on the levels of output and inflation. The model shows that this cost-push effect of the interest rate intensifies destabilizing forces and can be involved in the generation of cycles.

  • Working Paper No. 444 | March 2006

    This paper elaborates a simple model of growth with a Taylor-like monetary policy rule that includes inflation targeting as a special case. When the inflation process originates in the product market, inflation targeting locks in the unemployment rate prevailing at the time the policy matures. Even though there is an apparent NAIRU and Phillips curve, this long-run position depends on initial conditions; in the presence of stochastic shocks, it would be path dependent. Even with an employment target in the Taylor Rule, the monetary authority will generally achieve a steady state that misses both its targets since there are multiple equilibria. With only one policy instrument, Tinbergen’s Rule dictates that policy can only achieve one goal, which can take the form of a linear combination of the two targets.

    Author(s): Thomas R. Michl

  • Working Paper No. 443 | February 2006

    This paper studies personality as a potential explanation for wage differentials between apparently similar workers. This follows initial studies by Jencks (1979) that suggest that certain personality traits, such as industriousness and leadership, have an impact on earnings. The paper aims to provide a theoretical framework within which these effects may be analyzed.

    The study begins by outlining four issues as a backdrop to the model: rationality, the industry, firms, and workers. A crucial factor in the model is the meme, a mental gene that affects personality. Taking these four factors into consideration, the Contested Exchange model of Bowles and Gintis (1990) is used and then adapted to study memetic effects on the wage rate. This is followed by an analysis of how memes may affect personality and thus earnings. Two issues require further study and resolution: (1) which traits create wage differentials; and (2) two-way causality: does personality affect the wage, or does a wage premium become an incentive for a person to adopt new memes?

    Author(s): Kaye K.W. Lee

  • Working Paper No. 438 | January 2006
    An Assessment after 70 Years

    This paper first examines two approaches to money adopted by John Maynard Keynes in his General Theory (GT). The first is the more familiar “supply and demand” equilibrium approach of Chapter 13 incorporated within conventional macroeconomics textbooks. Indeed, even post-Keynesians utilizing Keynes’s “finance motive” or the “horizontal” money supply curve adopt similar methodology. The second approach of the GT is presented in Chapter 17, where Keynes drops “money supply and demand” in favor of a liquidity preference approach to asset prices that offers a more satisfactory treatment of money’s role in constraining effective demand. In the penultimate section, I return to Keynes’s earlier work in his Treatise on Money (TOM), as well as the early drafts of the GT, to obtain a better understanding of the nature of money. I conclude with policy implications.

  • Working Paper No. 435 | January 2006

    The sharp exchanges that Keynes had with some of his critics on the loanable funds theory made it harder to appreciate the degree to which his thought was continuous with the tradition of monetary analysis that emanates from Wicksell, of which Keynes’s A Treatise on Money was a part. In the aftermath of the General Theory (GT), many of Keynes’s insights in the Treatise were lost or abandoned because they no longer fit easily in the truncated theoretical structure he adopted in his later work. A part of Keynes’s analysis in the Treatise which emphasized the importance of financial conditions and asset prices in determining firms’ investment decisions was later revived by Minsky, but another part, about the way self-sustained biases in asset price expectations in financial markets exerted their influence over the business cycle, was mainly forgotten. This paper highlights Keynes’s early insights on asset price speculation and its link to monetary circulation, at the risk, perhaps, of downplaying the importance of the GT.

    Author(s): Korkut A. Ertürk

  • Working Paper No. 415 | November 2004
    A Cointegration Method

    This paper derives measures of potential output and capacity utilization for a number of OECD countries, using a method based on the cointegration relation between output and the capital stock. The intuitive idea is that economic capacity (potential output) is the aspect of output that co-varies with the capital stock over the long run. We show that this notion can be derived from a simple model that allows for a changing capital-capacity ratio in response to partially exogenous, partially embodied technical change. Our method provides a simple and general procedure for estimating capacity utilization. It also closely replicates a previously developed census-based measure of US manufacturing capacity utilization. Of particular interest is that our measures of capacity utilization are very different from those based on aggregate production functions, such as the ones provided by the IMF.

    Author(s): Anwar M. Shaikh and Jamee K. Moudud
    Region(s): Europe
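    The cointegration-based procedure can be sketched on synthetic data (the data-generating process and all parameters below are illustrative assumptions): regress log output on the log capital stock, take the fitted value as potential output, and read capacity utilization off the exponentiated residual.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200

# Synthetic data: log capital follows a random walk with drift (a stochastic
# trend); log output shares that trend plus a stationary utilization cycle.
log_k = np.cumsum(0.01 + 0.02 * rng.normal(size=T))
cycle = np.zeros(T)
for t in range(1, T):
    cycle[t] = 0.8 * cycle[t - 1] + 0.03 * rng.normal()
log_y = 0.5 + 1.0 * log_k + cycle

# Cointegrating regression: potential output is the fitted value, and
# capacity utilization is the exponentiated residual.
X = np.column_stack([np.ones(T), log_k])
beta = np.linalg.lstsq(X, log_y, rcond=None)[0]
log_potential = X @ beta
utilization = np.exp(log_y - log_potential)

print(beta[1], utilization.mean())  # slope near 1; utilization hovers around 1
```

    Because output and capital share a common stochastic trend, the OLS slope of the cointegrating regression is superconsistent, and the stationary residual isolates the cyclical utilization component.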

  • Working Paper No. 413 | October 2004
    Heilbroner's Worldly Philosophy, Lowe's Political Economics, and the Methodology of Ecological Economics

    Ecological economics is a transdisciplinary alternative to mainstream environmental economics. Attempts have been made to outline a methodology for ecological economics, and it is probably fair to say that, at this point, ecological economics takes a "pluralistic" approach. There are, however, some common methodological themes that run through the ecological economics literature. This paper argues that the works of Adolph Lowe and Robert Heilbroner can inform the development of some of those themes. Both authors were aware of the environmental challenges facing humanity from quite early on in their work, and quite ahead of their time. In addition, both Lowe's Economics and Sociology (and related writings) and Heilbroner's "Worldly Philosophy" (itself influenced by this work of Lowe) recognized the endogeneity of the natural environment, the impact of human activity on the environment, and the implications of this for questions of method. Lowe and Heilbroner also became increasingly concerned with issues related to the environment over time, such that these issues became of prime importance in their frameworks. This work deals directly with ecological and environmental issues; both authors also dealt with other issues that relate to the environmental challenge, such as technological change. But the relevance of their work to ecological economists is not limited to the writings that explicitly address the environment or environmental challenges.

    Ecological economists have taken a pluralistic approach to methodology, but the common themes in this work regarding the importance and nature of vision, analysis (including structural analysis), scenarios, implementation, the necessity of working backwards, the role for imagination, the rejection of the positive/normative dichotomy, and so on, are all issues that have been elaborated in Lowe's work, in ways that are relevant to ecological economics. The goal of the paper is quite modest: to make ecological economists aware of the work of these two authors, and to get them interested enough to explore the possible contribution of these ideas to their own methodological approach.

  • Book Series | August 2004
    Edited and with an introduction by Dimitri B. Papadimitriou
    This unique volume presents, for the first time in publication, the original doctoral thesis of Hyman P. Minsky, one of the most innovative thinkers on financial markets. Dimitri B. Papadimitriou’s introduction places the thesis in a modern context and explains its relevance today. The thesis explores the relationship between induced investment, the constraints on financing investment, market structure, and the determinants of aggregate demand and business cycle performance. Forming the basis of Minsky’s subsequent development of financial Keynesianism and his “Wall Street” paradigm, the thesis investigates how relevant accelerator-multiplier models of investment are to the behavior of individual firms, whose investment decisions depend on their cost structures. Uncertainty, the coexistence of other market structures, and the behavior of the monetary system are also explored.


    In assessing the assumptions underlying the structure and coefficient values of the accelerator models frequently used, the book addresses their limitations and inapplicability to real-world situations where the effect of financing conditions on the balance sheet structures of individual firms plays a crucial and determining role for further investment. Finally, Minsky discusses his findings on business cycle theory and economic policy.

    This book will greatly appeal to advanced undergraduate and graduate students in economics, as well as to policymakers and researchers. In addition, it will prove to be valuable supplementary reading for those with an interest in advanced microeconomics.

    Published By: Edward Elgar Publishing, Inc.

  • Working Paper No. 405 | April 2004

    We address the finance motive and the determination of profits in the Monetary Theory of Production associated with the Circuitist School. We show that the "profit paradox" puzzle addressed by many authors who adopt this approach can be solved by integrating a simple Circuit model with a consistent set of stock-flow accounts. We then discuss how to reconcile some crucial differences between the Circuit approach and other Keynesian and post-Keynesian models.

  • Public Policy Brief Highlights No. 70A | November 2002
    Medical Practice Norms and the Quality of Care
    This brief considers the interaction between physician incentive systems and product market competition in the delivery of medical services via managed care organizations. At the center of the analysis is the process by which health maintenance organizations (HMOs) assemble physician networks and the role these networks play in the competition for customers. The authors find that although physician practice styles respond to financial incentives, there is little evidence that HMO cost-containment incentives cause a discernible reduction in care quality. They propose a model of the managed care marketplace that solves for both physician incentive contracts and HMO product market strategies in an environment of extreme information asymmetry: physicians perceive the quality of care they offer perfectly and their patients do not perceive it at all.

  • Public Policy Brief No. 70 | November 2002
    Medical Practice Norms and the Quality of Care

    This brief considers the interaction between physician incentive systems and product market competition in the delivery of medical services via managed care organizations. At the center of the analysis is the process by which health maintenance organizations (HMOs) assemble physician networks and the role these networks play in the competition for customers. The authors find that although physician practice styles respond to financial incentives, there is little evidence that HMO cost-containment incentives cause a discernible reduction in care quality. They propose a model of the managed care marketplace that solves for both physician incentive contracts and HMO product market strategies in an environment of extreme information asymmetry: physicians perceive the quality of care they offer perfectly and their patients do not perceive it at all.

  • Working Paper No. 353 | September 2002
    Racing to the Bottom or Pulling to the Top?

    The incentive contracts that managed care organizations write with physicians have generated considerable controversy. Critics fear that if informational asymmetries inhibit patients from directly assessing the quality of care provided by their physician, competition will lead to a "race to the bottom" in which managed care plans induce physicians to offer only minimal levels of care. To analyze this issue we propose a model of competition between managed care organizations. The model solves for both physician incentive contracts and HMO product market strategies in an environment of extreme information asymmetry: physicians perceive the quality of care perfectly, and patients do not perceive it at all. We find that even in this stark setting, managed care organizations need not race to the bottom. Rather, the combination of product differentiation and physician practice norms causes managed care organizations to race to differing market niches, with some providing high levels of care as a means of assembling large physician networks. We also find that relative physician practice norms, defined endogenously by the standards of medical care prevailing in a market, exert a "pull to the top" that raises the quality of care provided by all managed care organizations in the market. We conclude by considering the implications of our model for public policies designed to limit the influence of HMO incentive systems.

    Author(s):
    David J. Cooper, James B. Rebitzer

  • Working Paper No. 352 | September 2002

    This paper is concerned with two issues. First, it discusses some of the main problems and inferences the methodological approach of critical realism raises for empirical work in economics, while considering an approach adopted to try to overcome these problems. Second, it provides a concrete illustration of these arguments, with reference to our recent research project analyzing the single European currency. It is argued that critical realism provides a method that is partially appropriate to concrete levels of analysis, as illustrated by the attempt to explain the falling value of the euro. It is concluded that the critical realist method is inappropriate to the most abstract and fundamental levels of theory.

  • Working Paper No. 348 | June 2002

    In his Treatise on Money, John Maynard Keynes relied on two different premises to argue that the interest rate need not rise with rising levels of expenditure. One of these was the elasticity of the money supply, and the other was the interaction between financial and industrial circulation. A decrease (increase) in what Keynes called the bear position was similar in its impact to that of a policy-induced increase (decrease) in the money supply. In his General Theory, this second line of argument lost much of its force as it became reformulated under the rubric of Keynes's liquidity preference theory of interest. Assuming that the interest rate sets the return on capital, Keynes dismissed the effect of bull or bear sentiment in equity markets as a second-order complication that can be ignored in analyzing the equilibrium level of investment and output. The objective of this paper is to go back to this old theme from the Treatise and underscore its importance for the Keynesian theory of the business cycle.

    Author(s):
    Korkut A. Ertürk

  • Working Paper No. 347 | June 2002
    An Investigation into the Keynesian Roots of Milton Friedman's Monetary Thought and Its Apparent Monetarist Legacies

    It is widely perceived that today's conventional monetary wisdom, and the common practice of monetary policy based thereupon, is essentially "monetarist" by nature, if not by name. One objective of this paper is to assess whether monetarism has had a lasting effect on the theory and practice of monetary policy; another is to scrutinize the key dividing lines between Milton Friedman's monetary thought and that of John Maynard Keynes. Among the paper's main theoretical findings are that the key issue is the theory of interest, which is at the root of differences in approach to money demand and liquidity preference. Similarities are more pronounced with respect to the supply of money and monetary policy control issues. However, while Keynes favored a stabilized wage unit combined with a flexible central bank to steer interest rates and aggregate demand, Friedman advocated a stabilized central bank combined with a free interest rate and employment determination in financial and labor markets, respectively. Additional differences arise at the practical and empirical levels: the dynamics of adjustment processes and expectation formation on the one hand, and the relative efficiency and riskiness of market-driven versus government-guided adjustments on the other. The puzzling fact is that, despite today's dominant market-enthusiast ideology, Friedman's idea of delegation—not to independent central bankers, but to the markets—enjoys remarkably little popularity.

  • Working Paper No. 340 | October 2001

    We studied the effect of physician incentives in an HMO network. Physician incentives are controversial because they may induce doctors to make treatment decisions that differ from those they would choose in the absence of incentives. We set out a theoretical framework for assessing the degree to which incentive contracts do, in fact, induce physicians to deviate from a standard guided only by patient interest and professional medical judgment. Our empirical evaluation of the model relies on details of the HMO's incentive contracts and access to the firm's internal expenditure records. We estimate that the HMO's incentive contract provides a typical physician an increase, at the margin, of $.10 in income for each $1.00 reduction in medical utilization expenditures. The average response is a 5 percent reduction in medical expenditures. We also find suggestive evidence that financial incentives linked to commonly used "quality" measures may stimulate an improvement in measured quality.

    Author(s):
    Martin Gaynor, James B. Rebitzer, Lowell J. Taylor

  • Working Paper No. 339 | October 2001

    This paper addresses the problem of the conceptualization of social structure and its relationship to human agency in economic sociology. The background is provided by John Maynard Keynes's observations on the effects of uncertainty and conventional behavior on the stock market; the analysis consists of a comparison of the social ontologies of the French Intersubjectivist School and the Economics as Social Theory Project in the light of these observations. The theoretical argument is followed by concrete examples drawn from a prominent recent study of the stock market boom of the 1990s.

    Author(s):
    Jörg Bibow, Paul Lewis, Jochen Runde

  • Public Policy Brief Highlights No. 64A | June 2001
    A Study of the Effects of Campaign Finance Reform
    Proposals for campaign finance reform are essentially based on the belief that political influence can be bought with financial donations to a candidate’s campaign. But do contributions really influence the decisions of legislators once they are in office? In this brief, Christopher Magee examines the link between campaign donations and legislators’ actions. His results suggest that political action committees donate campaign funds to challengers in order to affect the outcome of the election by increasing the challenger’s chances of winning. These contributions have a large effect on the election outcome but do not seem to affect challengers’ policy stances. In contrast, campaign contributions to incumbents do not raise their chances of being reelected and seem to be given with the hope of gaining influence.
    Author(s):
    Christopher Magee

  • Public Policy Brief No. 64 | June 2001
    A Study of the Effects of Campaign Finance Reform

    Proposals for campaign finance reform are essentially based on the belief that political influence can be bought with financial donations to a candidate’s campaign. But do contributions really influence the decisions of legislators once they are in office? In this brief, Christopher Magee examines the link between campaign donations and legislators’ actions. His results suggest that political action committees donate campaign funds to challengers in order to affect the outcome of the election by increasing the challenger’s chances of winning. These contributions have a large effect on the election outcome but do not seem to affect challengers’ policy stances. In contrast, campaign contributions to incumbents do not raise their chances of being reelected and seem to be given with the hope of gaining influence.

    Author(s):
    Christopher Magee

  • Working Paper No. 327 | March 2001
    Evidence from the 1870 and 1880 Censuses

    Using unpublished data contained in samples from the manuscripts of the 1870 and 1880 censuses of manufactures—the earliest comprehensive estimates available—this study examines the extent and correlates of part-year manufacturing during the late 19th century. While the typical manufacturing plant operated full-time, part-year operation was not uncommon; its likelihood varied across industries and locations and with plant characteristics. Workers in such plants received somewhat higher monthly wages than those in firms that operated year-round, partially compensating them for their losses and possible inconvenience.

    Author(s):
    Jeremy Atack, Fred Bateman, Robert A. Margo

  • Book Series | February 2001
    Edited by William Lazonick and Mary O’Sullivan

    How can we explain the persistent worsening of the income distribution in the United States in the 1980s and 1990s? What are the prospects for the reemergence of sustainable prosperity in the American economy over the next generation? In addressing these issues, this book focuses on the microeconomics of corporate investment behavior, especially as reflected in investments in integrated skill bases, and the macroeconomics of household saving behavior, especially as reflected in the growing problem of intergenerational dependence of retirees on employees. Specifically, the book analyzes how the combined pressures of excessive corporate growth, international competition, and intergenerational dependence have influenced corporate investment behavior over the past two decades. Part One sets out a perspective on how corporate investment in skill bases can support sustainable prosperity. Part Two presents studies of investments in skill bases in the machine tool, aircraft engine, and medical equipment industries. Part Three provides a comparative and historical analysis of corporate governance and sustainable prosperity in the United States, Japan, and Germany. By integrating a theory of innovative enterprise with in-depth empirical analyses of industrial development and international competition, Corporate Governance and Sustainable Prosperity explores the relation between changes in corporate resource allocation and the persistence of income inequality in the United States in the 1980s and 1990s. Contributors to the volume include Beth Almeida, Robert Forrant, Michael Handel, William Lazonick, Philip Moss, Mary O’Sullivan, and Chris Tully. Editors Lazonick and O’Sullivan are Levy Institute research associates, as is contributor Handel.

    Published By: Palgrave Macmillan, Ltd.

  • Working Paper No. 318 | December 2000
    British Resistance to American Multilateralism

    Fiftieth-anniversary explanations for the efficacy of the GATT imply that the institution's longevity is testimony to the free trade principles upon which it is based. In this light, the predominantly American architects of the system figure as free trade visionaries who benevolently imposed postwar institutions of international cooperation on their war-torn allies. This paper takes issue with such a characterization. Instead, the success of the GATT has been crucially dependent upon its ability to generate pragmatic and detailed policy via a uniquely inclusive forum. An effective institutional procedure, not free trade dogma, has proved key to its endurance—and this feature has been in place since the institution's inception.

    Author(s):
    James N. Miller

  • Working Paper No. 317 | November 2000
    Evidence from the 1880 Census of Manufactures

    Data from the manuscript census of manufacturing are used to estimate the effects of the length of the working day on output and wages. We find that the elasticity of output with respect to daily hours worked was positive but less than one—implying diminishing returns to increases in working hours. When the annual number of days worked is held constant, the average annual wage is found to be positively related to daily hours worked, but again the elasticity is less than one. At the modal value of daily hours (10 hours per day), it appears that, from the standpoint of employers, the marginal benefit of a shorter working day (a lower wage bill) was approximately offset by the marginal cost (lower output).

    Author(s):
    Jeremy Atack, Fred Bateman, Robert A. Margo
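
    The core finding above is a constant-elasticity (log-log) relationship with a slope below one. A toy regression on simulated plant data (not the census sample; the 0.8 elasticity and all numbers below are invented for illustration) shows how such an elasticity is estimated:

```python
import numpy as np

# Toy log-log (constant-elasticity) regression of output on daily hours.
# Data are simulated, NOT the 1880 census: plants are built with a true
# hours elasticity of 0.8, i.e. less than one, so a 10% rise in daily
# hours raises output by only about 8%.
rng = np.random.default_rng(0)
n = 500
log_hours = np.log(rng.uniform(8, 12, n))            # daily hours, 8 to 12
log_output = 1.0 + 0.8 * log_hours + rng.normal(0, 0.05, n)

# OLS in logs: the slope coefficient is the estimated elasticity.
X = np.column_stack([np.ones(n), log_hours])
beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
elasticity = beta[1]
print(f"estimated hours elasticity: {elasticity:.2f}")  # close to 0.8
```

    An estimated slope near 0.8 captures the diminishing returns the abstract describes: lengthening the working day by 10 percent raises output by only about 8 percent.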

  • Working Paper No. 316 | November 2000
    A Reassessment of Export-led Growth

    This paper contrasts the different approaches to export-led growth used by Harrod and Thirlwall. It argues that, unlike Thirlwall's model, Harrod emphasized the importance of both demand- and supply-sides in his analysis of growth. The fundamental difference between the two authors lies in their differing characterizations of the long run. While both authors assume unemployment, Thirlwall's long run is presumably consistent with excess capacity, while Harrod's warranted path assumes normal capacity growth. Harrod's perspective suggests that if the warranted growth rate exceeds the natural growth rate, desired saving is excessive relative to the amount that is necessary to maintain the economy along its maximum growth path. Under these circumstances, rising exports have the beneficial effect of adjusting the warranted path to the economy's maximum growth path while, at the same time, giving a boost to the actual growth rate. If, however, the warranted growth rate is lower than the natural rate, then rising net exports have to be accompanied by appropriate fiscal and/or tax policies to raise warranted growth. In either case, the long-run growth rate is regulated by the social saving rate (other things equal). Data for a number of OECD countries tend to confirm this implication of what might be called a classical-Harrodian perspective. The Harrodian growth tradition suggests that growth in an open economy, with normal capacity utilization and persistent cycles, can be characterized as export-oriented rather than export-led since both demand- and supply-side factors are important.

    Author(s):
    Jamee K. Moudud
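
    For reference, the two benchmark growth rates contrasted above can be written in standard textbook notation (the symbols are conventional, not taken from the paper):

```latex
% Harrod's warranted rate: growth consistent with the social saving
% rate s and the normal (desired) capital-output ratio v:
g_{w} = \frac{s}{v}
% Thirlwall's balance-of-payments-constrained rate: export growth x
% divided by the income elasticity of import demand \pi:
g_{bp} = \frac{x}{\pi}
```

    The abstract's cases then read off the comparison of the warranted rate with the natural rate: rising exports help align the warranted path with the maximum growth path when the warranted rate is the higher of the two, while fiscal and/or tax policy must raise the warranted rate when it falls short.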

  • Working Paper No. 309 | August 2000
    The Views of Jerome Levy and Michal Kalecki

    Profits are the incentive for production and therefore employment in almost all of the world's economies; they also may represent exploitation of workers and consumers. Jerome Levy, using a complex process, derived the profits identity during the years 1908–1914. Michal Kalecki, taking advantage of the development of national accounting, derived it in the 1930s. Levy viewed the equation as a tool for developing policies that would enable capitalist economies to achieve high rates of employment. Recent American experience gives weight to his views. Kalecki's insights from the identity strengthened his belief that unemployment was inescapable under capitalism. He would find empirical support in Europe's high unemployment rates during the past two decades.

    Author(s):
    S. Jay Levy
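
    The identity both men arrived at can be stated in its now-standard Kaleckian national-accounting form (a textbook rendering, not a quotation from either author):

```latex
% Gross profits after tax equal the sum of the spending flows on the right:
P = I + C_{c} + (G - T) + (X - M) - S_{w}
% P: gross profits                I: gross investment
% C_c: capitalists' consumption   G - T: government deficit
% X - M: export surplus           S_w: workers' saving
```

    The two men read the same right-hand side differently, as the abstract notes: for Levy it offered a menu of policy levers for sustaining profits and employment, while for Kalecki it reinforced his pessimism about employment under capitalism.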

  • Working Paper No. 303 | June 2000
    A Minskian Analysis of Japan's Lost Decade

    This paper asks two questions: First, can we explain Japan's ongoing financial crisis by means of an institutional analysis similar to the one Hyman P. Minsky applied to the American economy during the postwar period? Second, what are the implications of this analysis for what is going on in the Canadian and American economies today?

    To answer the first question, we develop an interpretation of Japan's postwar history, in particular, the evolution of its financial institutions that we believe fits Minsky's institutional analysis. We begin by identifying three broad periods in Japan's postwar economic history through 1990. We label the 1945 to 1972 period as “stable,” thanks in part to tight regulation of the financial and trading system. By the early 1970s and through the end of the decade, however, these systems were under severe strain for both internal and external reasons. Internally, Japan's largest companies were relying less on bank credit to finance investment and trade and more on retained earnings. This affected the financial system by reducing bank profitability and forcing banks to seek business elsewhere, notably in the real estate sector. Externally, Japan suffered from the collapse of the Bretton Woods exchange-rate system, increasing trade tensions with the United States that led to “forced” deregulation, and what were two very difficult oil shocks for a country unusually reliant on oil imports. During the last period, from 1980 to 1990, Japan's economy easily outperformed the OECD countries, leading to yet more pressure from abroad to deregulate and stimulate domestic demand. Ultimately, we suggest that the country's financial system was not able to adapt adequately to a rapidly changing domestic and international setting. This created a powder keg for ill-considered fiscal and monetary policy (surpluses and high interest rates) and fertile ground for the financial crisis that took root in 1990 and persists to some extent today.

    To answer the second question, we draw parallels between events leading up to Japan's 1990 stock market crash and events in the United States and Canada today, with particular emphasis on the current policy stance in both countries toward budget surpluses and inflation. We argue there are good reasons to be concerned that history may be about to repeat itself.

    Author(s):
    Marc-André Pigeon

  • Working Paper No. 302 | June 2000
    A Neo-Kaldorian Model

    This paper presents a simple growth model grounded in a stock-flow monetary accounting framework. The framework ensures that all stocks and all flows are accounted for and that the real and financial sides of the economy are coherent with one another. Credit, money, equities and stocks of real capital link periods of time with one another in articulated sequences. Wealth is allocated between assets on Tobinesque principles but no equilibrium condition is necessary to bring the "demand" for money into equivalence with its "supply." Growth and profit rates, as well as valuation, debt and capacity utilization ratios are analysed using simulations in which a growing economy is assumed to be shocked by changes in interest rates, liquidity preference, real wages, and the parameters which determine how firms finance investment.

    Author(s):
    Marc Lavoie, Wynne Godley
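
    The accounting discipline described above can be illustrated with a far simpler model than the paper's. The sketch below is not the neo-Kaldorian model (it has no equities, bank credit, or Tobinesque portfolio choice); it is the most stripped-down stock-flow-consistent economy in the Godley tradition, with hypothetical parameter values, shown only to make the bookkeeping concrete:

```python
# A minimal stock-flow-consistent simulation in the Godley tradition:
# a government spends fiat money and households consume out of disposable
# income and accumulated wealth. The point it illustrates is the one in
# the abstract: every flow accumulates into a stock, and the accounts
# close in every period.

alpha1, alpha2 = 0.6, 0.4    # consumption out of income, out of wealth
theta, G = 0.2, 20.0         # tax rate, government spending

H = 0.0                      # household wealth (= stock of government money)
for t in range(200):
    # Solve the period: Y = C + G, with C = alpha1*(1-theta)*Y + alpha2*H(-1)
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    T = theta * Y            # taxes
    YD = Y - T               # disposable income
    C = alpha1 * YD + alpha2 * H
    # Household saving equals the government deficit to the last cent --
    # the stock-flow consistency check:
    assert abs((YD - C) - (G - T)) < 1e-9
    H += YD - C              # the flow of saving accumulates into a stock

print(round(Y, 1), round(H, 1))   # settles at Y = G/theta = 100, H = 80
```

    The steady state is pinned down by the stocks: income stops changing only when wealth stops changing, i.e., when the budget balances and taxes equal spending, giving Y = G/theta.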

  • Working Paper No. 301 | May 2000

    It is commonly assumed that jobs in the United States require ever greater levels of skill and, more strongly, that this trend is accelerating as a result of the diffusion of information technology. This has led to substantial concern over the possibility of a growing mismatch between the skills workers possess and the skills employers demand, reflected in debates over the need for education reform and the causes of the growth in earnings inequality. However, efforts to measure trends have been hampered by the lack of direct measures of job skill requirements. This paper uses previously unexamined measures from the Quality of Employment Surveys and the Panel Study of Income Dynamics to examine trends in job education and training requirements and to provide a validation tool for skill measures in the Dictionary of Occupational Titles, whose quality has long been subject to question. Results indicate that job skill requirements increased steadily from the 1970s through the 1990s but that there has been no acceleration in recent years that might explain the growth in earnings inequality. Nor has there been any dramatic change in the number of workers who are undereducated. These results reinforce the conclusions of earlier work that reports of a growing skills mismatch are likely overdrawn.

    Author(s):
    Michael J. Handel

  • Working Paper No. 299 | March 2000

    The decision about how much to spend on a public program depends on the answers to two questions: Should the government pursue the goal of this program? And, given that the goal should be adopted, what is the optimal level of spending to achieve it? If the answer to the first question is yes, it might seem desirable to set spending at the optimal level to achieve the goal. However, spending is often not set at that level, and there is likely to be an underfunding bias. This paper uses the median voter theorem to demonstrate that the level approved does not depend solely on the amount supporters think is necessary. Opponents of the program's goal and supporters who favor relatively less spending than other supporters do may form a coalition that ensures the approved level of spending will be lower than the level most supporters think is optimal. The more opponents there are, and the more disagreement there is among supporters about the optimal level, the greater the difference between the actual level of spending and the amount the typical supporter believes is optimal.

    Author(s):
    Karl Widerquist
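
    The coalition logic above can be made concrete with a toy majority-rule example (the numbers are hypothetical, and single-peaked preferences are assumed so that the median voter theorem applies):

```python
# Toy illustration of the underfunding bias: hypothetical numbers, not
# from the paper. Supporters disagree about the optimal budget; opponents
# prefer zero spending. With single-peaked preferences, majority rule
# selects the median of ALL voters' ideal points, which can sit well
# below what the typical supporter thinks the program needs.
import statistics

supporters = [60, 80, 100, 120, 140]   # preferred spending levels
opponents = [0, 0, 0, 0]               # prefer no spending at all

typical_supporter = statistics.median(supporters)     # what supporters want
outcome = statistics.median(supporters + opponents)   # what majority rule picks

print(typical_supporter, outcome)
```

    With four opponents at zero, the median of all nine ideal points falls to 60, well below the 100 the typical supporter regards as optimal; adding more opponents pulls the approved level down further.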

  • Working Paper No. 292 | December 1999
    Campaign Contributions, Policy Choices, and Election Outcomes

    This paper examines political action committees' motivations for giving campaign contributions to candidates for political office. First, the paper estimates the effect of campaign contributions received by candidates on the outcomes of the 1996 elections to the United States House of Representatives. Next, the paper uses a Congressional Quarterly survey of candidates' policy positions to determine the impact of contributions on the policy stances adopted by the candidates. The empirical results suggest that political action committees donate campaign funds to challengers in order to affect the outcome of the election. Campaign contributions received by challengers have a large impact on the election outcome but do not affect the challengers' policy stances on any of the five issues examined in this paper. Campaign contributions to incumbents do not raise their chances of election, however, and affect their policy decisions on only one issue. Some evidence is presented that PAC contributions to incumbents may be given primarily in order to secure unobservable services for the political action committees.

    Author(s):
    Christopher Magee

  • Working Paper No. 275 | July 1999

    In this paper, the authors discuss Minsky's analysis of the evolution of one variety of capitalism—financial capitalism—which developed at the end of the nineteenth century and was the dominant form of capitalism in the developed countries after World War II. Minsky's approach, like those of Schumpeter and Veblen, emphasized the importance of market power in this stage of capitalism. According to Minsky, modern capitalism requires expensive and long-lived capital assets, which, in turn, necessitate financing of positions in these assets as well as market power in order to gain access to financial markets. It is the relation between finance and investment that creates instability in the modern capitalist economy. Financial capitalism emerged from World War II with an array of new institutions that made it stronger than ever before. As the economy evolved, it moved from this more successful form of financial capitalism to the fragile form of capitalism that exists today.

  • Working Paper No. 265 | March 1999

    This paper demonstrates that the terms of trade are determined by the equalization of profit rates across international regulating capitals, for socially determined national real wages. This provides a classical/Marxian basis for the explanation of real exchange rates, based on the same principle of absolute cost advantage which rules national prices. Large international flows of direct investment are not necessary for this result, since the international mobility of financial capital is sufficient. Such a determination of the terms of trade implies that international trade will generally give rise to persistent structural trade imbalances covered by endogenously generated capital flows which will fill any existing gaps in the overall balance of payments. It also implies that devaluations will not have a lasting effect on trade balances, unless they are also attended by fundamental changes in national real wages or productivities. Finally, it implies that neither the absolute nor the relative version of the Purchasing Power Parity hypothesis (PPP) will generally hold, with the exception that the relative version of PPP will appear to hold when a country experiences a relatively high inflation rate. Such patterns are well documented, and in contrast to comparative advantage or PPP theory, the present approach implies that the existing historical record is perfectly coherent. Empirical tests of the propositions advanced in this paper have been conducted elsewhere, with good results.

    Author(s):
    Anwar M. Shaikh

  • Working Paper No. 263 | February 1999
    A Historical Perspective of European Economic and Monetary Integration

    This paper traces the history and the institutional background of European integration through the establishment of economic and monetary union in the European Union (EU). After the founding of the European Economic Community (EEC) in the late 1950s, attempts at monetary integration, and ultimately monetary union, tended to assume importance only in the wake of financial crises, returning to the status of a vague objective as soon as each crisis receded. In recent years, however, monetary integration has assumed greater urgency. Economic union, by contrast, has followed a smoother transition.

    Economic integration was used after the Second World War to realize political goals, chiefly to anchor West Germany within the western European alliance. Since that time the economies of member states have slowly integrated. The economic environment of the 1950s is a far cry from the integrated community of today. In the 1950s European currencies were not convertible and domestic trade was highly protected. Intra-European trade was based on bilateral clearing arrangements institutionalized by the European Payments Union. Today EU currencies are fully convertible; capital controls, intra-EU tariffs, and quotas have been eliminated; and the single market has been completed.

    Monetary union has gone through a number of stages. The Werner Plan of the early 1970s, which set the goal of economic and monetary union by the end of the decade, was only partially implemented. Its failure can be put down to unfavorable international economic conditions and poor institutional structures. In 1979 a new monetary initiative, the European Monetary System (EMS), was launched. It struggled through its initial phase until it was replaced by the current euro arrangements. These successive stages ultimately culminated in the Maastricht Treaty, which laid out a precise path and timetable for economic and monetary union.

  • Working Paper No. 261 | January 1999

    This paper extends earlier work that argued that liquidity preference theory should be interpreted as a theory of value. Here I will argue that two theories of value are needed for analysis of a monetary production economy: the labor theory of value and the liquidity preference theory of value. Both Keynes and Marx were trying to develop a monetary theory of production; Marx, of course, adopted a labor theory of value in his analysis, and it was previously argued that Keynes adopted a liquidity preference theory in his. A monetary theory of production should adopt both, however, and I will argue that Keynes seems to have recognized this. Further, Keynes did adopt labor hours as the measure of value and said he agreed that labor produces all value. I admit it is still a leap to claim that Keynes accepted both theories of value. Instead, I argue he should have adopted both and will show that this is consistent with the purposes of the General Theory.

  • Public Policy Brief No. 47 | December 1998
    An Ethical Framework for Cost-Effective Medicine

    HMO medicine sets up an inevitable conflict between the physicians’ traditional fiduciary role and the financial interests of the health plan and its physicians. Regulatory interventions, such as the formulation of rules regarding clinical practice, put government in a micromanagement role it cannot hope to perform well. Government instead should focus on building a regulatory framework to protect patients that would deal with the ethical problems that flow from the very design of HMO medicine. It should address fundamental issues, principally, the financial incentives under which HMO physicians work, restrictions on communication with patients about care options not covered by their health plan, accountability for decisions to withhold care, and the return of care decisions to the province of the physician. The challenge for regulators is to retain the power of the economic incentive to encourage cost-conscious practice, but to separate it from the welfare of patients.

    Author(s): Walter M. Cadette

  • Public Policy Brief Highlights No. 47A | December 1998
    An Ethical Framework for Cost-Effective Medicine
    HMO medicine sets up an inevitable conflict between the physicians’ traditional fiduciary role and the financial interests of the health plan and its physicians. Regulatory interventions, such as the formulation of rules regarding clinical practice, put government in a micromanagement role it cannot hope to perform well. Government instead should focus on building a regulatory framework to protect patients that would deal with the ethical problems that flow from the very design of HMO medicine. It should address fundamental issues, principally, the financial incentives under which HMO physicians work, restrictions on communication with patients about care options not covered by their health plan, accountability for decisions to withhold care, and the return of care decisions to the province of the physician. The challenge for regulators is to retain the power of the economic incentive to encourage cost-conscious practice, but to separate it from the welfare of patients.
    Author(s): Walter M. Cadette

  • Working Paper No. 255 | October 1998

    This paper argues that economists require a particular concept of time to develop theory with greater explanatory power in describing and analyzing the sort of economy in which we are primarily interested: the monetary economy usually termed capitalism. Economists of various persuasions have recognized the importance of a concept of time, but we argue that a very specific concept is required. We propose a concept of time that is consistent with the perception and experience of time in a monetary or capitalist economy. This concept of time is determined by the debt cycle, and the length of this cycle is determined by the interest rate. Thus, while our proposed time measure is certainly historical and sequential in nature (months, years), it is not simply clock time: the length of economic time is fluid and is regulated by the interest rate, a variable of significance in dictating a host of socially important effects.
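    As a purely illustrative sketch (not a construct from the paper), one way to see how the interest rate could regulate the length of economic time is to treat the debt cycle as the time a compounding debt takes to double; the function and rates below are hypothetical:

    ```python
    import math

    def debt_doubling_time(r: float) -> float:
        """Years for a debt to double at annual interest rate r.
        A hypothetical proxy for an interest-rate-governed debt cycle."""
        return math.log(2) / math.log(1 + r)

    # Higher interest rates compress this measure of economic time:
    for r in (0.02, 0.05, 0.10):
        print(f"r = {r:.0%}: doubling time = {debt_doubling_time(r):.1f} years")
    ```

    Under this toy rule, raising the rate from 2 percent to 10 percent shortens the cycle from about 35 years to about 7, which is the sense in which economic time is "fluid" rather than clock time.
    
    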

  • Working Paper No. 254 | October 1998
    Abba Lerner and Adolph Lowe on Economic Method, Theory, History, and Policy

    This paper argues that the ideas of Abba Lerner and Adolph Lowe contain overlapping and complementary insights and themes that may contribute to the development of a new approach to macroeconomics. They also have rather specific practical policy implications. Lerner's notions of functional finance and money as a creature of the state are combined with Lowe's structural analysis to forge an approach to macroeconomic theory and policy that considers both aggregate proportionality and balance and sectoral relations and that addresses issues regarding monetary production and effective demand as well as ongoing structural and technological change. Such a "new instrumental macroeconomics," focusing on full employment, price stability, and a decent standard of living for all, has important points of contact with recent proposals promoting job opportunities through direct job creation with a public service corps that benefits communities while serving as a buffer stock of labor providing price stability.

  • Working Paper No. 251 | September 1998

    Paul Davidson is one of the best known and most influential post-Keynesian economists. He has insisted throughout his career that economists should focus on real-world problems and that the purpose of economic policy is to help society become more humane and civilized. He is also known for his insistence on adhering to the words and ideas of John Maynard Keynes. This article reviews his contributions to monetary theory, international economics, aggregate supply theory, and environmental economics.

    Author(s): Richard P. F. Holt, J. Barkley Rosser Jr., and L. Randall Wray

  • Working Paper No. 250 | September 1998

    Conventional exchange rate models are based on the fundamental hypothesis that, in the long run, real exchange rates will move in such a way as to make countries equally competitive. Thus they assume that, in the long run, trade between countries will be roughly balanced. The difficulty in assessing expectations about the consequences of trade arrangements (such as NAFTA or the EEC) is that these models perform quite poorly at an empirical level, making them an unreliable guide to economic policy. To have a sound foundation for economic policy requires operating from a theoretically grounded explanation of exchange rates that works well across a spectrum of developed and developing countries. This paper applies the theoretical and empirical foundation developed in Shaikh (1980, 1991, 1995), and previously applied to Spain, Mexico, and Greece (Roman 1997; Ruiz-Napoles 1996; Antonopoulos 1997), to the explanation of the exchange rates of the United States and Japan. Such a framework implies that it is a country's competitive position, as measured by the real unit costs of its tradables, that determines its real exchange rate. This determination of real exchange rates through real unit costs provides a possible explanation for why trade imbalances remain persistent and a policy rule-of-thumb for sustainable exchange rates. The aim is to show that a theoretically grounded, empirically robust, explanation of real exchange rate movements can be constructed that also can be of practical use to researchers and policymakers.

    Author(s): Anwar M. Shaikh and Rania Antonopoulos
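    The framework's rule of thumb can be sketched with invented numbers; `unit_cost_home` and `unit_cost_foreign` below are hypothetical real unit costs of tradables, not data from the paper:

    ```python
    def implied_real_exchange_rate(unit_cost_home: float,
                                   unit_cost_foreign: float) -> float:
        """Real exchange rate implied by relative real unit costs of
        tradables, per the competitive-position rule of thumb above."""
        return unit_cost_home / unit_cost_foreign

    # If home unit costs sit persistently above foreign ones, the framework
    # predicts a persistently uncompetitive real exchange rate and a lasting
    # trade imbalance rather than automatic rebalancing:
    print(implied_real_exchange_rate(1.10, 0.95))
    ```

    The point of the sketch is the framework's contrast with conventional models: nothing in the ratio pushes it back toward balanced trade unless real unit costs themselves change.
    
    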

  • Working Paper No. 247 | August 1998
    Measurement, Comparisons, and Implications

    The official poverty measure is based on the premise that all families should have sufficient income from either their own efforts or government support to boost them above a family-size-specific threshold. Given the current policy emphasis on self-reliance and a smaller role for government, this measure appears to have less policy relevance now than in prior years. We present here a new concept of poverty based on self-reliance—that is, the ability of a family, using its own resources, to support a level of consumption in excess of needs. Using a measure of net earnings capacity (NEC) to examine the size and composition of the self-reliant-poor population from 1975 to 1995, we find that self-reliance poverty has increased more rapidly than has official poverty. We find that families commonly thought to be the most impoverished—those headed by minorities, single women with children, and individuals with low levels of education—have the highest levels of self-reliance poverty, but have experienced the smallest increases in this poverty measure. Families commonly thought to be economically secure—those headed by whites, men, married couples, and highly educated individuals—have the lowest levels of self-reliance poverty, but have experienced the largest increases. We speculate that the trends in self-reliance poverty stem largely from underlying trends in the United States economy, in particular the relative decline of wage rates for whites and men and the rapid expansion of the college-educated demographic group.

    Author(s): Robert Haveman and Andrew Bershadker
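    A minimal sketch of the classification step, with invented NEC figures and thresholds (the paper's actual NEC construction adjusts for family resources and needs in far more detail):

    ```python
    def is_self_reliance_poor(net_earnings_capacity: float,
                              needs_threshold: float) -> bool:
        """A family is self-reliance poor when what it could earn using
        its own resources falls short of its family-size-specific needs."""
        return net_earnings_capacity < needs_threshold

    # Invented examples:
    families = [
        (18_000, 21_000),   # capacity below needs: self-reliance poor
        (35_000, 25_000),   # capacity above needs: not poor
    ]
    flags = [is_self_reliance_poor(nec, needs) for nec, needs in families]
    print(flags)  # [True, False]
    ```

    Note the contrast with the official measure: the comparison uses what a family *could* earn, not the income (own plus government support) it actually receives.
    
    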

  • Working Paper No. 239 | July 1998
    Confronting the Risks in Managed Care

    HMO medicine has been effective in controlling once-runaway health care costs. But it sets up inevitable conflict between patient care and the financial well-being of the health plan and of its employee or contract physicians. This paper looks at the ethical problems posed by managed care (in particular, at its incentives to physicians to economize on care), and points to a regulatory framework to provide consumer protection. The trend to capitated payments is especially problematic. It relieves the insurer from interfering in medical decision-making as a means of cost control, but it pits the interests of physicians directly against the interest of patients.

    The paper finds that policymakers should not try to micromanage HMO medicine, as they have done by mandating, for example, minimal hospital stays after childbirth. The real need is for regulatory oversight of financial incentives and disclosure. Health plans ought to be required to disclose the incentives under which their physicians are paid, to provide subscribers with honest information on health care coverage, and to be prohibited from imposing "gag rules" on physicians. Moreover, ERISA ought to be recast to hold health plans accountable for errant care decisions, as in many cases they are not now. The paper also concludes that purchasing cooperatives would play an especially useful role if managed care continues to take hold as the institutional norm.

    Author(s): Walter M. Cadette

  • Working Paper No. 223 | January 1998

    Visiting Scholar Malcolm Sawyer, of the University of Leeds, commemorates Michal Kalecki's 100th birthday by considering how Kalecki's macroeconomic analysis of developed capitalist economies should be adapted in light of the institutional changes that have occurred since he did his major work. Sawyer believes that although Kalecki's reputation rests on his theoretical work, his theorizing was firmly based on his perceptions of the institutional, political, and social realities of the economies he sought to analyze. According to Sawyer, Kalecki's work is best viewed as a mixture of "high-brow a-institutional" theory and "low-brow" institution-specific applied theory. Because it is "virtually inevitable that the analysis of any . . . 'middle-brow' theorist will be rendered to some degree obsolete by the passage of time," Sawyer sets out to evaluate to what extent Kalecki's theories are still relevant and how they might be adapted for the new millennium.

  • Working Paper No. 221 | December 1997
    Exploring the Tacit Fringes of the Policy Formulation Process

    Economist Adolph Lowe's instrumental analysis examines the process of policy formulation as a regressive procedure of discovery. Taking as given a predetermined desired end state, the task of an innovator is to discover the technical and social path from the present position to the end state. The role for the economist in policy formulation, therefore, is not simply to examine the results of current policy, but to discover the means that will lead to the desired end state. Lowe cites others before him who had held a similar perspective—philosopher Charles Sanders Peirce, mathematician George Polya, and chemist Michael Polanyi—but Lowe does not elaborate on the connection between his analysis and theirs. Visiting Scholar Mathew Forstater, of Gettysburg College, investigates the relationship between the work of these scientists and Lowe's instrumentalism.

  • Working Paper No. 212 | November 1997

    Authors Karl Widerquist and Michael A. Lewis use a "multischool" approach to poverty policy, asking the following question: Given the many proposed causes for poverty, and the conflicting theories about how potential solutions would work, what conclusions can we draw about policy? They conclude that the guaranteed income is the most efficient and comprehensive policy to address poverty.

    Author(s): Karl Widerquist and Michael A. Lewis

  • Working Paper No. 193 | May 1997

    No further information available.

    Author(s): Oren Levin-Waldman

  • Working Paper No. 175 | November 1996
    New Evidence on the Responsiveness of Business Capital Formation

    The responsiveness of business investment to user costs (interest rates, taxes, and depreciation rates) is important in determining the effect of fiscal policy and aggregate stabilization policy on the economy and for assessing the transmission mechanism of monetary policy to real economic variables. Although this responsiveness is central to the theoretical underpinnings of most economic models, empirical support for substantial responsiveness is lacking. In this working paper, Robert S. Chirinko of Emory University, Research Associate Steven M. Fazzari, of Washington University in St. Louis, and Andrew P. Meyer of the Federal Reserve Bank of St. Louis use micro data to evaluate the user cost elasticity of capital.

    The authors employ data obtained from the Compustat database on investment, cash flow, and sales for 4,112 firms for 1981 to 1991. They merge this with industry-level data obtained from Data Resources, Inc., on the user costs of 26 different capital assets. Unlike other studies in which user cost variables vary only over time and not across firms, Chirinko, Fazzari, and Meyer's user cost variables vary in both time-series and cross-sectional dimensions. The large number of firm-level observations in the Compustat data increases the precision of the estimates and allows a given parameter to be estimated over a relatively short time frame. The data also help to address questions of biases not easily dealt with when using aggregate time-series data.

    Author(s): Robert S. Chirinko, Steven M. Fazzari, and Andrew P. Meyer
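    The estimation idea can be illustrated with a toy log-log regression on simulated firm-level data; the "true" elasticity, sample size, and noise level below are all invented, and the paper's actual specification on the Compustat and DRI data is far richer:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4_000                                  # invented sample size
    true_elasticity = -0.5                     # invented "true" value
    log_user_cost = rng.normal(0.0, 0.3, n)    # varies across firms, not just time
    log_investment = (2.0 + true_elasticity * log_user_cost
                      + rng.normal(0.0, 0.1, n))

    # OLS of log investment on log user cost recovers the elasticity:
    X = np.column_stack([np.ones(n), log_user_cost])
    beta, *_ = np.linalg.lstsq(X, log_investment, rcond=None)
    print(f"estimated user cost elasticity: {beta[1]:.2f}")
    ```

    The cross-sectional variation in `log_user_cost` is what makes the toy elasticity well identified here, mirroring the authors' point that firm-level variation sharpens estimates relative to aggregate time series.
    
    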

  • Working Paper No. 156 | April 1996
    Still No Realignment

    The change in the composition of Congress resulting from the 1994 election was viewed by some Republicans as a "triumph of conservatism over the perceived abuses of liberalism." In this working paper, Resident Scholar Oren Levin-Waldman examines polling data to explore whether the rejection of Congressional incumbents was a function of their perceived corruption or a desire to elect representatives whose ideology better reflected those of the electorate. Levin-Waldman analyzes polling results in the context of two models that might explain the results of the 1994 election: a traditional model in which incumbents are rejected for failing to deliver on their campaign promises and a realignment model in which the rejection is part of a general pattern of political realignment.

    Realignments represent systemic changes in American politics, and they occur when an issue or issues polarizes voters significantly enough to motivate them to change party affiliation. Levin-Waldman points out that voter turnout in the 1994 election was not high. In addition, he notes that even if people were dissatisfied, there was no issue or set of issues that appeared to polarize voters. Neither the economy (and Clinton's handling of it) nor family financial situations appear to have been critical issues for voters. Levin-Waldman also finds that a majority of respondents felt that neither party could do a better job than the other. However, in reply to questions about solving specific problems (such as unemployment and health care), most voters in 1994 said that Republicans could do a better job—a reversal from 1992, when most respondents felt that Democrats could do a better job. Given the overwhelming Democratic victory of that year, Levin-Waldman questions whether the 1994 victory represents a trend.

    Author(s): Oren Levin-Waldman

  • Public Policy Brief No. 14 | September 1994
    Public Capital: The Missing Link Between Investment and Economic Growth

    Following up on findings by J. Bradford DeLong and Lawrence Summers that a robust statistical relationship exists between productivity and private sector investment in plant and equipment, the author explores whether there is also a connection between economic growth and public spending. She argues that public investment in infrastructure stimulates private sector investment in plant and equipment. By providing empirical proof that public and private investment are complements in production, Erenburg supplies the missing link that explicitly ties public infrastructure to economic growth.

    Author(s): Sharon J. Erenburg

  • Public Policy Brief No. 12 | May 1994
    Community-based Factoring Companies and Small Business Lending

    At a time when small businesses are suffering from a credit crunch, “niche” financial institutions are filling the void left by more traditional sources of financing, such as commercial banks. The authors argue that the most important of these niche players are community-based factoring companies, which are rapidly expanding from their client base in apparel and textiles to finance a range of firms in everything from electronics to health care. The purchase of accounts receivable by factors enhances the balance sheets of their clients, making it easier for the clients to obtain bank financing. Also, because factors are more interested in the creditworthiness of a client’s customers than of the client itself, they are willing to extend loans in excess of collateral to rapidly growing businesses. Because factors are becoming an increasingly important source of financing for small and start-up businesses, the authors propose that factors be encouraged to play a broader role in financing firms in distressed communities by (1) making some factors eligible for funding and assistance under legislation regulating community development financial institutions and (2) allowing investments by banks in factors to count toward compliance under the Community Reinvestment Act.