tag:blogger.com,1999:blog-59088308271350608522019-03-22T18:30:26.375-04:00Bond EconomicsBrian Romanchuk's commentary and books on bond market economics.Brian Romanchuknoreply@blogger.comBlogger77013BondEconomicshttps://feedburner.google.comtag:blogger.com,1999:blog-5908830827135060852.post-43811542744699151332019-03-21T13:33:00.000-04:002019-03-21T13:33:34.258-04:00Throwing In The Towel On A 4% Fed Funds Rate<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-omGW3908Nak/XJOe-5zoahI/AAAAAAAAECg/GJ0AcgWjVTMX0dYc4UnAXoYfee1W8c-1wCLcBGAs/s1600/c20190321_tsy10_fed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: U.S. Policy Rate, 10-year Treasury" border="0" data-original-height="500" data-original-width="600" src="https://1.bp.blogspot.com/-omGW3908Nak/XJOe-5zoahI/AAAAAAAAECg/GJ0AcgWjVTMX0dYc4UnAXoYfee1W8c-1wCLcBGAs/s1600/c20190321_tsy10_fed.png" title="" /></a></div><br />Since I do not do market forecasting, my comments on the actual state of bond markets have been sporadic. I just want to refresh my views, particularly since I have had a recent increase in readership. Although writing a book on recessions obviously skews my thinking, my argument is straightforward: the U.S. Treasury curve is priced so that the next big move is down, and the likely trigger for cuts would be a recession. That recession need not be imminent; if we look at the 10-year (as above), it just has to hit in the next 9 years or so.<br /><br /><a name='more'></a>Since that view is somewhat obvious, I will not belabour the point. At the moment, I am slogging through the deep theory parts of my book, and will return to the empirical side of recessions. 
At that point, I will have more interesting charts to share (although my book will likely truncate discussions to data ending in 2018).<br /><br />The quick summary of the theory is that the "interesting"* recessions are the result of a collapse of fixed investment, and this is quite often associated with a disruption in credit markets. (On paper, the credit markets could avoid turmoil; in recent decades, this has not been the case. That said, the credit market dislocations need not be as drastic as seen in 2008.)<br /><br />Fixed investment in the United States has not been extraordinarily strong, so the advantage is that there is less need for drastic cutbacks. If one wants to sound like an internet Austrian: if investment is low, the scope for malinvestment is necessarily smaller. Nevertheless, we live in a world dominated by multinational corporations; disruptions could hit from any quarter (and this sensitivity goes beyond the conventional way of assessing the openness of an economy by looking at trade flows). However, one need only look at the 1998 episode (note the curve inversion in the chart above) to see an example of the United States shrugging off overseas turmoil (the Asian Crisis).<br /><br />The downside to yields is determined by one's assessment of recession odds. What about the upside?<br /><br />The Federal Reserve has been very transparent, acting exactly as New Keynesian theory suggests they should. They believe their models, and their models assume that interest rates are a powerful lever for economic trends. The economy is stuck in some form of steady state, and so the level of the policy rate is not going to be far from where the models estimate it is supposed to be.<br /><br />The only things that cause a breakout to the upside in yields are:<br /><ul><li>we get a surge in inflation/growth; or</li><li>we get new Fed personnel who like to hike rates, and they don't need any stinkin' models to do that. 
<i>(One may look at some retired Fed presidents to find examples of such thinking.)</i></li></ul><div>The data have wiggles; perhaps seasonality issues created by the Financial Crisis persist. It would not take much to get Fed policymakers to be more optimistic. That said, the glacial pace of rate hikes suggests that it would take a really long time to get the overnight rate to 4%.</div><div><br /></div><div>Realistically, the main scare stories for big movements in bond yields are a change in personnel at the Fed, or a reaction to a hefty loosening of fiscal policy. Such changes have precedents, but you cannot forecast them by reading the tea leaves of time series.</div><div><br /></div><div>Incidentally, the current configuration could be seen as a vindication of Modern Monetary Theory. If one assumes that interest rate policy is not particularly potent, relatively tight fiscal policy will eventually smash the policy rate to the zero lower bound. If the real world does not resemble neoclassical model dynamics, the estimated neutral rate ends up acting like a moving average of historical rates. Unless we get a personnel change at the central bank, we end up with observed rates chugging along at low levels.</div><h2>Concluding Remarks</h2><div>Unless something big injects some excitement into the system, the policy rate will look like a long-term moving average of the historical policy rate. In turn, that becomes the mean to which bond yields revert.</div><br /><br /><b>Footnote:</b><br /><br />* Recessions are just a downturn in activity. There are a lot of ways this can happen, such as a hefty fiscal tightening. 
However, what we are interested in here are recessions that are the natural result of the processes of industrial capitalism.<br /><br />(c) Brian Romanchuk 2019<img src="http://feeds.feedburner.com/~r/BondEconomics/~4/JtVyV1ueoQg" height="1" width="1" alt=""/>Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com6http://www.bondeconomics.com/2019/03/throwing-in-towel-on-4-fed-funds-rate.htmltag:blogger.com,1999:blog-5908830827135060852.post-65354935614368079782019-03-20T07:00:00.000-04:002019-03-21T13:24:57.759-04:00Inherent Limitations Of Linear Economic Models<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-Ou8Ynxpt6d0/XJEw5nBwKQI/AAAAAAAAECQ/w8KWw_ICexILv9M-eY4jUPJIKSWOHgAXACKgBGAs/s1600/logo_DSGE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://2.bp.blogspot.com/-Ou8Ynxpt6d0/XJEw5nBwKQI/AAAAAAAAECQ/w8KWw_ICexILv9M-eY4jUPJIKSWOHgAXACKgBGAs/s1600/logo_DSGE.png" /></a></div>Linear models used to be a popular methodology in economics, such as the log-linearisations of dynamic stochastic general equilibrium (DSGE) models. Rather than look at particular models, it is simpler to examine the properties of linear models themselves to see why they are inherently unable to capture key features of recessions, or must rely on “unforecastable shocks” that represent an absence of theory about recessions. Since the author is unaware of anyone putting forth linear models as being useful in this context, this discussion is kept brief, and is perhaps only of historical interest. 
For example, if we want to examine why the Financial Crisis acted as a theoretical shock, we need to understand how it conflicted with the popular linear models of the time.<br /><i></i><br /><a name='more'></a><i>Note: This is an unedited draft of a section that will go into a chapter describing neoclassical theory in a book on recessions. The text refers to chapters and discussions that are not yet published. </i><br /><br /><i><b>UPDATE</b> Parts of this article will need a severe re-write. My belief when I wrote this article was that the Jordan canonical form was a standard part of every undergraduate linear algebra text. I engaged in hand-waving, under the assumption that anyone who knew about matrices would have once studied the Jordan canonical form. However, having consulted some introductory linear algebra texts in the local library, it has come to my attention that this is not the case. As such, the discussion of stability analysis needs to be taken out behind the barn and shot. To be clear, the assertions about the mathematics are correct; it is just that my attempt to explain it probably makes no sense to anyone who actually needs to read it to learn about the topic.</i><br /><h2>Introduction</h2>Although my approach here is generic, I will refer to a pre-crisis benchmark DSGE model, which is described in the European Central Bank working paper “An estimated stochastic dynamic general equilibrium model of the euro area,” by Frank Smets and Raf Wouters (URL: <a href="https://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp171.pdf">https://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp171.pdf</a>), published in August 2002. When one opens the paper, one is most likely mesmerised by the highly complex equations of sections 2.1 to 2.3, and the mathematical jargon. However, the actual model that is analysed in the empirical work is the much simpler linearised model of section 2.4. 
With a small amount of algebra, we can convert that linearised model into the generic form described below.<br /><br />Once we realise that this is the actual model of interest, one wonders what all the fuss about DSGE macro is about. There are no optimisations, there is no mathematical entity corresponding to a representative household, equilibrium is nowhere to be seen, etc. All those concepts appear in the back story in sections 2.1 to 2.3, and disappear once we jump to the linearised model. In other words, to the extent that models similar to the Smets-Wouters model failed, it had nothing to do with those problems. Instead, the failure lies in the nature of linear models.<br /><br />One problem with the formalism used by Smets-Wouters (which appeared to be common across the linearised DSGE model literature) is that matrix notation was avoided. Rather than write out equations with dozens of symbols, every other field of applied mathematics would just compress the equations into matrix algebra. This is far more compact, and it is easy to apply long-existing mathematical results to characterise the solution properties.<br /><br />The downside with matrix algebra is the assumption that the reader is familiar with it. That said, I would guess that a reader who is unfamiliar with matrix algebra would most likely have difficulties deciphering the Smets-Wouters article as well. If the reader of this text is unfamiliar with matrix algebra, I apologise. Certain things I write may be unintelligible, but it should be possible to get a rough understanding of my arguments. Unfortunately, I no longer own any textbooks that justify my description herein; my characterisations of the stability properties of linear systems are stated without proof in every textbook that remains in my possession. However, this theory should be covered in most textbooks on linear algebra, or introductory control systems texts that cover state space techniques. 
(Historically, introductory texts focused on frequency domain approaches.)<br /><h2>Linear State Space Models</h2>If we take the Smets-Wouters linear model, we see that we have equations that define the values of economic variables. At time <i>t</i>, they are given in terms of the values of the economic variables as well as external disturbances at time <i>t+1, t, t-1</i>. We stack all of these economic variables in a vector denoted <i>v</i>. Let <i>n</i> be the number of variables; <i>v(t)</i> is then an element of an <i>n</i>-dimensional space of real numbers.<br /><br />The dependence upon three points in time puts the set of equations outside the usual definitions of discrete time linear systems one would encounter in a linear systems textbook. However, we get around this by defining the state vector <i>x(t)</i> as being a vector of size <i>2n</i>, created by stacking the original vector <i>v(t)</i>, and its previous (lagged) value <i>v(t-1)</i>.<br /><br />Using some algebra (and shifting some equations by one time period, and assuming the inflation target is zero), we can convert equations (31)-(39) into the form of the canonical <i>2n</i>-dimensional time invariant discrete time linear system:<br /><br /><div style="text-align: center;"><i>x(t+1) = A x(t) + B d(t),</i></div><div style="text-align: center;"><i><br /></i></div>with <i>x</i> defined as above, and <i>d </i>being the vector of disturbances (with some disturbance series being time shifted to align to the canonical format)*, and <i>A, B</i> being appropriately sized matrices. In particular, <i>A</i> is a <i>2n×2n</i> matrix. If the inflation target is non-zero, we need to convert to using a canonical discrete time feedback control system representation, with the interest rate being the feedback variable, and the inflation target an external reference value. 
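The lag-stacking construction can be sketched numerically. The following is a hypothetical two-variable backward-looking system; the matrices are purely illustrative (they are not the Smets-Wouters estimates), and forward-looking terms are ignored for simplicity.

```python
import numpy as np

# Hypothetical system with one lag (illustrative matrices):
#   v(t+1) = A1 v(t) + A2 v(t-1) + B0 d(t)
A1 = np.array([[0.5, 0.1],
               [0.0, 0.6]])
A2 = np.array([[0.2, 0.0],
               [0.1, 0.1]])
B0 = np.array([[1.0],
               [0.5]])

n = A1.shape[0]

# Stack x(t) = [v(t); v(t-1)] to get the canonical first-order form
#   x(t+1) = A x(t) + B d(t), with A a 2n-by-2n matrix.
A = np.block([[A1, A2],
              [np.eye(n), np.zeros((n, n))]])
B = np.vstack([B0, np.zeros((n, 1))])

# Sanity check: iterating the stacked system reproduces the original recursion.
v_prev = np.array([1.0, -1.0])   # v(-1)
v_curr = np.array([0.5, 2.0])    # v(0)
d = np.array([0.3])              # d(0)

x = np.concatenate([v_curr, v_prev])
x_next = A @ x + B @ d

# Top block of x(t+1) is v(t+1); bottom block is a copy of v(t).
assert np.allclose(x_next[:n], A1 @ v_curr + A2 @ v_prev + B0 @ d)
assert np.allclose(x_next[n:], v_curr)
```

The bottom rows of <i>A</i> simply copy the current value into the lag slot, which is how the three-point time dependence is squeezed into the standard one-step form.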
The shift to the feedback control configuration has no real effect on the stability analysis, other than first having to embed the central bank feedback law into the closed loop system, and adjusting the B matrix to account for the reference variable (the inflation target).<br /><h2>Stability Analysis</h2>We are interested in the stability properties of this system. From the control systems perspective, the stability of the system is the most important property, trumping the effects of disturbances, for reasons that will become clear later. The system stability is entirely determined by the properties of matrix <i>A</i>; we can drop from consideration the matrix <i>B</i>. That is, we just look at the system:<br /><br /><div style="text-align: center;"><i>x(t+1) = Ax(t), x(0) = x_0</i>.</div><br />In English, the state variable starts at time zero at an initial condition <i>x_0</i> and then evolves by successively multiplying the vector by the matrix <i>A</i>. That is, <i>x(1) = A x_0, x(2) = A^2 x_0</i>, etc.<br /><br />We can then appeal to one of the crowning achievements of practically every introductory textbook on linear algebra: we can express every vector in the state space (which in this case is <i>2n</i>-dimensional) as a combination of <i>eigenvectors</i> and (unfortunately) <i>generalised eigenvectors</i>. For simplicity, I will skip the generalised eigenvectors, and hedge my statements about stability slightly to compensate.<br /><br />The (non-zero) vector <i>y</i> is an <i>eigenvector</i> of the matrix <i>A</i>, with an associated eigenvalue λ if:<br /><i>Ay = λy</i>.<br /><br />Why do we care? For the vector <i>y</i>, we can replace the matrix multiplication by a scalar multiplication. That is, if the initial state is <i>y</i>, the next state is <i>λy</i>, the next is λ^2<i>y</i>, etc. 
As can be seen, if λ is a real number greater than 1, the solution will march off to infinity (since <i>y</i> is assumed to be non-zero).<br /><br />Since we know we can decompose any vector in the state space as a linear combination of eigenvectors (and generalised eigenvectors) courtesy of the results in said linear algebra textbook, we can express the trajectory of any initial condition as a linear combination of the trajectories generated by starting at the eigenvectors/generalised eigenvectors.<br /><br />One complication to note is that the eigenvalues of a real matrix can contain pairs of complex numbers (complex conjugates). (A complex number is a number that has both a real part, and an imaginary part, where the imaginary part is a real number times the square root of -1. The square root of -1 is normally denoted <i>i</i>, but the systems engineering literature often follows the electrical engineering convention of using <i>j</i> for that concept. The explanation is that <i>i</i> is reserved for variables representing currents in electrical circuit analysis.) We can define stability concepts as follows.<br /><ul><li>The matrix <i>A</i> is <i>strictly stable</i> if the moduli of all eigenvalues are strictly less than 1.</li><li>The matrix <i>A</i> is <i>strictly unstable</i> if any eigenvalue has a modulus strictly greater than 1.</li><li>(If the maximum of the moduli equals 1 exactly, one needs to start worrying about those darned generalised eigenvectors in the state space decomposition).</li></ul>What happens if the <i>A</i> matrix is strictly unstable? We will be able to find subspaces of the state space for which:<br /><ol><li>If the eigenvalue is real, all initial conditions that start in that subspace will grow in an exponential fashion.</li><li>If the eigenvalue is complex, there will be a subspace of dimension 2 in which real-valued vectors follow a trajectory defined by a sinusoid multiplied by an exponentially growing factor. 
This means that each component of the vector can be written in the form <i>a^k sin(ωk + α), </i>with <i>a >1</i>. (A complex eigenvector would grow by the complex eigenvalue, but a pair of such vectors will define a real sinusoid).</li></ol>In plain English, it either blows up in a fashion like compounding interest, or is a sinusoid with a fixed frequency that is growing exponentially.<br /><br />Once again, since we can express any vector in the state space as a linear combination of eigenvectors (and generalised eigenvectors), the system solution will have a component that is growing in an exponential fashion if the component corresponding to any unstable eigenvalue is non-zero. (There may be sub-components of the solution that are decaying exponentially as well.)<br /><br />If we assume that the initial state is randomly distributed in a uniform fashion in the original state space, the probability of it having no component in the “unstable sub-space” is zero. That is, we will almost certainly see the trajectory eventually growing in an exponential fashion (“blowing up”).<br /><br />We can now go back to our economic model. One thing to note is that these linearised models are not in terms of levels; they describe rates of change. If the system were strictly unstable, we would almost certainly see trajectories of variables like inflation (etc.), marching off to infinity. (Note that I am referring to the closed model, in which the central bank reaction function is specified.) Since we do not see that behaviour in the real world historical data for the euro area, we have to assume that the estimated model cannot be strictly unstable. (If we were trying to fit data that contains a hyperinflation, this would not apply.) 
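The eigenvalue-based stability classification can be checked numerically in a few lines. The matrices below are purely illustrative (they are not estimated from any economic model); the unstable example is a scaled rotation, which generates exactly the exponentially growing sinusoid described above.

```python
import numpy as np

def classify_stability(A, tol=1e-9):
    """Classify the discrete-time system x(t+1) = A x(t) by its spectral radius."""
    rho = np.abs(np.linalg.eigvals(A)).max()
    if rho < 1 - tol:
        return "strictly stable"
    if rho > 1 + tol:
        return "strictly unstable"
    return "marginal (eigenvalue modulus equal to 1)"

# Strictly stable: upper-triangular, so the eigenvalues (0.9, 0.5) sit on the
# diagonal and all moduli are below 1.
A_stable = np.array([[0.9, 0.2],
                     [0.0, 0.5]])

# Strictly unstable: a rotation scaled by r = 1.05 has the complex eigenvalue
# pair 1.05 e^{±iω}, with modulus 1.05 > 1.
r, w = 1.05, 0.3
A_unstable = r * np.array([[np.cos(w), -np.sin(w)],
                           [np.sin(w),  np.cos(w)]])

print(classify_stability(A_stable))    # strictly stable
print(classify_stability(A_unstable))  # strictly unstable
```

Iterating `A_unstable` on any non-zero initial condition produces components of the form <i>1.05^k sin(0.3k + α)</i>, matching the form given in the text.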
If the system stability were on the knife edge (eigenvalues with a modulus of 1), the state variables would tend to form a random walk – drift in one direction or another in response to disturbances, without a tendency to revert to any particular level. If we look at the euro area data, that does not seem to be a plausible description either, as inflation has tended to remain near target. By implication, just a cursory glance at the data tells us that behaviour is consistent with a strictly stable system.<br /><br />For the system with disturbances, systems theory tells us that a strictly stable system will do a good job of rejecting those disturbances. There is a great deal of engineering intuition behind that argument; engineering systems are designed to be stable for a reason. Obviously, sufficiently large disturbances can knock any system around: no amount of control systems wizardry can overcome the forces generated by flying an aircraft into a cliff. However, if we only have moderate disturbances, there will be a temporary movement in the state variable from its steady state that will die down as the magnitude of the disturbance wanes. If the disturbance were random, we would expect it to wax and wane in this fashion.<br /><br />This concept is typically formalised in systems theory by looking at the gain from the disturbance signal to the state variables, where the magnitude of a signal is measured by its 2-norm: the root-mean-square of the time series. 
For a strictly stable system, the gain from disturbance to state variable is finite, but (roughly speaking) the gain will tend to infinity as the A matrix tends to the limit of being unstable.**<br /><br />As such, it is difficult to generate the patterns of deviations seen in economic data with the usual probability distributions for disturbances: we see small, erratic movements during lengthy expansions (in the modern era, developed country expansions have often lasted around ten years), with short-lived massive deviations. If disturbances were normally distributed (for example), we should see a wider range of deviations than we see in the data – “half recessions” or “mini-boomlets” – every few years.<br /><h2>How Can We Generate Recessions?</h2>There are two work-arounds to generate a trajectory that resembles a recession.<br /><ol><li>Allow some disturbances to have a probability of being very large at widely separated points in time.</li><li>Make the system matrices vary with time.</li></ol>The first is the standard interpretation. The Financial Crisis was just an “m-standard deviation” “unforecastable shock” – which resembles the terminology used by some dumbfounded commentators at the time.<br /><br />The second variant – making the system parameters vary with time – has some similarities to the “unforecastable shocks” in that the deviations have to generate large changes in behaviour at widely-separated points in time. There does not appear to be a way to characterise the change in parameters, since the in-recession time sample is so small. It just turns into an arbitrary shift to the model that allows it to fit any change in trajectory, which is a non-falsifiable methodology.<br /><br />It is hard to argue strongly against appealing to “random shocks” to explain wiggles in the data during an expansion. The issue is when we look at recessions. 
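The disturbance-rejection point can be illustrated with a toy simulation. A hypothetical strictly stable scalar system (purely illustrative, not fitted to any data) driven by i.i.d. Gaussian shocks produces symmetric wiggles around its steady state; deviations die down as the shocks wane, and there is no mechanism for the rare, large, one-sided episodes that recessions represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strictly stable scalar system x(t+1) = a x(t) + d(t),
# with |a| < 1 and i.i.d. standard normal disturbances d(t).
a = 0.8
T = 10_000
d = rng.normal(0.0, 1.0, size=T)

x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a * x[t] + d[t]

# The stationary standard deviation is sigma_d / sqrt(1 - a^2) ~= 1.67:
# deviations are bounded in a statistical sense, and wax and wane
# symmetrically around zero.
print(np.std(x))       # close to 1.67 for this seed
print(np.mean(x > 0))  # roughly 0.5: symmetric wiggles, no one-sided crashes
```

Fattening the tail of the disturbance distribution (work-around 1 in the list above) is the only way such a system can produce a recession-sized deviation, which is exactly the "unforecastable shock" story.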
As noted above, we need to use questionable probability distributions to generate recessions; they just appear because we forced the model to generate similar behaviour. The wide separation of recessions means that we cannot really hope to fit the tail of the probability distribution with any degree of confidence. However, that statistical argument is secondary (and explains why I have not spent time analysing it in any detail). As seen in my analysis of post-Keynesian theories <i>(note: this analysis will appear in earlier chapters of the book, and was only partly completed at the time of writing)</i>, there are predicted empirical regularities for recessions that show up in the data, such as debt buildup or fixed investment trends. If random shocks were truly generating recessions, they should happen at any time, and have no related empirical regularities (other than the footprint of the shock itself in the modelled time series). These models have literally nothing to say about recessions, other than non-forecastability (admittedly, my own view).<br /><br />These pre-Financial Crisis models have been heavily lambasted in heterodox academic publications, as well as in popular accounts. Meanwhile, the neoclassicals do not defend them particularly loudly (although actual admissions of the heterodox scholars being correct are rather thin on the ground). Therefore, I see no reason to explain the model weaknesses any further, at least in the context of recessions (which is what I am interested in). These models are perhaps defensible in the context of thinking about decision rules for inflation-targeting central banks, but applications beyond that are less evident. Furthermore, the burying of a simple linear model under DSGE mumbo-jumbo is inexcusable. The only real usefulness of the optimising mathematics is to distract the readers from the actual models. 
This distraction may have made matters worse, since people may have attributed properties to the models that simply did not exist.<br /><h2>Concluding Remarks</h2>The take-away from my arguments is straightforward: any linear model is going to be inadequate for explaining recessions, no matter what backstory is used to generate the linear model. If we want to extract any useful theory from neoclassical research, we need to roll up our sleeves, and work with the nonlinear models.<br /><br /><b>Footnotes:</b><br /><br />* Since they are exogenous variables, time shifts of disturbances have no effect on the solution beyond labelling issues.<br /><br />** That statement is not easily “proven,” since it is hard to define the concept of the matrix “tending to” instability. What is easy to demonstrate is that the gain is unbounded if the system is no longer stable. Also, we can fix particular systems with a free parameter, and show that the gain tends to infinity as the parameter value hits the limit of stability. 
This is much easier to do in the frequency domain, which is out of scope of this discussion.<br /><div><br /></div>(c) Brian Romanchuk 2019<img src="http://feeds.feedburner.com/~r/BondEconomics/~4/NT7PVnXOGbc" height="1" width="1" alt=""/>Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0http://www.bondeconomics.com/2019/03/inherent-limitations-of-linear-economic.htmltag:blogger.com,1999:blog-5908830827135060852.post-17693794627540075512019-03-17T11:16:00.000-04:002019-03-18T13:34:54.066-04:00Understanding DSGE Macro Models<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-2M9zzHiF1DU/XHqVrBWAaxI/AAAAAAAAEA8/juYZgVnbz6AFwG4OclF3LVLOk2TTZQRvQCKgBGAs/s1600/logo_DSGE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://3.bp.blogspot.com/-2M9zzHiF1DU/XHqVrBWAaxI/AAAAAAAAEA8/juYZgVnbz6AFwG4OclF3LVLOk2TTZQRvQCKgBGAs/s1600/logo_DSGE.png" /></a></div>Dynamic Stochastic General Equilibrium (DSGE) articles have attracted a great deal of attention in economic squabbling. In my view, the existing discussions of DSGE models -- mine included -- have been confusing and/or misleading. Properly understood, DSGE macro models are an attempt by neoclassical economists to weld together two standard optimisation problems, but with the defect that the neoclassicals lacked the notation to state the resulting problem clearly. This lack of clarity has made the debates about them unintelligible. Once we clean up notation, these models have a variety of obvious limitations, and it is unclear whether they have any advantages over stock-flow consistent models.<br /><br /><b>Update 2019-03-18</b> I finally tracked down an article that is useful for my purposes (one day after publishing this, of course). 
<a href="https://www.bankofengland.co.uk/working-paper/2019/monetary-financing-with-interest-bearing-money">This working paper from Bank of England researchers</a> acts as an exception to some of my comments on the state of the literature. At present, I do not think I need to retract many comments (although a speculative one was heavily qualified), as I noted multiple times that I was basing my comments on the literature I examined. That said, the article conforms to my description of the "reaction function interpretation" (explained within the text). There is a technical update added below with further details.<br /><br /><a name='more'></a><h2>TL;DR</h2>This article is unusual (and self-indulgent) in that it consists of a <i>long</i> string of bullet points that do not attempt to follow a narrative arc. My defence is simple: given the confusing state of these models, any discussion of them is likewise confusing. There is a certain amount of repetition of concepts. Readers are assumed to have some mathematical knowledge; my text here is terse, and I do not offer much in the way of plain English explanation. Many of the technical points would turn into an entire article if I wrote in my usual style in which I offer lengthy explanations of concepts. Rather than write a few dozen articles, I just dumped the entire explanation into a single place.<br /><br />The following points are what I see as the key points that may be of interest to a casual reader. The main text has other technical points that may or may not be more important, but one would need to be at least slightly familiar with the model structure to follow them.<br /><ul><li>I am only discussing DSGE models in which the macro outcome involves two distinct optimising agents, such as a representative household and firm.</li><li>I argue that any field of applied mathematics needs to result in tractable computational models, and that mathematical discussions can be related to operations on sets. 
The DSGE literature is characterised by long-running theoretical disputes that are debated in text, which the author argues is the result of the discipline of computation/set theory being lost. </li><li>I argue that the model descriptions used in the DSGE literature are not properly laid out from the perspective of set notation. The DSGE authors are attempting to join (at least) two optimisation problems. There seem to be two ways to formalise their attempts. They are either defining reaction functions for all agents in the model, or else searching over a set of exogenous prices to find a set of prices that lead to coherent model solutions. Both interpretations have difficulties that become quite obvious once we attempt to formalise them.</li><li>The author feels that textual descriptions of these models are misleading, both by the proponents, as well as critics.</li><li>The author has only sampled the DSGE macro literature; the comments reflect the texts read. One common assertion the author has encountered is that the issues outlined were dealt with in some other works. The author's contention is that this is not adequate; the texts I read should explicitly cite critical results.</li><li>Both formalisations of the DSGE model have very little that characterises the final model solution. For example, since the model structures are not formally specified, we have no way to relate them to any results on the existence and uniqueness of solutions. Furthermore, in the absence of a clear formalisation, there is no way to validate assertions about the mathematics.</li><li>The probabilistic treatment used cannot be interpreted as uncertainty. Rather, these appear to be deterministic models that have a randomisation of parameters ahead of model solution.</li><li>From a practical perspective, the so-called log-linearisations are the <i>de facto</i> models that are being worked with. Linear models have very strong features, such as the ease of fitting them to data. 
However, linear models will obviously not capture things like accounting constraints, nor take into account nonlinear effects. Furthermore, given the intractability of the nonlinear DSGE model, we have no reason to believe representations that the "log-linearisation" solution has a strong relationship to a solution of the nonlinear model.</li></ul><h2>Main Text</h2>There is no doubt that many of my statements will be disputed by neoclassical economists. I accept that one or more bullet points may need to be more heavily qualified, and the reader should keep that disclaimer in mind. That said, the overall thrust of the logic herein will not be deflected by adding a few qualifications to statements. I do make some speculative or editorial comments, which are generally qualified as such.<br /><br /><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script> I would caution readers of a heterodox leaning with a more literary approach to economics. I am largely confining myself to statements about DSGE mathematics. My comments may or may not dovetail with previous heterodox critiques. In my view, many existing heterodox critiques may be misleading, as they were based on neoclassical textual assertions about the models. In fact, my own analysis of the models was greatly delayed by accepting textual descriptions of what the models represent, which can be quite different from what they actually are. For example, the plausibility of the assumptions behind a model that is not in fact solved is a red herring.<br /><br />The points made are only loosely in a logical order, and they generally do not follow from each other. 
I only number the points to make them easier to refer to if someone ever wishes to discuss them.<br /><ol><li>(Editorial) In my view, the mantra "all models are wrong" is not taken seriously by the economists who appeal to it. The more accurate assessment is that "all macro models are terrible," an argument that is the subtext of my upcoming book on recessions. Although I am dunking on DSGE macro models herein, I have no reservations about dunking on <i>any</i> mathematical macro model, including the ones in <a href="https://www.books2read.com/b/mqZR5d">my own book on SFC models</a>. Anyone who attempts to defend DSGE macro models by demanding that I produce models that are "better" than DSGE macro models is grasping at straws.</li><li>(Editorial) If we look at the neoclassical literature, there are a number of theoretical disputes -- e.g., the Fiscal Theory of the Price Level, "neo-Fisherism" -- that are discussed in extremely long<i> textual</i> arguments, drowning in citations of "who said what." This is much closer to the post-Keynesian academic debate of "what did the <i>General Theory</i> really mean?" than one would expect from a field associated with applied mathematics. This is because of the next point.</li><li>Most of what I refer to as "applied mathematics" is the analysis of models. A<i> model</i> is a creature of set theory. It is an entity that bundles a set of <i>variables</i> -- which are elements of sets, such as the set of real numbers, or sequences of real numbers -- and a <i>set </i>of logical relations that relate the variables, which we will term <i>constraints</i>. What sets a <i>model</i> apart from a generic mathematical description of set objects is that we partition the variables into two sets, which I term the <i>model input</i> and the <i>model solution</i>. When we <i>solve the model,</i> we fix the elements of the set of inputs, apply model constraints, and determine what the set of <i>solutions</i> is.
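This notion of a model can be sketched in a few lines of code. (The constraint below is a toy example of my own invention, purely for illustration.) We fix the model input, apply the constraints, and see what survives as the solution set:

```python
# A "model" as a bundle of variables, an input, and constraints (toy sketch).
# Input: a fixed parameter a.  Solution: all x (on a finite grid) that
# satisfy every constraint.

def solve_model(a, grid):
    """Return the solution set {x : x + a == 5 and x >= 0} over a finite grid."""
    constraints = [
        lambda x: x + a == 5,   # a behavioural/accounting constraint (invented)
        lambda x: x >= 0,       # a sign restriction
    ]
    return {x for x in grid if all(c(x) for c in constraints)}

grid = range(-10, 11)
print(solve_model(2, grid))   # {3}: solution exists and is unique
print(solve_model(10, grid))  # set(): empty solution set -- the model offers no insight
```

The two runs illustrate the existence question directly: changing the input can leave the constraint set with no solution at all.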
One serious concern is whether the solution exists and is unique, as we wish to relate the model to the real world. It is straightforward that a model whose solution set is empty offers no insight, while a set-valued solution (that is, non-unique) is hard to relate to real world data, where we typically have a single measured value for observations.</li><li>Although some pure mathematicians attempt to write texts that are strictly symbolic, such papers are unusable to most people. <i>Applied</i> mathematics uses textual short cuts to aid comprehension. However, the standard for rigorous applied mathematics is that we can ultimately relate all textual arguments back to statements about sets: either a reference to a set, elements of sets, or logical statements about sets and set elements.</li><li>(Editorial) It is the author's position that a sub-field of <i>applied</i> mathematics needs to concern itself with models that have solutions that can be computed numerically in a reasonable time. Based on his experience in the area of nonlinear control theory, a field that concerns itself with the derivation of properties of non-computable solutions degenerates, and cannot be applied to the real world. Pure mathematicians are free to devote their attention to such mathematical objects, as publications are judged on their elegance, not their practical utility. When we turn to DSGE models, although the author has read assertions about the numerical computation of solutions, actual experience with the technique mainly involves reading neoclassical authors making <i>textual</i> assertions about the models; for example, "all else equal" arguments. The issues of ambiguity around notation and model structure discussed in other points raise questions about any numerical techniques based on the existing literature: what exactly are they solving?</li><li>I am not holding DSGE macro expositions to some impossible standard.
I was heavily in contact with rigorous pure mathematicians when I was in academia, and the expectation was that any high quality published mathematical paper in control theory would have at least one non-serious mathematical error (typically a typo) <i>per page</i>. (Pure mathematics journals may have higher standards.) Most fields of applied mathematics rely heavily on short cuts in notation (column inches are precious), but a competent mathematician will be able to fill in the gaps. For example, very few stock-flow consistent papers will specify that government consumption is an element of the set of positive real numbers, but only a complete dimwit would expect that government consumption would be an imaginary number. Writing applied mathematics is an art -- judging how much mathematical detail is really needed, versus how much is complicated-looking content added for "prestige". </li><li>If we look at the <i>logical order of progression</i>, what I refer to as "DSGE macro models" (defined below) are constructed around the concepts of <i>single agent models</i>. These models are based on straightforward optimisation problems for a single agent. For example, we could define a household optimisation model (denote one such model $M_h$) or a firm's profit maximising model (denoted $M_f$). These are models that fit under the general category of optimal control theory, with an added characterisation that they feature vectors of price variables that are set exogenously. From what I have seen of such models, they are normally set up in a perfectly adequate fashion -- since they are explicitly based on results coming from 1960s era optimal control theory.</li><li>As a historical aside, optimal control theory was abandoned (except as a mathematical curiosity) by the control theory community by the time I started my doctorate in the 1990s.
(Problems were apparent as soon as the techniques were applied, but it took time -- and disastrous engineering errors -- to pin down why optimal control techniques failed.) One could summarise the argument as follows: optimal control techniques are a disastrous guide to decision-making in an environment with any model uncertainty.</li><li>The concerns I have are with what I term DSGE macro models. As is clear in the expositions I have seen (at least one hundred articles over more than a decade, and a half dozen standard texts), they represent a desire to join at least two single agent optimisation problems into a single problem. (For example, the model in Chapter 2 of Gali's text is generated by welding together a household optimisation problem $M_h$, and a firm optimisation problem $M_f$.) Please note that the term "dynamic stochastic general equilibrium model" is more generic, and so there are DSGE models that lie outside the above set of models. My comments here are most likely not applicable to those other models.</li><li>What I have termed the "notation problem" has appeared in every single exposition of DSGE macro models I have encountered. There is an attempt to develop a macro model, denoted $M_m$. The generic problem is that the text refers to two single agent models (such as $M_h, M_f$), and then enforces an equality condition for variables between the two problems. For example, the number of hours worked (commonly denoted <i>n</i>) in $M_h$ has to equal the hours worked in $M_f$. The author argues that under conventional applied mathematics standards, that implies that the mathematical descriptions given for $M_h$ and $M_f$ are actually components of $M_m$, and so we can apply standard mathematical operations to the variables of those objects. In particular, if two set elements are equal, we can freely substitute them for each other in analysing the constraints. (For example, if we are faced with the constraints<i> {x=y</i>, <i>z=y+2</i>}, we can state that<i> z = x + 2</i>.)
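The substitution in that parenthetical example can be checked mechanically. The sketch below enumerates a small grid and confirms that the derived constraint holds on the entire solution set of the original constraints:

```python
# Substitution on constraint sets: any (x, y, z) satisfying {x == y, z == y + 2}
# must also satisfy the derived constraint z == x + 2.

def satisfies_original(x, y, z):
    return x == y and z == y + 2

# Enumerate a small grid and collect the solution set of the original constraints.
solutions = [(x, y, z)
             for x in range(-3, 4) for y in range(-3, 4) for z in range(-3, 4)
             if satisfies_original(x, y, z)]

assert solutions, "solution set is non-empty on this grid"
assert all(z == x + 2 for (x, y, z) in solutions)
print("z == x + 2 holds on the whole solution set")
```

Nothing here is specific to these constraints: equality of set elements licenses substitution, which is exactly the operation that the DSGE expositions implicitly forbid.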
For the models $M_m$ based on $M_h, M_f$, the implication is that the production function in $M_f$ constrains output, and can be substituted into the model $M_h$. However, this substitution -- which is implied by the usual allowed operations on sets -- is directly contradicted by the mathematical exposition of model $M_m$ found in the text.</li><li>To clarify my notation: throughout the text, I use a two sector model with a household sector and a business sector as my example of a non-trivial macro model. There are many other variations, such as splitting the business sector into sub-sectors. Referring to the macro models as having an arbitrary set of sub-problems does not add any value to the discussion, even though that would be strictly necessary to capture all cases.</li><li>(Argument from authority) The author has commonly encountered statements that the issues I have raised have been dealt with somewhere in the literature. To a limited extent, this defence can be used in applied mathematics. For example, very few people would dig through axioms on algebra to justify replacing $2x + x$ with $3x$. (I am unsure even where to find the appropriate axiom to justify that substitution.) That said, in the set of articles analysed by the author, key steps in proofs were <i>always</i> skipped over. (If those steps had been supplied, I would not have been forced to <i>guess</i> as to how the notation is to be interpreted.) In the author's opinion, every published text that was read could have been legitimately sent back for a re-write by a peer reviewer, as the resulting proofs require large leaps of faith. (To repeat, the possibility remains of a clean exposition in an unread text.) As for the existence of a proof being done correctly in article <i>A</i>, that has no bearing on the results of article<i> B</i>, unless <i>B</i> explicitly references <i>A </i>in the appropriate place.
Readers cannot be expected to absorb the entire literature produced over fifty years to fill in the gaps of the description of a mathematical model -- which is, after all, a series of statements about sets.</li><li>(Speculative) There appear to be two main ways in which to interpret the expositions of DSGE models. The first is more complex, and best captures the spirit of the contents of the papers. In the first interpretation, the DSGE author lays out the single agent optimisation problems, and then determines a set of first order conditions, which act as a reaction function. The reaction functions are then (somehow) stitched together. The second interpretation is simpler: the DSGE authors want to set up (at least) two optimisation problems that are run separately, and then pin down a set of exogenous prices that leads to a coherent solution. Both interpretations have problems, and these are discussed in the following points in a more formal fashion.</li><li>(Editorial/confession) Although many people are confused by DSGE mathematics, the author's difficulties with the exposition seemed to be unusual. Many people (including the DSGE model author community) did not see any difficulties. The author guesses that other readers are imposing a notion of logical time on the model exposition: first we have one model, then we have another model, then we join them. Each step occurs at a different "time," and so there is a compartmentalisation that prevents the sort of substitutions that break the model (e.g., inserting the production function into the household problem). However, set entities are timeless, and so such a notion of logical time technically makes no sense.</li><li>(Speculative) We can try to define $M_m$ in terms of constraints (reaction functions) as follows. (Since the literature itself does not lay out the structure of the models, parts of this description are unavoidably vague.)
The researcher <i>R</i> lays out model $M_h$, and finds a set of constraints on the solution, denoted $F^R_h$. (It is denoted $F^R_h$ as the constraints of interest are <i>first order conditions</i>, although that is a misnomer with respect to $M_m$.) The model $M_f$ is then laid out, and then the constraints $F^R_f$ are derived. Then a model $M_m$ is postulated. Model $M_m$ has a set of variables that have labels that are the union of the labels of the variables in $M_h$ and $M_f$, and the constraints on the variables are the union of $F^R_h, F^R_f$. The constraints are a set of statements that are applied to the new set of variables, with the labels in the new problem matching the old labels. The addition of the label <i>R</i> to the <i>F</i> variables is not cosmetic: since there does not appear to be a systematic way of specifying the set of all "first order conditions," the set of constraints used in the model is an arbitrary choice of the researcher. The implication is that two distinct researchers -- $R_1, R_2$ -- may choose two distinct sets of "first order conditions", leading to two different macro models, even if based on the same underlying single agent optimisation problems. However, in the DSGE macro expositions seen by the author, the full specification of which constraints are chosen for model $M_m$ is never given as a single logical unit; the reader needs to guess which equations are to be incorporated into $M_m$.</li><li>There is no doubt that the previous explanation is not completely satisfactory, and needs to be cleaned up. The key is that we need to firmly set each variable within its proper model, and we must not be confused by the fact that the variables share the same label: they are distinct mathematical objects.
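As a toy illustration of the constraint-union construction (the constraint labels below are placeholders of my own invention, not equations from any DSGE text):

```python
# Constraint-union construction, author's toy formalisation.  Each sub-model
# contributes the set of "first order conditions" that a particular researcher
# chose to derive; the macro model is the union of those sets.

F_h_R1 = {"c = w*n/p", "euler: c[t+1]/c[t] = beta*(1+r)"}   # researcher 1, household
F_f_R1 = {"labour demand: w/p = MPL(n)"}                    # researcher 1, firm

F_h_R2 = F_h_R1 | {"transversality condition"}              # researcher 2 keeps an extra condition

M_m_R1 = F_h_R1 | F_f_R1   # researcher 1's macro model
M_m_R2 = F_h_R2 | F_f_R1   # researcher 2's macro model

# Same single agent problems, two distinct researchers, two distinct macro models:
print(M_m_R1 == M_m_R2)    # False
```

The point of the sketch is purely structural: since the choice of which conditions to include is not pinned down systematically, the resulting union (and hence the macro model) depends on the researcher.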
In fact, we need to define $M_m$ without any reference to $M_h, M_f$; it is a stand-alone mathematical object.</li><li>Since the arbitrary nature of first order conditions is probably not easily understood, we step back and see what non-arbitrary conditions are. Take any standard optimisation problem $M_o$. It will have an objective function and constraints. These can easily be written out in standard mathematical notation. The logical operations imply a set of solutions, denoted $\{x^*\}$. The set $\{x^*\}$ is well-defined, and can be characterised by standard theorems: is it non-empty, unique, etc. We can then make any number of statements about $\{x^*\}$, such as first order conditions. That is, we can contrast the well-defined nature of the rules defining $M_o$ versus the infinite number of true statements we can make about the solution. Furthermore, the well-defined nature of the definition of $M_o$ is what allows the possibility of finding the solution $\{x^*\}$ numerically: we use the defining rules to derive true statements about the set $\{x^*\}$, which let us construct a procedure whereby the variables defined in a numerical algorithm converge to the true solution in some fashion. The simplest example of a condition that we can derive is as follows. If <i>u</i> is the objective function that is maximised, $u(x^*) > u(x)\ \forall x^* \in \{x^*\}, \forall x \notin \{x^*\}.$ (That statement can be derived solely by applying the definition of optimal.) This is very different from trying to determine the set of variables that are consistent with an arbitrary set of mathematical statements: without knowledge about the properties of the set of solutions, we cannot hope to derive an algorithm that converges to elements of that set.</li><li>We now turn to the possibility of defining the problem as simultaneously solved optimisations based on a common set of exogenous prices.
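(Before turning to that, a toy illustration of the previous point. The objective function is invented purely for illustration: for a well-posed $M_o$, the solution set $\{x^*\}$ is computable, and statements like first order conditions are things we <i>derive</i> from it, not part of its definition.)

```python
# A well-posed optimisation M_o: maximise u(x) = -(x - 3)**2 over a finite grid.
# The solution set {x*} is well-defined; statements such as
# u(x*) > u(x) for all non-solutions x are derived from the definition of "optimal".

def u(x):
    return -(x - 3) ** 2

grid = [i / 10 for i in range(0, 61)]          # 0.0, 0.1, ..., 6.0
u_max = max(u(x) for x in grid)
x_star = {x for x in grid if u(x) == u_max}    # the solution set {x*}

print(x_star)  # {3.0}: non-empty and unique
# The derived condition u(x*) > u(x) holds for every x outside the solution set:
assert all(u(xs) > u(x) for xs in x_star for x in grid if x not in x_star)
```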
We first need some notation.</li><li>The first piece of notation is the notion of the exogenous price vector, which is denoted <i>P</i>. For standard household/firm problems, there would be three time series in <i>P</i>: goods prices (<i>p</i>), wages (<i>w</i>), and the discount factors in the yield curve (<i>Q</i>). The last element maps to the forward path of interest rates, and poses difficulties that will be discussed in later points.</li><li>We need the notion of an optimisation problem operator, <i>O</i>. For a single agent optimisation problem <i>v</i>, we denote the optimal solution $\{x^*\}$, where each $x^*$ (if it exists) is the vector of all time series in the model. The overall problem solution is characterised by $O_v P = \{x^*\}$; that is, the operator $O_v$ maps the exogenous price vector to the set of solution time series. Since we need to select particular time series, we define a set of operators $O_v^y$, which maps the exogenous price vector to the time series $y$ within $\{x^*\}$.</li><li>(Speculative) We can now define $M_m$ as follows. Let $C = \{c_i\}$ be a set of variables that are to be part of market clearing (e.g., number of hours worked, output, etc.). $M_m: \{P: O_h^{c_i}P = O_f^{c_i} P\ \forall c_i \in C \}.$ That is, this is just a statement about existence: the solution to the macro problem is the set of exogenous prices for which the single-agent optimisation operators lead to equal values for the variables to be cleared. <i>(Note: the market clearing operation would need to be made slightly more complex to take into account things like government purchases.)</i></li><li>(Editorial) Although this characterisation appears much cleaner, it in no way captures the spirit of the mathematical analysis seen in the literature sampled. Very simply, the author has invented the operator notation, and cannot point to a single example of overlap with the actual mathematical exposition.
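To show what the operator definition amounts to, here is a minimal numerical sketch. (The supply and demand forms are assumptions of mine, chosen only so the operators have closed forms; $P$ is collapsed to a single wage for simplicity.)

```python
# Toy version of the "exogenous price" definition of M_m.  P is just a wage w here;
# O_h and O_f map the price to each optimising agent's chosen hours worked.
# M_m's solution is the set of prices at which the cleared variables coincide.

def O_h_n(w):          # household labour supply operator (assumed functional form)
    return 2.0 * w

def O_f_n(w):          # firm labour demand operator (assumed functional form)
    return 12.0 - w

def clearing_prices(price_grid, tol=1e-9):
    """The set {P : O_h^n P == O_f^n P}, approximated on a finite price grid."""
    return {w for w in price_grid if abs(O_h_n(w) - O_f_n(w)) < tol}

grid = [i / 100 for i in range(0, 1001)]   # wages 0.00 .. 10.00
print(clearing_prices(grid))               # {4.0}: 2w = 12 - w  =>  w = 4
```

Note that the derivation of the two operators (the "first order conditions" work) plays no role in this definition; the macro model is purely a statement about which prices make the separately computed solutions coincide.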
Although the notation is somewhat awkward, it is so straightforward that the author finds it very hard to understand why this was not explicitly stated. For example, the entire derivation of first order conditions -- which represents the bulk of column inches in most expositions -- is entirely irrelevant to the model definition. The only explanation that can be given is that the DSGE authors went out of their way to avoid the word "existence" in describing the solutions of the models. </li><li>(Speculative) A slightly stronger variant of the "price clearing" characterisation is that the solution to $M_m$ is the limit of some iterative procedure (tâtonnement). (I believe that this was the historical interpretation.) This has a slight advantage, as there is a stronger definition of the set of solutions. Of course, this variant would require the search procedure to be explicitly incorporated into the model definition -- which is not done. Furthermore, we have almost no useful description of the limit of an algorithm that only converges to its solution after an infinite number of iterations. Note that this description applies to any numerical attempts to solve the nonlinear problem: the actual model is the solution algorithm, and we would need to determine what it is actually converging towards.</li><li>The main mathematical defect of the "exogenous price vector" interpretation is rather unexpected: many of the models examined by the author were defective, as interest rate determination is not properly analysed within the nonlinear model. (Disclaimer: This author only realised this issue very recently, and has not re-examined the literature to see how widespread it is.) Overlooking interest rate formation in models that are used to assert the supremacy of interest rate policy is not what one might expect. The problem is straightforward: forward interest rates (discount prices) are a critical component of the exogenous price vector facing the household sector.
When we calculate the optimal solution set, we need to fix those prices. The following points describe the issue.</li><li>We would not have problems if interest rates were solely a function of other exogenous prices. For example, suppose that $Q = f(p,w)$; that is, the central bank reaction function ($f$) is only a function of the expected path of wages and prices. In that case, we search over the space of $p, w$; $Q$ is pinned down by that choice, and we can then solve the problem. However, not all central bank reaction functions depend solely on expected price trends.</li><li>(Speculative) The first way to characterise the solution for a more complex central bank reaction function is to search over all possible $P$, calculate the clearing solutions, and reject solutions for which the $Q$ vector does not match the chosen reaction function. The problem with this is straightforward: there does not appear to be a way to characterise that set. The central bank does not have a supply or demand function for bonds in the same fashion as the other agents: it has a mysterious veto property to eliminate solutions that are not compatible with the reaction function. It is very unclear that we can postulate any real world mechanism that gives a central bank that power. In any event, the set of solutions appears to be completely novel, depending on the choice of reaction function, and so the author does not see any way to apply any existing theories to questions of existence and uniqueness of solutions.</li><li>(Speculative) The second approach for a complex central bank reaction function would be to embed the reaction function into the household optimisation problem. However, this is completely incompatible with the model expositions in the literature, and it is very unclear that the single agent problem fits within known optimisation problems.
The concern is that the reaction function would be embedded in the household budget constraint, and that would need to be taken into account when calculating the solution. The resulting problem most likely falls outside the scope of existing optimal control techniques.</li><li>We return to the first characterisation of the solution, which defines the model as a set of constraints. One advantage of this characterisation is that we can throw the central bank reaction function into the list of constraints, and so it does not pose any added difficulties (since we already have almost no characterisation of the solution anyway). </li><li>There is a straightforward interpretation of "first order conditions" in the macro model: they are <i>reaction functions</i>. (The remaining mathematical constraints are either accounting identities, or the "laws of nature," such as the production function.) Most people who have looked at these models eventually come to understand that the idea is that the central bank does not "set interest rates," rather it specifies a <i>reaction function</i>, for example, a rule based on the output gap and inflation expectations. This means we cannot think of the central bank setting interest rates as an exogenous time series, rather the level of interest rates is set by the state of the system. The same logic applies to all actors in the model. Although this interpretation makes the models easier to understand, it raises some issues when we run into the concepts of randomness or uncertainty. This will be discussed in a later point. </li><li>Under the reaction function interpretation, the resulting model $M_m$ does not lie in the set of standard mathematical models for which we can apply existing theorems. For example, the author has never seen an actual validation of the existence and uniqueness of solutions; at most, appeals to various conditions, or perhaps a suggestion that a fixed point theorem might apply.
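On the fixed point question specifically, the difficulty with exponentially growing sequences is easy to demonstrate numerically (the growth rates below are assumed purely for illustration):

```python
# Two exponentially growing sequences with slightly different growth rates:
# the gap between them grows without bound, so the sup-norm distance between
# the (infinite) sequences is infinite -- which frustrates contraction-mapping
# (fixed point) arguments that need an operator norm below one.

def seq(growth, T):
    return [growth ** t for t in range(T)]

a = seq(1.03, 200)   # e.g. a nominal aggregate growing 3% per period
b = seq(1.02, 200)   # the same aggregate growing 2% per period

diffs = [abs(x - y) for x, y in zip(a, b)]
print(max(diffs))    # large, and keeps growing as T increases
assert diffs[-1] > diffs[100] > diffs[1] > 0   # the gap widens without bound
```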
Since each model appears to be somewhat novel, we cannot be sure that they meet the stated conditions for existence and uniqueness theorems. The use of a fixed point theorem raises an obvious problem: these models refer to infinite sequences that are expected to grow in an exponential fashion. The difference between any two such non-identical sequences will have an unbounded norm, for every standard definition of a norm of a sequence. How can we validate that we have an operator norm less than one, which is a standard requirement for most fixed point theorems?</li><li>Under the reaction function interpretation, any solution of the macro model (if one indeed exists) is not the result of an optimisation within the model $M_m$. The constraints $F^R_h, F^R_f$ are not in any sense first order conditions of an optimisation within $M_m$, since they ignore the added constraints implied by the other model (and there is no optimisation problem within $M_m$). As a result, we cannot summarise the models as saying that solutions reflect agents optimising objective functions with respect to the closed macroeconomic model; rather, the agents are following heuristics that are very likely to result in lower values for their objective functions than could be achieved ("sub-optimal" behaviour) by ignoring the first order conditions that were derived from $M_h, M_f$ (while still respecting all accounting constraints, as well as the "laws of nature", such as the production function). That is, the usual textual interpretation of DSGE models -- that the agents are acting in a fashion to optimise objective functions, and so cannot be "fooled" by policymakers -- is literally incorrect. They are following heuristics, and can be fooled.</li><li>An alternative phrasing of the previous problem is that the behaviour of agents is the result of reaction functions.
We have no reason to believe that a reaction function that leads to "optimal" behaviour in a single agent model is in any sense "optimal" in a model where its behavioural decisions interact with those of other agents. Truly "optimising" behaviour would require knowledge of the constraints of the actual model it is embedded in: $M_m$.</li><li>There is no way to interpret the model results in terms of how markets operate in the real world. In order to solve the model at time <i>t, </i>markets need to "clear" at all time points<i> t, t+1, t+2, ....</i> The "market clearing" cannot be imagined purely in terms of expected prices: the amount supplied needs to match the amount demanded. This implies that forward transactions have to be locked in for all time. This is quite different from the similar-looking arbitrage free models used in option and fixed income pricing. In an arbitrage-free model, we are testing whether an investment with zero net size has a positive expected value; the position size may be arbitrarily small. For DSGE models (without linear production and household objective functions), the marginal value of changes to forward transactions depends upon the size of the forward purchase, and so the market clearing cannot be thought of in terms of arbitrarily small transactions. In the real world, we are not realising contracts locked in just after the Big Bang, so this clearly is not a description of reality.</li><li>We need to keep in mind my argument that all decisions made by model actors are in the form of reaction functions. We cannot imagine model variables as being the result of thinking by actors. This means that we cannot imagine actors setting prices; in particular, there is no thought process behind price determination. Instead, prices are arrived at automatically; if we insist on anthropomorphic thinking, there is an omniscient market maker that knows all reaction functions, and sets prices in a fashion to clear markets.
</li><li>As a related point, under the reaction function interpretation, we cannot interpret the equilibrium as the result of coinciding "rational" views by agents, without taking an unusual definition of "rational." If equilibrium were the result of converging forecasts, the model agents would need to know the structure of the macro model $M_m$, as they need to know what the other agents' reaction functions and constraints are. However, if the agent knows those constraints, those constraints should have been taken into account when determining the first order conditions (reaction functions). If we want to characterise market clearing as the result of coinciding forecasts, it has to be phrased as: agents come up with coincident plans, under the assumption that all agents are following arbitrary reaction functions. Since there is no notion of optimisation within $M_m$, it definitely cannot be described as "optimising behaviour." </li><li>(Speculative - Updated) The solution to these problems under the reaction function interpretation is purely the result of the intersection of two sets of constraints. We have no other characterisation of the solution. <strike>It may be that the only way to determine whether a point is in the set of solutions consistent with constraints is brute force numerical tests. If that indeed is the case, we are dealing with the discussion of solution sets that we have no realistic way to characterise.</strike> Update: the struck out comments were perhaps only applicable to many of the models that were "solved" via appeals to linearisation. Some models can obviously be solved numerically (see update below); the question is to what extent this is a general property.</li><li>It might be possible to fuse the two interpretations: the true problem is the "exogenous price" definition, but then we determine constraints (first order conditions).
However, those first order conditions are mathematical trivia, as they do not offer any mechanism to determine whether a solution exists, and is unique. Characterising something that may not exist is a waste of time.</li><li>The time axis that appears in models is equivalent to "forward time" in an arbitrage-free yield curve model. (<a href="http://www.bondeconomics.com/2019/03/the-awkward-question-of-time-in.html">Link to earlier article.</a>) Some issues around this definition of the time axis appear in points below.</li><li>(Speculative) There does not appear to be a practical way to fit these models to observed data. Under the assumption that we are not just realising contractual obligations locked in at "time zero", we have to accept that observed data would be the result of re-running a DSGE model at each time step, with variables set to historically realised values. However, we cannot observe any "expected" future values, only the result at the first time point. For example, one could try to argue that we can derive expected forward rates based on the spot yield curve, e.g., infer the short rate expected at <i>t+1</i> at time <i>t</i>. Unfortunately, we do not observe the expected <i>t+1</i> short rate; rather, we see the market-traded forward rate at time <i>t</i>. We need to model the market in forward rates (or two-period bonds) to determine the market-clearing observed forward rate, which can diverge from the expected value (that is, there can be a term premium). Determining what factors caused the first period solution to change appears intractable, and is distinct from determining what causes expected values to change.</li><li>(Speculative) The treatment of stochastic variables cannot be interpreted in terms of uncertainty in decision-making. The "decision rules" derived from the single agent problems are based on particular realisations of the stochastic variable.
In order to find the solution in the single agent problems, we need to "roll the dice" to determine the exact levels of variables for all time first, then determine the optimal choices. That is, we have a deterministic model, but the parameter values are randomised just before solving. </li><li>An alternative way of looking at the previous point is to note that when we solve the model at time <i>t</i>, all future actor behaviour results from fixed reaction functions. Those reaction functions at time <i>t </i>depend upon the realisation of random variables in the future. For example, we cannot think of a "shock" to a behavioural parameter within the model happening as the result of the passage of calendar time, rather it is a projected change to the reaction function. </li><li>Since the macro model appears intractable, we have almost no way to determine the probability distributions of variables. In the absence of probability distributions, casting constraints<i> F</i> in terms of expected values (as is sometimes done with the household budget constraint) provides almost no information about solutions.</li><li>If we look at standard two sector models with a firm and household, we see the fundamental constraints (that is, constraints that are not determined by behavioural choices) are accounting constraints (including compound interest), and one physical law of nature: the production function. Since accounting constraints will always hold, one can argue that there is really only one source of fundamental model uncertainty -- the production function. (Obviously, if we add more elements to the models, more sources of natural uncertainty may appear.) The business sector reaction function is used to determine market-clearing conditions at time <i>t, </i>and that reaction function at time <i>t</i> depends upon the future values of the parameters in the production function ("productivity").
Therefore, the reaction function at time <i>t</i> faces no uncertainty regarding the future values of productivity. That is, we cannot interpret the model results as being the result of decision making under uncertainty.</li><li>As a concrete example of the previous point, assume we are solving a standard two sector real business cycle model at time <i>t=0</i>. The value of the productivity parameter in the production function starts out as a constant, but features a random jump to a new value at <i>t=10</i>. The productivity parameter shows up in the reaction function of the business sector, and so determines the acceptable ratio of wages to prices in the market-clearing price vector, as well as the demand for worker hours. We cannot determine the trajectory of the consumption function without access to productivity for all times. As a result, any model that can be computed would require knowledge of the productivity parameter at time 10 in order to solve the problem. The implication is that we cannot interpret the change at time 10 as a random event that happens at time 10, but rather as an event that happens with certainty in a scenario whose probability is the probability of the productivity parameter reaching that value. It goes without saying that we could never compute the implied models in finite time if the random variables have probability distributions supported on sets of non-zero measure.</li><li>(Partly Speculative) In the absence of characterisations of the solutions to $M_m$, we cannot "linearise" the resulting model. It is very unclear that the "linearisations" used in empirical research correspond to any particular nonlinear model, including the macro DSGE model $M_m$. 
There are an infinite number of nonlinear models that give rise to the same linearisation.</li><li>(Speculative) There does not appear to be a way to model government default in a truly uncertain fashion and generate behaviour that can be compared to observed data. If a government defaults on its debt at time <i>t+N</i>, there is a major disruption to the household budget constraint, and to behaviour at the initial period <i>t</i>. In order to get market clearing for all time, the realisation of the default random variable at time <i>t+N</i> has to be crystallised in the solution at time <i>t</i>. If we interpret historically observed data as being generated by a sequence of DSGE models, we will jump from a model where default is avoided with certainty, to a model where a default is known with certainty. If that were a realistic description of behaviour, we would not need such models -- we would already know exactly when each government will default in the future.</li><li>To continue the previous point, a default event would be very easily side-stepped under standard assumptions. Private sector agents would revert to just holding money instead of the bonds that are to default in the period of default. A default event cannot surprise households, as that would be incompatible with the derived reaction function constraints associated with the household budget constraint.</li><li>Central bank reaction functions are needed to close the model. In some treatments (many? all?), the reaction function is only specified in terms of the log-linear model. It is unclear that the model can be linearised in the absence of the central bank reaction function for the original model.</li><li>(Partly speculative) Models that contain a business sector ($M_f$) with a nonlinear production function appear not to respect strong notions of stock-flow consistency (described later). The author has not seen a budget constraint for a business sector in the macro models examined. 
Superficially, that opens the models to critiques about stock-flow consistency (since financial flows do not add up). This omission can be excused on the grounds that the budget constraint has no behavioural implications for the business sector, and financial asset holdings for the business sector can be inferred from the state of the government and household sectors. However, this does appear problematic for textual descriptions of the models, which sometimes equate household bond holdings to government bonds outstanding. Nevertheless, there are concerns with a strong notion of stock-flow consistency: are there black holes in the model? (As described by Tobin per [Lavoie2014, p. 264].) Under the author's interpretation of these models, there is no mechanism to reflux pure profits from the business sector to the household sector within the macro model. (Pure profits are profits above the cost of rental of capital.) If the model is nonlinear, pure profits are either strictly positive, or zero in the case of a trivial solution with zero production. As such, business sector financial assets will be growing at a strictly greater rate than the compounded interest rate on any finite interval for a non-zero solution.</li><li>The standard technique of indexing firms to elements of the real interval [0,1] (as in Calvo pricing) cannot be interpreted as taking the limit of a finite number of firms. It is a standard result of undergraduate real analysis that the uncountable interval [0,1] cannot be realised as the limit of a sequence of finite sets of points.</li></ol><h2>Technical Update (2019-03-18 - Under Construction)</h2>The comments above were based on my survey of the literature, and as noted in the update at the top of the article, I managed to find an article that has information that previously eluded me. 
The article <a href="https://www.bankofengland.co.uk/working-paper/2019/monetary-financing-with-interest-bearing-money">"Monetary financing with interest-bearing money," by Richard Harrison and Ryland Thomas</a> has a feature that eluded me in my examination of the literature: a comprehensive list of the nonlinear constraints on the macro model. Please note that I have only given the article a cursory examination, but I believe my characterisations capture what is happening mathematically.<br /><br />There are at least two New Keynesian DSGE models derived in the paper, but the main model is defined by the model equations in Appendix B.4, on page 46. <i>This set of equations is the DSGE model. </i>Harrison and Thomas do not use notation that specifically conforms to my description of reaction functions, but that is what they have done. The simulations shown are of that model (and possibly the other model they derived). All the stuff about households on a continuum, optimisations, etc.? Just back story for the actual model. In 1970s wargaming parlance, "chrome."<br /><br />My guess is that this paper conforms to best practices for DSGE macro in 2019, and thus the "reaction function interpretation" reflects the current state of the art. Furthermore, my guess is that the "exogenous price interpretation" is now mainly of historical interest, with only a few authors pursuing it. Most of the models I looked at dated from before the Financial Crisis, and I suspect that there has been a significant shift in practices since then. Although neoclassicals might view that shift as a sign of progress, a less charitable interpretation is that this was just a catch up to the heterodox critiques (which are never cited, of course). I have discussed this with Alexander Douglas, and our consensus is that our projected academic article on this topic needs to be very conscious of these methodological shifts.<br /><br />Does the paper refute my criticisms of notation? 
Based on my cursory examination, I would argue not. Note the following quotation from the paper:<br /><blockquote class="tr_bq">Asset market clearing requires equality between government supply of assets and<br />private sector demand. Our notation removes superscripts for market clearing equilibrium asset stocks.</blockquote>This is followed by equation (10): $b^p_t = b^g_t = b_t$.<br /><br />From a technical mathematical point of view, that equation is just equivalent to:<br /><br /><ol><li>We assert that $b^p_t = b^g_t$;</li><li>We define $b_t = b^p_t (= b^g_t)$.</li></ol><div>The first statement implies that $b^p_t$ and $b^g_t$ are the same mathematical object, which normally implies that they lie in the same mathematical model, and hence we could freely substitute between the variables' constraints. I believe that this would cause havoc with the household budget equation. The appeal to "asset market clearing" and the remark "Our notation removes superscripts" presumably have a deeper meaning to Harrison and Thomas. I do not know exactly how they interpret this step, but I would interpret it as the operation of stitching together reaction functions. Operationally, if we look at the systems of equations, that is what is happening.</div><div><br /></div><div>The contents of this paper are not in my current area of interest, so I have no incentive to dig deeper into their very detailed algebra. However, I would use this as an example of how my interpretations can help outside readers follow the DSGE literature more easily.</div><br /><br />(c) Brian Romanchuk 2019
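The "stitching" operation described above can be sketched in a toy one-period example: a household bond-demand reaction function and a government bond-supply function are derived separately, then equated (in the spirit of equation (10), $b^p_t = b^g_t = b_t$) to pin down the market-clearing interest rate. To be clear, the functional forms and numbers below are invented purely for illustration and are not taken from Harrison and Thomas.

```python
# Toy illustration of "stitching together reaction functions" via a
# market-clearing condition. The demand and supply functions here are
# hypothetical placeholders, not the paper's actual equations.
import sympy as sp

r = sp.symbols('r', positive=True)   # one-period interest rate

# Household reaction function: bond demand b^p rises with the rate.
b_p = 100 * r
# Government reaction function: a fixed bond supply b^g.
b_g = sp.Integer(20)

# "Stitching": assert b^p_t = b^g_t and solve for the clearing rate.
clearing = sp.solve(sp.Eq(b_p, b_g), r)
print(clearing)  # [1/5]
```

On this reading, the equality in equation (10) is not a statement that lives inside either single-agent optimisation problem; it is the operation that combines two separately derived reaction functions into one solvable system.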