parsimonious pursuits: the <a href="http://en.wikipedia.org/wiki/Open_notebook_science">open notebook</a> of E. E. Holmes<br /><br />2019-02-01: Native R versus tensorflow for matrix math speed comparison<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on"><a href="https://www.dummies.com/web-design-development/other-web-software/create-vector-matrix-operations-tensorflow/">https://www.dummies.com/web-design-development/other-web-software/create-vector-matrix-operations-tensorflow/</a><br /><br />Out of curiosity, I coded up a simple speed comparison of a matrix multiplication problem in native R versus tf$matmul in the tensorflow package in R. This was on my laptop, and tensorflow is meant to allow distributed computing on multiple CPUs/GPUs, so this isn't the right sort of application for tensorflow. Nonetheless, tensorflow has tensor multiplication functions that would make some of the MARSS EM calculations more compact, avoiding for loops.<br /><br />For this test, I had a 30 x 60 x 900 array. I ran the following tests:<br /><ul style="text-align: left;"><li>X[j,,] %*% t(X[j,,]) and sum across the 1st dimension</li><li>X[,,j] %*% t(X[,,j]) and sum across the 3rd dimension</li></ul><br />The second test requires 2 calls to tf$transpose, which slows things down quite a bit, but matches the way I have arrays stored in MARSS (with time in the 3rd dimension). The 2 calls to transpose can be avoided by using tf$einsum, but that doesn't seem to give any speed gains.<br /><br />In Test 1, tensorflow is about 2x faster, so not much gain. Note it is only faster for big matrices.
It is much slower for small matrices.<br /><br /><pre>Unit: milliseconds
    expr      min       lq     mean   median       uq       max neval
  funy() 36.75568 43.13249 48.92774 46.30748  51.9293  70.86605   100
 funx(X) 79.40066 86.42067 97.32483 95.05786 103.7837 229.00092   100</pre></div><br />In Test 2, tensorflow is about 2x slower due to the extra tf$transpose call. However, this depends on the size of the 3rd dimension. When I changed the array to n^2 x 2n x n, the tensorflow code was faster.<br /><br /><pre>Unit: milliseconds
    expr      min       lq     mean   median       uq      max neval
  funy() 73.45847 80.07081 85.67677 86.22445 91.47107 97.61655    10
 funx(X) 42.00226 43.86015 46.94928 46.94643 48.94002 55.19185    10</pre><br />Code is below.<br /><br /><pre># This takes a 3D array, multiplies, and sums up along the 1st dimension

library(tensorflow)
library(microbenchmark)
library(ggplot2) # for autoplot

sess &lt;- tf$Session()
x &lt;- tf$placeholder("float", shape=shape(NULL,NULL,NULL))
y &lt;- tf$matmul(x, tf$transpose(x, perm=c(0L,2L,1L)))
z &lt;- tf$reduce_sum(y, c(0L))

# write comparison funcs

funx &lt;- function(x){
  tmp &lt;- tcrossprod(x[1,,])
  for(i in 2:dim(x)[1]){
    tmp &lt;- tcrossprod(x[i,,]) + tmp
  }
  tmp
}
funy &lt;- function(){ sess$run(z, feed_dict = dict(x = X)) }

n &lt;- 30
X &lt;- array(rnorm(n * 2*n * n^2), dim=c(n, 2*n, n^2))
tmp &lt;- sess$run(z, feed_dict = dict(x = X))

mx &lt;- microbenchmark(funy(), funx(X))
autoplot(mx)

######

# This takes a 3D array, multiplies, and sums up along the 3rd dimension

x &lt;- tf$placeholder("float", shape=shape(NULL,NULL,NULL))
y &lt;- tf$matmul(tf$transpose(x, perm=c(2L,0L,1L)), tf$transpose(x, perm=c(2L,1L,0L)))
# note you can write this more succinctly with einsum but it doesn't speed things up
# y &lt;- tf$einsum('ijl,kjl-&gt;ikl', x, x)
z &lt;- tf$reduce_sum(y, c(0L))

# write comparison funcs

funx &lt;- function(x){
  tmp &lt;- tcrossprod(x[,,1])
  for(i in 2:dim(x)[3]){
    tmp &lt;- tcrossprod(x[,,i]) + tmp
  }
  tmp
}
funy &lt;- function(){ sess$run(z, feed_dict = dict(x = X)) }

n &lt;- 30
X &lt;- array(rnorm(n * 2*n * n^2), dim=c(n, 2*n, n^2))
tmp &lt;- sess$run(z, feed_dict = dict(x = X))

mx &lt;- microbenchmark(funy(), funx(X), times=10)
autoplot(mx)</pre><br /></div>2017-05-31: Notes on computing the Fisher Information matrix for MARSS models. Part IV Recursion in Harvey 1989<div dir="ltr" style="text-align: left;" trbidi="on"><script type="text/x-mathjax-config"> MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } }); </script><script src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' type='text/javascript'></script><i>MathJax and blogger can be iffy. Try reloading if the equations don't show up.</i><br /><br /> Notes on computing the Fisher Information matrix for MARSS models <a href="http://parsimoniouspursuits.blogspot.com/2016/05/notes-on-computing-fisher-information.html">Part I Background</a>, <a href="http://parsimoniouspursuits.blogspot.com/2016/05/notes-on-computing-fisher-information_19.html">Part II Louis 1982</a>, <a href="http://parsimoniouspursuits.blogspot.com/2016/06/notes-on-computing-fisher-information.html">Part III Overview of Harvey 1989</a>.
<br /><br /> Part III introduced the approach of Harvey (1989) for computing the expected and observed Fisher Information matrices by using the prediction error form of the log-likelihood function. Here I show the Harvey (1989) recursion on page 143 for computing the derivatives in his equations. <h2>Derivatives needed for the 2nd derivative of the conditional log-likelihood</h2> Equations 3.4.66 and 3.4.69 in Harvey (1989) have first and second derivatives of $$v_t$$ and $$F_t$$ with respect to $$\theta_i$$ and $$\theta_j$$. These in turn involve derivatives of the parameter matrices and of $$\tilde{x}_{t|t}$$ and $$\tilde{V}_{t|t}$$. Harvey shows all the first derivatives, and it is easy to compute the second derivatives by taking the derivatives of the first. <br /><br />The basic idea of the recursion is simple, if a bit tedious. <ul><li>First we set up matrices for all the first derivatives of the parameters.</li> <li>Then, starting from t=1 and working forward, we do the recursion (described below) for all $$\theta_i$$ and store the first derivatives of $$v_t$$, $$F_t$$, $$\tilde{x}_{t|t}$$ and $$\tilde{V}_{t|t}$$ with respect to $$\theta_i$$.</li><li>Then we go through the parameter vector a second time, to get all the second derivatives with respect to $$\theta_i$$ and $$\theta_j$$.</li><li>We input the first and second derivatives of $$v_t$$ and $$F_t$$ into equations 3.4.66 and 3.4.69 to get the observed Fisher Information at time t and add it to the Fisher Information from the previous time step. The Fisher Information matrix is symmetric, so we can use an outer loop from $$\theta_1$$ to $$\theta_p$$ ($$p$$ is the number of parameters) and an inner loop from $$\theta_i$$ to $$\theta_p$$. That will be $$p(p+1)/2$$ loops for each time step. </ul>The end result will be the observed Fisher Information matrix computed using equation 3.4.66 and, separately, using equation 3.4.69. <h2>Outline of the loops in the recursion</h2> This is a forward recursion starting at t=1.
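Before walking through the loops, here is a minimal base-R sketch of the accumulation step that the recursion ultimately feeds: the observed version of Harvey's equation 3.4.69, summed over time. The function and array names (obs_fisher, dv, dF, Finv) are mine, and the code assumes the derivative arrays have already been filled in by the recursion described in this post.

```r
# Hypothetical accumulation of the information matrix over time, given
# derivative arrays that the recursion would fill in:
#   dv[, i, t]   = d v_t / d theta_i   (n x p x T)
#   dF[, , i, t] = d F_t / d theta_i   (n x n x p x T)
#   Finv[, , t]  = solve(F_t)          (n x n x T)
obs_fisher <- function(dv, dF, Finv) {
  p <- dim(dv)[2]
  TT <- dim(dv)[3]
  Iij <- matrix(0, p, p)
  for (t in 1:TT) {
    for (i in 1:p) {
      for (j in i:p) { # symmetric, so the inner loop runs from i to p
        term1 <- 0.5 * sum(diag(Finv[, , t] %*% dF[, , i, t] %*%
                                  Finv[, , t] %*% dF[, , j, t]))
        term2 <- drop(t(dv[, i, t]) %*% Finv[, , t] %*% dv[, j, t])
        Iij[i, j] <- Iij[i, j] + term1 + term2
        Iij[j, i] <- Iij[i, j]
      }
    }
  }
  Iij
}
```

For example, with n = p = T = 1 and dv = 2, dF = 3, Finv = 0.5, this returns 0.5(0.5 &middot; 3 &middot; 0.5 &middot; 3) + 2 &middot; 0.5 &middot; 2 = 3.125.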
We will save the previous time step's $$\partial v_t / \partial \theta_i$$ and $$\partial F_t / \partial \theta_i$$. That will be p x 2 (n x 1) vectors and p x 2 (n x n) matrices. We do not need to store all the previous time steps since this is a one-pass recursion, unlike the Kalman smoother, which is forward-backward. <br /><br />Set-up<br/>Number of parameters = p.<br/>Create Iijt and oIijt, which are p x p matrices.<br/>Create dvit, which is an n x p matrix: n innovations and p $$\theta_i$$.<br/>Create d2vijt, which is an n x p x p array: n innovations and the p x p $$\theta_i$$, $$\theta_j$$ pairs.<br/>Create dFit, which is an n x n x p array: the n x n Sigma matrix and p $$\theta_i$$.<br/>Create d2Fijt, which is an n x n x p x p array: the n x n Sigma matrix and the p x p $$\theta_i$$, $$\theta_j$$ pairs.<br/> <br/><br/>Outer loop from t=1 to t=T.<br/>Inner loop over all MARSS parameters: x0, V0, Z, a, R, B, u, Q. This is par$Z, e.g., a vector of the estimated parameter elements in Z.<br/>Inner loop over the parameters in the parameter matrix, so, e.g., over the rows in the column vector par$Z.<br/>Keep track of what parameter element I am on via a counter.<br/> <h3>The form of the parameter derivatives</h3> Within the recursion, we have terms like $$\partial M/\partial \theta_i$$, where M means some parameter matrix. We can write M as $$vec(M)=f+D\theta_m$$, where $$\theta_m$$ is the vector of parameters that appear in M. This is the way that matrices are written in Holmes (2010).
So \begin{equation} \begin{bmatrix}2a+c&b\\b&a+1\end{bmatrix} \end{equation} is written in vec form as \begin{equation} \begin{bmatrix}0\\0\\0\\1\end{bmatrix}+\begin{bmatrix}2&0&1\\ 0&1&0\\ 0&1&0\\ 1&0&0 \end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} \end{equation} The derivative of this with respect to $$\theta_i=a$$ is (the constant $$f$$ drops out) \begin{equation} \label{dpar} \begin{bmatrix}2&0&1\\ 0&1&0\\ 0&1&0\\ 1&0&0 \end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix} \end{equation} So in MARSS, $$\partial M/\partial \theta_i$$ would be <pre><br />dthetai &lt;- matrix(0,ip,1); dthetai[i,] &lt;- 1 # indicator for theta_i; ip = length of theta_m<br />dM &lt;- unvec(D%*%dthetai, dim(M)) # f is constant so it drops out; unvec only needed if M is a matrix<br /></pre>The reason is that MARSS allows any linear constraint of the form $$\alpha+\beta a + \beta_2 b$$, etc. The vec form allows me to work with a generic linear constraint without having to know the exact form of that constraint. The model and parameters are all specified in vec form with f, D, and p matrices (lower case = column vector). <br /><br />The second derivative of a parameter matrix with respect to $$\theta_j$$ is always 0 since \ref{dpar} has no parameters in it, only constants. <h3>Derivatives of the innovations and variance of innovations</h3> Equation 3.4.71b in Harvey shows $$\partial v_t / \partial \theta_i$$. Store the result in dvit[,p]. \begin{equation} \frac{\partial v_t}{\partial \theta_i}= -Z_t \frac{\partial \tilde{x}_{t|t-1}}{\partial \theta_i}- \frac{\partial Z_t}{\partial \theta_i}\tilde{x}_{t|t-1}- \frac{\partial a_t}{\partial \theta_i} \end{equation} $$\tilde{x}_{t|t-1}$$ is the one-step-ahead prediction of the state output by the Kalman filter, and in MARSSkf is xtt1[,t]. Next, use equation 3.4.73 to get $$\partial F_t / \partial \theta_i$$. Store the result in dFit[,,p].
\begin{equation} \frac{\partial F_t}{\partial \theta_i}= \frac{\partial Z_t}{\partial \theta_i} \tilde{V}_{t|t-1} Z_t^\top + Z_t \frac{\partial \tilde{V}_{t|t-1}}{\partial \theta_i} Z_t^\top + Z_t \tilde{V}_{t|t-1} \frac{\partial Z_t^\top}{\partial \theta_i} + \frac{\partial (H_t R_t H_t^\top)}{\partial \theta_i} \end{equation} $$\tilde{V}_{t|t-1}$$ is the one-step-ahead prediction covariance output by the Kalman filter, and in MARSSkf is denoted Vtt1[,,t]. <h3>Recursion for derivatives of states and variance of states</h3> <hr>If t=1<hr><ul><li><b>Case 1</b>. $$\pi=x_0$$ is treated as a parameter and $$V_0 = 0$$. For any $$\theta_i$$ that is not in $$\pi$$, $$B$$, $$u$$, $$Z$$ or $$a$$, $$\partial v_1/\partial \theta_i = 0$$. For any $$\theta_i$$ that is not in $$Z$$, $$Q$$ or $$R$$, $$\partial F_1/\partial \theta_i = 0$$ (an n x n matrix of zeros). <br/><br/>From equation 3.4.73a: \begin{equation} \frac{\partial \tilde{x}_{1|0}}{\partial\theta_i } = \frac{\partial B_1}{\partial \theta_i} \pi + B_1 \frac{\partial \pi}{\partial \theta_i} + \frac{\partial u_t}{\partial \theta_i} \end{equation} From equation 3.4.73b and using $$V_0 = 0$$: \begin{equation} \frac{\partial \tilde{V}_{1|0}}{\partial\theta_i } = \frac{\partial B_1}{\partial \theta_i} V_0 B_1^\top + B_1 \frac{\partial V_0}{\partial \theta_i} B_1^\top + B_1 V_0 \frac{\partial B_1^\top}{\partial \theta_i} + \frac{\partial (G_t Q_t G_t^\top)}{\partial \theta_i} = \frac{\partial (G_t Q_t G_t^\top)}{\partial \theta_i} \end{equation} </li><br/><br/><li><b>Case 2</b>. $$\pi=x_{1|0}$$ is treated as a parameter and $$V_{1|0}=0$$. $$\frac{\partial \tilde{x}_{1|0}}{\partial \theta_i}=\frac{\partial \pi}{\partial \theta_i}$$ and $$\partial V_{1|0}/\partial\theta_i = 0$$. </li><br/><br/><li><b>Case 3</b>. $$x_0$$ is specified by a fixed prior. $$x_0=\pi$$ and $$V_0=\Lambda$$. The derivatives of these are 0, because they are fixed.
<br/><br/>From equation 3.4.73a and using $$x_0 = \pi$$ and $$\partial \pi/\partial \theta_i = 0$$: \begin{equation} \frac{\partial \tilde{x}_{1|0}}{\partial\theta_i } = \frac{\partial B_1}{\partial \theta_i} \pi + B_1 \frac{\partial \pi}{\partial \theta_i} + \frac{\partial u_t}{\partial \theta_i}=\frac{\partial B_1}{\partial \theta_i} \pi + \frac{\partial u_t}{\partial \theta_i} \end{equation} From equation 3.4.73b and using $$V_0 = \Lambda$$ and $$\partial \Lambda/\partial \theta_i = 0$$: \begin{equation} \frac{\partial \tilde{V}_{1|0}}{\partial\theta_i } = \frac{\partial B_1}{\partial \theta_i} V_0 B_1^\top + B_1 \frac{\partial V_0}{\partial \theta_i} B_1^\top + B_1 V_0 \frac{\partial B_1^\top}{\partial \theta_i} + \frac{\partial (G_t Q_t G_t^\top)}{\partial \theta_i} = \frac{\partial B_1}{\partial \theta_i} \Lambda B_1^\top + B_1 \Lambda \frac{\partial B_1^\top}{\partial \theta_i} + \frac{\partial (G_t Q_t G_t^\top)}{\partial \theta_i} \end{equation} </li><br/><br/><li><b>Case 4</b>. $$x_{1|0}$$ is specified by a fixed prior. $$x_{1|0}=\pi$$ and $$V_{1|0} = \Lambda$$. $$\partial V_{1|0}/\partial\theta_i = 0$$ and $$\partial x_{1|0}/\partial\theta_i = 0$$. </li><br/><br/><li><b>Case 5</b>. Estimate $$V_0$$ or $$V_{1|0}$$. That is unstable (per Harvey 1989, somewhere). I don't allow that in the MARSS package. </li></ul><br/><br/>When coding this recursion, I will loop through the MARSS parameters (x0, V0, Z, a, R, B, u, Q) and, within that loop, loop through the individual parameters within the parameter vector. So say Q is diagonal and unequal. It has m variance parameters, and I'll loop through each. <br/><br/>Now we have $$\frac{\partial \tilde{x}_{1|0}}{\partial \theta_i}$$ and $$\frac{\partial \tilde{V}_{1|0}}{\partial \theta_i}$$ for $$t=1$$ and we can proceed.
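As an aside, the vec form of the parameter derivatives described earlier can be checked numerically in base R using the 2x2 example. The values for a, b, c are made up, unvec is just matrix() here, and the finite-difference comparison at the end is mine.

```r
# vec form of M = [2a+c, b; b, a+1]: vec(M) = f + D %*% theta, theta = (a,b,c)
f <- matrix(c(0, 0, 0, 1), 4, 1)
D <- matrix(c(2, 0, 0, 1,    # column for a
              0, 1, 1, 0,    # column for b
              1, 0, 0, 0),   # column for c
            nrow = 4, ncol = 3)
theta <- matrix(c(1.5, -2, 0.7), 3, 1)   # made-up values for a, b, c
M <- matrix(f + D %*% theta, 2, 2)       # unvec
# dM/da: the indicator picks out the a column of D; the constant f drops out
dthetai <- matrix(c(1, 0, 0), 3, 1)
dM <- matrix(D %*% dthetai, 2, 2)
# finite-difference check that dM really is the derivative
eps <- 1e-6
Meps <- matrix(f + D %*% (theta + eps * dthetai), 2, 2)
all.equal((Meps - M)/eps, dM)  # TRUE
```

Because the constraint is linear, the finite difference recovers the derivative exactly (up to floating point), whatever values of a, b, c are used.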
<hr>If t>1<hr> The derivative of $$\tilde{x}_{t|t-1}$$ is (3.4.73a in Harvey) \begin{equation} \frac{\partial \tilde{x}_{t|t-1}}{\partial\theta_i } = \frac{\partial B_t}{\partial \theta_i} \tilde{x}_{t-1|t-1} + B_t \frac{\partial \tilde{x}_{t-1|t-1}}{\partial \theta_i} + \frac{\partial u_t}{\partial \theta_i} \end{equation} Then we take the derivative of this to get the second partial derivative; the second derivatives of the parameter matrices are zero, which eliminates the $$B_t$$ and $$u_t$$ second-derivative terms. \begin{align} \frac{\partial^2 \tilde{x}_{t|t-1}}{\partial\theta_i \partial\theta_j} = \frac{\partial^2 B_t}{\partial\theta_i \partial\theta_j} \tilde{x}_{t-1|t-1} + \frac{\partial B_t}{\partial \theta_i}\frac{\partial \tilde{x}_{t-1|t-1}}{\partial \theta_j} + \frac{\partial B_t}{\partial \theta_j} \frac{\partial \tilde{x}_{t-1|t-1}}{\partial \theta_i} + B_t \frac{\partial^2 \tilde{x}_{t-1|t-1}}{\partial\theta_i \partial\theta_j} + \frac{\partial^2 u_t}{\partial\theta_i \partial\theta_j}\\ = \frac{\partial B_t}{\partial \theta_i}\frac{\partial \tilde{x}_{t-1|t-1}}{\partial \theta_j} + \frac{\partial B_t}{\partial \theta_j} \frac{\partial \tilde{x}_{t-1|t-1}}{\partial \theta_i} + B_t \frac{\partial^2 \tilde{x}_{t-1|t-1}}{\partial\theta_i \partial\theta_j} \end{align} In the equations, $$\tilde{x}_{t|t}$$ is output by the Kalman filter. In MARSSkf, it is called xtt[,t]. $$\tilde{x}_{t-1|t-1}$$ would be called xtt[,t-1]. The derivatives of $$\tilde{x}_{t-1|t-1}$$ are from the next part of the recursion (below).
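One pass of equation 3.4.73a can be sketched with small made-up matrices to show the shapes involved. Here xtt_prev and dxtt_prev stand in for xtt[,t-1] and its derivative carried over from the previous time step, and $$\theta_i$$ is taken to be the B[1,1] element; all numbers are hypothetical.

```r
# One step of d xtt1 / d theta_i (eq. 3.4.73a) with hypothetical inputs
B  <- matrix(c(0.9, 0, 0.1, 0.8), 2, 2)
dB <- matrix(c(1, 0, 0, 0), 2, 2)      # dB/dtheta_i when theta_i = B[1,1]
du <- matrix(0, 2, 1)                  # theta_i does not appear in u
xtt_prev  <- matrix(c(1, -0.5), 2, 1)  # xtt[, t-1] from the Kalman filter
dxtt_prev <- matrix(c(0.2, 0.1), 2, 1) # its derivative from the previous step
dxtt1 <- dB %*% xtt_prev + B %*% dxtt_prev + du
dxtt1
```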
</br></br>The derivative of $$\tilde{V}_{t|t-1}$$ is (3.4.73b in Harvey) \begin{equation} \label{derivVtt1} \frac{\partial \tilde{V}_{t|t-1}}{\partial\theta_i } = \frac{\partial B_t}{\partial \theta_i} \tilde{V}_{t-1|t-1} B_t^\top + B_t \frac{\partial \tilde{V}_{t-1|t-1}}{\partial \theta_i} B_t^\top + B_t \tilde{V}_{t-1|t-1} \frac{\partial B_t^\top}{\partial \theta_i} + \frac{\partial (G_t Q_t G_t^\top)}{\partial \theta_i} \end{equation} The second derivative of $$\tilde{V}_{t|t-1}$$ is obtained by taking the derivative of \ref{derivVtt1} and eliminating any second derivatives of parameters: \begin{align} \frac{\partial^2 \tilde{V}_{t|t-1}}{\partial\theta_i \partial\theta_j} = \frac{\partial B_t}{\partial \theta_i} \frac{\partial \tilde{V}_{t-1|t-1}}{\partial\theta_j} B_t^\top + \frac{\partial B_t}{\partial \theta_i} \tilde{V}_{t-1|t-1} \frac{\partial B_t^\top}{\partial \theta_j} + \frac{\partial B_t}{\partial \theta_j} \frac{\partial \tilde{V}_{t-1|t-1}}{\partial \theta_i} B_t^\top + B_t \frac{\partial^2 \tilde{V}_{t-1|t-1}}{\partial\theta_i \partial\theta_j} B_t^\top + \\ B_t \frac{\partial \tilde{V}_{t-1|t-1}}{\partial \theta_i} \frac{\partial B_t^\top}{\partial \theta_j} + \frac{\partial B_t}{\partial \theta_j} \tilde{V}_{t-1|t-1} \frac{\partial B_t^\top}{\partial \theta_i} + B_t \frac{\partial \tilde{V}_{t-1|t-1}}{\partial\theta_j} \frac{\partial B_t^\top}{\partial \theta_i} \end{align} In the derivatives, $$\tilde{V}_{t|t}$$ is output by the Kalman filter. In MARSSkf, it is called Vtt[,,t]. $$\tilde{V}_{t-1|t-1}$$ would be called Vtt[,,t-1]. The derivatives of $$\tilde{V}_{t-1|t-1}$$ are from the rest of the recursion (below). <hr>The rest of the recursion equations are the same for all t.
From equation 3.4.74a: \begin{equation} \frac{\partial \tilde{x}_{t|t}}{\partial\theta_i } = \frac{\partial \tilde{x}_{t|t-1}}{\partial \theta_i} + \frac{\partial \tilde{V}_{t|t-1}}{\partial \theta_i} Z_t^\top F_t^{-1}v_t + \tilde{V}_{t|t-1} \frac{\partial Z_t^\top}{\partial \theta_i} F_t^{-1}v_t - \tilde{V}_{t|t-1} Z_t^\top F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}v_t + \tilde{V}_{t|t-1} Z_t^\top F_t^{-1}\frac{\partial v_t}{\partial \theta_i} \end{equation} $$\tilde{V}_{t|t-1}$$ is output by the Kalman filter. In MARSSkf, it is called Vtt1[,,t]. $$v_t$$ are the innovations. In MARSSkf, they are called Innov[,t]. <br/><br/>From equation 3.4.74b: \begin{equation} \begin{split} \frac{\partial \tilde{V}_{t|t}}{\partial\theta_i } = & \frac{\partial \tilde{V}_{t|t-1}}{\partial \theta_i} - \frac{\partial \tilde{V}_{t|t-1}}{\partial \theta_i} Z_t^\top F_t^{-1}Z_t \tilde{V}_{t|t-1} - \tilde{V}_{t|t-1} \frac{\partial Z_t^\top}{\partial \theta_i} F_t^{-1}Z_t \tilde{V}_{t|t-1} + \tilde{V}_{t|t-1} Z_t^\top F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}Z_t \tilde{V}_{t|t-1} - \\ &\tilde{V}_{t|t-1} Z_t^\top F_t^{-1}\frac{\partial Z_t}{\partial \theta_i} \tilde{V}_{t|t-1} - \tilde{V}_{t|t-1} Z_t^\top F_t^{-1}Z_t \frac{\partial \tilde{V}_{t|t-1}}{\partial \theta_i} \end{split} \end{equation} Repeat for the next element in the parameter matrix.<br/>Repeat for the next parameter matrix.<br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;Loop over i = 1 to p.<br/>&nbsp;&nbsp;&nbsp;&nbsp;Loop over j = i to p.<br/>&nbsp;&nbsp;&nbsp;&nbsp;Compute $$I_{ij}(\theta)$$ and add it to the previous time step. This is equation 3.4.69 with the expectation dropped. Store in Iij[i,j] and Iij[j,i].
\begin{equation} I_{ij}(\theta)_t = I_{ji}(\theta)_t = \frac{1}{2}\left[ tr\left[ F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}\frac{\partial F_t}{\partial \theta_j}\right]\right] + \left(\frac{\partial v_t}{\partial \theta_i}\right)^\top F_t^{-1}\frac{\partial v_t}{\partial \theta_j} \end{equation} &nbsp;&nbsp;&nbsp;&nbsp;Add it to the running total: $$I_{ij}(\theta) = I_{ij}(\theta) + I_{ij}(\theta)_t$$<br/>&nbsp;&nbsp;&nbsp;&nbsp;Repeat for next j.<br/>&nbsp;&nbsp;&nbsp;&nbsp;Repeat for next i.<br/><br/>Repeat for next t. <br/><br/>At the end, $$I_{ij}(\theta)$$ is the observed Fisher Information matrix. <br/><br/>Note that $$Q$$ and $$R$$ do not appear in $$\partial v_t/\partial \theta_i$$, but all the other parameters do. So the second term in $$I_{ij}(\theta)$$ is always zero between $$Q$$ or $$R$$ and any other parameter. Similarly, $$u$$ and $$a$$ do not appear in $$F_t$$, so $$\partial F_t/\partial \theta_i$$ is zero for them, and the first term in $$I_{ij}(\theta)$$ is always zero between $$u$$ or $$a$$ and any other parameter. This means that there is always zero covariance between $$u$$ or $$a$$ and $$Q$$ or $$R$$. But this will not be the case between $$Q$$ or $$R$$ and $$B$$ or $$Z$$. <br/><br/>Part of the motivation for implementing the Harvey (1989) recursion is that currently in MARSS, I use a numerical estimate of the Fisher Information matrix by using one of R's functions to return the Hessian. But it often returns errors. I might improve it if I constrained it. If I am only estimating $$u$$, $$a$$, $$Q$$ and $$R$$, I could do a two-step process: get the Hessian holding the variances at the MLEs and then repeat with $$u$$ and $$a$$ at the MLEs. </div><br />2016-06-01: Notes on computing the Fisher Information matrix for MARSS models.
Part III Overview of Harvey 1989<div dir="ltr" style="text-align: left;" trbidi="on"><script type="text/x-mathjax-config"> MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } }); </script><script src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' type='text/javascript'></script><i>MathJax and blogger can be iffy. Try reloading if the equations don't show up.</i><br /><br /> Notes on computing the Fisher Information matrix for MARSS models <a href="http://parsimoniouspursuits.blogspot.com/2016/05/notes-on-computing-fisher-information.html">Part I Background</a>, <a href="http://parsimoniouspursuits.blogspot.com/2016/05/notes-on-computing-fisher-information_19.html">Part II Louis 1982</a> <br /><br /> Part II discussed the approach of Louis (1982), which uses the full-data likelihood and the first derivative of it that is computed as part of the M-step of the EM algorithm. The conclusion of Part II was that the approach is doable but computationally expensive because it scales at least with $$T^2$$. <br /><br />Here I will review the more common approach (Harvey 1989, pages 140-142, section 3.4.5 Information matrix), which uses the prediction error form of the likelihood function to calculate the observed Fisher Information $$\mathcal{I}(\hat{\theta},y)$$. A related paper is Cavanaugh and Shumway (1996), which presents an approach for calculating the <i>expected</i> Fisher Information. <h2>Harvey 1989 recursion for the expected and observed Fisher Information matrix</h2>Harvey (1989), pages 140-142, shows how to write the Hessian of the log-likelihood function using the prediction error form of the likelihood.
The prediction error form is: \begin{equation}\label{peformlogL} \log L = \sum_{t=1}^T l_t = \sum_{t=1}^T \log p(y_t|y_1^{t-1}) \end{equation} The Hessian of the log-likelihood can then be written as \begin{equation}\label{hessian} \frac{\partial^2 \log L}{\partial\theta_i \partial\theta_j}=\sum{\frac{\partial^2 l_t}{\partial\theta_i \partial\theta_j}} \end{equation} and this can be written in terms of derivatives of the innovations $$v_t$$ and the variance of the innovations $$F_t$$. This is shown in Equation 3.4.66 in Harvey (1989). There are a couple of differences between the equation below and 3.4.66 in Harvey. First, 3.4.66 has a typo; the $$[I - F_t^{-1}v_t v_t^\top]$$ should be within the trace (as below). Second, I have written out the derivative with respect to $$\theta_j$$ that appears in the first trace term. \begin{equation}\label{liket} \begin{gathered} \frac{\partial^2 l_t}{\partial\theta_i \partial\theta_j} = \frac{1}{2} tr\left[ \left[ F_t^{-1}\frac{\partial F_t}{\partial \theta_j} F_t^{-1} \frac{\partial F_t}{\partial \theta_i} - F_t^{-1}\frac{\partial^2 F_t}{\partial\theta_i \partial\theta_j} \right] \left[I - F_t^{-1}v_t v_t^\top\right] \right] - \\ \frac{1}{2}tr\left[ F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}\frac{\partial F_t}{\partial \theta_j}F_t^{-1}v_t v_t^\top\right] + \\ \frac{1}{2}tr\left[ F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}\left[ \frac{\partial v_t}{\partial \theta_j}v_t^\top + v_t\frac{\partial v_t^\top}{\partial \theta_j}\right]\right] - \\ \frac{\partial^2 v_t^\top}{\partial\theta_i \partial\theta_j}F_t^{-1}v_t + \frac{\partial v_t^\top}{\partial \theta_i} F_t^{-1}\frac{\partial F_t}{\partial \theta_j} F_t^{-1} v_t - \frac{\partial v_t^\top}{\partial \theta_i} F_t^{-1} \frac{\partial v_t}{\partial \theta_j} \end{gathered} \end{equation} The Fisher Information matrix is the negative of the expected value (over all possible data) of \ref{hessian}: \begin{equation}\label{FisherInformation2} I(\theta) = -E\left[
\frac{\partial^2 \log L}{\partial\theta_i \partial\theta_j} \right] \end{equation} Thus for the Fisher Information matrix, we take the expectation (over all possible data) of the sum (over t) of Equation 3 (3.4.66 in Harvey 1989). On pages 141-142, Harvey shows that the expected value of Equation 3 can be simplified and the i,j element of the Fisher Information matrix can be written as (Equation 3.4.69 in Harvey 1989): \begin{equation}\label{Iij} I_{ij}(\theta) = \frac{1}{2}\sum_t \left[ tr\left[ F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}\frac{\partial F_t}{\partial \theta_j}\right]\right] + E\left[\sum_t\left(\frac{\partial v_t}{\partial \theta_i}\right)^\top F_t^{-1}\frac{\partial v_t}{\partial \theta_j}\right] \end{equation} Equation \ref{Iij} (3.4.69 in Harvey 1989) is the Fisher Information and is evaluated at the true parameter values $$\theta$$. We do not know $$\theta$$ and instead we estimate the Fisher Information using our estimates of $$\theta$$. The two estimates of $$I(\theta)$$ that are used are called the expected and observed Fisher Information matrices. The expected Fisher Information is \begin{equation}\label{expectedFisherInformation2} I(\hat{\theta}) = -E\left[ \frac{\partial^2 \log L}{\partial\theta_i \partial\theta_j} \right] |_{\theta=\hat{\theta}} = -E\left[ \sum{\frac{\partial^2 l_t}{\partial\theta_i \partial\theta_j}} \right] |_{\theta=\hat{\theta}} \end{equation} and the observed Fisher Information is \begin{equation}\label{observedFisherInformation2} \mathcal{I}(\hat{\theta},y) = - \frac{\partial^2 \log L}{\partial\theta_i \partial\theta_j} |_{\theta=\hat{\theta}} = - \sum{\frac{\partial^2 l_t}{\partial\theta_i \partial\theta_j}} |_{\theta=\hat{\theta}} \end{equation} The $$|_{\theta=\hat{\theta}}$$ means 'evaluated at'. $$l_t$$ is a function of $$\theta$$. We take the derivative of that function and then evaluate that derivative at $$\theta = \hat{\theta}$$. 
The expectation (which is an integral) is over the possible values of the data $$y$$, which are generated from the model with $$\theta$$. <br /><br />The observed Fisher Information drops the expectation and the expected Fisher Information does not. The expectation is taken over all possible data, and we have only one observed data set. At first blush, it may seem that it is impossible to compute the expectation and that we must always use the observed Fisher Information. However, for some models, one can write down the expectations analytically. One could also simulate from the MLEs to get the expectations; this is the idea behind bootstrapping. In a bootstrapping approach, one uses the MLE to generate data. This is an approximation since what we would like is to simulate data from the true parameters. The mean and variance of data generated from the MLEs versus data generated from the true parameters often have nice asymptotic properties. <br /><br />However, it is common to use the observed Fisher Information matrix. This is what one is using when one uses the Hessian of the log-likelihood function evaluated at the MLEs. To get an analytical equation for the observed Fisher Information matrix, we use Equation 3 for $$l_t$$ and take the sum to get the Hessian of the log-likelihood function (\ref{hessian}). This is the same Hessian that you can get numerically. In R, you can use the fdHess function in the nlme package or the optim function. <h3>Partially observed, partially expected Fisher Information matrix</h3> Equation \ref{Iij} (Equation 3.4.69 in Harvey) is a simplification of the expected value of the sum of equation 3. The simplification occurs because a number of terms in equation 3 drop out or cancel when you take the expectation (see the bottom of page 141 in Harvey 1989). The only terms that remain are those shown in equation \ref{Iij}. Harvey (1989) does not say how to compute the expectation in equation \ref{Iij} (which is his 3.4.69).
Cavanaugh and Shumway (1996) do not say how to compute it either and suggest that it is infeasible (page 1, in the paragraph after their equation 1). Instead they say that you can drop the expectation in equation \ref{Iij} and get the observed Fisher Information: \begin{equation}\label{obsIij} \mathcal{I}_{ij}(\theta) = \frac{1}{2}\sum_t \left[ tr\left[ F_t^{-1}\frac{\partial F_t}{\partial \theta_i}F_t^{-1}\frac{\partial F_t}{\partial \theta_j}\right]\right] + \sum_t\left(\frac{\partial v_t}{\partial \theta_i}\right)^\top F_t^{-1}\frac{\partial v_t}{\partial \theta_j} \end{equation} This, however, is halfway between the expected Fisher Information matrix and the observed Fisher Information matrix because equation \ref{Iij} is what you get <i>after</i> doing the expectation and dropping some of the terms in equation 3. If you compare what you get from equation \ref{obsIij} with what you get from a numerical estimate of the Hessian of the log-likelihood function at the MLE, you will see that they are different. The variance of the former is less than the variance of the latter. This is what you expect since the former has had the expectation applied to some terms in equation 3 (Harvey's 3.4.66). <br /><br />This does not mean that equation \ref{obsIij} should not be used, but rather that if you compare it to the output from a numerically computed Hessian, they will not be the same. In <a href="http://parsimoniouspursuits.blogspot.com/2017/05/notes-on-computing-fisher-information.html">Part IV</a>, I show Harvey's recursion for computing the first derivatives of $$v_t$$ and $$F_t$$ needed in equations 3 and \ref{Iij}. I extend this recursion to get the second derivatives as well. Once we have all these, we can use equation \ref{observedFisherInformation2} with equation 3 to compute the observed Fisher Information matrix and use equation \ref{Iij} to compute the "observed/expected" Fisher Information.
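As a point of reference for the numerically computed Hessian discussed above, here is a base-R sketch on a toy iid normal model (not a MARSS model), where the observed Fisher Information at the MLE has the known closed form diag(n/&sigma;&sup2;, 2n) under the (&mu;, log &sigma;) parameterization. The model and numbers below are mine, chosen only for illustration.

```r
# Observed Fisher Information from a numerical Hessian at the MLE
set.seed(1)
y <- rnorm(500, mean = 2, sd = 1.5)
negloglik <- function(par) {
  -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))
}
fit <- optim(c(0, 0), negloglik, hessian = TRUE)  # par = (mu, log sigma)
obsFI <- fit$hessian
# compare to the closed form at the MLE
sigma2 <- exp(2 * fit$par[2])
cbind(numerical = c(obsFI[1, 1], obsFI[2, 2]),
      analytic  = c(length(y)/sigma2, 2 * length(y)))
```

The two columns agree to the accuracy of the finite-difference Hessian, and the off-diagonal entries are near zero for this model. This is the numerical route that a state-space log-likelihood would take as well, just with a more expensive negloglik.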
<h3>Writing Equation 3 in vec form</h3>We can compute the Hessian of the log-likelihood by using a for loop of i from 1 to p with an inner for loop of j from i to p. The Hessian is symmetric, so the inner loop only needs to go from i to p. However, we can also write the Hessian for time step t in a single line, without any for loops, using the Jacobian matrices for our derivatives: $$J_F$$ and $$J_v$$ are the Jacobians of $$vec(F)$$ and $$v$$ with respect to $$\theta$$, and $$\mathcal{J}_F$$ and $$\mathcal{J}_v$$ hold the corresponding second derivatives. With the t subscripts of F and v dropped: \begin{equation} \begin{gathered} \frac{1}{2} J_F^\top ( F^{-1} \otimes F^{-1}) J_F - J_F^\top ( F^{-1}vv^\top F^{-1} \otimes F^{-1} ) J_F -\frac{1}{2} ( I_p \otimes [ F^{-1} - F^{-1} v_t v_t^\top F^{-1} ] ) \mathcal{J}_F + \\ \frac{1}{2} J_F^\top [3 F^{-1}v \otimes F^{-1} + F^{-1} \otimes F^{-1}v] J_v - \mathcal{J}_v^\top (I_p \otimes F^{-1} v) - J_v^\top F^{-1} J_v \end{gathered} \end{equation} This may or may not be faster, but it is more concise. Go to <a href="http://parsimoniouspursuits.blogspot.com/2017/05/notes-on-computing-fisher-information.html">Part IV</a> to see how to compute these Jacobians using Harvey's recursion. <h3>Derivation of the observed Fisher Information matrix (equation 9)</h3> Note: I am going to drop the t subscript on F and v because things are going to get cluttered; $$v_1$$ will refer to the 1st element of the $$n \times 1$$ column vector v and $$F_{11}$$ is the (1,1) element of the matrix F. There has to be a loop to go through all the $$F_t$$ and $$v_t$$ for t=1 to T.
<h4>Terms 1 and 2 of equation 3</h4>The first term of equation 3 is \begin{equation} \begin{gathered} \frac{1}{2} tr\left[ \left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i} - F^{-1}\frac{\partial^2 F}{\partial\theta_i \partial\theta_j} \right] \left[I - F^{-1}v v^\top\right] \right] = \\ \frac{1}{2} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i}\left[I - F^{-1}v v^\top\right]\right] - \frac{1}{2} tr\left[ F^{-1}\frac{\partial^2 F}{\partial\theta_i \partial\theta_j} \left[ I - F^{-1}v v^\top \right] \right] = \\ \frac{1}{2} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i} \right] - \frac{1}{2} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i}F^{-1}v v^\top \right] - \frac{1}{2} tr\left[ F^{-1}\frac{\partial^2 F}{\partial\theta_i \partial\theta_j} \left[I - F^{-1}v v^\top\right]\right] \end{gathered} \end{equation} The second term of equation 3 is \begin{equation} - \frac{1}{2} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_i} F^{-1} \frac{\partial F}{\partial \theta_j}F^{-1}v v^\top \right] \end{equation} All the matrices within the traces above are symmetric. Since $$tr(M) = tr(M^\top)$$ and the trace is invariant to cyclic permutations, the trace of a product of symmetric matrices is unchanged when the order of the product is reversed: for symmetric A, B, C, and D, $$tr(ABCD) = tr(DCBA) = tr(ADCB)$$, etc. Reversing the order and then cycling shows that the second term is equal to the middle trace in the first term, so the two can be combined. 
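The rearrangement is easy to check numerically. A quick sketch in Python/numpy (my own check, not from the original post; random symmetric matrices stand in for $$F^{-1}$$ and the two derivative matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_sym():
    A = rng.standard_normal((n, n))
    return A + A.T

Finv = rand_sym()                 # symmetric stand-in for F^-1
Di, Dj = rand_sym(), rand_sym()   # symmetric stand-ins for dF/dtheta_i, dF/dtheta_j
v = rng.standard_normal((n, 1))

# second term of equation 3 (without the -1/2 factor)
t2 = np.trace(Finv @ Di @ Finv @ Dj @ Finv @ v @ v.T)
# middle trace of the first term (i and j in the other order)
t1mid = np.trace(Finv @ Dj @ Finv @ Di @ Finv @ v @ v.T)
assert np.isclose(t2, t1mid)
```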
Terms 1 + 2 of equation 3 can thus be written as \begin{equation}\label{term12eqn3} \frac{1}{2}tr\left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i} \right] - tr\left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i}F^{-1}v v^\top \right] - \frac{1}{2} tr\left[ F^{-1}\frac{\partial^2 F}{\partial\theta_i \partial\theta_j} \left[I - F^{-1}v v^\top\right]\right] \end{equation} We can write the first trace of equation \ref{term12eqn3} as a vector product using the relation $$tr(A^\top B) = vec(A)^\top vec(B)$$. Note that the matrices in the traces in equation \ref{term12eqn3} are symmetric. If A is symmetric, $$A^\top = A$$ and $$tr(AB) = vec(A)^\top vec(B)$$. \begin{equation} \begin{gathered} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \frac{\partial F}{\partial \theta_i} \right] = vec\left( F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} \right)^\top vec\left( \frac{\partial F}{\partial \theta_i} \right) = \\ \left( ( F^{-1} \otimes F^{-1} ) vec\left( \frac{\partial F}{\partial \theta_j} \right) \right)^\top vec\left( \frac{\partial F}{\partial \theta_i} \right) = \\ vec\left( \frac{\partial F}{\partial \theta_j} \right)^\top ( F^{-1} \otimes F^{-1}) vec\left( \frac{\partial F}{\partial \theta_i} \right) \end{gathered} \end{equation} That is for the i,j element. This matrix is symmetric so it is also the j,i element. The derivative of $$vec(F)$$ with respect to $$\theta$$ (as opposed to the j-th element of $$\theta$$) is the Jacobian matrix of $$vec(F)$$. 
\begin{equation}\label{JF} J_F = \begin{bmatrix}\frac{\partial vec(F)}{\partial\theta_1} & \frac{\partial vec(F)}{\partial\theta_2} & \dots & \frac{\partial vec(F)}{\partial\theta_p}\end{bmatrix} = \begin{bmatrix} \frac{\partial F_{11}}{\partial\theta_1} & \frac{\partial F_{11}}{\partial\theta_2} & \dots & \frac{\partial F_{11}}{\partial\theta_p}\\ \frac{\partial F_{21}}{\partial\theta_1} & \frac{\partial F_{21}}{\partial\theta_2} & \dots & \frac{\partial F_{21}}{\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial F_{nn}}{\partial\theta_1} & \frac{\partial F_{nn}}{\partial\theta_2} & \dots & \frac{\partial F_{nn}}{\partial\theta_p} \end{bmatrix} \end{equation} The full matrix for the first part of equation \ref{term12eqn3} is then \begin{equation} \frac{1}{2} J_F^\top ( F^{-1} \otimes F^{-1}) J_F \end{equation} <br><br>The middle trace of equation \ref{term12eqn3} is similar to the first and we end up with: \begin{equation} \begin{gathered} vec\left( \frac{\partial F}{\partial \theta_j} \right)^\top ( F^{-1} \otimes F^{-1}) vec\left( \frac{\partial F}{\partial \theta_i} F^{-1}vv^\top \right) = \\ vec\left( \frac{\partial F}{\partial \theta_j} \right)^\top ( F^{-1} \otimes F^{-1}) ( vv^\top F^{-1} \otimes I_n) vec\left( \frac{\partial F}{\partial \theta_i} \right) = \\ vec\left( \frac{\partial F}{\partial \theta_j} \right)^\top ( F^{-1}vv^\top F^{-1} \otimes F^{-1}) vec\left( \frac{\partial F}{\partial \theta_i} \right) \end{gathered} \end{equation} We can write this in terms of the Jacobian of vec(F): \begin{equation} J_F^\top ( F^{-1}vv^\top F^{-1} \otimes F^{-1} ) J_F \end{equation} <br><br>The third part of equation \ref{term12eqn3} involves the second derivatives $$\partial^2 F/\partial\theta_i \partial\theta_j$$. 
\begin{equation} \begin{gathered} tr\left[ F^{-1} \frac{\partial^2 F}{\partial\theta_i \partial\theta_j} [I - F^{-1}v v^\top ] \right] = tr\left[ [I - F^{-1}v v^\top ] F^{-1} \frac{\partial^2 F}{\partial\theta_i \partial\theta_j} \right] = \\ vec\left( F^{-1} - F^{-1}v v^\top F^{-1} \right)^\top vec\left( \frac{\partial^2 F}{\partial\theta_i \partial\theta_j} \right) = \\ vec\left( F^{-1} - F^{-1}v v^\top F^{-1} \right)^\top \frac{\partial vec( \partial F/\partial\theta_i )}{\partial\theta_j} \end{gathered} \end{equation} Again this is the i,j term. The term on the bottom line on the right is the $$(\theta_i,\theta_j)$$ term of the Jacobian of the vec of the Jacobian of F: \begin{equation} \mathcal{J}_F = \begin{bmatrix}\frac{\partial vec(J_F)}{\partial\theta_1} & \frac{\partial vec(J_F)}{\partial\theta_2} & \dots & \frac{\partial vec(J_F)}{\partial\theta_p}\end{bmatrix} = \begin{bmatrix} \frac{\partial^2 F_{11}}{\partial\theta_1\partial\theta_1} & \frac{\partial^2 F_{11}}{\partial\theta_1\partial\theta_2} & \dots & \frac{\partial^2 F_{11}}{\partial\theta_1\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 F_{nn}}{\partial\theta_1\partial\theta_1} & \frac{\partial^2 F_{nn}}{\partial\theta_1\partial\theta_2} & \dots & \frac{\partial^2 F_{nn}}{\partial\theta_1\partial\theta_p}\\ \frac{\partial^2 F_{11}}{\partial\theta_2\partial\theta_1} & \frac{\partial^2 F_{11}}{\partial\theta_2\partial\theta_2} & \dots & \frac{\partial^2 F_{11}}{\partial\theta_2\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 F_{nn}}{\partial\theta_2\partial\theta_1} & \frac{\partial^2 F_{nn}}{\partial\theta_2\partial\theta_2} & \dots & \frac{\partial^2 F_{nn}}{\partial\theta_2\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 F_{11}}{\partial\theta_p\partial\theta_1} & \frac{\partial^2 F_{11}}{\partial\theta_p\partial\theta_2} & \dots & \frac{\partial^2 F_{11}}{\partial\theta_p\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 F_{nn}}{\partial\theta_p\partial\theta_1} & \frac{\partial^2 F_{nn}}{\partial\theta_p\partial\theta_2} & \dots & \frac{\partial^2 F_{nn}}{\partial\theta_p\partial\theta_p}\\ \end{bmatrix} \end{equation} <br><br>The full matrix for the third part of equation \ref{term12eqn3} is then \begin{equation} (
I_p \otimes vec( F^{-1} - F^{-1} v v^\top F^{-1} )^\top ) \mathcal{J}_F \end{equation} The subscript on the $$I$$ indicates the size of the identity matrix. In this case, it is a $$p \times p$$ matrix. <h4>Term 3 of equation 3</h4>With the t subscripts dropped, the 3rd term of equation 3 is \begin{equation}\label{term3eqn3} \frac{1}{2} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_i}F^{-1} \left( \frac{\partial v}{\partial \theta_j}v^\top + v\frac{\partial v^\top}{\partial \theta_j}\right) \right] \end{equation} Using the same procedure as for the above terms, we can write this in terms of vecs. If $$b$$ and $$a$$ are $$n \times 1$$ column vectors, \begin{equation} vec(ab^\top) = (b \otimes I_n)vec(a) = (b \otimes I_n)a = (I_n \otimes a)vec(b) = (I_n \otimes a)b \end{equation} Thus, \begin{equation} \begin{gathered} vec\left( \frac{\partial v}{\partial \theta_j}v^\top\right) = (v \otimes I_n)\frac{\partial v}{\partial \theta_j} \\ vec\left( v (\partial v/\partial \theta_j)^\top \right) = (I_n \otimes v)\frac{\partial v}{\partial \theta_j} \end{gathered} \end{equation} and \begin{equation} vec\left( \frac{\partial v}{\partial \theta_j}v^\top + v(\partial v/\partial \theta_j)^\top \right) = (v \otimes I_n + I_n \otimes v)\frac{\partial v}{\partial \theta_j} \end{equation} When A is symmetric, $$tr(AB) = vec(A)^\top vec(B)$$. 
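These vec identities are easy to verify numerically. A quick sketch in Python/numpy (my own check, not from the original post; `vec()` is the column-stacking vec used in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
I = np.eye(n)

def vec(M):
    # column-stacking vec(), matching the vec() used in the text
    return M.reshape(-1, 1, order="F")

a = rng.standard_normal((n, 1))
b = rng.standard_normal((n, 1))

# vec(a b^T) = (b kron I_n) a = (I_n kron a) b
vab = vec(a @ b.T)
assert np.allclose(vab, np.kron(b, I) @ a)
assert np.allclose(vab, np.kron(I, a) @ b)

# for symmetric A: tr(AB) = vec(A)^T vec(B)
A0 = rng.standard_normal((n, n))
A = A0 + A0.T
B = rng.standard_normal((n, n))
assert np.isclose(np.trace(A @ B), (vec(A).T @ vec(B)).item())
```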
Thus term 3 of equation 3 can be written as \begin{equation} \begin{gathered} \frac{1}{2} tr\left[ F^{-1}\frac{\partial F}{\partial \theta_i}F^{-1} \left( \frac{\partial v}{\partial \theta_j}v^\top + v\frac{\partial v^\top}{\partial \theta_j}\right) \right] = \frac{1}{2} vec\left( \frac{\partial F}{\partial \theta_i} \right)^\top (F^{-1} \otimes F^{-1}) (v \otimes I_n + I_n \otimes v)\frac{\partial v}{\partial \theta_j} = \\ \frac{1}{2} vec\left( \frac{\partial F}{\partial \theta_i} \right)^\top (F^{-1}v \otimes F^{-1} + F^{-1} \otimes F^{-1}v) \frac{\partial v}{\partial \theta_j} \end{gathered} \end{equation} This is the i,j term of the Fisher Information matrix from term 3 in equation 3. To get all terms, we use the Jacobian of vec(F) as above and the Jacobian of v: \begin{equation} \frac{1}{2} J_F^\top (F^{-1} \otimes F^{-1}) (v \otimes I_n + I_n \otimes v) J_v = \frac{1}{2} J_F^\top [F^{-1} v \otimes F^{-1} + F^{-1} \otimes F^{-1}v] J_v \end{equation} where $$J_F$$ is defined in equation \ref{JF} and $$J_v$$ is \begin{equation}\label{Jv} J_v = \begin{bmatrix} \frac{\partial v_{1}}{\partial\theta_1} & \frac{\partial v_{1}}{\partial\theta_2} & \dots & \frac{\partial v_{1}}{\partial\theta_p}\\ \frac{\partial v_{2}}{\partial\theta_1} & \frac{\partial v_{2}}{\partial\theta_2} & \dots & \frac{\partial v_{2}}{\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial v_{n}}{\partial\theta_1} & \frac{\partial v_{n}}{\partial\theta_2} & \dots & \frac{\partial v_{n}}{\partial\theta_p} \end{bmatrix} \end{equation} <h4>Term 4 of equation 3</h4>The 4th term of equation 3 is \begin{equation}\label{term4eqn3} - \frac{\partial^2 v^\top}{\partial\theta_i \partial\theta_j}F^{-1}v \end{equation} This is for the i,j term of the Fisher Information matrix. 
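The term 3 identity above can also be checked end to end. A numerical sketch in Python/numpy (my own check; random symmetric matrices and vectors stand in for $$F^{-1}$$, $$\partial F/\partial\theta_i$$, $$v$$, and $$\partial v/\partial\theta_j$$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def vec(M):
    return M.reshape(-1, 1, order="F")  # column-stacking vec()

A = rng.standard_normal((n, n))
Finv = A @ A.T + n * np.eye(n)          # symmetric stand-in for F^-1
B = rng.standard_normal((n, n))
Di = B + B.T                            # symmetric stand-in for dF/dtheta_i
v = rng.standard_normal((n, 1))
dj = rng.standard_normal((n, 1))        # stand-in for dv/dtheta_j

# trace form of term 3
lhs = 0.5 * np.trace(Finv @ Di @ Finv @ (dj @ v.T + v @ dj.T))
# vec/Kronecker form of term 3
rhs = 0.5 * (vec(Di).T @ (np.kron(Finv @ v, Finv) + np.kron(Finv, Finv @ v)) @ dj).item()
assert np.isclose(lhs, rhs)
```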
An equation for all terms can be written as a function of the Jacobian of $$vec(J_v)$$: \begin{equation} \mathcal{J}_v = \begin{bmatrix}\frac{\partial vec(J_v)}{\partial\theta_1} & \frac{\partial vec(J_v)}{\partial\theta_2} & \dots & \frac{\partial vec(J_v)}{\partial\theta_p}\end{bmatrix} = \begin{bmatrix} \frac{\partial^2 v_{1}}{\partial\theta_1\partial\theta_1} & \frac{\partial^2 v_{1}}{\partial\theta_1\partial\theta_2} & \dots & \frac{\partial^2 v_{1}}{\partial\theta_1\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 v_{n}}{\partial\theta_1\partial\theta_1} & \frac{\partial^2 v_{n}}{\partial\theta_1\partial\theta_2} & \dots & \frac{\partial^2 v_{n}}{\partial\theta_1\partial\theta_p}\\ \frac{\partial^2 v_{1}}{\partial\theta_2\partial\theta_1} & \frac{\partial^2 v_{1}}{\partial\theta_2\partial\theta_2} & \dots & \frac{\partial^2 v_{1}}{\partial\theta_2\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 v_{n}}{\partial\theta_2\partial\theta_1} & \frac{\partial^2 v_{n}}{\partial\theta_2\partial\theta_2} & \dots & \frac{\partial^2 v_{n}}{\partial\theta_2\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 v_{1}}{\partial\theta_p\partial\theta_1} & \frac{\partial^2 v_{1}}{\partial\theta_p\partial\theta_2} & \dots & \frac{\partial^2 v_{1}}{\partial\theta_p\partial\theta_p}\\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial^2 v_{n}}{\partial\theta_p\partial\theta_1} & \frac{\partial^2 v_{n}}{\partial\theta_p\partial\theta_2} & \dots & \frac{\partial^2 v_{n}}{\partial\theta_p\partial\theta_p}\\ \end{bmatrix} \end{equation} The right-hand part of equation \ref{term4eqn3}, $$F^{-1}v$$, is an $$n \times 1$$ matrix. 
We need to write this as the $$np \times p$$ matrix: \begin{equation} \begin{bmatrix} F^{-1}v & 0_{n \times 1} & \dots & 0_{n \times 1}\\ 0_{n \times 1} & F^{-1}v & \dots & 0_{n \times 1}\\ \vdots & \vdots & \vdots & \vdots\\ 0_{n \times 1} & 0_{n \times 1} & \dots & F^{-1}v \end{bmatrix} = I_p \otimes F^{-1}v \end{equation} Thus the full matrix of i,j terms in the Fisher Information matrix from term 4 of equation 3 is \begin{equation} - \mathcal{J}_v^\top (I_p \otimes F^{-1}v) \end{equation} <h4>Term 5 of equation 3</h4>Term 5 is \begin{equation}\label{term5eqn3} \frac{\partial v^\top}{\partial \theta_i} F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} v \end{equation} This is a scalar and thus its vec is equal to itself. We can rewrite equation \ref{term5eqn3} using the following relation: \begin{equation} vec(a^\top ABC c ) = (c^\top \otimes a^\top) vec(ABC) = a^\top (c^\top \otimes I_n) vec(ABC) = a^\top (c^\top \otimes I_n) (C^\top \otimes A) vec(B) = a^\top (c^\top C^\top \otimes A) vec(B) \end{equation} Thus equation \ref{term5eqn3} is \begin{equation} \frac{\partial v^\top}{\partial \theta_i} F^{-1}\frac{\partial F}{\partial \theta_j} F^{-1} v = \frac{\partial v^\top}{\partial \theta_i} (v^\top \otimes I_n) (F^{-1} \otimes F^{-1}) vec\left( \frac{\partial F}{\partial \theta_j} \right) \end{equation} This is for the i,j term of the Fisher Information matrix. 
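The relation used for term 5 holds for arbitrary matrices, and can be verified numerically. A sketch in Python/numpy (my own check; random matrices and vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def vec(M):
    return M.reshape(-1, 1, order="F")  # column-stacking vec()

a = rng.standard_normal((n, 1))
c = rng.standard_normal((n, 1))
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# a^T A B C c = a^T (c^T C^T kron A) vec(B)
lhs = (a.T @ A @ B @ C @ c).item()
rhs = (a.T @ np.kron(c.T @ C.T, A) @ vec(B)).item()
assert np.isclose(lhs, rhs)
```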
For the full matrix, we use the Jacobian of v (equation \ref{Jv}) and the Jacobian of vec(F) (equation \ref{JF}): \begin{equation} J_v^\top (v^\top \otimes I_n) (F^{-1} \otimes F^{-1}) J_F = J_v^\top (v^\top F^{-1} \otimes F^{-1}) J_F \end{equation} <h4>Term 6 of equation 3</h4>Term 6 is \begin{equation}\label{term6eqn3} - \frac{\partial v^\top}{\partial \theta_i} F^{-1} \frac{\partial v}{\partial \theta_j} \end{equation} This is for the i,j term of the Fisher Information matrix and we can write it immediately as the full matrix in terms of the Jacobian of v: \begin{equation} - J_v^\top F^{-1} J_v \end{equation} <h4>Putting all the terms together</h4>Putting all the terms together, we have the full observed Fisher Information matrix: \begin{equation} \begin{gathered} \frac{1}{2} J_F^\top ( F^{-1} \otimes F^{-1}) J_F - J_F^\top ( F^{-1}vv^\top F^{-1} \otimes F^{-1} ) J_F -\frac{1}{2} ( I_p \otimes vec( F^{-1} - F^{-1} v v^\top F^{-1} )^\top ) \mathcal{J}_F + \\ \frac{1}{2} J_F^\top [F^{-1}v \otimes F^{-1} + F^{-1} \otimes F^{-1}v] J_v - \mathcal{J}_v^\top (I_p \otimes F^{-1}v) + J_v^\top (v^\top F^{-1} \otimes F^{-1}) J_F - J_v^\top F^{-1} J_v \end{gathered} \end{equation} This matrix should be symmetric, since an information matrix is symmetric. All of the terms are symmetric matrices except the two cross terms involving both $$J_F$$ and $$J_v$$, and those two are transposes of one another. 
\begin{equation} J_v^\top (v^\top F^{-1} \otimes F^{-1}) J_F = \left( \frac{1}{2} J_F^\top [ F^{-1} v \otimes F^{-1} + F^{-1} \otimes F^{-1}v ] J_v \right)^\top \end{equation} Thus the sum of the two cross terms is symmetric, and the full observed Fisher Information matrix above is symmetric, as an information matrix must be. </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-70062446549174364042016-05-19T20:48:00.000-07:002017-05-31T11:09:56.893-07:00Notes on computing the Fisher Information matrix for MARSS models. Part II Louis 1982<div dir="ltr" style="text-align: left;" trbidi="on"><script type="text/x-mathjax-config"> MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } }); </script><script src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' type='text/javascript'></script><i>MathJax and blogger can be iffy. Try reloading if the equations don't show up.</i><br /><br /> Part II. Background on Fisher Information is in <a href="http://parsimoniouspursuits.blogspot.com/2016/05/notes-on-computing-fisher-information.html">Part I</a>. <br /><br /> So how do we compute $$I(\hat{\theta})$$ or $$\mathcal{I}(\hat{\theta},y)$$ (in Part I)? In particular, can we use the analytical derivatives of the full log-likelihood that are part of the EM algorithm? Many researchers have worked on this idea. My notes here were influenced by this blog post <a href="http://statisticalrecipes.blogspot.com/2012/03/em-algorithm-and-confidence-intervals.html">EM Algorithm: Confidence Intervals</a> on the same topic, which got me started. 
This blog post is mainly a discussion of the result by Louis (1982) on calculation of the Fisher Information matrix from the 'score' function that one takes the derivative of in the M-step of the EM algorithm. <br /><br />The 'score' function used in the EM algorithm for a MARSS model is \begin{equation} Q(\theta | \theta_j) = E_{X|y,\theta_j } [\log f_{XY}(X,y|\theta) ] \end{equation} It is the expected value, taken over the hidden random variable $$X$$, of the full data log-likelihood at $$Y=y$$; full means it is a function of all the random variables in the model, which includes the hidden or latent variables. $$x, y$$ is the full 'data': the left sides of the state equation for $$x$$ and the observation equation for $$y$$. We take the expectation of this full data likelihood conditioned on the observed data $$y$$ and on $$\theta_j$$, the value of $$\theta$$ at the j-th iteration of the EM algorithm. Although $$Q(\theta | \theta_j)$$ looks a bit hairy, the full-data likelihood is often easy to write down, and considerably easier than the data likelihood $$f(y|\theta)$$. The hard part is often the expectation step; however, for MARSS models the Kalman filter-smoother algorithm computes the expectations involving $$X$$, and Holmes (2010) shows how to compute the expectations involving $$Y$$, which come up when there are missing values in the dataset (missing time steps, say). <br /><br />In the M-step of the EM algorithm, we take the derivative of $$Q(\theta | \theta_j)$$ with respect to $$\theta$$ and solve for the $$\theta$$ where $\frac{\partial Q(\theta | \theta_j ) }{\partial \theta} = 0.$ It would be nice if one could use the following to compute the observed Fisher Information: $- \left.\frac{\partial^2 Q(\theta | \hat{\theta}) }{\partial \theta^2 } \right|_{\theta = \hat{\theta} }$ $$Q(\theta | \hat{\theta})$$ is our score function at the end of the EM algorithm, when $$\theta = \hat{\theta}$$. 
$$Q$$ is a function of $$\theta$$, the model parameters, and will have terms like $$E(X|Y=y, \hat{\theta})$$, the expected value of $$X$$ conditioned on $$Y=y$$ and the MLE. Those are the expectations coming out of the Kalman filter-smoother. We take the second derivative of $$Q$$ with respect to $$\theta$$. That is straightforward for the MARSS equations. You take the first derivative of $$Q$$ with respect to $$\theta$$, which you already have from the update or M-step equations, and take the derivative of that with respect to $$\theta$$. <br /><br />Conceptually, this $- \left.\frac{\partial^2 Q(\theta | \hat{\theta}) }{\partial \theta^2 } \right|_{\theta = \hat{\theta} } = - \left.\frac{\partial^2 E_{X|y,\hat{\theta} } [\log f(X,y|\theta) ] }{\partial \theta^2 } \right|_{\theta = \hat{\theta} }$ looks a bit like the observed Fisher Information: \begin{equation}\label{obsFI} \mathcal{I}(\hat{\theta},y) = - \left.\frac{\partial^2\log f(y|\theta)}{\partial \theta^2} \right|_{\theta=\hat{\theta}} \end{equation} except that instead of the data likelihood $$f(y|\theta)$$, we use the expected likelihood $$E_{X|y,\hat{\theta} } [\log f_{XY}(X,y|\theta) ]$$. The expected likelihood is the full likelihood with the $$X$$ and $$XX^\top$$ random variables replaced by their expected values assuming $$\theta = \hat{\theta}$$ and $$Y=y$$. The problem is that $$E_{X|y,\theta } [\log f(X,y|\theta) ]$$ is a function of $$\theta$$ and by fixing it at $$\hat{\theta}$$ we are not accounting for the uncertainty in that expectation. What we need is something like <br /><br />Information with X fixed at expected value - Information on expected value of X <br /><br />so we account for the fact that we have over-estimated the information from the data by treating the hidden random variable as fixed. The same issue arises when we compute confidence intervals using the estimate of the variance without accounting for the fact that this is an estimate and thus has uncertainty. 
Louis (1982) and Oakes (1999) are concerned with how to do this correction or adjustment. <br /><h2>Louis 1982 approach</h2>The following is equations 3.1, 3.2 and 3.3 in Louis (1982) translated to the MARSS case. In the MARSS model, we have two random variables, $$X(t)$$ and $$Y(t)$$. The joint distribution of $$\{X(t), Y(t) \}$$ conditioned on $$X(t-1)$$ is multivariate normal. Our full data set includes all time steps, $$\{X, Y \}$$. <br /><br />Let's call $$\{x, y\}$$ the full state: the values of $$X$$ and $$Y$$ at all times t. The full state can be an unconditional random variable, $$\{X,Y\}$$, or a conditional random variable, $$\{X,y\}$$ (conditioned on $$Y=y$$). The equations near the top of page 227 of Louis (1982) become \begin{equation} \lambda(x,y,\theta) = \log\{ f_{XY}(x,y|\theta) \} \label{lambdaz} \end{equation} \begin{equation} \lambda^*(y,\theta) = \log\{ f_Y(y|\theta) \} = \log \int_X f_{XY}(x,y|\theta)dx \label{lambday} \end{equation} $$f(.|\theta)$$ is the probability distribution of the random variable conditioned on $$\theta$$. $$\lambda$$ is the full likelihood; 'full' means it includes both $$x$$ and $$y$$. $$\lambda^*$$ is the likelihood of $$y$$ alone. It is defined by the marginal distribution of $$y$$; the integral over $$X$$ on the right side of \ref{lambday}. For a MARSS model, the data likelihood can be written easily as a function of the Kalman filter recursions (which is why you can write a recursion for the information matrix based on derivatives of $$\lambda^*$$; see Part III). <br /><br />Next equation down. Louis doesn't say this and his notation is not totally clear, but the expectation right above section 3 (and in his eqn 3.1) is a conditional expectation. This is critical to know to follow his derivation of equation 3.1 in the appendix. $$\theta_j$$ is his $$\theta(0)$$; it is the value of $$\theta$$ at the last EM iteration. 
\begin{equation}\label{expLL} E_{X|y,\theta_j}[ \lambda( X, y, \theta)] = \int_X \lambda( x, y, \theta) f_{X|Y}(x|Y=y, \theta_j) dx \end{equation} My 'expectation' notation is a little different from Louis'. The subscript on the E shows what is being integrated over ($$X$$) and what the conditionals are. The term $$f_{X|Y}(x|Y=y, \theta_j)$$ is the probability of $$x$$ conditioned on $$Y=y$$ and $$\theta=\theta_j$$. The subscript on $$f$$ indicates that we are using the probability distribution of $$x$$ conditioned on $$Y=y$$. For the EM algorithm, we need to distinguish between $$\theta$$ and $$\theta_j$$ because we maximize with respect to $$\theta$$ not $$\theta_j$$. If we just need the expectation at $$\theta$$, no maximization step, then we just use $$\theta$$ in $$f(.|\theta)$$ and in the subscript on E. <br /><br />Before moving on with the derivation, notice that in \ref{expLL}, we fix $$y$$, the data. We are not treating that as a random variable. We could certainly treat $$E_{X|y,\theta_j}[ \lambda( X, y, \theta)]$$ as some function $$g(y)$$ and consider the random variable $$g(Y)$$. But Louis (1982) will not go that route. $$y$$ is fixed. Thus we are talking about the <em>observed</em> Fisher Information rather than the <em>expected</em> Fisher Information. The latter would take an expectation over the possible $$y$$ generated by our model with parameters at the MLE. <br /> <h3>Derivation of equation 3.1 in Louis 1982</h3> Now we can derive equation 3.1 in Louis (1982). I am going to combine the info in Louis' section 3.1 and the appendix on the derivation of 3.1. Before proceeding, note that Louis uses 'denominator' format for his matrix derivatives, and I will follow his convention here. $$\theta$$ is a column vector of parameters and the likelihood $$f(.|\theta)$$ is scalar. Under denominator format, $$f^\prime(.|\theta) = df(.|\theta)/d\theta$$ will be a column vector. 
$$f^{\prime\prime}(.|\theta) = d^2f(.|\theta)/d\theta d\theta^\top$$ will be a matrix in Hessian format (the first $$d\theta$$ goes 1 to $$n$$ down the columns and the second $$d\theta$$ goes 1 to $$n$$ across the rows). <br/><br/>Take the derivative of \ref{lambdaz} with respect to $$\theta$$ to define $$S(x,y,\theta)$$. \begin{equation} S(x,y,\theta)=\lambda^\prime(x,y,\theta)=\frac{d \log\{f_{XY}(x,y|\theta)\} }{d \theta}= \frac{df_{XY}(x,y|\theta)/d\theta}{f_{XY}(x,y|\theta)} = \frac{f_{XY}^\prime(x,y|\theta)}{f_{XY}(x,y|\theta)}\label{Sz} \end{equation} Take the derivative of the far right side of \ref{lambday} with respect to $$\theta$$ to define $$S^*(y,\theta)$$. For the last step (far right), I used $$f_Y(y|\theta) = \int_X f_{XY}(x,y|\theta)dx$$, the definition of the marginal distribution, to change the denominator. \begin{equation}\label{Sy} S^*(y,\theta)=\lambda^{*\prime}(y,\theta)=\frac{ d \log \int_X f_{XY}(x,y|\theta)dx }{d \theta} = \frac{ \int_X f_{XY}^\prime(x,y|\theta) dx }{ \int_X f_{XY}(x,y|\theta)dx } = \frac{ \int_X f_{XY}^\prime(x,y|\theta) dx }{ f_Y(y|\theta) } \end{equation} Now multiply the integrand in the numerator by $$f_{XY}(x,y|\theta)/f_{XY}(x,y|\theta)$$. The last step (far right) uses \ref{Sz}. \begin{equation}\label{intfprime} \int_X f_{XY}^\prime(x,y|\theta) dx = \int_X \frac{f_{XY}^\prime(x,y|\theta)f_{XY}(x,y|\theta)}{f_{XY}(x,y|\theta)} dx = \int_X \frac{f_{XY}^\prime(x,y|\theta)}{f_{XY}(x,y|\theta)}f_{XY}(x,y|\theta) dx = \int_X S(x,y,\theta) f_{XY}(x,y|\theta) dx \end{equation} We combine \ref{Sy} and \ref{intfprime}: \begin{equation}\label{Sstar} S^*(y,\theta)= \frac{ \int_X f_{XY}^\prime(x,y|\theta) dx }{ f_Y(y|\theta) } = \int_X S(x,y,\theta) \frac{ f_{XY}(x,y|\theta) }{ f_Y(y|\theta) } dx = \int_X S(x,y,\theta) f_{X|Y}(x|Y=y,\theta) dx \end{equation} The second to last step used the fact that $$f_Y(y|\theta)$$ does not involve $$x$$, thus we can bring it into the integral. This gives us $$f_{XY}(x,y|\theta)/f_Y(y|\theta)$$. 
This is the probability of $$x$$ conditioned on $$Y=y$$. <br /><br />The last step in the derivation of equation 3.1 is to recognize that the far right side of \ref{Sstar} is the conditional expectation in 3.1. Louis does not actually write out the expectation in 3.1 and the notation is rather vague. But the expectation in equation 3.1 is the conditional expectation on the far right side of \ref{Sstar}. \begin{equation}\label{Louise3p1} S^*(y,\theta)=\int_X S(x,y,\theta) f_{X|Y}(x|Y=y,\theta) dx=E_{X|y,\theta} [ S(X,y,\theta) ] \end{equation} using my notation for a conditional expectation, which is slightly different from Louis'. At the MLE, $$S^*(y,\hat{\theta})=0$$ since that is how the MLE is defined (it's where the derivative of the data likelihood is zero). <h3>Derivation of equation 3.2 in Louis 1982</h3> The meat of Louis 1982 is equation 3.2. The observed Fisher Information matrix \ref{obsFI} is \begin{equation}\label{obsFI32} \mathcal{I}(\theta,y) = B^*(y,\theta) = -S^{*\prime}(y,\theta) = - \lambda^{*\prime\prime}(y,\theta) = - \frac{\partial^2\log f_Y(y|\theta)}{\partial \theta \partial \theta^\top} \end{equation} The first four expressions are all just different notations for the observed Fisher Information; the term on the far right writes the derivative out explicitly. <br /><br /> We start by taking the second derivative of \ref{lambdaz} with respect to $$\theta$$ to define $$B(x,y,\theta)$$. We use $$S(x,y,\theta)$$ as written in \ref{Sz}. 
\begin{equation}\label{B1} \mathcal{I}(\theta,x,y) = B(x,y,\theta)=-\lambda^{\prime\prime}(x,y,\theta) = -S^\prime(x,y,\theta) = -\frac{d[f_{XY}^\prime(x,y|\theta)/f_{XY}(x,y|\theta)]}{d \theta^\top} \end{equation} The transpose on the second $$d\theta$$ is because we are taking the second derivative $$d^2 l/d\theta d\theta^\top$$ (the Hessian of the log-likelihood); $$d\theta d\theta$$ wouldn't make sense as that would be a column vector times a column vector. <br /><br />To do the derivative on the far right side of \ref{B1}, we first need to recognize the form of the equation. $$f_{XY}^\prime(x,y|\theta)$$ is a column vector and $$f_{XY}(x,y|\theta)$$ is a scalar, thus the thing we are taking the derivative of has the form $$\overrightarrow{h}(\theta)/g(\theta)$$; the arrow over $$h$$ indicates that it is a (column) vector while $$g()$$ is a scalar. Using the quotient rule for vector derivatives, we have $\frac{ d (\overrightarrow{h}(\theta)/g(\theta))}{d \theta^\top} = \frac{d\overrightarrow{h}(\theta)}{d \theta^\top}\frac{1}{g(\theta)} - \frac{\overrightarrow{h}(\theta)}{ g(\theta)^2 }\frac{ dg(\theta) }{ d \theta^\top }$ Thus (notice I'm writing the equation for the negative of $$B(x,y,\theta)$$), \begin{equation}\label{B2} -B(x,y,\theta) = \frac{d(f_{XY}^\prime(x,y|\theta)/f_{XY}(x,y|\theta))}{d \theta^\top} = \frac{f_{XY}^{\prime\prime}(x,y|\theta)}{f_{XY}(x,y|\theta)} - \frac{f_{XY}^\prime(x,y|\theta)f_{XY}^\prime(x,y|\theta)^\top}{ f_{XY}(x,y|\theta)^2 }= \frac{f_{XY}^{\prime\prime}(x,y|\theta)}{f_{XY}(x,y|\theta)} - S(x,y|\theta)S(x,y|\theta)^\top \end{equation} <br /><br />Let's return to \ref{obsFI32} and take the derivative of $$\lambda^{*\prime}(y,\theta)$$ with respect to $$\theta$$ using the form shown in equation \ref{Sy}. I have replaced the integral in the denominator by $$f_Y(y|\theta)$$ and used the same quotient rule used for \ref{B2}. 
\begin{align} \begin{split} \lambda^{*\prime\prime}(y,\theta)= d\left( \int_X f_{XY}^\prime(x,y|\theta) dx \middle/ f_Y(y|\theta) \right)/d\theta^\top = \\ \frac{\int_X f_{XY}^{\prime\prime}(x,y|\theta) dx }{f_Y(y|\theta)}- \frac{\int_X f_{XY}^\prime(x,y|\theta)dx }{f_Y(y|\theta)} \left(\frac{\int_X f_{XY}^\prime(x,y|\theta)dx}{f_Y(y|\theta)}\right)^\top = \frac{\int_X f_{XY}^{\prime\prime}(x,y|\theta) dx }{f_Y(y|\theta)}- S^*(y|\theta)S^*(y|\theta)^\top \end{split} \end{align} The last substitution uses \ref{Sy}. Thus, \begin{equation}\label{B4} \lambda^{*\prime\prime}(y,\theta)= \frac{\int_X f_{XY}^{\prime\prime}(x,y|\theta) dx }{f_Y(y|\theta)}- S^*(y|\theta)S^*(y|\theta)^\top \end{equation} <br />Let's look at the integral of the second derivative of $$f_{XY}(x,y|\theta)$$ in \ref{B4}: \begin{equation}\label{B5} \left( \int_X f_{XY}^{\prime\prime}(x,y|\theta) dx \middle/ f_Y(y|\theta) \right) = \int_X \frac{f_{XY}^{\prime\prime}(x,y|\theta)}{ f_{XY}(x,y|\theta) }\frac{f_{XY}(x,y|\theta)}{ f_Y(y|\theta)} dx= \int_X \frac{f_{XY}^{\prime\prime}(x,y|\theta)}{ f_{XY}(x,y|\theta) }f_{X|Y}(x|Y=y,\theta) dx \end{equation} This is the conditional expectation $$E_{X|y,\theta} [ f_{XY}^{\prime\prime}(X,y|\theta)/f_{XY}(X,y|\theta) ]$$ that we see 5 lines above the references in Louis (1982). 
Using \ref{B2} we can write the integrand in terms of $$B(x,y|\theta)$$: \begin{equation}\label{B6} \frac{f_{XY}^{\prime\prime}(x,y|\theta)}{ f_{XY}(x,y|\theta) } = -B(x,y|\theta)+S(x,y|\theta)S(x,y|\theta)^\top \end{equation} Combining \ref{B4}, \ref{B5}, and \ref{B6}, we can write the equation above the references in Louis: \begin{equation}\label{B7} \lambda^{*\prime\prime}(y,\theta)= E_{X|y,\theta} [ - B(X,y|\theta)+S(X,y|\theta)S(X,y|\theta)^\top]-S^*(y|\theta)S^*(y|\theta)^\top \end{equation} The negative of this is the observed Fisher Information (\ref{obsFI32}), which gives us equation 3.2 in Louis (1982): \begin{equation}\label{Louismain} \mathcal{I}(\theta,y) = E_{X|y,\theta} [ B(X,y|\theta)] - E_{X|y,\theta} [ S(X,y|\theta)S(X,y|\theta)^\top]+S^*(y|\theta)S^*(y|\theta)^\top \end{equation} <h3>Derivation of equation 3.3 in Louis 1982</h3> Louis states that "The first term in (3.2) is the conditional expected full data observed information matrix, while the last two produce the expected information for the conditional distribution of X given $$X \in R$$." His X is my $$\{X,Y\}$$ and $$X \in R$$ means $$Y=y$$ in my context. He writes this in simplified form, with $$XY$$ standing for my $$\{X,Y\}$$: $I_Y = I_{XY} - I_{X|Y}$ $\mathcal{I}(\theta,y) = E_{X|y,\theta} [\mathcal{I}(\theta,X,y)] - I_{X|Y}$ Let's see how this is the case. <br /><br />The full data observed information matrix is $\mathcal{I}(\theta,x,y) = -\lambda^{\prime\prime}(x,y,\theta) = B(x,y,\theta)$ This is simply the definition that Louis gives to $$B(x,y,\theta)$$. We do not know $$x$$ so we do not know the full data observed information matrix. But we have the distribution of $$x$$ conditioned on our data $$y$$. $E_{X|y,\theta} [ B(X,y|\theta)]$ is thus the expected full data observed information matrix conditioned on our observed data $$y$$. So this is the first part of his statement.<br /><br />The second part of his statement takes a bit more effort to work out. 
First we substitute $$S^*(y|\theta)$$ with $$E_{X|y,\theta} [ S(X,y|\theta) ]$$ from \ref{Louise3p1}. This gives us: \begin{equation}\label{ES1} E_{X|y,\theta} [ S(X,y|\theta)S(X,y|\theta)^\top ]-S^*(y|\theta)S^*(y|\theta)^\top = E_{X|y,\theta} [ S(X,y|\theta)S(X,y|\theta)^\top ]-E_{X|y,\theta} [ S(X,y|\theta) ]E_{X|y,\theta} [ S(X,y|\theta)^\top ] \end{equation} Using the computational form of the variance, $$var(X)=E(XX^\top)-E(X)E(X)^\top$$, we can see that \ref{ES1} is the conditional variance of $$S(X,y|\theta)$$. $var_{X|y,\theta}( S(X,y|\theta) )$ But the variance of the score (the first derivative of the log-likelihood) is the <em>expected</em> Fisher Information of $$X$$. In this case, it is the expected Fisher Information of the hidden state $$X$$, where we specify that $$X$$ has the conditional distribution $$f_{X|Y} (X | Y=y,\theta)$$. Thus we have the second part of Louis' statement. <br/><br/> <h3>Relating Louis 1982 to the update equations in the MARSS EM algorithm</h3> The main result in Louis (1982) (\ref{Louismain}) can be written \begin{equation}\label{Louismain2} \mathcal{I}(\theta,y) = E_{X|y,\theta} [ B(X,y|\theta)] - var_{X|y,\theta} [ S(X,y|\theta) ] \end{equation} The M-step of the EM algorithm involves the first derivative of the log-likelihood with respect to $$\theta$$, $$S(X,y|\theta)$$, since it involves setting this derivative to zero: \begin{equation} Q^\prime(\theta | \theta_j) = d( E_{X|y,\theta_j } [\log f_{XY}(X,y|\theta) ])/d\theta = E_{X|y,\theta_j } [ d(\log f_{XY}(X,y|\theta))/d\theta ] = E_{X|y,\theta_j } [ S(X,y|\theta) ] \end{equation} With the MARSS model, $$S(X,y|\theta)$$ is analytical and we can also compute $$B(X,y|\theta)$$, the second derivative, analytically. <br /><br /> The difficulty arises with this term: $$var_{X|y,\theta} [ S(X,y|\theta) ]$$. The $$S(X,y|\theta)$$ is a summation from $$t=1$$ to $$T$$ that involves $$X_t$$ or $$X_t X_{t-1}^\top$$ for some parameters.
When we do the cross-product, we will end up with terms like $$E[ X_t X_{t+k}^\top ]$$ and $$E[ X_t X_t^\top X_{t+k}X_{t+k}^\top ]$$. The latter is not a problem; all the random variables in a MARSS model are multivariate normal and the k-th central moments can be expressed in terms of the first and second moments, but that will still leave us with terms like $$E[ X_t X_{t+k}^\top ]$$---the smoothed covariance between $$X$$ at time $$t$$ and $$t+k$$ conditioned on all the data ($$t=1:T$$). <br /><br />Computing these is not hard. These are the n-step-apart smoothed covariances. Harvey (1989), page 148, discusses how to use the Kalman filter to get the n-step ahead prediction covariances and a similar approach can be used (presumably) to get the $$V(t,t+k)$$ smoothed covariances. However, this will end up being computationally expensive because we will need all of the $$t,t+k$$ combinations, i.e., {1,3}, {1,4}, ..., {2,3}, {2,4}, etc. That will be a lot: $$T + (T-1) + (T-2) + \dots + 1 = T(T+1)/2$$ smoothed covariances. Lystig and Hughes (2002) and Duan and Fulop (2011) discuss this issue in a related application of the approach in Louis (1982). They suggest that you do not need to include covariances with a large time separation because the covariance goes to zero. You just need to include enough time-steps. <h3>Conclusion</h3> I think the approach of Louis (1982) is not viable for MARSS models. The derivatives $$B(x,y|\theta)$$ and $$S(x,y|\theta)$$ are straightforward (if tedious) to compute analytically following the approach in Holmes (2010). But computing all the n-step smoothed covariances is going to be very slow and each computation involves many matrix multiplications. However, one could compute $$\mathcal{I}(\theta,y)$$ via simulation using \ref{Louismain2}.
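To convince myself that \ref{Louismain2} behaves as advertised, here is a toy (non-MARSS) check of my own, not from Louis (1982). Let $$X \sim N(\theta,1)$$ and $$Y|X \sim N(X,1)$$, so that marginally $$Y \sim N(\theta,2)$$ and the true information in one observation $$y$$ is $$1/2$$. For this model $$B(x,y|\theta)=1$$, $$S(x,y|\theta)=x-\theta$$, and $$X|Y=y \sim N((\theta+y)/2,\, 1/2)$$, so the Monte Carlo version of $$E_{X|y,\theta}[B]-var_{X|y,\theta}(S)$$ can be checked directly:

```r
# Monte Carlo check of Louis' identity I(theta,y) = E[B] - var(S) for a toy
# model (my own example): X ~ N(theta,1), Y|X ~ N(X,1). True info of Y is 1/2.
set.seed(1)
theta <- 0.5                      # parameter value (playing the role of the MLE)
y <- 1.2                          # one observed data point
# X | Y=y is N((theta + y)/2, 1/2) for this model
xb <- rnorm(1e5, mean = (theta + y) / 2, sd = sqrt(1 / 2))
B <- rep(1, length(xb))           # -d2/dtheta2 log f(x,y|theta) is constant = 1
S <- xb - theta                   # d/dtheta log f(x,y|theta), the full-data score
I.hat <- mean(B) - var(S)         # approximates 1 - 1/2 = 1/2
```

The same recipe is what a simulation-based estimate of \ref{Louismain2} for a MARSS model would look like, with draws of the states conditioned on the data playing the role of `xb`.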
It is easy enough to simulate $$X$$ using the MLEs and then you compute $$B(x_b,y|\theta)$$ and $$S(x_b,y|\theta)$$ for each simulation, where $$x_b$$ is the bootstrapped $$x$$ time series and $$y$$ is the data. I don't think it makes sense to do that for MARSS models since there are two recursion approaches for computing the observed and expected Fisher Information using $$f(y|\theta)$$ and the Kalman filter equations (Harvey 1989, pages 140-142; Cavanaugh and Shumway 1996). <hr> <h2>Footnotes</h2> Given a joint probability distribution of $$\{X,Y\}$$, the marginal distribution of $$Y$$ is $$\int_X f(X,Y) dx$$. Discussions of the estimators for MARSS models often use the property of the marginal distributions of a multivariate normal without actually stating that this property is being used. The step in the derivation will just say, 'Thus' with no indication of what property was just used. <br />Reviewed here: http://fourier.eng.hmc.edu/e161/lectures/gaussianprocess/node7.html If you have a joint likelihood of some random variables, and you want the likelihood of a subset of those random variables, then you compute the marginal distribution, i.e., you integrate over the random variables you want to get rid of: $L(\theta | y) = \int_X L(\theta | X,Y) dx |_{Y=y}$. So we integrate out $$X$$ from the full likelihood and then set $$Y=y$$ to get the likelihood we want to maximize to get the MLE $$\theta$$ (if we want MLEs). <br /><br />The marginal likelihood is a little different. The marginal likelihood is used when you want to get rid of some of the parameters, nuisance parameters. The integral you use is different: $L(\theta_1|y) = \int_{\theta_2} p(y|\theta_1,\theta_2) p(\theta_2|\theta_1)d\theta_2$ This presumes that you have $$p(\theta_2|\theta_1)$$. <br /><br />The expected likelihood is different yet again: $E_{X,Y|Y=y,\theta_j} [L(\theta | X,Y) ] = \int_X L(\theta | X,Y) p(x|Y=y, \theta_j) dx$.
On the surface it looks like the equation for $$L(\theta|y)$$ but it is different. $$\theta_j$$ is not $$\theta$$. It is the parameter value at which we are computing the expected value of $$X$$. Maximizing the $$E_{X,Y|Y=y,\theta_j} [L(\theta | X,Y) ]$$ will increase the likelihood but will not take you to the MLE---you have to embed this maximization in the EM algorithm that walks up the likelihood surface. <br/><br/> $$P(A|B) = P(A \cap B)/P(B)$$ <br/><br/> I normally think about $$Y$$ as being partially observed (missing values) so I also take the expectation over $$Y(2)$$ conditioned on $$Y(1)$$, where (1) means observed and (2) means missing. In Holmes (2010), this is done in order to derive general EM update equations for the missing values case. But my notation is getting hairy, so for this write-up, I'm treating $$Y$$ as fully observed; so no $$Y(2)$$ and I've dropped the integrals (expectations) over $$Y(2)$$. <br/><br/> http://people.missouristate.edu/songfengzheng/Teaching/MTH541/Lecture%20notes/Fisher_info.pdf <br/><br/> https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Higher_moments <br /><br /><h2>Papers and online references</h2>Ng, Krishnan and McLachlan 2004<br />The EM algorithm. Section 3.5 discusses standard errors approaches<br />https://www.econstor.eu/dspace/bitstream/10419/22198/1/24_tk_gm_skn.pdf<br />http://hdl.handle.net/10419/22198 <br /><br />Efron and Hinkley 1978<br />(argues that the observed Fisher Information is better than expected Fisher Information in many/some cases. The same paper argues for the likelihood ratio method for CIs)<br />Assessing the accuracy of the maximum likelihood estimator: observed versus expected Fisher Information<br />https://www.stat.tamu.edu/~suhasini/teaching613/expected_observed_information78.pdf <br /><br />Hamilton 1994<br />http://web.pdx.edu/~crkl/readings/Hamilton94.pdf <br /><br />Hamilton's exposition assumes you know the marginal distribution of a multivariate normal.
Scroll down to the bottom.<br />http://fourier.eng.hmc.edu/e161/lectures/gaussianprocess/node7.html <br /><br />Meilijson 1989<br />Fast improvement to the EM algorithm on its own terms<br />http://www.jstor.org/stable/pdf/2345847.pdf <br /><br />Oakes 1999<br />Direct calculation of the information matrix via the EM algorithm<br />http://www.jstor.org/stable/pdf/2680653.pdf?_=1463187953783 <br /><br />Ho, Shumway and Ombao 2006<br />(this has a brief statement that Oakes 1999 derivatives are hard to compute. It doesn't say why. It says nothing of Louis 1982.)<br />Chapter 7, The state-space approach to modeling dynamic processes<br />Models for Intensive Longitudinal Data<br />https://books.google.com/books?hl=en&lr=&id=Semo20xZ_M8C <br /><br />Louis 1982<br />(so elegant. alas, MARSS deals with time series data...)<br />Finding the observed information matrix when using the EM algorithm<br />http://www.jstor.org/stable/pdf/2345828.pdf<br />http://www.markirwin.net/stat220/Refs/louis1982.pdf <br /><br />Lystig and Hughes 2002 <br />(helped me better understand why Louis 1982 is hard for MARSS models)<br />Exact computation of the observed information matrix for hidden Markov models<br />http://www.tandfonline.com.offcampus.lib.washington.edu/doi/abs/10.1198/106186002402 <br /><br />Duan and Fulop 2011<br />(also helped me better understand why Louis 1982 is hard for MARSS models)<br />A stable estimator for the information matrix under EM for dependent data<br />http://www.rmi.nus.edu.sg/DuanJC/index_files/files/EM_Variance_March%205%202007.pdf<br />http://link.springer.com/article/10.1007/s11222-009-9149-4 <br /><br />Naranjo 2007 (didn't use)<br />State-space models with exogenous variables and missing data, PhD U of FL<br />http://etd.fcla.edu/UF/UFE0021568/naranjo_a.pdf <br /><br />Dempster, Laird, Rubin 1977<br />(didn't really use but looked up more info on the 'score' function Q)<br />Maximum likelihood for incomplete data via the EM algorithm<br
/>http://web.mit.edu/6.435/www/Dempster77.pdf <br /><br />van Dyk, Meng and Rubin 1995<br />(this looks promising)<br />Maximum likelihood estimation via the ECM algorithm: computing the asymptotic variance<br />http://wwwf.imperial.ac.uk/~dvandyk/Research/95-sinica-secm.pdf <br /><br />Cavanaugh and Shumway 1996<br />On computing the expected Fisher Information Matrix for state-space model parameters<br /><br /><br />Harvey 1989, pages 140-143, Section 3.4.5 Information matrix<br />Forecasting, structural time series models and the Kalman filter<br /> </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-87610442110605460232016-05-18T17:52:00.000-07:002017-05-31T13:32:48.789-07:00Notes on computing the Fisher Information matrix for MARSS models. Part I Background<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on"><script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script><i>MathJax and blogger can be iffy. Try reloading if the equations don't show up and then wait, like 30-60 seconds for the equations to magically appear (fingers crossed).</i><br /><br />The Fisher Information is defined as \begin{equation}\label{FisherInformation} I(\theta) = E_{Y|\theta}\{ [\partial\log L(\theta|Y)/\partial\theta]^2 \} = \int_y [\partial\log L(\theta|y)/\partial\theta]^2 f(y|\theta)dy \end{equation} In words, it is the expected value (taken over all possible data) of the square of the gradient (first derivative) of the log likelihood surface at $$\theta$$. It is a measure of how much information the data (from our experiment or monitoring) have about $$\theta$$. The log-likelihood surface is for a fixed set of data and the $$\theta$$ vary. The peak is at the MLE, which is generally not the true $$\theta$$, so the surface has a non-zero gradient (slope) at $$\theta$$.
The Fisher Information is the expected value (over possible data) of those gradients (squared). It can be shown that the Fisher Information can also be written as $I(\theta) = - E_{Y|\theta}\{ \partial^2\log L(\theta|Y)/\partial\theta^2 \} = -\int_y [\partial^2\log L(\theta|y)/\partial\theta^2] f(y|\theta)dy$ So the Fisher Information is the average (over possible data) convexity of the log-likelihood surface at $$\theta$$. That doesn't quite make sense to me. When I imagine the surface, the convexity at a non-peak value $$\theta$$ is not intuitively the information. The gradient squared, I understand, but the convexity at a non-peak? Note, my $$y$$ should be understood to be some multi-dimensional data set (multiple sites over multiple time points, say), and is comprised of multiple samples. Often in this case Fisher Information is written $$I_n(\theta)$$ and if the data points are all independent, $$I(\theta)=\frac{1}{n} I_n(\theta)$$. However I'm not using that notation. My $$I(\theta)$$ is referring to the Fisher Information for a dataset not individual data points within that data set. We do not know $$\theta$$ so we need to use an estimator for the Fisher Information. A common approach is to use $$I(\hat{\theta})$$, the Fisher Information at the MLE $$\hat{\theta}$$, as an estimator of $$I(\theta)$$ because: $I(\hat{\theta}) \xrightarrow{P} I(\theta)$ This is called the <i>expected</i> Fisher Information and is computed at the MLE: \begin{equation}\label{expectedFisherInformation} I(\hat{\theta}) = - E_{Y|\hat{\theta}}\{ \partial^2\log L(\theta|Y)/\partial \theta^2 \} |_{\theta=\hat{\theta}} \end{equation} That $$|_{\theta=\hat{\theta}}$$ at the end means that after doing the derivative with respect to $$\theta$$, we replace $$\theta$$ with $$\hat{\theta}$$. It would not make sense to do the substitution before since $$\hat{\theta}$$ is a fixed value and so you cannot take the derivative with respect to it.
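As a quick numeric illustration that the squared-gradient form and the convexity form agree (a toy Poisson example of my own, not from the MARSS context): for one observation $$y \sim Poisson(\lambda)$$, both forms equal $$1/\lambda$$, and a Monte Carlo average over simulated data recovers that value from either expression:

```r
# Monte Carlo check that E[(dlogL/dlambda)^2] = -E[d2logL/dlambda2] for a
# single Poisson(lambda) observation; both equal 1/lambda. Toy example.
set.seed(1)
lambda <- 3
y <- rpois(2e5, lambda)
score <- y / lambda - 1          # first derivative of log f(y|lambda)
hess <- -y / lambda^2            # second derivative of log f(y|lambda)
I1 <- mean(score^2)              # squared-gradient form, approx 1/lambda
I2 <- -mean(hess)                # negative-convexity form, approx 1/lambda
```

Note the two integrands differ pointwise (score^2 is not -hess for a given y); only their expectations over the data agree.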
This is a viable approach if you can take the derivative of the log-likelihood with respect to $$\theta$$ and can take the expectation over the data. You could always do that expectation using simulation of course. You just need to be able to simulate data from your model with $$\hat{\theta}$$. Another approach is to drop the expectation. This is termed the <i>observed</i> Fisher Information: \begin{equation}\label{observedFisherInformation} \mathcal{I}(\hat{\theta},y) = - \left.\frac{\partial^2\log L(\theta|y)}{\partial \theta^2} \right|_{\theta=\hat{\theta}} \end{equation} where $$y$$ is the one dataset we collected. The observed Fisher Information is the curvature of the log-likelihood function around the MLE. When you estimate the variance of the MLEs from the Hessian of the log-likelihood (output from, say, some kind of Newton method or any other algorithm that uses the Hessian of the log-likelihood), then you are using the observed Fisher Information matrix. Efron and Hinkley (1978) (and Fisher, they say in their article) argue that the observed Fisher Information is a better estimate of the variance of $$\hat{\theta}$$, while Cavanaugh and Shumway (1996) show results from MARSS models that indicate that the expected Fisher Information has lower mean squared error (though it may be more biased; mean squared error measures both bias and precision). So how do we compute $$I(\hat{\theta})$$ or $$\mathcal{I}(\hat{\theta},y)$$? In particular, can I use the analytical derivatives of the full log-likelihood that are part of the EM algorithm? <a href="http://parsimoniouspursuits.blogspot.com/2016/05/notes-on-computing-fisher-information_19.html">Notes on computing the Fisher Information matrix for MARSS models. Part II EM</a> </div><hr /><h2>Footnotes</h2> See any detailed write-up on Fisher Information.
For example page 2 of these <a href="http://people.missouristate.edu/songfengzheng/Teaching/MTH541/Lecture%20notes/Fisher_info.pdf">lecture notes on Fisher Information</a>.<br /> The motivation for computing the Fisher Information is to get an estimate of the variance of $$\hat{\theta}$$ for standard errors on the parameter estimates, say. $$var(\hat{\theta}) \xrightarrow{P} \frac{1}{I(\theta)}$$.<br /> Note I'm using the notation of Cavanaugh and Shumway (1996). Efron and Hinkley (1978) use $$\mathscr{I}(\theta)$$ for the expected Fisher Information and $$I(\theta)$$ for the observed Fisher Information. Cavanaugh and Shumway (1996) use $$I(\theta)$$ for the expected Fisher Information and $$\mathcal{I}(\theta,Y)$$ for the observed Fisher Information. I use the same notation as Cavanaugh and Shumway (1996) except that they use $$I_n()$$ and $$\mathcal{I}_n$$ to be explicit that the data have $$n$$ data points. I drop the $$n$$ since I'm interested in the Fisher Information of the dataset not individual data points and if I need to use the information of the j-th data point, I would just write $$I_j()$$. The other difference is that I use $$y$$ to refer to the data. In my notation, $$Y$$ is the random variable 'data' and $$y$$ is a particular realization of that random variable. In some cases, I use $$y(1)$$. That is when the random variable $$Y$$ is only partially observed (meaning there are missing data points or time steps); $$y(1)$$ is the observed portion of $$Y$$. <br /><h2>References I looked at while working on this</h2><b>Fisher Info Lectures</b><br /><br />http://people.missouristate.edu/songfengzheng/Teaching/MTH541/Lecture%20notes/Fisher_info.pdf http://www.math.umt.edu/patterson/Information.pdf http://www.stat.umn.edu/geyer/old03/5102/notes/fish.pdf I also studied the <a href="https://en.wikipedia.org/wiki/Fisher_information">Wikipedia Fisher Information page</a>. 
Cavanaugh and Shumway (1996) have a succinct summary of Fisher Information in their introduction and I adopted their notation. <b>Papers</b><br /><br />Efron and Hinkley 1978<br />(argues that the observed Fisher Information is better than expected Fisher Information in many/some cases. The same paper argues for the likelihood ratio method for CIs)<br />Assessing the accuracy of the maximum likelihood estimator: observed versus expected Fisher Information<br />https://www.stat.tamu.edu/~suhasini/teaching613/expected_observed_information78.pdf <br /><br />Cavanaugh and Shumway 1996<br />On computing the expected Fisher Information Matrix for state-space model parameters<br /> </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-24690736664503414102016-04-28T17:09:00.001-07:002017-01-31T11:45:49.381-08:00Analysis of PhD and Baccalaureate origin of math faculty (Part I)<div dir="ltr" style="text-align: left;" trbidi="on">I've been pondering the educational paths of math faculty, so I decided to collect some data by visiting the faculty websites and looking at CVs.&nbsp; I started with the top 20 or so schools on this ranking <a href="http://www.phds.org/rankings/mathematics">http://www.phds.org/rankings/mathematics</a> and then added a few.&nbsp; I added some schools like U of WA, U of FL and U of ID for more regional diversity.&nbsp; I only collected data on PhD and undergrad institution from faculty who got their PhD in the US.&nbsp; If they got their undergrad degree in another country, I noted the country.&nbsp; If no undergrad institution was listed, I added them with undergrad 'unknown'.&nbsp; I only included full, associate and assistant faculty.&nbsp; I excluded lecturers and research faculty.&nbsp; I took data from 30 institutions (below).&nbsp; I was able to get PhD data on 761 faculty (656 male/105 female) and undergrad data on 577 of these (489 male/88 female). Here is where the faculty data were collected, broken out by institution.
The number in parentheses is the number of faculty for which I was able to collect data.<br /><br /><pre class="GEWYW5YBFEB" id="rstudio_console_output" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important; word-break: break-all;" tabindex="0">CalTech(5), Columbia(21), Cornell(30), Harvard(23), MIT(48), NYU(43), Penn State(36), Princeton(34), Rutgers(55), Stanford(31), U Chicago(50), U of AZ(29), U of FL(33),U of ID(14), U of IL UC(36), U of MD(25), U of Mich(57), U of MN(26), U of MN Duluth(2), U of Rochester(13), U of T Austin(58), U of Utah(32), U of WA(47), U of WI(45), UC Berk(73), UC Davis(17), UC Irvine(18), UCLA(38), UPenn(14), Yale(17)</pre><br /><br />Here are the first set of plots.&nbsp; These plots show where faculty (whose info was posted) got their PhDs and bachelors. &nbsp;Only ca 50% of faculty post CVs so this is a sample of the faculty. &nbsp;Only faculty, not lecturers or research faculty included. &nbsp;But I did include assistant and associate faculty. &nbsp;Note, I excluded faculty who got their PhD in another country. &nbsp;That's about 10% (except at CalTech where it is about 75%).<br /><br />Plot 1 is just the <b>Group 1</b> institutions. &nbsp;<b>Harvard, Princeton, MIT, UC Berkeley, NYU, Stanford.</b><br />Why these? You'll see in plot 2. &nbsp;Plot 1 shows that this group is closed. &nbsp;Almost all faculty within this group got their PhD from institutions within this group. &nbsp;For the bachelor degrees, about 30% got their undergrad degree in another country. 
&nbsp;For those that got their bachelor's in the US, 40% got their bachelors from Group 1 and 50% got their undergrad from the Ivies+MIT+Stanford (excluding UC Berk).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-l0m1jBaJ4nI/VyKbZME2DBI/AAAAAAAAXaQ/VgTRek6QVOcRUmpgCETB7QdItQ331hMXQCLcB/s1600/Fig1-Closed-DPS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="353" src="https://2.bp.blogspot.com/-l0m1jBaJ4nI/VyKbZME2DBI/AAAAAAAAXaQ/VgTRek6QVOcRUmpgCETB7QdItQ331hMXQCLcB/s640/Fig1-Closed-DPS.jpg" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>Click figure to see full size.<br /><br />Plot 2 shows just faculty from OUTSIDE Group 1. &nbsp;These are 23 large research universities. &nbsp;See the figure for the list. &nbsp;Within this group of 23,<br /><ul style="text-align: left;"><li>56% of faculty got their PhD from Group 1 (right figure). <i>This was how I defined Group 1</i>--the schools whose PhDs showed up disproportionately.&nbsp;</li><li>23% got it from a University of XYZ (excluding UC Berkeley).&nbsp; This includes Canada flagships (so U of Toronto) but excludes, say, U of Rochester.</li><li>2% got it from a XYZ State institution (incl SUNY) </li></ul>&nbsp;Group 1 shows up disproportionately in the undergrad degrees too. &nbsp;If the faculty got the undergrad degree in the US (about 60% of them), then<br /><div><ul style="text-align: left;"><li>35% got their undergrad degree from a Group 1 institution</li><li>35% got it from the Ivies+MIT+Stanford. However, Dartmouth is an outlier as few of its undergrads show up. </li><li>16% got it from a University of XYZ and 7% got it from a XYZ State institution. 
&nbsp;This includes Canada flagships (so U of Toronto) but excludes, say, U of Rochester.</li><li>13% got it from a small liberal arts college.&nbsp; 27 different LACs appear, and almost all appear only once.&nbsp; The exception is Reed which appears 4x.</li><li>5.6% got it from the UC system (includes UC Berkeley which is 3.6 percent)</li><li>This means that over 2x as many faculty got their undergrad from a LAC as from the entire UC system (188,000 undergrads).&nbsp; However, there are many LAC institutions and the total sum of their enrollment is likely greater than 188,000 undergrads.&nbsp;&nbsp;</li><li>43 out of the 248 faculty in this sample got their undergrad degree from Harvard or Princeton.&nbsp; That's 17%!&nbsp;&nbsp; It is somewhat higher in Group 1, 25%.</li></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-cDI2qQK0ua4/VyKbbWR6LJI/AAAAAAAAXaU/mrFfnO7w6PkRW_wB5BE1NepkDdKYivJxQCKgB/s1600/Fig2-Bac-PhD-Origins.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="354" src="https://4.bp.blogspot.com/-cDI2qQK0ua4/VyKbbWR6LJI/AAAAAAAAXaU/mrFfnO7w6PkRW_wB5BE1NepkDdKYivJxQCKgB/s640/Fig2-Bac-PhD-Origins.jpg" width="640" /></a></div><br />Related work: There is much work on this in other fields; however, I have not seen work that also looks at baccalaureate origin.<br /><br />2015 <a href="http://advances.sciencemag.org/content/1/1/e1400005">Systematic inequality and hierarchy in faculty hiring networks</a> See esp the list of references in this paper.</div></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-52944350792432911162015-04-23T18:18:00.002-07:002015-04-23T18:43:00.383-07:00Another way to get R package download statsThis is code from Mark Scheuerell that was adapted from this post by Felix Schonbrodt for a different way to get download stats: <a
href="http://www.nicebread.de/finally-tracking-cran-packages-downloads/">http://www.nicebread.de/finally-tracking-cran-packages-downloads/</a><pre><br />## adadpted from code by Felix Schonbrodt<br />## http://www.nicebread.de/finally-tracking-cran-packages-downloads/<br /><br /><br />## ======================================================================<br />## Step 1: Download all log files<br />## ======================================================================<br /><br /># start & end dates 12 months prior to current date<br />this.year = as.numeric(format(Sys.time(), "%Y"))<br />start <- as.Date( paste(this.year-1,"-",format(Sys.time(), "%m-%d"),sep="") )<br />today <- as.Date(Sys.time())<br /><br />all_days <- seq(start, today, by = 'day')<br /><br />year <- as.POSIXlt(all_days)$year + 1900<br />urls <- paste0('http://cran-logs.rstudio.com/', year, '/', all_days, '.csv.gz')<br /><br /># only download the files you don't have:<br />missing_days <- setdiff(as.character(all_days), tools::file_path_sans_ext(dir("CRANlogs"), TRUE))<br /><br />dir.create("CRANlogs")<br />for (i in 1:length(missing_days)) {<br /> print(paste0(i, "/", length(missing_days)))<br /> download.file(urls[i], paste0('CRANlogs/', missing_days[i], '.csv.gz'))<br />}<br /><br /><br />## ======================================================================<br />## Step 2: Load single data files into one big data.table<br />##<br />## NOTE: this step takes FOREVER to run<br />## ======================================================================<br /><br />file_list <- list.files("CRANlogs", full.names=TRUE)<br /><br />logs <- list()<br />for (file in file_list) {<br /> print(paste("Reading", file, "..."))<br /> logs[[file]] <- read.table(file, header = TRUE, sep = ",", quote = "\"",<br /> dec = ".", fill = TRUE, comment.char = "", as.is=TRUE)<br />}<br /><br /># rbind together all files<br />library(data.table)<br />dat <- rbindlist(logs)<br /><br /># add some keys and define 
variable types<br />dat[, date:=as.Date(date)]<br />dat[, package:=factor(package)]<br />dat[, country:=factor(country)]<br />dat[, weekday:=weekdays(date)]<br />dat[, week:=strftime(as.POSIXlt(date),format="%Y-%W")]<br /><br />setkey(dat, package, date, week, country)<br /><br />save(dat, file="CRANlogs/CRANlogs.RData")<br /><br /># for later analyses: load the saved data.table<br /># load("CRANlogs/CRANlogs.RData")<br /><br /><br />## ======================================================================<br />## Step 3: Plot results<br />## ======================================================================<br /><br /># vector of pkgs to compare<br />pkgs <- c("MARSS","dlm")<br /><br /># vector of plot colors<br />clr <- seq(length(pkgs))<br /><br /># downloads of selected pkgs by week<br />com1 <- dat[J(pkgs), length(unique(ip_id)), by=c("week", "package")]<br /><br /># total downloads to date<br />com1[, sum(V1), by=package]<br /><br /># cumulative downloads by week<br />com1$C1 <- (com1[, cumsum(V1), by=package])$V1<br /><br /># nicer form for plotting (cast() is from the reshape package)<br />library(reshape)<br />plotdat <- cast(com1,week ~ package, value="C1")<br /><br /># plot cumulative downloads over time (drop the week column)<br />matplot(plotdat[,-1],<br /> type="l", lty="solid", lwd=2, col=clr,<br /> ylab="Cumulative downloads",<br /> xlab="Week")<br /><br />legend(x="topleft", legend=colnames(plotdat)[-1],<br /> lty="solid", lwd=2, col=clr)<br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-10562072777699757942014-01-10T15:45:00.000-08:002014-01-14T17:54:17.326-08:00More general formulation.
Tests 2<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on">More tests today of formulating the MARSS model as in the napkin math of <a href="http://parsimoniouspursuits.blogspot.com/2014/01/more-general-formulation-of-marss-model.html">previous post</a>.&nbsp; Better.&nbsp; Slightly faster (3-5%).&nbsp; I need to think more carefully about whether the x0 treatment is identical. The data need an added column of NAs since X is x(t) stacked over y(t-1), so at t=1 you have x(1) over y(0). There is never any data at t=0.<br /><br /><i>Update: But adding NA to the start is not the same as using x00, and I will need to recode the Kalman filter to get the right result with x = [x(t) y(t-1)]' at t=1.&nbsp;&nbsp; It happens to work here, but other tests suggest not in general.&nbsp; Probably not a fruitful direction since perhaps it is not really necessary to have constraints across B and Z, though it feels 'complete'.</i><br /><br />To do: This helps, but previously, incorporating U into B slowed things down a lot.&nbsp; Why is that? Most likely because of the Q=0 bit, which hits the OmgQ code.&nbsp; How about adding a y=1 row and setting Q=1 so as not to hit that code?&nbsp; With the new formulation, I can have U*y(n+1) so U*1.&nbsp; That should work.&nbsp; "working using Tt form 2.R" in MARSS sandbox dir.<br /><br /></div>Estimates are pretty similar but not identical.
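The augmentation trick at the heart of this (append a constant 1 to the state so that a drift term can ride inside an augmented transition matrix) can be sanity-checked in isolation. This is a minimal standalone sketch with arbitrary toy numbers, separate from the MARSS test code:

```r
# Toy check: an augmented transition matrix [B U; 0 1] acting on [x; 1]
# reproduces B %*% x + U in the first m rows. All numbers are arbitrary.
B <- matrix(c(0.8, 0.1, 0.05, 0.9), 2, 2)   # 2 x 2 transition matrix
U <- matrix(c(0.3, -0.2), 2, 1)             # drift to be folded into B
x <- matrix(c(1.5, 2.0), 2, 1)              # a state vector
Bt <- rbind(cbind(B, U), c(0, 0, 1))        # augmented 3 x 3 transition
xt <- rbind(x, 1)                           # state with a constant 1 appended
all.equal(c((Bt %*% xt)[1:2, ]), c(B %*% x + U))  # TRUE
```

The same bookkeeping (with the Z/R blocks added) is what the full test below constructs from a fitted MARSS object.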
<br /><pre>#spp interaction example<br />royale.dat = log(t(isleRoyal[,2:3]))<br />z.royale.dat=(royale.dat-apply(royale.dat,1,mean,na.rm=TRUE))/<br /> sqrt(apply(royale.dat,1,var,na.rm=TRUE))<br />Q=matrix(list(0),2,2);diag(Q)=c("q1","q2")<br />royale.model.1=list(Z="identity", B="unconstrained",<br /> Q=Q, R="diagonal and unequal",<br /> U="zero", tinitx=0)<br />cntl.list=list(allow.degen=FALSE,maxit=200)<br />tic()<br />kemfit=MARSS(z.royale.dat, model=royale.model.1, control=list(allow.degen=FALSE))<br />toc()<br /><br />a=summary(kemfit$model)<br />tinitx=a$tinitx<br />m=dim(a$B)[1];n=dim(a$Z)[1]<br />Bt=matrix(list(0),n+m,n+m);Bt[1:m,1:m]=a$B;Bt[(m+1):(n+m),1:m]=a$Z<br />Zt=matrix(list(0),n,m+n); Zt[1:n,(m+1):(m+n)]=diag(1,n)<br />Qt=matrix(list(0),m+n,m+n); Qt[1:m,1:m]=a$Q; Qt[(m+1):(n+m),(m+1):(n+m)]=a$R<br />x0t=rbind(a$x0,matrix(list(0),n,1))<br />V0t=matrix(list(0),n+m,n+m); V0t[1:m,1:m]=a$V0<br />Ut=rbind(a$U,a$A)<br /><br />newa = list(B=Bt, Z=Zt, U=Ut, A="zero", Q=Qt, R="zero", x0=x0t, V0=V0t, tinitx=tinitx)<br />inits.list=list(x0=matrix(1+kemfit$model$data[,1],m,1))<br />ddat=cbind(NA,kemfit$model$data)<br />tic()<br />kemfita = MARSS(ddat, model=newa, control=list(allow.degen=FALSE),inits=inits.list)<br />toc()<br />p1=coef(kemfit); p2=coef(kemfita)<br />rbind(c(p1$B,p1$Z,p1$U,p1$Q,p1$R,p1$x0,kemfit$logLik),c(p2$B,p2$U,p2$Q,p2$x0,kemfita$logLik))<br /><br />#Works with this kemfit too<br />dat = t(harborSealWA)<br />dat = dat[2:4,] #remove the year row<br />#fit a model with 1 hidden state and 3 observation time series<br />tic()<br />kemfit = MARSS(dat, model=list(U=matrix(c("N","S","S"),3,1),tinitx=0), control=list(allow.degen=FALSE))<br />toc()<br /></pre></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-22099052372942347572014-01-06T14:48:00.000-08:002014-01-14T14:49:33.052-08:00More general formulation of the MARSS model<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;"
trbidi="on">Napkin math.&nbsp; I've been pondering for some time how to formulate the MARSS model in a more general way to more fully allow constraints across parameter matrices and across the X and Y parts of the model.&nbsp; I also want to allow X to be observed.<br /><br /><i>Update 1/6/2014: This doesn't seem to get me anywhere.&nbsp; The EM algorithm requires that estimated matrix elements fall on rows of Q (and R) which are non-zero. Even putting U in B (or A into Z)---thus one additional row---slows down the EM algorithm.&nbsp; Merging the y and x together in a matrix means I have NAs in the y*, representing the unobserved x in the stacked y-x.&nbsp; That leads to problems estimating x_0 because R=0 for those.&nbsp; That problem is fixable, but the others are more intractable.&nbsp; Given that just putting U into B didn't seem to get me anywhere, I'm going to drop this tangent and work on other stuff.&nbsp;&nbsp; The test code for putting U into B is below.</i><br /><br />The first napkin shows how I think I want to do this.&nbsp; e_t is iid 0,1 Gaussian noise.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-dK7aQG7kxXo/UsskbkvwHmI/AAAAAAAAT2c/emE9-IRpV00/s1600/photo.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-dK7aQG7kxXo/UsskbkvwHmI/AAAAAAAAT2c/emE9-IRpV00/s1600/photo.JPG" width="320" /></a></div><br />The second napkin shows how to set this up as a standard MARSS eqn, but involves a var-cov error matrix with a bunch of 0s.&nbsp; That's bad because wherever 0 rows appear in Q (or R), that row of B (or Z) cannot be estimated with the EM algorithm because it falls out of the likelihood equation that you integrate to get the updated B.&nbsp; That's a general difficulty with the EM approach.<br /><div class="separator" style="clear: both; text-align: center;"><a
href="http://1.bp.blogspot.com/-RrWIYP8X7So/UssjxpCZ4GI/AAAAAAAAT2M/IskXF0S5KA0/s1600/photo2.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-RrWIYP8X7So/UssjxpCZ4GI/AAAAAAAAT2M/IskXF0S5KA0/s1600/photo2.JPG" width="320" /></a></div><br />Also, the Q' and R' matrices (from the Cholesky transformation) above will have different constraints than the original Q and R matrices.&nbsp; That I think makes this bottom formulation above impossible and takes me back to the top formulation.<br /><br />Reference<br />Here's a write-up of ARMA models in state-space form, which seems to have nothing to do with the scratches above but my reformulation is motivated by thinking about (among other things) rewriting ARMA models in state-space form.<br /><a href="http://www-stat.wharton.upenn.edu/~stine/stat910/lectures/14_state_space.pdf">http://www-stat.wharton.upenn.edu/~stine/stat910/lectures/14_state_space.pdf</a><br /><br /><br /><div class="separator" style="clear: both; text-align: center;"></div><br /></div><pre>#12-20 notes<br />library(matlab)<br /># Test of some new ways to form the marss model to allow constraints across B and U<br />#harborSealWA is an n=5 matrix of logged population counts<br />dat = t(harborSealWA)<br />dat = dat[2:4,] #remove the year row<br />#fit a model with 1 hidden state and 3 observation time series<br />tic() #7.94 sec<br />kemfit = MARSS(dat, model=list(U=matrix(c("N","S","S"),3,1)), control=list(allow.degen=FALSE))<br />toc()&nbsp;</pre><pre>&nbsp;</pre><pre>#reformat model to put U in B;</pre><pre>a=summary(kemfit$model)<br />m=dim(a$B)[1]<br />Bt=cbind(rbind(a$B,matrix(0,1,m)),matrix(c(a$U,1),m+1,1)); #KFAS Tt<br />Zt=cbind(a$Z,a$A)<br />Qt=matrix(list(0),m+1,m+1); Qt[1:m,1:m]=a$Q; Qt[m+1,m+1]=0<br />x0t=rbind(a$x0,1)<br />V0t=matrix(list(0),m+1,m+1); V0t[1:m,1:m]=a$V0<br /><br />newa = list(B=Bt, Z=Zt, U="zero", A="zero", Q=Qt, R=a$R, x0=x0t, V0=V0t, tinitx=a$tinitx)<br
/>#will get same value but need to run longer<br />tic() #9.76 sec<br />kemfita = MARSS(kemfit$model$data, model=newa, control=list(allow.degen=FALSE))<br />toc()<br />rbind(c(coef(kemfit,type="vector"),kemfit$logLik),c(coef(kemfita,type="vector"),kemfita$logLik))&nbsp;</pre><pre>&nbsp;</pre><pre>#This is an alternate approach that uses a y=1 row 9.78sec</pre><pre>#and x is [x y]' </pre><pre>a=summary(kemfit$model)<br />tinitx=a$tinitx<br />m=dim(a$B)[1];n=dim(a$Z)[1]<br />Bt=matrix(list(0),n+m+1,n+m+1);Bt[1:m,1:m]=a$B;Bt[(m+1):(n+m),1:m]=a$Z; Bt[1:m,n+m+1]=a$U<br />Zt=matrix(list(0),n+1,m+n+1); Zt[,(m+1):(m+n+1)]=diag(1,n+1)<br />Qt=matrix(list(0),m+n+1,m+n+1); Qt[1:m,1:m]=a$Q; Qt[(m+1):(n+m),(m+1):(n+m)]=a$R;Qt[m+n+1,m+n+1]=1<br />x0t=rbind(a$x0,matrix(list(0),n+1,1)); x0t[n+m+1,1]=1;<br />V0t=matrix(list(0),n+m+1,n+m+1)<br />Ut="zero"<br /><br />newa = list(B=Bt, Z=Zt, U=Ut, A="zero", Q=Qt, R="zero", x0=x0t, V0=V0t, tinitx=tinitx)<br />inits.list=list(x0=matrix(1+kemfit$model$data[,1],m,1))<br />ddat=cbind(NA,kemfit$model$data); ddat=rbind(ddat,1)<br />tic()<br />kemfita = MARSS(ddat, model=newa, control=list(allow.degen=FALSE),inits=inits.list)<br />toc()<br />p1=coef(kemfit); p2=coef(kemfita)<br />rbind(c(p1$B,p1$Z,p1$U,p1$Q,p1$R,p1$x0,kemfit$logLik),c(p2$B,p2$U,p2$Q,p2$x0,kemfita$logLik))</pre><pre>&nbsp; </pre><pre>#same reformat code can be run with this kemfit<br />#spp interaction example<br />royale.dat = log(t(isleRoyal[,2:3]))<br />z.royale.dat=(royale.dat-apply(royale.dat,1,mean,na.rm=TRUE))/<br /> sqrt(apply(royale.dat,1,var,na.rm=TRUE))<br />royale.model.1=list(Z="identity", B="unconstrained",<br /> Q="diagonal and unequal", R="diagonal and unequal",<br /> U="zero", tinitx=1)<br />cntl.list=list(allow.degen=FALSE,maxit=200)<br />tic()<br />kemfit=MARSS(z.royale.dat, model=royale.model.1, control=list(allow.degen=FALSE))<br />toc()</pre><pre>&nbsp;</pre><br />&nbsp; 
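As a sanity check on the reformulation, here is a small sketch (not from the original notes; B, u, and x are made-up numbers) verifying the algebra behind folding U into B: augment the state with a constant 1 and use the transition matrix [B u; 0 1], and the original recursion x_{t+1} = B x_t + u is reproduced.

```r
# Toy check of the U-into-B augmentation (hypothetical values, not MARSS output).
# The augmented transition matrix is [B u; 0 1] and the state is [x; 1].
m <- 2
B <- matrix(c(0.9, 0.1, 0.0, 0.8), m, m)
u <- matrix(c(0.5, -0.2), m, 1)
Bt <- rbind(cbind(B, u), c(rep(0, m), 1))  # (m+1) x (m+1) augmented B
x  <- matrix(c(1, 2), m, 1)
xa <- rbind(x, 1)                          # augmented state [x; 1]
x.next  <- B %*% x + u                     # original recursion
xa.next <- Bt %*% xa                       # augmented recursion
stopifnot(all.equal(c(x.next), c(xa.next[1:m, ])))
```

This only checks the algebra; as the update above notes, the extra deterministic row still causes trouble for the EM update equations because its row of Q is zero.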
</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-89745157216051119482013-12-20T12:36:00.001-08:002013-12-20T12:42:47.475-08:00Quantifying R package downloads using the CRAN mirror statsBecause I have to justify all the time I spend working on the MARSS package, I collect stats for how much it is downloaded relative to other R packages. To be honest, I think download stats are not really helpful for getting recognition for the work---not that it hurts. The only thing that really counts is citations of the published paper on MARSS, and that relies on users citing the paper. Even then the citations are not 'worth' as much since the paper is in a software journal. Fact is, one published paper in a high-impact journal cited 3 times is still "worth" a lot more than an R package downloaded hundreds of times a day. Such is the research life. <br /><br />Here is the R code to get package stats off the CRAN mirror. See also this post by Felix Schonbrodt for a completely different way to get download stats: <a href="http://www.nicebread.de/finally-tracking-cran-packages-downloads/">http://www.nicebread.de/finally-tracking-cran-packages-downloads/</a><pre><br />require(XML)<br />require(RCurl)<br />require(httr)<br />require(stringr)<br /><br />#read in table 13 which is the download stats table<br />a=readHTMLTable("http://cran.r-project.org/report_cran.html", which=13, stringsAsFactors=FALSE)<br />b=(as.numeric(a$reqs))<br />filename=a$file<br />#detect which filenames are .tar.gz files and which are .zip. 
(packages)<br />pkg=str_detect(filename,"tar.gz") & str_detect(filename, "/src/contrib/")<br />pkg2=str_detect(filename,".zip") & str_detect(filename, "/bin/windows/contrib/r-release")<br />#detect which are documentation<br />docum=str_detect(filename,".pdf")<br /><br />#make some plots<br />par(mfrow=c(3,1))<br />#get the pkgname---because I need to deal with multiple versions of packages and I only want to count 1 of those<br />pkgname=sapply(filename[pkg],function(x){ tmp=str_split(str_split(x,"_")[[1]][1],"/")[[1]]; tmp[length(tmp)] })<br />#go through and just get the pkg version that has the max downloads<br />pkgcount=c()<br />for(i in unique(pkgname)){<br /> pkgcount=c(pkgcount,max(b[pkg][pkgname==i]))<br />}<br />#figure out which filename is MARSS<br />marsspkg=str_detect(filename,"tar.gz") & str_detect(filename, "/src/contrib/") & str_detect(filename, "MARSS")<br />#max(b[marsspkg]) means uses the count for whatever MARSS version is maximum to deal with multiple versions listed<br />titl=paste("Index of All R Source Package Downloads\ntop ",format(100*sum(pkgcount>max(b[marsspkg]))/length(pkgcount),digits=1),"%",sep="")<br />hist(log(pkgcount),main=titl,xlab="log(downloads)")<br />abline(v=log(sum(b[marsspkg])),col="red")<br />text(log(max(b[marsspkg])),2000,"MARSS",pos=4)<br /><br />pkgname=sapply(filename[pkg2],function(x){ tmp=str_split(str_split(x,"_")[[1]][1],"/")[[1]]; tmp[length(tmp)] })<br />marsspkg2=str_detect(filename,".zip") & str_detect(filename, "/bin/windows/contrib/r-release") & str_detect(filename, "MARSS")<br />pkgcount=c()<br />for(i in unique(pkgname)){<br /> pkgcount=c(pkgcount,max(b[pkg2][pkgname==i]))<br />}<br />titl=paste("Index of All R Package Windows Binaries Downloads\ntop ",format(100*sum(pkgcount>max(b[marsspkg2]))/length(pkgcount),digits=1),"%",sep="")<br />hist(log(pkgcount),main=titl,xlab="log(downloads)")<br />abline(v=log(sum(b[marsspkg2])),col="red")<br />text(log(max(b[marsspkg2])),1000,"MARSS",pos=4)<br /><br />titl=paste("Index of 
All R Package Documentation Downloads")<br />marssdocum=str_detect(filename,".pdf") & str_detect(filename, "MARSS") #define before it is used in titl<br />titl=paste(titl,"\ntop ",format(100*sum(b[docum]>max(b[marssdocum]))/length(b[docum]),digits=1),"%",sep="")<br />hist(log(b[docum]),main=titl,xlab="log(downloads)")<br />abline(v=log(max(b[marssdocum])),col="red")<br />text(log(sum(b[marssdocum])),2000,"MARSS",pos=4)<br /><br />#dlm compared to marss<br />dlmpkg=str_detect(filename,"tar.gz") & str_detect(filename, "/src/contrib/") & str_detect(filename, "/dlm_")<br />max(b[dlmpkg])<br />max(b[marsspkg]) #max to deal with different package versions and only use one<br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-43844958714609743922013-12-12T15:21:00.002-08:002014-01-14T10:06:09.183-08:00Automating testing of package version updates (MARSS specific)<div dir="ltr" style="text-align: left;" trbidi="on">With a major update to MARSS in the works with the EM algorithm translated to C++, I realized I needed to bite the bullet and automate the testing of the package updates.&nbsp; The main testing in the MARSS package occurs in the code in the extensive User Guide, but one of the tedious tasks for each version update has been making sure that I don't break anything with new updates and that any differences between output using different versions are expected (due to an intended change).&nbsp;&nbsp; I didn't want to duplicate effort put into the User Guide code by making special test code.&nbsp; Instead I just wanted to rerun the code from different package versions and make sure everything matched.&nbsp; I came up with a way to automate this by having different versions in different locations (in this case the base R library versus my local library).&nbsp; This also has the side benefit of testing that the R code supplied with the package can be sourced (I discovered a number of problems this way).<br /><br />Here's the code; the comments explain what it is doing.&nbsp; Each User Guide chapter comes with 
a code file that allows users to replicate the chapter's examples.&nbsp; The Sweave file for each chapter is written with special labels for code chunks that are to appear in these R files and the makefile assembles these into the R files supplied with the package.&nbsp; I'll have to change this a bit to facilitate using it as test code:<br /><ul style="text-align: left;"><li>Export all the code to the R files.&nbsp; The code in the Sweave files is flagged with "CS_" if I want it to appear in the R files for the users.&nbsp; I need to make sure I export all code that makes stuff in the chapters.&nbsp; Otherwise I risk not testing some of the code in the User Guide.&nbsp; Right now I pick-and-choose a bit and export mostly examples.</li><li>Avoid reusing object names.&nbsp; I tend to use "kem" and "kemfit" over and over in the chapter code.&nbsp; If I do that there won't be a separate object created for each bit of code.&nbsp;</li><li>Use set.seed() to ensure that objects from random number generators are the same. 
</li></ul>Here is the test code.&nbsp; Basic idea is to load one MARSS version, run the test code, save all objects as a list.&nbsp; Repeat for the 2nd version.&nbsp; Compare the lists from the two versions.&nbsp; Report any differences.<br /><ul style="text-align: left;"></ul><pre># ###########################################<br /># This compares output from two different MARSS versions<br /># using the R code in the doc folder<br /># How to run<br /># Install one version of MARSS into the base R library<br /># Install a second version into the local R library<br /># Open the versiontest.R file<br /># RShowDoc("versiontest.R", package="MARSS")<br /># Source the code.<br /># Note: Using 'build and reload' from RStudio builds the package into the local<br /># library but does not install the doc or help files<br /># Use Install from zip and install from a .tar.gz file instead<br /># ###########################################<br /><br />#make sure MARSS isn't loaded<br />try(detach(package:MARSS),silent=TRUE)<br /><br />#New version should be in the local library<br />lib.loc = Sys.getenv("R_LIBS_USER")<br />unittestvrs=packageVersion("MARSS", lib.loc = lib.loc)<br />library(MARSS, lib.loc = lib.loc)<br /><br />#Get whatever code files are in the doc directory; these are tested<br />unittestfiles = dir(path=paste(lib.loc,"/MARSS/doc",sep=""), pattern="*[.]R", full.names = TRUE)<br /><br />cat("Running code with MARSS version", as.character(unittestvrs), "\n")<br />for(unittestfile in unittestfiles){<br /> #clean the workspace but keep objects needed for the unit test<br /> rm(list = ls()[!(ls()%in%c("unittestfile","unittestfiles","unittestvrs"))])<br /> #set up name for log files<br /> tag=strsplit(unittestfile,"/")[[1]]<br /> tag=tag[length(tag)]<br /> tag=strsplit(tag,"[.]")[[1]][1]<br /> #run the code which will create objects<br /> cat("Running ",unittestfile, "\n")<br /> sink(paste("outputNew-",tag,".txt",sep=""))<br /> #wrapped in try so it keeps going if the code has a problem<br
/> #set the seed so any random nums are the same<br /> set.seed(10)<br /> try(source(unittestfile))<br /> sink()<br /> #make a list of objects created by the test code<br /> testNew = mget(ls()[!(ls()%in%c("unittestfile","unittestfiles","unittestvrs"))])<br /> save(testNew,file=paste(tag,unittestvrs,".Rdata",sep=""))<br />}<br />#detach the new version<br />detach(package:MARSS)<br /><br />#Repeat for an older version of MARSS which is in the R library (no local library)<br />lib.loc = paste(Sys.getenv("R_HOME"),"/library",sep="")<br />unittestvrs=packageVersion("MARSS", lib.loc = lib.loc)<br />library(MARSS, lib.loc = lib.loc)<br />cat("\n\nRunning code with MARSS version", as.character(unittestvrs), "\n")<br />for(unittestfile in unittestfiles){<br /> rm(list = ls()[!(ls()%in%c("unittestfile","unittestfiles","unittestvrs"))])<br /> tag=strsplit(unittestfile,"/")[[1]]<br /> tag=tag[length(tag)]<br /> tag=strsplit(tag,"[.]")[[1]][1]<br /> cat("Running ",unittestfile, "\n")<br /> sink(paste("outputOld-",tag,".txt",sep=""))<br /> set.seed(10)<br /> try(source(unittestfile))<br /> sink()<br /> testOld = mget(ls()[!(ls()%in%c("unittestfile","unittestfiles","unittestvrs"))])<br /> save(testOld,file=paste(tag,unittestvrs,".Rdata",sep=""))<br />}<br />detach(package:MARSS)<br /><br />#Now start comparing the lists made using different versions of MARSS<br />cat("\n\nStarting object comparisons\n")<br />for(unittestfile in unittestfiles){<br /> #Get the file name<br /> tag=strsplit(unittestfile,"/")[[1]]<br /> tag=tag[length(tag)]<br /> tag=strsplit(tag,"[.]")[[1]][1]<br /> #Load in the 2 lists, testNew and testOld<br /> vrs=packageVersion("MARSS", lib.loc = Sys.getenv("R_LIBS_USER"))<br /> load(file=paste(tag,vrs,".Rdata",sep=""))<br /> lib.loc = paste(Sys.getenv("R_HOME"),"/library",sep="")<br /> vrs=packageVersion("MARSS", lib.loc = lib.loc)<br /> load(file=paste(tag,vrs,".Rdata",sep=""))<br /> <br /> #Compare the lists and report any differences<br /> cat("Checking ", tag, "\n")<br /> 
if(!identical(names(testNew), names(testOld))){<br /> cat("ERROR: Names of the test lists not identical\n\n")<br /> next<br /> }<br /> good=rep(TRUE,length(names(testNew)))<br /> for(ii in 1:length(names(testNew))){<br /> if(!identical(testNew[[ii]], testOld[[ii]])) good[ii] = FALSE<br /> }<br /> if(!all(good)){<br /> cat("ERROR: The following objects are not identical\n")<br /> cat(names(testNew)[!good])<br /> cat("\n\n")<br /> }else{<br /> cat("PASSED\n\n")<br /> }<br />}<br /></pre></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-16031878083876086372013-12-11T14:58:00.001-08:002015-07-27T18:29:32.647-07:00More native R versus RcppArmadillo speed test comparisons for EM algorithm<div dir="ltr" style="text-align: left;" trbidi="on">Following on my <a href="http://parsimoniouspursuits.blogspot.com/2013/12/speed-comparisons-using-native-r-and.html">previous post</a>, I continue to evaluate whether time spent writing C++ code for some of my EM algorithm in MARSS is time well-spent.&nbsp; Today I wrote a small function for one of the update equations, the R update.&nbsp; Below is a little function in R to do the biggest part of that update: <br /><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">test = function(Z, A, dR, kf, Ey){<br />&nbsp; sum1 = t.dR.dR = 0<br />&nbsp; TT = dim(kf[["xtT"]])[2]<br />&nbsp; t.dR.dR = t.dR.dR + crossprod(dR)<br />&nbsp; for (i in 1:TT) {<br />&nbsp;&nbsp;&nbsp; hatyt = Ey[["ytT"]][,i,drop=FALSE]; hatyxt=sub3D(Ey[["yxtT"]],t=i); hatOt = sub3D(Ey[["OtT"]],t=i)<br />&nbsp;&nbsp;&nbsp; hatPt = kf[["VtT"]][,,i]+tcrossprod(kf[["xtT"]][,i,drop=FALSE])<br />&nbsp;&nbsp;&nbsp; hatxt = kf[["xtT"]][,i,drop=FALSE]<br />&nbsp;&nbsp;&nbsp; sum1a = (hatOt - tcrossprod(hatyxt, Z) - tcrossprod(Z, hatyxt)- tcrossprod(hatyt, A) - tcrossprod(A, hatyt) + tcrossprod(Z%*%hatPt, Z) + tcrossprod(Z%*%hatxt, A) + tcrossprod(A, Z%*%hatxt) + tcrossprod(A))&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></span><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">&nbsp;&nbsp;&nbsp; sum1 = sum1 + crossprod(dR, vec(sum1a))<br />&nbsp; }&nbsp; <br />&nbsp; return(sum1)<br />}</span></span><br />Z is a matrix, A a matrix, dR a 3D array, kf a list with a 3D array and 2D matrix I need, Ey is a list with 2 3D arrays and 1 2D matrix I need.<br /><br />Here's some RcppArmadillo (C++) to replicate the function above:<br /><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">// [[Rcpp::depends(RcppArmadillo)]]<br /><br />#include &lt;RcppArmadillo.h&gt;<br /><br />using namespace arma;<br /><br />// [[Rcpp::export]]<br />vec Rupdate2(mat Z, mat A, Rcpp::NumericVector vecdR, Rcpp::List&amp; kf, Rcpp::List&amp; Ey) {<br />&nbsp; Rcpp::NumericVector kfVtT = kf["VtT"], EyyxtT = Ey["yxtT"], EyOtT = Ey["OtT"]; <br />&nbsp; mat ytT = Ey["ytT"], xtT = kf["xtT"];<br />&nbsp; vec dRDim = vecdR.attr("dim"), VtTDim = kfVtT.attr("dim"), yxtTDim = EyyxtT.attr("dim"), OtTDim = EyOtT.attr("dim");<br />&nbsp; unsigned int TT = xtT.n_cols, m = xtT.n_rows, n = ytT.n_rows, p = dRDim(1);<br />&nbsp; cube VtT(kfVtT.begin(), VtTDim(0), VtTDim(1), VtTDim(2), false);<br />&nbsp; cube yxtT(EyyxtT.begin(), yxtTDim(0), yxtTDim(1), yxtTDim(2), false);<br />&nbsp; cube OtT(EyOtT.begin(), OtTDim(0), OtTDim(1), OtTDim(2), false);<br />&nbsp; cube cubedR(vecdR.begin(), dRDim(0), dRDim(1), dRDim(2), false);<br />&nbsp; vec hatyt(n), hatxt(m), sum1=zeros(p);<br />&nbsp; mat hatyxt(n,m), hatOt(n,n), hatPt(m,m), sum1a(n,n), dR=cubedR.slice(0);<br />&nbsp; for (unsigned int i = 0; i&lt;TT; i++) {<br />&nbsp;&nbsp;&nbsp; if(dRDim(2)&gt;1) dR=cubedR.slice(i);<br />&nbsp;&nbsp;&nbsp; hatyt=ytT.col(i); hatxt=xtT.col(i);<br />&nbsp;&nbsp;&nbsp; hatOt=OtT.slice(i); hatyxt=yxtT.slice(i);<br />&nbsp;&nbsp;&nbsp; hatPt=VtT.slice(i) + hatxt * hatxt.t();<br />&nbsp;&nbsp;&nbsp; sum1a = hatOt - hatyxt * Z.t() - 
Z * hatyxt.t()- hatyt * A.t() - A * hatyt.t() + Z * hatPt * Z.t() + (Z * hatxt) * A.t() + A * (hatxt.t() * Z.t()) + A*A.t();<br />&nbsp;&nbsp;&nbsp; sum1 = sum1 + dR.t() * vectorise(sum1a);<br />&nbsp; }<br />&nbsp; return sum1;<br />}</span></span><br /><br />A few notes<br /><ul style="text-align: left;"><li>Rcpp::List&amp; makes it pass the list by reference instead of making a copy in memory</li><li><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">cubedR(vecdR.begin(), dRDim(0), dRDim(1), dRDim(2), false) is the way to do the same thing when you need to construct a cube.&nbsp; See comments by Rcpp developer on <a href="http://stackoverflow.com/questions/18866130/passing-large-matrices-to-rcpparmadillo-function-without-creating-copy-advanced">this SO post</a>.</span></span></li></ul>I tried it on a 15 x 20 matrix of data and a 15 x 154 matrix and both times got about a 5-fold increase in speed:<br /><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" 
tabindex="0"><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">2 1000 3.15 1.010<br />1 1000 16.42 5.263</pre></span></pre><br /><br />So not orders of magnitude like I'd hoped, but probably enough to speed up the EM part by 50% when all is said and done. 
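A side note on the R building blocks used above: crossprod() and tcrossprod() are just t(A) %*% B and A %*% t(B) computed without explicitly forming the transpose. A quick sketch with made-up matrices:

```r
# crossprod(A, B) == t(A) %*% B ; tcrossprod(A, B) == A %*% t(B)
# (random matrices, used only to check the identities)
set.seed(1)
A <- matrix(rnorm(20), 4, 5)
B <- matrix(rnorm(20), 4, 5)
stopifnot(all.equal(crossprod(A, B), t(A) %*% B))
stopifnot(all.equal(tcrossprod(A, B), A %*% t(B)))
```

Skipping the explicit t() calls is also a good chunk of where the native-R speed in these comparisons comes from.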
</div><xmp>################################################
## Self-contained benchmark example for blog
################################################
require(utils)
require(Rcpp)
require(RcppArmadillo)
require(rbenchmark)
require(MARSS)
#these are internal functions to MARSS since, surprisingly, R doesn't have these
#in a 3D array say M[2,3,5] get the 2D matrix M[,,1] -> sub3D(M, t=1)
sub3D=MARSS:::sub3D
# turn a 2D matrix into a column vector
vec=MARSS:::vec
test = function(Z, A, dR, kf, Ey){
  sum1 = t.dR.dR = 0
  TT = dim(kf[["xtT"]])[2]
  t.dR.dR = t.dR.dR + crossprod(dR)
  for (i in 1:TT) {
    hatyt = Ey[["ytT"]][,i,drop=FALSE]
    hatyxt= sub3D(Ey[["yxtT"]],t=i)
    hatOt = sub3D(Ey[["OtT"]],t=i)
    hatPt = kf[["VtT"]][,,i]+tcrossprod(kf[["xtT"]][,i,drop=FALSE])
    hatxt = kf[["xtT"]][,i,drop=FALSE]
    sum1a = (hatOt - tcrossprod(hatyxt, Z) - tcrossprod(Z, hatyxt)- tcrossprod(hatyt, A) - tcrossprod(A, hatyt) + tcrossprod(Z%*%hatPt, Z) + tcrossprod(Z%*%hatxt, A) + tcrossprod(A, Z%*%hatxt) + tcrossprod(A)) #A%*%t.A
    #sum1a = symm(sum1a) #enforce symmetry function from MARSSkf
    sum1 = sum1 + crossprod(dR, vec(sum1a))
  }
  return(sum1)
}
#if this fun saved to file Rupdate.cpp, use sourceCpp("Rupdate.cpp")
sourceCpp(code='
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
using namespace arma;
// [[Rcpp::export]]
vec Rupdate(mat& Z, mat& A, Rcpp::NumericVector vecdR, Rcpp::List& kf, Rcpp::List& Ey) {
  Rcpp::NumericVector kfVtT = kf["VtT"], EyyxtT = Ey["yxtT"], EyOtT = Ey["OtT"];
  mat ytT = Ey["ytT"], xtT = kf["xtT"];
  vec dRDim = vecdR.attr("dim"), VtTDim = kfVtT.attr("dim"), yxtTDim = EyyxtT.attr("dim"), OtTDim = EyOtT.attr("dim");
  unsigned int TT = xtT.n_cols, m = xtT.n_rows, n = ytT.n_rows, p = dRDim(1);
  cube VtT(kfVtT.begin(), VtTDim(0), VtTDim(1), VtTDim(2), false);
  cube yxtT(EyyxtT.begin(), yxtTDim(0), yxtTDim(1), yxtTDim(2), false);
  cube OtT(EyOtT.begin(), OtTDim(0), OtTDim(1), OtTDim(2), false);
  cube cubedR(vecdR.begin(), dRDim(0), dRDim(1), dRDim(2), false);
  vec hatyt(n), hatxt(m), sum1=zeros(p);
  mat hatyxt(n,m), hatOt(n,n), hatPt(m,m), sum1a(n,n), dR=cubedR.slice(0);
  for (unsigned int i = 0; i<TT; i++) {
    if(dRDim(2)>1) dR=cubedR.slice(i);
    hatyt=ytT.col(i); hatxt=xtT.col(i);
    hatOt=OtT.slice(i); hatyxt=yxtT.slice(i);
    hatPt=VtT.slice(i) + hatxt * hatxt.t();
    sum1a = hatOt - hatyxt * Z.t() - Z * hatyxt.t()- hatyt * A.t() - A * hatyt.t() + Z * hatPt * Z.t() + (Z * hatxt) * A.t() + A * (hatxt.t() * Z.t()) + A*A.t();
    sum1 = sum1 + dR.t() * vectorise(sum1a);
  }
  return sum1;
}'
)
#test w 15 x t matrix of data
nr=15 #rows
for(t in c(20, 154)){
  dat = t(apply(matrix(rnorm(nr*t),nr,t),1,cumsum)) #create nr random walks
  fit = MARSS(dat, silent=TRUE)
  Ey = print(fit, what="Ey", silent=TRUE)
  kf = print(fit, what="kfs", silent=TRUE)
  model = coef(fit, type="matrix")
  dR=fit$model$free[["R"]]
  Z=model$Z
  A=model$A
  res <- benchmark(test(Z, A, dR, kf, Ey), Rupdate(Z, A, dR, kf, Ey), columns = c("test", "replications","elapsed", "relative"),order="relative",replications=1000)
  cat("test with ",nr,"x",t," matrix\n")
  print(res[,1:4])
  cat("are the results the same? ")
  cat(identical(unname(test(Z, A, dR, kf, Ey)), Rupdate(Z, A, dR, kf, Ey)))
  cat("\n")
}
</xmp>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-75737691176096177612013-12-10T22:58:00.002-08:002013-12-11T15:09:47.144-08:00Speed comparisons using native R and RcppArmadillo<div dir="ltr" style="text-align: left;" trbidi="on"><div>I spent some time today learning the <a href="http://cran.r-project.org/web/packages/RcppArmadillo/index.html">RcppArmadillo</a> package that allows you to run C++ code from the Armadillo linear algebra library. 
I developed my example (below) from the examples directory in the inst directory of RcppArmadillo, but I also learned a lot from other posts<br /><ul style="text-align: left;"><li><a href="http://markovjumps.blogspot.com/2011/12/r-array-to-rcpparmadillo-cube.html">http://markovjumps.blogspot.com/2011/12/r-array-to-rcpparmadillo-cube.html</a></li><li><a href="http://www.stat.ubc.ca/~andy.leung/doc/seminar/RcppDemo.pdf">http://www.stat.ubc.ca/~andy.leung/doc/seminar/RcppDemo.pdf</a></li><li><a href="http://people.math.aau.dk/~sorenh/teaching/2012-ASC/day5aux/Rcpp-workshop.pdf">http://people.math.aau.dk/~sorenh/teaching/2012-ASC/day5aux/Rcpp-workshop.pdf </a></li><li><a href="http://markovjumps.blogspot.com/2013/10/passing-armadillo-matrices-by-value-and.html">http://markovjumps.blogspot.com/2013/10/passing-armadillo-matrices-by-value-and.html</a></li><li><a href="http://adv-r.had.co.nz/Rcpp.html">http://adv-r.had.co.nz/Rcpp.html</a></li><li><a href="http://faculty.washington.edu/heagerty/Courses/b572/public/rccp_demo.R">http://faculty.washington.edu/heagerty/Courses/b572/public/rccp_demo.R</a></li></ul></div>and the <a href="http://arma.sourceforge.net/docs.html#Col">actual documentation for Armadillo</a>.&nbsp; And I spent some time on <a href="http://www.cplusplus.com/doc/tutorial/variables/">this C++ tutorial</a> since I don't actually know C++ (though eons ago I programmed in Fortran and C, and C++ reminds me why I like matlab and R better).&nbsp; I downloaded the source files for RcppArmadillo from CRAN and extracted the tar file to get example files for "kalman" which I edited for my purposes.<br /><br />The following code compares this computation in native R versus Armadillo C++<br /><br />Y=0<br />for(i in 1:nrow(A)) Y = Y + (A%*%B) %*% t(A[i,]) + B %*% t(A[i,])<br /><br />Here's the benchmark comparison and you can see that Armadillo C++ is considerably faster.&nbsp; Note, I had to futz a bit to find a computation where C++ was much faster, but this particular computation is 
very similar to one I make in the EM step for the MARSS package.<br /><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0"> test replications elapsed relative<br />2 crossprodCpp(A, B) 10 0.14 1.000<br />1 crosstest(A, B) 10 9.50 67.857</pre><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">&nbsp;</pre>Here is the code to run this particular example. Just install the RcppArmadillo and rbenchmark R packages and you should be good to go. You don't need to install Armadillo. Just source the code below. 
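As an aside (not part of the original benchmark), this particular loop can also be eliminated algebraically when A and B are fixed, since the column vectors t(A[i,]) summed over i are just colSums(A). A sketch with random square matrices:

```r
# The benchmarked loop Y = sum_i (A %*% B) %*% t(A[i,]) + B %*% t(A[i,])
# collapses to one matrix product because sum_i t(A[i,]) = colSums(A).
set.seed(42)
A <- matrix(rnorm(25), 5, 5)
B <- matrix(rnorm(25), 5, 5)
loopY <- 0
for (i in 1:nrow(A)) {
  a <- A[i, , drop = FALSE]
  loopY <- loopY + tcrossprod(A %*% B, a) + tcrossprod(B, a)
}
vecY <- (A %*% B + B) %*% matrix(colSums(A))  # no loop at all
stopifnot(all.equal(loopY, vecY))
```

This only works because A and B are constant across iterations; in the actual MARSS EM update the matrices change with t, which is why the loop (and the C++ version of it) is still needed there.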
Unfortunately, MARSS stores things in arrays and it turns out that passing these to RcppArmadillo is tedious and my initial speed tests were not promising, but that's another post.<br /><br />Update: Turns out I picked a problem where Armadillo excels and used tcrossprod where it is not so efficient. If I'd used<br />&nbsp;&nbsp;&nbsp; Y = Y + A%*%(B%*%t(a))+B%*%t(a)<br />Rcpp is only 4x faster.&nbsp; If I'd used<br />&nbsp;&nbsp;&nbsp; Y&nbsp; = Y + tcrossprod(A%*%B,B) + tcrossprod(B,B)<br />or<br />&nbsp;&nbsp;&nbsp; Y&nbsp; = Y +A%*%B%*%B + B%*%B<br />Rcpp is no faster.&nbsp; <br /><br />CODE ---------------------------------------------------- <br />src='<br />// [[Rcpp::depends(RcppArmadillo)]]<br /><br />#include &lt;RcppArmadillo.h&gt;<br /><br />using namespace arma;<br /><br />// [[Rcpp::export]]<br />mat crossprodCpp(mat A, mat B) {<br />&nbsp; unsigned int n = A.n_cols;<br />&nbsp; colvec a;<br />&nbsp; mat Y = zeros(n, 1);&nbsp; <br />&nbsp; for (unsigned int i = 0; i&lt;n; i++) {<br />&nbsp; a = A.row(i).t();<br />&nbsp; Y = Y + (A*B) * a + B * a;&nbsp; <br />&nbsp; }<br />&nbsp; return Y;<br />}'<br /><br />require(utils)<br />require(RcppArmadillo)<br />require(rbenchmark)<br /><br />sourceCpp(code=src)<br /><br />crosstest = function(A,B){<br />&nbsp; Y = 0<br />&nbsp; for(i in 1:dim(A)[1]){<br />&nbsp;&nbsp;&nbsp; a=A[i,,drop=FALSE]<br />&nbsp;&nbsp;&nbsp; Y = Y + tcrossprod(A%*%B,a)+tcrossprod(B,a)<br />&nbsp; }<br />&nbsp; return(Y)<br />}<br /><br /><br />A=diag(200); B=diag(200)<br /><br />res &lt;- benchmark(crosstest(A,B), crossprodCpp(A,B),<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; columns = c("test", "replications",<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "elapsed", "relative"),<br
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; order="relative",<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; replications=10)<br /><br />print(res[,1:4])</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-53040520071557580642013-11-26T14:48:00.001-08:002013-11-26T14:48:14.785-08:00Speeding up R (MARSS specific)<div dir="ltr" style="text-align: left;" trbidi="on">Some notes on MARSS 3.6 update.<br /><br />MARSS is generally slow since it is in native R but is slower than needed since it is doing a lot of matrix manipulations.&nbsp; I got a 10-20x increase in speed for large matrix (10,000 rows, 100s cols) problems (which arise when say n=120 and 120 r's are being estimated) by the following, in order of how much it helped<br /><br /><ul style="text-align: left;"><li>Replace all instances of t(A)%*%B and A%*%t(B) with crossprod(A,B) and tcrossprod(A,B), respectively.</li><li>vectorize the one case where I was using for(i in 1:nrow){ for(j in 1:ncol) {} } to do something to each element of a matrix, element by element.&nbsp; This only was done once, but ground the code to a halt for big matrices.</li><li>Used R profiling to find that a slow diagonal matrix test was slowing my Kalman filter function.&nbsp; Replaced that with a fast test.</li><li>Found all cases where I was subsetting arrays, like A[,,i], and replaced with code like this if(dim(A)[3]==1) dim(A)=dim(A)[1:2].&nbsp; This torched the dimnames, so I needed to be careful to reset those if needed.</li><li>Made sure I was not recreating matrices unnecessarily.&nbsp; The diagonal matrices created for degenerate R and Q were getting created over and over.&nbsp; Made a flag so that they are created only once and only updated if new 0s appear.</li></ul>Things I tried that didn't help<br /><ul style="text-align: left;"><li>using the Matrix package and sparse matrices, but that only helped when n 
was really big and hurt when n was small. </li><li>Vectorizing the for loop over time using block diagonal matrices and the Matrix package.&nbsp; See the previous blog post on that test.&nbsp; I was really bummed that it didn't speed things up dramatically.</li></ul>To do:<br /><ul style="text-align: left;"><li>Bust out the degen code into a bit wrapped in if(allow.degen==TRUE).</li><li>Clean up the set-up and testing code for marssMODELs.</li><li>Maybe: if there are no zeros on the diagonal of A, then the solve(A)%*%b call can be sped up with solve(A,b).&nbsp; If I isolate the degen code, then I could use that.&nbsp; Right now I need to do a robust inverse that deals with 0s on the diagonal and structures that prevent solve() from working.&nbsp; I bet I cannot use this though, because the problem is not just 0s on the diagonal.</li></ul>References<br />Faster R notes:&nbsp; <a href="http://pj.freefaculty.org/blog/?p=122">http://pj.freefaculty.org/blog/?p=122 </a><br /><br />Some speed testing and profiling code<br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">require(MARSS)<br />plankdat=lakeWAplanktonTrans<br />plankdat=plankdat[plankdat[,"Year"]&gt;=1980 &amp; plankdat[,"Year"]&lt;1990,]<br /># create vector of phytoplankton group names<br />phytoplankton = c("Cryptomonas", "Diatoms", "Greens",<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "Bluegreens", "Unicells", "Other.algae")<br /># get only the phytoplankton<br />dat.spp.1980 = as.matrix(plankdat[,phytoplankton])<br />dat.spp.1980 = t(dat.spp.1980)<br />&nbsp;</span></span><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">cntl.list = list(maxit=50)<br /># pick one R structure; the second assignment overwrites the first<br />model.list = list(m=2, R="diagonal and equal")<br />model.list = list(m=2, R="diagonal and unequal")</span></span><br /><br /><span style="font-size: x-small;"><span style="font-family:
&quot;Courier New&quot;,Courier,monospace;">#quick speed testing using tic/toc from the matlab package<br />require(matlab)<br />n=10<br />a=c()<br />for(i in 1:n) a=rbind(a,dat.spp.1980)<br />a=a+matrix(rnorm(length(a),0,.1),nrow(a),ncol(a))<br />R=matrix(list(0),nrow(a),nrow(a))<br />diag(R)=as.character(1:nrow(a))<br />model.list = list(m=2, R=R)<br />tic()<br />kemz.2 = MARSS(a, model=model.list, z.score=TRUE, form="dfa", control=cntl.list)<br />toc()</span></span><br /><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#R profiling<br />Rprof(tmp&lt;-tempfile())<br />kemz.2 = MARSS(dat.spp.1980, model=model.list, z.score=TRUE, form="dfa", control=list(maxit=50))<br />Rprof()<br />summaryRprof(tmp)</span></span><br /><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#system.time</span></span><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">system.time(MARSS(dat.spp.1980, model=model.list, z.score=TRUE, form="dfa", control=list(maxit=50)))</span></span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-76916644847007369572013-11-25T14:38:00.000-08:002014-01-14T10:08:22.512-08:00Speed test using Matrix to do block summation<div dir="ltr" style="text-align: left;" trbidi="on">I've spent the last week <a href="http://parsimoniouspursuits.blogspot.com/2013/11/speeding-up-r-marss-specific.html">speeding up the MARSS package by getting rid of some expensive matrix manipulations</a>.&nbsp; It turns out that subscripting a large matrix and taking the transpose is really slow in R.&nbsp; Some of my design matrices have 100s of thousands of rows, so that was getting slow.<br /><br />In the process, I've been thinking about how to speed up the EM algorithm by getting rid of the "for" loops over time.&nbsp;&nbsp; One idea is to use the Matrix package and use block diagonal matrices instead of arrays to
hold the time-varying matrices coming out of the Kalman filter.&nbsp; Preliminary speed tests were not promising, however.&nbsp; It was slow to use matrix multiplication to do a simultaneous summation.&nbsp; Speed tests are below.&nbsp; Doing the summation with matrix multiplication took consistently about twice the time.<br /><br />The idea is to replace the for loop over the 3rd dim (time) of an array:<br />for(i in 1:TT) sum1=sum1+a[,,i]%*%b[,,i]<br /><br />with this<br />sum1 = II.row %*% a.blockdiag %*% b.blockdiag %*% II.col<br />where a.blockdiag is a block diagonal matrix with each a[,,i] a block down the diagonal, b.blockdiag is similar, II.row is a row of TT identity matrices and II.col is a col of TT identity matrices.<br /><br />Here's some speed test code<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">require(Matrix)<br />TT=100; n=20<br />a=array(1:(n*n),dim=c(n,n,TT))<br />xtT=matrix(1:n,n,TT)</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br /></span><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#set up the II.row and II.col</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">II.row = II.col = Diagonal(n)<br />for(i in 1:(TT-1)) II.row = cBind(II.row,Diagonal(n))<br />for(i in 1:(TT-1)) II.col = rBind(II.col,Diagonal(n))<br />I.row=Matrix(1,1,TT)<br />I.col=Matrix(1,TT,1)</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br />#set up the block diag matrices; Matrix wants a list of matrices for bdiag()</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">b=list()<br />for(i in 1:TT) b[[i]]=a[,,i]</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">d=bdiag(b)<br />for(i in 1:TT) b[[i]]=xtT[,i,drop=FALSE]<br />e=bdiag(b)</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br />#speed test&nbsp;</span><br /><span style="font-family:
&quot;Courier New&quot;,Courier,monospace;">require(matlab)<br />tic()</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">for(j in 1:100) <br />sum2=II.row%*%d%*%e%*%I.col<br />toc()<br /><br />tic()</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">for(j in 1:100){<br />sum1=0; for(i in 1:TT) sum1=sum1+a[,,i]%*%xtT[,i]</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">}<br />toc()<br /><br />tic()<br />for(j in 1:100){<br />sum2=I.row%*%crossprod(e,d)%*%II.col<br />}<br />toc()<br /><br />tic()<br />for(j in 1:100){<br />sum1=0; for(i in 1:TT) sum1=sum1+crossprod(xtT[,i,drop=FALSE],a[,,i])<br />}<br />toc()</span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-91005622323493247762013-11-05T17:45:00.002-08:002013-11-05T17:45:52.697-08:00Fitting big state-space models with glmnet?Brian Dennis once showed me an algorithm for fitting state-space models using a big matrix of all the data. I viewed the approach as unworkable for anything but small data sets. Maybe glmnet could be used? 
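The big-matrix idea can be sketched for the simplest case. This is my own reconstruction for a local-level model, not the algorithm Brian Dennis showed me: treat the states as regression coefficients, stack the observation equations on top of the first-difference process equations, and recover all the smoothed states in a single least-squares solve. The variance ratio lam is an assumed tuning value, and swapping qr.solve() for a penalized solver like glmnet would add shrinkage on the state increments.

```r
# Sketch of the "one big matrix" idea for y_t = x_t + v_t, x_t = x_{t-1} + w_t.
# The states x_1..x_TT are the regression coefficients; the process equation
# enters as (TT-1) pseudo-observations of 0 on the first differences.
set.seed(1)
TT <- 200
x <- cumsum(c(0, rnorm(TT - 1, 0, 0.1)))   # true random-walk states
y <- x + rnorm(TT, 0, 0.5)                 # observations
lam <- 25                                  # assumed ratio var(v)/var(w)
D <- diff(diag(TT))                        # (TT-1) x TT first-difference matrix
Z <- rbind(diag(TT), sqrt(lam) * D)        # stacked design matrix
xhat <- qr.solve(Z, c(y, rep(0, TT - 1)))  # all smoothed states in one solve
```

This gets unwieldy fast (the stacked design has TT columns), which is presumably why the approach felt unworkable for big data sets; a sparse-matrix or penalized-regression solver is the obvious thing to try.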
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-5013346967739121252013-10-11T14:01:00.003-07:002014-01-14T10:10:53.727-08:00Dixon and Coles tests repeated with glmnet(..., lambda=0)<div dir="ltr" style="text-align: left;" trbidi="on">I repeated the tests from <a href="http://parsimoniouspursuits.blogspot.com/2013/10/dixon-and-coles-2-player-model-with.html">my previous post</a> using glmnet(..., lambda=0).&nbsp; Given how well this worked with real soccer data, I was surprised that it would not work with 'better' simulated data.&nbsp; Maybe I do need to constrain the model?<br /><br /><i>Update: I figured out that this is a problem with family="poisson" when mu is pathologically huge (like 1e365).&nbsp; Works fine on more realistic mu.&nbsp; I tested on various soccer fits with speedglm() versus glmnet() and they gave the same estimates.&nbsp; I recoded rank.teams() in fbRanks to allow use of glmnet.&nbsp;</i> <br /><br />In all tests, the network of games played (who plays who) is real.&nbsp; See previous post for a description of the tests.<br />Test 1: attack and defense strength for teams are drawn i.i.d.&nbsp; NON-CONVERGENCE<br />Test 2: attack and defense strength drawn with same mean and variance but age groups do not have different means.&nbsp; NON-CONVERGENCE<br />Test 3: attack and defense strengths set from estimated values from speedglm using real data.&nbsp; glmnet works well and seems a little better than speedglm.<br /><br />What is up with test 1 and test 2?&nbsp;&nbsp; Works fine if I don't pass in lambda=0, while test 3 works fine if I do.&nbsp;&nbsp; Maybe too many 0s if I draw strengths randomly?&nbsp; Variance within age groups is maybe less than 1 or 2 (real data)?<br /><br /><br /><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-35972860164588445252013-10-11T14:00:00.000-07:002013-10-17T12:42:58.784-07:00Follow-up on the glmnet problems I had for fitting poisson models<div dir="ltr" 
style="text-align: left;" trbidi="on"><span style="font-family: inherit;">After emailing the maintainer of glmnet, I figured out what was going on with glmnet.&nbsp; It is not a bug with intercept estimation as I had thought, but rather that when mu (that generated the poisson counts) gets very large, as in 1e+13 large, the glmnet algorithm likelihood surface (or whatever surface it is maximizing) gets very, very, very flat.&nbsp; So it shows convergence at the default thresh of 1e-7 long before it is near the correct maximum. &nbsp;&nbsp;</span><br /><br /><span style="font-family: inherit;">Here is simulated data:</span><br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">N=1000; p=50<br />nzc=p<br />x=matrix(rnorm(N*p),N,p)<br />beta=rnorm(nzc)<br />f = x[,seq(nzc)]%*%beta<br />mu=exp(f)<br />y=rpois(N,mu)</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br /></span><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#intercept should be 0; it's not anywhere close</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit=glmnet(x,y,family="poisson")<br />coef(fit)[1,]<br />fit2=glm(y~x,family="poisson")<br />coef(fit2)</span><br /><br /><span style="font-family: inherit;">So there are 2 problems</span><br /><span style="font-family: inherit;">1) I should have used exact=TRUE in my coef() call</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">cor(coef(fit2),as.numeric(coef(fit,s=0)))&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"> 0.4459366&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">cor(coef(fit2),as.numeric(coef(fit,s=0,exact=TRUE)))&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"> 0.8289489&nbsp;</span><br />2) I should have set thresh much lower<br /><span style="font-family: &quot;Courier 
New&quot;,Courier,monospace;">fit=glmnet(x,y,family="poisson",thresh=1e-10)&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">cor(coef(fit2),as.numeric(coef(fit,s=0,exact=TRUE)))&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"> 0.9995521 </span><span style="font-family: inherit;">&nbsp;</span><br /><br /><span style="font-family: inherit;">Nonetheless, this still doesn't work</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit=glmnet(x,y,family="poisson",thresh=1e-10,intercept=FALSE)&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">Warning message: from glmnet Fortran code (error code -2); Convergence for 2th lambda value not reached after maxit=100000 iterations; solutions for larger lambdas returned&nbsp;</span><br /><br /><span style="font-family: inherit;">Maybe it would converge if I set maxit much higher, but glm doesn't have trouble:</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit2=glm(y~-1+x,family="poisson")</span></span><br /><br /><span style="font-family: inherit;">At this point, I decided to upgrade R from 2.15.3 to 3.0.2.&nbsp; After the upgrade, my glmnet() call threw a y&lt;0 error about a quarter of the time even though y was never &lt; 0.&nbsp; Here's the email I sent to the maintainer that sorted out all the oddness:</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br /></span><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">-------------------------------</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br /></span><br /><div>Hi,<br /><br />Thanks for the response and edited code.
&nbsp;I had tried standardize=FALSE and setting thresh=1e-10 before writing, but I had still been getting the result so thought that the slow convergence was being caused by an intercept estimation issue.&nbsp; Here's an example with some sample output that shows what I was seeing every so often:<br /><div class="im"><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">N=1000; p=50; nzc=p<br />x=matrix(rnorm(N*p),N,p)<br />beta=rnorm(nzc)<br />f = x[,seq(nzc)]%*%beta &nbsp;#intercept is 0<br />mu=exp(f)<br />y=rpois(N,mu)</span></div><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit=glmnet(x,y,family="</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">poisson",standardize=FALSE,</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">thresh=1e-10,maxit=1e7)<br />coef(fit,s=0,exact=TRUE) &nbsp;<b>#big intercept</b><br /> 8.289213<br />cor(coef(fit2),as.numeric(</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">coef(fit,s=0,exact=TRUE))) <b>#low correlation with glm()</b><br /> 0.4740912</span><br /><br />The reason I thought it was an intercept issue was that I had gotten this error for what seemed an easy problem.<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit=glmnet(x,y,family="</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">poisson",<b>intercept=FALSE</b>)<br />Warning message:<br />from glmnet Fortran code (error code -2); Convergence for 2th lambda value not reached after maxit=100000 iterations; solutions for larger lambdas returned <br /></span><br />while this had no trouble<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit2=glm(y~-1+x,family="</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">poisson")</span><br />I thought it odd that glmnet() had trouble on 
what seemed a relatively simple problem; I've been using glmnet for a while and had never seen it fail to converge on a model that glm solved easily.&nbsp; I thought it had something to do with a problem with setting intercept=FALSE, and that led to the query and code I originally sent you.<br /><br />However, after writing you I updated from R 2.15.3 to R 3.0.2.&nbsp; And then glmnet started returning an error that y&lt;0 a quarter of the time with my sample code.&nbsp; I tracked this down to a change in the rpois() behavior in 3.0.2:<br /><br />R 2.15.2 and 2.15.3<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">rpois(1,1e10)</span></div><div><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">&nbsp; 10000025096 (e.g.)</span></div><div><br />R 3.0.2<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">rpois(1,1e10)<br /> NA</span><br /><br />When those NA appeared, glmnet reported some y&lt;0 and thus returned an error.&nbsp;&nbsp; At that point, I realized that the example code was producing mu's that were exceedingly large. &nbsp;And when mu was very large, that's when I was seeing the low correlation.
&nbsp;For example, for the example shown above, max(mu) was 4.831217e+12.&nbsp; After reading your email, I see that even though thresh=1e-10, with mu this big the glmnet() algorithm was not near the maximum and was approaching the maximum very, very slowly, so thresh would need to be even smaller than 1e-10.</div><div>But if I change the code so that mu is more reasonable, then glmnet() has no convergence issues:</div><div>#draw x from normal with smaller variance</div><div><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">N=1000; p=50; nzc=p<br />x=matrix(rnorm(N*p,0,<span style="background-color: red;"><b>0.1</b></span>),N,p)</span><br /><div class="im"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><br />beta=rnorm(nzc)<br />f = x[,seq(nzc)]%*%beta &nbsp;#intercept is 0<br />mu=exp(f)<br />y=rpois(N,mu)</span></div><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit=glmnet(x,y,family="</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">poisson",standardize=FALSE)<br />fit2=glm(y~x,family="poisson")<br />cor(coef(fit2),as.numeric(</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">coef(fit,s=0,exact=TRUE)))<br /> 1</span><br />Now this works fine too<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit=glmnet(x,y,family="</span><wbr></wbr><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">poisson",intercept=FALSE)</span></div><div>So, no bug, just interesting (different) behavior for glm vs glmnet for really large mu.</div><div>Regards,<br /><br />Eli<br /><br />-----------------------------------------------------------<br /><br />And now this code works fine<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">N=1000; p=50<br />nzc=p<br />x=matrix(rnorm(N*p,0,0.1),N,p)<br />beta=rnorm(nzc)<br />f = x[,seq(nzc)]%*%beta&nbsp; #intercept is 0<br
/>mu=exp(f)<br />y=rpois(N,mu)<br />fit=glmnet(x,y,family="poisson",intercept=FALSE,lambda=0)<br />fit2=glm(y~-1+x,family="poisson")<br />cor(coef(fit2),as.numeric(coef(fit)[-1]))<br /> 1</span></div><br /><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-81710344791170868552013-10-11T13:53:00.000-07:002013-10-17T13:07:50.469-07:00glm, speedglm, glmnet comparison (part 1 repeated with lambda=0)<div dir="ltr" style="text-align: left;" trbidi="on">glmnet fits an unpenalized model when you set lambda=0, so it should return the same values as glm.&nbsp; I repeated the part 1 tests with lambda=0.&nbsp; Works fine for family="gaussian".&nbsp; Crashes for family="poisson".<br /><br /><span style="color: blue;">Update: what's going on is that I inadvertently created mu's for the poisson that are huge.&nbsp; max(y) = 1e8 to 1e10.&nbsp; When facs = 20 there are enough draws that every so often sum(betas) &gt; 17 or so.&nbsp; And then exp(sum(betas)) is huge.&nbsp; That is where the problem is.&nbsp; When I used more, smaller betas, so that sum(betas) was never greater than 10 (say), the convergence problem disappeared.</span><br /><br />The test is a model with facs factors and levs levels.&nbsp; betas drawn from normal(0,1).
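To make the update above concrete (illustrative numbers of mine, not output from the timing runs): with a log link the Poisson mean is exp() of the linear predictor, so a linear predictor in the high teens already gives means in the tens of millions, and by sum(betas) = 40 the mean is far too large for rpois() to return an integer count.

```r
# With a log link, mu = exp(eta); a few illustrative etas (assumed values):
exp(10)   # ~ 2.2e4: fine
exp(17)   # ~ 2.4e7: large but still representable as a count
exp(20)   # ~ 4.9e8: pushing the limit
# rpois() returns integer counts, so a mean above ~2^31 cannot be drawn:
suppressWarnings(rpois(1, exp(40)))   # NA in R >= 3.0.0
```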
&nbsp; I took care in this test to set the first level of each factor to 0, same as glm does, and estimated the intercept.<br /><br />with family gaussian all is fine<br /><br /><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0"><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"> facs glm speedglm glmnet<br />a 5 0.09 0.03 0.02<br />a 10 0.20 0.06 0.05<br />a 20 0.67 0.22 0.15<br />a 50 3.67 1.27 0.09<br />a 100 14.24 1.67 0.17</span></pre><span 
class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span> <br />with family poisson it doesn't work<br /><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0"><span class="GNVMTOMCLAB ace_constant" style="color: #c5060b;">Warning messages:<br /></span><span class="GNVMTOMCLAB ace_constant" style="color: #c5060b;">1: from glmnet Fortran code (error code -1); Convergence for 1th lambda value not reached after maxit=100000 iterations; solutions for larger lambdas 
returned <br /></span><span class="GNVMTOMCLAB ace_constant" style="color: #c5060b;">2: In getcoef(fit, nvars, nx, vnames) :<br /> an empty model has been returned; probably a convergence issue</span></pre><br /><br />timings<br /><span class="Apple-style-span" style="-webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; background-color: #e1e2e5; border-collapse: separate; color: black; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-align: -webkit-left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0"> glm speedglm glmnet<br />a 5 0.22 0.11 0.05<br />a 10 0.44 0.22 0.52<br />a 20 1.33 0.75 86.09</pre><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">&nbsp;</pre><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; 
border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">&nbsp;</pre><br />R code<br />library(glmnet)<br />library(speedglm)<br /><br />n = 10000<br />levs = 10<br /><br />res=obj=c()<br />for(facs in c(5,10,20)){<br />&nbsp; beta=matrix(rnorm(levs*facs,0,1),levs,facs)<br />&nbsp; <br />&nbsp; levx = c()<br />&nbsp; for(i in 1:facs) levx=cbind(levx,sample(levs,n,replace=TRUE))<br />&nbsp; <br />&nbsp; y &lt;- apply(levx, 1, function(x){ rpois(1,exp(sum(beta[x+seq(0,levs*(facs-1),by=levs)]))) })<br />&nbsp; x = data.frame(levx)<br />&nbsp; for(i in 1:facs) x[,i]=factor(x[,i],levels=1:levs)<br />&nbsp; dat = cbind(y=y,x)<br />&nbsp; <br />&nbsp; cat(facs);cat("\n")<br />&nbsp; <br />&nbsp; #set up the formula<br />&nbsp; fooform = "y~1"<br />&nbsp; for(i in 1:facs) fooform=paste(fooform,"+X",i,sep="")<br />&nbsp; <br />&nbsp; #fit glm<br />&nbsp; #don't do glm if facs&gt;500; too slow<br />&nbsp; if(facs&lt;500){<br />&nbsp;&nbsp;&nbsp; a=c(facs, system.time(fit&lt;-glm(formula(fooform), data=dat, family="poisson")))<br />&nbsp;&nbsp;&nbsp; b=c(facs, object.size(fit))<br />&nbsp; }else{<br />&nbsp;&nbsp;&nbsp; a=c(facs, NA)<br />&nbsp;&nbsp;&nbsp; b=c(facs, NA)<br />&nbsp; }<br />&nbsp; <br />&nbsp; #fit speedglm<br />&nbsp; a=c(a,system.time(fit2&lt;-speedglm(as.formula(fooform), data=dat, family=poisson(log))))<br />&nbsp; b=c(b, object.size(fit2))<br />&nbsp; <br />&nbsp; #fit glmnet<br />&nbsp; sx.j = as.vector(levx+matrix(seq(0,levs*(facs-1),by=levs),n,facs,byrow=TRUE))<br />&nbsp; sx.i = rep(1:n,facs)<br />&nbsp; #remove the 1st level of each factor; do sx.i first because test depends on sx.j<br />&nbsp; sx.i = sx.i[!(sx.j %in% seq(1,levs*facs,by=levs))]<br
/>&nbsp; sx.j = sx.j[!(sx.j %in% seq(1,levs*facs,by=levs))]<br />&nbsp; sx=sparseMatrix(sx.i,sx.j,<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; x=1,dims=c(n,levs*facs))<br />&nbsp; a=c(a, system.time(fit&lt;-glmnet(sx, y, lambda=0, family="poisson")))<br />&nbsp; b=c(b, object.size(fit))<br />&nbsp; <br />&nbsp; res=rbind(res,a)<br />&nbsp; obj=rbind(obj,b)<br />}<br /><br />#plot 1<br />plot(res[,1],res[,2],type="l", ylab="seconds", xlab="number of explanatory variables",ylim=c(0,500))<br />lines(res[,1],res[,3],col="red",lty=2)<br />lines(res[,1],res[,4],col="blue",lty=3)<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br /><br />#plot 2<br />plot(obj[,1],log(obj[,2]),type="l", ylab="object size (log(M))", xlab="number of explanatory variables",ylim=c(10,23))<br />lines(obj[,1],log(obj[,3]),col="red",lty=2)<br />lines(obj[,1],log(obj[,4]),col="blue",lty=3)<br />abline(h=log(8042*1e6))<br />abline(h=log(2000*1e6))<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-85837390179931466722013-10-10T12:35:00.001-07:002013-10-10T18:09:26.906-07:00Dixon and Coles 2-player model with glmnet: Tests with simulated real data<div dir="ltr" style="text-align: left;" trbidi="on"><i>Update: all these tests are using the default lambda for glmnet.&nbsp; Since posting this I discovered that passing in lambda=0 to glmnet forces it to return the equivalent glm estimates.&nbsp; speedglm and glmnet estimates are much more similar and glmnet seems more robust with lambda=0.&nbsp; Next post repeats the analyses here but with lambda=0 passed into glmnet.</i><br /><br />Now I will take real soccer match data, with all its non-random mixing, but substitute randomly generated scores from teams with known attack and defend strengths.&nbsp; I do a series of tests to try replicate the 
problem shown in&nbsp; <a href="http://parsimoniouspursuits.blogspot.com/2013/10/dixon-and-coles-2-player-model-fit-with.html">Dixon and Coles model fit with glmnet: Test 1 with real data.</a>&nbsp; <b>Test 1</b> is with i.i.d. attack strengths and didn't reproduce the problem.&nbsp; <b>Test 2</b> is a little more realistic and has correlated attack and defend strengths (teams with strong attack tend to have strong defense), mimicking the real data.&nbsp; This didn't reproduce the problem either. <b>Test 3</b> uses the attack and defense strengths that came out of speedglm used on the real data.&nbsp; These have the property that different age groups have different strengths.&nbsp; This finally replicates the problem and shows that it is glmnet that is producing the bad estimates (it cannot recover the values used to produce the simulated data).&nbsp; The full R code to run all the tests is at the bottom.&nbsp; <i>Note all these results are using glmnet coefficients at lambda = max(lambda).</i><br /><br /><b>Test 1: Attack and defend strengths are i.i.d. normal (the bold bit)</b><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">sim_data=rank_data<br />sim_data$scores = rank_data$scores[!(is.na(rank_data$scores$home.score) &amp; is.na(rank_data$scores$away.score)),]<br />teams = unique(c(as.character(sim_data$scores$home.team),as.character(sim_data$scores$away.team)))<br />nteams = length(teams)<br /><b>sim.attack = rnorm(nteams)<br />sim.defend = rnorm(nteams)-.4</b><br />sim.scores = sim_data$scores<br />ngames = nrow(sim.scores)<br />for(i in 1:ngames){<br />&nbsp; sim.scores$home.score[i] = rpois(1,exp(sim.attack[which(teams==sim.scores$home.team[i])] - sim.defend[which(teams==sim.scores$away.team[i])]))<br />&nbsp; sim.scores$away.score[i] = rpois(1,exp(sim.attack[which(teams==sim.scores$away.team[i])] - sim.defend[which(teams==sim.scores$home.team[i])]))<br />}<br />sim_data$scores=sim.scores</span><br /><br />Make sure the sim and real data look
remotely similar (the 0.4 subtracted from sim.defend was to get the mean goals scored and the # of 0s similar).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-0p_Omw2jrtc/UlbcPD6ylpI/AAAAAAAATxg/lF5Sh0ZrVWU/s1600/glm-comp-dixon-cole-2b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="264" src="http://1.bp.blogspot.com/-0p_Omw2jrtc/UlbcPD6ylpI/AAAAAAAATxg/lF5Sh0ZrVWU/s320/glm-comp-dixon-cole-2b.png" width="320" /></a></div><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">par(mfrow=c(1,2))<br />hist(sim.scores$home.score,breaks=0:500,xlim=c(0,20),ylim=c(0,10000),main="simulated",xlab="home score")<br />hist(rank_data$scores$home.score,breaks=0:500,xlim=c(0,20),ylim=c(0,10000),main="real",xlab="home score")</span></span><br /><br />Not horrible.&nbsp; I'll go with that.<br /><br />Fit the simulated data with speedglm and glmnet.<br />glmnet is run with default settings (except family="poisson") <br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#using fbRanks 2.0</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">age=c("B01","B00","B99","B98","B97","B96")<br />fbRanks.sim=rank.teams(scores=sim_data$scores, teams=sim_data$teams, age=age, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="glmnet")<br />p.sim=print(fbRanks.sim,silent=TRUE)$ranks[]<br />fbRanks.sim.spdglm=rank.teams(scores=sim_data$scores, teams=sim_data$teams, age=age, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="speedglm")<br />p.spd=print(fbRanks.sim.spdglm,silent=TRUE)$ranks[]</span><br /><br /><br />Plot estimates against the true values<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">par(mfrow=c(1,2))<br
/>plot(log(p.sim$attack),sim.attack[match(p.sim$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main="glmnet",xlab="estimated attack",ylab="true attack")<br />plot(log(p.spd$attack),sim.attack[match(p.spd$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main="speedglm",xlab="estimated attack",ylab="true attack")</span><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Nb85-1magWM/UlbhpSlY59I/AAAAAAAATxw/I5YQa4zn-Xk/s1600/glm-comp-dixon-cole-2c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="212" src="http://1.bp.blogspot.com/-Nb85-1magWM/UlbhpSlY59I/AAAAAAAATxw/I5YQa4zn-Xk/s320/glm-comp-dixon-cole-2c.png" width="320" /></a></div><br />Compare to each other:<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#compare to each other<br />par(mfrow=c(1,1))<br />teamsb=union(as.character(p.sim$team),as.character(p.spd$team))<br />plot(p.spd$total[match(teamsb,p.spd$team)],p.sim$total[match(teamsb,p.sim$team)])<br />abline(0,1)</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-E5XjtPBcXZI/UlbjXjxCULI/AAAAAAAATx8/WMbR4WveDV4/s1600/glm-comp-dixon-cole-2d.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="212" src="http://1.bp.blogspot.com/-E5XjtPBcXZI/UlbjXjxCULI/AAAAAAAATx8/WMbR4WveDV4/s320/glm-comp-dixon-cole-2d.png" width="320" /></a></div>That looks pretty good, except for the points about -2 below the 1-1 line.<br /><br /><b>Test 2: attack and defense strengths are correlated (the bold bit)</b><br />For this test, I am going to generate attack and defend strengths with the same mean and var-cov matrix as in the real data.&nbsp; The previous post&nbsp; <a href="http://parsimoniouspursuits.blogspot.com/2013/10/dixon-and-coles-2-player-model-fit-with.html">HERE</a> got estimates of attack and defend strength from the real data.&nbsp; I use those estimates 
(called <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">p.b</span>) here.<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"></span><br />Replace the bolded bit in the Test 1 code with this<span style="font-family: &quot;Courier New&quot;,Courier,monospace;">&nbsp; mu=apply(cbind(log(p.b$attack),-1*log(p.b$defense)),2,mean,na.rm=TRUE)<br />&nbsp; Sigma = cov(cbind(log(p.b$attack),-1*log(p.b$defense)),use="na.or.complete")<br />&nbsp; tmp=mvrnorm(nteams,mu=mu,Sigma=Sigma)<br />&nbsp; sim.attack = tmp[,1]<br />&nbsp; sim.defend = tmp[,2]</span><br /><br />Now my attack and defend strengths have the same structure as the estimates from speedglm from the real data (the <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">p.b$attack</span> and <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">p.b$defense</span>).<br /><br />Plot estimates against true<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">par(mfrow=c(1,2))<br />mse.sim = mean((log(p.sim$attack)-sim.attack[match(p.sim$team,teams)])^2)<br />plot(log(p.sim$attack),sim.attack[match(p.sim$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main=paste("glmnet\nmse =",mse.sim),xlab="estimated attack",ylab="true attack")<br />mse.spd = mean((log(p.spd$attack[p.spd$attack!=0])-sim.attack[match(p.spd$team[p.spd$attack!=0],teams)])^2,na.rm=TRUE)<br />plot(log(p.spd$attack),sim.attack[match(p.spd$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main=paste("speedglm\nmse =",mse.spd),xlab="estimated attack",ylab="true attack")</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"></span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-eQflzgN6qMs/Ulbz1CV3FbI/AAAAAAAATyM/5EeZn8l0QTc/s1600/glm-comp-dixon-cole-3a.png" imageanchor="1" style="margin-left: 1em; 
margin-right: 1em;"><img border="0" height="306" src="http://4.bp.blogspot.com/-eQflzgN6qMs/Ulbz1CV3FbI/AAAAAAAATyM/5EeZn8l0QTc/s400/glm-comp-dixon-cole-3a.png" width="400" /></a></div><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">That looks ok.</span></span><br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">Look at total strength, which combines the attack and defense strengths.&nbsp; print(fbRanks) is subtracting the mean total, so I'm doing my abline with the mean added back on.&nbsp;</span></span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">par(mfrow=c(1,2))<br />sim.total = (sim.attack+sim.defend)/log(2)<br />plot(p.sim$total,sim.total[match(p.sim$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main="glmnet",xlab="estimated",ylab="true")<br />abline(mean(sim.total[match(p.sim$team,teams)]),1)<br />abline(mean(sim.total[match(p.sim$team,teams)])+sin(pi/4),1,col="red")<br />abline(mean(sim.total[match(p.sim$team,teams)])-1*sin(pi/4),1,col="red")<br />plot(p.spd$total,sim.total[match(p.spd$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main="speedglm",xlab="estimated",ylab="true")<br />abline(mean(sim.total[match(p.spd$team,teams)]),1)<br />abline(mean(sim.total[match(p.spd$team,teams)])+sin(pi/4),1,col="red")<br />abline(mean(sim.total[match(p.spd$team,teams)])-1*sin(pi/4),1,col="red")<br />&nbsp;</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-W8SOqJAvFLo/Ulb0mZLRhjI/AAAAAAAATyU/v7YZZryEI84/s1600/glm-comp-dixon-coles-3b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="316" src="http://1.bp.blogspot.com/-W8SOqJAvFLo/Ulb0mZLRhjI/AAAAAAAATyU/v7YZZryEI84/s640/glm-comp-dixon-coles-3b.png" width="640" /></a></div>I really want to be within those red lines, which are offset from the true total by +/- 1.0.&nbsp; A lot of my estimates 
are outside that.&nbsp; Let's look at what fraction fall outside the +/- 1.0 for different numbers of games played.&nbsp; I expect that estimates are better for teams that have played more games.<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#compare total against games played<br />par(mfrow=c(1,2))<br />sim.total = (sim.attack+sim.defend)/log(2)<br />nrange=1:30<br />fracbig = c()<br />for(nlim in nrange){<br />&nbsp; fracbig = c(fracbig, sum(abs(p.sim$total[p.sim$n&gt;nlim]-sim.total[match(p.sim$team[p.sim$n&gt;nlim],teams)])&gt;1,na.rm=TRUE)/sum(p.sim$n&gt;nlim))<br />}<br />par(mfrow=c(1,1))<br />plot(nrange,fracbig,type="l",xlab="number of games played")<br />fracbig = c()<br />for(nlim in nrange){<br />&nbsp; fracbig = c(fracbig, sum(abs(p.spd$total[p.spd$n&gt;nlim]-sim.total[match(p.spd$team[p.spd$n&gt;nlim],teams)])&gt;1,na.rm=TRUE)/sum(p.spd$n&gt;nlim))<br />}<br />lines(nrange,fracbig,col="red")<br />title("fraction outside abs(est total - true total)&gt;1\nglmnet black and speedglm red")</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-s6zYCgij1Gw/Ulb4JUih5mI/AAAAAAAATyg/0pEM7Fsx5Q8/s1600/glm-comp-dixon-coles-3c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="340" src="http://4.bp.blogspot.com/-s6zYCgij1Gw/Ulb4JUih5mI/AAAAAAAATyg/0pEM7Fsx5Q8/s400/glm-comp-dixon-coles-3c.png" width="400" /></a></div>This suggests that glmnet is performing better than speedglm when I use a minimum number of games (so just don't show estimates for teams with few games).<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />But I still haven't replicated the problem seen in the real data.&nbsp; Test 3, make the age groups have different means.<br /><br /><b>Test 3: use estimates from speedglm as attack and defense strength.&nbsp;</b> Now attack and defense are correlated, but also the different age 
groups have different mean strengths (older teams tend to be stronger).&nbsp; For this I am just going to use the estimates from cluster.1 in fbRanks.b from the previous Dixon &amp; Coles post.&nbsp; It so happens that rank_data has 9 unique clusters.<br /><br />Replace the attack and defend strength generating code with this<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">&nbsp; tmp=p.b[!is.na(p.b$attack) &amp; !is.na(p.b$defense) &amp; !(p.b$defense==0) &amp; !(p.b$attack==0),]<br />&nbsp; teams = as.character(tmp$team)<br />&nbsp; nteams = length(teams)<br />&nbsp; sim.attack = log(tmp$attack)<br />&nbsp; sim.defend = log(tmp$defense)</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">&nbsp; #get rid of data from teams that are not in cluster.1<br />&nbsp; sim.scores = sim_data$scores[sim_data$scores$home.team %in% teams &amp;&nbsp;&nbsp;&nbsp;&nbsp; sim_data$scores$away.team %in% teams,]</span><br /><br />FINALLY, this replicates the problem of multiple parallel lines.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-YlBrOfzHAKw/Ulb9YTTvdfI/AAAAAAAATyw/90aUadtKFWU/s1600/glm-comp=dixon-coles-4a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="308" src="http://3.bp.blogspot.com/-YlBrOfzHAKw/Ulb9YTTvdfI/AAAAAAAATyw/90aUadtKFWU/s640/glm-comp=dixon-coles-4a.png" width="640" /></a></div>And it is glmnet that is having problems:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-sSR-uLuanLQ/Ulb-cyv1_rI/AAAAAAAATy4/P0HKmPQqhyo/s1600/glm-comp-dixon-coles-4b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="330" src="http://1.bp.blogspot.com/-sSR-uLuanLQ/Ulb-cyv1_rI/AAAAAAAATy4/P0HKmPQqhyo/s640/glm-comp-dixon-coles-4b.png" width="640" /></a></div>And the consequences for the total strength estimate are very bad.<br /><div 
class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-lnHnuU2_oVs/Ulb_wl-Y6xI/AAAAAAAATzA/UZOUl6uxaPI/s1600/glm-comp-dixon-coles-4c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="346" src="http://3.bp.blogspot.com/-lnHnuU2_oVs/Ulb_wl-Y6xI/AAAAAAAATzA/UZOUl6uxaPI/s400/glm-comp-dixon-coles-4c.png" width="400" /></a></div>Next I'll work on trying to tweak glmnet's settings and see if I can get around this problem.<br /><br /><b>Full R code for running these simulations and tests</b><br />depends on fbRanks 2.0<br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">library(fbRanks)</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#load in rank_data from RData file and run code to produce p.b&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#takes 10 min or so </span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#fbRanks.b=rank.teams(scores=rank_data$scores, teams=rank_data$teams, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="speedglm")</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">#p.b=print(fbRanks.b,silent=TRUE)$ranks[] #cluster.1<br />&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">testt = 1; #Test with i.i.d. 
strengths<br />testt = 2; #Test with correlated strengths<br />testt = 3; #use the speedglm estimates from the real data<br />sim_data=rank_data<br />sim_data$scores = rank_data$scores[!(is.na(rank_data$scores$home.score) &amp; is.na(rank_data$scores$away.score)),]<br />teams = unique(c(as.character(sim_data$scores$home.team),as.character(sim_data$scores$away.team)))<br />nteams = length(teams)<br />sim.scores = sim_data$scores<br />if(testt == 1){<br />sim.attack = rnorm(nteams)<br />sim.defend = rnorm(nteams)-.4<br />}<br />if(testt==2){<br />&nbsp; library(MASS) #for mvrnorm<br />&nbsp; mu=apply(cbind(log(p.b$attack),-1*log(p.b$defense)),2,mean,na.rm=TRUE)<br />&nbsp; Sigma = cov(cbind(log(p.b$attack),-1*log(p.b$defense)),use="na.or.complete")<br />&nbsp; tmp=mvrnorm(nteams,mu=mu,Sigma=Sigma)<br />&nbsp; sim.attack = tmp[,1]<br />&nbsp; sim.defend = tmp[,2]<br />}<br />if(testt==3){<br />&nbsp; tmp=p.b[!is.na(p.b$attack) &amp; !is.na(p.b$defense) &amp; !(p.b$defense==0) &amp; !(p.b$attack==0),]<br />&nbsp; teams = as.character(tmp$team)<br />&nbsp; nteams = length(teams)<br />&nbsp; sim.attack = log(tmp$attack)<br />&nbsp; sim.defend = log(tmp$defense)<br />&nbsp; sim.scores = sim_data$scores[sim_data$scores$home.team %in% teams &amp; sim_data$scores$away.team %in% teams,]<br />}<br />ngames = nrow(sim.scores)<br />for(i in 1:ngames){<br />&nbsp; sim.scores$home.score[i] = rpois(1,exp(sim.attack[which(teams==sim.scores$home.team[i])] - sim.defend[which(teams==sim.scores$away.team[i])]))<br />&nbsp; sim.scores$away.score[i] = rpois(1,exp(sim.attack[which(teams==sim.scores$away.team[i])] - sim.defend[which(teams==sim.scores$home.team[i])]))<br />}<br />sim_data$scores=sim.scores<br />age=c("B01","B00","B99","B98","B97","B96")<br />fbRanks.sim=rank.teams(scores=sim_data$scores, teams=sim_data$teams, age=age, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="glmnet")<br />fbRanks.sim.spdglm=rank.teams(scores=sim_data$scores, teams=sim_data$teams, 
age=age, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="speedglm")<br />p.sim=print(fbRanks.sim,silent=TRUE)$ranks<br />p.spd=print(fbRanks.sim.spdglm,silent=TRUE)$ranks<br />if(testt!=3){ #use cluster 1; there are many<br />&nbsp; p.sim=p.sim[]<br />&nbsp; p.spd=p.spd[]<br />}<br /><br />#compare estimates against true<br />par(mfrow=c(1,2))<br />mse.sim = mean((log(p.sim$attack[p.sim$attack!=0])-sim.attack[match(p.sim$team[p.sim$attack!=0],teams)])^2,na.rm=TRUE)<br />plot(log(p.sim$attack),sim.attack[match(p.sim$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main=paste("glmnet\nmse =",mse.sim),xlab="estimated attack",ylab="true attack")<br />abline(0,1)<br />mse.spd = mean((log(p.spd$attack[p.spd$attack!=0])-sim.attack[match(p.spd$team[p.spd$attack!=0],teams)])^2,na.rm=TRUE)<br />plot(log(p.spd$attack),sim.attack[match(p.spd$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main=paste("speedglm\nmse =",mse.spd),xlab="estimated attack",ylab="true attack")<br />abline(0,1)<br /><br />#compare total<br />par(mfrow=c(1,2))<br />sim.total = (sim.attack+sim.defend)/log(2)<br />plot(p.sim$total,sim.total[match(p.sim$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main="glmnet",xlab="estimated",ylab="true")<br />abline(mean(sim.total[match(p.sim$team,teams)]),1)<br />abline(mean(sim.total[match(p.sim$team,teams)])+sin(pi/4),1,col="red")<br />abline(mean(sim.total[match(p.sim$team,teams)])-1*sin(pi/4),1,col="red")<br />plot(p.spd$total,sim.total[match(p.spd$team,teams)],<br />&nbsp;&nbsp;&nbsp;&nbsp; main="speedglm",xlab="estimated",ylab="true")<br />abline(mean(sim.total[match(p.spd$team,teams)]),1)<br />abline(mean(sim.total[match(p.spd$team,teams)])+sin(pi/4),1,col="red")<br />abline(mean(sim.total[match(p.spd$team,teams)])-1*sin(pi/4),1,col="red")<br /><br />#compare total against games played<br />sim.total = (sim.attack+sim.defend)/log(2)<br />nrange=1:30<br />fracbig = c()<br />for(nlim in nrange){<br />&nbsp; fracbig = 
c(fracbig, sum(abs(p.sim$total[p.sim$n&gt;nlim]-sim.total[match(p.sim$team[p.sim$n&gt;nlim],teams)])&gt;1,na.rm=TRUE)/sum(p.sim$n&gt;nlim))<br />}<br />par(mfrow=c(1,1))<br />plot(nrange,fracbig,type="l",xlab="number of games played",ylim=c(0,.6))<br />fracbig = c()<br />for(nlim in nrange){<br />&nbsp; fracbig = c(fracbig, sum(abs(p.spd$total[p.spd$n&gt;nlim]-sim.total[match(p.spd$team[p.spd$n&gt;nlim],teams)])&gt;1,na.rm=TRUE)/sum(p.spd$n&gt;nlim))<br />}<br />lines(nrange,fracbig,col="red")<br />title("fraction outside abs(est total - true total)&gt;1\nglmnet black and speedglm red")<br /><br /><br />#compare to each other<br />par(mfrow=c(1,2))<br />teamsb=union(as.character(p.sim$team),as.character(p.spd$team))<br />plot(log(p.spd$attack[match(teamsb,p.spd$team)]),log(p.sim$attack[match(teamsb,p.sim$team)]),<br />&nbsp;&nbsp;&nbsp;&nbsp; xlab="speedglm estimate",ylab="glmnet estimate",main="attack estimates")<br />abline(0,1)<br />plot(log(p.spd$defense[match(teamsb,p.spd$team)]),log(p.sim$defense[match(teamsb,p.sim$team)]),<br />&nbsp;&nbsp;&nbsp;&nbsp; xlab="speedglm estimate",ylab="glmnet estimate",main="defend estimates")<br />abline(0,1)<br /><br />#plot 1 make sure the sim and real data look kind of similar<br />par(mfrow=c(1,2))<br />hist(sim.scores$home.score,breaks=0:500,xlim=c(0,20),ylim=c(0,10000),main="simulated",xlab="home score")<br />hist(rank_data$scores$home.score,breaks=0:500,xlim=c(0,20),ylim=c(0,10000),main="real",xlab="home score")</span><b><br /></b><br /><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-36288295605154095182013-10-10T09:32:00.000-07:002013-10-10T17:39:38.266-07:00Dixon and Coles 2-player model fit with glmnet: Test 1 with real data<div dir="ltr" style="text-align: left;" trbidi="on">Follow up on <a href="http://parsimoniouspursuits.blogspot.com/2013/10/dixon-and-coles-2-player-model-in-glm.html">Dixon and Coles 2-player model in glm, speed, and glmnet</a><br /><br />Initial tests with 
simulated data with random mixing---meaning the graph of interactions across players has no 'clusters'---were promising and suggested that glmnet is both much faster and more robust.&nbsp; However, real social networks (and 2-player systems can be thought of as a type of social network) are highly non-random.&nbsp; The norm is a network with clusters in which players interact strongly and where interactions across clusters are weaker (and potentially quite weak).&nbsp; The result is going to be a likelihood surface with strong ridges.&nbsp; I don't fully understand the algorithm used by glmnet, but if it is using any kind of ascent algorithm, it might get stuck on these ridges.<br /><br />It looks like this might be happening.&nbsp; Here is a plot of speedglm versus glmnet for some real soccer match data spanning 6 age groups.&nbsp; Age groups are clusters and within age groups there are further clusters (states and leagues).&nbsp; <span style="color: blue;"><b>Update:</b> I was using glmnet default lambda which does not quite replicate glm behavior.&nbsp; I was using the coefficients at min(fit$lambda) which was almost a saturated model but still not quite the same. 
See update below where I pass in lambda=0 to force glmnet to return glm-equivalent estimates.&nbsp; The speedglm and glmnet estimates are now identical.&nbsp; See updated plot below.</span><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-TSVr-V0qJIc/UlbUznUke_I/AAAAAAAATxQ/G-x9-AGLsXI/s1600/glm-comp-dixon-cole-2a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="330" src="http://1.bp.blogspot.com/-TSVr-V0qJIc/UlbUznUke_I/AAAAAAAATxQ/G-x9-AGLsXI/s400/glm-comp-dixon-cole-2a.png" width="400" /></a></div><br />The estimates should be parallel to the 1-1 line (but not necessarily on it).&nbsp; Notice the data seem to fall on multiple 1-1 lines.&nbsp; This suggests that individual clusters (age groups) are ok, but glmnet is stopping before getting to a solution that puts all of them on the same 1-1 line.&nbsp; However, to understand what is going wrong I need to work with simulated data where I know the "truth".<br /><br /><b>R code to produce plot above.&nbsp;&nbsp;</b><br />rank_data is a data.frame of the 2013 match data for WA and OR youth boys select soccer teams.&nbsp; About 2500 teams.&nbsp; The data includes ages B02, B95 and B94 (B=boys, 02=birth year), but I left those off to speed up speedglm.&nbsp; The problem shown in the plot above is much much worse with those ages added.<br /><br />#using fbRanks R package 2.0<br />library(fbRanks)<br />#this was using default lambda for glmnet and getting coefficients using coef(fit, s=min(fit$lambda))<br />#where fit is what is returned from the fit=glmnet() call <br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">age=c("B01","B00","B99","B98","B97","B96")<br />fbRanks.a=rank.teams(scores=rank_data$scores, teams=rank_data$teams, age=age, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="glmnet")<br />fbRanks.b=rank.teams(scores=rank_data$scores, 
teams=rank_data$teams, age=age, min.date=min.date, max.date=max.date, silent=TRUE, time.weight.eta=best.eta, date.format="%m/%d/%Y", fun="speedglm")<br />p.a=print(fbRanks.a,silent=TRUE)$ranks[]<br />p.b=print(fbRanks.b,silent=TRUE)$ranks[]<br />teams=union(as.character(p.a$team),as.character(p.b$team))<br />plot(p.a$total[match(teams,p.a$team)],p.b$total[match(teams,p.b$team)],ylab="speedglm",xlab="glmnet")<br />abline(0,1)</span><br /><br /><b><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">Update</span></span></b><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">Passing in lambda=0 to the glmnet call fixes the problem.</span></span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">Same code as above but with glmnet(...., lambda=0) call in the rank.teams() function.</span></span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">The estimates are not identical but my analysis with simulated data suggests that glmnet estimates are more robust. 
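To make the lambda=0 fix concrete, here is a small self-contained check on toy data (my own illustration, not the fbRanks data; all names here are made up for the example). With lambda=0, glmnet fits an unpenalized Poisson model, so its coefficients should essentially match glm's; with the default lambda path they are shrunk and need not match.

```r
# Toy check: glmnet(lambda = 0) should reproduce glm's Poisson fit.
library(glmnet)
set.seed(1)
n <- 500
x <- matrix(rnorm(n * 3), n, 3)
y <- rpois(n, exp(drop(x %*% c(0.5, -0.3, 0.2))))
fit.glm <- glm(y ~ x, family = "poisson")
fit.net <- glmnet(x, y, family = "poisson", lambda = 0)
# intercept + 3 slopes in both fits; the difference should be tiny
max(abs(coef(fit.glm) - as.vector(coef(fit.net))))
```

The same idea carries over to rank.teams(): forcing lambda=0 removes the penalty, which is why the speedglm and glmnet rankings line up afterwards.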
</span></span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;"><br /></span></span><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-BF-z1clMoI8/UldGt4QUUNI/AAAAAAAATzQ/VZd8huGbrc8/s1600/glm-comp-dixon-cole-update-a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="472" src="http://2.bp.blogspot.com/-BF-z1clMoI8/UldGt4QUUNI/AAAAAAAATzQ/VZd8huGbrc8/s640/glm-comp-dixon-cole-update-a.png" width="640" /></a></div><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;"><br /></span></span><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-family: inherit;">&nbsp;</span> </span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-87923041131305280292013-10-08T15:50:00.002-07:002013-10-08T15:53:51.312-07:00Dixon and Coles' 2-player model in glm, speedglm and glmnet<div dir="ltr" style="text-align: left;" trbidi="on">Back to Dixon and Coles model applied to a huge team pool.&nbsp; <span style="font-family: inherit;">This is part of a <a href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">series of posts comparing glm, speedglm and glmnet</a> and is related to stuff I have been playing with regarding <a href="http://parsimoniouspursuits.blogspot.com/search/label/2-player">massive 2-player estimation problems</a>.</span><br /><br /><i><span style="font-family: inherit;">Dixon, M. J. and Coles, S. G. (1997), Modelling Association Football Scores and Inefficiencies in the Football Betting Market. Journal of the Royal Statistical Society: Series C (Applied Statistics), 46:&nbsp;265–280. 
doi:&nbsp;10.1111/1467-9876.00065</span></i><br /><br /><span style="font-family: inherit;">The last post used a model that is pretty similar to the Dixon and Coles model.&nbsp; I'll tweak it a bit and do the same series of tests.&nbsp; There are some oddities about how I set up the model, which are irrelevant for the purpose of these speed tests, but I note them anyhow.&nbsp;&nbsp;</span><br /><ul style="text-align: left;"><li><span style="font-family: inherit;">In the simulated data, attack and defend strengths of a team are uncorrelated.&nbsp; This is not true.&nbsp; They are highly correlated.&nbsp; This is btw why I can't use glmer on real data, i.e. can't just treat the strengths as random effects.&nbsp; I tried and failed to figure out how to treat different random effects as correlated with glmer in the <a href="http://cran.r-project.org/web/packages/lme4/index.html">lme4 R package</a>.&nbsp; Easy enough in a bayesian-glmer, but that is slow....&nbsp;</span></li><li><span style="font-family: inherit;">The model I passed to glm is unidentifiable.&nbsp; One of the coefficients needs to be set to 0.&nbsp; But glm and speedglm are smart enough to figure out what to do, though they complain a bit.&nbsp; Easy enough to fix.&nbsp; Just specify the glm model as having 2 factors (attack and defend) and treat the subjects as levels.&nbsp; Easy, but not relevant for this test.&nbsp; Btw I set this model up the correct way for glm normally.&nbsp; I did it the other way here since my code from previous posts was closer to that.</span></li><li><span style="font-family: inherit;">For glmnet, I don't know how to specify lambda for the model with 1 coefficient equal to 0. 
I'm sure it is possible to compute that, but I don't know how.&nbsp; So I used the default lambda.&nbsp; This will lead to some variable number of 0s.&nbsp; Seems to work ok; in fact, it works better than glm, no doubt because the extra constraints help.</span></li></ul><b>The model</b><br />Two subjects in each "competition".&nbsp; One is attacking, other is defending.&nbsp; Each subject has an attack and defend strength.&nbsp; The outcome of the competition is a poisson distributed random variable with mean = exp(attack strength of attacker + defend strength of defender).&nbsp; If a subject has a low (very negative) defend strength, then only strong attackers can score against them.&nbsp; If a subject has a high attack strength, they score against all but the strongest defenders.<br /><br /><b>The simulation</b><br />I draw attack and defend strengths randomly from normal distributions.&nbsp; I sample 2 subjects from the pool and assign them randomly to attack or defend.&nbsp; I compute the 'result' as a random number generated from the appropriate poisson distribution.&nbsp; Repeat for 10*nSubjects number of competitions.&nbsp; So the size of my dataset increases with the number of subjects, which I do to make the data a bit more realistic.&nbsp; This gives me about 10 competitions per subject.&nbsp; So on average 5 competitions to get their attack strength and 5 to get their defend strength.<br /><br /><b>Results</b><br />glm is desperately slow.&nbsp; 25 minutes for 1500 subjects! And that's a small test case.&nbsp; Speedglm is considerably speedier at 7.5 minutes. 
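(As an aside, the simulation just described boils down to a few lines of base R. This is a minimal sketch with hypothetical names that mirror the full code at the bottom, not the full benchmark itself.)

```r
# Minimal sketch of the simulation: poisson outcomes from paired subjects.
set.seed(42)
nSubjects <- 50
true.attack <- rnorm(nSubjects)       # attack strengths
true.defend <- rnorm(nSubjects)       # defend strengths
n <- 10 * nSubjects                   # ~10 competitions per subject
# row i: subject in column 1 attacks, subject in column 2 defends
levx <- t(replicate(n, sample(nSubjects, 2, replace = FALSE)))
y <- rpois(n, exp(true.attack[levx[, 1]] + true.defend[levx[, 2]]))
```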
But glmnet is HALF A SECOND for this problem and more robust.<br /><div class="separator" style="clear: both; text-align: center;"></div><br /><a href="http://3.bp.blogspot.com/-xDBFdXzhZBE/UlSITW4i02I/AAAAAAAATw8/n3jaXJDayn8/s1600/glm-comp-attack-a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="312" src="http://3.bp.blogspot.com/-xDBFdXzhZBE/UlSITW4i02I/AAAAAAAATw8/n3jaXJDayn8/s640/glm-comp-attack-a.png" width="640" /></a><br />glm is not particularly robust for this problem.&nbsp; Top plot shows glmnet versus true.&nbsp; Shows about what I expect.&nbsp; Looks pretty good to me.&nbsp; It widens out for low attack or defend strengths because those are cases where you get 0s from the poisson.&nbsp; The way I set it up (adding the attack and defend strengths as opposed to subtracting defend strength from attack strength), negative defend strength equals strong defense and negative attack equals weak attack.<br /><br />Next plot shows glm versus true.&nbsp; Bah, look at all those 1e-15 values. 
Ok, I made it hard for glm with sometimes only 1-2 competitions with which to estimate a strength, but glmnet did a lot better by setting some coef to 0s and thus constraining the problem a bit.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-s65b8OjySCI/UlSITb_eSBI/AAAAAAAATwo/n6yY4LHUAIE/s1600/glm-comp-attack-c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="285" src="http://3.bp.blogspot.com/-s65b8OjySCI/UlSITb_eSBI/AAAAAAAATwo/n6yY4LHUAIE/s400/glm-comp-attack-c.png" width="400" /></a><a href="http://1.bp.blogspot.com/-DVRgV-89MTw/UlSITcWguDI/AAAAAAAATw0/2lrKOshDBeU/s1600/glm-comp-attack-d.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="285" src="http://1.bp.blogspot.com/-DVRgV-89MTw/UlSITcWguDI/AAAAAAAATw0/2lrKOshDBeU/s400/glm-comp-attack-d.png" width="400" /></a></div>&nbsp;When the values were not 1e-15, they matched glmnet's values:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-uz0yT-az730/UlSIT_HmbEI/AAAAAAAATxA/ALqkKOjfHNg/s1600/glm-comp-attack-e.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="285" src="http://1.bp.blogspot.com/-uz0yT-az730/UlSIT_HmbEI/AAAAAAAATxA/ALqkKOjfHNg/s400/glm-comp-attack-e.png" width="400" /></a></div><br /><b>R code</b><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace; font-size: small;">library(glmnet)<br />library(speedglm)<br /><br />#Treat subject as an explanatory variable (x). 
There will be nSubjects x variables<br />#This allows us to take into account that subject 1 in the player1, player2, or player3 slot is the same subject<br />nPlayers=2 #how many 1s in each row of data<br />res=obj=c() <br />for(nSubjects in c(100,200,500,1000,1500)){<br />&nbsp; n=10*nSubjects<br />&nbsp;&nbsp;&nbsp; #variance of the distribution of the player pool x's<br />&nbsp;&nbsp;&nbsp; mean.x = 0<br />&nbsp;&nbsp;&nbsp; sig2.x = 1<br />&nbsp;&nbsp;&nbsp; #true.x is what we are trying to estimate<br />&nbsp;&nbsp;&nbsp; true.attack = rnorm(nSubjects, mean.x, sig2.x)<br />&nbsp;&nbsp;&nbsp; true.defend = rnorm(nSubjects, mean.x, sig2.x)<br />&nbsp; <br />&nbsp;&nbsp;&nbsp; #There are 2 ways to set this up.&nbsp; Treat subject as level in factors attack and defend<br />&nbsp;&nbsp;&nbsp; #or create 2 (attack+defend) x n.x explanatory variables that are 0/1<br />&nbsp;&nbsp;&nbsp; #they are mathematically equivalent to glmnet<br />&nbsp; levx=y=c() #levx is holder for player #; y is data<br />&nbsp; x = matrix(0,n,2*nSubjects) #x is the n x 2nSubjects explanatory variable matrix needed by glm<br />&nbsp; for(i in 1:n){<br />&nbsp;&nbsp;&nbsp; #draw nPlayers randomly for each of the n competitions<br />&nbsp;&nbsp;&nbsp; levx=rbind(levx,sample(nSubjects, nPlayers, replace=FALSE))<br />&nbsp;&nbsp;&nbsp; x[i,c(levx[i,]+c(0,nSubjects))]=1 #set x var for attacksubject = 1 if subject is present; same for defendsubject<br />&nbsp;&nbsp;&nbsp; y=c(y,rpois(1,exp(true.attack[levx[i,1]]+true.defend[levx[i,2]])))<br />&nbsp; }<br />&nbsp; #set the colnames on the explanatory variables<br />&nbsp; colnames(x)=c(paste("attack",1:nSubjects,sep=""),</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace; font-size: small;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; paste("defend",1:nSubjects,sep=""))<br />&nbsp; dat = cbind(y=y,x)<br />&nbsp; dat = data.frame(dat) #glm wants a data frame<br />&nbsp; <br />&nbsp; cat(nSubjects); cat(" ")<br />&nbsp;&nbsp;&nbsp; 
<br />&nbsp;&nbsp;&nbsp; #set up the formula<br />&nbsp;&nbsp;&nbsp; fooform = "y~-1"<br />&nbsp;&nbsp;&nbsp; fooform=paste(fooform,paste("+attack",1:nSubjects,collapse="",sep=""),</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace; font-size: small;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; paste("+defend",1:nSubjects,collapse="",sep=""),sep="")<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #fit glm<br />&nbsp;&nbsp;&nbsp; a=c(nSubjects, system.time(fit1&lt;-glm(formula(fooform), data=dat, family="poisson")))<br />&nbsp;&nbsp;&nbsp; b=c(nSubjects, object.size(fit1))<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #fit speedglm (use the poisson family here too, to match glm and glmnet)<br />&nbsp;&nbsp;&nbsp; a=c(a,system.time(fit2&lt;-speedglm(formula(fooform), data=dat, family=poisson())))<br />&nbsp;&nbsp;&nbsp; b=c(b, object.size(fit2))<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #fit glmnet<br />&nbsp;&nbsp;&nbsp; #1 row for each competition; col is just the player number</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace; font-size: small;">&nbsp;&nbsp;&nbsp; #need to t() levx so as.vector works by row&nbsp;</span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace; font-size: small;">&nbsp; &nbsp; #add that 0,nSubjects bit because I have 1:nSubjects for attack and another 1:nSubjects for defend<br />&nbsp;&nbsp;&nbsp; sx=sparseMatrix(rep(1:n,each=nPlayers),as.vector(t(levx+matrix(c(0,nSubjects),n,2,byrow=TRUE))),x=1)<br />&nbsp;&nbsp;&nbsp; a=c(a, system.time(fit3&lt;-glmnet(sx, y, intercept=FALSE, family="poisson")))<br />&nbsp;&nbsp;&nbsp; b=c(b, object.size(fit3))<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; res=rbind(res,a)<br />&nbsp;&nbsp;&nbsp; obj=rbind(obj,b)<br />&nbsp; }<br />&nbsp;&nbsp; <br />#plot 1<br />plot(res[,1],res[,2],type="l", ylab="seconds", xlab="number of subjects")<br />lines(res[,1],res[,3],col="red",lty=2)<br />lines(res[,1],res[,4],col="blue",lty=3)<br
/>legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br />title("Time to compute on my laptop")<br /><br />#plot 2<br />plot(res[,1],res[,2]/res[,4],xlab="number of subjects",ylab="glm (or speedglm) speed/glmnet speed", type="l")<br />lines(res[,1],res[,3]/res[,4], col="red")<br />title("relative speed of glm (black)\n and speedglm (red) to glmnet")<br /><br />#plot 3<br />par(mfrow=c(1,2))<br />coef.glm = coef(fit1) #since I didn't est an intercept<br />coef.speedglm = coef(fit2)<br />coef.glmnet.attack = coef(fit3,s=min(fit3$lambda))[2:(nSubjects+1)]<br />coef.glmnet.defend = coef(fit3,s=min(fit3$lambda))[(nSubjects+2):(2*nSubjects+1)]<br />#true to glmnet<br />plot(true.attack,coef.glmnet.attack,ylab="estimated beta",main="glmnet")<br />plot(true.defend,coef.glmnet.defend,ylab="estimated beta",main="glmnet")<br />#glm to glmnet<br />plot(coef.glm[1:nSubjects],coef.glmnet.attack,ylab="from glmnet",xlab="from glm",main="attack")<br />plot(coef.glm[(nSubjects+1):(2*nSubjects)],coef.glmnet.defend,ylab="from glmnet",xlab="from glm",main="defend")<br />#glm to true<br />plot(true.attack,coef.glm[1:nSubjects],ylab="estimated beta",main="glm")<br />plot(true.defend,coef.glm[(nSubjects+1):(2*nSubjects)],ylab="estimated beta",main="glm")</span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-85398133903284300162013-10-08T12:19:00.003-07:002013-10-08T12:31:17.485-07:00Multiplayer problem revisited with glm vs speedglm vs glmnet<div dir="ltr" style="text-align: left;" trbidi="on">Here I discuss a specific problem: using glm to estimate player "strengths" in a multi-player problem where the player pool is huge.&nbsp; <span style="font-family: inherit;">This is part of a <a href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">series of posts I did comparing glm, speedglm and glmnet</a>.&nbsp; And this is related to stuff I have been playing with regarding <a 
href="http://parsimoniouspursuits.blogspot.com/search/label/2-player">massive 2-player estimation problems</a>.</span><br /><b><br /></b><b>Scenario</b><br />Imagine we have a series of competitions where each competition consists of nPlayers randomly chosen from a pool of nSubjects.&nbsp; Each subject has a 'strength' and the outcome of the competition is some function of the additive strengths.&nbsp; Some examples might be:<br /><ul style="text-align: left;"><li>players are pulling on a rope and we measure force (on, um, a scale of -Inf to Inf....).&nbsp; y ~ normal(sum(strengths))</li><li>players are playing a game where they score 0-10 (or so).&nbsp; y ~ poisson(exp(sum(strengths)))</li><li>players are playing a win/lose game.&nbsp; y ~ binomial(logit(sum(strengths)))</li></ul>Here I just use a normal to be simple.<br /><br /><b>Set up in glm framework</b><br />We don't want to treat player1, player2, etc. as factors with subject as the levels, since subject i could be player1, player2, etc. in any one competition but is still the same subject.&nbsp; Instead we treat each subject as a 0/1 explanatory variable: 1 = subject was in the competition, 0 = subject was not.&nbsp; Our data consist of nSubjects explanatory variables with nPlayers 1s in each row.&nbsp; So...most of our explanatory variable data is zeros and we are going to have a huge number of explanatory variables.&nbsp; We can expect that glmnet will excel here.<br /><br />The R code below shows how to set up the simulated data and then set the model up for glm, speedglm and glmnet.&nbsp; It's pretty similar to the R code from my previous posts.&nbsp; But here I show some plots of estimated betas (subject strengths) versus true values, which requires getting the coefficients out of glmnet.
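The subject-presence encoding can be sketched in a few lines of R. This is a toy illustration only; the sizes (nSubjects, nPlayers, n) are made up and not the values used in the test below.

```r
# Toy sketch of the 0/1 encoding: one column per subject, a 1 in row i
# when that subject played in competition i, 0 otherwise.
set.seed(1)
nSubjects <- 5; nPlayers <- 3; n <- 4    # tiny illustrative sizes
x <- matrix(0, n, nSubjects)
for (i in 1:n) {
  players <- sample(nSubjects, nPlayers) # who competed in competition i
  x[i, players] <- 1
}
rowSums(x)  # every row has exactly nPlayers ones
```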
<a href="http://parsimoniouspursuits.blogspot.com/2013/10/getting-coefficients-out-of-glmnet.html">Read this post on what glmnet does, how to get coefficients out of glmnet and why I pass lambda=1e-6 into my glmnet call.</a><br /><br /><b>Results of speed test (R code below)</b><br /><br />Now that I pass in lambda=1e-6, glmnet didn't get slower as nSubjects increased.&nbsp; Wow.&nbsp; It was basically instantaneous for these tests while glm took about a minute.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-WnBNaYNMHyU/UlRX-6ePbpI/AAAAAAAATwI/Af0KZ4WKyfQ/s1600/glm-comp-part5a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="299" src="http://1.bp.blogspot.com/-WnBNaYNMHyU/UlRX-6ePbpI/AAAAAAAATwI/Af0KZ4WKyfQ/s320/glm-comp-part5a.png" width="320" /></a></div>&nbsp;Relative speed correspondingly skyrockets for glm and speedglm versus glmnet for this problem.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-O-xcou_Vin8/UlRYBZ52UfI/AAAAAAAATwY/jPFdnWdyQAs/s1600/glm-comp-part5b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="299" src="http://2.bp.blogspot.com/-O-xcou_Vin8/UlRYBZ52UfI/AAAAAAAATwY/jPFdnWdyQAs/s320/glm-comp-part5b.png" width="320" /></a></div>&nbsp;Estimates however are basically identical.&nbsp; First plot shows glmnet versus true and the next show glmnet versus glm and speedglm versus glm estimates.&nbsp; Yes, they are on the 1-1 line.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-mcR-229u35c/UlRX_LiemmI/AAAAAAAATwM/LK5XkKTqQ_o/s1600/glm-comp-part5c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-mcR-229u35c/UlRX_LiemmI/AAAAAAAATwM/LK5XkKTqQ_o/s1600/glm-comp-part5c.png" /></a></div><b>R code</b><br /><span style="font-size: x-small;"><span 
style="font-family: &quot;Courier New&quot;,Courier,monospace;">library(glmnet)<br />library(speedglm)<br /><br />#3 players<br />#Treat subject as a explanatory variable (x). There will be nSubject x variables<br />#This allows us to take into account that subject 1 in the player1, player2, or player3 is the same subject<br />nPlayers = 3<br />n = 10000<br /><br />res=obj=c() <br />for(nSubjects in c(100,200,500,1000,1500)){<br />&nbsp;&nbsp;&nbsp; #subject strength is drawn from a normal<br />&nbsp;&nbsp;&nbsp; beta=rnorm(nSubjects,0,1)<br />&nbsp;&nbsp;&nbsp; levx=y=c() #levx is holder for player #; y is data<br />&nbsp;&nbsp;&nbsp; x = matrix(0,n,nSubjects) #x is the n x nSubjects explanatory variable matrix needed by glm<br />&nbsp;&nbsp;&nbsp; for(i in 1:n){<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; #draw nPlayers randomly for each of the n competitions<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; levx=rbind(levx,sample(nSubjects, nPlayers, replace=FALSE))<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; x[i,levx[i,]]=1 #set x var for subject = 1 if subject is present in this competition<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; #outcome of competition is normal (could be binomial-win/loss, or poisson, or whatever)<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; y=c(y,rnorm(1,sum(beta[levx[i,]])))<br />&nbsp;&nbsp;&nbsp; }<br />&nbsp;&nbsp;&nbsp; #set the colnames on the explanatory variables<br />&nbsp;&nbsp;&nbsp; colnames(x)=paste("X",1:nSubjects,sep="")<br />&nbsp;&nbsp;&nbsp; dat = cbind(y=y,x)<br />&nbsp;&nbsp;&nbsp; dat = data.frame(dat) #glm wants a data frame<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; cat(nSubjects); cat(" ")<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #set up the formula<br />&nbsp;&nbsp;&nbsp; fooform = "y~-1"<br />&nbsp;&nbsp;&nbsp; for(i in 1:nSubjects) fooform=paste(fooform,"+X",i,sep="")<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #fit glm<br />&nbsp;&nbsp;&nbsp; a=c(nSubjects, system.time(fit1&lt;-glm(formula(fooform), data=dat)))<br />&nbsp;&nbsp;&nbsp; 
b=c(nSubjects, object.size(fit1))<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #fit speedglm<br />&nbsp;&nbsp;&nbsp; a=c(a,system.time(fit2&lt;-speedglm(formula(fooform), data=dat)))<br />&nbsp;&nbsp;&nbsp; b=c(b, object.size(fit2))<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; #fit glmnet<br />&nbsp;&nbsp;&nbsp; #1 row for each player; col is just the player number; need to t() levx so as.vector works by row<br />&nbsp;&nbsp;&nbsp; sx=sparseMatrix(rep(1:n,each=nPlayers),as.vector(t(levx)),x=1)<br />&nbsp;&nbsp;&nbsp; a=c(a, system.time(fit3&lt;-glmnet(sx, y, intercept=FALSE, lambda=1e-6)))<br />&nbsp;&nbsp;&nbsp; b=c(b, object.size(fit3))<br />&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; res=rbind(res,a)<br />&nbsp;&nbsp;&nbsp; obj=rbind(obj,b)<br />&nbsp; }<br />&nbsp;&nbsp; <br />#plot 1<br />plot(res[,1],res[,2],type="l", ylab="seconds", xlab="number of subjects")<br />lines(res[,1],res[,3],col="red",lty=2)<br />lines(res[,1],res[,4],col="blue",lty=3)<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br />title("Time to compute on my laptop")<br /><br />#plot 2<br />plot(res[,1],res[,2]/res[,4],xlab="number of subjects",ylab="glm (or speedglm) speed/glmnet speed", type="l")<br />lines(res[,1],res[,3]/res[,4], col="red")<br />title("relative speed of glm (black)\n and speedglm (red) to glmnet")<br /><br />#plot 3<br />par(mfrow=c(3,1))<br />coef.glm = coef(fit1) #since I didn't est an intercept<br />coef.speedglm = coef(fit2)<br />coef.glmnet = coef(fit3,s=min(fit3$lambda))[2:(nSubjects+1)]<br />plot(beta,coef.glmnet,ylab="estimated beta",main="glmnet estimates vs true")<br />plot(coef.glm,coef.glmnet,ylab="from glmnet",xlab="estimated beta from glm")<br />plot(coef.glm,coef.speedglm,ylab="from speedglm",xlab="estimated beta from glm")</span></span><br /><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-72425894976448313922013-10-08T11:29:00.000-07:002013-10-10T17:19:27.802-07:00Getting
coefficients out of glmnet<div dir="ltr" style="text-align: left;" trbidi="on">Surprisingly, figuring out how to get the coefficients out of a <a href="http://cran.r-project.org/web/packages/glmnet/index.html">glmnet</a> fit&nbsp; took me about 2 hours of reading posts on stackexchange and R forums.&nbsp; I would have given up except I saw a blog where someone said they used glmnet to do glms, so I knew it was possible.&nbsp; Turns out it is really easy but you need to know what coef(glmnet.fit) is outputting.&nbsp; The problem was that I was avoiding reading the paper accompanying glmnet and couldn't really understand the output until I bucked up and read the paper.&nbsp; <span style="font-family: inherit;">This is part of a <a href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">series of posts I did comparing glm, speedglm and glmnet</a>.</span><br /><br />This paper describes the glmnet package and its algorithms<br />Jerome Friedman, Trevor Hastie, Robert Tibshirani (2010). Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33(1), 1-22. 
URL <a href="http://www.jstatsoft.org/v33/i01/">http://www.jstatsoft.org/v33/i01/</a>.<br /><br />glmnet is used to find a reduced regression model that leads to minimized mean squared error---so you have many, many explanatory variables but most of these don't increase the predictive value of the model and you want to find the optimal reduced model.&nbsp; The penalty parameter 'lambda' controls the model size: the bigger lambda is, the more coefficients are forced to 0.&nbsp; When you type coef(fit) after a glmnet fit, you get the fits for all the lambda values used.&nbsp; The first columns are for small models, so most of the coef are 0.&nbsp; As you increase column number, the models get bigger and bigger.&nbsp; The idea, in a normal glmnet use, is to find the size of model that minimizes mean squared error.&nbsp; Here's one of the plots from their paper showing mean square error of predictions (cross-validation) versus size of model fit (with glmnet):<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-F9xJvCBcVWQ/UlRGyuGRDxI/AAAAAAAATv4/64wUBWggyzo/s1600/glmnet-lambda.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-F9xJvCBcVWQ/UlRGyuGRDxI/AAAAAAAATv4/64wUBWggyzo/s1600/glmnet-lambda.png" /></a></div>In this example they had 100 explanatory variables.&nbsp; The size of the model is at the top.&nbsp; The corresponding log(lambda) is at the bottom.&nbsp; In normal use, you do some cross-validation (glmnet has functions for that) and use that to make a plot like above and select the lambda (size of model) that minimizes the mean squared error for your problem.<br /><br />The output for coef(fit), where fit is a glmnet fit, is a (1+#coefs) x #lambdas matrix.&nbsp; You can call coef() with the argument s to specify the lambda level you want.&nbsp; Why "s" and not "lambda"??&nbsp; Anyhow "s" is "lambda" in the coef call.&nbsp; So let's say I wanted the coefficients at log(lambda) = -2 (i.e. lambda = exp(-2)), about the minimum in the figure
above.&nbsp;&nbsp; I would use the following:<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit3&lt;-glmnet(sx, y)<br />coef.glmnet = coef(fit3, s=exp(-2))[2:(nSubjects+1)]</span><br /><br />However, I am not using glmnet that way.&nbsp; I'm not trying to find a reduced model. I want to fit a saturated model---meaning I want to estimate all the coefficients.&nbsp; I want an estimate of strength for every subject in my model.&nbsp; Though I suppose 0 is an estimate, I don't want that.&nbsp; So I want to force glmnet to fit the saturated model (careful, sometimes you do need to fix a coef to 0 to have a solvable model; e.g. models with factors with multiple levels).<br /><br />I had trouble figuring out how to force glmnet to do this.&nbsp; <span style="color: blue;">[update: turns out that setting lambda=0 makes glmnet return the "glm" parameters.&nbsp; See below.]&nbsp;</span> The argument <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">lambda.min.ratio</span> should do the trick but seemed to have no effect.
However passing in argument <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">lambda</span> to set your own lambda values seems to work.&nbsp; Friedman et al (2010) says to not pass in just one lambda value because the algorithm works better with a "warm start".&nbsp; Hmm, I'm not sure what a "warm start" is but my guess is that it is a saturated model.&nbsp; In other words, that the algorithm works better is you start with the full model and work down.&nbsp;&nbsp; So...I'm just going to start with the full model.&nbsp; <i>Need to make sure that the full model is identifiable!</i><br /><br />I should be able to compute the lambda for the saturated model and send that to glmnet, but I couldn't figure out how to compute that.&nbsp; So after some futzing, it seemed like passing in <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">lambda=1e-6</span> forced glmnet to fit the saturated model for my toy problems.&nbsp; So my call to glmnet and corresponding coef() call looks like so:<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">fit3&lt;-glmnet(sx, y, lambda=1e-6)<br />coef.glmnet = coef(fit3)</span><br /><br />I don't need to pass <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">s</span> into coef(fit) since I only have 1 column because I passed in one value of lambda.<br /><br /><b>Update, a week later</b><br />Fitting the saturated or near-saturated models worked for most of my test cases---until I tried to use it on real soccer match data which has a clustered structure.&nbsp; <a href="http://parsimoniouspursuits.blogspot.com/2013/10/dixon-and-coles-2-player-model-fit-with.html">See this post</a>. 
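On well-behaved problems, the near-zero-lambda trick is easy to sanity-check against an unpenalized fit. This is a minimal sketch with simulated data (not the data from this post); the sizes and the 1e-2 tolerance are my own assumptions.

```r
library(glmnet)

# Compare a glmnet fit with a single tiny penalty to ordinary least squares.
set.seed(42)
n <- 200; p <- 5
x <- matrix(rnorm(n * p), n, p)
y <- as.vector(x %*% rnorm(p) + rnorm(n))
fit <- glmnet(x, y, lambda = 1e-6)   # one very small lambda: near-saturated fit
b.glmnet <- as.vector(coef(fit))     # intercept + p slopes (a single column)
b.ols <- as.vector(coef(lm(y ~ x)))  # unpenalized fit for comparison
max(abs(b.glmnet - b.ols))           # should be close to 0
```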
&nbsp;<b> </b>In the process of trying to come up with a work-around for the problems <a href="http://parsimoniouspursuits.blogspot.com/2013/10/dixon-and-coles-2-player-model-with.html">explored more fully here</a>, I came across a forum post where someone wrote that you can pass in lambda=0 to get glmnet to duplicate the behavior of glm. I had already tried passing in lambda really small to get it to fit the saturated model.&nbsp; The estimates were almost exactly the same but not quite and I ran into the problem that glmnet complained when I gave it a saturated model that is non-identifiable.&nbsp; glm deals with this by setting one of the factor coefficients to 0 to make the model identifiable, but I couldn't figure out how to set that constraint for glmnet.&nbsp; But it turns out passing in lambda=0 works just fine for those non-identifiable models.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-44140401758357609822013-10-07T15:05:00.002-07:002013-10-08T11:37:05.646-07:00glm, speedglm, glmnet comparison (Part 4: models with a few categorical variables but many levels)<div dir="ltr" style="text-align: left;" trbidi="on"><a href="http://parsimoniouspursuits.blogspot.com/2013/10/glm-speedglm-glmnet-comparison-part-3.html">Part 3</a> talked about models with many categorical variables but fairly low numbers of categories within those variables.&nbsp; <a href="http://parsimoniouspursuits.blogspot.com/2013/10/glm-speedglm-glmnet-comparison-part-2.html">Part 2</a> discussed that the relative size of the model matrix compared to its sparse matrix representation scales with levels/2, so we might expect that glm would get even worse relative to glmnet as we try to use it to estimate a model with categorical variables where there are many (1000s or 10000s) of levels for the variable.&nbsp; <span style="font-family: inherit;">This is part of a <a
href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">series of posts I did comparing glm, speedglm and glmnet</a>.</span><br /><br />In this test, I again use n=10,000.&nbsp; I use 2 categorical explanatory variables and allow the number of levels to go up. <br /><br />Speed comparison<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8qx5889wECs/UlMsf6sPn9I/AAAAAAAATvg/Lfef2K0nfGw/s1600/glm-comp-part4a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8qx5889wECs/UlMsf6sPn9I/AAAAAAAATvg/Lfef2K0nfGw/s1600/glm-comp-part4a.png" /></a></div>&nbsp;Here's the relative comparison.&nbsp; glm is getting slower and slower relative to glmnet as the number of categories goes up.&nbsp; For 1000 categories (and 2 factors), it was 750 times slower. <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Ja7zJX4xPb4/UlMsjakdsZI/AAAAAAAATvo/LICBvPpRy1I/s1600/glm-comp-part4b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Ja7zJX4xPb4/UlMsjakdsZI/AAAAAAAATvo/LICBvPpRy1I/s1600/glm-comp-part4b.png" /></a></div>The bottom line is speedglm; it's not red as the title suggests.&nbsp; The little jiggle at 200 is because glmnet was so fast that I didn't get a good speed estimate (likely affected by surfing I was doing while running the test).<br /><br /><b>RAM notes</b><br />At about 2500 categories, I use up the 8GB of RAM on my laptop with glm.&nbsp; There is no discernible jump in RAM use at 2500 categories for glmnet.&nbsp; I don't have a good way to measure RAM use during the function calls.&nbsp; I can watch it using the Windows performance monitor while running code, but I don't know how to get the max memory used with R code.&nbsp; Rprofmem() didn't seem to get me what I wanted nor does gc().<br
/><b>R code</b><br /><br />library(glmnet)<br />library(speedglm)<br /><br />n = 10000<br />facs = 2<br /><br />res=obj=c()<br />for(levs in c(50,100,200,300,500,1000)){<br />&nbsp; beta=matrix(rnorm(levs*facs,0,1),levs,facs)<br />&nbsp; <br />&nbsp; levx = c()<br />&nbsp; for(i in 1:facs) levx=cbind(levx,sample(levs,n,replace=TRUE))<br />&nbsp; <br />&nbsp; y &lt;- apply(levx, 1, function(x){ sum(beta[x+seq(0,levs*(facs-1),by=levs)]) }) + rnorm(n)<br />&nbsp; x = data.frame(levx)<br />&nbsp; for(i in 1:facs) x[,i]=factor(x[,i],levels=1:levs)<br />&nbsp; dat = cbind(y=y,x)<br /><br />&nbsp; cat(levs);cat("\n")<br />&nbsp; <br />&nbsp; #set up the formula<br />&nbsp; fooform = "y~-1"<br />&nbsp; for(i in 1:facs) fooform=paste(fooform,"+X",i,sep="")<br /><br />&nbsp; #fit glm<br />&nbsp; a=c(levs, system.time(fit&lt;-glm(formula(fooform), data=dat)))<br />&nbsp; b=c(levs, object.size(fit))<br />&nbsp; <br />&nbsp; #fit speedglm<br />&nbsp; a=c(a,system.time(fit&lt;-speedglm(as.formula(fooform), data=dat)))<br />&nbsp; b=c(b, object.size(fit))<br />&nbsp; <br />&nbsp; #fit glmnet<br />&nbsp; sx=sparseMatrix(rep(1:n,facs),as.vector(levx+matrix(seq(0,levs*(facs-1),by=levs),n,facs,byrow=TRUE)),x=1)<br />&nbsp; a=c(a, system.time(fit&lt;-glmnet(sx, y)))<br />&nbsp; b=c(b, object.size(fit))<br />&nbsp; <br />res=rbind(res,a)<br />obj=rbind(obj,b)<br />}<br /><br />#plot 1<br />plot(res[,1],res[,2],type="l", ylab="seconds", xlab="number of categories")<br />lines(res[,1],res[,3],col="red",lty=2)<br />lines(res[,1],res[,4],col="blue",lty=3)<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br />title(paste(facs,"explanatory variables"))<br /><br />#plot 2<br />plot(res[,1],res[,2]/res[,4],xlab="number of categories",ylab="glm (or speedglm) speed/glmnet speed", type="l")<br />lines(res[,1],res[,3]/res[,4], col="red")<br />title("relative speed of glm (black) and speedglm (red) to glmnet")<br /><br />#plot 3<br
/>plot(obj[,1],log(obj[,2]),type="l", ylab="object size (log(M))", xlab="number of explanatory variables",ylim=c(10,23))<br />lines(obj[,1],log(obj[,3]),col="red",lty=2)<br />lines(obj[,1],log(obj[,4]),col="blue",lty=3)<br />abline(h=log(8042*1e6))<br />abline(h=log(2000*1e6))<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br /><br /><br /><br /><br /><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-27387432118272230662013-10-07T14:19:00.001-07:002013-10-08T11:36:50.133-07:00glm, speedglm, glmnet comparison (Part 3: models with many categorical explanatory variables)<div dir="ltr" style="text-align: left;" trbidi="on">In <a href="http://parsimoniouspursuits.blogspot.com/2013/10/glm-speedglm-versus-glmnet-comparison.html">Part 1</a> of the glm, speedglm, glmnet comparison, I looked at models with continuous explanatory variables.&nbsp; In Part 3, I look at speeds for models with lots of categorical explanatory variables.&nbsp; This will use the sparse matrix representation of a model matrix for models with categorical explanatory variables.&nbsp; <a href="http://parsimoniouspursuits.blogspot.com/2013/10/glm-speedglm-glmnet-comparison-part-2.html">Part 2</a> talked about that.&nbsp; <span style="font-family: inherit;">This is part of a <a href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">series of posts I did comparing glm, speedglm and glmnet</a>.</span><br /><br />Here I have a model that looks like this<br /><br />y ~ factor1 + factor2 + factor3 + ... 
+ factor-k<br /><br />where k is big or # of levels gets big.&nbsp; This is the case where the model matrix gets huge, and we might expect glm to really bog down.&nbsp; <br /><br />First test.&nbsp; # of categorical explanatory variables gets big.&nbsp; I set n to 10,000 and number of levels per explanatory variable at 10.&nbsp; So yes, glm is getting really slow relative to glmnet.&nbsp; In fact, I didn't do more than 300 variables for glm since the speed was going down so much.&nbsp; speedglm also shows trouble as number of explanatory variables gets big.&nbsp; R code to generate this is below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-bUCpHnf33ok/UlMhp6HWYGI/AAAAAAAATu8/I1hteKXOkQ8/s1600/glm-comp-part3a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-bUCpHnf33ok/UlMhp6HWYGI/AAAAAAAATu8/I1hteKXOkQ8/s1600/glm-comp-part3a.png" /></a></div><br /><br />But look at glm with categorical explanatory variables versus continuous explanatory variables.&nbsp; Yipes.&nbsp; Using glm with large numbers (1000s) of categorical explanatory variables is not going to work.&nbsp; We are slowing down quickly and using up RAM.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-xWzEcbaJyho/UlMiFjcWrSI/AAAAAAAATvE/8CXuoZs1Nco/s1600/glm-comp-part3b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-xWzEcbaJyho/UlMiFjcWrSI/AAAAAAAATvE/8CXuoZs1Nco/s1600/glm-comp-part3b.png" /></a></div><br /><br />Here is the object size:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Pe0I498G4AM/UlMjXUBbgTI/AAAAAAAATvQ/ll8VpxBE_BI/s1600/glm-comp-part3c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" 
src="http://1.bp.blogspot.com/-Pe0I498G4AM/UlMjXUBbgTI/AAAAAAAATvQ/ll8VpxBE_BI/s1600/glm-comp-part3c.png" /></a></div><br /><br /><b>R code</b><br /><br /><code>library(glmnet)<br />library(speedglm)<br /><br />n = 10000<br />levs = 10<br /><br />res=obj=c()<br />for(facs in c(50,100,200,300,500,1000)){<br />&nbsp; beta=matrix(rnorm(levs*facs,0,1),levs,facs)<br />&nbsp; <br />&nbsp; levx = c()<br />&nbsp; for(i in 1:facs) levx=cbind(levx,sample(levs,n,replace=TRUE))<br />&nbsp; <br />&nbsp; y &lt;- apply(levx, 1, function(x){ sum(beta[x+seq(0,10*(facs-1),by=10)]) }) + rnorm(n)<br />&nbsp; x = data.frame(levx)<br />&nbsp; for(i in 1:facs) x[,i]=factor(x[,i],levels=1:levs)<br />&nbsp; dat = cbind(y=y,x)<br /><br />&nbsp; cat(facs);cat("\n")<br />&nbsp; <br />&nbsp; #set up the formula<br />&nbsp; fooform = "y~-1"<br />&nbsp; for(i in 1:facs) fooform=paste(fooform,"+X",i,sep="")<br /><br />&nbsp; #fit glm<br />&nbsp; #don't do glm if facs&gt;500; too slow<br />&nbsp; if(facs&lt;500){<br />&nbsp; a=c(facs, system.time(fit&lt;-glm(formula(fooform), data=dat)))<br />&nbsp; b=c(facs, object.size(fit))<br />&nbsp; }else{<br />&nbsp;&nbsp;&nbsp; a=c(facs, NA)<br />&nbsp;&nbsp;&nbsp; b=c(facs, NA)<br />&nbsp; }<br />&nbsp; <br />&nbsp; #fit speedglm<br />&nbsp; a=c(a,system.time(fit&lt;-speedglm(as.formula(fooform), data=dat)))<br />&nbsp; b=c(b, object.size(fit))<br />&nbsp; <br />&nbsp; #fit glmnet<br />&nbsp; sx=sparseMatrix(rep(1:n,facs),as.vector(levx+matrix(seq(0,levs*(facs-1),by=levs),n,facs,byrow=TRUE)),x=1)<br />&nbsp; a=c(a, system.time(fit&lt;-glmnet(sx, y)))<br />&nbsp; b=c(b, object.size(fit))<br />&nbsp; <br />res=rbind(res,a)<br />obj=rbind(obj,b)<br />}<br /><br />#plot 1<br />plot(res[,1],res[,2],type="l", ylab="seconds", xlab="number of explanatory variables",ylim=c(0,500))<br />lines(res[,1],res[,3],col="red",lty=2)<br />lines(res[,1],res[,4],col="blue",lty=3)<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br /><br 
/>#plot 2<br />plot(obj[,1],log(obj[,2]),type="l", ylab="object size (log(M))", xlab="number of explanatory variables",ylim=c(10,23))<br />lines(obj[,1],log(obj[,3]),col="red",lty=2)<br />lines(obj[,1],log(obj[,4]),col="blue",lty=3)<br />abline(h=log(8042*1e6))<br />abline(h=log(2000*1e6))<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br /><br /><!--500--><!-----></code></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-30669482012725465862013-10-07T12:53:00.000-07:002013-10-08T12:49:18.669-07:00Writing a model matrix in sparse matrix form<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on"><span style="font-family: inherit;">The last post <a href="http://parsimoniouspursuits.blogspot.com/2013/10/glm-speedglm-versus-glmnet-comparison.html">glm, speedglm, glmnet comparison (part 1)</a> showed that glmnet gives us a big speed and object size advantage for a vanilla regression when we have many explanatory variables (1000s).&nbsp; In my next posts, I will look at models with lots of factors.&nbsp; Here glmnet has an even bigger advantage because we can use sparse matrix notation to pass in our model.&nbsp; Before using sparse matrices with glmnet, I want to review how to specify a sparse matrix in R.&nbsp; This uses the <a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;ved=0CCwQFjAA&amp;url=http%3A%2F%2Fcran.r-project.org%2Fpackage%3DMatrix&amp;ei=nmFUUsCGM9HSiALltYCoCg&amp;usg=AFQjCNFWKX9U2LNbd-V8BJ6UqyHw3f5xxg&amp;sig2=iJn_IJ2rXKRSgWQvOUipPw&amp;bvm=bv.53760139,d.cGE">Matrix R package</a>.&nbsp; This is part of a <a href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">series of posts I did comparing glm, speedglm and glmnet</a>.</span><br /><br /><span style="font-family: inherit;">Let's say we had 5 
data points and 2 explanatory variables X1 and X2.&nbsp; Each has 2 levels, "a" and "b".&nbsp; Our data look like so, a 5x2 matrix:</span><br /><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">X1 X2<br />a a<br />b a<br />b a<br />b b<br />a b</pre><br />glm will represent this with a model matrix that expands it into binary form with a column for each level-factor combination. It'll look something like this*, a 5x4 matrix:<br /><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">X1a X1b X2a X2b<br />1 0 1 0<br />0 1 1 0<br />0 1 1 0<br />0 1 0 1<br />1 0 0 1</pre><br />*Ok, assume that no intercept is estimated and ignore that we have to set one factor to 0 (or something) to make the problem identifiable.<br /><br />This model matrix will have (#data points) rows and (#factors * #levels) columns AND it is almost all zeros. As the number of factors or levels gets big, this will get very wasteful and we will run out of RAM and everything slows down. <br /><br />We can represent this more concisely in sparse matrix form. 
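For instance, the Matrix package's sparseMatrix() builds the 5x4 model matrix above directly from the (row, column) positions of its 1s. A minimal sketch of that example:

```r
library(Matrix)

# (row, col) positions of the 1s in the 5x4 model matrix above
rows <- c(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)
cols <- c(1, 2, 2, 2, 1, 3, 3, 3, 4, 4)
sx <- sparseMatrix(i = rows, j = cols, x = 1)  # 5x4 sparse matrix
colnames(sx) <- c("X1a", "X1b", "X2a", "X2b")
as.matrix(sx)  # identical to the dense 0/1 model matrix written out above
```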
For sparse matrix form, we just need the row and columns that are 1s.<br /><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0">row col<br />1 1<br />2 2<br />3 2<br />4 2<br />5 1<br />1 3<br />2 3<br />3 3<br />4 4<br />5 4</pre><br />The sparse matrix representation has (#factors * #data points) rows and 2 cols (a col for row # and one for col #). The ratio of the size of the original model matrix to the sparse model matrix is #levels/2. So as number of levels and factors gets big, sparse model matrix form will save lots of space.</div></div><b>How to write the model matrix in R</b><br /><br />Here's a little piece of code to make the sparse matrix from a data frame where each column of the data frame is a factor.&nbsp; It uses the numeric representation of a factor, so as.numeric(factor).<br /><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-size: x-small;">x=data.frame(X1=c("a","b","b","b","a"), X2=c("a","a","a","b","b"))<br />cols=0<br />sx=c() #the sparse matrix representation<br />for(i in 1:ncol(x)){</span></span><br /><span style="font-family: &quot;Courier New&quot;,Courier,monospace;"><span style="font-size: x-small;">&nbsp; #we need to add on the numbers of cols used for previous factors<br />&nbsp; sx=rbind(sx,cbind(1:nrow(x),as.numeric(x[,i])+cols))<br />&nbsp; cols = cols + length(levels(x[,i]))<br />}</span></span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-15127590807109561352013-10-03T19:47:00.000-07:002013-10-08T12:50:41.063-07:00glm, speedglm versus 
glmnet comparison tests (part 1)<div dir="ltr" style="text-align: left;" trbidi="on">I spent part of today learning <a href="http://cran.r-project.org/web/packages/glmnet/index.html">glmnet</a>, another R package for speedier generalized linear regression.&nbsp; <a href="http://parsimoniouspursuits.blogspot.com/2013/10/getting-coefficients-out-of-glmnet.html">Read this post for some background on what glmnet does</a>.&nbsp; This is for massive linear regression problems where you are trying to find a minimal model and where the model matrix is so huge that it is maxing out your RAM and the computation is getting slow.&nbsp; The RAM is the kicker.&nbsp; If the computation is just slow, you can wait, but if it requires more RAM than you have then you are stuck.&nbsp; glm() is very RAM hungry due to the model.matrix that it constructs.&nbsp; This gets enormous as the number of predictor variables gets huge.&nbsp; This post is based on this one by someone else: <a href="http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r/">using-sparse-matrices-in-r</a>.&nbsp; This is the first of a <a href="http://parsimoniouspursuits.blogspot.com/search/label/glm-speedglm-glmnet%20comparison">whole series of posts I did comparing glm, speedglm and glmnet</a>.<br /><br />Speed test #1:&nbsp; My first speed test used a simple gaussian-errors regression with continuous predictor variables (meaning not factors, not categorical).&nbsp; The first plot shows speed in seconds (on my laptop).&nbsp; Model is y ~ x1 + x2 + ... 
+ xn, family="gaussian".&nbsp; R code for the test is below (and shows how to set each up for glm(), speedglm() and glmnet() ).<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-dW1xvf62nfs/UlMVOhkOSgI/AAAAAAAATuk/9CiHE8lGeJ0/s1600/glm-comparison-par1a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="293" src="http://3.bp.blogspot.com/-dW1xvf62nfs/UlMVOhkOSgI/AAAAAAAATuk/9CiHE8lGeJ0/s400/glm-comparison-par1a.png" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"></div>Plot 2 shows object size.&nbsp; The top 2 lines show 2 GB RAM and 8 GB RAM <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-x-hYVDFe1lg/UlMVRfXIJEI/AAAAAAAATuw/MNSfi0PUwJo/s1600/glm-comp-part1b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="293" src="http://4.bp.blogspot.com/-x-hYVDFe1lg/UlMVRfXIJEI/AAAAAAAATuw/MNSfi0PUwJo/s400/glm-comp-part1b.png" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>library(glmnet)<br />library(speedglm)<br />n =10000 #number of data points (y)<br />res = obj = c() #holders for output<br />#p is number of predictor variables <br />for(p in c(100,500,1000,1500,2000)){<br />#create random covariate values <br />x = matrix(rnorm(n * p), n, p)<br />beta = rnorm(p) #random betas<br />y = x %*% beta + rnorm(n) #the response variable<br /><br />cat(p);cat("\n")<br />#vanilla glm; keep only the elapsed time so each fit adds one column<br />#glm.fit = glm(y ~ x)<br />a=c(p, system.time(fit&lt;-glm(y ~ x))["elapsed"])<br />b=c(p, object.size(fit))<br /><br />#speedglm<br />da=data.frame(y=y, x)<br />#spdglm.fit = speedglm(y ~ x, data=da)<br />a=c(a,system.time(fit&lt;-speedglm(y ~ x, data=da))["elapsed"])<br />b=c(b, object.size(fit))<br /><br />#glmnet<br />#glmnet.fit = glmnet(x, y)<br />a=c(a, 
system.time(fit&lt;-glmnet(x, y))["elapsed"])<br />b=c(b, object.size(fit))<br /><br />res=rbind(res,a)<br />obj=rbind(obj,b) <br />}<br /><br />#plot 1<br />plot(res[,1],res[,2],type="l", ylab="seconds", xlab="number of predictor variables")<br />lines(res[,1],res[,3],col="red",lty=2)<br />lines(res[,1],res[,4],col="blue",lty=3)<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)<br /><br />#plot 2<br />plot(obj[,1],log(obj[,2]),type="l", ylab="object size (log bytes)", xlab="number of predictor variables",ylim=c(10,23))<br />lines(obj[,1],log(obj[,3]),col="red",lty=2)<br />lines(obj[,1],log(obj[,4]),col="blue",lty=3)<br />abline(h=log(8042*1e6))<br />abline(h=log(2000*1e6))<br />legend("topleft",c("glm","speedglm","glmnet"),col=c("black","red","blue"),lty=1)</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-39015660787345309942013-09-27T12:58:00.000-07:002013-09-27T14:25:48.168-07:00Test with real data<div dir="ltr" style="text-align: left;" trbidi="on">The previous test indicated that the 2 factor model is working.&nbsp; But testing with real data is problematic.<br /><br />First, the good news: estimates are correlated within well-connected groups. 
Here's the soccer data<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-_YnOxDz37M8/UkXhyq6yoQI/AAAAAAAATrw/sDpOdzssEXU/s1600/Sept+27+test+to+soccer+data.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="624" src="http://3.bp.blogspot.com/-_YnOxDz37M8/UkXhyq6yoQI/AAAAAAAATrw/sDpOdzssEXU/s640/Sept+27+test+to+soccer+data.png" width="640" /></a></div><br /><br />It's not a cloud.&nbsp; That's something.&nbsp; What's with it not being on the 1-1 line?&nbsp; Wrong prior?<br /><br />But when I add more groups that are loosely connected to each other, it breaks down and starts looking like a cloud.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-2c4bDYDvsZI/UkXl36v9eyI/AAAAAAAATr8/awVLrSldPz4/s1600/Sept+27+3+ages.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="390" src="http://3.bp.blogspot.com/-2c4bDYDvsZI/UkXl36v9eyI/AAAAAAAATr8/awVLrSldPz4/s400/Sept+27+3+ages.png" width="400" /></a></div><br />This makes sense as there is not a whole lot of information to sort out groups against each other.<br /><div class="separator" style="clear: both; text-align: center;"></div><ul style="text-align: left;"><li>Add a smoother step to condition on all the data.&nbsp; Alas, it depends on retaining the covariances.&nbsp; But still, I don't need to retain all of them and make an nxn var-cov matrix (which is what is hogging the RAM).&nbsp; That matrix is incredibly sparse and working with the whole thing is the problem.&nbsp; Look into methods for storing sparse matrices.</li><li>&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-57DSpOp-sN8/UkX1Uv91gjI/AAAAAAAATsk/02T26XnpNqo/s1600/photo.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298"
src="http://3.bp.blogspot.com/-57DSpOp-sN8/UkX1Uv91gjI/AAAAAAAATsk/02T26XnpNqo/s400/photo.JPG" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"></div></li><li>Do some sims to see if being off the 1-1 is from the prior.&nbsp;&nbsp;</li><li>The filterglm() is still working ok for a well-mixed group so still has potential. Do some sims to understand how the lack of mixing hurts.&nbsp;&nbsp; Elo starts all new players at a low level.&nbsp; I'm kind of doing that too.&nbsp; Do some time-series to understand how players move "up".&nbsp; I have priors for the groups.&nbsp; Why not use that as a better prior?</li><li>Or perhaps a hierarchical approach?&nbsp; Where I define 'groups' and try to estimate the group mean?&nbsp; It would work, but too pedantic.&nbsp; I like the organic 'crowd-sourced' ranking idea better.&nbsp; The structure is idiosyncratic to the problem and I don't see how I'd organically get the groups.</li></ul><b>Code to run this test</b><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">formula=y~-1+attack+defend<br />dat=read.csv("2013 match data/2013-2014/boys-scores-master.csv", stringsAsFactors=FALSE)<br />attack = c(dat$home.team, dat$away.team)<br />defend = c(dat$away.team, dat$home.team)<br />y = c(dat$home.score,dat$away.score)<br />moddat = data.frame(y=y,attack=attack, defend=defend,stringsAsFactors=FALSE)<br />#No NaN allowed in this approach<br />moddat = moddat[!is.na(moddat$y),]<br />#now make the factors<br />levs = unique(c(moddat$attack,moddat$defend))<br />moddat = data.frame(y=moddat$y,attack=factor(moddat$attack, levels=levs), <br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; defend=factor(moddat$defend, levels=levs))<br />test=filterglm(formula, moddat)<br />test2=as.data.frame(test)<br /><br />#load in the fbRanks speedglm object for the 9-24 data above<br
/>test3=print(fbRanks,silent=TRUE,age="B00",region="WA")$ranks[]<br />#resort to match whatever is output by print.fbRanks<br />test4=test2[match(test3$team,rownames(test2)),]<br />#print.fbRanks is showing exp(attack)<br />plot(log(test3$attack),test4$attack.mean,xlab="speedglm estimate",ylab="filterglm estimate")<br />abline(a=-1*mean(log(test3$attack)),b=1)<br /><br />par(mfrow=c(2,2))<br />for(i in c("B01","B00","B99","B98")){<br />#load in the fbRanks speedglm object for the 9-24 data above<br />test3=print(fbRanks,silent=TRUE,age=i,region=c("OR","WA"))$ranks[]<br />#resort to match whatever is output by print.fbRanks<br />test4=test2[match(test3$team,rownames(test2)),]<br />#print.fbRanks is showing exp(attack)<br />plot(log(test3$attack)-mean(log(test3$attack),na.rm=TRUE),test4$attack.mean-mean(test4$attack.mean,na.rm=TRUE),xlab="speedglm estimate",ylab="filterglm estimate")<br />title(i)<br />abline(a=0,b=1)<br />}<br /><br />par(mfrow=c(2,2))<br />i = c("B00","B99","B98")<br />&nbsp; #load in the fbRanks speedglm object for the 9-24 data above<br />&nbsp; test3=print(fbRanks,silent=TRUE,age=i,region=c("OR","WA"))$ranks[]<br />&nbsp; #resort to match whatever is output by print.fbRanks<br />&nbsp; test4=test2[match(test3$team,rownames(test2)),]<br />&nbsp; #print.fbRanks is showing exp(attack)<br />&nbsp; plot(log(test3$attack)-mean(log(test3$attack),na.rm=TRUE),test4$attack.mean-mean(test4$attack.mean,na.rm=TRUE),xlab="speedglm estimate",ylab="filterglm estimate")<br />&nbsp; title(i)<br />&nbsp; abline(a=0,b=1)</span></span><br /><br /><b>New functions</b><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">filterglm = function(formula, data, weights = NULL){<br />&nbsp; #this code is specific to the soccer ranking problem<br />&nbsp; #it requires that factors have the same levels; not a requirement<br />&nbsp; #the data must be a dataframe with attack and defend<br />&nbsp; tf &lt;- 
terms(formula)<br />&nbsp; M &lt;- model.frame(tf, data)<br />&nbsp; names.x = levels(data$attack)<br />&nbsp; n.x = length(names.x)<br />&nbsp; M = lapply(M,function(x){if(is.factor(x))x=as.numeric(x) else x})<br />&nbsp; M = as.data.frame(M)<br />&nbsp; #mean and variance<br />&nbsp; est.x=matrix(c(0, 0,1,1), n.x, 4, byrow=TRUE)<br />&nbsp; rownames(est.x)=names.x<br />&nbsp;colnames(est.x)=c(paste(colnames(attr(tf,"factors")),".mean",sep=""),</span></span><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">&nbsp;&nbsp;&nbsp;&nbsp; paste(colnames(attr(tf,"factors")),".var",sep=""))<br />&nbsp; n.trials = nrow(M)<br />&nbsp; for(i in 1:n.trials){<br />&nbsp;&nbsp;&nbsp; #go through each contest sequentially and update the factor estimates<br />&nbsp;&nbsp;&nbsp; prior.xtt = matrix(c(est.x[M[i,2],1],est.x[M[i,3],2]),2,1)<br />&nbsp;&nbsp;&nbsp; prior.Ptt = diag(c(est.x[M[i,2],3],est.x[M[i,3],4]))<br />&nbsp;&nbsp;&nbsp; out=filter.update(M[i,1],prior.xtt=prior.xtt, prior.Ptt=prior.Ptt)<br />&nbsp;&nbsp;&nbsp; est.x[M[i,2],1]=out$post.xtt[1]<br />&nbsp;&nbsp;&nbsp; est.x[M[i,3],2]=out$post.xtt[2]<br />&nbsp;&nbsp;&nbsp; #post.Ptt is not a diagonal matrix! 
Think about it.&nbsp; It shouldn't be.<br />&nbsp;&nbsp;&nbsp; #but I don't retain the information regarding covariance between player estimates<br />&nbsp;&nbsp;&nbsp; #this is where this approach loses efficiency relative to an approach that <br />&nbsp;&nbsp;&nbsp; #analyzes all the data jointly.&nbsp; But I assume I never have the data.<br />&nbsp;&nbsp;&nbsp; est.x[M[i,2],3]=diag(out$post.Ptt)[1]<br />&nbsp;&nbsp;&nbsp; est.x[M[i,3],4]=diag(out$post.Ptt)[2]<br />&nbsp; }<br />&nbsp; return(est.x)<br />}</span></span><br /><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">filter.update=function(y, prior.xtt = matrix(0,2,1), prior.Ptt = diag(1,2), Q=diag(0,2)){<br />&nbsp;&nbsp;&nbsp; require(KFAS)<br />&nbsp;&nbsp;&nbsp; n=1; TT=1; m=2<br />&nbsp;&nbsp;&nbsp; B=diag(1,2); t.B=B<br />&nbsp;&nbsp;&nbsp; Z=matrix(c(1,-1),1,2)<br />&nbsp;&nbsp;&nbsp; Q=diag(0,2)&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; P1inf=matrix(0,m,m) <br />&nbsp;&nbsp;&nbsp; if(packageVersion("KFAS")=="0.9.11")<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stop("KFAS 1.0.0 required and you have old version")<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; #kfas.model=SSModel(y, Z=Z, T=B, R=diag(1,m), Q=Q, a1=prior.xtt, P1=prior.Ptt, P1inf=P1inf, distribution="Poisson")<br />&nbsp;&nbsp;&nbsp; else<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; kfas.model=SSModel(y ~ -1+SSMcustom( Z=Z, T=B, R=diag(1,m), Q=Q, a1=prior.xtt, P1=prior.Ptt, P1inf=P1inf), distribution="poisson")<br />&nbsp;&nbsp;&nbsp; ks.out=KFS(kfas.model)<br />&nbsp;&nbsp;&nbsp; return(list(post.xtt=ks.out$alphahat[1:2],post.Ptt=ks.out$V[1:2,1:2,1]))<br />}</span></span></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-18535412.post-42438555120237530602013-09-27T11:46:00.002-07:002013-09-27T11:46:56.520-07:00Test of the 2-player filter model with attack and defend<div dir="ltr" style="text-align: left;" trbidi="on">Same ideas as yesterday, except now players have different types of x's depending 
on whether they are player 1 or 2 (i.e. attacking and defending).&nbsp; I did this after testing the idea on a real dataset, seeing no correlation to speedglm, then I tested against a known dataset and saw no correlation there.&nbsp; So I did this to see if the problem is the 2 factor types or a bug in the code I wrote this AM.&nbsp; Looks like the latter.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-GG1-US-vqIc/UkXRt16nu7I/AAAAAAAATrg/orULhPKqrp8/s1600/Sept+27+attack+and+defend.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="360" src="http://1.bp.blogspot.com/-GG1-US-vqIc/UkXRt16nu7I/AAAAAAAATrg/orULhPKqrp8/s640/Sept+27+attack+and+defend.png" width="640" /></a></div><br />Here's the 2nd test.&nbsp; Uses the simple.update() function from yesterday's post.&nbsp; Requires KFAS 1.0.0.&nbsp; I tried 0.9.11, and though it looks to have the poisson, it returns NaN if count is 0.<br /><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">sim.poisson.test2 = function(n.x=1000, n.trials=10*1000){<br />&nbsp; #variance of the distribution of the player pool x's<br />&nbsp; mean.x = 0<br />&nbsp; sig2.x = 1<br />&nbsp; #true.attack and true.defend are what we are trying to estimate<br />&nbsp; true.attack = rnorm(n.x, mean.x, sig2.x)<br />&nbsp; true.defend = rnorm(n.x, mean.x, sig2.x)<br />&nbsp; dat = matrix(0,n.trials,3)<br />&nbsp; for(i in 1:n.trials){<br />&nbsp;&nbsp;&nbsp; dat[i,2:3] = sample(1:n.x,2)<br />&nbsp;&nbsp;&nbsp; dat[i,1] = rpois(1,exp(true.attack[dat[i,2]]-true.defend[dat[i,3]]))<br />&nbsp; }<br />&nbsp; <br />&nbsp; #start everyone with an estimate and uncertainty<br />&nbsp; #corresponding to the player pool mean and variance<br />&nbsp; est.x=matrix(c(mean.x, mean.x, sig2.x, sig2.x),n.x,4,byrow=TRUE)<br />&nbsp; for(i in 1:n.trials){<br />&nbsp;&nbsp;&nbsp; #go through each contest sequentially and update the 
player x's<br />&nbsp;&nbsp;&nbsp; prior.xtt = matrix(c(est.x[dat[i,2],1],est.x[dat[i,3],2]),2,1)<br />&nbsp;&nbsp;&nbsp; prior.Ptt = diag(c(est.x[dat[i,2],3],est.x[dat[i,3],4]))<br />&nbsp;&nbsp;&nbsp; out=simple.update(dat[i,1],prior.xtt=prior.xtt, prior.Ptt=prior.Ptt, distribution="poisson")<br />&nbsp;&nbsp;&nbsp; est.x[dat[i,2],1]=out$post.xtt[1]<br />&nbsp;&nbsp;&nbsp; est.x[dat[i,3],2]=out$post.xtt[2]<br />&nbsp;&nbsp;&nbsp; #post.Ptt is not a diagonal matrix! Think about it.&nbsp; It shouldn't be.<br />&nbsp;&nbsp;&nbsp; #but I don't retain the information regarding covariance between player estimates<br />&nbsp;&nbsp;&nbsp; #this is where this approach loses efficiency relative to an approach that <br />&nbsp;&nbsp;&nbsp; #analyzes all the data jointly.&nbsp; But I assume I never have the data.<br />&nbsp;&nbsp;&nbsp; est.x[dat[i,2],3]=diag(out$post.Ptt)[1]<br />&nbsp;&nbsp;&nbsp; est.x[dat[i,3],4]=diag(out$post.Ptt)[2]<br />&nbsp; }<br />&nbsp; par(mfrow=c(1,2))<br />&nbsp; plot(est.x[,1],true.attack,xlab="estimated attack",ylab="true attack")<br />&nbsp; plot(est.x[,2],true.defend,xlab="estimated defend",ylab="true defend")<br />}</span></span></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-18535412.post-89314104529724751112013-09-26T12:30:00.003-07:002013-09-26T12:30:39.092-07:00Sept 23 2013 Papers<div dir="ltr" style="text-align: left;" trbidi="on">25 Years of Forecasting w Time Series Models<br /><a href="http://www.est.uc3m.es/esp/nueva_docencia/comp_col_get/lade/tecnicas_prediccion/Practicas0708/Practica1/25%20years%20of%20time%20series%20forecasting%20%28Gooijer%20and%20Hyndman%29.pdf">http://www.est.uc3m.es/esp/nueva_docencia/comp_col_get/lade/tecnicas_prediccion/Practicas0708/Practica1/25%20years%20of%20time%20series%20forecasting%20%28Gooijer%20and%20Hyndman%29.pdf</a></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-18535412.post-89594540623776547152013-09-26T12:29:00.000-07:002013-09-26T14:17:13.212-07:00update equations for the 
2-player contest with a poisson link function<div dir="ltr" style="text-align: left;" trbidi="on"><div data-canvas-width="84.88000000000001" data-font-name="Times" dir="ltr" style="font-family: serif; font-size: 16px; left: 141.76px; top: 941.12px; transform-origin: 0% 0% 0px; transform: scale(0.999764, 1);"><i>The equations for various link functions are here but it's for a univariate case.&nbsp; Crud.&nbsp; I need univariate y and bivariate x.</i><br />Harvey, A.C. and C. Fernandes, 1989, Time series models for count or qualitative observations,</div><div data-canvas-width="109.2704" data-font-name="Times" dir="ltr" style="font-family: serif; font-size: 16px; left: 321.84px; top: 941.12px; transform-origin: 0% 0% 0px; transform: scale(0.999424, 1);">Journal of Business and Economic Statistics, 7, pp. 407-423.&nbsp;</div><div data-canvas-width="109.2704" data-font-name="Times" dir="ltr" style="font-family: serif; font-size: 16px; left: 321.84px; top: 941.12px; transform-origin: 0% 0% 0px; transform: scale(0.999424, 1);">http://www.tandfonline.com/doi/abs/10.1080/07350015.1989.10509750#.UkSIXxD0eUM </div><div data-canvas-width="109.2704" data-font-name="Times" dir="ltr" style="font-family: serif; font-size: 16px; left: 321.84px; top: 941.12px; transform-origin: 0% 0% 0px; transform: scale(0.999424, 1);"></div><div data-canvas-width="109.2704" data-font-name="Times" dir="ltr" style="font-family: serif; font-size: 16px; left: 321.84px; top: 941.12px; transform-origin: 0% 0% 0px; transform: scale(0.999424, 1);">See also Lambert 1996 </div><div data-canvas-width="109.2704" data-font-name="Times" dir="ltr" style="font-family: serif; font-size: 16px; left: 321.84px; top: 941.12px; transform-origin: 0% 0% 0px; transform: scale(0.999424, 1);">http://www.statsoc.ulg.ac.be/publis/Lambert1996JRSSC.pdf<br /><br />But the new version of KFAS coming out in October 2013 has a Kalman filter for poisson errors.&nbsp; So I can use KFAS's filter to get the updates.&nbsp; However, it is painfully slow and the 
gaussian filter is no faster than the poisson filter, so there must be an easy way to compute the updates.&nbsp; Anyhow, this works...but is slow... However(!), and this is super key, it does not take any RAM to speak of.&nbsp; I saw no jump.&nbsp; It just slows as the number of players increases and the number of contests increases.&nbsp; This is key since doing a big glm on the whole dataset with 1000-10,000 factors takes enormous amounts of RAM.<br /><br />This at least shows proof of concept for an online filter version of Dixon and Coles' model.&nbsp; Need to figure out how to compute the poisson filter though so I could write it in javascript and have it on a website, say, without having to interact with a computation back-end. <br /><br /><b>#Note this depends on KFAS version 1.0.0 which isn't out yet.</b><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">simple.update=function(y, prior.xtt = matrix(0,2,1), prior.Ptt = diag(1,2), Q=diag(0,2), R=.1, distribution="gaussian"){<br />&nbsp; if(distribution=="gaussian"){<br />&nbsp;&nbsp;&nbsp; #This is a Kalman filter<br />&nbsp;&nbsp;&nbsp; #y is the response (data point)<br />&nbsp;&nbsp;&nbsp; #Q is how mean x varies in time<br />&nbsp;&nbsp;&nbsp; #R is how y (response) is variable with given mu1-mu2<br />&nbsp;&nbsp;&nbsp; #y ~ N(mu1-mu2,R)<br />&nbsp;&nbsp;&nbsp; Z=matrix(c(1,-1),1,2); tZ = t(Z)<br />&nbsp;&nbsp;&nbsp; Ptt1 = prior.Ptt + Q<br />&nbsp;&nbsp;&nbsp; xtt1 = prior.xtt<br />&nbsp;&nbsp;&nbsp; Kt = Ptt1%*%tZ%*%solve(Z%*%Ptt1%*%tZ + R)<br />&nbsp;&nbsp;&nbsp; xtt = xtt1 + Kt%*%(y-Z%*%xtt1)<br />&nbsp;&nbsp;&nbsp; Ptt = (diag(1,2)-Kt%*%Z)%*%Ptt1<br />&nbsp;&nbsp;&nbsp; return(list(post.xtt=xtt,post.Ptt=Ptt))<br />&nbsp; }<br />&nbsp; if(distribution=="poisson"){<br />&nbsp;&nbsp;&nbsp; require(KFAS)<br />&nbsp;&nbsp;&nbsp; n=1; TT=1; m=2<br />&nbsp;&nbsp;&nbsp; B=diag(1,2); t.B=B<br />&nbsp;&nbsp;&nbsp; Z=matrix(c(1,-1),1,2)<br
/>&nbsp;&nbsp;&nbsp; R=matrix(R)<br />&nbsp;&nbsp;&nbsp; Q=diag(0,2)&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; P1inf=matrix(0,m,m)&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; kfas.model=SSModel(y ~ -1+SSMcustom( Z=Z, T=B, R=diag(1,m), Q=Q, a1=prior.xtt, P1=prior.Ptt, P1inf=P1inf), distribution="poisson")<br />&nbsp;&nbsp;&nbsp; ks.out=KFS(kfas.model)<br />&nbsp;&nbsp;&nbsp; return(list(post.xtt=ks.out$alphahat[1:2],post.Ptt=ks.out$V[1:2,1:2,1]))<br />&nbsp; }<br />&nbsp; if(distribution=="kfas-gaussian"){<br />&nbsp;&nbsp;&nbsp; require(KFAS)<br />&nbsp;&nbsp;&nbsp; n=1; TT=1; m=2<br />&nbsp;&nbsp;&nbsp; B=diag(1,2); t.B=B<br />&nbsp;&nbsp;&nbsp; Z=matrix(c(1,-1),1,2)<br />&nbsp;&nbsp;&nbsp; R=matrix(R)<br />&nbsp;&nbsp;&nbsp; Q=diag(0,2)<br />&nbsp;&nbsp;&nbsp; P1inf=matrix(0,m,m)&nbsp;&nbsp;&nbsp; <br />&nbsp;&nbsp;&nbsp; kfas.model=SSModel(y ~ -1+SSMcustom( Z=Z, T=B, R=diag(1,m), Q=Q, a1=prior.xtt, P1=prior.Ptt, P1inf=P1inf), H=R)<br />&nbsp;&nbsp;&nbsp; ks.out=KFS(kfas.model)<br />&nbsp;&nbsp;&nbsp; return(list(post.xtt=ks.out$alphahat[1:2],post.Ptt=ks.out$V[1:2,1:2,1]))<br />&nbsp; }<br />&nbsp; <br />}</span></span><br /><br /><b>Some code to run a little test</b><br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">sim.poisson.test = function(n.x=1000, n.trials=10*1000){<br />&nbsp; #variance of the distribution of the player pool x's<br />&nbsp; mean.x = 0<br />&nbsp; sig2.x = 1<br />&nbsp; #true.x is what we are trying to estimate<br />&nbsp; true.x = rnorm(n.x, mean.x, sig2.x)<br />&nbsp; dat = matrix(0,n.trials,3)<br />&nbsp; for(i in 1:n.trials){<br />&nbsp;&nbsp;&nbsp; dat[i,2:3] = sample(1:n.x,2)<br />&nbsp;&nbsp;&nbsp; dat[i,1] = rpois(1,exp(true.x[dat[i,2]]-true.x[dat[i,3]]))<br />&nbsp; }<br />&nbsp; <br />&nbsp; #start everyone with an estimate and uncertainty<br />&nbsp; #corresponding to the player pool mean and variance<br />&nbsp; est.x=matrix(c(mean.x, sig2.x),n.x,2,byrow=TRUE)<br
/>&nbsp; for(i in 1:n.trials){<br />&nbsp;&nbsp;&nbsp; #go through each contest sequentially and update the player x's<br />&nbsp;&nbsp;&nbsp; prior.xtt = matrix(est.x[dat[i,2:3],1])<br />&nbsp;&nbsp;&nbsp; prior.Ptt = diag(est.x[dat[i,2:3],2])<br />&nbsp;&nbsp;&nbsp; out=simple.update(dat[i,1],prior.xtt=prior.xtt, prior.Ptt=prior.Ptt, distribution="poisson")<br />&nbsp;&nbsp;&nbsp; est.x[dat[i,2:3],1]=out$post.xtt<br />&nbsp;&nbsp;&nbsp; est.x[dat[i,2:3],2]=diag(out$post.Ptt)<br />&nbsp; }<br />&nbsp; plot(est.x[,1],true.x,xlab="estimated x",ylab="true x")<br />}</span></span><br /><span class="Apple-style-span" style="background-color: #e1e2e5; border-collapse: separate; font-family: 'Lucida Console'; font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: 15px; orphans: 2; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px;"></span><br /><pre class="GNVMTOMCABB" style="-webkit-user-select: text; border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-style: none; border-width: initial; font-family: 'Lucida Console'; font-size: 10pt !important; line-height: 1.2; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; outline-color: initial; outline-style: none; outline-width: initial; white-space: pre-wrap !important;" tabindex="0"><span class="GNVMTOMCHAB ace_keyword">system.time(sim.poisson.test(n.trials=5*1000))</span></pre><br /><br />Output.&nbsp; Note performance is rather better than one would normally get since this pits really weak players against strong ones.&nbsp; You might expect that strong players tend to play strong players....<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-AJwz3t2xUfU/UkSjfDJtA2I/AAAAAAAATrE/kNssjJZgGAM/s1600/poisson+test+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" 
height="280" src="http://2.bp.blogspot.com/-AJwz3t2xUfU/UkSjfDJtA2I/AAAAAAAATrE/kNssjJZgGAM/s400/poisson+test+1.png" width="400" /></a></div><br /></div></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-65788848792764375782013-09-26T11:03:00.002-07:002013-09-26T11:04:20.709-07:00Toy example of a filter for computing the 2-player hidden x's<div dir="ltr" style="text-align: left;" trbidi="on">Follow-up on <a href="http://parsimoniouspursuits.blogspot.com/2013/09/relationship-of-elo-algorithm-to.html">relationship-of-elo-algorithm-to-logistic-regression</a><br /><br />This morning I coded up an implementation of the idea for an Elo-like (or Kalman-like or filter-esque or Bayesian) update algorithm for 2-player data.&nbsp; It is Elo-like because the idea is that you never store the data nor the player information.&nbsp; Rather each player knows their own estimated <i>mean x</i> and <i>uncertainty</i> in that estimate. They meet another player and have a contest.&nbsp; At the end of the contest, they exchange their priors (their mean <i>x</i> and uncertainty before the contest) and each updates their own estimate of their <i>x</i>.<br /><br />I'm going to tackle first an easy problem for which I already know the update equation.&nbsp; Ultimately, I want to use this for a problem where I will have to derive the update equation.&nbsp; I would prefer a closed-form update equation, but the principle will work even if I have to do a numerical update using an MCMC algorithm.<br /><br />Set-up of the problem:<br />Assume a large player pool. 
Player x's are drawn from a Normal distribution with mean mu and variance pi.&nbsp; We assume that we know what this distribution is but don't know the individual player x's.&nbsp; Our objective is to estimate those x's.&nbsp; Two players are drawn at random.&nbsp; One is chosen (randomly) to be #1 (attacker) and the other is #2 (defender).&nbsp; They have a contest.&nbsp; The outcome of this contest is drawn from a Normal distribution with mean (x.attacker - x.defender) and variance contest.var.&nbsp; Again we assume we know a lot about the nature of this contest, so we know the contest variance and we know the outcome is normally distributed.&nbsp; But we don't know the <i>x</i>'s of the players in the contest.&nbsp; Our players start with an estimated <i>x</i> and uncertainty of mu and pi (the distribution of x's in the player population).&nbsp; They head out and randomly encounter other players and have contests with them.&nbsp; After each contest, the individual players update their <i>x</i> estimate and uncertainty in that.<br /><br />Code is below.&nbsp; This plot summarizes the results with 1000 players and 10,000 or 5,000 contests.&nbsp; The attacker and defender were chosen randomly.&nbsp; The mean number of contests per player was 20 for 10,000 contests (each contest includes 2 players) and 10 for 5,000 contests.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-E5qwujSQZus/UkRyiqkjWDI/AAAAAAAATq0/mBRo28V63Gg/s1600/Sept+25+B.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-E5qwujSQZus/UkRyiqkjWDI/AAAAAAAATq0/mBRo28V63Gg/s1600/Sept+25+B.png" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><br />First, cool. 
This works!!&nbsp; Second, it depends a lot on the characteristics of the contest.&nbsp; If the contest involves a lot of luck (bottom row), so that there is only a loose relationship between mu1-mu2 and the outcome, then many contests are needed to get a good estimate of players' x's.&nbsp; If the contest outcome is closely related to mu1-mu2 (top row), then fewer contests are needed.<br /><br />The R code:<br /><span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">simple.update=function(y, prior.xtt = matrix(0,2,1), prior.Ptt = diag(1,2), Q=diag(0,2), R=.1){<br />&nbsp; #This is a Kalman filter<br />&nbsp; #y is the response (data point)<br />&nbsp; #Q is how mean x varies in time<br />&nbsp; #R is how y (response) is variable with given mu1-mu2<br />&nbsp; #y ~ N(mu1-mu2,R)<br />&nbsp; Z=matrix(c(1,-1),1,2); tZ = t(Z)<br />&nbsp; Ptt1 = prior.Ptt + Q<br />&nbsp; xtt1 = prior.xtt<br />&nbsp; Kt = Ptt1%*%tZ%*%solve(Z%*%Ptt1%*%tZ + R)<br />&nbsp; xtt = xtt1 + Kt%*%(y-Z%*%xtt1)<br />&nbsp; Ptt = (diag(1,2)-Kt%*%Z)%*%Ptt1<br />&nbsp; return(list(post.xtt=xtt,post.Ptt=Ptt))<br />}<br /><br />sim.test = function(r=1, n.x=1000, n.trials=10*1000){<br />#r is the contest variance, so pass sqrt(r) to rnorm as the sd<br />#variance of the distribution of the player pool x's<br />mean.x = 0<br />sig2.x = 1<br />#true.x is what we are trying to estimate<br />true.x = rnorm(n.x, mean.x, sig2.x)<br />dat = matrix(0,n.trials,3)<br />for(i in 1:n.trials){<br />&nbsp; dat[i,2:3] = sample(1:n.x,2)<br />&nbsp; dat[i,1] = rnorm(1,true.x[dat[i,2]]-true.x[dat[i,3]],sqrt(r))<br />}<br /><br />#start everyone with an estimate and uncertainty<br />#corresponding to the player pool mean and variance<br />est.x=matrix(c(mean.x, sig2.x),n.x,2,byrow=TRUE)<br />for(i in 1:n.trials){<br />&nbsp; #go through each contest sequentially and update the player x's<br />&nbsp; prior.xtt = matrix(est.x[dat[i,2:3],1])<br />&nbsp; prior.Ptt = diag(est.x[dat[i,2:3],2])<br />&nbsp; out=simple.update(dat[i,1],prior.xtt=prior.xtt, 
prior.Ptt=prior.Ptt, R=r)<br />&nbsp; est.x[dat[i,2:3],1]=out$post.xtt<br />&nbsp; #post.Ptt is not a diagonal matrix! Think about it.&nbsp; It shouldn't be.<br />&nbsp; #but I don't retain the information regarding covariance between player estimates<br />&nbsp; #this is where this approach loses efficiency relative to an approach that <br />&nbsp; #analyzes all the data jointly.&nbsp; But I assume I never have the data.<br />&nbsp; est.x[dat[i,2:3],2]=diag(out$post.Ptt)<br />}<br /><br />plot(est.x[,1],true.x,xlab="estimated x",ylab="true x")<br />}<br /><br />par(mfrow=c(3,3))<br />r=.1<br />hist(rnorm(1000,0,sqrt(r)),main="Dist of contest outcomes\nr=.1",xlab="contest outcome")<br />sim.test(r=r)<br />title("Mean 20 contests\nper player")<br />sim.test(r=r,n.trials=5*1000)<br />title("Mean 10 contests\nper player")<br /><br />r=1<br />hist(rnorm(1000,0,sqrt(r)),main="r=1",xlab="contest outcome")<br />sim.test(r=r)<br />sim.test(r=r,n.trials=5*1000)<br /><br />r=2<br />hist(rnorm(1000,0,sqrt(r)),main="r=2",xlab="contest outcome")<br />sim.test(r=r)<br />sim.test(r=r,n.trials=5*1000)</span></span><br /><br />Ok, that's great.&nbsp; This is nothing new.&nbsp; It's just an implementation of Elo's idea but<br /><ul style="text-align: left;"><li>&nbsp;in a slightly different context</li><li>different link function between response variable and hidden variables</li><li>players retain information about the uncertainty in their estimated <i>x</i></li></ul><i>But </i>it now points me in the direction of an algorithm for a generic contest link function (Bernoulli for a success-fail contest, Poisson for a contest with points or goals, Negative binomial, etc)<i>.</i> The Gaussian link function is nice since the update equation (Kalman filter) is closed form.&nbsp; If I have to resort to a numerical updater (Gibbs or MCMC), it's going to get slow.<br /><div><br
/></div></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-18535412.post-30508530449032643802013-09-25T12:21:00.001-07:002013-10-08T12:17:08.984-07:00Relationship of Elo algorithm to logistic regression<div dir="ltr" style="text-align: left;" trbidi="on">Follow up on <a href="http://parsimoniouspursuits.blogspot.com/2013/09/strategy-for-asynchronous-update.html">strategy-for-asynchronous-update</a><br />See ** at bottom for where this all is going.<br />Huh, what does this have to do with the Elo algorithm?&nbsp; The Elo algorithm is a solution to a problem similar to the <i>reverse</i> 2-player logistic regression described half-way down.<br /><br />y ~ b, link=f(b)*<br /><br />y is 0,1 data (success, failure).&nbsp; In a typical logistic regression, we use the logistic function to link "t" to the probability of success.<br /><br />prob of success = p = 1/(1+exp(-t))<br /><br />Then we assume some function that relates our covariate x to t.&nbsp; The vanilla approach is a linear relationship: t = a + bx<br /><br />* except that we think of this in the inverse (logit).&nbsp; a+bx = g(x) = log(p/(1-p)), or the log odds is a linear function of x.<br /><br />The objective of this simple logistic regression is to estimate a and b, given x's associated with y's (0,1 data).&nbsp; An iterative algorithm is used where we start with some estimate of a and b, and then keep updating that (e.g. 
Newton's method).<br /><br />So now let's reverse the problem.<br /><br />We know (assume) a and b but we do not know x.&nbsp; We want an algorithm that gets us the x(i), where i is our i-th individual (say).&nbsp; That seems easy enough, but we need multiple trials for each i.&nbsp; Then we get an estimate of p = successes/trials.&nbsp; We plug p, a, and b into the logistic equation and solve for x.<br /><br />So that's not very interesting.&nbsp; It becomes more interesting when we have 2 players in each trial.<br /><br />t = a + bx(i) + bx(j)<br /><br />We want to solve for the x's.&nbsp; How to do that?&nbsp; First imagine that you have the data** on a bunch of trials.<br /><br />** Don't you always have to 'have the data'?&nbsp; No.&nbsp; Where this is going is an algorithm where no one has the data.&nbsp; Each player has an estimate of their x and their uncertainty about this estimate.&nbsp; Two players come together and have a trial.&nbsp; Each updates their estimate and uncertainty given their information about both players' x's.&nbsp; Then they go off and find another player to have a trial with.&nbsp; The 'data' is never kept; players only keep their current estimate of their x and their estimate of its uncertainty.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-34017723992235283292013-09-25T11:05:00.000-07:002013-10-08T12:17:50.300-07:00Strategy for asynchronous update algorithm for Dixon and Cole's model<div dir="ltr" style="text-align: left;" trbidi="on">Problem: Synchronous updating for Dixon and Cole's model, fit with speedglm(family=poisson(log)), is ultimately unscalable. 
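As a minimal sketch of why this cannot scale (the player and trial counts below are made up for illustration): the design matrix behind glm(y ~ factor(x)) gets one dummy column per factor level, so its size grows with the number of players regardless of the fitting algorithm.

```r
# Sketch: why glm(y ~ factor(x)) cannot scale in the number of players.
# model.matrix() expands the factor into one dummy column per level, so the
# design matrix is n_trials x n_players and RAM grows with the player count.
n_trials <- 500
for (n_players in c(10, 100, 1000)) {
  x <- factor(sample(1:n_players, n_trials, replace = TRUE),
              levels = 1:n_players)
  mm <- model.matrix(~ x)  # intercept + (n_players - 1) dummy columns
  cat(n_players, "players:", ncol(mm), "columns,",
      format(object.size(mm), units = "Kb"), "\n")
}
```

At, say, a million players the design matrix alone would need terabytes, which is the RAM wall described next.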
So glm(y ~ factor(x)) at some point reaches a limit as the levels in x go to infinity.<br /><br />It works until one maxes out the RAM.&nbsp; Second, it doesn't allow parallelization.&nbsp; But parallel is not quite the right idea.&nbsp; Parallel means each agent (an agent is an analogy for something that does a computation) works on an isolated part of the computation.&nbsp; I want something more like the exercise that Rachel led at the ISEES workshop.&nbsp; The post-it notes are all on the wall.&nbsp; Many agents come up and move the post-it notes at once, each messing up the others' work.&nbsp; There is no compartmentalization.&nbsp; But there is something like 'importance sampling': the contentious post-it notes move more, while the non-contentious ones are quickly settled.&nbsp; The idea is to 'set loose' many 'bugs' in the data and let these go to work on the data.<br /><br />Imagine rating an effectively infinite number of 'players'.&nbsp; I'm using sports terms, but this isn't about sports; it's about estimating a model from enormous 2-player datasets with an effectively infinite number of players.&nbsp; Players could be cell-phone numbers, the contest could be something about a call between 2 phones, and you are trying to rank some characteristic of the phone numbers.<br /><br />Relation to the EM algorithm.&nbsp; At each step the LL increases.&nbsp; Ultimately the max is reached.<br />* compute the expected value of the hidden state conditioned on all the data<br />&nbsp;&nbsp; - forward/backward smoother<br />* compute the ML of the parameters conditioned on the data and the expected value of the hidden state<br /><br />Relation to a Bayesian algorithm<br />* Start with a prior on the hidden state<br />* Get 1 data point, update to the posterior of the hidden state<br />* <span style="background-color: yellow;">Need a closed-form update equation</span><br /><br />Relation to MCMC<br />* An MCMC algorithm is getting the posterior surface<br />* Same idea, but I want the 'strengths surface'.&nbsp; The x-axis is 
'player'.&nbsp; It is a factor in glm lingo or a random effect in glmer lingo.&nbsp; It is discrete, but effectively infinite.&nbsp; The y-axis is strength. </div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-18535412.post-30038417195396158032007-09-28T12:54:00.000-07:002007-09-28T13:11:59.630-07:00Does biological complexity add realism?From a report that will remain unnamed:<br />"At the other end of the spectrum are formulations such as IBMs which require detailed knowledge of physiological and metabolic processes and how these influence the vital rates of fecundity and survivorship. The realism of these approaches is further enhanced through incorporation of density influences on or stochastic variation in these processes. Such data are difficult to obtain, yet their inclusion into appropriate models permits the most detailed assessments."<br /><br />As usual, the addition of biological complexity to a model is equated with adding realism. Realism is good and permits a better risk assessment. I completely disagree with this general statement -- BECAUSE the details are unknown. I would argue unknowable, but most would disagree.<br /><br />But back to the claim that a mechanistic, detailed model is better for risk assessment. Let's use an analogy. I am a witness to a crime. I get a brief glimpse of the perpetrator. I report the crime and am working with a police artist to create a composite of the criminal.<br /><br />Artist: Male or female?<br />Me: Female<br />Artist: Hair?<br />Me: Black<br />Artist: Eye color?<br />Me: I didn't see that.<br />Artist: Hmm, well we know that all humans have eye color, so to make this realistic we need to pick an eye color.<br />Me: I didn't see her eyes.<br />Artist: Ok, let's use the maximum likelihood estimate and make them brown.<br />Artist: Height?<br />Me: Average<br />Artist: Hmm, well to make this realistic let's use the average height of women, 5' 6".<br />Artist: Clothing?<br />Me: Dark pants and light t-shirt. 
I didn't see the shoes.<br />Artist: Ok, let's add some realism. Blue jeans, sound ok? Light t-shirt... hmm, a woman wouldn't wear a regular t-shirt; let's make it a v-neck. Shoes... crocs, everyone is wearing those nowadays.<br />Me: Well, I really don't recall the specifics; that could be what she was wearing.<br />Artist: Ok, I'm going to go off and make a detailed photorealistic picture of this woman.....<br /><br />The artist comes back with a photorealistic picture. It definitely looks like a real human woman, but it does not look like the criminal. In this case, more realism just hinders the investigation. It would be better to stick with "black-haired, average-height woman" even though that is vague. It might not end up being all that useful, but it rules out many suspects.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-69285122251832543622007-02-26T13:26:00.000-08:002007-03-05T17:51:38.309-08:00neutral models of metapopulation dynamicsDiscuss the population patterns that occur via neutral models of dispersal. Illustrate that these patterns occur in large collections of spatially-structured populations. Illustrate that complex patterns of population density can occur via patterns of dispersal. Analogous to Hubbell's work on neutral models of diversity.<br /><br />Neutral models of population distributions<br /><br />Colloquially, people think of different rates of population growth or decline as an indication of population robustness. But can we see what is going on?<br /><br />Is it possible to detect habitat heterogeneity? 
At low dispersal, we see the effect of heterogeneity but as dispersal increases, <br /><br /><br />Within a metapopulation, there is a canonical relationship between the year-to-year variability within the total population and the variability in growth rates between sub-populations.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-1160515341705994522006-10-10T14:06:00.000-07:002006-10-17T18:41:33.438-07:00RwebAn example of folks with an R server up and running<br /><br />http://www.stat.umn.edu/geyer/old03/5601/examp/parm.html<br /><br />http://www.math.montana.edu/Rweb/<br /><br />http://bayes.math.montana.edu/Rweb/Resources.htmlUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-1151007562389094952006-06-22T13:19:00.000-07:002006-10-17T18:41:32.937-07:00prediction errorEfron, B. 2004. The estimation of prediction error: covariance penalties and cross-validation. <br />Journal of the American Statistical Association 99: 619-632.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-1149369863619448552006-06-03T14:22:00.000-07:002006-10-17T18:41:32.851-07:00predictive vs interpolative accuracyPredictive Accuracy as an<br />Achievable Goal of Science<br />Malcolm R. Forster†‡<br />University of Wisconsin-Madison<br /><br />What has science actually achieved? A theory of achievement should (1) define what has been achieved, (2) describe the means or methods used in science, and (3) explain how such methods lead to such achievements. Predictive accuracy is one truth-related achievement of science, and there is an explanation of why common scientific practices (of trading off simplicity and fit) tend to increase predictive accuracy. Akaike’s explanation for the success of AIC is limited to interpolative predictive accuracy. 
But therein lies the strength of the general framework, for it also provides a clear formulation of many open problems of research.<br /><br />http://philosophy.wisc.edu/forster/papers/PSA2000.pdfUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-1139941741290448402006-02-14T10:28:00.000-08:002006-10-17T18:41:32.544-07:00ok so LR depends on nested-nessSince the models are not nested, the usual LR test statistic will not have an asymptotic Chi-square distribution, and hence the statistic you compute will not have a meaningful interpretation.<br /><br />http://www.biostat.wustl.edu/archives/html/s-news/2004-03/msg00200.html<br /><br />However, Burnham and Anderson argue that the ranking of models with AICc is not limited by this (pg. 88).<br /><br />page 61 in PRNN<br /><br />Ripley says that the NIC criterion is based on the penalty<br /><br />2p* = trace[KJ^-1]<br /><br />If the model is adequate (or true), J=K, p* is the number of parameters, and NIC becomes AIC. These results are based on asymptotic normality of the parameter estimates.<br /><br />Moody 1991, 1992 (uses effective number of parameters)<br />Murata et al 1991 (on the effective number of parameters)<br />cf. 
maybe first Draper 1995 JRSS<br /><br />Fisher information<br />http://en.wikipedia.org/wiki/Fisher_information_matrixUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-18535412.post-1139941510297467832006-02-14T10:17:00.000-08:002006-10-17T18:41:32.477-07:00Determining correct model complexityx &isin; X (X is the set of possible data)<br /><br />Let's specify some statistic t(x)<br /><br />From x, estimate the deviance [t(X)-t(x)]^2 = s^hat<br /><br />On average, how big is this deviance?<br /><br />Akaike -> 2p<br /><br />http://www.stat.columbia.edu/~cook/movabletype/archives/2004/12/against_parsimo.html<br /><a href="http://www.stat.columbia.edu/~gelman/stuff_for_blog/hirschman.pdf">Against parsimony</a><br /><a href="http://philosophy.wisc.edu/forster/">Occam’s Razor and the Relational Nature of Evidence</a><br />Tutorial<br />ftp://ftp.cs.utoronto.ca/pub/radford/bayes-tut.psUnknownnoreply@blogger.com0
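Akaike's 2p penalty above can be checked numerically in R. This is a minimal sketch (the polynomial-regression setup and simulated data are made up for illustration): AIC trades training fit against 2 x (number of estimated parameters), and for lm() that parameter count includes the error variance.

```r
# Sketch of Akaike's 2p penalty: AIC = -2*logLik + 2p, where p counts all
# estimated parameters (regression coefficients plus the error variance).
set.seed(1)
x <- seq(-1, 1, length.out = 50)
y <- 1 + 2 * x + rnorm(50, sd = 0.5)  # true model is linear (degree 1)
fits <- lapply(1:5, function(d) lm(y ~ poly(x, d)))
# hand-rolled AIC; length(coef(m)) + 1 adds the error-variance parameter
aic <- sapply(fits, function(m)
  -2 * as.numeric(logLik(m)) + 2 * (length(coef(m)) + 1))
aic - sapply(fits, AIC)  # matches R's built-in AIC()
which.min(aic)           # the 2p penalty guards against overfitting the degree
```

Higher-degree fits always have a smaller deviance on the training data; the 2p term is the expected size of that optimism.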