tag:blogger.com,1999:blog-73791109607960141702018-04-21T05:44:29.066-04:00Clueless FundatmaA random walk through a subset of things I care about. Science, math, computing, higher education, open source software, economics, food etc.Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.comBlogger735125tag:blogger.com,1999:blog-7379110960796014170.post-32524171806262884912018-04-15T21:43:00.000-04:002018-04-15T21:43:02.562-04:00Diffusion in Higher Dimensions<div dir="ltr" style="text-align: left;" trbidi="on">In the previous posts (<a href="http://sachinashanbhag.blogspot.com/2018/03/a-primer-on-diffusion-random-walks-in-1d.html">1</a> and <a href="http://sachinashanbhag.blogspot.com/2018/04/diffusion-and-random-walks.html">2</a>), we wrote down the probability or concentration distribution of a bunch of Brownian diffusors initially at \(x = 0\) (delta function), \[p_{1D}(x, t) = \dfrac{1}{\sqrt{4 \pi Dt}} \exp\left(-\dfrac{x^2}{4Dt}\right)\]<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-t6U7xEzGgEc/WqK3AauNtMI/AAAAAAAAD0c/2jz6_6LZM4EH-jv7j9LywdM4zzbsnjUJwCLcBGAs/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="588" data-original-width="957" height="122" src="https://2.bp.blogspot.com/-t6U7xEzGgEc/WqK3AauNtMI/AAAAAAAAD0c/2jz6_6LZM4EH-jv7j9LywdM4zzbsnjUJwCLcBGAs/s200/1.png" width="200" /></a></div>The PDF is normalized on the domain \(x \in [-\infty, \infty]\) so that, \[\int_{-\infty}^{\infty} p_{1D}(x,t)\, dx = 1.\] In 2D, \(\langle r^2(t) \rangle = \langle x^2(t) \rangle + \langle y^2(t) \rangle\). If diffusion is isotropic, then \(\langle r^2(t) \rangle = 2Dt + 2Dt = 4Dt\). 
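As a quick numerical sanity check of the 1D result above (my sketch, not part of the original derivation; it assumes numpy is available), we can verify that \(p_{1D}\) integrates to one and that its second moment equals \(2Dt\):

```python
import numpy as np

# Check that p_1D(x,t) integrates to 1 and that <x^2> = 2*D*t.
# D and t are arbitrary illustrative values.
D, t = 0.7, 3.0
x = np.linspace(-50.0, 50.0, 200001)   # wide grid; the Gaussian tails are negligible here
dx = x[1] - x[0]
p = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

norm = np.sum(p) * dx          # should be very close to 1
msd = np.sum(x**2 * p) * dx    # should be very close to 2*D*t = 4.2
```

The same kind of check works in higher dimensions, with the appropriate area factor in the radial integral.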
In this case,<br />\begin{align}<br />p_{2D}(r, t) & = p_{1D}(x, t) \, p_{1D}(y, t)\\<br />& = \dfrac{1}{\sqrt{4 \pi Dt}} \dfrac{1}{\sqrt{4 \pi Dt}} \exp\left(-\dfrac{x^2+y^2}{4Dt}\right)\\<br />& =\dfrac{1}{4 \pi Dt} \exp\left(-\dfrac{r^2}{4Dt}\right)<br />\end{align}<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-70h0m5FwLiI/WqK3DQWQjAI/AAAAAAAAD0g/YNwaetjTjXAj3cTu1F_owitOAp4GDGKWwCLcBGAs/s1600/2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="893" data-original-width="871" height="200" src="https://4.bp.blogspot.com/-70h0m5FwLiI/WqK3DQWQjAI/AAAAAAAAD0g/YNwaetjTjXAj3cTu1F_owitOAp4GDGKWwCLcBGAs/s200/2.png" width="195" /></a></div><br />The PDF is normalized such that, \[\int_{0}^{\infty} (2\pi r) \, p_{2D}(r,t)\, dr = 1.\]<br />Finally, for isotropic 3D diffusion, \[p_{3D}(r, t) = \left(\dfrac{1}{4 \pi Dt}\right)^{3/2} \exp\left(-\dfrac{r^2}{4Dt}\right).\] The PDF is normalized such that, \[\int_{0}^{\infty} (4\pi r^2) \, p_{3D}(r,t)\, dr = 1.\] In summary, for \(d\) = 1, 2, or 3 dimensions,<br />\[p_{dD}(r, t) = \left(\dfrac{1}{4 \pi Dt}\right)^{d/2} \exp\left(-\dfrac{r^2}{4Dt}\right).\]</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/I3XtiJNhGqA" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/04/diffusion-in-higher-dimensions.htmltag:blogger.com,1999:blog-7379110960796014170.post-37641877555138566992018-04-07T09:32:00.001-04:002018-04-07T09:32:33.539-04:00Notebooks and Exploration<div dir="ltr" style="text-align: left;" trbidi="on">The Atlantic has a nice article on the genesis and evolution of Mathematica and Jupyter notebooks, and how the latter was inspired by the former.
It is provocatively (unfortunately) titled, "<a href="https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/">The Scientific Paper is Obsolete</a>".<div><br /></div><div>The article itself is more thoughtful and nuanced.<br /><br />It is a reflection on the use of notebooks as exploratory vehicles, and as computational essays. This is indeed how I use Jupyter notebooks these days. I use them as a pre-processing tool (exploratory mode) when I have to design a new lecture or lab, or plan a set of new calculations. I also use them as a post-processing tool, especially in my research. Once all the raw computation is done, I can play with the results interactively, and eventually interleave a narrative and charts. This notebook often becomes the starting point of the "Results and Discussion" section of any resulting paper.<br /><br />Here are some passages from the article that I found interesting or appealing:<br /><blockquote class="tr_bq">The notebook interface was the brainchild of Theodore Gray, who was inspired while working with an old Apple code editor. Where most programming environments either had you run code one line at a time, or all at once as a big blob, the Apple editor let you highlight any part of your code and run just that part. Gray brought the same basic concept to Mathematica, with help refining the design from none other than Steve Jobs.<br /> <br />“I’ve noticed an interesting trend,” Wolfram wrote in a blog <a href="http://blog.stephenwolfram.com/2016/09/how-to-teach-computational-thinking/">post</a>. “Pick any field X, from archeology to zoology. There either is now a ‘computational X’ or there soon will be. And it’s widely viewed as the future of the field.” </blockquote><blockquote class="tr_bq">A 1997 <a href="http://www.catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/index.html#catbmain">essay</a> by Eric S. 
Raymond titled “The Cathedral and the Bazaar,” in some sense the founding document of the modern open-source movement, challenged the notion that complex software had to be built like a cathedral, “carefully crafted by individual wizards or small bands of mages working in splendid isolation.” Raymond’s experience as one of the stewards of the Linux kernel (a piece of open-source software that powers all of the world’s 500 most powerful supercomputers, and the vast majority of mobile devices) taught him that the “great babbling bazaar of differing agendas and approaches” that defined open-source projects was actually a strength. “The fact that this bazaar style seemed to work, and work well, came as a distinct shock,” he wrote.<br /> <br />The Mathematica notebook is the more coherently designed, more polished product—in large part because every decision that went into building it emanated from the mind of a single, opinionated genius. “I see these Jupyter guys,” Wolfram said to me, “they are about on a par with what we had in the early 1990s.” They’ve taken shortcuts, he said. 
“We actually want to try and do it right.”</blockquote></div></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/VjED_L1tZEI" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/04/notebooks-and-exploration.htmltag:blogger.com,1999:blog-7379110960796014170.post-18445345480159698202018-04-04T16:47:00.001-04:002018-04-04T16:47:49.303-04:00Writing Technical Papers<div dir="ltr" style="text-align: left;" trbidi="on"><div>Here is some decent advice on how to improve the quality of technical writing:</div><div><br /></div><a href="http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005619">Ten simple rules for structuring papers</a><br />PLOS Computational Biology, 2017.<br /><div><br /><a href="http://onlinelibrary.wiley.com/doi/10.1002/adma.200400767/abstract">Whitesides’ Group: Writing a Paper</a><br />Advanced Materials, 2004.<br /><br /></div><div><a href="https://ctl.yale.edu/sites/default/files/files/Schultz_ResearchPaper_NaturalSciences_formatted.pdf">Writing a Research Paper in the Natural Sciences</a><br />Graduate Writing Lab, Yale University, 2015.</div><div><br /><a href="https://www.chronicle.com/article/10-Tips-on-How-to-Write-Less/124268">10 Tips on How to Write Less Badly</a><br />Michael Munger, CHE, 2010.</div></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/oi7pyn-nfgI" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/04/writing-technical-papers.htmltag:blogger.com,1999:blog-7379110960796014170.post-18445345480159698202018-04-03T11:20:00.000-04:002018-04-03T11:20:32.344-04:00Diffusion and Random Walks<div dir="ltr" style="text-align: left;" trbidi="on">In the <a href="https://sachinashanbhag.blogspot.com/2018/03/a-primer-on-diffusion-random-walks-in-1d.html">previous post</a>, we saw how the probability
distribution \(p(x,N)\) after \(N\) random steps on a unit lattice is given by, \[p(x, N) = \dfrac{1}{\sqrt{2 \pi N}} \exp\left(-\dfrac{x^2}{2N}\right)\] If the average step size is \(b\) instead of \(b=1\), then we can generalize and write the formula as:<br />\[p(x, N) = \dfrac{1}{\sqrt{2 \pi Nb^2}} \exp\left(-\dfrac{x^2}{2Nb^2}\right)\] Now consider a Gaussian random walk in 1D. Suppose the stepsize at each step is drawn from a normal distribution \(\mathcal{N}(0, 1)\). While it has the same average stepsize as a walk on the lattice, an individual step may be shorter or longer than \(b=1\).<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-8EZfQdULWp4/WqKzMvErZ0I/AAAAAAAAD0Q/tUzGJRZGK1csYq49GuUBbYk0FW2f6765wCLcBGAs/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://3.bp.blogspot.com/-8EZfQdULWp4/WqKzMvErZ0I/AAAAAAAAD0Q/tUzGJRZGK1csYq49GuUBbYk0FW2f6765wCLcBGAs/s400/1.png" width="400" /></a></div><br />In polymer physics, where a Gaussian coil is often used as a model for polymer conformations, \(b\) is called the Kuhn length, and \(N\) is proportional to the molecular weight.<br /><br />Due to the connection between Brownian motion and random walks, the mean squared distance travelled by a particle in 1D with self-diffusivity \(D\) is \(\langle x^2(t) \rangle = 2Dt\). Similarly, the mean squared end-to-end distance of a Gaussian random walk is given by, \[\langle x^2(N) \rangle = \int_{-\infty}^{\infty} x^2 p(x, N) dx = Nb^2 \equiv 2Dt = \langle x^2(t) \rangle.\] This allows us to re-parameterize the equation for the position of a Brownian diffusor.
\[p(x, t) = \dfrac{1}{\sqrt{4 \pi Dt}} \exp\left(-\dfrac{x^2}{4Dt}\right)\] Note the correspondence between \(t\) and \(N\), and \(b\) and \(\sqrt{2D}\).</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/UL1uAE51n04" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/04/diffusion-and-random-walks.htmltag:blogger.com,1999:blog-7379110960796014170.post-70602880383001157942018-03-27T16:39:00.001-04:002018-03-27T16:39:19.658-04:00A Primer on Diffusion: Random Walks in 1D<div dir="ltr" style="text-align: left;" trbidi="on">Consider a particle, initially at the origin, jumping around randomly on a 1D lattice. The particle tosses a fair coin and decides to jump left or right.<br /><br />A particular trajectory of the particle may look like the following:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-fEF5qi5upaA/WqGrFJ4-mNI/AAAAAAAADzc/LnRyIt0zf-YUeBUiky7dYjTwOHzBaCplACLcBGAs/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://3.bp.blogspot.com/-fEF5qi5upaA/WqGrFJ4-mNI/AAAAAAAADzc/LnRyIt0zf-YUeBUiky7dYjTwOHzBaCplACLcBGAs/s400/1.png" width="400" /></a></div><br />Suppose the particle makes \(n_{+}\) hops to the right, and \(n_{-}\) hops to the left. Then, the total number of steps \(N = n_{+} + n_{-}\), and the position at the end is \(x = n_{+} - n_{-}\).<br /><br />The process is probabilistic, and the outcome of any single trajectory is impossible to predict. However, let us enumerate the number of ways in which a random walk of \(N\) steps results in \(n_{+}\) hops to the right.
This is given by, \begin{align*}<br />W(x, N) & = {}^N C_{n_{+}}\\<br />& = \dfrac{N!}{(N-n_{+})! \, n_{+}!}\\<br />& = \dfrac{N!}{n_{-}!n_{+}!}<br />\end{align*} The probability \(p(x, N)\) of ending up at \(x\) after \(N\) steps can be obtained by dividing \(W(x, N)\) by the total number of paths. Since we can make two potential choices at each step, the total number of paths is \(2^N\).<br />\[p(x, N) = \dfrac{W(x,N)}{2^N}.\]<br />For large \(N\), Stirling's approximation is \(N! \approx \sqrt{2 \pi N} (N/e)^N\). For \(x \ll N\), this implies, \[p(x, N) = \dfrac{1}{\sqrt{2 \pi N}} \exp\left(-\dfrac{x^2}{2N}\right)\]<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-ZeEna10qiSI/WqGwBQXKrcI/AAAAAAAADz0/HrMC_vB5wWEmC9tTQ4X0flQO_TA-X3LqACLcBGAs/s1600/2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://1.bp.blogspot.com/-ZeEna10qiSI/WqGwBQXKrcI/AAAAAAAADz0/HrMC_vB5wWEmC9tTQ4X0flQO_TA-X3LqACLcBGAs/s400/2.png" width="400" /></a></div>Both distributions have the same shape. However, because one is a discrete distribution, while the other is continuous, they have different normalizations, and hence different actual values of \(p(x,N)\).</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/DOcrKvnGKrs" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/03/a-primer-on-diffusion-random-walks-in-1d.htmltag:blogger.com,1999:blog-7379110960796014170.post-81007914646239147662018-03-25T08:38:00.000-04:002018-03-25T08:38:14.988-04:00Links: Probability, Statistics, and Monte Carlo<div dir="ltr" style="text-align: left;" trbidi="on">1.
A beautiful visual introduction to some concepts in probability and statistics (<a href="http://students.brown.edu/seeing-theory/">link</a>)<br /><blockquote class="tr_bq">Seeing Theory was created by Daniel Kunin while an undergraduate at Brown University. The goal of this website is to make statistics more accessible through interactive visualizations (designed using Mike Bostock’s JavaScript library D3.js).</blockquote>It starts from relatively basic concepts, and touches on some intermediate-level topics (Basic Probability, Compound Probability, Probability Distributions, Frequentist Inference, Bayesian Inference, Regression Analysis).<br /><br />2. You are not a Monte Carlo Simulation (<a href="https://blog.thinknewfound.com/2018/03/you-are-not-a-monte-carlo-simulation/">link</a>)<br /><br />It is now <a href="https://en.wikipedia.org/wiki/Loss_aversion">well-established</a> that humans feel the pain of loss more strongly than the pleasure of an equivalent amount of gain. This interesting quirk may be shelved as an unfortunate cognitive bias (like confirmation bias): something for our rational mind to overcome.<br /><br />However, one can ask the follow-up question: why? A first-level explanation is as follows: Suppose you invest $100. You lose 50% one day, and gain 100% the next day. You are now back to square one ($100 * 0.50 * 2.0 = $100). A gain twice the size of the loss was necessary to stay neutral.<br /><br />Corey Hoffstein argues remarkably well that, for individuals, average outcomes are less meaningful than median outcomes. That the logarithmic scale for utility is more appropriate than a linear scale. And that loss aversion - that silly behavioral quirk - might be a powerful survival technique that helps us live to fight another day.<br /><br />I loved this insightful post.
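Hoffstein's point about medians versus averages is easy to reproduce numerically. Here is a minimal sketch (my illustration, not from his post; it assumes numpy is available), in which each simulated investor repeatedly plays the fair coin-flip game above, losing 50% or gaining 100% of their wealth per round:

```python
import numpy as np

rng = np.random.default_rng(42)

# 100,000 investors; each plays 20 rounds of a fair coin flip that
# either halves (-50%) or doubles (+100%) their wealth.
# The per-round arithmetic-mean return is +25%, so the *average*
# final wealth is large -- but the *median* investor just breaks even.
factors = rng.choice([0.5, 2.0], size=(100000, 20))
wealth = factors.prod(axis=1)      # final wealth, starting from 1

mean_wealth = wealth.mean()        # dominated by a few lucky streaks
median_wealth = np.median(wealth)  # typical outcome: break even
```

Plotted on a logarithmic wealth axis, the distribution of outcomes is symmetric, which is exactly the argument for logarithmic rather than linear utility.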
If nothing else, do yourself a favor and read the summary.<br /><br /><br /></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/5hxkeNODQU0" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/03/links-probability-statistics-and-monte.htmltag:blogger.com,1999:blog-7379110960796014170.post-39515772690998503412018-03-22T08:13:00.001-04:002018-03-22T08:13:36.430-04:00Links to matplotlib Resources<div dir="ltr" style="text-align: left;" trbidi="on">I wanted to pull together a list of matplotlib resources that I need to consult frequently.<br /><br />1. SciPy Lectures: The entire series is great, including the introduction to <a href="http://www.scipy-lectures.org/intro/matplotlib/">matplotlib</a>.<br /><br />2. Tutorials from <a href="https://github.com/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb">J.R. Johansson</a> and <a href="http://www.labri.fr/perso/nrougier/teaching/matplotlib/">Nicholas P. Rougier</a><br /><br />3. 
A couple of my own Jupyter notebooks on <a href="https://sachinashanbhag.blogspot.com/2017/06/matplotlib-styles.html">customizing styles</a>, and <a href="https://sachinashanbhag.blogspot.com/2017/07/matplotlib-subplots-inset-plots-and.html">multiplots</a>.</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/t8TNvdRLpFE" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/03/links-to-matplotlib-resources.htmltag:blogger.com,1999:blog-7379110960796014170.post-66157383490373858702018-02-23T07:32:00.004-05:002018-02-23T07:32:54.778-05:00Google Colaboratory<div dir="ltr" style="text-align: left;" trbidi="on">If you need to use and interact with a Jupyter notebook on a computer that does not have it installed, <a href="https://research.google.com/colaboratory/faq.html">Google Colaboratory</a> seems like a great in-browser solution. I learned about it from a student earlier this semester.<br /><br />The best part is that you don't need to install any software locally on the machine.
The standard scientific/data science Python stack (numpy, scipy, sympy, pandas) is available, and you can even "install" some additional packages on the fly using pip install.<br /><br />It works more or less like Google Docs, in that your documents are saved on Google Drive, and you can collaborate with others in much the same way.<br /><br />Check it out!</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/3oy8g3Fsmak" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/02/google-colaboratory.htmltag:blogger.com,1999:blog-7379110960796014170.post-62386370653187923142018-02-20T10:12:00.000-05:002018-02-20T10:12:14.209-05:00One Year Later<div dir="ltr" style="text-align: left;" trbidi="on">One year ago, I decided to get off of Facebook.<br /><div><br /></div><div>It wasn't a carefully thought out decision. I did not weigh the positives against the negatives. I just stopped.</div><div><div><br /></div><div>There were some signs of this for a few years. In mid-2016, I <a href="http://sachinashanbhag.blogspot.com/2016/04/the-problem-with-facebook-and-global.html">wrote</a>:</div></div><blockquote>A few years ago, Facebook was a source of joy in my life. I was actively rediscovering friends who had slipped away over time. Reconnecting, discovering what they were up to, and filling the gap between where we had left and found each other again, ushered in a sense of everyday freshness. </blockquote><blockquote>Over time, as a billion people got onboard, the rate of rediscovery diminished, and so did the excitement of eagerly checking new notifications.
These days, most of my Newsfeed is cluttered with click-bait, unscientific bullshit, flashy headlines, and "Hallmark" greetings.</blockquote>Here is a recent Vanity Fair <a href="https://www.vanityfair.com/news/2018/01/mark-zuckerberg-facebook-downward-spiral">article</a> that touches on similar issues.<br /><blockquote class="tr_bq">During the past six months alone, <a href="https://www.vanityfair.com/news/2017/10/early-facebook-employees-regret-the-monster-they-created">countless executives who once worked</a> for the company are publicly articulating the perils of social media on both their families and democracy. Chamath Palihapitiya, an early executive, said social networks “are <a href="https://www.nytimes.com/2017/12/15/technology/facebook-blog-feel-bad.html">destroying</a> how society works”; <a href="https://www.vanityfair.com/people/sean-parker#intcid=dt-hot-link">Sean Parker</a>, its founding president, said “God only knows what it’s doing to our children’s brains.” (Just this weekend, Tim Cook, the C.E.O. of Apple, said <a href="https://www.marketwatch.com/story/why-apples-tim-cook-doesnt-want-his-nephew-to-use-social-networks-2018-01-21">he won’t let his nephew</a> on social media.) Over the past year, people I have spoken to internally at the company have voiced concerns for what Facebook is doing (or most recently, has done) to society. Many begin the conversation by rattling off a long list of great things that Facebook inarguably does for the world—bring people and communities together, help people organize around like-minded positive events—but, as if in slow motion, those same people recount the negatives. 
</blockquote></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/5_HrJTfV0WA" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/02/one-year-later.htmltag:blogger.com,1999:blog-7379110960796014170.post-253092667576851852018-02-12T14:13:00.000-05:002018-02-12T14:13:02.898-05:00Links<div dir="ltr" style="text-align: left;" trbidi="on">1. "Ten Lessons I Wish I Had Learned Before I Started Teaching Differential Equations" (<a href="http://www.ega-math.narod.ru/Tasks/GCRota.htm">Gian-Carlo Rota</a>)<br /><blockquote class="tr_bq">What can we expect students to get out of an elementary course in differential equations? I reject the "bag of tricks" answer to this question. A course taught as a bag of tricks is devoid of educational value. One year later, the students will forget the tricks, most of which are useless anyway. The bag of tricks mentality is, in my opinion, a defeatist mentality, and the justifications I have heard of it, citing poor preparation of the students, their unwillingness to learn, and the possibility of assigning clever problem sets, are lazy ways out.</blockquote>2. A web clone of MS Paint (<a href="http://jspaint.ml/">jspaint.ml</a>)<br /><br />3.
Strogatz Lectures on Nonlinear Dynamics and Chaos (<a href="https://www.youtube.com/playlist?list=PLbN57C5Zdl6j_qJA-pARJnKsmROzPnO9V">youtube</a>)</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/jw8BauVyhK8" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/02/links.htmltag:blogger.com,1999:blog-7379110960796014170.post-622708729714797902018-02-04T09:43:00.000-05:002018-02-04T09:43:03.577-05:00Frequentist versus Bayesian Statistics<div dir="ltr" style="text-align: left;" trbidi="on">Jake VanderPlas has a bunch of interesting resources on this fascinating topic.<br /><br />For example, this <a href="https://www.youtube.com/watch?v=KhAUfqhLakw">video</a> from SciPy 2014 and the associated conference <a href="https://arxiv.org/pdf/1411.5018.pdf">proceeding</a>.<br /><br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/KhAUfqhLakw/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/KhAUfqhLakw?feature=player_embedded" width="320"></iframe></div><br />He also has a nice Python-based <a href="http://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/">5-part</a> series on the same topic.<br /><br /></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/6omoL38EURc" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/02/frequentist-verus-bayesian-statistics.htmltag:blogger.com,1999:blog-7379110960796014170.post-69615181488313896182018-01-16T17:11:00.000-05:002018-01-16T17:11:02.614-05:00Unsolvability of Quintic Equations<div dir="ltr" style="text-align: left;" trbidi="on">General formulas for roots of <a href="https://en.wikipedia.org/wiki/Quadratic_formula">quadratic</a>, <a
href="https://en.wikipedia.org/wiki/Cubic_function#General_solution_to_the_cubic_equation_with_real_coefficients">cubic</a>, and <a href="https://en.wikipedia.org/wiki/Quartic_function">quartic</a> equations can be written in closed form using the following algebraic operations: addition, subtraction, multiplication, division, raising to an integer power, and taking an integer root.<br /><br />However, roots of quintic equations, \[ax^5 + bx^4 + cx^3 + dx^2 + e x + f = 0,\] cannot be written in closed form using these operations.<br /><br />My PhD advisor, Ron Larson, had told me that this was one of the questions he was asked on his oral PhD qualifying exam. I knew the fact, but never understood the proof, since it involved math that I was not familiar with.<br /><br />Fred Akalin <a href="https://www.akalin.com/quintic-unsolvability">presents a nice proof</a> using plenty of interactive demos, visualizations, and not much advanced math.</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/9r0bQaW-OMU" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/01/unsolvability-of-quintic-equations.htmltag:blogger.com,1999:blog-7379110960796014170.post-81753074997793512602018-01-12T16:25:00.000-05:002018-01-12T16:25:37.964-05:00David Brooks: Resume Virtues versus Eulogy Virtues<div dir="ltr" style="text-align: left;" trbidi="on">Last week, I heard an interview with David Brooks on Intelligence Squared. 
Even though I was a few years late to the party (the show was from 2015), I found the content riveting.<br /><div><br /></div><div>Here is a video of that <a href="https://www.youtube.com/watch?v=_iGewxH3dgY">interview</a>:<br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/_iGewxH3dgY/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/_iGewxH3dgY?feature=player_embedded" width="320"></iframe></div><div><br /><div>I found his distinction of "resume virtues" and "eulogy virtues" helpful as a compass on how to lead the good life. Here is a relevant excerpt from an <a href="https://www.nytimes.com/2015/04/12/opinion/sunday/david-brooks-the-moral-bucket-list.html">NYT article</a>:</div></div></div><div><br /></div><blockquote>The résumé virtues are the skills you bring to the marketplace. The eulogy virtues are the ones that are talked about at your funeral — whether you were kind, brave, honest or faithful. Were you capable of deep love? </blockquote><blockquote>We all know that the eulogy virtues are more important than the résumé ones. But our culture and our educational systems spend more time teaching the skills and strategies you need for career success than the qualities you need to radiate that sort of inner light. Many of us are clearer on how to build an external career than on how to build inner character. </blockquote><blockquote>But if you live for external achievement, years pass and the deepest parts of you go unexplored and unstructured. You lack a moral vocabulary. It is easy to slip into a self-satisfied moral mediocrity. You grade yourself on a forgiving curve. You figure as long as you are not obviously hurting anybody and people seem to like you, you must be O.K. But you live with an unconscious boredom, separated from the deepest meaning of life and the highest moral joys. 
Gradually, a humiliating gap opens between your actual self and your desired self, between you and those incandescent souls you sometimes meet.</blockquote></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/ljiYP7Ps_sk" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/01/david-brooks-resume-virtues-versus.htmltag:blogger.com,1999:blog-7379110960796014170.post-67640226413760423062018-01-08T13:19:00.000-05:002018-01-12T08:49:09.188-05:00Multitasking Doesn't Work<div dir="ltr" style="text-align: left;" trbidi="on"><div class="tr_bq">Yesterday, I saw a YouTube <a href="https://www.youtube.com/watch?v=BCeGKxz3Q8Q">video</a> in which we are asked to complete two tasks in serial, and in parallel (multitasking). While I am not sure if the test is representative of multitasking in everyday life, it is obvious even from this simple exercise that multitasking is counterproductive.</div><br /><a href="http://www.apa.org/research/action/multitask.aspx">Switching costs</a> decrease efficiency, quality of experience, and accuracy, while <a href="http://www.health.com/health/gallery/0,,20707868,00.html#the-multitasking-myth-2">raising stress</a> levels. Multitasking while interacting with people degrades relationships.<br /><blockquote class="tr_bq">[...] evidence suggests that the human "executive control" processes have two distinct, complementary stages. They call one stage "goal shifting" ("I want to do this now instead of that") and the other stage "rule activation" ("I'm turning off the rules for that and turning on the rules for this"). Both of these stages help people to, without awareness, switch between tasks. </blockquote><blockquote class="tr_bq">Although switch costs may be relatively small, sometimes just a few tenths of a second per switch, they can add up to large amounts when people switch repeatedly back and forth between tasks.
Thus, multitasking may seem efficient on the surface but may actually take more time in the end and involve more error. Meyer has said that even brief mental blocks created by shifting between tasks can cost as much as 40 percent of someone's productive time.</blockquote><div>It causes collateral damage beyond that inflicted on the multitasker. Maria Konnikova <a href="https://www.newyorker.com/science/maria-konnikova/multitask-masters">writes</a> in the New Yorker,</div><blockquote class="tr_bq"><a href="http://www.nature.com/scientificamericanmind/journal/v23/n1/full/scientificamericanmind0312-22.html">When</a> Strayer and his colleagues observed fifty-six thousand drivers approaching an intersection, they found that those on their cell phones were more than twice as likely to fail to heed the stop signs. In 2010, the National Safety Council estimated that twenty-eight per cent of all deaths and accidents on highways were the result of drivers on their phones.</blockquote>The vast majority (~98%) of us cannot multitask well, and shouldn't delude ourselves.<br /><br />I like the quote at the opening of Christine Rosen's <a href="https://www.thenewatlantis.com/publications/the-myth-of-multitasking">essay</a>,<br /><blockquote>In one of the many letters he wrote to his son in the 1740s, Lord Chesterfield offered the following advice: “<b>There is time enough for everything in the course of the day, if you do but one thing at once, but there is not time enough in the year, if you will do two things at a time.</b>” To Chesterfield, singular focus was not merely a practical way to structure one’s time; it was a mark of intelligence. 
“<b>This steady and undissipated attention to one object, is a sure mark of a superior genius; as hurry, bustle, and agitation, are the never-failing symptoms of a weak and frivolous mind.</b>” </blockquote></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/zo-jJ5V3rxs" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/01/multitasking-doesnt-work.htmltag:blogger.com,1999:blog-7379110960796014170.post-13036219960177120532018-01-04T21:16:00.000-05:002018-01-04T21:16:01.718-05:00The Ritual<div dir="ltr" style="text-align: left;" trbidi="on">The place was bustling with activity. The annual ritual had begun.<br /><div><br /></div><div>Every January, lots of people don their workout gear, and hit the gym. If the past is anything to go by, the crowds will thin out in a month or so. A handful of regulars will persist.</div><div><br /></div><div>All points on a circle are equally important. Yet, some points like January 1st are more important than others!</div><div><br /></div><div>I observe all this, not with judgment or condescension. The optimism of a new year is infectious. </div><div><br /></div><div>People around, through their actions, seem to say, "Forget and forgive the past, for this year, I resolve to get into shape." 
It is hard not to be inspired by that.<br /><br />Even if most of them will fail.</div><div><br /></div><div>It is a recognition that though we are flawed, we will strive!</div><div><br /></div><div>Happy New Year.</div></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/z1kMa5eKCXM" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/01/the-ritual.htmltag:blogger.com,1999:blog-7379110960796014170.post-39424292517029377252018-01-03T12:48:00.001-05:002018-01-03T12:48:30.016-05:00Some Links<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: left;">1. A nice portrait of Maryam Mirzakhani (<a href="https://www.nytimes.com/interactive/2017/12/28/magazine/the-lives-they-lived-maryam-mirzakhani.html?smid=tw-share&_r=0">NYT</a>)</div><blockquote class="tr_bq">Three years ago, Mirzakhani, 37, became the first woman to win the Fields Medal, the Nobel Prize of mathematics. News of the award, and the obvious symbolism (first woman, first Iranian, an immigrant from a Muslim country) sat uneasily with her. She was puzzled when she discovered that some people thought mathematics was not for women — it was not an idea that she or her friends encountered growing up in Iran — but she was not inclined, by personality, to tell others what to think.</blockquote>2. <a href="http://www.pement.org/awk/awk_sed.txt">Similar operations</a> using sed and awk<br /><br />3. 
Wikipedia and Fake Claims (<a href="https://theness.com/neurologicablog/index.php/how-wikipedia-tackles-fringe-nonsense/">neurologica</a>)</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/7aQKJm32sjI" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2018/01/some-links.htmltag:blogger.com,1999:blog-7379110960796014170.post-10754927226414543542017-12-29T17:02:00.001-05:002017-12-29T17:02:46.816-05:00Teaching Kids to Code<div dir="ltr" style="text-align: left;" trbidi="on">I've been on the lookout for video tutorials that teach kids in upper elementary school the basics of coding.<br /><br />Some of my requirements/biases are:<br /><ul style="text-align: left;"><li>a <b>general-purpose fully-featured language</b> that one can grow with. Presumably, this is the start of a multi-year commitment. This eliminates awesome, but specialized, tools like Scratch.</li><li>a language with <b>rich library support</b>, so that one can get started quickly and <b>start prototyping</b>. This probably eliminates most fully featured compiled languages like C++ etc. </li><li>A language that is <b>cross-platform</b>, and can do <b>graphics</b> well. Use art (mathematical perhaps) as the window.</li></ul><div>Python seems like a potential choice.</div><div><br /></div><div>I found a superb series of YouTube lectures, which caters directly to my requirements. 
Here is a link to the playlist from <a href="https://www.youtube.com/playlist?list=PLsk-HSGFjnaGe7sS_4VpZoEtZF2VoWtoR">KidsCanCode</a>.</div></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/xIEF2tWtq3c" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/12/teaching-kids-to-code.htmltag:blogger.com,1999:blog-7379110960796014170.post-87197334036148393782017-12-16T10:15:00.001-05:002017-12-16T10:15:14.568-05:00Taubes, Sugar, and Fat<div dir="ltr" style="text-align: left;" trbidi="on">Last week, I listened to Shane Parrish's interview with Gary Taubes on <a href="https://www.farnamstreetblog.com/2017/11/gary-taubes-sugar/">the Knowledge Project</a> podcast. Taubes provides an informative historical perspective on some aspects of research in nutrition science.<br /><br />His view is not charitable. Perhaps, deservedly so.<br /><br />I have to confess that I haven't read the book "The Case Against Sugar", but I have followed Taubes' arguments for quite a while. His thesis, essentially the same as that of his previous two books, is that we ditch a "low-fat high-carb" diet for a "low-carb high-fat (and protein)" diet.<br /><br />The points he makes are provocative and interesting.<br /><br />That said, I wish Shane had challenged Taubes more, and held him accountable.<br /><br />This <a href="http://www.stephanguyenet.com/bad-sugar-or-bad-journalism-an-expert-review-of-the-case-against-sugar/">counter-point</a> by Stephan Guyenet points out numerous flaws in Taubes' thesis.
It is worth reading in its entirety, if only for the balance it provides.<br /><br />A couple of other rebuttals are available <a href="https://sciencebasedmedicine.org/gary-taubes-and-the-cause-of-obesity/">here</a> and <a href="https://nutritionsciencefactcheck.com/2017/07/20/the-case-against-the-case-against-sugar/">here</a>.</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/qs6YBJV4MFY" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/12/taubes-sugar-and-fat.htmltag:blogger.com,1999:blog-7379110960796014170.post-45556595993361150962017-12-12T15:40:00.000-05:002017-12-15T13:25:19.902-05:00Randomized SVD<div dir="ltr" style="text-align: left;" trbidi="on">Dimension reduction is an important problem in the era of big data. SVD is a classic method for obtaining low-rank approximations of data.<br /><br />The standard algorithm (which finds all the singular values) is one of the most expensive <a href="http://sachinashanbhag.blogspot.com/2016/03/matrix-decompositions.html">matrix decomposition</a> algorithms.<br /><br />Companies like Facebook or Google deal with huge matrices (big data can be big). Often, they don't need all the singular values - perhaps only the first 10 or 20. They may also not need exquisite precision in the singular values. Good approximations might do just fine.<br /><br />Fortunately, there are <a href="https://arxiv.org/pdf/0909.4061.pdf">randomized algorithms</a> for computing SVDs that rest on relatively <a href="https://arxiv.org/pdf/1608.02148.pdf">simple logic</a>.
One approximates the range of the matrix by repeatedly multiplying it with random vectors, and works with those.<br /><br />The algorithm is fairly simple to implement:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-4tBq439ne60/WRStLE7zVwI/AAAAAAAADAk/ANksYzlLd5cv3W99B7OsXqH3065T3BYXgCLcB/s1600/Screenshot%2Bfrom%2B2017-05-11%2B14-23-57.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="251" src="https://2.bp.blogspot.com/-4tBq439ne60/WRStLE7zVwI/AAAAAAAADAk/ANksYzlLd5cv3W99B7OsXqH3065T3BYXgCLcB/s400/Screenshot%2Bfrom%2B2017-05-11%2B14-23-57.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure from <a href="https://arxiv.org/pdf/1608.02148.pdf">Erichson et al</a></td></tr></tbody></table>In Octave or Matlab, the code can be <a href="https://www.mathworks.com/matlabcentral/fileexchange/47835-randomized-singular-value-decomposition">implemented</a> in about 10 lines.<br /><br />The resulting truncated-SVD can be a surprisingly good approximation, which can shave multiple orders of magnitude off the computation time (mileage improves as matrices get bigger).<br /><br />For python, there are decent implementations of randomized SVD in the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html">sklearn</a> package, and the <a href="https://github.com/facebook/fbpca">fbpca</a> package from Facebook.
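The range-finder idea is easy to sketch in python/numpy. Below is a minimal toy version of the Halko-style algorithm, not the sklearn or fbpca implementation; the function name and parameters are my own:

```python
import numpy as np

def randomized_svd(A, k, p=10, n_iter=2):
    """Truncated rank-k SVD of A via a randomized range finder (toy sketch).

    `p` is the oversampling parameter; `n_iter` is the number of power
    iterations, which sharpen the estimate when singular values decay slowly.
    """
    # Multiply A with random vectors to sample (approximate) its range.
    Y = A @ np.random.randn(A.shape[1], k + p)
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    # Orthonormal basis Q for the sampled range.
    Q, _ = np.linalg.qr(Y)
    # Project A onto the small subspace and do a cheap, exact SVD there.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]
```

The expensive full SVD is replaced by a QR and an SVD of a (k+p)-row matrix; for very ill-conditioned matrices, production codes re-orthonormalize between power iterations.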
This <a href="http://amedee.me/post/pca-large-matrices/">blog post</a> shows some code to call these routines, and provides some benchmarks.<br /><br /></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/QnngFzBFlJE" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com2http://sachinashanbhag.blogspot.com/2017/12/randomized-svd.htmltag:blogger.com,1999:blog-7379110960796014170.post-22128876508627036192017-12-07T12:05:00.000-05:002017-12-07T12:05:50.978-05:00More is Different<div dir="ltr" style="text-align: left;" trbidi="on">Last week, I read a nearly 50 year old essay by P. W. Anderson (h/t fermatslibrary) entitled "More is Different" (<a href="http://fermatslibrary.com/s/more-is-different">pdf</a>). It is a fascinating opinion piece.<br /><ul style="text-align: left;"><li>"Quantitative differences become qualitative ones" - Marx</li><li>Psychology is not applied biology, nor is biology applied chemistry.</li></ul><div>This other essay on the "<a href="http://arthur.shumwaysmith.com/life/content/the_arrogance_of_physicists">arrogance of physicists</a>" speaks to a similar point:</div><blockquote class="tr_bq">But training and experience in physics gives you a very powerful toolbox of techniques, intuitions and approaches to solving problems that molds your outlook and attitude toward the rest of the world. Other fields of science or engineering are limited in their scope. Mathematics is powerful and immense in logical scope, but in the end it is all tautology, as I tease my mathematician friends, with no implied or even desired connection to the real world. Physics is the application of mathematics to reality and the 20th century proved its remarkable effectiveness in understanding that world, from the behavior of the tiniest particles to the limits of the entire cosmos. 
Chemistry generally confines itself to the world of atoms and molecules, biology to life, wonderful in itself, but confined so far as we know to just this planet. The social sciences limit themselves still further, mainly to the behavior of us human beings - certainly a complex and highly interesting subject, but difficult to generalize from. Engineering also has a powerful collection of intuitions and formulas to apply to the real world, but those tend to be more specific individual rules, rather than the general and universal laws that physicists have found. </blockquote><blockquote class="tr_bq">Computer scientists and their practical real-world programming cousins are perhaps closest to physicists in justified confidence in the generality of their toolbox. Everything real can be viewed as computational, and there are some very general rules about information and logic that seep into the intuition of any good programmer. As physics is the application of mathematics to the real world of physical things, so programming is the application of mathematics to the world of information about things, and sometimes those two worlds even seem to be merging.</blockquote></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/euR8bcTJEpA" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/12/more-is-different.htmltag:blogger.com,1999:blog-7379110960796014170.post-23590341854404443662017-11-26T19:14:00.004-05:002017-11-26T19:14:57.650-05:00Post-Thanksgiving Links<div dir="ltr" style="text-align: left;" trbidi="on">Some links to interesting scientific content:<br /><br />1. How Wikipedia Tackles Fringe Nonsense (<a href="http://theness.com/neurologicablog/index.php/how-wikipedia-tackles-fringe-nonsense/">neurologica</a>)<br /><br />2. 
Seven Academic-World Lies (<a href="https://www.linkedin.com/pulse/7-lies-academic-world-keeps-telling-you-mariana-cerdeira">Mariana Cerdeira</a>)<br /><br />3. Numerically Approximating Ghosts (<a href="https://www.johndcook.com/blog/2009/08/11/approximating-a-solution-that-doesnt-exist/">John D Cook</a>)<br /><br />4. An Archive of Projects Using Differential Equations (<a href="https://www.cengage.com/math/book_content/0495108243_zill/projects_archive/">Zill</a>)</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/-qTNd0WkJJA" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/11/post-thanksgiving-links.htmltag:blogger.com,1999:blog-7379110960796014170.post-35404593113827032492017-11-14T08:09:00.001-05:002017-11-14T08:09:24.423-05:00History of PowerPoint<div dir="ltr" style="text-align: left;" trbidi="on">The history of MS Office is riveting.<br /><br />This <a href="https://spectrum.ieee.org/tech-history/cyberspace/the-improbable-origins-of-powerpoint">essay</a> in IEEE Spectrum recounts the "Improbable Origins of PowerPoint". 
I did not know that Xerox PARC had such a direct influence on MS Office (including <a href="https://en.wikipedia.org/wiki/Microsoft_Word">MS Word</a>).<br /><br />Reading the essay, one gets a sense of how fluid the desktop computer landscape was between the advent of the Apple Lisa and Microsoft's bundling of Word, Excel, and PowerPoint.</div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/QnLfNXbLixU" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/11/history-of-powerpoint.htmltag:blogger.com,1999:blog-7379110960796014170.post-75537918885840305222017-11-13T14:50:00.000-05:002017-11-26T11:48:45.799-05:00Exporting Numpy Arrays and Matrices to LaTeX<div dir="ltr" style="text-align: left;" trbidi="on">Over the past couple of years, a lot of my "numerical experimentation" work has moved from Octave to python/numpy.<br /><br />I incorporate a lot of this work into my classes and presentations (made using beamer), and having a script to translate vectors and matrices to LaTeX format is handy.<br /><br />In the past, I <a href="http://sachinashanbhag.blogspot.com/2012/11/exporting-matrices-in-octavematlab-to.html">shared</a> a Matlab/Octave <a href="https://docs.google.com/open?id=0Bww3OZktvGQucmxnd1FJNElCVGc">script</a> which does this.<br /><br />Here is a python/numpy <a href="https://gist.github.com/shane5ul/ab47124f9c22796e369948122c80037b">script</a> which does something similar.
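To give a flavor of what such an exporter involves, here is a minimal toy version (my own sketch, not the linked script; `np2latex` and its `fmt` option are hypothetical names):

```python
import numpy as np

def np2latex(A, fmt="{:.4g}"):
    """Render a numpy vector or matrix as a LaTeX bmatrix (toy sketch)."""
    A = np.atleast_2d(A)  # treat vectors as one-row matrices
    rows = [" & ".join(fmt.format(v) for v in row) for row in A]
    return "\\begin{bmatrix}\n" + " \\\\\n".join(rows) + "\n\\end{bmatrix}"

# Example: paste the output directly into a beamer slide
print(np2latex(np.array([[1.0, -2.5], [3.14159, 4.0]])))
```

The linked script goes well beyond this sketch, with the formatting controls it documents.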
The script<br /><br /><ul style="text-align: left;"><li>autodetects integers and floats</li><li>lets you control the number of decimals for floats</li><li>optionally renders floats in scientific format</li><li>right-justifies entries using the bmatrix* environment (good for negative numbers)</li><li>suppresses small values near zero (~1e-16)</li></ul><div><br /></div></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/-M5ZixqEFLQ" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com4http://sachinashanbhag.blogspot.com/2017/11/exporting-numpy-arrays-and-matrices-to.htmltag:blogger.com,1999:blog-7379110960796014170.post-16451153819106103062017-11-06T17:27:00.000-05:002017-11-06T17:27:28.981-05:00Python: Orthogonal Polynomials and Generalized Gauss Quadrature<div dir="ltr" style="text-align: left;" trbidi="on">A new (to me) <a href="https://github.com/nschloe/orthopy">python library</a> for easily computing families of orthogonal polynomials.<br /><br />Getting standard (generalized) Gauss quadrature schemes is extremely simple.
For example to get 13 nodes and weights for Gauss-Laguerre integration, correct up to 50 decimal places:<br /><br /><pre style="background-color: #f6f8fa; border-radius: 3px; box-sizing: border-box; color: #24292e; font-family: SFMono-Regular, Consolas, "Liberation Mono", Menlo, Courier, monospace; font-size: 13.6px; line-height: 1.45; overflow: auto; padding: 16px; word-break: normal; word-wrap: normal;">pts,wts <span class="pl-k" style="box-sizing: border-box; color: #d73a49;">=</span> orthopy.schemes.laguerre(<span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">13</span>, <span class="pl-v" style="box-sizing: border-box; color: #e36209;">decimal_places</span><span class="pl-k" style="box-sizing: border-box; color: #d73a49;">=</span><span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">50</span>)</pre><br />The numpy <a href="https://docs.scipy.org/doc/numpy/reference/routines.polynomials.package.html">Polynomial</a> package provides similar functionality:<br /><br /><span style="font-family: "courier new" , "courier" , monospace;">pts, wts = numpy.polynomial.laguerre.laggauss(13)</span><br /><div><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div><div><span style="font-family: inherit;">A nice feature (besides arbitrary precision) is that you can derive custom orthogonal polynomials and quadrature rules. All you need to provide is a weight function and domain of the polynomials. 
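As a quick sanity check of the numpy route, the nodes and weights from `laggauss` integrate polynomials against the weight \(e^{-x}\) on \([0, \infty)\) essentially exactly:

```python
import numpy as np

# Gauss-Laguerre quadrature approximates integrals of the form
#   int_0^inf f(x) exp(-x) dx  ~  sum_i w_i f(x_i)
pts, wts = np.polynomial.laguerre.laggauss(13)

# A 13-point rule is exact for polynomials up to degree 2*13 - 1 = 25,
# so int_0^inf x^2 exp(-x) dx = Gamma(3) = 2 is reproduced to machine precision.
approx = np.sum(wts * pts**2)
print(approx)  # ≈ 2.0
```

The weights also sum to \(\int_0^\infty e^{-x}\,dx = 1\), which is another cheap consistency check.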
From the project webpage:</span></div><div><pre style="background-color: #f6f8fa; border-radius: 3px; box-sizing: border-box; color: #24292e; font-family: SFMono-Regular, Consolas, "Liberation Mono", Menlo, Courier, monospace; font-size: 13.6px; line-height: 1.45; overflow: auto; padding: 16px; word-break: normal; word-wrap: normal;"><span class="pl-k" style="box-sizing: border-box; color: #d73a49;">import</span> orthopy<br />moments <span class="pl-k" style="box-sizing: border-box; color: #d73a49;">=</span> orthopy.compute_moments(<span class="pl-k" style="box-sizing: border-box; color: #d73a49;">lambda</span> <span class="pl-smi" style="box-sizing: border-box;">x</span>: x<span class="pl-k" style="box-sizing: border-box; color: #d73a49;">**</span><span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">2</span>, <span class="pl-k" style="box-sizing: border-box; color: #d73a49;">-</span><span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">1</span>, <span class="pl-k" style="box-sizing: border-box; color: #d73a49;">+</span><span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">1</span>, <span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">20</span>)<br />alpha, beta <span class="pl-k" style="box-sizing: border-box; color: #d73a49;">=</span> orthopy.chebyshev(moments)<br />points, weights <span class="pl-k" style="box-sizing: border-box; color: #d73a49;">=</span> orthopy.schemes.custom(alpha, beta, <span class="pl-v" style="box-sizing: border-box; color: #e36209;">decimal_places</span><span class="pl-k" style="box-sizing: border-box; color: #d73a49;">=</span><span class="pl-c1" style="box-sizing: border-box; color: #005cc5;">30</span>)</pre></div><div>This generates a 10-point scheme for integrating functions over the interval [-1, 1], with weight function \(w(x) = x^2\).</div><div><span style="font-family: inherit;"><br /></span></div></div><img 
src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/ZecSJXzGX-I" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/11/python-orthogonal-polynomials-and.htmltag:blogger.com,1999:blog-7379110960796014170.post-82962477701486582172017-10-27T12:21:00.000-04:002017-10-27T12:21:12.582-04:00Science Links<div dir="ltr" style="text-align: left;" trbidi="on">1. When the Revolution Came for Amy Cuddy (<a href="https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html">Susan Dominus in the NYT</a>)<br /><blockquote class="tr_bq">But since 2015, even as she continued to stride onstage and tell the audiences to face down their fears, Cuddy has been fighting her own anxieties, as fellow academics have subjected her research to exceptionally high levels of public scrutiny. She is far from alone in facing challenges to her work: Since 2011, a methodological reform movement has been rattling the field, raising the possibility that vast amounts of research, even entire subfields, might be unreliable. Up-and-coming social psychologists, armed with new statistical sophistication, picked up the cause of replications, openly questioning the work their colleagues conducted under a now-outdated set of assumptions. The culture in the field, once cordial and collaborative, became openly combative, as scientists adjusted to new norms of public critique while still struggling to adjust to new standards of evidence.</blockquote>2. When correlations don't imply causation, but something far more screwy! (<a href="https://www.theatlantic.com/business/archive/2012/05/when-correlation-is-not-causation-but-something-much-more-screwy/256918/">the Atlantic</a>)<br />2a. John D. Cook <a href="https://www.johndcook.com/blog/2017/09/10/negative-correlation-introduced-by-success/">follows up</a> with "negative correlations" induced by success.<br /><br />3. 
<a href="http://pathwaystoscience.org/Grad.aspx">STEM resources</a> for students from K-PhD, and beyond (<a href="http://pathwaystoscience.org/Library.aspx">PathwaysToScience</a>)<br /><br /></div><img src="http://feeds.feedburner.com/~r/CluelessFundatma/~4/qc_M0ir5-H0" height="1" width="1" alt=""/>Sachin Shanbhaghttps://plus.google.com/115150474038005608083noreply@blogger.com0http://sachinashanbhag.blogspot.com/2017/10/science-links.html