<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>John D. Cook</title>
	<atom:link href="http://www.johndcook.com/blog/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.johndcook.com/blog</link>
	<description>Applied Mathematics Consulting</description>
	<lastBuildDate>Tue, 14 Apr 2026 17:02:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.johndcook.com/wp-content/uploads/2020/01/cropped-favicon_512-32x32.png</url>
	<title>John D. Cook</title>
	<link>https://www.johndcook.com/blog</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Intersecting spheres and GPS</title>
		<link>https://www.johndcook.com/blog/2026/04/14/intersecting-spheres-and-gps/</link>
					<comments>https://www.johndcook.com/blog/2026/04/14/intersecting-spheres-and-gps/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 14:08:36 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Geometry]]></category>
		<category><![CDATA[Navigation]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246970</guid>

					<description><![CDATA[<p>If you know the distance d to a satellite, you can compute a circle of points that passes through your location. That&#8217;s because you&#8217;re at the intersection of two spheres—the earth&#8217;s surface and a sphere of radius d centered on the satellite—and the intersection of two spheres is a circle. Said another way, one observation [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/14/intersecting-spheres-and-gps/">Intersecting spheres and GPS</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>If you know the distance <em>d</em> to a satellite, you can compute a circle of points that passes through your location. That&#8217;s because you&#8217;re at the intersection of two spheres—the earth&#8217;s surface and a sphere of radius <em>d</em> centered on the satellite—and the intersection of two spheres is a circle. Said another way, one observation of a satellite determines a circle of possible locations.</p>
<p>If you know the distance to a second satellite as well, then you can find two circles that contain your location. The two circles intersect at two points, and you know that you&#8217;re at one of two possible positions. If you know your approximate position, you may be able to rule out one of the intersection points.</p>
<p>If you know the distance to three different satellites, now you know three circles that you&#8217;re standing on, and the third circle will only pass through one of the two points determined by the first two satellites. Now you know exactly where you are.</p>
<p>Knowing the distance to more satellites is even better. In theory additional observations are redundant but harmless. In practice, they let you partially cancel out inevitable measurement errors.</p>
<p>If you&#8217;re not on the earth&#8217;s surface, you&#8217;re still at the intersection of <em>n</em> spheres if you know the distance to <em>n</em> satellites. If you&#8217;re in an airplane, or en route to the moon, the same principles apply.</p>
<h2>Errors and corrections</h2>
<p>How do you know the distance to a satellite? The satellite can announce what time it is by its clock; when you receive the announcement, you compare it to the time on your clock. The difference between the two times tells you how long the radio signal traveled. Multiply by the speed of light and you have the distance.</p>
<p>However, your clock will probably not be exactly synchronized with the satellite clock. Observing a fourth satellite can fix the problem of your clock not being synchronized with the satellite clocks. But it doesn&#8217;t fix the more subtle problems of special relativity and general relativity. See <a href="https://perthirtysix.com/how-does-gps-work">this post</a> by Shri Khalpada for an accessible discussion of the physics.</p>
<h2>Numerical computation</h2>
<p>Each distance measurement gives you an equation:</p>
<p style="padding-left: 40px;">|| <em>x</em> &#8211; <em>s</em><sub><em>i</em></sub> || = <em>d</em><sub><em>i</em></sub></p>
<p>where <em>s</em><sub><em>i</em></sub> is the location of the <em>i</em>th satellite and <em>d</em><sub><em>i</em></sub> is your distance to that satellite. If you square both sides of the equation, you have a quadratic equation. You have to solve a system of nonlinear equations, and yet there is a way to transform the problem into solving linear equations, i.e. using linear algebra. See <a href="https://www.cambridge.org/core/journals/anziam-journal/article/note-on-computing-the-intersection-of-spheres-in-mathbbrn/D15FB22917024962409980AC7D3C086D">this article</a> for details.</p>
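<p>The linearization is easy to sketch: subtracting the squared distance equation for one satellite from the others cancels the quadratic term ||<em>x</em>||². The following illustration (my example with made-up satellite positions, not code from the article) recovers a known location from four distances:</p>

```python
import numpy as np

# Hypothetical satellite positions s_i and receiver location x0,
# chosen only to illustrate the linearization; not real GPS data.
s = np.array([[10.0,  0.0,  0.0],
              [ 0.0, 10.0,  0.0],
              [ 0.0,  0.0, 10.0],
              [ 7.0,  7.0,  7.0]])
x0 = np.array([1.0, 2.0, 3.0])
d = np.linalg.norm(s - x0, axis=1)   # "measured" distances

# Squaring ||x - s_i|| = d_i and subtracting the i = 0 equation
# cancels ||x||^2, leaving the linear system
#   2 (s_i - s_0) . x = ||s_i||^2 - ||s_0||^2 - d_i^2 + d_0^2
A = 2 * (s[1:] - s[0])
b = np.sum(s[1:]**2, axis=1) - np.sum(s[0]**2) - d[1:]**2 + d[0]**2
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # recovers x0
```

With more than four satellites the same least squares solve averages out measurement noise.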
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2025/12/04/the-navigational-triangle/">The navigation triangle</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2023/03/07/duttons/">Dutton&#8217;s Navigation and Piloting</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/14/intersecting-spheres-and-gps/">Intersecting spheres and GPS</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/14/intersecting-spheres-and-gps/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Finding a parabola through two points with given slopes</title>
		<link>https://www.johndcook.com/blog/2026/04/14/artz-parabola/</link>
					<comments>https://www.johndcook.com/blog/2026/04/14/artz-parabola/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 12:19:42 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Geometry]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246968</guid>

					<description><![CDATA[<p>The Wikipedia article on modern triangle geometry has an image labeled &#8220;Artzt parabolas&#8221; with no explanation. A quick search didn&#8217;t turn up anything about Artzt parabolas [1], but apparently the parabolas go through pairs of vertices with tangents parallel to the sides. The general form of a conic section is ax² + bxy + cy² [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/14/artz-parabola/">Finding a parabola through two points with given slopes</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The Wikipedia article on modern triangle geometry has an image labeled &#8220;Artzt parabolas&#8221; with no explanation.</p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/artz_triangle.png" width="330" height="247" /></p>
<p>A quick search didn&#8217;t turn up anything about Artzt parabolas [1], but apparently the parabolas go through pairs of vertices with tangents parallel to the sides.</p>
<p>The general form of a conic section is</p>
<p style="padding-left: 40px;"><em>ax</em>² + <em>bxy</em> + <em>cy</em>² + <em>dx</em> + <em>ey</em> + <em>f</em> = 0</p>
<p>and the constraint <em>b</em>² = 4<em>ac</em> means the conic will be a parabola.</p>
<p>We have 6 parameters, each determined only up to a scaling factor; you can multiply both sides by any non-zero constant and still have the same conic. So a general conic has 5 degrees of freedom, and the parabola condition <em>b</em>² = 4<em>ac</em> takes us down to 4. Specifying two points that the parabola passes through takes up 2 more degrees of freedom, and specifying the slopes takes up the last two. So it&#8217;s plausible that there is a unique solution to the problem.</p>
<p>There is indeed a solution, unique up to scaling the parameters. The following code finds parameters of a parabola that passes through (<em>x</em><sub><em>i</em></sub>, <em>y</em><sub><em>i</em></sub>) with slope <em>m</em><sub><em>i</em></sub> for <em>i</em> = 1, 2.</p>
<pre>def solve(x1, y1, m1, x2, y2, m2):
    # returns coefficients (a, b, c, d, e, f) of the parabola
    # a x^2 + b xy + c y^2 + d x + e y + f = 0
    Δx = x2 - x1
    Δy = y2 - y1
    λ = 4*(Δx*m1 - Δy)*(Δx*m2 - Δy)/(m1 - m2)**2
    k = x2*y1 - x1*y2

    a = Δy**2 + λ*m1*m2
    b = -2*Δx*Δy - λ*(m1 + m2)
    c = Δx**2 + λ
    d =  2*k*Δy + λ*(m1*y2 + m2*y1 - m1*m2*(x1 + x2))
    e = -2*k*Δx + λ*(m1*x1 + m2*x2 - y1 - y2)
    f = k**2 + λ*(m1*x1 - y1)*(m2*x2 - y2)

    return (a, b, c, d, e, f)
</pre>
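<p>As a sanity check (mine, not part of the original post; the function is repeated so the snippet runs on its own), the parabola through (0, 0) with slope 0 and (1, 1) with slope 2 should be <em>y</em> = <em>x</em>², i.e. the coefficients should be proportional to (1, 0, 0, 0, &#8722;1, 0):</p>

```python
def solve(x1, y1, m1, x2, y2, m2):
    Δx = x2 - x1
    Δy = y2 - y1
    λ = 4*(Δx*m1 - Δy)*(Δx*m2 - Δy)/(m1 - m2)**2
    k = x2*y1 - x1*y2

    a = Δy**2 + λ*m1*m2
    b = -2*Δx*Δy - λ*(m1 + m2)
    c = Δx**2 + λ
    d =  2*k*Δy + λ*(m1*y2 + m2*y1 - m1*m2*(x1 + x2))
    e = -2*k*Δx + λ*(m1*x1 + m2*x2 - y1 - y2)
    f = k**2 + λ*(m1*x1 - y1)*(m2*x2 - y2)

    return (a, b, c, d, e, f)

# vertex at the origin with horizontal tangent, slope 2 at (1, 1)
a, b, c, d, e, f = solve(0, 0, 0, 1, 1, 2)
assert (a, b, c, d, e, f) == (1, 0, 0, 0, -1, 0)   # x^2 - y = 0
assert b**2 == 4*a*c                               # parabola condition
```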
<p>[1] The page said &#8220;Artz&#8221; when I first looked at it, but it has since been corrected to &#8220;Artzt&#8221;. Maybe I didn&#8217;t find anything because I was looking for the wrong spelling.</p>The post <a href="https://www.johndcook.com/blog/2026/04/14/artz-parabola/">Finding a parabola through two points with given slopes</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/14/artz-parabola/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Mathematical minimalism</title>
		<link>https://www.johndcook.com/blog/2026/04/13/the-smallest-math-library/</link>
					<comments>https://www.johndcook.com/blog/2026/04/13/the-smallest-math-library/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 14:33:16 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246966</guid>

					<description><![CDATA[<p>Andrzej Odrzywolek recently posted an article on arXiv showing that you can obtain all the elementary functions from just the function and the constant 1. The following equations, taken from the paper&#8217;s supplement, show how to bootstrap addition, subtraction, multiplication, and division from the eml function. See the paper and supplement for how to obtain [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/13/the-smallest-math-library/">Mathematical minimalism</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Andrzej Odrzywolek recently posted an article on <a href="https://arxiv.org/abs/2603.21852v2">arXiv</a> showing that you can obtain all the elementary functions from just the function</p>
<p><img decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/eml.svg" alt="\operatorname{eml}(x,y) = \exp(x) - \log(y)" width="224" height="18" /></p>
<p>and the constant 1. The following equations, taken from the paper&#8217;s <a href="https://arxiv.org/src/2603.21852v2/anc/SupplementaryInformation.pdf">supplement</a>, show how to bootstrap addition, subtraction, multiplication, and division from the eml function.</p>
<p><img decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/elm.svg" alt="\begin{align*} \exp(z) &amp;\mapsto \operatorname{eml}(z,1) \\ \log(z) &amp;\mapsto \operatorname{eml}(1,\exp(\operatorname{eml}(1,z))) \\ x - y &amp;\mapsto \operatorname{eml}(\log(x),\exp(y)) \\ -z &amp;\mapsto (\log 1) - z \\ x + y &amp;\mapsto x - (-y) \\ 1/z &amp;\mapsto \exp(-\log z) \\ x \cdot y &amp;\mapsto \exp(\log x + \log y) \end{align*}" width="264" height="193" /></p>
<p>See the paper and supplement for how to obtain constants like π and functions like square and square root, as well as the standard circular and hyperbolic functions.</p>
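<p>Here is a small numerical check of the first three rules (my sketch, not code from the paper). The later rules pass through log 1 = 0 as an intermediate value, which falls on the edge of the floating point domain of log, so only exp, log, and subtraction are verified here:</p>

```python
from math import exp, log, isclose

# eml(x, y) = exp(x) - log(y), the single primitive
def eml(x, y):
    return exp(x) - log(y)

def EXP(z):
    return eml(z, 1)               # exp(z) - log(1) = exp(z)

def LOG(z):
    # e - log(exp(e - log z)) = e - (e - log z) = log z
    return eml(1, EXP(eml(1, z)))

def SUB(x, y):                     # requires x > 0 for LOG(x)
    return eml(LOG(x), EXP(y))     # exp(log x) - log(exp y) = x - y

assert isclose(EXP(2), exp(2))
assert isclose(LOG(5), log(5))
assert isclose(SUB(7, 3), 4)
```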
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2021/01/05/bootstrapping-math-library/">Bootstrapping a small math library</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2026/04/06/tofolli-gates/">Toffoli gates are all you need</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/13/the-smallest-math-library/">Mathematical minimalism</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/13/the-smallest-math-library/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Lunar period approximations</title>
		<link>https://www.johndcook.com/blog/2026/04/12/lunations/</link>
					<comments>https://www.johndcook.com/blog/2026/04/12/lunations/#respond</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Sun, 12 Apr 2026 23:42:01 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Calendars]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246963</guid>

					<description><![CDATA[<p>The date of Easter The church fixed Easter to be the first Sunday after the first full moon after the Spring equinox. They were choosing a date in the Roman (Julian) calendar to commemorate an event whose date was known according to the Jewish lunisolar calendar, hence the reference to equinoxes and full moons. The [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/12/lunations/">Lunar period approximations</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<h2>The date of Easter</h2>
<p>The church fixed Easter to be the first Sunday after the first full moon after the Spring equinox. They were choosing a date in the Roman (Julian) calendar to commemorate an event whose date was known according to the Jewish lunisolar calendar, hence the reference to equinoxes and full moons.</p>
<p>The <a href="https://www.johndcook.com/blog/2026/04/12/orthodox-western-easter/">previous post</a> explained why the Eastern and Western dates of Easter differ. The primary reason is that both churches use March 21 as the first day of Spring, but the Eastern church uses March 21 on the Julian calendar and the Western church uses March 21 on the Gregorian calendar.</p>
<p>But that&#8217;s not the only difference. The churches chose different algorithms for calculating when the first full moon would be. The date of Easter doesn&#8217;t depend on the date of the full moon per se, but on the methods used to predict full moons.</p>
<p>This post will show why determining the date of the full moon is messy.</p>
<h2>Lunation length</h2>
<p>The time between full moons (or between new moons, which are easier to measure objectively) is between 29 and 30 days. This period is called a <b>lunation</b>. The average length of a lunation is <em>L</em> = 29.530588853 days. This is not a convenient number to work with, and so there&#8217;s no simple way of reconciling the orbital period of the moon with the rotation period of the earth [1]. Lunar calendars alternate months of 29 and 30 days, but that alone isn&#8217;t accurate enough, so they need some fudge factor analogous to leap years.</p>
<p>The value of <em>L</em> was known from ancient times. Meton of Athens calculated in 432 BC that 235 lunar cycles equaled 19 tropical years or 6940 days. This corresponds to <em>L</em> ≈ 29.5319. Around a century later the Greek scholar Callippus refined this to 940 cycles in 76 years or 27,759 days. This corresponds to <em>L</em> ≈ 29.53085.</p>
<p>The problem wasn&#8217;t <em>knowing</em> <em>L</em> but devising a convenient way of <em>working</em> with <em>L</em>. There is no way to work with lunations that is as easy as the way the Julian (or even the more complicated Gregorian) calendar reconciles days with years.</p>
<h2>Approximations</h2>
<p>Let&#8217;s look at the accuracy of several approximations for <em>L</em>. We&#8217;d like an approximation that is not only accurate in an absolute sense, but also accurate relative to its complexity. The complexity of a fraction is measured by a <a href="https://www.johndcook.com/blog/2023/09/17/rational-height-functions/">height function</a>. We&#8217;ll use what&#8217;s called the &#8220;classic&#8221; height function: log( max(<em>n</em>, <em>d</em>) ) where <em>n</em> and <em>d</em> are the numerator and denominator of a fraction. Since we&#8217;re approximating a number bigger than 1, this will be simply log(<em>n</em>).</p>
<p>We will compare the first five convergents, approximations that come from the continued fraction form of <em>L</em>, and the approximations of Meton and Callippus. Here&#8217;s a plot.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/lunation.png" width="480" height="360" /></p>
<p>And here&#8217;s the code that produced the plot, showing the fractions used.</p>
<pre>from numpy import log
import matplotlib.pyplot as plt

fracs = [
    (30, 1), 
    (59, 2),
    (443, 15),
    (502, 17),
    (1447, 49),
    (6940, 235),
    (27759, 940)
]

def error(n, d):
    L = 29.530588853    
    return abs(n/d - L)

for f in fracs:
    plt.plot(log(f[0]), log(error(*f)), 'o')
plt.xlabel("log numerator")
plt.ylabel("log error")
plt.show()
</pre>
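<p>As an aside (my check, not part of the original post), the convergents among the fractions above can be generated directly from the continued fraction of <em>L</em> using exact rational arithmetic:</p>

```python
from fractions import Fraction

L = Fraction(29530588853, 1000000000)   # 29.530588853 exactly

# first six partial quotients of the continued fraction of L
a, x = [], L
for _ in range(6):
    q = x.numerator // x.denominator
    a.append(q)
    if x == q:
        break
    x = 1 / (x - q)

# standard recurrence for the convergents h_n / k_n
h, k = [0, 1], [1, 0]
convergents = []
for q in a:
    h.append(q*h[-1] + h[-2])
    k.append(q*k[-1] + k[-2])
    convergents.append((h[-1], k[-1]))

print(convergents)
# [(29, 1), (30, 1), (59, 2), (443, 15), (502, 17), (1447, 49)]
```

The partial quotient 7 is what makes 443/15 a big jump in accuracy, and the quotient 2 at the end does the same for 1447/49.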
<p>The approximation 1447/49 is the best by far, both in absolute terms and relative to the size of the numerator. But it&#8217;s not very useful for calendar design because 1447 is not nicely related to the number of days in a year.</p>
<p>[1] The time between full moons is a synodic month, the time it takes for the moon to return to the same position relative to the sun. This is longer than a sidereal month, the time it takes the moon to complete one orbit relative to the fixed stars.</p>The post <a href="https://www.johndcook.com/blog/2026/04/12/lunations/">Lunar period approximations</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/12/lunations/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The gap between Eastern and Western Easter</title>
		<link>https://www.johndcook.com/blog/2026/04/12/orthodox-western-easter/</link>
					<comments>https://www.johndcook.com/blog/2026/04/12/orthodox-western-easter/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Sun, 12 Apr 2026 12:09:13 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Calendars]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246959</guid>

					<description><![CDATA[<p>Today is Orthodox Easter. Western churches celebrated Easter last week. Why are the Eastern and Western dates of Easter different? Is Eastern Easter always later than Western Easter? How far apart can the two dates be? Why the dates differ Easter is on the first Sunday after the first full moon in Spring [1]. East [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/12/orthodox-western-easter/">The gap between Eastern and Western Easter</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Today is Orthodox Easter. Western churches celebrated Easter last week. Why are the Eastern and Western dates of Easter different? Is Eastern Easter always later than Western Easter? How far apart can the two dates be?</p>
<h2>Why the dates differ</h2>
<p>Easter is on the first Sunday after the first full moon in Spring [1]. East and West agree on this. What they disagree on is the details of &#8220;full moon&#8221; and &#8220;Spring.&#8221; The dates are not based on precise astronomical measurements but rather on astronomical approximations codified long ago.</p>
<p>Spring begins on March 21 for the purposes of calculating Easter. But the Western church uses March 21 on the <a href="https://www.johndcook.com/blog/2024/12/16/gregorian-calendar/">Gregorian calendar</a> and the Eastern church uses March 21 on the Julian calendar. This mostly accounts for the difference between Eastern and Western dates for Easter. East and West also use slightly different methods of approximating when the moon will be full. More on that in the <a href="https://www.johndcook.com/blog/2026/04/12/lunations/">next post</a>.</p>
<h2>Pascha never comes before Easter</h2>
<p>The Eastern name for Easter is Pascha. Eastern Pascha and Western Easter can occur on the same day, but otherwise Pascha is always later, never earlier. This is because the Julian year is longer than the Gregorian year, so fixed dates on the Julian calendar fall progressively later relative to the Gregorian calendar. Also, the Eastern method of approximating the date of the Paschal full moon gives a later date than the Western method.</p>
<p>The Julian calendar has exactly 365 1/4 days. The Gregorian calendar has 365 97/400 days; centuries are not leap years unless they&#8217;re divisible by 400. This complication in the Gregorian calendar was necessary to match the solar year. The date March 21 on the Julian calendar is drifting later in the year from the perspective of the Gregorian calendar, moving further past the astronomical equinox [2].</p>
<h2>Size of the gap</h2>
<p>Eastern and Western dates of Easter can coincide. They were the same last year and will be the same again in 2028. The gap is always a whole number of weeks because Easter is always on a Sunday.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/easter_gap3.svg" width="600" height="350" /></p>
<p>The gap is usually 1 week. It can be 0, 4, or 5 weeks, but never 2 or 3 weeks.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/easter_gap4.svg" width="600" height="370" /></p>
<p>This is the pattern for now. Sometime in the distant future the Julian and Gregorian calendars will diverge further and the gaps will increase. Presumably Orthodox churches will make some sort of adjustment before the Julian date March 21 drifts into summer or fall.</p>
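<p>The two dates can be computed with standard algorithms: Butcher&#8217;s algorithm for the Gregorian computus and Meeus&#8217;s algorithm for the Julian computus. The sketch below (mine, not from the original post) hard-codes the current 13-day offset between the calendars, so it is only valid for years 1900&#8211;2099:</p>

```python
from datetime import date, timedelta

def western_easter(y):
    # Butcher's algorithm for the Gregorian computus
    a = y % 19
    b, c = divmod(y, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19*a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2*e + 2*i - h - k) % 7
    m = (a + 11*h + 22*l) // 451
    month, day = divmod(h + l - 7*m + 114, 31)
    return date(y, month, day + 1)

def eastern_easter(y):
    # Meeus's Julian algorithm, then shift by the current 13-day
    # Julian-Gregorian difference (valid 1900-2099)
    d = (19 * (y % 19) + 15) % 30
    e = (2 * (y % 4) + 4 * (y % 7) - d + 34) % 7
    month, day = divmod(d + e + 114, 31)
    return date(y, month, day + 1) + timedelta(days=13)

for y in range(2025, 2029):
    gap = (eastern_easter(y) - western_easter(y)).days // 7
    print(y, western_easter(y), eastern_easter(y), gap, "weeks")
```

Running this over a long span of years reproduces the gap distribution plotted above: gaps of 0, 1, 4, and 5 weeks, never 2 or 3.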
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2024/12/31/cycle-of-new-years-days/">Cycle of New Year&#8217;s Days</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2025/12/23/when-was-newton-born/">When was Newton born?</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2025/02/28/martian-leap-years/">Martian leap years</a></li>
</ul>
<p>[1] The reason for this definition is that Christ was crucified at the time of the Passover. Due to the lunisolar design of the Jewish calendar, this would have been during the first full moon after the Spring equinox. Christ rose from the dead the Sunday following the crucifixion, so Easter is on the first Sunday after the first full moon of Spring.</p>
<p>[2] The Julian and Gregorian calendars currently differ by 13 days, and they&#8217;re drifting apart at the rate of 3 days every 400 years. Somewhere around 47,000 years from now the two calendars will agree again, sorta, because the Julian calendar will be a full year behind the Gregorian calendar.</p>The post <a href="https://www.johndcook.com/blog/2026/04/12/orthodox-western-easter/">The gap between Eastern and Western Easter</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/12/orthodox-western-easter/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
		<item>
		<title>Distribution of digits in fractions</title>
		<link>https://www.johndcook.com/blog/2026/04/10/fraction-digits/</link>
					<comments>https://www.johndcook.com/blog/2026/04/10/fraction-digits/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 14:29:49 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Number theory]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246958</guid>

					<description><![CDATA[<p>There&#8217;s a lot of mathematics just off the beaten path. You can spend a career in math and yet not know all there is to know about even the most basic areas of math. For example, this post will demonstrate something you may not have seen about decimal forms of fractions. Let p &#62; 5 [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/10/fraction-digits/">Distribution of digits in fractions</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>There&#8217;s a lot of mathematics just off the beaten path. You can spend a career in math and yet not know all there is to know about even the most basic areas of math. For example, this post will demonstrate something you may not have seen about decimal forms of fractions.</p>
<p>Let <em>p</em> &gt; 5 be a prime number and 0 &lt; <em>k</em> &lt; <em>p</em>. Then the digits in <em>k</em>/<em>p</em> might be the same for all <em>k</em>, varying only by cyclic permutations. This is the case, for example, when <em>p</em> = 7 or <em>p</em> = 17. More on these kinds of fractions <a href="https://www.johndcook.com/blog/2014/11/12/cyclic-fractions/">here</a>.</p>
<p>The digits in <em>k</em>/<em>p</em> repeat for every <em>k</em>, but different values of <em>k</em> might have sequences of digits that vary by more than cyclic permutations. For example, let&#8217;s look at the values of <em>k</em>/13.</p>
<pre>&gt;&gt;&gt; for i in range(1, 13):
...   print(f"{i:2} {i/13}")
...
 1 0.0769230769230769
 2 0.1538461538461538
 3 0.2307692307692307
 4 0.3076923076923077
 5 0.3846153846153846
 6 0.4615384615384615
 7 0.5384615384615384
 8 0.6153846153846154
 9 0.6923076923076923
10 0.7692307692307693
11 0.8461538461538461
12 0.9230769230769231
</pre>
<p>One cycle goes through the digits 076923. You&#8217;ll see this when <em>k</em> = 1, 3, 4, 9, 10, or 11. The other cycle goes through 153846 for the rest of the values of <em>k</em>. The cycles 076923 and 153846 are called the <strong>distinct repeating sets</strong> of 13 in [1].</p>
<p>If we look at fractions with denominator 41, there are eight distinct repeating sets.</p>
<pre>02439
04878
07317
09756
12195
14634
26829
36585
</pre>
<p>You could find these by modifying the Python code above. However, in general you&#8217;ll need more than default precision to see the full periods. You might want to shift over to <code>bc</code>, for example.</p>
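<p>Exact integer arithmetic sidesteps the precision problem entirely. The following sketch (my code, not from the original post) computes the distinct repeating sets by long division and reduces each cycle to a canonical rotation:</p>

```python
def repetend(k, p):
    """Repeating digit block of k/p for p prime, p not 2 or 5."""
    # period = multiplicative order of 10 mod p
    period, r = 1, 10 % p
    while r != 1:
        r = 10*r % p
        period += 1
    # long division, one digit per step
    digits, r = [], k % p
    for _ in range(period):
        r *= 10
        digits.append(r // p)
        r %= p
    return tuple(digits)

def repeating_sets(p):
    # group the cycles for k = 1, ..., p-1 up to cyclic permutation,
    # representing each by its lexicographically smallest rotation
    sets = set()
    for k in range(1, p):
        d = repetend(k, p)
        sets.add(min(d[i:] + d[:i] for i in range(len(d))))
    return sorted(sets)

print(repeating_sets(13))   # [(0, 7, 6, 9, 2, 3), (1, 5, 3, 8, 4, 6)]
print(len(repeating_sets(41)))   # 8
```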
<p>When you look at all the distinct repeating sets of a prime number, all digits appear almost the same number of times. Some digits may appear one more time than others, but that&#8217;s as uneven as you can get. A corollary in [1] states that if <em>p</em> = 10<em>q</em> + <em>r</em>, with 0 &lt; <em>r</em> &lt; 10, then 11 − <em>r</em> digits appear <em>q</em> times, and <em>r</em> − 1 digits appear <em>q</em> + 1 times.</p>
<p>Looking back at the example with <em>p</em> = 13, we have <em>q</em> = 1 and <em>r</em> = 3. The corollary says we should expect 8 digits to appear once and 2 digits to appear twice. And that&#8217;s what we see: in the sets 076923 and 153846 we have 3 and 6 repeated twice and the remaining 8 digits appear once.</p>
<p>In the example with <em>p</em> = 41, we have <em>q</em> = 4 and <em>r</em> = 1. So we expect all 10 digits to appear 4 times, which is the case.</p>
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2014/11/12/cyclic-fractions/">Cyclic fractions</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2016/10/18/periods-of-fractions/">Periods of fractions</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2018/05/30/calendars-and-continued-fractions/">Calendars and continued fractions</a></li>
</ul>
<p>[1] James K. Schiller. A Theorem in the Decimal Representation of Rationals. <em>The American Mathematical Monthly</em>, Vol. 66, No. 9 (Nov. 1959), pp. 797&#8211;798.</p>The post <a href="https://www.johndcook.com/blog/2026/04/10/fraction-digits/">Distribution of digits in fractions</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/10/fraction-digits/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>The Great Pyramid of Giza and the Speed of Light</title>
		<link>https://www.johndcook.com/blog/2026/04/09/pyramid-speed-of-light/</link>
					<comments>https://www.johndcook.com/blog/2026/04/09/pyramid-speed-of-light/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 17:54:21 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246957</guid>

					<description><![CDATA[<p>Saw a post on X saying that the latitude of the Pyramid of Giza is the same as the speed of light. I looked into this, expecting it to be approximately true. It&#8217;s exactly true in the sense that the speed of light in vacuum is 299,792,458 m/s and the line of latitude 29.9792458° N [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/09/pyramid-speed-of-light/">The Great Pyramid of Giza and the Speed of Light</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Saw a <a href="https://x.com/Andercot/status/2042088784255447062?s=20">post</a> on X saying that the latitude of the Pyramid of Giza is the same as the speed of light.</p>
<p>I looked into this, expecting it to be approximately true. It&#8217;s <em>exactly</em> true in the sense that the speed of light in vacuum is 299,792,458 m/s and the line of latitude 29.9792458° N passes through the pyramid. The exact center of the pyramid is at 29.97917° N, 31.13417° E.</p>
<p>Of course this is a coincidence. Even if you believe that somehow the ancient Egyptians knew the speed of light, the meter was defined four millennia after the pyramid was built. </p>The post <a href="https://www.johndcook.com/blog/2026/04/09/pyramid-speed-of-light/">The Great Pyramid of Giza and the Speed of Light</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/09/pyramid-speed-of-light/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Random hexagon fractal</title>
		<link>https://www.johndcook.com/blog/2026/04/09/random-hexagon-fractal/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 17:25:08 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Geometry]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246955</guid>

					<description><![CDATA[<p>I recently ran across a post on X describing a process for creating a random fractal. First, pick a random point c inside a hexagon. Then at each subsequent step, pick a random side of the hexagon and create the triangle formed by that side and c. Update c to be the center of the new triangle [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/09/random-hexagon-fractal/">Random hexagon fractal</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>I recently ran across a <a href="https://x.com/LensScientific/status/2041951822727016930?s=20">post</a> on X describing a process for creating a random fractal. First, pick a random point <em>c</em> inside a hexagon.</p>
<p>Then at each subsequent step, pick a random side of the hexagon and create the triangle formed by that side and <em>c</em>. Update <em>c</em> to be the center of the new triangle and plot <em>c</em>.</p>
<p>Note that you only choose a random <em>point</em> inside the hexagon once. After that you randomly choose <em>sides</em>.</p>
<p>Now there are <a href="https://faculty.evansville.edu/ck6/encyclopedia/etc.html">many</a> ways to define the center of a triangle. I assumed the original post meant barycenter (centroid) when it said &#8220;center&#8221;, and apparently that was correct. I was able to create a similar figure.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/hex_barycenter.png" width="314" height="355" /></p>
<p>But if you define center differently, you get a different image. For example, here&#8217;s what you get when you use the incenter, the center of the largest circle inside the triangle.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/hex_incenter.png" width="314" height="355" /></p>
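<p>For readers who want to experiment, here is a minimal Python sketch of the process described above. The starting point, the number of iterations, and the random seed are arbitrary choices of mine, not from the original post.</p>

```python
import cmath
import random

# Vertices of a regular hexagon, represented as complex numbers.
verts = [cmath.exp(2j * cmath.pi * k / 6) for k in range(6)]

random.seed(42)
# Pick one random point c inside the hexagon. A random convex
# combination of the vertices is not uniform over the hexagon,
# but any interior point will do; the point is chosen only once.
w = [random.random() for _ in range(6)]
c = sum(wk * v for wk, v in zip(w, verts)) / sum(w)

points = []
for _ in range(10_000):
    k = random.randrange(6)                # pick a random side
    v1, v2 = verts[k], verts[(k + 1) % 6]
    c = (v1 + v2 + c) / 3                  # centroid of that side and c
    points.append(c)

# Plot with, e.g., matplotlib:
#   plt.scatter([z.real for z in points], [z.imag for z in points], s=0.5)
```

<p>Replacing the centroid computation with an incenter computation produces the second figure.</p>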
<h2>Related posts</h2>
<ul>
<li class='link'><a href="https://www.johndcook.com/blog/2025/09/11/random-inside-triangle/">Randomly selecting points in a triangle</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2023/09/09/triangle-subdivision/'>Subdividing a triangle with various centers</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2025/08/16/randomly-generated-dragon/'>Randomly generated dragon fractal</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2017/07/08/the-chaos-game-and-the-sierpinski-triangle/">The chaos game and the Sierpinski triangle</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/09/random-hexagon-fractal/">Random hexagon fractal</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Root prime gap</title>
		<link>https://www.johndcook.com/blog/2026/04/08/andrica/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 00:18:57 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Number theory]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246954</guid>

					<description><![CDATA[<p>I recently found out about Andrica&#8217;s conjecture: the square roots of consecutive primes are less than 1 apart. In symbols, Andrica&#8217;s conjecture says that if <em>p</em><sub><em>n</em></sub> and <em>p</em><sub><em>n</em>+1</sub> are consecutive prime numbers, then √<em>p</em><sub><em>n</em>+1</sub> − √<em>p</em><sub><em>n</em></sub> &#60; 1. This has been empirically verified for primes up to 2 × 10<sup>19</sup>. If the conjecture is true, [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/08/andrica/">Root prime gap</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>I recently found out about Andrica&#8217;s conjecture: the square roots of consecutive primes are less than 1 apart.</p>
<p>In symbols, Andrica&#8217;s conjecture says that if <em>p</em><sub><em>n</em></sub> and <em>p</em><sub><em>n</em>+1</sub> are consecutive prime numbers, then</p>
<p style="padding-left: 40px;">√<em>p</em><sub><em>n</em>+1</sub> − √<em>p</em><sub><em>n</em></sub> &lt; 1.</p>
<p>This has been empirically verified for primes up to 2 × 10<sup>19</sup>.</p>
<p>If the conjecture is true, it puts an upper bound on how long you&#8217;d have to search to find the next prime:</p>
<p style="padding-left: 40px;"><em>p</em><sub><em>n</em>+1</sub> &lt; 1 + 2√<em>p</em><sub><em>n</em></sub>  + <em>p</em><sub><em>n</em></sub>,</p>
<p>which would be an improvement on the Bertrand-Chebyshev theorem that says</p>
<p style="padding-left: 40px;"><em>p</em><sub><em>n</em>+1</sub> &lt; 2<em>p</em><sub><em>n</em></sub>.</p>
The post <a href="https://www.johndcook.com/blog/2026/04/08/andrica/">Root prime gap</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A Three- and a Four- Body Problem</title>
		<link>https://www.johndcook.com/blog/2026/04/08/artemis-1-apollo-12/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 23:30:15 +0000</pubDate>
				<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246952</guid>

					<description><![CDATA[<p>Last week I wrote about the orbit of Artemis II. The orbit of Artemis I was much more interesting. Because Artemis I was unmanned, it could spend a lot more time in orbit. The Artemis I mission took 25 days while Artemis II will take 10 days. Artemis I took an unusual path, orbiting the [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/08/artemis-1-apollo-12/">A Three- and a Four- Body Problem</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p><a href="https://www.johndcook.com/blog/2026/04/02/artemis-apollo/">Last week</a> I wrote about the orbit of Artemis II. The orbit of Artemis I was much more interesting.</p>
<p>Because Artemis I was unmanned, it could spend a lot more time in orbit. The Artemis I mission took 25 days while Artemis II will take 10 days. Artemis I took an unusual path, orbiting the moon in the opposite direction of the moon&#8217;s orbit around earth. <a href="https://www.youtube.com/watch?v=AvVFy3Feb1U&amp;list=WL&amp;index=2">This video</a> by Primal Space demonstrates the orbit both from the perspective of earth and from the perspective of the moon.</p>
<p><a href="https://www.youtube.com/watch?v=vLefsklLkqQ&amp;list=WL&amp;index=1">Another video</a> from Primal Space describes the orbit of the third stage of Apollo 12. This stage was supposed to orbit around the sun in 1971, but an error sent it on a complicated unstable orbit of the earth, moon, and sun. It returned briefly to earth in 2002 and is expected to return sometime in the 2040s.</p>
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2021/12/28/lagrange-points-l1-and-l2/'>Finding Lagrange points</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2021/12/30/stable-lagrange-points/'>When are Lagrange points stable?</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2024/01/27/butterflies-dont-work-that-way/'>Bad takes on chaos theory</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/08/artemis-1-apollo-12/">A Three- and a Four- Body Problem</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Toffoli gates are all you need</title>
		<link>https://www.johndcook.com/blog/2026/04/06/tofolli-gates/</link>
					<comments>https://www.johndcook.com/blog/2026/04/06/tofolli-gates/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 00:33:23 +0000</pubDate>
				<category><![CDATA[Computing]]></category>
		<category><![CDATA[Information theory]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246950</guid>

					<description><![CDATA[<p>Landauer&#8217;s principle gives a lower bound on the amount of energy it takes to erase one bit of information: E ≥ log(2) kB T where kB is the Boltzmann constant and T is the ambient temperature in Kelvin. The lower bound applies no matter how the bit is physically stored. There is no theoretical lower [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/06/tofolli-gates/">Toffoli gates are all you need</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Landauer&#8217;s principle gives a lower bound on the amount of energy it takes to erase one bit of information:</p>
<p style="padding-left: 40px;"><em>E</em> ≥ log(2) <em>k</em><sub><em>B</em></sub> <em>T</em></p>
<p>where <em>k</em><sub><em>B</em></sub> is the Boltzmann constant and <em>T</em> is the ambient temperature in Kelvin. The lower bound applies no matter how the bit is physically stored. There is no theoretical lower limit on the energy required to carry out a reversible calculation.</p>
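<p>To put a number on the bound: at room temperature the Landauer limit is on the order of 10<sup>−21</sup> joules per erased bit. A quick calculation, taking <em>T</em> = 300 K for illustration:</p>

```python
from math import log

k_B = 1.380649e-23      # Boltzmann constant in J/K (exact in the 2019 SI)
T = 300                 # room temperature in kelvins
E = log(2) * k_B * T    # Landauer bound per erased bit
# E is about 2.9e-21 joules.
```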
<p>In practice the energy required to erase a bit is around a billion times greater than Landauer&#8217;s lower bound. You might reasonably conclude that reversible computing isn&#8217;t practical since we&#8217;re nowhere near the Landauer limit. And yet reversible circuits have been demonstrated to use less energy than conventional circuits: we&#8217;re far from the ultimate physical limit, but reversibility already provides practical efficiency gains.</p>
<p>A Toffoli gate is a building block of reversible circuits. A Toffoli gate takes three bits as input and returns three bits as output:</p>
<p style="padding-left: 40px;"><em>T</em>(<em>a</em>, <em>b</em>, <em>c</em>) = (<em>a</em>, <em>b</em>, <em>c</em> XOR (<em>a</em> AND <em>b</em>)).</p>
<p>In words, a Toffoli gate flips its third bit if and only if the first two bits are ones.</p>
<p>A Toffoli gate is its own inverse, and so it is reversible. This is easy to prove. If <em>a</em> = <em>b</em> = 1, then the third bit is flipped. Applying the Toffoli gate again flips the bit back to what it was. If <em>ab</em> = 0, i.e. at least one of the first two bits is zero, then the Toffoli gate doesn&#8217;t change anything.</p>
<p>There is a theorem that any Boolean function can be computed by a circuit made of only NAND gates. We&#8217;ll show that you can construct a NAND gate out of Toffoli gates, which shows any Boolean function can be computed by a circuit made of Toffoli gates, which shows any Boolean function can be computed reversibly.</p>
<p>To compute NAND, i.e. ¬ (<em>a</em> ∧ <em>b</em>), send (<em>a</em>, <em>b</em>, 1) to the Toffoli gate. The third bit of the output will contain the NAND of <em>a</em> and <em>b</em>.</p>
<p style="padding-left: 40px;"><em>T</em>(<em>a</em>, <em>b</em>, 1) = (<em>a</em>, <em>b</em>, ¬ (<em>a</em> ∧ <em>b</em>))</p>
<p>A drawback of reversible computing is that you may have to send in more input than you&#8217;d like and get back more output than you&#8217;d like, as we can already see from the example above. NAND takes two input bits and returns one output bit. But the Toffoli gate simulating NAND takes three input bits and returns three output bits.</p>
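<p>Both the self-inverse property and the NAND construction are easy to check exhaustively. A sketch in Python (the function names are mine, chosen for illustration):</p>

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli gate: flip the third bit iff the first two bits are both 1."""
    return (a, b, c ^ (a & b))

# The gate is its own inverse: applying it twice returns the input.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

def nand(a, b):
    """NAND via a Toffoli gate with the third input fixed to 1."""
    return toffoli(a, b, 1)[2]

truth_table = {(a, b): nand(a, b) for a, b in product((0, 1), repeat=2)}
# truth_table == {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```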
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2021/05/03/self-reproducing-cellular-automata/">Fredkin automata</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2025/07/31/machine-learning-by-satisfiability-solving/">Machine learning by satisfiability solving</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2020/11/19/minimizing-boolean-expressions/">Minimizing Boolean expressions</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/06/tofolli-gates/">Toffoli gates are all you need</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/06/tofolli-gates/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>HIPAA compliant AI</title>
		<link>https://www.johndcook.com/blog/2026/04/05/hipaa-compliant-ai/</link>
					<comments>https://www.johndcook.com/blog/2026/04/05/hipaa-compliant-ai/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Sun, 05 Apr 2026 23:04:46 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Privacy]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246947</guid>

					<description><![CDATA[<p>The best way to run AI and remain HIPAA compliant is to run it locally on your own hardware, instead of transferring protected health information (PHI) to a remote server by using a cloud-hosted service like ChatGPT or Claude. [1]. There are HIPAA-compliant cloud options, but they&#8217;re both restrictive and expensive. Even enterprise options are [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/05/hipaa-compliant-ai/">HIPAA compliant AI</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The best way to run AI and remain HIPAA compliant is to run it locally on your own hardware, instead of transferring protected health information (PHI) to a remote server by using a cloud-hosted service like ChatGPT or Claude [1].</p>
<p>There are HIPAA-compliant cloud options, but they&#8217;re both restrictive and expensive. Even enterprise options are not &#8220;HIPAA compliant&#8221; out of the box. Instead, they are &#8220;HIPAA eligible,&#8221; or they &#8220;support HIPAA compliance,&#8221; because you still need the right Business Associate Agreement (BAA), configuration, logging, access controls, and internal processes around them, and the end product often ends up far less capable than a frontier model. The least expensive and therefore most accessible services do not offer a BAA at all.</p>
<p>Specific examples:</p>
<ul>
<li style="margin-bottom: 20px;">Only sales-managed ChatGPT Enterprise or Edu customers are eligible for a BAA, and OpenAI explicitly says it does not offer a BAA for ChatGPT Business. The consumer ChatGPT Health product says HIPAA and BAAs do not apply. ChatGPT for Healthcare pricing is based on ChatGPT Enterprise, depends on organization size and deployment needs, and requires contacting sales. Even within Enterprise, OpenAI&#8217;s Regulated Workspace <a href="https://cdn.openai.com/osa/chatgpt-regulated-workspace.pdf">spec</a> lists Codex and the multi-step Agent feature as &#8220;Non-Included Functionality,&#8221; i.e. off limits for PHI.</li>
<li style="margin-bottom: 20px;">Google says Gemini can support HIPAA workloads in Workspace, but NotebookLM is not covered by Google&#8217;s BAA, and Gemini in Chrome is automatically blocked for BAA customers. If a work or school account does not have enterprise-grade data protections, chats in the Gemini app may be reviewed by humans and used to improve Google&#8217;s products.</li>
<li style="margin-bottom: 20px;">GitHub Copilot, despite being a Microsoft product, is not under Microsoft&#8217;s BAA. Azure OpenAI Service is, but only for text endpoints. While Microsoft is working on their own models, it is unlikely that they will deviate significantly here.</li>
<li style="margin-bottom: 20px;">Anthropic says its BAA covers only certain &#8220;HIPAA-ready&#8221; services, namely the first-party API and a HIPAA-ready Enterprise plan, and does not cover Claude Free, Pro, Max, Team, Workbench, Console, Cowork, or Claude for Office. The HIPAA-ready Enterprise offering is sales-assisted only. Bundled Claude Code seats are not covered. AWS Bedrock API calls can work, but this requires extensive configuration and carries its own complexities and restrictions.</li>
</ul>
<p>Running AI locally is already practical as of early 2026. Open-weight models that approach the quality of commercial coding assistants run on consumer hardware. A single high-end GPU or a recent Mac with enough unified memory can run a 70B-parameter model at a reasonable token speed.</p>
<p>There&#8217;s an interesting interplay between economies of scale and diseconomies of scale. Cloud providers can run a data center at a lower cost per server than a small company can. That&#8217;s the economies of scale. But running HIPAA-compliant computing in the cloud, particularly with AI providers, incurs large direct costs and indirect bureaucratic costs. That&#8217;s the diseconomies of scale. Smaller companies may benefit more from local AI than larger companies if they need to be HIPAA-compliant.</p>
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/expert-hipaa-deidentification/'>HIPAA expert determination</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2026/03/02/an-ai-odyssey-part-1-correctness-conundrum/'>An AI Odyssey</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2022/01/15/queueing-and-scale/'>Queueing and economies of scale</a></li>
</ul>
<p>[1] This post is not legal advice. My clients are often lawyers, but I&#8217;m not a lawyer.</p>The post <a href="https://www.johndcook.com/blog/2026/04/05/hipaa-compliant-ai/">HIPAA compliant AI</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/05/hipaa-compliant-ai/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Kalman and Bayes average grades</title>
		<link>https://www.johndcook.com/blog/2026/04/04/kalman-bayes/</link>
					<comments>https://www.johndcook.com/blog/2026/04/04/kalman-bayes/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Sat, 04 Apr 2026 15:00:14 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Kalman filter]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246946</guid>

					<description><![CDATA[<p>This post will look at the problem of updating an average grade as a very simple special case of Bayesian statistics and of Kalman filtering. Suppose you&#8217;re keeping up with your average grade in a class, and you know your average after n tests, all weighted equally. m = (x1 + x2 + x3 + [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/04/kalman-bayes/">Kalman and Bayes average grades</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>This post will look at the problem of updating an average grade as a very simple special case of Bayesian statistics and of Kalman filtering.</p>
<p>Suppose you&#8217;re keeping up with your average grade in a class, and you know your average after <em>n</em> tests, all weighted equally.</p>
<p style="padding-left: 40px;"><em>m</em> = (<em>x</em><sub>1</sub> + <em>x</em><sub>2</sub> + <em>x</em><sub>3</sub> + … + <em>x</em><sub><em>n</em></sub>) / <em>n</em>.</p>
<p>Then you get another test grade back and your new average is</p>
<p style="padding-left: 40px;"><em>m</em>′ = (<em>x</em><sub>1</sub> + <em>x</em><sub>2</sub> + <em>x</em><sub>3</sub> + … + <em>x</em><sub><em>n</em></sub> + <em>x</em><sub><em>n</em>+1</sub>) / (<em>n</em> + 1).</p>
<p>You don&#8217;t need the individual test grades once you&#8217;ve computed the average; you can instead remember the average <em>m</em> and the number of grades <em>n</em> [1]. Then you know the sum of the first <em>n</em> grades is <em>nm</em> and so</p>
<p style="padding-left: 40px;"><em>m</em>′ = (<em>nm</em> + <em>x</em><sub><em>n</em>+1</sub>) / (<em>n</em> + 1).</p>
<p>You could split that into</p>
<p style="padding-left: 40px;"><em>m</em>′ = <em>w</em><sub>1</sub> <em>m</em> + <em>w</em><sub>2</sub> <em>x</em><sub><em>n</em>+1</sub></p>
<p>where <em>w</em><sub>1</sub> = <em>n</em>/(<em>n</em> + 1) and <em>w</em><sub>2</sub> = 1/(<em>n</em> + 1). In other words, the new mean is the weighted average of the previous mean and the new score.</p>
<p>A <strong>Bayesian</strong> perspective would say that your posterior expected grade <em>m</em>′ is a compromise between your prior expected grade <em>m</em> and the new data <em>x</em><sub><em>n</em>+1</sub>. [2]</p>
<p>You could also rewrite the equation above as</p>
<p style="padding-left: 40px;"><em>m</em>′ = <em>m</em> + (<em>x</em><sub><em>n</em>+1</sub> − <em>m</em>)/(<em>n</em> + 1) = <em>m</em> + <em>K</em>Δ</p>
<p>where <em>K</em> = 1/(<em>n</em> + 1) and Δ = <em>x</em><sub><em>n</em>+1</sub> − <em>m</em>. In <strong>Kalman</strong> filter terms, <em>K</em> is the gain, the proportionality constant for how the change in your state is proportional to the difference between what you saw and what you expected.</p>
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2011/09/27/bayesian-amazon/">A Bayesian view of Amazon Resellers</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2016/07/14/kalman-filters-and-functional-programming/">Kalman filters and functional programming</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2016/05/24/kalman-filters-and-bottom-up-learning/">Kalman filters and bottom-up learning</a></li>
</ul>
<p>[1] In statistical terms, the mean is a <a href="https://www.johndcook.com/blog/2016/09/12/insufficient-statistics/">sufficient statistic</a>.</p>
<p>[2] You could flesh this out by using a normal likelihood and a flat improper prior.</p>The post <a href="https://www.johndcook.com/blog/2026/04/04/kalman-bayes/">Kalman and Bayes average grades</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/04/kalman-bayes/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Roman moon, Greek moon</title>
		<link>https://www.johndcook.com/blog/2026/04/03/roman-moon-greek-moon/</link>
					<comments>https://www.johndcook.com/blog/2026/04/03/roman-moon-greek-moon/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 16:31:54 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Orbital mechanics]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246945</guid>

					<description><![CDATA[<p>I used the term perilune in yesterday&#8217;s post about the flight path of Artemis II. When Artemis is closest to the moon it will be furthest from earth because its closest approach to the moon, its perilune, is on the side of the moon opposite earth. Perilune is sometimes called periselene. The two terms come from [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/03/roman-moon-greek-moon/">Roman moon, Greek moon</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>I used the term <strong>perilune</strong> in <a href="https://www.johndcook.com/blog/2026/04/02/artemis-apollo/">yesterday&#8217;s post</a> about the flight path of Artemis II. When Artemis is <em>closest</em> to the moon it will be <em>furthest</em> from earth because its closest approach to the moon, its perilune, is on the side of the moon opposite earth.</p>
<p>Perilune is sometimes called <strong>periselene</strong>. The two terms come from two goddesses associated with the moon, the Roman Luna and the Greek Selene. Since the peri- prefix is Greek, perhaps periselene would be preferable. But words associated with the moon are far more often based on Luna than on Selene.</p>
<p>The neutral terms for the closest and furthest points in an orbit are <strong>periapsis</strong> and <strong>apoapsis</strong>, but there are more colorful terms that are specific to orbiting particular celestial objects. The terms <strong>perigee</strong> and <strong>apogee</strong> for orbiting earth (from the Greek Gaia) are most familiar, and the terms <strong>perihelion</strong> and <strong>aphelion</strong> (not apohelion) for orbiting the sun (from the Greek Helios) are the next most familiar.</p>
<p>The terms <strong>perijove</strong> and <strong>apojove</strong> are unfamiliar, but you can imagine what they mean. Others like <strong>periareion</strong> and <strong>apoareion</strong>, especially the latter, are truly arcane.</p>The post <a href="https://www.johndcook.com/blog/2026/04/03/roman-moon-greek-moon/">Roman moon, Greek moon</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/03/roman-moon-greek-moon/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Hyperbolic version of Napier&#8217;s mnemonic</title>
		<link>https://www.johndcook.com/blog/2026/04/02/hyperbolic-napier-mnemonic/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 01:38:49 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Geometry]]></category>
		<category><![CDATA[Memory]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246944</guid>

					<description><![CDATA[<p>I was looking through an old geometry book [1] and saw a hyperbolic analog of Napier&#8217;s mnemonic for spherical trigonometry. In hindsight of course there&#8217;s a hyperbolic analog: there&#8217;s a hyperbolic analog of everything. But I was surprised because I&#8217;d never thought of this before. I suppose the spherical version is famous because of its [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/02/hyperbolic-napier-mnemonic/">Hyperbolic version of Napier’s mnemonic</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>I was looking through an old geometry book [1] and saw a hyperbolic analog of Napier&#8217;s mnemonic for spherical trigonometry. In hindsight of course there&#8217;s a hyperbolic analog: there&#8217;s a hyperbolic analog of everything. But I was surprised because I&#8217;d never thought of this before. I suppose the spherical version is famous because of its practical use in navigational calculations, while the hyperbolic analog is of more theoretical interest.</p>
<p><a href="https://www.johndcook.com/blog/2012/12/23/napiers-mnemonic/">Napier&#8217;s mnemonic</a> is a clever way to remember 10 equations in spherical trig. See the linked post for the meanings of the variables.</p>
<p style="padding-left: 40px;">sin <em>a</em> = sin <em>A</em> sin <em>c</em> = tan <em>b</em> cot <em>B</em><br />
sin <em>b</em> = sin <em>B</em> sin <em>c</em> = tan <em>a</em> cot <em>A</em><br />
cos <em>A</em> = cos <em>a</em> sin <em>B</em> = tan <em>b</em> cot <em>c</em><br />
cos <em>B</em> = cos <em>b</em> sin <em>A</em> = tan <em>a</em> cot <em>c</em><br />
cos <em>c</em> = cot <em>A</em> cot <em>B</em> = cos <em>a</em> cos <em>b</em></p>
<p>The hyperbolic analog replaces every circular function of <em>a</em>, <em>b</em>, or <em>c</em> with its hyperbolic counterpart.</p>
<p style="padding-left: 40px;">sinh <em>a</em> = sin <em>A</em> sinh <em>c</em> = tanh <em>b</em> cot <em>B</em><br />
sinh <em>b</em> = sin <em>B</em> sinh <em>c</em> = tanh <em>a</em> cot <em>A</em><br />
cos <em>A</em> = cosh <em>a</em> sin <em>B</em> = tanh <em>b</em> coth <em>c</em><br />
cos <em>B</em> = cosh <em>b</em> sin <em>A</em> = tanh <em>a</em> coth <em>c</em><br />
cosh <em>c</em> = cot <em>A</em> cot <em>B</em> = cosh <em>a</em> cosh <em>b</em></p>
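<p>The hyperbolic identities can be checked numerically. The sketch below constructs a hyperbolic right triangle from two legs, using the hyperbolic Pythagorean theorem and the hyperbolic law of sines; the particular leg lengths are arbitrary.</p>

```python
from math import sin, cos, asin, sinh, cosh, tanh, acosh

a, b = 0.7, 1.2                      # legs, with the right angle at C
c = acosh(cosh(a) * cosh(b))         # hyperbolic Pythagorean theorem
A = asin(sinh(a) / sinh(c))          # hyperbolic law of sines
B = asin(sinh(b) / sinh(c))

cot = lambda x: cos(x) / sin(x)
coth = lambda x: cosh(x) / sinh(x)

residuals = [
    sinh(a) - sin(A) * sinh(c),
    sinh(a) - tanh(b) * cot(B),
    cos(A) - cosh(a) * sin(B),
    cos(A) - tanh(b) * coth(c),
    cosh(c) - cot(A) * cot(B),
    cosh(c) - cosh(a) * cosh(b),
]
# All residuals are zero to floating point precision.
```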
<p>[1] D. M. Y. Sommerville. The Elements of Non-Euclidean Geometry. 1919.</p>The post <a href="https://www.johndcook.com/blog/2026/04/02/hyperbolic-napier-mnemonic/">Hyperbolic version of Napier’s mnemonic</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artemis II, Apollo 8, and Apollo 13</title>
		<link>https://www.johndcook.com/blog/2026/04/02/artemis-apollo/</link>
					<comments>https://www.johndcook.com/blog/2026/04/02/artemis-apollo/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 14:14:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Orbital mechanics]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246941</guid>

					<description><![CDATA[<p>The Artemis II mission launched yesterday. Much like the Apollo 8 mission in 1968, the goal is to go around the moon in preparation for a future mission that will land on the moon. And like Apollo 13, the mission will swing around the moon rather than entering lunar orbit. Artemis II will deliberately follow [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/02/artemis-apollo/">Artemis II, Apollo 8, and Apollo 13</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The Artemis II mission launched yesterday. Much like the Apollo 8 mission in 1968, the goal is to go around the moon in preparation for a future mission that will land on the moon. And like Apollo 13, the mission will swing around the moon rather than entering lunar orbit. Artemis II will deliberately follow the trajectory around the moon that Apollo 13 took as a fallback. </p>
<p>Apollo 8 spent 2 hours and 44 minutes in low earth orbit (LEO) before performing trans-lunar injection (TLI) and heading toward the moon. Artemis II made one low earth orbit before moving to high earth orbit (HEO) where it will stay for around 24 hours before TLI. The Apollo 8 LEO was essentially circular at an altitude of around 100 nautical miles. The Artemis II HEO is highly eccentric with an apogee of around 40,000 nautical miles.</p>
<p>Apollo 8 spent roughly three days traveling to the moon, measured as the time between TLI and lunar insertion orbit. Artemis II will not orbit the moon but instead swing past the moon on a &#8220;lunar free-return trajectory&#8221; like Apollo 13. The time between Artemis&#8217; TLI and perilune (the closest approach to the moon, on the far side) is expected to be about four days. For Apollo 13, this period was three days.</p>
<p><img loading="lazy" decoding="async" src="https://www.johndcook.com/artemis.png" width="500" height="268" class="aligncenter size-medium" /></p>
<p>The furthest any human has been from earth was the Apollo 13 perilune at about 60 nautical miles above the far side of the moon. Artemis is expected to break this record with a perilune of between 3,500 and 5,200 nautical miles.</p>
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2022/12/15/sphere-of-infuence/'>Sphere of influence</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2020/02/08/arenstorf-orbit/'> Arenstorf&#8217;s orbit</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2020/06/12/new-math-for-going-to-the-moon/'>Math developed for going to the moon</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/02/artemis-apollo/">Artemis II, Apollo 8, and Apollo 13</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/04/02/artemis-apollo/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Pentagonal numbers are truncated triangular numbers</title>
		<link>https://www.johndcook.com/blog/2026/04/01/truncated-triangular-numbers/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 13:23:42 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Number theory]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246933</guid>

					<description><![CDATA[<p>Pentagonal numbers are truncated triangular numbers. You can take the diagram that illustrates the nth pentagonal number and warp it into the base of the image that illustrates the (2n − 1)st triangular number. If you added a diagram for the (n − 1)st triangular number to the bottom of the image on the right, you&#8217;d [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/04/01/truncated-triangular-numbers/">Pentagonal numbers are truncated triangular numbers</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Pentagonal numbers are truncated triangular numbers. You can take the diagram that illustrates the <em>n</em>th pentagonal number and warp it into the base of the image that illustrates the (2<em>n</em> − 1)st triangular number. If you added a diagram for the (<em>n</em> − 1)st triangular number to the bottom of the image on the right, you&#8217;d have a diagram for the (2<em>n</em> − 1)st triangular number.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" style="background-color: white;" src="https://www.johndcook.com/pent_trunk.svg" width="587" height="224" /></p>
<p>In short,</p>
<p style="padding-left: 40px;"><em>P</em><sub><em>n</em></sub> = <em>T</em><sub>2<em>n</em> − 1</sub> − <em>T</em><sub><em>n</em> − 1</sub>.</p>
<p>This is trivial to prove algebraically, though the visual proof above is more interesting.</p>
<p>The proof follows immediately from the definition of pentagonal numbers</p>
<p style="padding-left: 40px;"><em>P</em><sub><em>n</em></sub> = (3<em>n</em>² − <em>n</em>)/2</p>
<p>and triangular numbers</p>
<p style="padding-left: 40px;"><em>T</em><sub><em>n</em></sub> = (<em>n</em>² + <em>n</em>)/2.</p>
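<p>The identity is easy to check numerically. Here is a minimal sketch, using the conventional formulas <em>P</em><sub><em>n</em></sub> = (3<em>n</em>² − <em>n</em>)/2 and <em>T</em><sub><em>n</em></sub> = <em>n</em>(<em>n</em> + 1)/2:</p>

```python
# Verify P_n = T_{2n-1} - T_{n-1} for the first hundred pentagonal numbers
def P(n):
    return (3*n**2 - n) // 2   # nth pentagonal number

def T(n):
    return (n**2 + n) // 2     # nth triangular number

for n in range(1, 101):
    assert P(n) == T(2*n - 1) - T(n - 1)

print(P(5))  # 35
```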
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2021/11/10/partitions-and-pentagons/'>Partitions and pentagons</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2018/06/07/tetrahedral-numbers-2/'>Tetrahedral numbers</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2023/04/14/euclid-xiii-10/'>A pentagon, hexagon, and decagon walk into a bar &#8230;</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/04/01/truncated-triangular-numbers/">Pentagonal numbers are truncated triangular numbers</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Quantum Y2K</title>
		<link>https://www.johndcook.com/blog/2026/03/31/quantum-y2k/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 14:43:06 +0000</pubDate>
				<category><![CDATA[Computing]]></category>
		<category><![CDATA[Cryptography]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246932</guid>

					<description><![CDATA[<p>I&#8217;m skeptical that quantum computing will become practical. However, if it does become practical before we&#8217;re prepared, the world&#8217;s financial system could collapse. Everyone agrees we should prepare for quantum computing, even those of us who doubt it will be practical any time soon. Quantum computers exist now, but the question is when and if [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/31/quantum-y2k/">Quantum Y2K</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m skeptical that quantum computing will become practical. However, if it does become practical before we&#8217;re prepared, the world&#8217;s financial system could collapse. Everyone agrees we should prepare for quantum computing, even those of us who doubt it will be practical any time soon.</p>
<p>Quantum computers exist now, but the question is if and when a cryptographically relevant quantum computer (CRQC) is coming. At the moment, a quantum computer cannot factor 21 without cheating (i.e. not implementing circuits that you know <em>a priori</em> won&#8217;t be needed). But that could change suddenly. And some believe quantum computers could quickly go from being able to factor numbers with two digits to being able to factor numbers with thousands of digits (i.e. breaking RSA encryption) with little incremental transition in between.</p>
<p>The move to post-quantum encryption may be a lot like Y2K, fixing vast amounts of 20th century software that represented years with two digits. Y2K turned out to be a big nothingburger, but only because the world spent half a trillion dollars on preparation to make sure it would be a big nothingburger.</p>
<p>Programmers in the 1970s obviously knew that the year 2000 was coming, but they also knew that they needed to conserve bytes at the time. And they assumed, reasonably but incorrectly, that their software would all be replaced before two-digit dates became a problem.</p>
<p>Programmers still need to conserve bytes, though this is less obvious today. Quantum-resistant signatures and encryption keys are one or two orders of magnitude bigger. This takes up bandwidth and storage space, which may or may not be a significant problem, depending on context. Programmers may conclude that it&#8217;s not (yet) worth the extra overhead to use post-quantum encryption. Like their counterparts 50 years ago, they may assume, rightly or wrongly, that their software will be replaced by the time it needs to be.</p>
<p>Moving to post-quantum cryptography ASAP is not a great idea if you can afford to be more strategic. It takes many years to gain confidence that new encryption algorithms are secure. The SIKE algorithm, for example, was a semi-finalist in the NIST post-quantum encryption competition, but someone found a way to break it using an hour of computing on a laptop.</p>
<p>Another reason to not be in a hurry is that it may be possible to be more clever than simply replacing pre-quantum algorithms with post-quantum analogs. For example, some blockchains are exploring zero-knowledge proofs as a way to aggregate signatures. Simply moving to post-quantum signatures could make every transaction block 100 times bigger. But by replacing a set of signatures with a (post-quantum) zero-knowledge proof of the existence of the signatures, transaction blocks could be made <em>smaller</em> than they are now.</p>
<p>As with Y2K, the move to post-quantum cryptography will be gradual. Some things have already moved, and some are in transition now. You may have seen the following warning when connecting to a remote server.</p>
<pre>** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html
</pre>
<p>Key sizes don&#8217;t matter as much to <code>sftp</code> connections as they do to blockchains. And the immaturity of post-quantum algorithms is mitigated by OpenSSH using hybrid encryption: well-established encryption (like ECDH) wrapped by newer quantum-resistant encryption (like ML-KEM). If the newer algorithm isn&#8217;t as secure as expected, you&#8217;re no worse off than if you had only used the older algorithm.</p>
<p>When clocks rolled over from 1999 to 2000 without incident, many people felt the concern about Y2K had been overblown. Maybe something similar will happen with quantum computing. Let&#8217;s hope so.</p>
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2019/03/23/code-based-cryptography/">Mixing error-correcting codes and cryptography</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2019/02/22/regression-modular-arithmetic-and-pqc/">Regression and PQC</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/crypto/">Blockchains and cryptocurrency</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/03/31/quantum-y2k/">Quantum Y2K</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Morse code tree</title>
		<link>https://www.johndcook.com/blog/2026/03/31/morse-code-tree/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 12:12:31 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Morse code]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246931</guid>

					<description><![CDATA[<p>Peter Vogel posted the following image on X yesterday. The receive side of the coin is a decision tree for decoding Morse code. The shape is what makes this one interesting. Decision trees are typically not very compact. Each branch is usually on its own horizontal level, with diagonal lines going down from each node [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/31/morse-code-tree/">Morse code tree</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p><a href="https://x.com/PeterVogel/status/2038637417868267567">Peter Vogel</a> posted the following image on X yesterday.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/morse_code_tree.jpeg" width="400" height="659" /></p>
<p>The receive side of the coin is a decision tree for decoding Morse code. The shape is what makes this one interesting.</p>
<p>Decision trees are typically not very compact. Each branch is usually on its own horizontal level, with diagonal lines going down from each node to its children. But by making the lines either horizontal or vertical, the tree fits nicely into a circle.</p>
<p>I thought for a second that the designer had made the choices of horizontal or vertical segments in order to make the tree compact, but that&#8217;s not so. The direction of the path through the tree changes when and only when the Morse code switches from dot to dash or dash to dot.</p>
<p>It would be fun to play around with this, using the same design idea for other binary trees.</p>
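<p>As a starting point, here is a rough sketch in Python of the underlying binary decision tree (the decoding logic only, not the circular layout). Each dot or dash selects a branch, and the letter sits at the node where the code ends. The table covers just the 26 letters.</p>

```python
# International Morse code for the letters A-Z
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def build_tree():
    # Each node is a dict with '.' and '-' branches and possibly a letter
    root = {}
    for letter, code in MORSE.items():
        node = root
        for symbol in code:
            node = node.setdefault(symbol, {})
        node['letter'] = letter
    return root

def decode(code, tree):
    # Walk the tree one dot/dash at a time, then read off the letter
    node = tree
    for symbol in code:
        node = node[symbol]
    return node['letter']

tree = build_tree()
word = ''.join(decode(c, tree) for c in '... --- ...'.split())
print(word)  # SOS
```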
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2016/05/03/family-tree-numbering/'>Family tree numbering</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2022/02/21/q-codes-in-seveneves/'>Q code tree</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2025/04/23/qrq/'>Morse code and psychological limits</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/03/31/morse-code-tree/">Morse code tree</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>An AI Odyssey, Part 3: Lost Needle in the Haystack</title>
		<link>https://www.johndcook.com/blog/2026/03/27/an-ai-odyssey-part-3-lost-needle-in-the-haystack/</link>
		
		<dc:creator><![CDATA[Wayne Joubert]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 16:06:18 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI reliability]]></category>
		<category><![CDATA[Usability]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246929</guid>

					<description><![CDATA[<p>While shopping on a major e-commerce site, I wanted to get an answer to an obscure question about a certain product. Not finding the answer immediately on the product page, I thought I&#8217;d try clicking the AI shopping assistant helper tool to ask this question. I waited with anticipation for an answer I would expect [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/27/an-ai-odyssey-part-3-lost-needle-in-the-haystack/">An AI Odyssey, Part 3: Lost Needle in the Haystack</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>While shopping on a major e-commerce site, I wanted to get an answer to an obscure question about a certain product.</p>
<p>Not finding the answer immediately on the product page, I thought I&#8217;d try clicking the AI shopping assistant helper tool to ask this question.</p>
<p>I waited with anticipation for an answer I expected to be more informative and useful than a standard search result. But it was not to be. The AI tool had nothing worthwhile.</p>
<p>Then I decided on an old-fashioned keyword search across all the product reviews. And, lo and behold, I immediately found several credible reviews addressing my question.</p>
<p>It is not good usability when multiple search mechanisms exist but only one of them is reliable. And it is surprising that a retrieval-based approach (e.g., RAG) could not at least match the effectiveness of a simple keyword search over reviews.</p>
<p>Models are capable, but effective integration can be lacking. Without improvements for cases like this, customers will not be satisfied users of these new AI tools.</p>
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2026/03/02/an-ai-odyssey-part-1-correctness-conundrum/">An AI Odyssey, Part 1: Correctness Conundrum</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2026/03/04/an-ai-odyssey-part-2-prompting-peril/">An AI Odyssey, Part 2: Prompting Peril</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/03/27/an-ai-odyssey-part-3-lost-needle-in-the-haystack/">An AI Odyssey, Part 3: Lost Needle in the Haystack</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Computing sine and cosine of complex arguments with only real functions</title>
		<link>https://www.johndcook.com/blog/2026/03/27/complex-argument/</link>
					<comments>https://www.johndcook.com/blog/2026/03/27/complex-argument/#comments</comments>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 11:33:04 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Complex analysis]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246928</guid>

					<description><![CDATA[<p>Suppose you have a calculator or math library that only handles real arguments but you need to evaluate sin(3 + 4i). What do you do? If you&#8217;re using Python, for example, and you don&#8217;t have NumPy installed, you can use the built-in math library, but it will not accept complex inputs. &#62;&#62;&#62; import math &#62;&#62;&#62; [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/27/complex-argument/">Computing sine and cosine of complex arguments with only real functions</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Suppose you have a calculator or math library that only handles real arguments but you need to evaluate sin(3 + 4<em>i</em>). What do you do?</p>
<p>If you&#8217;re using Python, for example, and you don&#8217;t have NumPy installed, you can use the built-in math library, but it will not accept complex inputs.</p>
<pre>&gt;&gt;&gt; import math
&gt;&gt;&gt; math.sin(3 + 4j)
Traceback (most recent call last):
File "&lt;stdin&gt;", line 1, in &lt;module&gt;
TypeError: must be real number, not complex
</pre>
<p>You can use the following identities to calculate sine and cosine for complex arguments using only real functions.</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/complex_sincos1.svg" alt="\begin{align*} \sin(x + iy) &amp;= \sin x \cosh y + i \cos x \sinh y \\ \cos(x + iy) &amp;= \cos x \cosh y - i \sin x \sinh y \\ \end{align*}" width="324" height="47" /></p>
<p>The proof is very simple: just use the addition formulas for sine and cosine, and the following identities.</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/complex_sincos4.svg" alt="\begin{align*} \sin iz &amp;= i \sinh z \\ \cos iz &amp;= \cosh z \end{align*}" width="122" height="43" /></p>
<p>The following code implements sine and cosine for complex arguments using only the built-in Python functions that accept real arguments. It then tests these against the NumPy versions that accept complex arguments.</p>
<pre>
from math import *
import numpy as np

def complex_sin(z):
    x, y = z.real, z.imag
    return sin(x)*cosh(y) + 1j*cos(x)*sinh(y)

def complex_cos(z):
    x, y = z.real, z.imag
    return cos(x)*cosh(y) - 1j*sin(x)*sinh(y)

z = 3 + 4j
mysin = complex_sin(z)
mycos = complex_cos(z)
npsin = np.sin(z)
npcos = np.cos(z)
assert(abs(mysin - npsin) < 1e-14)
assert(abs(mycos - npcos) < 1e-14)
</pre>
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2013/04/23/why-j-for-imaginary-unit/'>Why <em>j</em> for imaginary unit?</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2021/01/05/bootstrapping-math-library/'>Bootstrapping a minimal math library</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2024/08/20/osborn-rule/'>Osborn's rule</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/03/27/complex-argument/">Computing sine and cosine of complex arguments with only real functions</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.johndcook.com/blog/2026/03/27/complex-argument/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Lebesgue constants</title>
		<link>https://www.johndcook.com/blog/2026/03/26/lebesgue-constants/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 20:05:06 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Interpolation]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246925</guid>

					<description><![CDATA[<p>I alluded to Lebesgue constants in the previous post without giving them a name. There I said that the bound on order n interpolation error has the form where h is the spacing between interpolation points and δ is the error in the tabulated values. The constant c depends on the function f being interpolated, and to a [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/26/lebesgue-constants/">Lebesgue constants</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>I alluded to Lebesgue constants in the <a href="https://www.johndcook.com/blog/2026/03/26/table-precision/">previous post</a> without giving them a name. There I said that the bound on order <em>n</em> interpolation error has the form</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst0.svg" alt="ch^{n+1} + \lambda \delta" width="83" height="18" /></p>
<p>where <em>h</em> is the spacing between interpolation points and δ is the error in the tabulated values. The constant <em>c</em> depends on the function <em>f</em> being interpolated, and to a lesser extent on <em>n</em>. The constant λ is independent of <em>f</em> but depends on <em>n</em> and on the relative spacing between the interpolation nodes. This post will look closer at λ.</p>
<p>Given a set of <em>n</em> + 1 nodes <em>T</em></p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst1.svg" alt="a = x_0 &lt; x_1 &lt; x_2 &lt; \cdots &lt; x_{n-1} &lt; x_n = b" width="318" height="16" /></p>
<p>define</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst2.svg" alt="\ell_j(x) := \prod_{\begin{smallmatrix}i=0\\ j\neq i\end{smallmatrix}}^{n} \frac{x-x_i}{x_j-x_i}" width="155" height="69" /></p>
<p>Then the Lebesgue function is defined by</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst3.svg" alt="\lambda_n(x) = \sum_{j=0}^n |\ell_j(x)|" width="148" height="57" /></p>
<p>and the Lebesgue constant for the grid is the maximum value of the Lebesgue function</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst4.svg" alt="\Lambda_n(T)=\max_{x\in[a,b]} \lambda_n(x)" width="165" height="30" /></p>
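<p>The definition above can be evaluated numerically by brute force: sum the absolute values of the cardinal functions ℓ<sub><em>j</em></sub> on a fine grid and take the maximum. Here is a sketch for <em>n</em> = 10, comparing evenly spaced and Chebyshev nodes (the grid resolution is an approximation, not a rigorous maximization):</p>

```python
import numpy as np

def lebesgue_constant(nodes, m=20001):
    # Approximate Lambda_n: max over [x_0, x_n] of sum_j |ell_j(x)|
    x = np.linspace(nodes.min(), nodes.max(), m)
    lam = np.zeros_like(x)
    for j, xj in enumerate(nodes):
        ell = np.ones_like(x)
        for i, xi in enumerate(nodes):
            if i != j:
                ell *= (x - xi) / (xj - xi)  # Lagrange cardinal function
        lam += np.abs(ell)
    return lam.max()

n = 10
equal = np.linspace(-1, 1, n + 1)                           # evenly spaced
cheb = np.cos((2*np.arange(n + 1) + 1)*np.pi / (2*n + 2))   # Chebyshev roots

print(lebesgue_constant(equal))  # roughly 30
print(lebesgue_constant(cheb))   # roughly 2.5
```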
<p>The values of Λ are difficult to compute, but there are nice asymptotic expressions for Λ when the grid is evenly spaced:</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst5.svg" alt="\Lambda_n \sim \frac{2^{n+1}}{n \log n}" width="100" height="48" /></p>
<p>When the grid points are at the roots of a Chebyshev polynomial then</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/lebesgueconst6.svg" alt="\Lambda_n \approx \frac{2}{\pi} \log(n + 1) + 1" width="177" height="40" /></p>
<p>The previous post mentioned the cases <em>n</em> = 11 and <em>n</em> = 29 for evenly spaced grids. The corresponding values of Λ are approximately 155 and 10995642. So 11th order interpolation is amplifying the rounding error in the tabulated points by a factor of 155, which might be acceptable. But 29th order interpolation is amplifying the rounding error by a factor of over 10 million.</p>
<p>The corresponding values of Λ for Chebyshev-spaced nodes are 2.58 and 3.17. Chebyshev spacing is clearly better for high-order interpolation, when you have that option.</p>The post <a href="https://www.johndcook.com/blog/2026/03/26/lebesgue-constants/">Lebesgue constants</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How much precision can you squeeze out of a table?</title>
		<link>https://www.johndcook.com/blog/2026/03/26/table-precision/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 14:33:37 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Interpolation]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246924</guid>

					<description><![CDATA[<p>Richard Feynman said that almost everything becomes interesting if you look into it deeply enough. Looking up numbers in a table is certainly not interesting, but it becomes more interesting when you dig into how well you can fill in the gaps. If you want to know the value of a tabulated function between values [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/26/table-precision/">How much precision can you squeeze out of a table?</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Richard Feynman said that almost everything becomes interesting if you look into it deeply enough. Looking up numbers in a table is certainly not interesting, but it becomes more interesting when you dig into how well you can fill in the gaps.</p>
<p>If you want to know the value of a tabulated function between values of <em>x</em> given in the table, you have to use interpolation. Linear interpolation is often adequate, but you could get more accurate results using higher-order interpolation.</p>
<p>Suppose you have a function <em>f</em>(<em>x</em>) tabulated at <em>x</em> = 3.00, 3.01, 3.02, …, 3.99, 4.00 and you want to approximate the value of the function at π. You could approximate <em>f</em>(π) using the values of <em>f</em>(3.14) and <em>f</em>(3.15) with linear interpolation, but you could also take advantage of more points in the table. For example, you could use cubic interpolation to calculate <em>f</em>(π) using <em>f</em>(3.13), <em>f</em>(3.14), <em>f</em>(3.15), and <em>f</em>(3.16). Or you could use 29th degree interpolation with the values of <em>f</em> at 3.00, 3.01, 3.02, …, 3.29.</p>
<p>The Lagrange interpolation theorem lets you compute an upper bound on your interpolation error. However, the theorem assumes the values at each of the tabulated points are exact. And for ordinary use, you can assume the tabulated values are exact. The biggest source of error is typically the size of the gap between tabulated <em>x</em> values, not the precision of the tabulated values. Tables were designed so this is true [1].</p>
<p>The bound on order <em>n</em> interpolation error has the form</p>
<p style="padding-left: 40px;"><em>c </em><em>h</em><sup><em>n</em> + 1</sup> + λ δ</p>
<p>where <em>h</em> is the spacing between interpolation points and δ is the error in the tabulated values. The value of <em>c</em> depends on the derivatives of the function you&#8217;re interpolating [2]. The value of λ is at least 1 since λδ is the &#8220;interpolation&#8221; error at the tabulated points.</p>
<p>The accuracy of an interpolated value cannot be better than δ in general, and so you pick the value of <em>n</em> that makes <em>c </em><em>h</em><sup><em>n</em> + 1</sup> less than δ. Any higher value of <em>n</em> is not helpful. And in fact higher values of <em>n</em> are harmful since λ grows exponentially as a function of <em>n </em>[3].</p>
<p>See the <a href="https://www.johndcook.com/blog/2026/03/26/lebesgue-constants/">next post</a> for mathematical details regarding the λs.</p>
<h2>Examples</h2>
<p>Let&#8217;s look at a specific example. Here&#8217;s a piece of a table for natural logarithms from <a href="https://www.johndcook.com/blog/2017/02/26/function-on-cover-of-abramowitz-stegun/">A&amp;S</a>.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-medium" src="https://www.johndcook.com/AS_111_1.png" width="300" height="244" /></p>
<p>Here <em>h</em> = 10<sup>−3</sup>, and so linear interpolation would give you an error on the order of <em>h</em>² = 10<sup>−6</sup>. You&#8217;re never going to get error less than 10<sup>−15</sup> since that&#8217;s the error in the tabulated values, so 4th order interpolation gives you about as much precision as you&#8217;re going to get. Carefully bounding the error would require using the values of <em>c</em> and λ above that are specific to this context. In fact, the interpolation error is on the order of 10<sup>−8</sup> using 5th order interpolation, and that&#8217;s the best you can do.</p>
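<p>You can see the effect of interpolation order in a small experiment. The sketch below tabulates the natural log at spacing <em>h</em> = 0.001, as in a table like the one above, then compares linear and cubic interpolation at a point between nodes. (The specific numbers are illustrative, not the formal error bounds.)</p>

```python
import numpy as np

h = 0.001
x = np.arange(2.0, 2.1 + h/2, h)   # tabulated arguments, spacing h
y = np.log(x)                       # tabulated values, assumed exact

t = 2.0305                          # point to interpolate at
i = int((t - x[0]) / h)             # index of the left bracketing node

# Linear interpolation from the two bracketing nodes
linear = y[i] + (y[i+1] - y[i]) * (t - x[i]) / h

# Cubic interpolation from the four nearest nodes
coeffs = np.polyfit(x[i-1:i+3], y[i-1:i+3], 3)
cubic = np.polyval(coeffs, t)

print(abs(linear - np.log(t)))  # within the h^2 = 1e-6 scale
print(abs(cubic - np.log(t)))   # several orders of magnitude smaller
```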
<p>I&#8217;ll briefly mention a couple more examples from A&amp;S. The book includes a table of sine values, tabulated to 23 decimal places, in increments of <em>h</em> = 0.001 radians. A rough estimate would suggest 7th order interpolation is as high as you should go, and in fact the book indicates that 7th order interpolation will give you 9 figures of accuracy.</p>
<p>Another table from A&amp;S gives 15-digit values of the Bessel function <em>J</em><sub>0</sub> in increments of <em>h</em> = 0.1. It says that 11th order interpolation will give you four decimal places of precision. In this case, fairly high-order interpolation is useful and even necessary. A large number of decimal places is needed in the tabulated values relative to the output precision because the spacing between points is so wide.</p>
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2024/06/03/using-a-table-of-logarithms/">Using a table of logarithms</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2024/08/18/bessel-everett/">Bessel, Everett, and Lagrange interpolation</a></li>
</ul>
<p>[1] I say <em>were</em> because of course people rarely look up function values in tables anymore. Tables and interpolation are still widely used, just not directly by people; computers do the lookup and interpolation on their behalf.</p>
<p>[2] For functions like sine, the value of <em>c</em> doesn&#8217;t grow with <em>n</em>, and in fact decreases slowly as <em>n</em> increases. But for other functions, <em>c</em> can grow with <em>n</em>, which can cause problems like <a href="https://www.johndcook.com/blog/2017/11/18/runge-phenomena/">Runge phenomena</a>.</p>
<p>[3] The constant λ grows exponentially with <em>n</em> for evenly spaced interpolation points, and values in a table are evenly spaced. The constant λ grows only logarithmically for Chebyshev spacing, but this isn&#8217;t practical for a general purpose table.</p>
					
		
		
			</item>
		<item>
		<title>From Mendeleev to Fourier</title>
		<link>https://www.johndcook.com/blog/2026/03/24/from-mendeleev-to-fourier/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 15:01:35 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Fourier analysis]]></category>
		<category><![CDATA[Inequalities]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246922</guid>

					<description><![CDATA[<p>The previous post looked at an inequality discovered by Dmitri Mendeleev and generalized by Andrey Markov: Theorem (Markov): If P(x) is a real polynomial of degree n, and &#124;P(x)&#124; ≤ 1 on [−1, 1] then &#124;P′(x)&#124; ≤ n² on [−1, 1]. If P(x) is a trigonometric polynomial then Bernstein proved that the bound decreases from n² to n. Theorem [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/24/from-mendeleev-to-fourier/">From Mendeleev to Fourier</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The previous post looked at an inequality discovered by Dmitri Mendeleev and generalized by Andrey Markov:</p>
<blockquote><p><strong>Theorem</strong> (Markov): If <em>P</em>(<em>x</em>) is a real polynomial of degree <em>n</em>, and |<em>P</em>(<em>x</em>)| ≤ 1 on [−1, 1] then |<em>P</em>′(<em>x</em>)| ≤ <em>n</em>² on [−1, 1].</p></blockquote>
<p>If <em>P</em>(<em>x</em>) is a trigonometric polynomial then Bernstein proved that the bound decreases from <em>n</em>² to <em>n</em>.</p>
<p style="padding-left: 40px;"><strong>Theorem</strong> (Bernstein): If <em>P</em>(<em>x</em>) is a trigonometric polynomial of degree <em>n</em>, and |<em>P</em>(<em>x</em>)| ≤ 1 on [−π, π] then |<em>P</em>′(<em>x</em>)| ≤ <em>n</em> on [−π, π].</p>
<p>Now a trigonometric polynomial is a truncated Fourier series</p>
<p><img loading="lazy" decoding="async" class="aligncenter" style="background-color: white;" src="https://www.johndcook.com/trigpoly.svg" alt="T(x) = a_0 + \sum_{n=1}^N a_n \cos nx + \sum_{n=1}^N b_n \sin nx" width="321" height="57" /></p>
<p>and so the max norm of <em>T</em>′ is no more than <em>N</em> times the max norm of <em>T</em>.</p>
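<p>Both bounds are sharp: Markov&#8217;s is attained by the Chebyshev polynomial <em>T</em><sub><em>n</em></sub>, and Bernstein&#8217;s by sin <em>nx</em>. A quick numerical check, sketched with NumPy&#8217;s Chebyshev class:</p>

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 7
Tn = C.Chebyshev.basis(n)          # |T_n| <= 1 on [-1, 1]
x = np.linspace(-1, 1, 10001)

print(np.max(np.abs(Tn(x))))          # 1.0
print(np.max(np.abs(Tn.deriv()(x))))  # n^2 = 49, attained at the endpoints

# Bernstein: the trig polynomial sin(n x) has derivative n cos(n x),
# whose max norm on [-pi, pi] is exactly n
theta = np.linspace(-np.pi, np.pi, 10001)
print(np.max(np.abs(n*np.cos(n*theta))))  # n = 7
```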
<p>This post and the previous one were motivated by Terence Tao&#8217;s latest post on <a href="https://terrytao.wordpress.com/2026/03/23/local-bernstein-theory-and-lower-bounds-for-lebesgue-constants/">Bernstein theory</a>.</p>
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2021/01/14/sturm-hurwitz/'>Zeros of trig polynomials</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/real-and-complex-fourier/'>Convert between real and complex Fourier series</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2022/11/05/solving-trig-equations/'>Systematically solving trig equations</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/03/24/from-mendeleev-to-fourier/">From Mendeleev to Fourier</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mendeleev&#8217;s inequality</title>
		<link>https://www.johndcook.com/blog/2026/03/24/mendeleevs-inequality/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 12:47:32 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Inequalities]]></category>
		<category><![CDATA[Interpolation]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246921</guid>

					<description><![CDATA[<p>Dmitri Mendeleev is best known for creating the first periodic table of chemical elements. He also discovered an interesting mathematical theorem. Empirical research led him to a question about interpolation, which in turn led him to a theorem about polynomials and their derivatives. I ran across Mendeleev&#8217;s theorem via a paper by Boas [1]. The [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/24/mendeleevs-inequality/">Mendeleev’s inequality</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>Dmitri Mendeleev is best known for creating the first periodic table of chemical elements. He also discovered an interesting mathematical theorem. Empirical research led him to a question about interpolation, which in turn led him to a theorem about polynomials and their derivatives.</p>
<p>I ran across Mendeleev&#8217;s theorem via a paper by Boas [1]. The opening paragraph describes what Mendeleev was working on.</p>
<blockquote><p>Some years after the chemist Mendeleev invented the periodic table of the elements he made a study of the specific gravity of a solution as a function of the percentage of the dissolved substance. This function is of some practical importance: for example, it is used in testing beer and wine for alcoholic content, and in testing the cooling system of an automobile for concentration of anti-freeze; but present-day physical chemists do not seem to find it as interesting as Mendeleev did.</p></blockquote>
<p>Mendeleev fit his data by patching together quadratic polynomials, i.e. he used quadratic splines. A question about the slopes of these splines led to the following.</p>
<blockquote><p><strong>Theorem</strong> (Mendeleev): Let <em>P</em>(<em>x</em>) be a quadratic polynomial on [−1, 1] such that |<em>P</em>(<em>x</em>)| ≤ 1. Then |<em>P</em>′(<em>x</em>)| ≤ 4.</p></blockquote>
<p>Mendeleev showed his result to mathematician Andrey Markov who generalized it to the following.</p>
<blockquote><p><strong>Theorem</strong> (Markov): If <em>P</em>(<em>x</em>) is a real polynomial of degree <em>n</em>, and |<em>P</em>(<em>x</em>)| ≤ 1 on [−1, 1] then |<em>P</em>′(<em>x</em>)| ≤ <em>n</em>² on [−1, 1].</p></blockquote>
<p>Both inequalities are sharp with equality if and only if <em>P</em>(<em>x</em>) = ±<em>T</em><sub><em>n</em></sub>(<em>x</em>), the <em>n</em>th <a href="https://www.johndcook.com/blog/2024/08/15/distorted-cosines/">Chebyshev polynomial</a>. In the special case of Mendeleev&#8217;s inequality, equality holds for</p>
<p style="padding-left: 40px;"><em>T</em><sub>2</sub>(<em>x</em>) = 2<em>x</em>² − 1.</p>
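<p>Here is a quick Python check (an illustration, not a proof) that <em>T</em><sub>2</sub> stays within 1 on [−1, 1] while its derivative attains Mendeleev&#8217;s bound of 4:</p>

```python
# Check Mendeleev's inequality for T_2(x) = 2x^2 - 1 on [-1, 1]:
# |T_2(x)| <= 1 while |T_2'(x)| = |4x| attains the bound 4 at x = +/-1.
grid = [i / 1000 for i in range(-1000, 1001)]
max_P = max(abs(2 * x * x - 1) for x in grid)
max_dP = max(abs(4 * x) for x in grid)

assert max_P <= 1
assert max_dP == 4
```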
<p>Andrey Markov&#8217;s brother Vladimir proved an <a href="https://www.johndcook.com/blog/2020/03/14/the-brothers-markov/">extension</a> of Andrey&#8217;s theorem to higher derivatives.</p>
<h2>Related posts</h2>
<ul>
<li class="link"><a href="https://www.johndcook.com/blog/2022/06/18/length-of-periods-in-the-infinite-periodic-table/">Periods in the periodic table</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2023/11/30/memorize-the-periodic-table/">How to memorize the periodic table</a></li>
<li class="link"><a href="https://www.johndcook.com/blog/2020/03/14/the-brothers-markov/">The Brothers Markov</a></li>
</ul>
<p>[1] R. P. Boas, Jr. Inequalities for the Derivatives of Polynomials. Mathematics Magazine, Vol. 42, No. 4 (Sep., 1969), pp. 165–174</p>The post <a href="https://www.johndcook.com/blog/2026/03/24/mendeleevs-inequality/">Mendeleev’s inequality</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Set intersection and difference at the command line</title>
		<link>https://www.johndcook.com/blog/2026/03/23/intersection-difference/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 11:39:47 +0000</pubDate>
				<category><![CDATA[Computing]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246920</guid>

					<description><![CDATA[<p>A few years ago I wrote about comm, a utility that lets you do set theory at the command line. It&#8217;s a really useful little program, but it has two drawbacks: the syntax is hard to remember, and the input files must be sorted. If A and B are two sorted lists, comm A B [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/23/intersection-difference/">Set intersection and difference at the command line</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>A few years ago I wrote about <code>comm</code>, a utility that lets you do <a href="https://www.johndcook.com/blog/2019/11/24/comm-set-theory/">set theory at the command line</a>. It&#8217;s a really useful little program, but it has two drawbacks: the syntax is hard to remember, and the input files must be sorted.</p>
<p>If A and B are two sorted lists,</p>
<pre>    comm A B</pre>
<p>prints A − B, B − A, and A ∩ B. You usually don&#8217;t want all three, and so <code>comm</code> lets you filter the output. It&#8217;s a little quirky in that you specify what you <em>don&#8217;t</em> want instead of what you do. And you have to remember that 1, 2, and 3 correspond to A − B, B − A, and A ∩ B respectively.</p>
<p><img decoding="async" class="aligncenter" src="https://www.johndcook.com/comm_venn.png" alt="Venn diagram of comm parameters" /></p>
<p>A couple little scripts can hide the quirks. I have a script <code>intersect</code></p>
<pre>    comm -12 &lt;(sort "$1") &lt;(sort "$2")</pre>
<p>and another script <code>setminus</code></p>
<pre>    comm -23 &lt;(sort "$1") &lt;(sort "$2")</pre>
<p>that sort the input files on the fly and eliminate the need to remember <code>comm</code>&#8217;s filtering syntax.</p>
<p>The <code>setminus</code> script computes A − B. To find B − A call the script with the arguments reversed.</p>The post <a href="https://www.johndcook.com/blog/2026/03/23/intersection-difference/">Set intersection and difference at the command line</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Embedded regex flags</title>
		<link>https://www.johndcook.com/blog/2026/03/20/embedded-regex-flags/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 16:58:42 +0000</pubDate>
				<category><![CDATA[Computing]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Regular expressions]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246919</guid>

					<description><![CDATA[<p>The hardest part of using regular expressions is not crafting regular expressions per se. In my opinion, the two hardest parts are minor syntax variations between implementations, and all the environmental stuff outside of regular expressions per se. Embedded regular expression modifiers address one of the environmental complications by putting the modifier in the regular expression [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/20/embedded-regex-flags/">Embedded regex flags</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The hardest part of using regular expressions is not crafting regular expressions per se. In my opinion, the two hardest parts are minor syntax variations between implementations, and all the environmental stuff outside of regular expressions <em>per se.</em></p>
<p>Embedded regular expression modifiers address one of the environmental complications by putting the modifier in the regular expression itself. </p>
<p>For example, if you want to make a <code>grep</code> search case-insensitive, you pass it the <code>-i</code> flag. But if you want to make a regex case-insensitive inside a Python program, you pass a function the argument <code>re.IGNORECASE</code>. But if you put <code>(?i)</code> at the beginning of your regular expression, then the intention to make the match case-insensitive is embedded directly into the regex. You could use the regex in any environment that supports <code>(?i)</code> without having to know how to specify modifiers in that environment. </p>
<p>I was debugging a Python script this morning that worked under one version of Python and not under another. The root of the problem was that it was using <code>re.findall()</code> with several huge regular expressions that had embedded modifiers. That was OK up to Python 3.5, became a warning in versions 3.6 through 3.10, and is an error in versions 3.11 and later. </p>
<p>The problem isn&#8217;t with all embedded modifiers, only global modifiers that don&#8217;t appear at the beginning of the regex. Older versions of Python, following Perl&#8217;s lead, would let you put a modifier like <code>(?i)</code> in the middle of a regex, and apply the modifier from that point to the end of the expression. In the latest versions of Python, you can either place the modifier at the beginning of the regex, or use a scoped modifier like <code>(?i:&hellip;)</code> in the middle of the expression.</p>
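<p>Here is a small Python illustration of the difference. A global <code>(?i)</code> at the start applies to the whole pattern, while a scoped group applies only to its own contents and is legal anywhere in the pattern:</p>

```python
import re

# (?i) at the start makes the whole pattern case-insensitive
assert re.findall(r"(?i)cat", "Cat CAT cat") == ["Cat", "CAT", "cat"]

# a scoped modifier (?i:...) applies only inside its group:
# "cat" matches case-insensitively but "dog" must be lowercase
assert re.findall(r"(?i:cat) dog", "CAT dog, Cat DOG") == ["CAT dog"]
```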
<p>I didn&#8217;t want to edit the regular expressions in my code&mdash;some had over a thousand characters&mdash;so I changed <code>re.findall()</code> to <code>regex.findall()</code>. The third-party <code>regex</code> module is generally more Perl-compatible than Python&#8217;s standard <code>re</code> module.</p>
					
		
		
			</item>
		<item>
		<title>A lesser-known characterization of the gamma function</title>
		<link>https://www.johndcook.com/blog/2026/03/18/wielandt/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 01:21:53 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Complex analysis]]></category>
		<category><![CDATA[Special functions]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246918</guid>

					<description><![CDATA[<p>The gamma function Γ(z) extends the factorial function from integers to complex numbers. (Technically, Γ(z + 1) extends factorial.) There are other ways to extend the factorial function, so what makes the gamma function the right choice? The most common answer is the Bohr-Mollerup theorem. This theorem says that if f: (0, ∞) → (0, [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/18/wielandt/">A lesser-known characterization of the gamma function</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The gamma function Γ(<em>z</em>) extends the factorial function from integers to complex numbers. (Technically, Γ(<em>z</em> + 1) extends factorial.) There are other ways to extend the factorial function, so what makes the gamma function the right choice?</p>
<p>The most common answer is the Bohr-Mollerup theorem. This theorem says that if <em>f</em>: (0, ∞) → (0, ∞) satisfies</p>
<ol>
<li><em>f</em>(<em>x</em> + 1) = <em>x</em> <em>f</em>(<em>x</em>)</li>
<li><em>f</em>(1) = 1</li>
<li>log <em>f</em> is convex</li>
</ol>
<p>then <em>f</em>(<em>x</em>) = Γ(<em>x</em>). The theorem applies on the positive real axis, and there is a unique holomorphic continuation of this function to the complex plane.</p>
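<p>These three conditions are easy to verify numerically for the gamma function itself. A short Python sketch using only the standard library (<code>math.lgamma</code> computes log Γ):</p>

```python
from math import gamma, lgamma

# condition 2: f(1) = 1
assert gamma(1) == 1

# condition 1: f(x + 1) = x f(x), checked at a few points
for x in [0.5, 1.7, 3.2]:
    assert abs(gamma(x + 1) - x * gamma(x)) < 1e-12

# condition 3: log f is convex, so the midpoint value is at most
# the average of the endpoint values
for x, y in [(0.5, 2.5), (1.0, 4.0)]:
    assert lgamma((x + y) / 2) <= (lgamma(x) + lgamma(y)) / 2
```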
<p>But the Bohr-Mollerup theorem is not the only theorem characterizing the gamma function. Another is due to Helmut Wielandt. His theorem says that if <em>f</em> is holomorphic in the right half-plane and</p>
<ol>
<li><em>f</em>(<em>z</em> + 1) = <em>z</em> <em>f</em>(<em>z</em>)</li>
<li><em>f</em>(1) = 1</li>
<li><em>f</em>(<em>z</em>) is bounded for {<em>z</em>: 1 ≤ Re <em>z</em> ≤ 2}</li>
</ol>
<p>then <em>f</em>(<em>z</em>) = Γ(<em>z</em>). In short, Wielandt replaces the log-convexity for positive reals with the requirement that <em>f</em> is bounded in a strip of the complex plane.</p>
<p>You might wonder what the bound alluded to in Wielandt&#8217;s theorem actually is. You can show from the integral definition of Γ(<em>z</em>) that</p>
<p style="padding-left: 40px;">|Γ(<em>z</em>)| ≤ |Γ(Re <em>z</em>)|</p>
<p>for <em>z</em> in the right half-plane. So the bound on the complex strip {<em>z</em>: 1 ≤ Re <em>z</em> ≤ 2} equals the bound on the real interval [1, 2], which is 1.</p>The post <a href="https://www.johndcook.com/blog/2026/03/18/wielandt/">A lesser-known characterization of the gamma function</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tighter bounds on alternating series remainder</title>
		<link>https://www.johndcook.com/blog/2026/03/17/alternating-series-remainder/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Wed, 18 Mar 2026 02:50:12 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246916</guid>

					<description><![CDATA[<p>The alternating series test is part of the standard calculus curriculum. It says that if you truncate an alternating series, the remainder is bounded by the first term that was left out. This fact goes by in a blur for most students, but it becomes useful later if you need to do numerical computing. To [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/17/alternating-series-remainder/">Tighter bounds on alternating series remainder</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>The alternating series test is part of the standard calculus curriculum. It says that if you truncate an alternating series, the remainder is bounded by the first term that was left out. This fact goes by in a blur for most students, but it becomes useful later if you need to do numerical computing.</p>
<p>To be more precise, assume we have a series of the form</p>
<p><img class='aligncenter' src='https://www.johndcook.com/villarino1.svg' alt='  \sum_{i=1}^\infty (-1)^i a_i' style='background-color:white' height='54' width='84' /></p>
<p>where the <em>a</em><sub><em>i</em></sub> are positive and monotonically converge to zero. Then the tail of the series is bounded by its first term:</p>
<p><img class='aligncenter' src='https://www.johndcook.com/villarino2.svg' alt='\left|R_n\right| = \left| \sum_{i=n+1}^\infty (-1)^i a_i \right| \leq a_{n+1}' style='background-color:white' height='60' width='215' /></p>
<p>The more we can say about the behavior of the <em>a</em><sub><em>i</em></sub> the more we can say about the remainder. So far we&#8217;ve assumed that these terms go monotonically to zero. If their differences</p>
<p><img class='aligncenter' src='https://www.johndcook.com/villarino3.svg' alt='\Delta a_i = a_i - a_{i+1}' style='background-color:white' height='16' width='117' /></p>
<p>also go monotonically to zero, then we have an upper and lower bound on the truncation error:</p>
<p><img class='aligncenter' src='https://www.johndcook.com/villarino4.svg' alt='\frac{a_{n+1}}{2} \leq |R_n| \leq \frac{a_n}{2}' style='background-color:white' height='35' width='137' /></p>
<p>If the differences of the differences, </p>
<p><img class='aligncenter' src='https://www.johndcook.com/villarino5.svg' alt='\Delta^2 a_i = \Delta (\Delta a_i)' style='background-color:white' height='22' width='115' /></p>
<p>also converge monotonically to zero, we can get a larger lower bound and a smaller upper bound on the remainder. In general, if the differences up to order <em>k</em> of the <em>a</em><sub><em>i</em></sub> go to zero monotonically, then the remainder term can be bounded as follows.</p>
<p><img class='aligncenter' src='https://www.johndcook.com/villarino6.svg' alt='\frac{a_{n+1}}{2}
+\frac{\Delta a_{n+1}}{2^2}
+\cdots+
\frac{\Delta^k a_{n+1}}{2^{k+1}}
< \left|R_n\right| <
\frac{a_n}{2}
-\left\{
\frac{\Delta a_n}{2^2}
+\cdots+
\frac{\Delta^k a_n}{2^{k+1}}
\right\}.
' style='background-color:white' height='49' width='524' /></p>
<p>Source: Mark B. Villarino. The Error in an Alternating Series. American Mathematical Monthly, April 2018, pp. 360&ndash;364.</p>
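<p>Here is a Python sketch of the simplest two-sided bound, using nothing beyond the standard library. With the sign convention above, the alternating harmonic series sums to −log 2:</p>

```python
from math import log

def a(i):
    # a_i = 1/i decreases monotonically to 0, as do its differences
    return 1 / i

n = 10
S = -log(2)                           # sum_{i=1}^infty (-1)^i a_i
S_n = sum((-1) ** i * a(i) for i in range(1, n + 1))
R = abs(S - S_n)                      # true truncation error |R_n|

assert R <= a(n + 1)                  # classic alternating series bound
assert a(n + 1) / 2 <= R <= a(n) / 2  # tighter two-sided bound
```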
<h2>Related posts</h2>
<ul>
<li class='link'><a href='https://www.johndcook.com/blog/2019/08/01/accelerating-an-alternating-series/'>Euler&#8217;s method for accelerating an alternating series</a></li>
<li class='link'><a href='https://www.johndcook.com/blog/2020/08/06/cohen-acceleration/'>Cohen&#8217;s method for accelerating an alternating series</a></li>
</ul>The post <a href="https://www.johndcook.com/blog/2026/03/17/alternating-series-remainder/">Tighter bounds on alternating series remainder</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Powers don&#8217;t clear fractions</title>
		<link>https://www.johndcook.com/blog/2026/03/17/powers-dont-clear-fractions/</link>
		
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 13:26:13 +0000</pubDate>
				<category><![CDATA[Math]]></category>
		<category><![CDATA[Number theory]]></category>
		<guid isPermaLink="false">https://www.johndcook.com/blog/?p=246912</guid>

					<description><![CDATA[<p>If a number has a finite but nonzero fractional part, so do all its powers. I recently ran across a proof in [1] that is shorter than I expected. Theorem: Suppose r is a real number that is not an integer, and the decimal part of r terminates. Then rk is not an integer for any positive integer [&#8230;]</p>
The post <a href="https://www.johndcook.com/blog/2026/03/17/powers-dont-clear-fractions/">Powers don’t clear fractions</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></description>
										<content:encoded><![CDATA[<p>If a number has a finite but nonzero fractional part, so do all its powers. I recently ran across a proof in [1] that is shorter than I expected.</p>
<p>Theorem: Suppose <em>r</em> is a real number that is not an integer, and the decimal part of <em>r</em> terminates. Then <em>r</em><sup><em>k</em></sup> is not an integer for any positive integer <em>k</em>.</p>
<p>Proof: The number <em>r</em> can be written as a reduced fraction <em>a</em> / 10<sup><em>m</em></sup> for some positive <em>m</em>. If <em>s</em> = <em>r</em><sup><em>k</em></sup> were an integer, then</p>
<p style="padding-left: 40px;">10<sup><em>mk</em></sup> <em>s</em> = <em>a</em><sup><em>k</em></sup>.</p>
<p>Now the left side of this equation is divisible by 10, but the right side is not: the fraction <em>a</em> / 10<sup><em>m</em></sup> is in lowest terms, so <em>a</em>, and hence <em>a</em><sup><em>k</em></sup>, is coprime to 10. So <em>s</em> = <em>r</em><sup><em>k</em></sup> cannot be an integer. QED.</p>
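<p>The theorem can be checked with exact rational arithmetic. In the Python sketch below, 0.145 is an arbitrary example of a terminating, non-integer decimal:</p>

```python
from fractions import Fraction

# r = 0.145 = 145/1000 = 29/200 in lowest terms: a terminating,
# non-integer decimal. No power of r is an integer, i.e. the
# reduced denominator of r**k is never 1.
r = Fraction(145, 1000)
for k in range(1, 20):
    assert (r ** k).denominator != 1
```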
<p>The only thing special about base 10 is that we most easily think in terms of base 10, but you could replace 10 with any other base.</p>
<p>What about repeating decimals, like 1/7 = 0.142857142857…? They&#8217;re only repeating decimals in our chosen base. Pick the right base, i.e. 7 in this case, and they terminate. So the theorem above extends to repeating decimals.</p>
<p>[1] Eli Leher. √2 is Not 1.41421356237 or Anything of the Sort. The American Mathematical Monthly, Vol. 125, No. 4 (APRIL 2018), page 346.</p>The post <a href="https://www.johndcook.com/blog/2026/03/17/powers-dont-clear-fractions/">Powers don’t clear fractions</a> first appeared on <a href="https://www.johndcook.com/blog">John D. Cook</a>.]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
