<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:georss="http://www.georss.org/georss" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>Learning in Linux</title>
	<atom:link href="https://learninginlinux.wordpress.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://learninginlinux.wordpress.com</link>
	<description>Amateur fiddling with GNU/Linux and everything that runs on top of it (Windows, too ;))</description>
	<lastBuildDate>Wed, 16 Jun 2010 17:15:54 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
<cloud domain='learninginlinux.wordpress.com' port='80' path='/?rsscloud=notify' registerProcedure='' protocol='http-post' />
<image>
		<url>https://s0.wp.com/i/buttonw-com.png</url>
		<title>Learning in Linux</title>
		<link>https://learninginlinux.wordpress.com</link>
	</image>
	<atom:link rel="search" type="application/opensearchdescription+xml" href="https://learninginlinux.wordpress.com/osd.xml" title="Learning in Linux" />
	<atom:link rel='hub' href='https://learninginlinux.wordpress.com/?pushpress=hub'/>
	<item>
		<title>Debugging notes-to-self</title>
		<link>https://learninginlinux.wordpress.com/2010/06/16/debugging-notes-to-self/</link>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Wed, 16 Jun 2010 17:15:54 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<guid isPermaLink="false">http://learninginlinux.wordpress.com/?p=348</guid>

					<description><![CDATA[Stuff I wish I&#8217;d known for years (and that the rest of the world probably always knew, just not me): after your buggy program was slowly eating your memory and swap, and you escaped to a console to kill it using Alt+SysRq+R and then Ctrl+Alt+F1, when you get back to your X-session you can make [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Stuff I wish I&#8217;d known for years (and that the rest of the world probably always knew, just not me):</p>
<ol>
<li>after your buggy program was slowly eating your memory and swap, and you escaped to a console to kill it using Alt+SysRq+R and then Ctrl+Alt+F1, when you get back to your X-session you can make it grab the keyboard again by typing &#8220;kbd_mode -s&#8221; in an xterm (found <a href="http://www.blino.org/notes/tech/linux/keyboard-raw.html">here</a> after trying all sorts of incantations in <a href="http://duckduckgo.com/?q=get+back+to+keyboard+raw-mode">DuckDuckGo</a>)</li>
<li>if that was on a laptop keyboard with &#8220;AltGr dead keys&#8221;, the SysRq combo might need to be Fn+SysRq+AltGr+R (yes, try that with one hand!), where AltGr is the right Alt key (found after lots of failed attempts &#8211; very frustrating if your swap runs out before you get to the console)</li>
<li>in the end, you wouldn&#8217;t typically have needed any of the above if only you&#8217;d known that &#8220;ulimit -v 1000000&#8221; limits any process in your bash shell to 1GB of memory, so that your buggy program will die superquick without locking up the system for 10 minutes (<a href="http://mail.nl.linux.org/kernelnewbies/2002-07/msg00063.html">found</a> in no time through a <a href="http://duckduckgo.com/?q=linux+limit+memory+allocation">most obvious</a> query &#8211; it just took a long time before I thought of looking for it at all&#8230;)</li>
</ol>
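<p>A minimal sketch of that last safeguard (the argument to <code>ulimit -v</code> is in kilobytes; <code>./buggy-program</code> below is a hypothetical stand-in):</p>

```shell
# Cap the virtual memory of every process started from this shell.
# The limit is in kilobytes, so 1000000 is roughly 1 GB.
ulimit -v 1000000

# Verify the limit took effect:
ulimit -v
# prints: 1000000

# A hypothetical ./buggy-program launched from this shell now fails
# its allocations quickly instead of dragging the machine into swap.
```

<p>Note that the limit only applies to processes started from that shell session, and you can lower it but not raise it back without opening a new shell.</p>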
]]></content:encoded>
					
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>Ksplice Uptrack: a quick-test on Ubuntu 9.04 Live</title>
		<link>https://learninginlinux.wordpress.com/2009/06/28/ksplice-uptrack-quick-test-on-ubuntu-9-04-live/</link>
					<comments>https://learninginlinux.wordpress.com/2009/06/28/ksplice-uptrack-quick-test-on-ubuntu-9-04-live/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Sun, 28 Jun 2009 00:21:13 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[Jaunty Jackalope]]></category>
		<category><![CDATA[ksplice]]></category>
		<category><![CDATA[Ksplice Uptrack]]></category>
		<category><![CDATA[reboots]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[software updates]]></category>
		<category><![CDATA[ubuntu 9.04]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=297</guid>

					<description><![CDATA[I&#8217;ve been using Ubuntu 8.04 on my laptop for ages, and never had any reason to upgrade from there &#8211; &#8220;it just works, I&#8217;m done upgrading&#8221; is what I&#8217;d smugly tell people&#8230; Now, I&#8217;ve found a big reason to upgrade: Ksplice, which I mentioned the other day, put a new service up: Ksplice Uptrack is [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve been using Ubuntu 8.04 on my laptop <a title="Installing Ubuntu 8.04 with full disk encryption" href="http://oei.yungchin.nl/2008/04/23/installing-ubuntu-804-with-full-disk-encryption/">for ages</a>, and never had any reason to upgrade from there &#8211; &#8220;it just works, I&#8217;m done upgrading&#8221; is what I&#8217;d smugly tell people&#8230; Now, I&#8217;ve found a big <a title="Linux Kernel Mailing List: Ksplice updates for Ubuntu 9.04 Jaunty" href="http://lkml.org/lkml/2009/6/25/123">reason to upgrade</a>: Ksplice, which I <a title="Ksplice Trophée du Libre" href="http://oei.yungchin.nl/2009/06/18/ksplice-trophee-du-libre/">mentioned</a> the other day, put a <a href="http://ksplice.com/uptrack/">new service</a> up:</p>
<blockquote><p>Ksplice Uptrack is a new service that lets you effortlessly keep your systems up to date and secure, without rebooting.</p>
<p>Once you’ve completed the easy installation process, your system will be set up to receive rebootless updates instead of traditional, disruptive updates.  [&#8230;]</p>
<p>Ksplice, Inc. is proud to make this service freely available for the latest version of the world’s most popular desktop Linux distribution: Ubuntu 9.04 Jaunty Jackalope.</p></blockquote>
<p><a title="Reboots are uncool (= me whining about productivity loss)" href="http://oei.yungchin.nl/2008/10/15/reboots-are-uncool/">No more reboots</a>, and still applying security patches <a title="The superiority of the distro (= me whining about the risks of delayed installations of updates)" href="http://oei.yungchin.nl/2009/05/10/the-superiority-of-the-distro/">as soon as they become available</a>. That&#8217;s worth the dist-upgrade hassle.</p>
<p>For now, all I did was run a quick test. I had a USB stick with Ubuntu Netbook Remix 9.04 lying around, so I booted from that, hooked up the wifi (man, connecting is fast with NetworkManager 0.7-something &#8211; another reason to upgrade&#8230;), downloaded ksplice-uptrack.deb, and installed it on the Live system (you also need network connectivity to fetch some dependencies from the Ubuntu repository). This is what you get:</p>
<p><a href="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png"><img data-attachment-id="302" data-permalink="https://learninginlinux.wordpress.com/2009/06/28/ksplice-uptrack-quick-test-on-ubuntu-9-04-live/ksplice-uptrack-init/" data-orig-file="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png" data-orig-size="1440,900" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}" data-image-title="ksplice-uptrack updates window " data-image-description="" data-image-caption="" data-medium-file="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=300" data-large-file="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=500" class="aligncenter size-medium wp-image-302" title="ksplice-uptrack updates window " src="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=300&#038;h=187" alt="ksplice-uptrack updates window " width="300" height="187" srcset="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=300 300w, https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=598 598w, https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=150 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<p>There&#8217;s a little tray-icon (the one resembling a &#8220;K&#8221;&#8230;) informing you that kernel updates are available, and clicking it opens an update window. Nothing exciting to see here, actually.</p>
<p><a href="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png"><img data-attachment-id="303" data-permalink="https://learninginlinux.wordpress.com/2009/06/28/ksplice-uptrack-quick-test-on-ubuntu-9-04-live/ksplice-uptrack-working/" data-orig-file="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png" data-orig-size="1440,900" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}" data-image-title="ksplice-uptrack in action" data-image-description="" data-image-caption="" data-medium-file="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=300" data-large-file="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=500" class="aligncenter size-medium wp-image-303" title="ksplice-uptrack in action" src="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=300&#038;h=187" alt="ksplice-uptrack in action" width="300" height="187" srcset="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=300 300w, https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=598 598w, https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=150 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<p>Still not very exciting. The whole thing is very understated, almost disappointingly so &#8211; I mean, something this cool should look cool, shouldn&#8217;t it?</p>
<p>&#8230; and everything still works after this. In fact, I&#8217;m typing this post from the Live system with the (supposedly) updated kernel. I tried shutting the lid on my D630, and it nicely went into ACPI suspend. And came back up.</p>
<p>Wicked.</p>
<p>(Small disappointment: it seems Firefox crashed between suspend and resume. Did it a second time, and again Firefox died. Third time: no problems. Not sure if this has anything to do with anything, so for now pretend I didn&#8217;t mention it.)</p>
<p>Cool stuff, seriously. This will be in 10.04 by default, I&#8217;ve no doubt. In case you&#8217;re looking, here&#8217;s one guy eager to work on that!</p>
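<p>(For reference, the install step from earlier boils down to something like this &#8211; <code>install_local_deb</code> is a hypothetical helper name, not part of the package:)</p>

```shell
# Hypothetical helper: install a locally downloaded .deb with dpkg,
# then let apt-get pull in whatever dependencies dpkg found missing.
install_local_deb() {
    sudo dpkg -i "$1" || sudo apt-get -f install
}

# Only attempt it if the package file is actually present:
if [ -f ksplice-uptrack.deb ]; then
    install_local_deb ksplice-uptrack.deb
fi
```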
<p>One more thing: in their <a title="Ksplice Uptrack FAQ" href="http://ksplice.com/uptrack/faq">FAQ</a> they suggest a little test to demonstrate that the thing actually does something, so I ran it a couple of times. I&#8217;m off to bed now, though, so here&#8217;s the output &#8211; I&#8217;ll leave it to you to calculate whether the before/after difference is statistically significant&#8230;</p>
<blockquote>
<pre>ubuntu@ubuntu:~$ wget -O demo.c http://www.ksplice.com/uptrack/2009-06-demo.c
ubuntu@ubuntu:~$ gcc demo.c -o demo
ubuntu@ubuntu:~$ sudo cpufreq-selector -c 0 -g performance
ubuntu@ubuntu:~$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
performance
ubuntu@ubuntu:~$ sudo cpufreq-selector -c 1 -g performance
ubuntu@ubuntu:~$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Core(TM)2 Duo CPU     T8100  @ 2.10GHz
stepping        : 6
cpu MHz         : 2101.000
cache size      : 3072 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida tpr_shadow vnmi flexpriority
bogomips        : 4189.64
clflush size    : 64
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Core(TM)2 Duo CPU     T8100  @ 2.10GHz
stepping        : 6
cpu MHz         : 2101.000
cache size      : 3072 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida tpr_shadow vnmi flexpriority
bogomips        : 4189.57
clflush size    : 64
power management:

ubuntu@ubuntu:~$ ./demo
time to write 100 lines is 6(msec)
# ...hmmm, wait, this is a Live system...
ubuntu@ubuntu:~$ sudo mount /dev/sda3 /mnt
ubuntu@ubuntu:~$ cd /mnt/
ubuntu@ubuntu:/mnt$ sudo mkdir test
ubuntu@ubuntu:/mnt$ sudo chmod a+rwx test
ubuntu@ubuntu:/mnt$ cd test/
ubuntu@ubuntu:/mnt/test$ cp /home/ubuntu/demo .
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 49(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 54(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 64(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 60(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 75(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 72(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 62(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 65(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 80(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 52(msec)
ubuntu@ubuntu:/mnt/test$ sudo uptrack-remove --all -y
The following steps will be taken:
Remove [cdoprpi1] Performance regression in filesystem buffer code.
Remove [9xoc5qmo] Possible erroneous memory overcommit in program start.
Remove [ll9q1ymc] Multiple bugs in filesystem core.
Remove [ovniqwxh] CVE-2009-1192: Information leak in the agp subsystem.
Remove [hrxbvh0e] CVE-2009-1265: Integer overflow in the af_rose maximum user frame size.
Remove [uzolzfa2] CVE-2009-1337: kill the wrong capable(CAP_KILL) check.
Remove [xgqc9vy4] VGA console corrupts non-ASCII characters.
Remove [pdfrn6qa] Denial of service by evading CPU time limits.
Remove [c8ueseae] Symbolic link filenames under eCryptfs can produce alarming warnings in dmesg.
Removing [cdoprpi1] Performance regression in filesystem buffer code.
Removing [9xoc5qmo] Possible erroneous memory overcommit in program start.
Removing [ll9q1ymc] Multiple bugs in filesystem core.
Removing [ovniqwxh] CVE-2009-1192: Information leak in the agp subsystem.
Removing [hrxbvh0e] CVE-2009-1265: Integer overflow in the af_rose maximum user frame size.
Removing [uzolzfa2] CVE-2009-1337: kill the wrong capable(CAP_KILL) check.
Removing [xgqc9vy4] VGA console corrupts non-ASCII characters.
Removing [pdfrn6qa] Denial of service by evading CPU time limits.
Removing [c8ueseae] Symbolic link filenames under eCryptfs can produce alarming warnings in dmesg.
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 816(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 805(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 793(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 786(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 785(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 787(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 791(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 787(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 786(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 785(msec)
ubuntu@ubuntu:/mnt/test$ sudo uptrack-upgrade -y
The following steps will be taken:
Install [c8ueseae] Symbolic link filenames under eCryptfs can produce alarming warnings in dmesg.
Install [pdfrn6qa] Denial of service by evading CPU time limits.
Install [xgqc9vy4] VGA console corrupts non-ASCII characters.
Install [uzolzfa2] CVE-2009-1337: kill the wrong capable(CAP_KILL) check.
Install [hrxbvh0e] CVE-2009-1265: Integer overflow in the af_rose maximum user frame size.
Install [ovniqwxh] CVE-2009-1192: Information leak in the agp subsystem.
Install [ll9q1ymc] Multiple bugs in filesystem core.
Install [9xoc5qmo] Possible erroneous memory overcommit in program start.
Install [cdoprpi1] Performance regression in filesystem buffer code.
Installing [c8ueseae] Symbolic link filenames under eCryptfs can produce alarming warnings in dmesg.
Installing [pdfrn6qa] Denial of service by evading CPU time limits.
Installing [xgqc9vy4] VGA console corrupts non-ASCII characters.
Installing [uzolzfa2] CVE-2009-1337: kill the wrong capable(CAP_KILL) check.
Installing [hrxbvh0e] CVE-2009-1265: Integer overflow in the af_rose maximum user frame size.
Installing [ovniqwxh] CVE-2009-1192: Information leak in the agp subsystem.
Installing [ll9q1ymc] Multiple bugs in filesystem core.
Installing [9xoc5qmo] Possible erroneous memory overcommit in program start.
Installing [cdoprpi1] Performance regression in filesystem buffer code.
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 61(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 56(msec)
ubuntu@ubuntu:/mnt/test$ ./demo
time to write 100 lines is 47(msec)
ubuntu@ubuntu:/mnt/test$</pre>
</blockquote>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/06/28/ksplice-uptrack-quick-test-on-ubuntu-9-04-live/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>

		<media:content url="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-init.png?w=300" medium="image">
			<media:title type="html">ksplice-uptrack updates window </media:title>
		</media:content>

		<media:content url="https://learninginlinux.wordpress.com/wp-content/uploads/2009/06/ksplice-uptrack-working.png?w=300" medium="image">
			<media:title type="html">ksplice-uptrack in action</media:title>
		</media:content>
	</item>
		<item>
		<title>Ksplice Trophée du Libre</title>
		<link>https://learninginlinux.wordpress.com/2009/06/18/ksplice-trophee-du-libre/</link>
					<comments>https://learninginlinux.wordpress.com/2009/06/18/ksplice-trophee-du-libre/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Thu, 18 Jun 2009 23:06:08 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[ksplice]]></category>
		<category><![CDATA[reboots]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[software updates]]></category>
		<category><![CDATA[trophees-du-libre]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=294</guid>

					<description><![CDATA[I&#8217;ve repeatedly been whining here about how kernel-update reboots kill productivity, but I also think that delaying security updates is the worse alternative.  So I was very excited to learn about Ksplice, through the LWN announcement of the &#8220;Trophées du Libre&#8221;. Ksplice is the 2009 winner in the Security category. A quick snippet from the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve repeatedly been whining here about how kernel-update reboots kill productivity, but <a title="The superiority of the distro" href="http://oei.yungchin.nl/2009/05/10/the-superiority-of-the-distro/">I also think</a> that delaying security updates is the worse alternative.  So I was very excited to learn about <a title="Ksplice at trophees-du-libre" href="http://www.trophees-du-libre.org/content/view/137/">Ksplice</a>, through the LWN <a title="Free Software Awards - Trophees du Libre 2009 " href="http://lwn.net/Articles/337863/">announcement</a> of the &#8220;Trophées du Libre&#8221;. Ksplice is the 2009 winner in the Security category.</p>
<p>A quick snippet from the <a title="Ksplice - Technology" href="http://www.ksplice.com/technology">project page</a>:</p>
<blockquote><p>Ksplice enables running systems to stay secure without the disruption of rebooting.  Specifically, Ksplice creates rebootless updates that are based on traditional source code patches.  These updates are as effective as traditional updates, but they can be applied seamlessly, with no downtime.</p>
<p>Ksplice currently supports updating the Linux kernel, but the core technology applies to any operating system or to user space applications.</p></blockquote>
<p>A quick search tells me even <a title="Ksplice automates hot patching Linux kernel with no reboot needed" href="http://blogs.zdnet.com/open-source/?p=2333">ZDNet</a> had already heard of this project over a year ago, so I&#8217;m half ashamed that it&#8217;s news to me, but I&#8217;m too excited to keep it to myself :)</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/06/18/ksplice-trophee-du-libre/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>Federating search through open protocols</title>
		<link>https://learninginlinux.wordpress.com/2009/06/14/federating-search/</link>
					<comments>https://learninginlinux.wordpress.com/2009/06/14/federating-search/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Sun, 14 Jun 2009 21:27:18 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[federated search]]></category>
		<category><![CDATA[internet search]]></category>
		<category><![CDATA[search]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=286</guid>

					<description><![CDATA[Cory Doctorow wrote a Guardian column the other week that draws attention to the dangers of having one or a few big companies in charge of Search Services for the internet: It&#8217;s a terrible idea to vest this much power with one company, even one as fun, user-centered and technologically excellent as Google. It&#8217;s too [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Cory Doctorow wrote <a title="Search is too important to leave to one company – even Google" href="http://craphound.com/?p=2245">a Guardian column</a> the other week that draws attention to the dangers of having one or a few big companies in charge of Search Services for the internet:</p>
<blockquote><p>It&#8217;s a terrible idea to vest this much power with one company, even one as fun, user-centered and technologically excellent as Google. It&#8217;s too much power for a handful of companies to wield.</p></blockquote>
<blockquote><p>The question of what we can and can&#8217;t see when we go hunting for answers demands a transparent, participatory solution. [&#8230;]</p></blockquote>
<p>I completely agree with him that there&#8217;s a problem here &#8211; in fact for at least one other reason as well, one he didn&#8217;t mention. That reason also invalidates the solution he seems to propose: a sort of non-profit search giant under public control. Scroll down a few sections if you want to read an alternative proposal&#8230;</p>
<h3>Search giants slow innovation</h3>
<p>Monopolists kill innovation even if they&#8217;re trying hard <a title="Wikipedia entry on Google's slogan" href="http://en.wikipedia.org/wiki/Don't_be_evil">not to be evil</a>, simply because monopolies kill innovation. There&#8217;s a specific problem with Search, in that it costs a boat-load of money just to start doing it, let alone to improve on anything. You&#8217;ll always have to index the whole internet, for example &#8211; no matter how good your algorithms, nobody will use your service if you don&#8217;t have good coverage. After <a title="Wikipedia entry: Cuil" href="http://en.wikipedia.org/wiki/Cuil">Cuil</a>, venture capitalists may hesitate to cough up that sort of money.</p>
<p>Only a handful of companies have the means to put up a hundred thousand servers and compete with Google. After more than half a decade, Microsoft has now managed to produce <a href="http://www.bing.com/">Bing</a>, which from my impressions so far is on par with Google Search. Read that again: half a decade &#8211; on par. What about innovation? Where&#8217;s the PageRank killer? What happened to those big leaps of progress that led to Google?</p>
<p>This is not Microsoft&#8217;s failure. The person with the hypothetical breakthrough idea might simply have worked at another cool company, one that didn&#8217;t have the money to dive into Search. I&#8217;d say this is rather a failure of the free market (but see my About page: I&#8217;m not an economist &#8211; I have really no idea what I&#8217;m talking about :)). Every hypothetical insurgent has to overcome a multi-million dollar hurdle just to take a shot at the problem. That means there will always be too few candidates.</p>
<p><a title="Why there aren't more Googles" href="http://www.paulgraham.com/googles.html">Paul Graham</a> thinks it takes a different kind of investor to tackle the problem &#8211; one with the guts to throw money at this. I think we&#8217;d do better to find a way to bring the cost down. But let&#8217;s quickly shoot down the idea of a non-profit first.</p>
<h3>A non-profit would kill innovation</h3>
<p>As in completely, totally kill it. A public, participatory system is what you settle for when you want stability: it thus necessarily opposes innovation. You want a stable government, so you build a democracy. But you leave innovation to the free market, because innovating under parliamentary oversight would take forever.</p>
<p>Just imagine what would happen: we&#8217;d settle on, say, <a title="Wikipedia entry: Nutch" href="http://en.wikipedia.org/wiki/Nutch">Nutch</a>, throw a huge amount of public money at it, and then end up spending that money on endless bureaucracy &#8211; some users want this innovation, some that, others want to try something totally different instead, academics get to write papers about how it could all be better, the steering committee gets to debate it too, and then when a decision is near, there will be endless rounds of appeal&#8230;</p>
<p>(Doctorow realises this, as he writes &#8220;But can an ad-hoc group of net-heads marshall the server resources to store copies of the entire Internet?&#8221;)</p>
<h3>Federation</h3>
<p>We want to achieve two goals: the one that Doctorow outlined, which I will rephrase as &#8220;Search services that transparently serve the interests of all those who search as well as all those who want to be found&#8221; (with some legal limits to it of course), and the fast-innovation goal, which I think boils down to this: start-ups shouldn&#8217;t need to build every aspect of the search engine just to get to improve one aspect of it. The following is a rough outline of a crazy idea, and again: I have no idea what I&#8217;m talking about. Here we go&#8230;</p>
<p>Let&#8217;s call the people who search consumers, and the ones who want to be found providers. If you look at how the <a title="Wikipedia entry: Google platform" href="http://en.wikipedia.org/wiki/Google_platform">Google platform</a> works internally, you&#8217;ll see there&#8217;s roughly a separation that reflects the presence of these two parties: there are index and document servers (let&#8217;s call them the back-end) that represent the providers, and there&#8217;s the front-end that handles a consumer&#8217;s query, talks to the index/document servers, and compiles a priority list for the consumer.</p>
<p>In the age of dial-up connections, you had to have all that happen within the data center. There&#8217;s a massive amount of communication between the back-end and the front-end servers. So it had to be designed the way it was. Now that there&#8217;s fat bandwidth all over, couldn&#8217;t the front-end servers be separated from the back-end servers?</p>
<p>As a consumer, I&#8217;d get to deal with a front-end-providing company that would serve my interests, and my interests only. A natural choice would be my ISP, but as a more extreme solution the front-end could run on my desktop machine &#8211; the details don&#8217;t matter for now. The point is, there could be many of these front-ends, and I could switch to a different solution if I wanted more transparency (in that case I&#8217;d get an open-source solution, I guess) or if I wanted the latest and greatest.</p>
<p>All these front-ends would deal with many back-end servers &#8211; just like it is now, because the internet simply can&#8217;t be indexed on only a few machines. But they wouldn&#8217;t have to be owned by one company: there could be many. As a provider, then, I&#8217;d also have a choice of companies that would compete to serve my interests &#8211; they certainly wouldn&#8217;t drop me from their index (as in Doctorow&#8217;s problem outline), because I&#8217;m paying them. A natural choice for this would be my hosting company, but if they do a bad job (too slow, wrong keywords, whatever), I could fire them and go somewhere else.</p>
<p>(Big parties like Akamai or Amazon would be at a small advantage here, having a lot of server power to handle many index queries, but small parties could cut deals with other small parties to mirror each others&#8217; servers &#8211; heck, I&#8217;m thinking about details again!)</p>
<p>Note that in addition, providers are in a much better position to index their documents than search-engine crawlers currently are. They could index information that crawlers may not get to &#8211; this is the main goal of the more narrowly defined <a title="Wikipedia entry: Federated search" href="http://en.wikipedia.org/wiki/Federated_search">federated search</a> that Wikipedia currently serves up for that term. What&#8217;s proposed here is bigger &#8211; all-inclusive.</p>
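<p>To make the split concrete, here is a toy sketch (everything in it is made up): two stub &#8220;back-ends&#8221; that answer a query with scored results, and a front-end that fans the query out and merges the rankings.</p>

```shell
# Toy model of the federated split. Each back-end answers a query with
# lines of the form "score url"; the front-end queries all of them and
# orders the combined results by score. All names and URLs are made up.

backend_a() { printf '%s\n' '42 http://example.com/a' '17 http://example.com/b'; }
backend_b() { printf '%s\n' '99 http://example.org/c' '3 http://example.org/d'; }

frontend() {
    # fan the query out, then sort the merged list by descending score
    { backend_a "$1"; backend_b "$1"; } | sort -rn -k1,1
}

frontend 'some query'
# top result: 99 http://example.org/c
```

<p>A real front-end would of course talk to remote servers and handle heterogeneous ranking signals, but the division of labour is the same.</p>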
<h3>So who does the PageRanking?</h3>
<p>There&#8217;s a little problem of course, in that the above is not an accurate picture of how stuff works. At Google, the back-end servers have to also store each site&#8217;s PageRank, and the front-ends rely on that for their ordering work. In the federated model, there would be some conflict of interest there: wouldn&#8217;t the providers bribe their back-end companies to game the system?</p>
<p>If all the companies involved were small enough, then no. If one back-end returned dishonest rankings, that would quickly become known among the front-ends, and they would drop this back-end from their lists. That&#8217;s similar to what Google does and what Doctorow is worried about, but there&#8217;s a big difference: if your back-end company behaves in this way, and you suffer as a provider, you can leave them and find a more respectable back-end. Honest providers would not have to suffer.</p>
<p>What about innovation? For one scenario, let&#8217;s say I&#8217;m a new front-end company and I want to replace PageRank by my innovation called RankPage. I&#8217;d have to get all the back-end guys to give me some sort of access to their servers so that I could calculate RankPage. But that should (in theory, at least) be relatively easy: they don&#8217;t stand to lose anything, except maybe some compute time and sysadmin hours. If I turn out to be onto something, I&#8217;ll become a big front-end, driving a lot of consumers to them &#8211; that is, helping me try my innovation is ultimately in the best interest of the providers they serve. Note that nobody incurs high costs in this model.</p>
<p>(I&#8217;m having a really hard time stopping myself from thinking about details here, but let&#8217;s say a good front-end in this federated-search world would be able to deal with heterogeneity, where some back-ends respond with PageRank, some also provide RankPage, and some do yet something else&#8230;)</p>
<p>(And for more irrelevant details: we would also see many more specialist front-ends appear, that serve consumers with very specific interests. Could be cool!)</p>
<h3>Why it won&#8217;t happen anytime soon</h3>
<p>While the front-ends and back-ends could have many different implementations, they would have to somehow be able to speak to each other in a very extensible language (we don&#8217;t want to end up with something like email &#8211; built on a hugely successful protocol that nevertheless doesn&#8217;t even let you verify the originator of a message!). That extensibility is pretty difficult to design, I imagine.</p>
<p>(Perhaps superfluously noted: it&#8217;s crucially important to establish a protocol, and not an implementation. If we&#8217;d settle for a federated version of Nutch, however good it may be, there&#8217;s no way to innovate afterwards.)</p>
<p>What&#8217;s also difficult to deal with is the chicken-and-egg problem: no consumers will come unless all providers are on board with this, and why would the providers participate? I could see a few big parties driving this process though &#8211; parties that want to become less dependent on Google (and Bing, and Yahoo Search).</p>
<p>Looking at how long it&#8217;s taken to establish <a title="Wikipedia entry: OAuth" href="http://en.wikipedia.org/wiki/OAuth">OAuth</a> (and that still has the job of conquering the world ahead of it), this might really take a while to come together.</p>
<p>But wouldn&#8217;t it be cool&#8230;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/06/14/federating-search/feed/</wfw:commentRss>
			<slash:comments>8</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>OpenSolaris&#8217; ARM port</title>
		<link>https://learninginlinux.wordpress.com/2009/06/11/opensolaris-arm-port/</link>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Thu, 11 Jun 2009 07:55:59 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=274</guid>

					<description><![CDATA[I&#8217;m usually too slow to catch onto news items like this. This time &#8217;round, Sybreon dropped it onto my Google Reader home page &#8211; thanks dude :) Two things I thought: It&#8217;s worth mentioning that Ian Murdock said this will form the basis for &#8220;Solaris 11&#8221;. The ARM port makes a lot of sense to [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m usually too slow to catch onto <a title="Ars Technica: OpenSolaris 2009.06 released, new ARM port announced" href="http://arstechnica.com/open-source/news/2009/06/opensolaris-200906-released-new-arm-port-announced.ars">news items like this</a>. This time &#8217;round, <a href="http://blog.sybreon.com/">Sybreon</a> dropped it onto my Google Reader home page &#8211; thanks dude :)</p>
<p>Two things I thought:</p>
<p>It&#8217;s worth mentioning that <a title="Ian Murdock: &quot;It goes to 11&quot;" href="http://ianmurdock.com/2009/06/02/it-goes-to-11/">Ian Murdock said</a> this will form the basis for &#8220;Solaris 11&#8221;.</p>
<p>The ARM port makes a lot of sense to me. I can imagine companies being interested in having all their employees&#8217; smartphones becoming first-class members of their company computing ecosystem (did I just write that monster sentence??). I&#8217;m sort of thinking: MacOS will be able to offer this, but in a more closed flavour, Linux will be able to offer this, but in a more heterogeneous flavour, and Solaris could sort of offer the middle ground between those.</p>
<p>I know, I probably sound as &#8220;head in the clouds&#8221; as Jonathan Schwartz right now. Anyway, having multiple solutions can only be good: some companies will be looking for the flexibility of Linux offerings, but others may like the fact they can get it all from one vendor, who not only provides support but also holds the reins on development. A winner in any case will be ARM&#8230;</p>
]]></content:encoded>
					
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>MiserWare MicroMiser: Miserable Licensing</title>
		<link>https://learninginlinux.wordpress.com/2009/05/13/miserware-micromiser-miserable-licensing/</link>
					<comments>https://learninginlinux.wordpress.com/2009/05/13/miserware-micromiser-miserable-licensing/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Wed, 13 May 2009 20:54:18 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[MiserWare MicroMiser]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=259</guid>

					<description><![CDATA[Edit dd. 25 May: Miserware are offering the software under a more permissive license now. In short: they ask that you run your benchmarking procedures by them before publishing results. This seems a reasonable compromise, that protects them from poorly conducted testing taking the limelight. I will be playing with it&#8230; &#8212; Through the Ubuntu [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><em>Edit dd. 25 May: Miserware are offering the software under a more permissive license now. In short: they ask that you run your benchmarking procedures by them before publishing results. This seems a reasonable compromise, that protects them from poorly conducted testing taking the limelight. I will be playing with it&#8230;</em></p>
<p><em>&#8212;<br />
</em></p>
<p><em></em>Through the <a href="http://www.ubuntuweblogs.org">Ubuntu Weblogs</a> feed I read <a title="Power Saving Software for Linux" href="http://www.theopensourcerer.com/2009/05/13/power-saving-software-for-linux/">a post at The Open Sourcerer</a> about a piece of software by a company called MiserWare (yeah, the name inspired the pun even before I concluded as much&#8230;). They&#8217;ve released a private beta of MicroMiser, a closed-source package that promises a substantial power-consumption reduction on x86 Linux systems. Quoting from their website:</p>
<blockquote><p>MicroMiser typically lowers total system energy use by 10-35% even when a system is 100% utilized.</p></blockquote>
<p>That&#8217;s interesting. Even better, Alan wrote <a title="Over 65% Power Reduction on my Ubuntu Server!" href="http://www.theopensourcerer.com/2009/05/13/over-65-power-reduction-on-my-ubuntu-server/">a follow-up post</a> at The Open Sourcerer which shows that on his Ubuntu Server system the software estimates its own power savings to be close to 65% (!). Now, assuming for the moment that it&#8217;s not a bug in the calculations in the beta (hey, I&#8217;m in a mild mood tonight), I&#8217;d say those 65% would be really worthwhile.</p>
<p>Alan was also kind enough to send me an invitation for the beta, and so &#8211; always the curious type &#8211; I was starting to make a plan on how to test this thing: use a dummy installation (we don&#8217;t trust closed-source betas, do we?), current-monitoring at the wall-socket, that sort of thing. Then I figured I&#8217;d download the package first to see what was in it. A kernel module? Lots of ugly scripts? Who knows&#8230;</p>
<p>I don&#8217;t, anyway. I never got to the download step. In the step before that I was presented with a rather restrictive license. I quote verbatim (only the change to bold-type is mine, all text is original):</p>
<blockquote><p>5.    Confidentiality</p>
<p>The Software and Documentation, all related Intellectual Property, and <strong>any information learned or discovered by You about the performance of the Software in the course of use under this License constitutes proprietary trade secret information</strong> owned solely by Licensor (collectively, &#8220;Confidential Information&#8221;).  You agree that You will not, without the express prior written consent of Licensor, (1) use the Confidential Information other than to use the Software as authorized by this License Agreement; (2) disclose any Confidential Information to any third party; or (3) fail to use best efforts to safeguard the Confidential Information from unauthorized use, copying, or disclosure. You acknowledge that a breach of this Section 5 may cause the Licensor irreparable harm and damages that are difficult to ascertain.  Therefore, the Licensor, upon a disclosure or threatened disclosure of any Confidential Information, will be entitled to injunctive relief (without the requirement of posting bond), without limiting its other remedies under this License Agreement, in equity or at law.  The obligations of this Section shall survive this License Agreement without limitation in duration.  By clicking the “I Agree” button below, You consent to having any information that You provide to the Licensor processed and stored in the United States.</p></blockquote>
<p>Well, I&#8217;m still not a lawyer, nor am I a native speaker, but I think that this basically means I would not be allowed to tell you what I would read on my current monitor. What the&#8230;.</p>
<p>I also think that this means that the post reporting the 65% savings is violating the license.</p>
<p>After a long half second of thinking about it, I did not agree, so I can&#8217;t tell you anything about the software &#8211; by the way, even if I had agreed, I still wouldn&#8217;t be able to tell you anything! My advice? Don&#8217;t waste your time with bloody legalese and read some real information instead: Intel has a very informative website on the topic at <a href="http://www.lesswatts.org/">LessWatts.org</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/05/13/miserware-micromiser-miserable-licensing/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>The superiority of the distro</title>
		<link>https://learninginlinux.wordpress.com/2009/05/10/the-superiority-of-the-distro/</link>
					<comments>https://learninginlinux.wordpress.com/2009/05/10/the-superiority-of-the-distro/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Sun, 10 May 2009 23:59:50 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[distro]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[software updates]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=251</guid>

					<description><![CDATA[Shawn wrote a nice piece pointing out why developers (in particular for embedded systems) are better off running Linux than Windows. In summary, it&#8217;s all about the tools that ship with it. If we&#8217;re nitpicking then, what we&#8217;re actually talking about here is not Linux proper (the kernel), and not GNU/Linux (the operating system), but [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Shawn wrote a <a title="Going back to Linux" href="http://blog.sybreon.com/index.php/2009/05/going-back-to-linux/">nice piece</a> pointing out why developers (in particular for embedded systems) are better off running Linux than Windows. In summary, it&#8217;s all about the tools that ship with it. If we&#8217;re nitpicking then, what we&#8217;re actually talking about here is not Linux proper (the kernel), and not GNU/Linux (the operating system), but rather the distribution (whichever one that is). It&#8217;s the software stack as a whole that&#8217;s making the difference.</p>
<p>This is exactly what I usually tell people who ask why I&#8217;m running &#8220;this Linux thing&#8221;. It&#8217;s better suited to my needs <em>on every level of the software stack</em>. Of course, that&#8217;s usually way too vague to compel someone who had to ask to begin with. Examples please? My new favourite example is a security thing.</p>
<p>The other week I was catching up with a few long-overdue admin tasks on my parents&#8217; PCs, mostly security updates. They&#8217;re (still) on MS Windows, and you should have seen the number of pop-ups when I logged on as admin: I don&#8217;t know how many apps, all reporting they&#8217;d like permission to fetch and install updates. Compare that to the elegance of the little warning star in the Gnome menu, or the output of a quick &#8220;aptitude upgrade&#8221;.</p>
<p>Big deal? I think so. I&#8217;m quite sure that that single interface to all software updates is not just more elegant, but that it&#8217;s also directly keeping systems more secure. Quoting from an <a title="Linux vs. Windows: Which Is More Secure?" href="http://www.eweek.com/c/a/Linux-and-Open-Source/Linux-vs-Windows-Which-Is-More-Secure/">old eWeek item</a>:</p>
<blockquote><p>For example, for the nine highest-profile Windows malicious code incidents as of March 2003, Microsoft&#8217;s patches predated major outbreaks by an average of 305 days, yet most firms hadn&#8217;t applied the patches.</p></blockquote>
<p>That is not a statement about Windows, or Linux. It&#8217;s a statement about human nature.</p>
<p><a title="Third-party software leaves users open to security risks" href="http://arstechnica.com/security/news/2009/04/third-party-software-leaves-users-open-to-security-risks.ars">Here&#8217;s</a> another item, from Ars:</p>
<blockquote><p>Secunia cited data from Microsoft showing that third-party software vulnerabilities are the ones that are most frequently exploited, and said that its own data showed that users simply don&#8217;t update as frequently as they should.</p></blockquote>
<p>Having a clear, simple, and non-crappy upgrade manager vastly diminishes these problems, because all people are lame, and the number of people that will not apply updates promptly will go up at least quadratically with the number of steps the update takes (and that&#8217;s a conservative estimate). That&#8217;s why distros will win out over bare operating systems with apps dropped on top of them. It&#8217;s also why I&#8217;m going to press mom and dad to please let me replace their systems&#8230;</p>
<p>Is this benefit intrinsically tied to free software? In theory, no. I actually tried to pitch this idea to the <a title="A Dell Software Distribution Channel / Package Management Tool" href="http://www.ideastorm.com/ideaView?id=0877000000007A6AAI">Ideastorm</a> crowd at some point. But maybe it would not work so well in practice: if MS tried to turn Windows into a distro, or tried to press other vendors into using their update manager, the antitrust regulators would be all over them in no time. So in practice, one might say this is a benefit of free software. Yay.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/05/10/the-superiority-of-the-distro/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>Dell shipping revamped Ubuntu 8.04 with the Mini 10</title>
		<link>https://learninginlinux.wordpress.com/2009/05/07/dell-shipping-revamped-ubuntu-8-04-with-the-mini-10/</link>
					<comments>https://learninginlinux.wordpress.com/2009/05/07/dell-shipping-revamped-ubuntu-8-04-with-the-mini-10/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Thu, 07 May 2009 21:19:08 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[Dell Mini 10]]></category>
		<category><![CDATA[ubuntu 8.04]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=244</guid>

					<description><![CDATA[Update: it looks like I&#8217;m sucking in a lot of search engine traffic relating to Poulsbo / GMA500 on Linux &#8211; that was unintentional because there&#8217;s no information here. Please look at Adam Williamson&#8217;s work on a native driver for Poulsbo instead. Update 2: I came across some Ubuntu-specific notes which should be useful too. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><strong>Update</strong>: it looks like I&#8217;m sucking in a lot of search engine traffic relating to Poulsbo / GMA500 on Linux &#8211; that was unintentional because there&#8217;s no information here. Please look at <a title=" Native Poulsbo (GMA 500) graphics driver for Fedora 10+ " href="http://www.happyassassin.net/2009/05/13/native-poulsbo-gma-500-graphics-driver-for-fedora-10/">Adam Williamson&#8217;s work on a native driver for Poulsbo</a> instead.</p>
<p><strong>Update 2</strong>: I came across <a title="mok0’s world - Ubuntu on the Dell Mini 10" href="http://mok0.wordpress.com/2009/05/25/ubuntu-on-the-dell-mini-10-2/">some Ubuntu-specific notes</a> which should be useful too.</p>
<p>&#8212;</p>
<p>I was a bit surprised by <a title="Ubuntu Now Available for Mini 10 Customers in the United States and Canada" href="http://en.community.dell.com/blogs/direct2dell/archive/2009/05/07/Ubuntu-Now-Available-for-Mini-10-Customers-in-the-United-States-and-Canada.aspx">Dell&#8217;s announcement</a> just now: it seems they&#8217;ve gone through some real efforts to make Ubuntu 8.04 look all 2009ish on their latest <a title="Dell fights back against Psion netBook trademark rampage" href="http://arstechnica.com/gadgets/news/2009/02/dell-fights-back-against-psion-netbook-trademark-rampage.ars"><span style="text-decoration:line-through;">netbook</span></a> entry-level notebook. Let me try and be all 2009ish too, and include some of their <span style="text-decoration:line-through;">Flash-terrorism</span> YouTube footage:</p>
<iframe class="youtube-player" width="500" height="282" src="https://www.youtube.com/embed/IWhi6Htf3AM?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
<p>Don&#8217;t worry,  I haven&#8217;t suddenly become a sucker for fancy &#8220;<a title="Wikipedia entry" href="http://en.wikipedia.org/wiki/Iconography">iconography</a>&#8221; (apparently that has a whole different meaning &#8220;in the office of the CTO&#8221; &#8211; don&#8217;t take this badly, Dell, I am a fan!), nor have I finally grown fond of NetworkManager, but seriously: WWAN support <em>is</em> pretty cool, and having it neatly integrated is even better.</p>
<p>Actually, I&#8217;m just really pleased to see what importance Dell seem to be assigning their Ubuntu programme these days. This little cheapling has Intel&#8217;s Poulsbo chipset (just like the Mini 12), which is great for power consumption, but apparently <a title="Intel GMA 500 (Poulsbo) graphics on Linux: a precise and comprehensive summary as to why you’re screwed" href="http://www.happyassassin.net/2009/01/30/intel-gma-500-poulsbo-graphics-on-linux-a-precise-and-comprehensive-summary-as-to-why-youre-screwed/">a real chore</a> when it comes to Linux support. As far as I&#8217;m aware, no other vendor is even trying to support it. In addition, also to keep the price low, there&#8217;s the ever troublesome Broadcom wifi chip, &#8220;official&#8221; Linux-support for which was <a title="unofficial account by an Ubuntu dev" href="http://ubuntuforums.org/showpost.php?p=5543025&amp;postcount=11">supposedly</a> also negotiated by Dell. I&#8217;m guessing there was no way they&#8217;d get all that working on 9.04, but to make up for that they backported the NM improvements and updated the &#8220;iconography&#8221; (sorry).</p>
<p>I&#8217;m just saying, that&#8217;s a lot of effort when you have a business partner called Microsoft that&#8217;s giving you Windows XP basically for free&#8230;</p>
<p>Ok, enough Ubuntu/Dell fanboyism for one day. Good night!</p>
<p>P.S.: Rik &#8211; if you happen to read this &#8211; I didn&#8217;t know they had a 6-cell battery coming (I told the guy to get a Samsung instead&#8230;)!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/05/07/dell-shipping-revamped-ubuntu-8-04-with-the-mini-10/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>Fully restricting rsync options server-side</title>
		<link>https://learninginlinux.wordpress.com/2009/05/07/rsync-fixed-server-side-options/</link>
					<comments>https://learninginlinux.wordpress.com/2009/05/07/rsync-fixed-server-side-options/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Thu, 07 May 2009 01:29:09 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[backup]]></category>
		<category><![CDATA[certificate authentication]]></category>
		<category><![CDATA[restricted access]]></category>
		<category><![CDATA[rsync]]></category>
		<category><![CDATA[ssh]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=233</guid>

					<description><![CDATA[how to extract the rsync command that your locally-executed rsync sends to the remote machine's ssh-server; how to make the ssh-server execute that exact command, regardless what someone tries to feed it from the local machine; how to run arbitrary scripts before rsync runs; and how to force using the restricted certificate for access

]]></description>
										<content:encoded><![CDATA[<p>Let me tell you up front: the following is a respin of information I found elsewhere. And it was very well written. Normally, then, I wouldn&#8217;t blog this, and rather add a link in my RSS feed in the sidebar &#8211; after all that&#8217;s the most compact form of &#8220;code reuse&#8221; and of upping the PageRank for a good site I found. What makes things different today? Well, it took me a crazy number of  search-engine queries to find the info.</p>
<p>Maybe I&#8217;m just stoopid, but let&#8217;s assume I&#8217;m not. I hope this respin ranks a bit better for keywords, so I can help some other lost souls find the site that I found. If you want the expert story, click straight through to <a title="rsync Tips &amp; Tricks" href="http://sial.org/howto/rsync/">the source</a> &#8211; actually, that whole site is simply excellent, and well worth browsing thoroughly if you&#8217;re looking to learn <a href="http://sial.org/howto/">cool sysadmin stuff</a> and <a href="http://sial.org/#s3">more</a>.</p>
<p>I learned (at least) two new things today:</p>
<ul>
<li>how to extract the rsync command that your locally-executed rsync sends to the remote machine&#8217;s ssh-server</li>
<li>how to make the ssh-server execute that exact command, regardless what someone tries to feed it from the local machine</li>
</ul>
<p>But first, let me explain (to myself) why I wanted to know these things.</p>
<h3>The issue with remote backups</h3>
<p>You want off-site backups, because, well, that&#8217;s <a title="The Tao of Backup: 3. Separation" href="http://taobackup.com/separation.html">rule #3</a>. But, by <a title="The Tao of Backup: 2. Frequency" href="http://taobackup.com/frequency.html">rule #2</a>, you also want to backup often, and there&#8217;s only one way to guarantee that that will work out: automation. There&#8217;s a problem with that though: to attain automation, your remote backups will need some unprotected authentication token, e.g. an ssh-certificate with an empty passphrase.</p>
<p>Obviously, you want to restrict what that dangerous key lets you do on the remote system. Simply put, you don&#8217;t want someone that managed to break into your backup client to be able to erase both your backups <em>and</em> the originals. The solutions I had seen so far included creating a separate backup-user on the server, and providing a restricted shell of some sort. That&#8217;s one way of doing things, but it&#8217;s not easy to set up:</p>
<ul>
<li>you only want to allow a select few commands, say rsync for transport, and perhaps some scripts to prepare the backup</li>
<li>ideally you want to have read-only access so that the client performing the backup cannot damage files, which might even occur without malicious intent, say by a wrong string of rsync options</li>
<li>but maybe you want to run some sort of hotcopy command on some database you&#8217;re using, and this does require write access</li>
<li>do you create yet another user for that?</li>
<li>and are you sure your shell is really as restricted as you think? No tricks to break out of it?</li>
<li>aaaaaahhrghh&#8230;..</li>
</ul>
<p>Right. I&#8217;m *that* mistrusting, and especially when it comes to my own competence. I&#8217;d definitely bork that restricted shell setup. Please give me something dead simple.</p>
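<p>(As an aside: the &#8220;unprotected authentication token&#8221; above is just an ordinary key pair generated with an empty passphrase. Assuming OpenSSH, and with the file name being purely my own convention, that&#8217;s simply:)</p>

```shell
# Create a dedicated, passphrase-less (-N "") key pair for the backup job;
# keep it separate from your day-to-day key.
mkdir -p ~/.ssh
ssh-keygen -q -t rsa -b 4096 -N "" -C "backup-only" -f ~/.ssh/restricted_key

# The public half, ~/.ssh/restricted_key.pub, is what goes into the
# server's $HOME/.ssh/authorized_keys (with restrictions added, ideally).
```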
<h3>Figuring out what your local rsync needs from the remote rsync</h3>
<p>Okay. Assume we&#8217;ll always call the server with the exact same rsync command, perhaps something like</p>
<pre>bash$ rsync -avz remote_host:/var/backup/ /var/remote_host_backup</pre>
<p>(On a side-note, I&#8217;m still doubting myself every time: trailing slash or no trailing slash? Terrible.)</p>
<p>Now, you can see what rsync command will get executed on the remote host if you add another -v:</p>
<pre>bash$ rsync -avvnz remote_host:/var/backup/ /var/remote_host_backup</pre>
<p>where I also added an -n to have a dry-run. The first line of output reads something like</p>
<pre>opening connection using ssh remote_host rsync --server --sender -vvnlogDtprz . /var/backup</pre>
<p>&#8230;which runs off the page here because I didn&#8217;t pay WordPress for my own custom CSS yet, but you can try this yourself anyway. What we&#8217;re interested in is the part that starts at &#8220;rsync&#8221;: this is what is executed on the remote host.</p>
<h3>Using sshd with the command=&#8221;&#8221; option</h3>
<p>Remember we&#8217;re using a passphrase-less ssh-certificate for the sake of automation. On the server, that requires an entry like this in $HOME/.ssh/authorized_keys:</p>
<pre>ssh-rsa AAYourVeryLongPublicKeyThingy== plus arbitrary comment/username at the end</pre>
<p>The <a title="sshd manpage" href="http://manpages.ubuntu.com/manpages/hardy/en/man8/sshd.8.html">sshd manpage</a> tells you you can insert quite a few options at the start of this line. You should really consider all of those options, but the cool one for now is the command=&#8221;&#8221; option. Between the quotes we put the result of the previous section minus the -n (or you&#8217;ll have only dry runs&#8230;).</p>
<pre>command="rsync --server --sender -vlogDtprz . /var/backup" ssh-rsa AAYourVeryLongPublicKeyThingy== plus arbitrary comment/username at the end</pre>
<p>&#8230;that&#8217;s probably running off the page big time now. Sorry. And I didn&#8217;t even add all the other restrictive options you ought to consider.</p>
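<p>(For the record, here&#8217;s roughly what such an entry looks like with a few of those other options from the sshd manpage added &#8211; the from= address is of course just an example:)</p>
<pre>command="rsync --server --sender -vlogDtprz . /var/backup",from="192.0.2.10",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAYourVeryLongPublicKeyThingy== plus arbitrary comment/username at the end</pre>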
<p>The beauty of this is that sshd will now <em>ignore</em> whatever abuse you&#8217;re feeding it from the ssh client. Whenever you authenticate using this specific certificate, it will only run that exact command.</p>
<p>Let me put this yet another way. The only way to successfully talk to the server with that certificate is to say what it expects you to say: you can only run the matching local rsync command or the two rsync instances will not understand each other. All the options are fixed, client-side and server-side.</p>
<p>This is what you want. Or, it is what I wanted, anyway.</p>
<h3>What about running scripts before the actual rsync?</h3>
<p>Okay, I learned a third thing. This was in the <a title="rsync manpage" href="http://manpages.ubuntu.com/manpages/hardy/en/man1/rsync.1.html">rsync manpage</a>: your remote rsync &#8220;can be any program,  script,  or command  sequence you’d care to run, so long as it does not corrupt the standard-in &amp; standard-out that rsync is using to  communicate&#8221;.</p>
<p>In other words: you can run any database hotcopy command on the server, as long as it cares to shut up, so that to the client, it looks as if only rsync was called. Your authorized_keys entry now looks somewhat like this:</p>
<pre>command="do_db_hotcopy &gt;&gt; /var/log/hotcopy.log 2&gt;&amp;1 ; rsync --server --sender -vlogDtprz . /var/backup" ssh-rsa AAYourVeryLongPublicKeyThingy== plus arbitrary comment/username at the end</pre>
<p>&#8230; where you&#8217;re being careful to make sure the only output sent comes from rsync. This works for me; I could imagine a long script might cause your local rsync to time-out in some way, so ymmv.</p>
<h3>One more thing</h3>
<p>I&#8217;ll shut up soon, too, but there was actually also a fourth thing&#8230; how do you make sure your local rsync command uses the restricted, passphraseless key under all circumstances? When I&#8217;m actually logged in myself, often <a title="ssh-agent manpage" href="http://manpages.ubuntu.com/manpages/hardy/en/man1/ssh-agent.1.html">ssh-agent</a> is keeping my less-restricted key available. The problem with this is that ssh will <em>prefer</em> using that key, but when I use that, my fancy hotcopy (from the previous section) never gets called.</p>
<p>To fix this, my backup script on the client contains an extra -e option to rsync, which is self-explanatory, but that&#8217;s not enough: ssh still prefers the key held by ssh-agent. The full solution (as the ssh-agent manpage more or less documents) is thus:</p>
<pre>#! /bin/bash
unset SSH_AUTH_SOCK
rsync -avz -e 'ssh -i .ssh/restricted_key' remote_host:/var/backup/ /var/remote_host_backup</pre>
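<p>An alternative that I believe achieves the same without touching the agent socket is ssh&#8217;s IdentitiesOnly option, which tells ssh to offer only the explicitly configured identity and ignore whatever ssh-agent holds. A sketch, with my own paths; I haven&#8217;t tried it against every ssh version:</p>

```shell
#!/bin/bash
# Sketch of an alternative: IdentitiesOnly=yes makes ssh ignore keys held
# by ssh-agent and offer only the -i identity, so SSH_AUTH_SOCK can stay
# set. BatchMode=yes keeps an unattended (cron) run from hanging on a
# password prompt; the || branch reports failure instead of dying quietly.
rsync -avz \
      -e 'ssh -o IdentitiesOnly=yes -o BatchMode=yes -i ~/.ssh/restricted_key' \
      remote_host:/var/backup/ /var/remote_host_backup \
    || echo "backup from remote_host failed" >&2
```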
<p>Sometime soon I might respin this whole thing with rdiff-backup (&#8230;you want to keep multiple states of your backup, because, well, that&#8217;s <a title="The Tao Of Backup: 4. History" href="http://taobackup.com/history.html">rule #4</a> :P). I just need to figure out how client-server communication works for that.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/05/07/rsync-fixed-server-side-options/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
		<item>
		<title>DSAs, USNs, udev, and&#8230; reboots</title>
		<link>https://learninginlinux.wordpress.com/2009/04/21/dsas-usns-udev-and-reboots/</link>
					<comments>https://learninginlinux.wordpress.com/2009/04/21/dsas-usns-udev-and-reboots/#comments</comments>
		
		<dc:creator><![CDATA[yungchin]]></dc:creator>
		<pubDate>Tue, 21 Apr 2009 13:06:18 +0000</pubDate>
				<category><![CDATA[UbuntuWeblogsOrg]]></category>
		<category><![CDATA[debian]]></category>
		<category><![CDATA[reboots]]></category>
		<category><![CDATA[software updates]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://oei.yungchin.nl/?p=230</guid>

					<description><![CDATA[As I&#8217;ve posted about a couple of times already, reboots are real productivity killers. So please spare me any unnecessary reboots. Like, possibly, the one last week. It occurred to me that, after a patch came through for some udev vulnerability, a Debian 5.0 system that I&#8217;m running did not prompt me for a reboot, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>As I&#8217;ve posted about a couple of times already, reboots are real productivity killers. So please spare me any unnecessary reboots. Like, possibly, the one last week.</p>
<p>I noticed that, after a patch came through for some udev vulnerability, a Debian 5.0 system that I&#8217;m running did not prompt me for a reboot, whereas Ubuntu 8.04 on my laptop did. Indeed, the <a title="Ubuntu Security Notice 758-1" href="http://www.ubuntu.com/usn/usn-758-1">related USN</a> mentions a reboot as necessary, and there&#8217;s no such mention in the <a title="Debian Security Advisory 1772" href="http://www.debian.org/security/2009/dsa-1772">corresponding DSA</a>. A quick look at the process IDs on the Debian system shows that the update just restarted udevd. Now, I understand too little of the internals to judge if this was impossible for the Ubuntu system, so I&#8217;d like to put that out there as a question: why couldn&#8217;t we just restart udevd on Ubuntu?</p>
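<p>A rough way to check this yourself (my own heuristic, not something from the DSA or USN): once an update replaces a daemon&#8217;s binary on disk, the kernel reports the running process&#8217;s /proc/&lt;pid&gt;/exe link as deleted, so you can see that the process is still executing the old code until it&#8217;s restarted.</p>

```shell
#!/bin/sh
# Heuristic check (my own, not from the advisories): a process whose
# on-disk binary was replaced by an update keeps running the old code;
# the kernel then shows its /proc/<pid>/exe link as "(deleted)".
check_stale() {
    pid=$1
    if ls -l "/proc/$pid/exe" 2>/dev/null | grep -q '(deleted)'; then
        echo "PID $pid runs a deleted (updated) binary -- restart needed"
    else
        echo "PID $pid binary looks current"
    fi
}
# demo on this shell; on a real system try e.g. check_stale "$(pidof udevd)"
check_stale "$$"
```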
<p>In fact there was a similar issue some time ago. A NetworkManager update came through, and then I was prompted to reboot. As far as I could tell, restarting dbus was actually sufficient here to start using the updated binaries (but of course the update manager thingy kept bugging me to reboot nonetheless&#8230;). APT could have done that automatically, I&#8217;d think.</p>
<p>So what&#8217;s the policy with that? Just reboot to be on the safe side, avoid peculiarities with some users&#8217; systems? User friendliness in some intractable way or another? Any thoughts?</p>
<p>(On a related note, the whole boot-time benchmarking obsession permeating the geek blogs lately obviously doesn&#8217;t resonate with me &#8211; if I reboot my laptop once a month at most, it may take five minutes for all I care. I&#8217;m much more interested in the <a title="Kernel Newbies: Linux 2.6.29" href="http://kernelnewbies.org/Linux_2_6_29#head-e1bab8dc862e3b477cc38d87e8ddc779a66509d1">kernel mode-setting</a> advances, with their promise of more robust suspend/resume. Getting all excited here :))</p>
]]></content:encoded>
					
					<wfw:commentRss>https://learninginlinux.wordpress.com/2009/04/21/dsas-usns-udev-and-reboots/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		
		<media:content url="https://1.gravatar.com/avatar/77c5e6699db8b2182d517848f281896d8908a16487ce6a1e00ccdc966a355062?s=96&#38;d=&#38;r=X" medium="image">
			<media:title type="html">yungchin</media:title>
		</media:content>
	</item>
	</channel>
</rss>
