tag:blogger.com,1999:blog-11347239252658054272024-03-14T06:39:53.260+02:00Rambling around footending to depart from the main point or cover a wide range of subjectseddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.comBlogger281125tag:blogger.com,1999:blog-1134723925265805427.post-49314137409527293702021-09-02T19:30:00.001+03:002021-09-02T19:30:00.257+03:00Stretch to Buster upgrade issues: "Grub error: symbol ‘grub_is_lockdown’ not found", missing RTL8111/8168/8411 Ethernet driver and RTL8821CE Wireless adapter on Linux Kernel 5.10 (and 4.19)<p>I have been running Debian Stretch on my <a href="https://support.hp.com/gb-en/document/c06055335">HP Pavilion 14-ce0000nq laptop</a> since buying it back in April 2019, just before attending Oxidizeconf, where I presented "<a href="https://www.youtube.com/watch?v=IKXrNlXXfL4" target="_blank">How to Rust When Standards Are Defined in C</a>".</p><p>Debian Buster (aka Debian 10) was released about 4 months later and I had been postponing the upgrade as my free time isn't what it used to be. I also tend to wait for the first or even second update of the release to avoid any sharp edges.</p><p>As this laptop has a Realtek 8821CE wireless card that wasn't officially supported in the Linux kernel, I had to use <a href="https://github.com/eddyp/endlessm-linux-drivers-net-wireless-rtl8821ce" target="_blank">an out-of-tree hacked driver</a> to get the wireless working on Stretch kernels such as 4.9. It didn't even get along with DKMS, so I did all its compilations and installations manually. 
More reason to wait for a newer release that would contain <a href="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/net/wireless/realtek/rtw88/Kconfig?h=linux-5.13.y#n64" target="_blank">a driver inside the official kernel</a>.</p><p>I was waiting for the inevitable and dreading the wireless issues, but since <a href="https://www.debian.org/News/2021/20210814" target="_blank">mid-August Bullseye became stable</a>, turning Stretch into oldoldstable, I decided that I had to do the upgrade, at least to Buster.</p><p><span style="font-size: medium;">The Grub error and the fix</span> <br /></p><p>Everything went quite smoothly, except that after the reboot, the laptop failed to boot with this Grub error:</p><p style="text-align: left;"><span></span></p><blockquote><span style="font-family: courier;"><span>error: symbol ‘grub_is_lockdown’ not found</span></span></blockquote><p><span style="font-family: inherit;">I looked for a solution and it seemed everyone was <a href="https://www.reddit.com/r/linuxquestions/comments/m2dp4t/help_error_symbol_grub_is_lockdown_not_found_boot/" target="_blank">stuck</a> or the <a href="https://www.linux.org/threads/error-symbol-grub-is-lockdown-not-found-boot-error-solved.33404/" target="_blank">solution</a> was <a href="https://forum.manjaro.org/t/grub-error-symbol-grub-is-lockdown-after-update/58327">unclear</a>.</span></p><p><span style="font-family: inherit;">There is even a bug report in Debian about this error, bug <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=984760" target="_blank">#984760</a>.<br /></span></p><p><span style="font-family: inherit;">Adding my own confused solution to the pile of confusion: I tried supergrubdisk2/rescatux, but it didn't work for me; it might have been due to the combination of me using LVM and grub-efi-amd64. 
I also tried booting the first Buster DVD in rescue mode (to avoid the need for network). I was able to enter the root partition and mount the EFI partition, too, but since I didn't want to mess up the setup even more or depend on an external USB stick, I didn't know where I should try to write the Grub EFI config - the root partition is on NVMe storage.</span></p><p><span style="font-family: inherit;">When I bought the laptop it had FreeDOS and some HP rescue app installed on it, which I did not wipe when installing Debian. I had even forgotten where or how the EFI setup was installed on the disk, and even though EFI should be more reliable and simpler, I never got the hang of it.<br /></span></p><p><b><span style="font-family: inherit;">In the end, I realized that I could actually select manually, via the BIOS, which EFI executable should be booted, so I was able to boot into the regular system with some manual intervention.</span></b></p><p><span style="font-family: inherit;">I tried regenerating the grub configuration and installing it, and also tried restoring the default proper boot sequence (I even installed <a href="https://packages.debian.org/buster/refind">refind</a> in the system during my fumbling), but I think somewhere between the grub-efi-amd64 reconfiguration and its reinstallation I managed to do the right thing, as the default boot screen is the Grub one now.</span></p><p><span style="font-family: inherit;">Hints for anyone reading this in the hope of fixing the same issue - hopefully they will make things better, not worse (see the text below):</span></p><p>1) regenerate the grub config:</p><p></p><blockquote><span style="font-family: courier;">update-grub2</span> </blockquote>2) reinstall grub-efi-amd64 and make Debian the default<p></p><p><span style="font-family: courier;"></span></p><blockquote><span style="font-family: courier;">dpkg-reconfigure -plow grub-efi-amd64</span></blockquote><p></p><p>When reinstalling grub-efi-amd64 onto the disk, I think the scariest 
questions were these:</p><blockquote><p style="text-align: left;">Force extra installation to the EFI removable media path?<br /><br />Some EFI-based systems are buggy and do not handle new bootloaders correctly. If you force an extra installation of GRUB to the EFI removable media path, this should ensure that this system will boot Debian correctly despite such a problem. However, it may remove the ability to boot any other operating systems that also depend on this path. If so, you will need to make sure that GRUB is configured successfully to be able to boot any other OS installations correctly.<br /></p></blockquote><p> and</p><blockquote><p>Update NVRAM variables to automatically boot into Debian?<br /><br />GRUB can configure your platform's NVRAM variables so that it boots into Debian automatically when powered on. However, you may prefer to disable this behavior and avoid changes to your boot configuration. For example, if your NVRAM variables have been set up such that your system contacts a PXE server on every boot, this would preserve that behavior.<br /><br /></p></blockquote><p>I think the first can be safely answered "No" if you don't plan on booting via a removable USB stick, while the second is the one that does the restoring.</p><p>The second question is probably safe to answer "Yes" if you don't use PXE boot or another boot method, at least that's how I understand it. 
But if you do, I suspect that by installing refind or by playing with the multiple efi*-named packages and tools you can restore that, or it might be that your BIOS allows it directly.<br /></p><p>I just did a walk-through of these 2 steps again on my laptop and answered "No" to the removable media question, as it leads to errors when the media is not inserted (in my case, into the internal SD card reader), and "Yes" to making Debian the default.</p><p>It seems that for me this broke the FreeDOS and HP utilities boot entries from Grub, but I can still boot them via the BIOS options, and my goal was to have Debian boot correctly by default. <br /></p><p><span style="font-size: medium;">Fixing the missing RTL8111/8168/8411 Ethernet card issue</span><br /></p><p>As a side note for people with computers having a <b>Realtek RTL8111/8168/8411 Gigabit Ethernet Controller</b> and upgrading to Buster or switching to a newer kernel, please note that you might get the unpleasant surprise of even your Ethernet card disappearing, because the <b>r8169 </b>driver is not loaded by default.</p><p>I had to add it to /etc/modules so it is loaded by default:</p><blockquote><p>eddy@aptonia:/ $ cat /etc/modules<br /># /etc/modules: kernel modules to load at boot time.<br />#<br /># This file contains the names of kernel modules that should be loaded<br /># at boot time, one per line. 
Lines beginning with "#" are ignored.<br />r8169<br /><br /></p></blockquote><p><span style="font-size: medium;">The 5.10-compatible driver for the RTL8821CE wireless adapter</span></p><p>After the upgrade to Buster, with its (now oldstable) 4.19 kernel, the hacked version of the driver I had been using on Stretch's 4.9 kernels was no longer compatible - it failed to compile due to missing symbols.</p><p>The fix for me was to switch to the DKMS-compatible driver from <a href="https://github.com/tomaspinho/rtl8821ce">https://github.com/tomaspinho/rtl8821ce</a>, as this seems to work for both the 4.19 and 5.10 kernels (the latter installed from backports). <br /></p><p>I installed it via a modification of <a href="https://github.com/tomaspinho/rtl8821ce#manual-installation-of-driver" target="_blank">the manual install method</a>, only for the 4.19 and 5.10 kernels, leaving the legacy 4.9 kernels working with the hacked driver. You can do the same if, instead of running the provided script, you perform its steps manually and install only for the kernel versions you want, instead of the default of installing for all:</p><p>I looked inside the dkms-install.sh script for the required steps:</p><p>Copy the driver and add it to the dkms set of known drivers:</p><p></p><blockquote>DRV_NAME=rtl8821ce<br />DRV_VERSION=v5.5.2_34066.20200325<br /><br />cp -r . 
/usr/src/${DRV_NAME}-${DRV_VERSION}<br /><br />dkms add -m ${DRV_NAME} -v ${DRV_VERSION}<br /></blockquote><p>Then build and install it only for the kernel versions of your choice:<br /></p><blockquote>dkms build -m ${DRV_NAME} -v ${DRV_VERSION} <b>-k 5.10.0-0.bpo.8-amd64</b><br />dkms install -m ${DRV_NAME} -v ${DRV_VERSION} <b>-k 5.10.0-0.bpo.8-amd64</b><br /></blockquote><p> Or, without the variables:<br /></p><blockquote>dkms build rtl8821ce/v5.5.2_34066.20200325<b> -k 4.19.0-17-amd64</b><br />dkms install rtl8821ce/v5.5.2_34066.20200325 <b>-k 4.19.0-17-amd64</b><br /></blockquote><p></p><p>dkms status should confirm everything is in place, and I think you need to update grub2 again after this.<br /></p><p><b>Please note this driver is no longer maintained and the 5.10 tree should support the RTL8821CE wireless card with the <a href="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/net/wireless/realtek/rtw88/Kconfig?h=v5.10.61#n64" target="_blank">rtw88 driver from the kernel</a>,</b> but for me it did not work. I'll probably try this again at a later time, or after I upgrade to the current Debian stable, Bullseye.<br /></p><p></p>eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-78605004526388145442019-07-10T02:57:00.001+03:002019-07-27T02:49:30.417+03:00Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?<b>Update 2019-Jul-27: In the code below my StackVec type was more complicated than it had to be, I had been using StackVec<'a, &'a mut T> instead of StackVec<'a, T> where T: 'a. 
I am unsure how I ended up making the type so complicated, but I suspect the lifetime mismatch errors and the attempt to implement IntoIterator were the reason why I made <a href="https://github.com/sbenitez-cs140e/assignments-1-shell-skeleton/commit/32ce43c471c891743fd1a87ef511b33a6e379113">the original mistake</a>.</b><br />
<b><br /></b>
<b>Corrected code accordingly.</b><br />
<b><br /></b>
<b><br /></b>
<br />
I'm trying to go through <a href="https://cs140e.sergio.bz/">Sergio Benitez's CS140E class</a> and I am currently at <a href="https://cs140e.sergio.bz/assignments/1-shell/#implementing-stackvec">Implementing StackVec</a>. StackVec is something that currently looks like this:<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">/// A contiguous array type backed by a slice.<br />///<br />/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`<br />/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`<br />/// requires no memory allocation as it is backed by a user-supplied slice. As a<br />/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This<br />/// results in `push` being fallible: if `push` is called when the vector is<br />/// full, an `Err` is returned.<br />#[derive(Debug)]<br />pub struct StackVec<'a, T: 'a> {<br /> storage: &'a mut [T],<br /> len: usize,<br /> capacity: usize,<br />}</span></span></blockquote>
The initial skeleton did not contain the derive(Debug) or the capacity field; I added them myself.<br />
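To make the doc comment's "push is fallible" point concrete, here is a minimal, self-contained sketch of a slice-backed push. This is only an illustration of the idea, not the assignment's actual code; the method names and the plain Err(()) error type are my own assumptions:

```rust
// Minimal sketch (my own simplification, not the assignment's code) of why a
// slice-backed vector makes `push` fallible: capacity is fixed by the slice.
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
}

impl<'a, T: 'a> StackVec<'a, T> {
    pub fn new(storage: &'a mut [T]) -> Self {
        StackVec { storage, len: 0 }
    }

    // No allocation is possible, so a full backing slice means `push` must fail.
    pub fn push(&mut self, value: T) -> Result<(), ()> {
        if self.len == self.storage.len() {
            return Err(());
        }
        self.storage[self.len] = value;
        self.len += 1;
        Ok(())
    }
}

fn main() {
    let mut buf = [0u8; 2];
    let mut v = StackVec::new(&mut buf);
    assert!(v.push(1).is_ok());
    assert!(v.push(2).is_ok());
    assert!(v.push(3).is_err()); // backing slice is full
}
```

Since nothing here allocates, the same shape works in no_std code as well.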
<br />
Now I am trying to understand what needs to happen behind:<br />
<ol>
<li>IntoIterator</li>
<li>when in no_std</li>
<li>with a custom type which has generics</li>
<li>and has to use lifetimes </li>
</ol>
I don't know what I'm doing<strike>, I <a href="https://github.com/sbenitez-cs140e/assignments-1-shell-skeleton/commit/32ce43c471c891743fd1a87ef511b33a6e379113">might have managed to do it</a></strike>:<br />
<br />
<blockquote class="tr_bq">
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">pub struct StackVecIntoIterator<'a, T: 'a> {<br /> stackvec: StackVec<'a, T>,<br /> index: usize,<br />}<br /><br />impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {<br /> type Item = &'a mut T;<br /> type IntoIter = StackVecIntoIterator<'a, T>;<br /><br /> fn into_iter(self) -> Self::IntoIter {<br /> StackVecIntoIterator {<br /> stackvec: self,<br /> index: 0,<br /> }<br /> }<br />}<br /><br />impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {<br /> type Item = &'a mut T;<br /><br /> fn next(&mut self) -> Option<Self::Item> {<br /> let result = self.stackvec.pop();<br /> self.index += 1;<br /><br /> result<br /> }<br />}</span></span></strike></blockquote>
<br />
<a href="https://github.com/sbenitez-cs140e/assignments-1-shell-skeleton/commit/ca87231be8aaf71932389fc0dd3fce3df46ef1b5">Corrected code as of 2019-Jul-27</a>:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">pub struct StackVecIntoIterator<'a, T: 'a> {<br /> stackvec: StackVec<'a, T>,<br /> index: usize,<br />}<br /><br />impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, T> {<br /> type Item = T;<br /> type IntoIter = StackVecIntoIterator<'a, T>;<br /><br /> fn into_iter(self) -> Self::IntoIter {<br /> StackVecIntoIterator {<br /> stackvec: self,<br /> index: 0,<br /> }<br /> }<br />}<br /><br />impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {<br /> type Item = T;<br /><br /> fn next(&mut self) -> Option<Self::Item> {<br /> let result = self.stackvec.pop().clone();<br /> self.index += 1;<br /><br /> result<br /> }<br />}</span></span></blockquote>
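One behavior of a pop-based next() worth noting: assuming pop removes the last element, as with std::Vec, the elements come out in reverse push order. A self-contained sketch (with my own simplified StackVec and a Vec-like pop, not the assignment's type) shows this:

```rust
// Simplified, self-contained sketch (not the assignment's code): an
// IntoIterator whose next() pops from the end of a slice-backed vector,
// so iteration yields elements in reverse push order.
struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
}

impl<'a, T: Clone + 'a> StackVec<'a, T> {
    // Vec-like pop: removes and returns the last live element.
    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            None
        } else {
            self.len -= 1;
            Some(self.storage[self.len].clone())
        }
    }
}

struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, T> {
    type Item = T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator { stackvec: self }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        self.stackvec.pop()
    }
}

fn main() {
    let mut buf = [1, 2, 3];
    let v = StackVec { storage: &mut buf, len: 3 };
    let collected: Vec<i32> = v.into_iter().collect();
    assert_eq!(collected, vec![3, 2, 1]); // reverse push order
}
```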
<br />
<br />
<br />
I was really struggling to understand what the returned iterator type should be in my case, since, obviously, std::vec is out because a) I am trying to do a no_std implementation of something that should look a little like b) a std::vec.<br />
<br />
That was until I found <a href="https://stackoverflow.com/a/30220832">this wonderful example</a> of a custom type that doesn't use any already-implemented Iterator, but defines the helper PixelIntoIterator struct and its associated impl block:<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">struct Pixel {<br /> r: i8,<br /> g: i8,<br /> b: i8,<br />}<br /><br />impl IntoIterator for Pixel {<br /> type Item = i8;<br /> <b>type IntoIter = PixelIntoIterator;</b><br /><br /> fn into_iter(self) -> Self::IntoIter {<br /><b> PixelIntoIterator {<br /> pixel: self,<br /> index: 0,<br /> }</b><br /> }<br />}<br /><br /><b>struct PixelIntoIterator {<br /> pixel: Pixel,<br /> index: usize,<br />}<br /><br />impl Iterator for PixelIntoIterator {<br /> type Item = i8;<br /> fn next(&mut self) -> Option<i8> {<br /> let result = match self.index {<br /> 0 => self.pixel.r,<br /> 1 => self.pixel.g,<br /> 2 => self.pixel.b,<br /> _ => return None,<br /> };<br /> self.index += 1;<br /> Some(result)<br /> }<br />}</b><br /><br />fn main() {<br /> let p = Pixel {<br /> r: 54,<br /> g: 23,<br /> b: 74,<br /> };<br /> for component in p {<br /> println!("{}", component);<br /> }<br />}</span></span></blockquote>
The part in bold was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.<br />
<br />
Note that, once I had only one new thing left - the generics; luckily, the lifetime part seemed to simply be considered part of the generic thing - everything was easier to navigate.<br />
<br />
<br />
Still, the fact that there are so many new things at once, one of them being lifetimes - which <a href="https://twitter.com/OxidizeConf/status/1122114957057904641">can not be taught, only experienced</a>, <a href="https://twitter.com/oli_obk">@oli_obk</a> - makes things very confusing.<br />
<br />
Even if I think I managed it for IntoIterator, I am similarly confused about <a href="https://github.com/sbenitez-cs140e/assignments-1-shell-skeleton/blob/32ce43c471c891743fd1a87ef511b33a6e379113/stack-vec/src/lib.rs#L126">implementing "Deref for StackVec"</a> for the same reasons.<br />
<br />
I think I am seeing on my own skin what <a href="https://www.youtube.com/watch?v=UT9Wlk64v7U&t=19m36s">Oliver Scherer was saying, that big infodumps at the very beginning are not the way to go</a>. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, <a href="https://twitter.com/oli_obk">Oli</a>?<br />
<br />
All that aside, what should be the signature of the impl? Is this OK?<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> type Target = T;</span><br />
<span style="font-size: x-small;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> fn deref(&self) -> &Self::Target;</span><br />
<span style="font-size: x-small;">}</span></blockquote>
Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it's unclear, at least to me, at this point. And because of that, I am unsure what the implementation should even look like.<br />
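For what it's worth, one natural choice - mirroring std::Vec<T>, which derefs to [T] - is Target = [T], the slice of live elements. A self-contained sketch (again using a simplified StackVec of my own, not the class's type):

```rust
use std::ops::Deref;

// Hedged sketch (not the class's code): mirroring std::Vec<T>, which derefs
// to [T], a slice-backed vector can deref to the slice of its live elements.
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
}

impl<'a, T: 'a> Deref for StackVec<'a, T> {
    type Target = [T];

    fn deref(&self) -> &Self::Target {
        &self.storage[..self.len]
    }
}

fn main() {
    let mut buf = [10, 20, 30, 40];
    let v = StackVec { storage: &mut buf, len: 2 };
    // Slice methods become available through deref coercion:
    assert_eq!(v.len(), 2);
    assert_eq!(v.first(), Some(&10));
    assert_eq!(&v[..], &[10, 20]);
}
```

With this Target, all of the read-only slice API (len, first, iter, indexing) comes for free via deref coercion.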
<br />
I don't know what I'm doing, but I hope things will become clear with more exercise.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-31671595102927469182019-07-04T13:02:00.002+03:002019-07-04T13:02:47.673+03:00HOWTO: Rustup: Overriding the rustc compiler version just for some directoryIf you need to use a specific version of the rustc compiler instead of the default, the <a href="https://doc.rust-lang.org/nightly/edition-guide/rust-2018/rustup-for-managing-rust-versions.html#managing-versions" target="_blank">rustup documentation tells you how to do that</a>.<br />
<br />
<br />
First install the desired version, e.g. <i>nightly-2018-01-09</i><br />
<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><i>$ rustup install nightly-2018-01-09</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: syncing channel updates for 'nightly-2018-01-09-x86_64-pc-windows-msvc'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: latest update on 2018-01-09, rust version 1.25.0-nightly (b5392f545 2018-01-08)</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: downloading component 'rustc'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: downloading component 'rust-std'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: downloading component 'cargo'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: downloading component 'rust-docs'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: installing component 'rustc'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: installing component 'rust-std'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: installing component 'cargo'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: installing component 'rust-docs'</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i> nightly-2018-01-09-x86_64-pc-windows-msvc installed - rustc 1.25.0-nightly (b5392f545 2018-01-08)</i></span><br /><span style="font-family: "Courier New", Courier, monospace;"></span><br /><span style="font-family: "Courier New", Courier, monospace;"><i>info: checking for self-updates</i></span></span></blockquote>
<br />
Then override the default compiler with the desired one in the top directory of your choice:<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><i>$ <b>rustup override set nightly-2018-01-09</b></i><br /><i>info: using existing install for 'nightly-2018-01-09-x86_64-pc-windows-msvc'</i><br /><i>info: override toolchain for 'C:\usr\src\rust\sbenitez-cs140e' set to 'nightly-2018-01-09-x86_64-pc-windows-msvc'</i><br /><br /><i> nightly-2018-01-09-x86_64-pc-windows-msvc unchanged - rustc 1.25.0-nightly (b5392f545 2018-01-08)</i></span></span></blockquote>
That's it. <br />
<br />eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-63320400859082339732019-06-15T03:24:00.001+03:002019-06-15T03:24:08.472+03:00How to generate a usable map file for Rust code - and related (f)rustrations<h2>
Intro </h2>
<br />
Cargo does not produce a .map file by default, and when it does, mangling makes it barely usable. If you're searching for the TL;DR, read from "How to generate a map file" at the bottom of the article.<br />
<h2>
Motivation</h2>
As a person with experience in embedded programming I find it very useful to be able to look into the map file.<br />
<br />
Scenarios where looking at the map file is important:<br />
<ul>
<li>evaluate if the code changes you made had the desired size impact or no undesired impact - recently I saw a compiler optimize for speed an array's zero-initialization by putting long blocks of u8 arrays in the .rodata section</li>
<li>check if a particular symbol has landed in the appropriate memory section or region</li>
<li>make an initial evaluation of which functions/code could be changed to optimize either for code size or for more readability (if the size cost is acceptable)</li>
<li>check particular symbols have expected sizes and/or alignments </li>
</ul>
<h2>
Rustrations </h2>
Because these kinds of scenarios are quite frequent in my work and I am used to looking at the .map file, some "<a href="http://www.rustrations.club/" target="_blank">rustrations</a>" I currently face are:<br />
<ol>
<li>No map file is generated by default via cargo and information on how to do it is sparse</li>
<li>If generated, the symbols are mangled and it seems each symbol is in a section of its own, making per-section (e.g. .rodata, .text, .bss, .data) or per-file analysis more difficult than it should be</li>
<li>I haven't found a way to disable mangling globally without editing the Rust sources - I remember there is some tool to un-mangle the output map file, but I forgot its name, and I find the need to post-process suboptimal</li>
<li>No default map file name or location - ideally it should be named after the crate or app, as specified in the .toml file.</li>
</ol>
<h2>
How to generate a map file</h2>
<h3>
Generating map file for linux (and possibly other OSes)</h3>
Unfortunately, not all architectures/targets use the same linker, and on some the preferred linker could change for various reasons.<br />
<br />
Here is how I managed to generate a map file for an AMD64/x86_64 Linux target where it seems the linker is GLD:<br />
<br />
Create a .cargo/config file with the following content:<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">.cargo/config:</span> </blockquote>
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">[build]</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> rustflags = ["-Clink-args=-Wl,-Map=app.map"]</span></blockquote>
<br />
This should apply to all targets which use GLD as the linker, so I suspect this is not portable to Windows targets integrated with the MSVC compiler.<br />
<br />
<h3>
Generating a map file for thumbv7m with rust-lld</h3>
<br />
On baremetal targets such as Cortex-M7 (thumbv7m), where you might want to use the LLVM-based rust-lld, more linker options might be necessary to prevent linking with compiler-provided startup code or libraries, so the config would look something like this:<br />
<blockquote class="tr_bq">
<pre class="code highlight" lang="plaintext"><span style="font-family: "courier new" , "courier" , monospace;"><span class="line" id="LC1" lang="plaintext">.cargo/config:</span></span> </pre>
</blockquote>
<blockquote class="tr_bq">
<pre class="code highlight" lang="plaintext"><span style="font-family: "courier new" , "courier" , monospace;"><span class="line" id="LC1" lang="plaintext">[build]</span>
<span class="line" id="LC2" lang="plaintext">target = "thumbv7m-none-eabi"</span>
<span class="line" id="LC3" lang="plaintext">rustflags = ["-Clink-args=-Map=app.map"]</span></span></pre>
</blockquote>
The thing I dislike about this is the fact that the target is forced to thumbv7m-none-eabi, so unit tests or generic code which might run on the build computer would be harder to test.<br />
<br />
Note: if using rustc directly, just pass the extra options on the command line.<br />
<h2>
Map file generation with some readable symbols</h2>
After the changes above are done, you'll get an app.map file (even if the crate is a lib) with a predefined name. If anyone knows how to keep the crate name, or at least use lib.map for libs and app.map for apps when the original project name can't be used, please comment.<br />
<br />
The problems with the generated map file are that:<br />
<ol>
<li>all symbol names are mangled, so you can't easily connect back to the code; the alternative is to force the compiler to not mangle, by adding <span style="font-family: "Courier New", Courier, monospace;">#[no_mangle]</span> before the interesting symbols.</li>
<li>each symbol seems to be put in its own subsection (e.g. an initialized array in .data.&lt;some symbol name&gt;)</li>
</ol>
<h3>
Dealing with mangling</h3>
For problem 1, the fix is to add <span style="font-family: "Courier New", Courier, monospace;">#[no_mangle]</span> in the source to the symbols or functions of interest, like this:<br />
<br />
<blockquote class="tr_bq">
<b><span style="font-family: "Courier New", Courier, monospace;">#[no_mangle]</span></b><br />
<span style="font-family: "Courier New", Courier, monospace;">pub fn sing(start: i32, end: i32) -> String {</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> // code body follows</span><br />
<span style="font-family: "Courier New", Courier, monospace;">}</span></blockquote>
<h4>
Dealing with mangling globally</h4>
I wasn't able to find a way to convince cargo to apply no_mangle to the entire project, so if you know how to, please comment. I was thinking that using #![no_mangle] to apply the attribute globally in a file would work, but it doesn't seem to work as expected: the subsection still contains the mangled name, while the symbol seems to be "namespaced":<br />
<br />
Here is a section from the <span style="font-family: "Courier New", Courier, monospace;">#![no_mangle]</span> (global) version:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">.text._ZN9beer_song5verse17h0d94ba819eb8952aE<br /> 0x000000000004fa00 0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)<br /> 0x000000000004fa00 <b>beer_song::verse</b><br /> </span></span></blockquote>
When the #[no_mangle] attribute is attached directly to the function, the subsection is not mangled and the symbol seems to be global:<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;">.text.verse 0x000000000004f9c0 0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)<br /> 0x000000000004f9c0 <b>verse</b></span></blockquote>
I would prefer to have a global cargo option to switch this for the entire project, so code changes would not be needed; comments welcome.<br />
<h3>
Each symbol in its section</h3>
The second issue is quite annoying, even if the fact that each symbol is in its own section can be useful for controlling every symbol's placement via the linker script. I guess to fix this I need a custom linker script to redirect, say, all constant "subsections" into the ".rodata" section.<br />
<br />
I haven't tried this, but it should work.<br />
<br />eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-67418864231811443312018-05-22T23:35:00.000+03:002018-12-14T23:20:41.349+02:00rust for cortex-m7 baremetal<i>Update 14 December 2018: After the release of stable 1.31.0 (aka 2018 Edition), it is no longer necessary to switch to the nightly channel to get access to thumbv7em-none-eabi / Cortex-M4 and Cortex-M7 components. Updated examples and commands accordingly.</i><br />
<i>For more details on embedded development using Rust, the <a href="https://docs.rust-embedded.org/" target="_blank">official Rust embedded docs site</a> is the place to go, in particular, you can start with </i><i><a href="https://docs.rust-embedded.org/book/index.html">The embedded Rust
book</a>.</i><br />
<br />
<br />
This is a reminder for myself, if you want to install <a href="https://www.rust-lang.org/" target="_blank">Rust</a> for a baremetal Cortex-M7 target, this seems to be a tier 3 platform:<br />
<br />
<a href="https://forge.rust-lang.org/platform-support.html">https://forge.rust-lang.org/platform-support.html</a><br />
<br />
Highlighting the relevant part:<br />
<br />
<table><tbody>
</tbody><thead>
<tr><th>Target</th>
<th>std</th>
<th>rustc</th>
<th>cargo</th>
<th>notes</th>
</tr>
</thead>
<tbody>
<tr><td>...</td></tr>
<tr>
<td><code class="highlighter-rouge">msp430-none-elf</code></td>
<td>*</td>
<td></td>
<td></td>
<td>16-bit MSP430 microcontrollers</td>
</tr>
<tr>
<td><code class="highlighter-rouge">sparc64-unknown-netbsd</code></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>NetBSD/sparc64</td>
</tr>
<tr>
<td><code class="highlighter-rouge">thumbv6m-none-eabi</code></td>
<td>*</td>
<td></td>
<td></td>
<td>Bare Cortex-M0, M0+, M1</td>
</tr>
<tr>
<td><span style="color: red;"><code class="highlighter-rouge">thumbv7em-none-eabi</code></span></td>
<td><span style="color: red;">*</span></td>
<td><br /></td>
<td><br /></td>
<td><span style="color: red;">Bare Cortex-M4, M7</span></td>
</tr>
<tr>
<td><code class="highlighter-rouge">thumbv7em-none-eabihf</code></td>
<td>*</td>
<td></td>
<td></td>
<td>Bare Cortex-M4F, M7F, FPU, hardfloat</td>
</tr>
<tr>
<td><code class="highlighter-rouge">thumbv7m-none-eabi</code></td>
<td>*</td>
<td></td>
<td></td>
<td>Bare Cortex-M3</td>
</tr>
<tr>
<td>...
</td></tr>
<tr>
<td><code class="highlighter-rouge">x86_64-unknown-openbsd</code></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>64-bit OpenBSD</td></tr>
</tbody></table>
<br />
In order to enable the relevant support, <strike>use the nightly build and</strike> use stable >= 1.31.0 and add the relevant target:<br />
<blockquote class="tr_bq">
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~/usr/src/rust-uc$ rustup show</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">Default host: x86_64-unknown-linux-gnu</span></span></strike><br />
<strike><br /></strike>
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">installed toolchains</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">--------------------</span></span></strike><br />
<strike><br /></strike>
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">stable-x86_64-unknown-linux-gnu</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">nightly-x86_64-unknown-linux-gnu (default)</span></span></strike><br />
<strike><br /></strike>
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">active toolchain</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">----------------</span></span></strike><br />
<strike><br /></strike>
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">nightly-x86_64-unknown-linux-gnu (default)</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">rustc 1.28.0-nightly (cb20f68d0 2018-05-21)</span></span></strike></blockquote>
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~/usr/src/rust$ rustup show</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">Default host: x86_64-unknown-linux-gnu</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">stable-x86_64-unknown-linux-gnu (default)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">rustc 1.31.0 (abe02cefd 2018-12-04)</span></span></blockquote>
<br />
<strike>If not using nightly, switch to that:</strike><br />
<strike><br /></strike>
<br />
<blockquote class="tr_bq">
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~/usr/src/rust-uc$ rustup default nightly-x86_64-unknown-linux-gnu</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">info: using existing install for 'nightly-x86_64-unknown-linux-gnu'</span></span></strike><br />
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'</span></span></strike><br />
<strike><br /></strike>
<strike><span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"> nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.28.0-nightly (cb20f68d0 2018-05-21)</span></span></strike><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"></span></span></blockquote>
Add the needed target:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"></span></span></blockquote>
<blockquote>
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~/usr/src/rust$ rustup target add thumbv7em-none-eabi<br />info: downloading component 'rust-std' for 'thumbv7em-none-eabi'<br />info: installing component 'rust-std' for 'thumbv7em-none-eabi'<br />eddy@feodora:~/usr/src/rust$ rustup show<br />Default host: x86_64-unknown-linux-gnu<br /><br />installed targets for active toolchain<br />--------------------------------------<br /><br />thumbv7em-none-eabi<br />x86_64-unknown-linux-gnu<br /><br />active toolchain<br />----------------<br /><br />stable-x86_64-unknown-linux-gnu (default)<br />rustc 1.31.0 (abe02cefd 2018-12-04)</span></span></blockquote>
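With the target installed, the triple can also be pinned per project in a <span style="font-family: "courier new" , "courier" , monospace;">.cargo/config</span> file, so that a plain <span style="font-family: "courier new" , "courier" , monospace;">cargo build</span> cross-compiles by default (a sketch, not from the original post):

```toml
# .cargo/config in the project root
[build]
target = "thumbv7em-none-eabi"
```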
Then compile with <span style="font-family: "courier new" , "courier" , monospace;">cargo build --target thumbv7em-none-eabi</span>.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-7813698028033319262018-05-10T02:20:00.001+03:002018-05-10T02:28:59.511+03:00"Where does Unity store its launch bar items?" or "Convincing Ubuntu's Unity 7.4.5 to run the newer version of PyCharm when starting from the launcher"I have been driving a System76 Oryx-Pro for some time now. And I am running Ubuntu 16.04 on it.<br />
I typically try to avoid polluting global name spaces, so any apps I install from source I tend to install under a versioned directory under <span style="font-family: "courier new" , "courier" , monospace;">~/opt</span>, for instance, PyCharm Community Edition 2016.3.1 is installed under <span style="font-family: "courier new" , "courier" , monospace;">~/opt/pycharm-community-2016.3.1</span>.<br />
<br />
Today, after Pycharm suggested I install a newer version, I downloaded the current package, and ran it, as instructed in the embedded readme.txt, via the wrapper script:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">~/opt/pycharm-community-2018.1.2/bin$ ./pycharm.sh</span></blockquote>
Everything looked OK, but when wanting to lock the icon on the launch bar I realized Unity did not display a separate PyCharm Community Edition icon for the 2018.1.2 version, but showed the existing icon as active.<br />
<br />
"I guess it's the same filename, so maybe Unity confuses the older version with the new one, so I have to replace the launcher to use the newer version by default", I said.<br />
<br />
So I closed the interface, then removed the PyCharm Community Edition icon, then I restarted the newer PyCharm from the command line, then locked the icon, then I closed PyCharm once more, then clicked on the launcher bar.<br />
<br />
Surprise! Unity was launching the old version! What?!<br />
<br />
Repeated the entire series of steps, suspecting some <a href="https://www.urbandictionary.com/define.php?term=pebkac" target="_blank">PEBKAC</a>, but was surprised to see the same result.<br />
<br />
"Damn! Unity is stupid! I guess is a good thing they decided to kill it!", I said to myself.<br />
<br />
Well, it shouldn't be that hard to find the offending item, so I started to grep in <span style="font-family: "courier new" , "courier" , monospace;">~/.config</span>, then in <span style="font-family: "courier new" , "courier" , monospace;">~/.*</span> for the string "<span style="font-family: "courier new" , "courier" , monospace;">PyCharm Community Edition</span>" without success.<br />
Hmm, I guess the Linux world copied a lot of bad ideas from the Windows world; the configs are probably not stored under <span style="font-family: "courier new" , "courier" , monospace;">~/.*</span> in plain text, but in that simulacrum of a Windows registry called dconf. So I installed dconf-editor and searched once more for the keyword "<span style="font-family: "courier new" , "courier" , monospace;">Community</span>", but only found one entry, in the gedit filebrowser context.<br />
<br />
So where does Unity get its launch bar items from? Since there is no "Properties" entry in the context menu, and since I didn't want to debug the startup of my graphical environment but Unity is open source, I had to look at the sources.<br />
<br />
After some fighting with dead links to unity.ubuntu.com subpages, then searching for "<span style="font-family: "courier new" , "courier" , monospace;">git repository Ubuntu Unity</span>", I realized Ubuntu loves Bazaar, so I searched for "<span style="font-family: "courier new" , "courier" , monospace;">bzr Ubuntu Unity repository</span>"; no luck. Luckily, Wikipedia usually has those kinds of links, and there I found the damn thing.<br />
<br />
<i>BTW, am I the only one considering some strong hits with a clue bat for the developers who name projects after some generic term that has no chance to override the typical meaning in common parlance, such as "Unity" or "Repo"?</i><br />
<br />
Finding the sources and looking a little at the repository did not make it clear which was the entry point. I was expecting at least the <a href="https://bazaar.launchpad.net/~unity-team/unity/trunk/view/head:/README" target="_blank">README</a> or the <a href="https://bazaar.launchpad.net/~unity-team/unity/trunk/view/head:/INSTALL" target="_blank">INSTALL</a> file to give some relevant hints on the config or the initialization. My patience was running dry.<br />
<br />
Maybe looking on my own system would be a better approach?<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~$ which unity</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">/usr/bin/unity</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~$ ll $(which unity)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">-rwxr-xr-x 1 root root 9907 feb 21 21:38 /usr/bin/unity*</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">eddy@feodora:~$ ^ll^file</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">file $(which unity)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">/usr/bin/unity: Python script, ASCII text executable</span></span></blockquote>
BINGO! This looks like an executable Python script, not a binary, in spite of the <a href="https://bazaar.launchpad.net/~unity-team/unity/trunk/files/head:/launcher/" target="_blank">many .cpp sources in the Unity tree</a>.<br />
<br />
I opened the file with less, then found this interesting bit:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> def <span style="background-color: yellow;">reset_launcher_icons</span> ():<br /> '''Reset the default launcher icon and restart it.'''<br /> subprocess.Popen(["<span style="background-color: yellow;">gsettings</span>", "reset" ,"<span style="background-color: yellow;">com.canonical.Unity.Launcher</span>" , "favorites"])</span></blockquote>
Great! So it stores that stuff in the pseudo-registry, but I have to look under <span style="font-family: "Courier New", Courier, monospace;">com.canonical.Unity.Launcher.favorites</span>. Firing up dconf-editor again, I found the relevant bit in the value of that key:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">'application://jetbrains-pycharm-ce.desktop'</span></span></blockquote>
So where is this .desktop file? I guess using <span style="font-family: "Courier New", Courier, monospace;">find</span> is going to bring it up:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">find /home/eddy/.local/ -name 'jetbrains*' -exec vi {} \;</span></span></blockquote>
It did, and the content made it obvious what was happening:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">[Desktop Entry]</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Version=1.0</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Type=Application</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Name=PyCharm Community Edition</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"><span style="background-color: yellow;">Icon=/home/eddy/opt/pycharm-community-2016.3.1/bin/pycharm.png<br />Exec="/home/eddy/opt/pycharm-community-2016.3.1/bin/pycharm.sh" %f</span></span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Comment=The Drive to Develop</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Categories=Development;IDE;</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Terminal=false</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">StartupWMClass=jetbrains-pycharm-ce</span></blockquote>
Probably Unity did not create a new desktop file when locking the icon; it simply checked whether the jetbrains-pycharm-ce.desktop file already existed in my ~/.local directory, saw that it did, and skipped recreating it.<br />
<br />
Just as somebody said, all difficult computer science problems are caused by either leaky abstractions or caching. I guess here we're having some sort of caching issue, but it is easy to fix, just edit the file:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">eddy@feodora:~$ cat /home/eddy/.local/share/applications/jetbrains-pycharm-ce.desktop</span><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">[Desktop Entry]</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Version=1.0</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Type=Application</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Name=PyCharm Community Edition</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Icon=/home/eddy/opt/pycharm-community-<span style="background-color: yellow;">2018.1.2</span>/bin/pycharm.png</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Exec="/home/eddy/opt/pycharm-community-<span style="background-color: yellow;">2018.1.2</span>/bin/pycharm.sh" %f</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Comment=The Drive to Develop</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Categories=Development;IDE;</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">Terminal=false</span><br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">StartupWMClass=jetbrains-pycharm-ce</span></blockquote>
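Instead of editing the file by hand, the stale version string can be rewritten in one pass with GNU sed (a sketch; paths and version numbers match the example above):

```shell
#!/bin/sh
# Rewrite the old PyCharm version wherever it appears in the stale
# launcher entry (the Icon= and Exec= lines), if the file exists.
desktop="$HOME/.local/share/applications/jetbrains-pycharm-ce.desktop"
if [ -f "$desktop" ]; then
    sed -i 's/pycharm-community-2016\.3\.1/pycharm-community-2018.1.2/g' "$desktop"
fi
```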
I checked the startup again, and now the expected splash screen appears. GREAT!<br />
<br />
I wonder if this is a Unity issue or is it due to some broken library that could affect other desktop environments such as MATE, GNOME or XFCE?<br />
<br />
"Only" lost 2 hours (including this post) with this stupid bug, so I can go back to what I was trying in the first place, but now it is already too late, so I have to go to sleep.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com4tag:blogger.com,1999:blog-1134723925265805427.post-5346223305003684822018-01-26T16:57:00.000+02:002018-01-26T16:57:14.533+02:00Detecting binary files in the history of a git repository<h3>
Git, VCSes and binary files</h3>
Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, since such files cannot be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being <a href="https://git-lfs.github.com/" target="_blank">git-lfs</a> and <a href="https://git-annex.branchable.com/" target="_blank">git-annex</a>.<br />
<br />
My personal view is that, contrary to common practice, it is a bad idea to store binaries in any VCS. Still, this practice has been, and still is, in use in many projects, especially closed source ones. I won't go into the reasons and how legitimate they are; let's say that we might finally convince people that binaries should be removed from the VCS, git in particular.<br />
<br />
Since the purpose of a VCS is to make sure no version of the stored objects is ever lost, Linus designed git in such a way that, knowing the exact hash of the tip/head of your git branch, it is guaranteed that the whole history of that branch hasn't changed, even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).<br />
<br />
The consequence of this is that if the history is changed one bit, all commit hashes and history after that change will change also. This is what people refer to when they say they rewrite the (git) history, most often, in the context of a rebase.<br />
<br />
But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?<br />
<h3>
Detecting any binary files, only in the current commit</h3>
As with everything on *nix, we start with some building blocks, and construct our solution on top of them. Let's first find all files, except the ones in .git:<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">find . -type f -print | grep -v '^\.\/\.git\/'</span></blockquote>
Then we can use the 'file' utility to look for non-text files:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text'</span></blockquote>
And if there are any such files, then it means the current git commit is one that needs our attention; otherwise, we're fine.<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK</span></blockquote>
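The <span style="font-family: "Courier New", Courier, monospace;">xargs file</span> step above breaks on file names containing spaces; a null-delimited variant is more robust (a sketch assuming GNU findutils):

```shell
#!/bin/sh
# Same check as the pipeline above, but file names are passed
# null-delimited so paths with spaces survive, and the .git
# directory is pruned by find instead of filtered with grep.
if find . -path ./.git -prune -o -type f -print0 \
     | xargs -0 -r file \
     | egrep -v '(ASCII|Unicode) text'
then
    echo 'ERROR: non-text files present'
else
    echo OK
fi
```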
Of course, we assume here that the work tree is clean.<br />
<h3>
Checking all commits in a branch</h3>
Since we want to make this an efficient process and we only care if the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized.<br />
Making a new branch for some experiments is also a good idea to avoid losing the history, in case we make some stupid mistakes during our experiment.<br />
<br />
Hence, we first create a new branch which points to the exact same tip the branch to be checked points to, and move to it:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">git checkout -b test_bins</span></blockquote>
Git has many commands that facilitate automation, and in my case I want to basically run the chain of commands on all commits. For this we can put our chain of commands in a script:<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">cat > ../check_file_text.sh</span></blockquote>
<blockquote>
<span style="font-family: "Courier New", Courier, monospace;">#!/bin/sh</span><br /><span style="font-family: "Courier New", Courier, monospace;"></span><br /><span style="font-family: "Courier New", Courier, monospace;">(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK</span><br /><span style="font-family: "Courier New", Courier, monospace;"></span></blockquote>
then (ab)use 'git rebase' to execute that for us for all commits:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">git rebase --exec="sh ../check_file_text.sh" -i $startcommit</span></blockquote>
After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will check all commits since then.<br />
<br />
Here is an example output when checking the newest 5 commits:<br />
<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;">$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5</span><br /><span style="font-family: "Courier New", Courier, monospace;">Executing: sh ../check_file_text.sh</span><br /><span style="font-family: "Courier New", Courier, monospace;">OK</span><br /><span style="font-family: "Courier New", Courier, monospace;">Executing: sh ../check_file_text.sh</span><br /><span style="font-family: "Courier New", Courier, monospace;">OK</span><br /><span style="font-family: "Courier New", Courier, monospace;">Executing: sh ../check_file_text.sh</span><br /><span style="font-family: "Courier New", Courier, monospace;">OK</span><br /><span style="font-family: "Courier New", Courier, monospace;">Executing: sh ../check_file_text.sh</span><br /><span style="font-family: "Courier New", Courier, monospace;">OK</span><br /><span style="font-family: "Courier New", Courier, monospace;">Executing: sh ../check_file_text.sh</span><br /><span style="font-family: "Courier New", Courier, monospace;">OK</span><br /><span style="font-family: "Courier New", Courier, monospace;">Successfully rebased and updated refs/heads/test_bins.</span><br /><span style="font-family: "Courier New", Courier, monospace;"></span></blockquote>
<br />
Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New", Courier, monospace;"><br /></span>
<span style="font-family: "Courier New", Courier, monospace;">$ git co master<br />Switched to branch 'master'</span><br />
<span style="font-family: "Courier New", Courier, monospace;">Your branch is up-to-date with 'origin/master' </span></blockquote>
<blockquote>
<span style="font-family: "Courier New", Courier, monospace;">$ git branch -D test_bins</span><br /><span style="font-family: "Courier New", Courier, monospace;">Deleted branch test_bins (was 6358b91).</span></blockquote>
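As an alternative that does not touch any branch at all, <span style="font-family: "Courier New", Courier, monospace;">git log --numstat</span> reports binary blobs with "-" in the added/removed columns, so the whole history can be scanned read-only (a sketch, not the method used in this post):

```shell
#!/bin/sh
# --numstat prints "<added>TAB<removed>TAB<path>" per file and commit;
# both counts are "-" for binary files. --format= suppresses the
# commit headers so only the numstat lines remain.
git log --all --numstat --format= \
    | awk -F'\t' '$1 == "-" && $2 == "-" { print $3 }' \
    | sort -u
```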
Enjoy!eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com3tag:blogger.com,1999:blog-1134723925265805427.post-23039995553124909812018-01-11T17:22:00.000+02:002018-01-19T08:51:11.723+02:00Suppressing color output of the Google Repo toolOn Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its <a href="https://github.com/esrlabs/git-repo" target="_blank">Windows port made by ESRLabs</a>) or git appear as garbage. Unfortunately, the Google Repo tool, besides the fact it has a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look at the code.<br />
To avoid having to repeatedly dig through the code for this, future self, here is how you disable color output in the repo tool, e.g. for the info subcommand:<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">repo --color=never info</span></blockquote>
Other options are 'auto' and 'always', but, for some reason, 'auto' does not do the right thing (tm) on Windows and garbage is still shown. eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-53598062907587351882017-03-25T17:39:00.002+02:002017-03-25T17:39:50.891+02:00LVM: Converting root partition from linear to raid1 leads to boot failure... and how to recoverI have a system which has 3 distinct HDDs used as physical volumes for Linux LVM. One logical volume is the root partition and it was initially created as a linear LV (vg0/OS).<br />
Since I have PV redundancy, I thought it might be a good idea to convert the root LV from linear to raid1 with 2 mirrors.<br />
<br />
<span style="color: red;">WARNING: It seems an LVM raid1 logical volume for / is not supported with grub2, at least not with Ubuntu's </span><span style="color: red;">2.02~beta2-9ubuntu1.6 (14.04LTS) or Debian Jessie's grub-pc </span><span style="color: red;">2.02~beta2-22+deb8u1!</span><br />
<br />
So I did this:<br />
<span style="font-family: "Courier New",Courier,monospace;">lvconvert -m2 --type raid1 vg0/OS</span><br />
<br />
<span style="font-family: inherit;">Then I restarted to find myself at the 'grub rescue>' prompt.</span><br />
<br />
<span style="font-family: inherit;">The initial problem was seen on an Ubuntu 14.04 LTS (aka trusty) system, but I reproduced it on a VM with Debian Jessie.</span><br />
<br />
<span style="font-family: inherit;">I downloaded the <a href="http://www.supergrubdisk.org/wizard-step-download-super-grub2-disk/" target="_blank">Super Grub2 Disk</a> and tried to boot the VM. After choosing the option to load the LVM and RAID support, I was able to boot my previous system.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">I tried several times to reinstall GRUB, thinking that was the issue, but I always got this kind of error:</span><br />
<span style="font-family: inherit;"><br /></span>
<br /><span style="font-family: "Courier New",Courier,monospace;">/usr/sbin/grub-probe: error: disk
`lvmid/QtJiw0-wsDf-A2zh-2v2y-7JVA-NhPQ-TfjQlN/phCDlj-1XAM-VZnl-RzRy-g3kf-eeUB-dBcgmb'
not found.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">In the end, after digging for answers for more than 4 hours, I decided I might be able to revert the LV to the linear configuration from the (initramfs) prompt.</span><br />
<br />
<span style="font-family: inherit;">Initially the LV was inactive, so I activated it:</span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">lvchange -a y /dev/vg0/OS</span><br />
<br />
<b><span style="font-family: inherit;">Then restored the LV to linear:</span></b><br />
<b><span style="font-family: inherit;"><br /></span></b>
<b><span style="font-family: "Courier New",Courier,monospace;">lvconvert -m0 vg0/OS</span></b><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Then I tried to reboot without reinstalling GRUB, just for kicks, which succeeded.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">In order to confirm this was the issue, I redid the whole thing, and indeed, with a raid1 root, I always got the lvmid error.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">I'll have to check on Monday at work if I can revert it the same way on the Ubuntu 14.04 system, but I suspect I will have no issues.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Is it true that root on lvm-raid1 is not supported?</span>eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com2tag:blogger.com,1999:blog-1134723925265805427.post-2880034207068209932015-12-10T20:22:00.001+02:002015-12-10T20:22:30.459+02:00HOWTO: Setting and inserting/using MS Word 2013 document properties in the body of the documentI wrote this so I won't forget it and for others to find, if confronted with the same issue. <br />
<br />
I hate Microsoft Office in all its incarnations, but I have to use it at work for various stuff. One of those is maintaining some technical documentation. We now use Office 365 and Office 2013.<br />
<br />Since MS Office Word 2013 is not a technical documentation program, some of its support for this is clunky. For things such as version numbers or other strings that might repeat throughout the document, (advanced) document properties are the way to go.<br />
<br />
<b>To set them select<span style="font-family: "Courier New",Courier,monospace;"> File > Info > Properties > Advanced Properties > Custom</span> then fill in the '<span style="font-family: "Courier New",Courier,monospace;">Name:</span>', '<span style="font-family: "Courier New",Courier,monospace;">Type:</span>' and '<span style="font-family: "Courier New",Courier,monospace;">Value:</span>', then press <span style="font-family: "Courier New",Courier,monospace;">Add</span>, then <span style="font-family: "Courier New",Courier,monospace;">OK</span>.</b><br />
<br />
<b>Once the properties are set, it can be inserted in the document by selecting its name in the '<span style="font-family: "Courier New",Courier,monospace;">Property:</span>' list from the menu: <span style="font-family: "Courier New",Courier,monospace;">INSERT > Quick Parts > Field... > Categories:Document Information > DocProperty</span>.</b><br />
<br />
After updating the value of any property (from the Advanced Properties dialog), to update all the places where the properties were used in the document, press Ctrl+A, then right click > <span style="font-family: "Courier New",Courier,monospace;">Update Field > Update entire table > OK</span>.<br />
<br />
And, yes, '<span style="font-family: "Courier New",Courier,monospace;">Update entire table</span>' will update the values, although its name is stupid. eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-6760735703313486052015-05-23T17:44:00.002+03:002015-05-23T17:44:40.914+03:00HOWTO: No SSH logins SFTP only chrooted server configuration with OpenSSHIf you are in a situation where you want to set up an SFTP server in a more secure way, don't want to expose anything from the server via SFTP and do not want to enable SSH login on the account allowed to sftp, you might find the information below useful.<br />
<br />
What do we want to achieve:<br />
<ul>
<li>SFTP server</li>
<li>only a specified account is allowed to connect to SFTP</li>
<li>nothing outside the SFTP directory is exposed</li>
<li>no SSH login is allowed</li>
<li>any extra security measures are welcome</li>
</ul>
To obtain all of the above, we will create a dedicated account which will be chroot-ed, and whose home directory will be stored on a removable/not always mounted drive (accessing SFTP will not work when the drive is not mounted).<br />
<br />
Mount the removable drive which will hold the SFTP area (you might need to add some entry in fstab). <br />
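For reference, here is a sketch of the kind of /etc/fstab entry that makes such a drive mountable on demand but not mounted at boot; the device identifier, mount point and filesystem type below are illustrative assumptions, not taken from this setup:

```text
# /etc/fstab -- the SFTP drive is mounted manually, not at boot (noauto)
UUID=1234-EXAMPLE  /media/Store  ext4  noauto,nodev,nosuid  0  2
```

With such an entry, 'mount /media/Store' attaches the drive when needed, and while it stays unmounted the sftp account's home does not exist, so SFTP access fails as intended.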
<br />
Create the account to be used for SFTP access (on a Debian system this will do the trick):<br />
<blockquote class="tr_bq">
# adduser --system --home /media/Store/sftp --shell /usr/sbin/nologin sftp</blockquote>
<br />
This will create the sftp account with login disabled (the shell is /usr/sbin/nologin) and create the home directory for this user.<br />
<br />
Unfortunately the default ownership of the home directory of this user is incompatible with chroot-ing in SFTP (which prevents access to other files on the server). A message like the one below will be generated in this case:<br />
<blockquote class="tr_bq">
$ sftp -v sftp@localhost<br />
[..]<br />
sftp@localhost's password: <br />
debug1: Authentication succeeded (password).<br />
Authenticated to localhost ([::1]:22).<br />
debug1: channel 0: new [client-session]<br />
debug1: Requesting no-more-sessions@openssh.com<br />
debug1: Entering interactive session.<br />
<b>Write failed: Broken pipe</b><br />
<b>Couldn't read packet: Connection reset by peer</b></blockquote>
Also <b>/var/log/auth.log</b> will contain something like this:<br />
<blockquote class="tr_bq">
<b>fatal: bad ownership or modes for chroot directory "/media/Store/sftp"</b></blockquote>
<br />
The default permissions are visible using the 'namei -l' command on the sftp home directory:<br />
<blockquote class="tr_bq">
# namei -l /media/Store/sftp<br />
f: /media/Store/sftp<br />
drwxr-xr-x root root /<br />
drwxr-xr-x root root media<br />
drwxr-xr-x root root Store<br />
drwxr-xr-x sftp nogroup sftp</blockquote>
We change the ownership of the sftp directory and make sure there is a place for files to be uploaded in the SFTP area:<br />
<blockquote class="tr_bq">
# chown root:root /media/Store/sftp<br />
# mkdir /media/Store/sftp/upload<br />
# chown sftp /media/Store/sftp/upload</blockquote>
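To double check the result, here is a small sketch (assuming GNU coreutils' stat) of the kind of per-component check sshd performs on a ChrootDirectory path: every directory up to the root must be owned by root and writable only by its owner.

```shell
#!/bin/sh
# Walk each component of a chroot path roughly the way sshd checks it:
# a directory owned by someone other than root, or writable by group or
# others, triggers the "bad ownership or modes" fatal error.
audit_chroot() {
  p=$1
  while :; do
    owner=$(stat -c %U "$p")
    mode=$(stat -c %a "$p")
    case $mode in *[2367]) echo "BAD: $p is other-writable ($mode)";; esac
    case $mode in *[2367][0-7]) echo "BAD: $p is group-writable ($mode)";; esac
    [ "$owner" = root ] || echo "BAD: $p is owned by $owner, not root"
    [ "$p" = / ] && break
    p=$(dirname "$p")
  done
}

# Example: audit the chroot used in this article, if present
[ -d /media/Store/sftp ] && audit_chroot /media/Store/sftp || true
```

Empty output for the chroot path and all its parents means the ownership part of the sshd check should pass.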
<br />
We isolate the sftp users from other users on the system and configure a chroot-ed environment for all users accessing the SFTP server:<br />
<blockquote class="tr_bq">
# addgroup sftpusers<br />
# adduser sftp sftpusers</blockquote>
Set a password for the sftp user so password authentication works:<br />
<blockquote class="tr_bq">
# passwd sftp</blockquote>
Putting all pieces together, we restrict access only to the sftp user, allow it access via password authentication only to SFTP, but not SSH (and disallow tunneling and forwarding or empty passwords).<br />
<br />
Here are the changes done in /etc/ssh/sshd_config:<br />
<blockquote class="tr_bq">
PermitEmptyPasswords no<br />
PasswordAuthentication yes<br />
AllowUsers sftp<br />
Subsystem sftp internal-sftp<br />
Match Group sftpusers<br />
ChrootDirectory %h<br />
ForceCommand internal-sftp<br />
X11Forwarding no<br />
AllowTcpForwarding no <br />
PermitTunnel no</blockquote>
Reload the sshd configuration (I'm using systemd):<br />
<blockquote class="tr_bq">
# systemctl reload ssh.service</blockquote>
Check sftp user can't login via SSH:<br />
<blockquote class="tr_bq">
$ ssh sftp@localhost<br />
sftp@localhost's password: <br />
This service allows sftp connections only.<br />
Connection to localhost closed.</blockquote>
But SFTP is working and is restricted to the SFTP area:<br />
<blockquote class="tr_bq">
$ sftp sftp@localhost<br />
sftp@localhost's password: <br />
Connected to localhost.<br />
sftp> ls<br />
upload <br />
sftp> pwd<br />
Remote working directory: /<br />
sftp> put netbsd-nfs.bin<br />
Uploading netbsd-nfs.bin to /netbsd-nfs.bin<br />
remote open("/netbsd-nfs.bin"): Permission denied<br />
sftp> cd upload<br />
sftp> put netbsd-nfs.bin<br />
Uploading netbsd-nfs.bin to /upload/netbsd-nfs.bin<br />
netbsd-nfs.bin 100% 3111KB 3.0MB/s 00:00 </blockquote>
Now your system is ready to accept sftp connections, things can be uploaded in the upload directory and whenever the external drive is unmounted, SFTP will NOT work.<br />
<br />
<b>Note: Since we added 'AllowUsers sftp', you can check that no other local user can log in via SSH. If you don't want to restrict access only to the sftp user, you can whitelist other users by adding them to the AllowUsers directive, or by dropping it entirely so all local users can SSH into the system.</b>eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com1tag:blogger.com,1999:blog-1134723925265805427.post-1011150812518392722015-05-20T04:07:00.002+03:002015-05-23T18:15:40.600+03:00Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 2 - RedBoot reverse engineering and APEX hacking(continuation of <a href="http://ramblingfoo.blogspot.ro/2015/05/linksys-nslu2-adventures-into-netbsd.html" target="_blank">Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1</a>; meanwhile, my article was mentioned briefly in <a href="http://www.bsdnow.tv/episodes/2015_05_13-exclusive_disjunction" target="_blank">BSDNow Episode 89 - Exclusive Disjunction</a> around minute <a href="https://www.youtube.com/watch?v=f1Owgoj9TXc#t=36m25s" target="_blank">36:25</a>)<br />
<br />
<h4>
Choosing to call RedBoot from a hacked Apex</h4>
<br />
As I was saying in my previous post, in order to be able to automate the booting of the NetBSD image via TFTP, I opted for using a 2nd stage bootloader (planning to flash it in the NSLU2 instead of a Linux kernel), and since Debian was already using Apex, I chose Apex, too.<br />
<br />
The first problem I found was that the networking support in Apex was relying on an old version of the Intel NPE library which I couldn't find on Intel's site. The new version was incompatible/not building with the old build wrapper in Apex, so I was faced with 3 options:<br />
<ol>
<li>Fight with the available Intel code and try to force it to compile in Apex</li>
<li>Incorporate the NPE driver from NetBSD into a rump kernel to be included in Apex instead of the original Intel code, since the NetBSD driver only needed an easily compilable binary blob</li>
<li>Hack together an Apex version that simulates typing the necessary RedBoot commands to load the NetBSD image via TFTP and execute it.</li>
</ol>
After taking a look at the NPE driver build system, I concluded there were very few options less attractive than option 1, among which was hammering nails through my forehead as an improvement measure against the severe brain damage I would likely be inflicted with after dealing with the NPE "build system".<br />
<br />
Option 2 looked like the best option I could have, given the situation, but my NetBSD-fu was too close to 0 to even dream of endeavoring on such a task. In my opinion, this still remains the technically superior solution to the problem, since it is a very portable and flexible way to ensure networking works in spite of the proprietary NPE code.<br />
<br />
But, in practice, the best option I could implement at the time was option 3. I initially planned to pre-fill from Apex my desired commands into the RedBoot buffer that stored the keyboard strokes typed by the user:<br />
<blockquote class="tr_bq">
<br />
load -r -b 0x200000 -h 192.168.0.2 netbsd-nfs.bin<br />
g</blockquote>
Since this was the first time ever I was going to do less-than-trivial reverse engineering in order to find the addresses and signatures of interesting functions in the RedBoot code, it wasn't bad at all that I had a version of the RedBoot source code.<br />
<br />
<h4>
When stuck with reverse engineering, apply JTAG</h4>
<br />
The bad thing was that the code Linksys published as the source of the RedBoot running inside the NSLU2 was, in fact, a different code which had some significant changes around the code pieces I was mostly interested in. That in spite of the GPL terms.<br />
<br />
But I thought that I could manage. After all, how hard could it be to identify the 2-3 functions I was interested in and 1 buffer? Even if I only had the disassembled code from the slug, it shouldn't be that hard.<br />
<br />
I struggled with this for about 2-3 weeks on the few occasions I had during that time, but the excitement of learning something new kept me going, until I got stuck somewhere between the misalignment between the published RedBoot code and the disassembled code, the state of the system at the time of dumping the contents from RAM (for purposes of disassembly), the assembly code generated by GCC for some specific C code I didn't have at all, and the particularities of ARM assembly.<br />
<br />
What was most likely to unblock me was to actually see the code in action, so I decided that attaching a JTAG dongle to the slug and doing a session of in-circuit debugging was in order.<br />
<br />
Luckily, the <a href="http://www.nslu2-linux.org/wiki/Info/PinoutOfJTAGPort" target="_blank">pinout of the JTAG interface</a> was already identified in the NSLU2 Linux project, so I only had to solder some wires to the specified places and a 2x20 header to be able to connect through JTAG to the board.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-Bd-8ATPRiNc/VVvEVzfL5SI/AAAAAAAAAjw/bu4vrmZ022I/s1600/kinder-jtag.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="" border="0" height="400" src="http://4.bp.blogspot.com/-Bd-8ATPRiNc/VVvEVzfL5SI/AAAAAAAAAjw/bu4vrmZ022I/s400/kinder-jtag.jpg" title="JTAG connections on Kinder" width="276" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">JTAG connections on Kinder (the NSLU2 targeting NetBSD)</td></tr>
</tbody></table>
<br />
After this was done, I immediately tried to see whether I could break the execution of the code on the system when using a JTAG debugger. The answer was, sadly, no.<br />
<br />
The chip was identified, but breaking the execution was not happening. I tried this in OpenOCD and in another proprietary debugger application I had access to, and the result was the same, breaking <a href="http://pastebin.com/fFLiwdrF" target="_blank">was not happening</a>.<br />
<div id="selectable">
<div class="text">
<blockquote class="tr_bq">
<div class="de1">
$ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg -f board/linksys_nslu2.cfg</div>
<div class="de2">
Open On-Chip Debugger 0.8.0 (2015-04-14-09:12)</div>
<div class="de1">
Licensed under GNU GPL v2</div>
<div class="de2">
For bug reports, read</div>
<div class="de1">
http://openocd.sourceforge.net/doc/doxygen/bugs.html</div>
<div class="de2">
</div>
<div class="de1">
Info : only one transport option; autoselect 'jtag'</div>
<div class="de2">
adapter speed: 300 kHz</div>
<div class="de1">
Info : ixp42x.cpu: hardware has 2 breakpoints and 2 watchpoints</div>
<div class="de2">
0</div>
<div class="de1">
Info : clock speed 300 kHz</div>
<div class="de2">
Info : JTAG tap: ixp42x.cpu tap/device found: 0x29277013 (mfg: 0x009,</div>
<div class="de1">
part: 0x9277, ver: 0x2)</div>
<div class="de1">
[..]</div>
</blockquote>
<br />
<blockquote class="tr_bq">
<div class="de2">
$ telnet localhost 4444</div>
<div class="de1">
Trying ::1...</div>
<div class="de2">
Trying 127.0.0.1...</div>
<div class="de1">
Connected to localhost.</div>
<div class="de2">
Escape character is '^]'.</div>
<div class="de1">
Open On-Chip Debugger</div>
<div class="de2">
> halt</div>
<div class="de1">
target was in unknown state when halt was requested</div>
<div class="de2">
in procedure 'halt'</div>
<div class="de1">
> poll</div>
<div class="de2">
background polling: on</div>
<div class="de1">
TAP: ixp42x.cpu (enabled)</div>
<div class="de2">
target state: unknown </div>
</blockquote>
</div>
</div>
Looking into the documentation I found a bit of information on the XScale processors[X] which suggested that XScale processors might need the (otherwise optional) SRST signal on the JTAG interface to be able to single step the chip.<br />
<br />
This confused me a lot since I was sure other people had already used JTAG on the NSLU2.<br />
<br />
The options I saw at the time were:<br />
<ol>
<li>my NSLU2 did not have a fully working JTAG interface (either due to the missing SRST signal on the interface or maybe due to a JTAG lock on later generation NSLU2-s, as was my second slug)</li>
<li>nobody ever single stepped the slug using OpenOCD or other JTAG debugger, they only reflashed, and I was on totally new ground </li>
</ol>
I even contacted Rod Whitby, the project leader of the NSLU2 project to try to confirm single stepping was done before. Rod told me he never did that and he only reflashed the device.<br />
<br />
This confused me even further because, from what I had encountered on other platforms, in order to flash some device, the code responsible for programming the flash is loaded into the RAM of the target microcontroller and executed on the target after a RAM buffer with the to-be-flashed data is preloaded via JTAG; the operation is then repeated for all flash blocks to be reprogrammed.<br />
<br />
I was aware it was possible to program a flash chip situated on the board, outside the chip, by only playing with the chip's pads, strictly via JTAG, but I was still hoping single stepping the execution of the code in RedBoot was possible.<br />
<br />
Guided by that hope and by the possibility that the newer versions of the device were locked, I decided to add a JTAG interface to my older NSLU2, too. But this time I decided I would also add the TRST and SRST signals to the JTAG interface, just in case single stepping would work.<br />
<br />
This mod involved even more extensive changes than the ones done on the other NSLU, but I was so frustrated by the fact I was stuck that I didn't mind poking a few holes through the case and the prospect of a connector always sticking out from the other NSLU2, which was doing some small, yet useful work in my home LAN.<br />
<br />
<h4>
It turns out NOBODY single stepped the NSLU2</h4>
After biting the bullet and soldering a JTAG interface with the TRST and SRST signals also connected, as the <a href="http://www.nslu2-linux.org/wiki/Info/PinoutOfJTAGPort" target="_blank">pinout page</a> from the NSLU2 Linux wiki suggested, I was disappointed to observe that I was not able to single step the older NSLU2 either, in spite of the presence of the extra signals.<br />
<br />
I even tinkered with the reset configurations of OpenOCD, but had no success. After obtaining the same result on the proprietary debugger, digging through a presentation made by Rod back in the heyday of the project and through the conversations on the <a href="https://groups.yahoo.com/neo/groups/nslu2-linux/info" target="_blank">NSLU2 Linux Yahoo mailing list</a>, I finally concluded:<br />
<blockquote class="tr_bq">
<b>Actually nobody single stepped the NSLU2, no matter the version of the NSLU2 or connections available on the JTAG interface!</b></blockquote>
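For completeness, the reset tinkering mentioned above revolved around OpenOCD's reset declarations in the board/target configuration; below is a sketch of the kind of variations tried (which one is even legal depends on how TRST/SRST are actually wired, and none of them helped here):

```tcl
# candidate reset declarations for the board config file; pick the one
# matching the signals actually wired to the JTAG header
reset_config trst_and_srst
# reset_config srst_only srst_pulls_trst
# reset_config none

# give the adapter some settling time after toggling the reset lines
adapter_nsrst_delay 200
jtag_ntrst_delay 200
```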
So I was back to square one: I had to either struggle with the disassembly, reevaluate my initial options, find another option, or drop the idea entirely. At that point I was already committed to the project, so dropping the idea entirely didn't seem like the reasonable thing to do.<br />
<br />
Since I felt I was really close to the finish on the route I had chosen a while ago, was not significantly more knowledgeable in the NetBSD code, and looking at the NPE code made me feel like washing my hands, the only option which seemed reasonable was to go on.<br />
<br />
Digging a lot more through the internet, I was finally able to find <a href="https://github.com/nslu2/iop-redboot" target="_blank">another version of the RedBoot source which was modified for Intel ixp42x systems</a>. A few checks here and there revealed this newly found code was actually almost identical to the code I had disassembled from the slug I was aiming to run NetBSD on. This was a huge step forward.<br />
<br />
Long story short, a couple of days later I had a <a href="https://github.com/nslu2/apex/tree/netbsd" target="_blank">hacked Apex</a> that could go through the RedBoot data structures, search for available commands in RedBoot and successfully call any of the built-in RedBoot commands!<br />
<br />
Testing with loading this modified Apex by hand in RAM via TFTP, then jumping into it to see if things worked as expected, revealed a few small issues which I corrected right away.<br />
<br />
<h4>
Flashing a modified RedBoot?! But why? Wasn't Apex supposed to avoid exactly that risky operation?</h4>
<br />
Since the tests when executing from RAM were successful, my <a href="https://github.com/nslu2/apex/commits/netbsd" target="_blank">custom second stage Apex bootloader for NetBSD net booting</a> was ready to be flashed into the NSLU2.<br />
<br />
I added two more targets to the Makefile on the dedicated netbsd branch of my Apex repository to generate the images ready for flashing into the NSLU2 flash (RedBoot needs to find a Sercomm header in flash, otherwise it will crash); the exact commands to be executed in RedBoot are also printed out after generation. This way, if the command is copy-pasted, there is no risk the NSLU2 is bricked by mistake.<br />
<br />
After some flashing and reflashing of the <i><b><span class="blob-code-inner"><span class="pl-en">apex_nslu2.flash</span></span></b></i> image into the NSLU2 flash, some manual testing, tweaking and modifying of the default built-in Apex commands, and checking that the command sequence 'move', 'go 0x01d00000' would jump into Apex, which, in turn, would call RedBoot to transfer the netbsd-nfs.bin image from a TFTP server to RAM and then execute it successfully, it was high time to check that NetBSD would boot automatically after the NSLU2 was powered on.<br />
<br />
It didn't. Contrary to my previous tests, no call made from Apex to the RedBoot code would return back to Apex, not even the execution of a basic command such as the 'version' command.<br />
<br />
It turns out the default commands hardcoded into RedBoot were '<i>boot; exec 0x01d00000</i>', but I had tested '<i>boot; go 0x01d00000</i>', which is not the same thing.<br />
<br />
While 'go' does a plain jump to the specified address, the 'exec' command also does some preparations to allow a jump into the Linux kernel, and those preparations break some environment state the RedBoot commands expect. I don't know exactly which, and didn't have the mood or motivation to find out.<br />
<br />
So the easiest solution was to change RedBoot's built-in command and turn that 'exec' into a 'go'. But that meant this time I was actually risking bricking the NSLU2, unless I was able to reflash it via JTAG.<br />
<br />
<br />
(to be continued - next, changing RedBoot and bisecting through the NetBSD history)<br />
<br />
[X] Linksys NSLU2 has an XScale IXP420 processor which is compatible at ASM level with the ARMv5TEJ instruction seteddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-38781758847370599192015-05-08T03:08:00.001+03:002015-05-23T18:21:09.580+03:00Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1About 2 months ago I set a goal to run some kind of BSD on the spare Linksys NSLU2 I had. This was driven mostly by curiosity, after listening to a few <a href="http://www.bsdnow.tv/" target="_blank">BSDNow</a> episodes and becoming a regular listener, but it was a really interesting experience (it was also somewhat frustrating, mostly due to lacking documentation or proprietary code).<br />
<br />
Looking for documentation on how to install any BSD flavour on the Linksys NSLU2, I have found what appears to be some too-incomplete-to-be-useful-for-a-BSD-newbie information about <a href="https://wiki.freebsd.org/FreeBSDSlug" target="_blank">installing FreeBSD</a>, no information about OpenBSD and some <a href="http://wiki.netbsd.org/tutorials/how_to_install_netbsd_on_the_linksys_nslu2___40__slug__41___without_a_serial_port__44___using_nfs_and_telnet/" target="_blank">very detailed information about NetBSD on the Linksys NSLU2</a>.<br />
<br />
I was very impressed by the NetBSD build.sh script which can be used to <a href="http://www.netbsd.org/docs/guide/en/chap-build.html" target="_blank">cross-compile the entire NetBSD</a> system (to do that, it also builds the appropriate toolchain, the NetBSD kernel and the base system), even when run on a Linux host. Having some experience with cross compilation for GNU/Linux embedded systems, I can honestly say this is immensely impressive, well done NetBSD!<br />
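As an illustration, here is a hedged sketch of the kind of build.sh invocations involved; -U, -m, -a and the 'tools'/'kernel='/'sets' operations are documented build.sh options, but the exact values (and the NSLU2 kernel configuration name) are assumptions to be checked against './build.sh -h':

```shell
#!/bin/sh
# Sketch: cross-build the NetBSD toolchain, a big-endian evbarm kernel
# and the sets from a NetBSD source tree, on a Linux host.
run_netbsd_build() {
  cd "$1" || return 1                 # path to the NetBSD src tree
  ./build.sh -U -m evbarm -a armeb tools || return 1   # -U: unprivileged build
  ./build.sh -U -m evbarm -a armeb kernel=NSLU2 || return 1
  ./build.sh -U -m evbarm -a armeb sets
}

# Usage: run_netbsd_build /path/to/netbsd-src
```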
<br />
A few failed attempts to properly follow the instructions and lots of hours of (re)building later, I had the kernel and the sets (the NetBSD system is split into several parts grouped by functionality; these are the sets), so I was in the position of having to set things up to be able to net boot - kernel loading via TFTP and rootfs on NFS.<br />
<br />
But it wouldn't be challenging if the instructions were followed to the letter, so the first thing I wanted to change was that I didn't want to run dhcpd just to pass the DHCP boot configuration to the NSLU2, that seemed like a waste of resources since I already had dnsmasq running.<br />
<br />
After some effort and struggling with missing documentation, I managed to <a href="http://ramblingfoo.blogspot.com/2015/04/howto-dnsmasq-server-for-network.html" target="_blank">use dnsmasq to pass DHCP boot parameters to the slug, but also use it as TFTP</a> server - after some time I documented this for future reference on my blog and expect to refer to it in the future.<br />
<br />
Setting up NFS wasn't a problem, but, when trying to boot, I found that I managed to misread at least 3 or 4 times some of the <a href="http://wiki.netbsd.org/tutorials/how_to_install_netbsd_on_the_linksys_nslu2___40__slug__41___without_a_serial_port__44___using_nfs_and_telnet/" target="_blank">NSLU2 related information on the NetBSD wiki</a>. To be able to debug what was happening I concluded the slug should have a serial console attached to it, which helped a lot.<br />
<br />
Still the result was that I wasn't able to boot the trunk version of the NetBSD code on my NSLU2.<br />
<br />
Long story short, with the help of some people from the #netbsd IRC channel on Freenode and from the port-arm NetBSD mailing list I found out that I might have a better chance with specific older versions. In practice <a href="http://mail-index.netbsd.org/port-arm/2015/03/24/msg002963.html" target="_blank">what really worked was the code from the netbsd_6_1</a> branch.<br />
<br />
Discussions on the port-arm mailing list, some digging into the (recently found) PR (problem reports), and <a href="http://mail-index.netbsd.org/port-arm/2015/03/27/msg002975.html" target="_blank">a successful execution of the trunk kernel (at the time, version 7.99.4) together with 6.1.5 userspace</a> lead me to the conclusion the NetBSD userspace for armbe was broken in the trunk branch.<br />
<br />
And since I concluded this would be a good occasion to learn a few details about NetBSD, I set out to git bisect through the trunk history to identify when this happened. But that meant being able to easily load kernels and run them from TFTP, which was not how the RedBoot bootloader flashed into the slug behaved by default.<br />
<br />
By default, the RedBoot bootloader flashed into the NSLU2 waits for 2 seconds for a manual interaction (it waits for a ^C) on the serial console or on the <a href="http://www.nslu2-linux.org/wiki/HowTo/TelnetIntoRedBoot" target="_blank">telnet RedBoot prompt</a>, then, if no such event happens, it copies the Linux image it has in flash starting at address 0x50060000 into RAM at address 0x01d00000 (after stripping the Sercomm header) and then executes the copied code from RAM.<br />
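The header stripping RedBoot does can be reproduced offline on an image with dd; here is a sketch, assuming a 16-byte Sercomm header (the header size is an assumption, verify it against the actual image format before relying on it):

```shell
#!/bin/sh
# Strip an assumed 16-byte Sercomm header from the front of an image,
# leaving the raw payload RedBoot would copy to RAM and execute.
strip_sercomm() {
  dd if="$1" of="$2" bs=16 skip=1 2>/dev/null
}
```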
<br />
Of course, this is not a very handy way to try to boot things from TFTP, so my first idea to overcome this limitation was to use a second stage bootloader which would do the loading via TFTP of the NetBSD kernel, then execute it from RAM. Flashing this second stage bootloader instead of the Linux kernel at 0x50060000 would make sure that no manual intervention except power on would be necessary when a new kernel+userspace pair is ready to be tested.<br />
<br />
Another advantage was that I would not risk bricking the NSLU2 since I would not be changing RedBoot, the original bootloader.<br />
<br />
I knew Apex was used as the second stage bootloader in Debian, so I started configuring <a href="https://github.com/nslu2/apex" target="_blank">my own version of the APEX bootloader to make it work for the netbsd-nfs.bin image</a> to be loaded via TFTP. <br />
<br />
My first disappointment was that Apex did not support receiving the boot parameters via DHCP, but only via RARP (it was clearly less tested with BOOTP or DHCP), and TFTP was documented in the code as being problematic. That meant that I would have to hard code the boot configuration or configure RARP, but that wasn't too bad.<br />
<br />
Later I found out that I had wasted time on that avenue, because the network driver in Apex was some Intel code (the NPE Access Library) which can't be freely distributed, but could have been downloaded from Intel's site back in 2008-2009. The bad news was that current versions did not work at all with the old patchwork that was done in Apex to allow the driver made for Linux to compile in a world of its own so it could be incorporated into Apex.<br />
<br />
I was stuck and the only options I saw were:<br />
<ol>
<li>Fight with the available Intel code and make it compile in Apex</li>
<li>Incorporate the NPE driver from NetBSD into a <a href="http://wiki.netbsd.org/rumpkernel/" target="_blank">rump kernel</a> which will be included in Apex, since I knew the NetBSD driver only needed a very easily obtainable binary blob, instead of the entire driver as was in Apex before</li>
<li>Hack together an Apex version that simulates the typing of the necessary commands to load the netbsd-nfs.bin image inside RedBoot, or in other words, call from Apex the RedBoot functions necessary to load from TFTP and execute NetBSD.</li>
</ol>
Option 1 did not look that appealing after looking into the horrible Intel build system and its endless dependencies on a specific Linux kernel version.<br />
<br />
Option 2 was more appealing, but since I didn't know NetBSD and had only tried once to build and run a NetBSD rump kernel, it seemed like a doable project only for an experienced NetBSD developer, or at least an experienced NetBSD user, which I was not.<br />
<br />
So I was left with option 3, which meant I had to do some reverse engineering of the code because, although RedBoot is GPL, Linksys did not publish the source from which the running RedBoot was built.<br />
<br />
<br />
(continues <a href="http://ramblingfoo.blogspot.com/2015/05/linksys-nslu2-adventures-into-netbsd_20.html" target="_blank">here</a>)eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-91130565342449659472015-04-30T03:42:00.001+03:002015-04-30T03:42:29.139+03:00Linksys NSLU2 JTAG help requestedSome time ago I embarked on a journey to install NetBSD on one of my two NSLU2-s. I ran into all sorts of hurdles and problems which I finally managed to overcome, except one:<br />
<br />
The NSLU2 I am using has a standard 20 pin ARM JTAG connector attached to it (as per this page <a href="http://www.nslu2-linux.org/wiki/Info/PinoutOfJTAGPort">http://www.nslu2-linux.org/wiki/Info/PinoutOfJTAGPort</a>, only the TDI, TDO, TMS, TCK, Vref and GND signals), but, although the chip is identified, I am <a href="http://pastebin.com/fFLiwdrF" target="_blank">unable to halt the CPU</a>:<br />
<div id="selectable">
<div class="text">
<ol><div class="de1">
<span style="font-size: x-small;">$ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg -f board/linksys_nslu2.cfg</span></div>
<div class="de2">
<span style="font-size: x-small;">Open On-Chip Debugger 0.8.0 (2015-04-14-09:12)</span></div>
<div class="de1">
<span style="font-size: x-small;">Licensed under GNU GPL v2</span></div>
<div class="de2">
<span style="font-size: x-small;">For bug reports, read</span></div>
<div class="de1">
<span style="font-size: x-small;"> http://openocd.sourceforge.net/doc/doxygen/bugs.html</span></div>
<div class="de2">
</div>
<div class="de1">
<span style="font-size: x-small;">Info : only one transport option; autoselect 'jtag'</span></div>
<div class="de2">
<span style="font-size: x-small;">adapter speed: 300 kHz</span></div>
<div class="de1">
<span style="font-size: x-small;">Info : ixp42x.cpu: hardware has 2 breakpoints and 2 watchpoints</span></div>
<div class="de2">
<span style="font-size: x-small;">0</span></div>
<div class="de1">
<span style="font-size: x-small;"><b>Info : clock speed 300 kHz</b></span></div>
<div class="de2">
<span style="font-size: x-small;"><b>Info : JTAG tap: ixp42x.cpu tap/device found: 0x29277013 (mfg: 0x009,</b></span></div>
<div class="de1">
<span style="font-size: x-small;"><b>part: 0x9277, ver: 0x2)</b></span></div>
<div class="de2">
<span style="font-size: x-small;">[..]</span></div>
<div class="de1">
</div>
<div class="de2">
<span style="font-size: x-small;">$ telnet localhost 4444</span></div>
<div class="de1">
<span style="font-size: x-small;">Trying ::1...</span></div>
<div class="de2">
<span style="font-size: x-small;">Trying 127.0.0.1...</span></div>
<div class="de1">
<span style="font-size: x-small;">Connected to localhost.</span></div>
<div class="de2">
<span style="font-size: x-small;">Escape character is '^]'.</span></div>
<div class="de1">
<span style="font-size: x-small;">Open On-Chip Debugger</span></div>
<div class="de2">
<span style="font-size: x-small;">> halt</span></div>
<div class="de1">
<span style="font-size: x-small;"><b>target was in unknown state when halt was requested</b></span></div>
<div class="de2">
<span style="font-size: x-small;"><b>in procedure 'halt'</b></span></div>
<div class="de1">
<span style="font-size: x-small;">> poll</span></div>
<div class="de2">
<span style="font-size: x-small;"><b>background polling: on</b></span></div>
<div class="de1">
<span style="font-size: x-small;"><b>TAP: ixp42x.cpu (enabled)</b></span></div>
<div class="de2">
<span style="font-size: x-small;"><b>target state: unknown </b></span></div>
</ol>
</div>
</div>
My main goal is to make sure I can flash the device via JTAG, in case I break it, but it would be ideal if I could use the JTAG to single step through the code.<br />
<br />
I have found that other people have managed to flash the device via <a href="https://groups.yahoo.com/neo/groups/nslu2-linux/conversations/messages/16493" target="_blank">JTAG without the other signals</a>, and some have even <a href="https://groups.yahoo.com/neo/groups/nslu2-linux/conversations/messages/10655" target="_blank">changed the bootloader</a> (and had JTAG confirmed as backup solution), so I am stuck.<br />
<br />
So if anyone can give some insights into ixp42x / Xscale / NSLU2 specific JTAG issues or hints regarding this issue on OpenOCD or other such tool, I would be really grateful.<br />
<br />
<br />
<i><span style="font-size: x-small;">Note: I have made a hacked second stage Apex bootloader to load the NetBSD image via TFTP, but the default RedBoot sequence 'boot; exec 0x01d00000' should be 'boot; go 0x01d00000' for NetBSD to work, so I am considering changing the RedBoot partition to alter that command. The gory details can be summed up as: my Apex is calling RedBoot functions to be network enabled (because Intel's current NPE code is not working on Apex) and I have tested this to work with go, but not with exec.</span></i>eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-26673064926858255862015-04-06T10:27:00.003+03:002015-04-06T10:27:58.153+03:00HOWTO: Dnsmasq server for network booting using TFTP and DHCP<a href="http://www.thekelleys.org.uk/dnsmasq/doc.html" target="_blank">Dnsmasq</a> is a very lightweight server that, besides the expected DNS caching functionality, also offers DHCP and TFTP functionality in a single binary.<br />
<br />
This makes it very useful if one needs to network boot a system, since the TFTP and DHCP parts of the setup are covered easily, and only NFS needs to be added for a complete network boot.<br />
<br />
One extra nice thing dnsmasq has is that it can mark specific hosts, addresses or ranges with internal markers, then use those markers as symbolic names to apply settings to whole classes of devices.<br />
<br />
In the configuration snippet below, there is a rule I set up to make sure I would apply the 'netbsd' label to any system connecting through specific ethernet interfaces (one is the interface of the system, the other is a USB NIC I use from time to time):<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">
#You will need a range for static IPs in the main file<br />
dhcp-range=192.168.77.250,192.168.77.254,static<br />
<br />
# give the name 'kinder' to any machine connecting through the given ethernet nics and apply 'netbsd' label<br />
dhcp-host=00:1a:70:99:60:BB,00:06:4F:0D:B1:95, kinder, 192.168.77.251, set:netbsd<br />
<br />
# Machines tagged 'netbsd' shall use the given NFS root path<br />
dhcp-option=tag:netbsd, option:root-path,/export/netbsd-nslu2/root<br />
# Enable dnsmasq's built-in TFTP server<br />
enable-tftp<br />
<br />
# Set the root directory for files available via TFTP.<br />
tftp-root=/srv/tftp</span></blockquote>
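If the tagged machines should also fetch a kernel or bootloader image over TFTP, dnsmasq's dhcp-boot option can point them at a file under the tftp-root. A sketch of such a line (the file name is purely illustrative, not from my setup):

```
# Hand the 'netbsd'-tagged machines a boot file, relative to tftp-root
dhcp-boot=tag:netbsd,netbsd-kernel.bin
```

The tag: prefix works here the same way as in the dhcp-option line above.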
Saving this configuration file in /etc/dnsmasq.d/kinder-netboot will enable this to be used by dnsmasq if this line is present in /etc/dnsmasq.conf<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">conf-dir=/etc/dnsmasq.d</span></blockquote>
Commenting it out will disable the netbsd part easily.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-8092292453701311242015-03-29T01:52:00.002+02:002015-03-29T16:30:52.235+03:00HOWTO: Disassemble a big endian Arm raw memory dump with objdumpThis is trivial and very useful for embedded code dumps, but in case somebody (including future me) needs this, here it goes:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">arm-none-eabi-objdump -D -b binary -m arm -EB dump.bin | less</span></blockquote>
The options mean:<br />
<ul>
<li>-D - disassemble</li>
<li>-b binary - input file is a raw file</li>
<li>-m arm - arm architecture</li>
<li>-EB - big endian</li>
</ul>
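When in doubt about a dump's endianness, it can help to eyeball the first words before disassembling; a quick sketch with GNU od (sample.bin and its contents are made up for illustration):

```shell
# Craft a 4-byte sample: 0xe59ff018 is 'ldr pc, [pc, #0x18]', a classic
# ARM exception-vector entry, written here in big-endian byte order
printf '\345\237\360\030' > sample.bin

# Print the dump as 4-byte words, forcing big-endian interpretation
od -An -N16 -tx4 --endian=big sample.bin
```

If the words only look sane with --endian=little, drop the -EB flag from the objdump invocation above.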
By default, endianness is assumed to be little endian, or at least that's the case with my toolchain.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-70834438203558447592015-03-11T11:30:00.000+02:002015-03-29T01:40:37.299+02:00Net NeutralityI have seen <a href="http://theoatmeal.com/blog/net_neutrality" target="_blank">this awesomeness</a> way too late, but it is still awesome.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-83047455633129077482015-03-06T23:32:00.000+02:002015-03-06T23:39:57.446+02:00Occasional Rsnapshot v1.3.1I was writing in the <a href="http://ramblingfoo.blogspot.ro/2015/02/occasional-rsnapshot-v130.html" target="_blank">previous post</a> about <a href="https://github.com/eddyp/occasional_rsnapshot" target="_blank">Occasional Rsnapshot</a> and how I ended up writing it.<br />
<br />
Just before releasing v1.2.1 I realized it would make sense to adopt <a href="http://semver.org/" target="_blank">Semantic Versioning</a> which, in just a few words, means:<br />
<blockquote class="tr_bq">
Given a version number MAJOR.MINOR.PATCH, increment the:<br />
<pre><code>MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
</code></pre>
</blockquote>
I just released Occasional Rsnapshot v1.3.1 which mainly fixed <a href="https://github.com/eddyp/occasional_rsnapshot/issues/4" target="_blank">issue 4</a>:<br />
<div class="edit-comment-hide">
<div class="comment-body markdown-body markdown-format js-comment-body">
<blockquote class="tr_bq">
When deciding to do a backup for interval INT, there should
be a check that the oldest snapshot in the INT-1 interval is older than the
threshold for the INT interval. Otherwise the INT interval will be
populated with backups more frequent than desired, and it is possible
for older backups in the INT interval to be completely lost.<br />
The condition should be:<br />
<pre><code>ts(oldest(INT-1)) - ts(newest(INT)) >= threshold(INT)
</code></pre>
For example:<br />
<ul class="task-list">
<li>if <code>weekly.0</code> is from 15th of February and <code>daily.6</code> is from 17th of February, <code>weekly</code> should not be triggered, but</li>
<li>if <code>weekly.0</code> is from 15th of February and <code>daily.6</code> is from 23rd of February, <code>weekly</code> shall be triggered</li>
</ul>
This extra check should probably be added in <code>can_backup_interval</code>.</blockquote>
It was a small bug, but it might have led to losing important older backups, because newer and more frequent backups would be pushed from hourly, interval by interval, up to the yearly interval, even though the distance between backups wouldn't have respected the upper interval's minimum distance in time.<br />
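The condition from the issue can be sketched in plain shell arithmetic; the timestamps below are hypothetical epoch values for the 15/17 February example, not data read from a real backup tree:

```shell
ts_newest_weekly=1423958400            # weekly.0, 15 Feb 2015 00:00 UTC
ts_oldest_daily=1424131200             # daily.6,  17 Feb 2015 00:00 UTC
threshold_weekly=$((7 * 24 * 3600))    # one week, in seconds

# ts(oldest(INT-1)) - ts(newest(INT)) >= threshold(INT)
if [ $((ts_oldest_daily - ts_newest_weekly)) -ge "$threshold_weekly" ]; then
  echo "trigger weekly"
else
  echo "skip weekly"   # only 2 days apart, so weekly must wait
fi
```

With the dates from the first bullet above this prints "skip weekly"; moving daily.6 to the 23rd of February flips it to "trigger weekly".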
<br />
There was also a small syntax bugfix, but functionally nothing has changed because bash was doing the right thing even with that small error.<br />
<br />
If you have started using <a href="https://github.com/eddyp/occasional_rsnapshot" target="_blank">Occasional Rsnapshot</a>, you definitely want to upgrade to <a href="https://github.com/eddyp/occasional_rsnapshot/releases/tag/v1.3.1" target="_blank">v1.3.1</a>. If you haven't started and you don't have backups, please start doing backups, and while you're at it, you might want to try Occasional Rsnapshot (with Rsnapshot).<br />
<br /></div>
</div>
eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-71285820041618949932015-02-25T21:39:00.002+02:002015-02-25T21:41:44.312+02:00Occasional Rsnapshot v1.3.0It is almost exactly 1 year and a half since I came up with the idea of making backups with Rsnapshot automatically triggered whenever the backup media is connected to my laptop. This could mean connecting a USB drive directly to the laptop or mounting an NFS/sshfs share in my home network. Today I tagged <a href="https://github.com/eddyp/occasional_rsnapshot">Occasional Rsnapshot</a> version <a href="https://github.com/eddyp/occasional_rsnapshot/releases/tag/v1.3.0">v1.3.0</a>, the first released version that makes sure that, even when you connect your backup media only occasionally, your <a href="http://www.rsnapshot.org/">Rsnapshot</a> backups are done if and when it makes sense to do them, according to the rsnapshot.conf file and the status of the existing backups on the backup media.<br />
<br />
Quoting from the <a href="https://github.com/eddyp/occasional_rsnapshot/blob/master/README.md">README</a>, here is what Occasional Rsnapshot does:<br />
<blockquote class="tr_bq">
<br />
This is a tool that allows automatic backups using rsnapshot
when the external backup drive or remote backup media is
connected.<br />
<br />
Although the ideal setup would be to have periodic backups on
a system that is always online, this is not always possible.
But when the connection is done, the backup should start fairly
quickly and should respect the daily/weekly/... schedules of
rsnapshot so that it accurately represents history.<br />
<br />
In other words, if you backup to an external drive or to some
network/internet connected storage that you don't expect to
have always connected (which is the case with laptops) you can
use <code>occasional_rsnapshot</code> to make sure your data is backed
up when the backup storage is connected.<br />
<br />
<code>occasional_rsnapshot</code> is appropriate for:<br />
<ul class="task-list">
<li>laptops backing up on:
<ul class="task-list">
<li>a NAS on the home LAN or</li>
<li>a remote or an internet hosted storage location</li>
</ul>
</li>
<li>systems making backups online (storage mounted locally
somehow)</li>
<li>systems doing backups on an external drive that is not
always connected to the system</li>
</ul>
The only caveat is that all of these must be mounted in the
local file system tree somehow by any arbitrary tool,
<code>occasional_rsnapshot</code> or <code>rsnapshot</code> do not care, as long as
the files are mounted.<br />
<br />
So if you find yourself in a similar situation, this script
might help you to easily do backups in spite of the occasional
availability of the backup media, instead of no backups at all.
You can even trigger backups semi-automatically when you
remember to or decide it is time to back up, by simply plugging in
your USB backup HDD.</blockquote>
<br />
But how did I end up here, you might ask?<br />
<br />
In December 2012 I was asking about suggestions for <a href="http://ramblingfoo.blogspot.ro/2012/12/looking-for-backup-application-for.html" target="_blank">backup solutions that would work for my very modest setup with Linux and Windows</a> so I can backup my and my wife's system without worrying about loss of data.<br />
<br />
One month later <a href="http://ramblingfoo.blogspot.ro/2013/01/rsnapshot-backup-and-security-some.html" target="_blank">I was explaining my concept of a backup solution</a> that would not trust the backup server, and leave to the clients as much as possible the decision to start the backup at their desired time. I was also pondering on the problems I might encounter.<br />
<br />
From a security PoV, what I wanted was that:<br />
<ol>
<li>clients would be isolated from each other</li>
<li>even in the case of a server compromise:
<ul>
<li>the data would not be accessible, since it would already be encrypted before leaving the client</li>
<li>the clients could not be compromised</li>
</ul>
</li>
</ol>
<br />
<br />
The general concept was sane and supplemental security measures such as <a href="http://linux.die.net/man/1/knockd" target="_blank">port knocking</a> and initiation of backups only during specific time frames could be added.<br />
<br />
The problem I ran into was that when I set this up in my home network, a single backup cycle would take more than a day, due to the fact that I wanted to back up all of my data and my server was a humble Linksys NSLU2 with 3TB of storage attached over USB.<br />
<br />
Even when the initial copy was done by attaching the USB media directly to the laptop, so the backup would only copy changed data, the backup with the HDD attached to the NSLU2 was not finished even after more than 6 hours.<br />
<br />
The bottleneck was the CPU speed and the USB speed. I even tried mounting the storage media over sshfs, so the tiny xscale processor in the NSLU2 would not be bothered by any of the rsync computation. This proved to be an exercise in futility: any attempt to put the NSLU2 anywhere in the loop resulted in an unacceptably and impractically long backup time.<br />
<br />
All these attempts, of course, took time, and all the while I was aware I still didn't have appropriate backups and wasn't getting closer to the desired result.<br />
<br />
So this brings us to August 2013, when I realized I was trying to manually trigger Rsnapshot backups from time to time, but having to do all sorts of mental gymnastics and manual listing to evaluate if I needed to do monthly, weekly and daily backups, or if only weekly and daily were due.<br />
<br />
This had to stop.<br />
<blockquote class="tr_bq">
Triggering a backup should happen automatically as soon as the backup media is available, without any intervention from the user.</blockquote>
I said.<br />
<br />
Then I came up with the basic concept for <a href="https://github.com/eddyp/occasional_rsnapshot">Occasional Rsnapshot</a>: a very silent script that would be called from cron every 5 minutes and check if the backup media is mounted; if it is not, exit silently so as not to generate noise in cron emails, but if it is mounted, compute which backup intervals should be triggered and trigger them, if the appropriate amount of time has passed since the most recent backup in that interval.<br />
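The concept fits in a few lines of shell; this is only an illustrative sketch (the mount point and the crontab line are made-up examples, not the actual script):

```shell
#!/bin/sh
# Hypothetical crontab entry: */5 * * * * /usr/local/bin/occasional_rsnapshot
BACKUP_MNT=/mnt/backup    # illustrative mount point of the backup media

if grep -qs " $BACKUP_MNT " /proc/mounts; then
  # Media present: figure out which intervals are due and trigger them
  for interval in hourly daily weekly monthly yearly; do
    echo "would check and maybe run: rsnapshot $interval"
  done
else
  :   # media absent: stay silent so cron does not mail any noise
fi
```

The grep on /proc/mounts is one cheap way to test "is the media mounted"; mountpoint(1) from util-linux would do as well.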
<br />
<a href="https://github.com/eddyp/occasional_rsnapshot">Occasional Rsnapshot</a> version <a href="https://github.com/eddyp/occasional_rsnapshot/releases/tag/v1.3.0">v1.3.0</a> is the 8th and most recent release of the script. Even though I have used Occasional Rsnapshot since day 1, v1.3.0 is the first one I can recommend to others without fearing they might lose data due to it.<br />
<br />
The backup media can be anything, starting from your regular USB mounted HDD, your sshfs mounted backup partition on the home NAS server to even a remote storage such as <a href="https://en.wikipedia.org/wiki/Amazon_S3">Amazon S3</a> online storage, and there are even brief instructions on how to do encrypted backups for the cases where you don't trust the remote storage.<br />
<br />
So if you think you might find anything that I described remotely interesting, I recommend downloading the latest <a href="https://github.com/eddyp/occasional_rsnapshot/releases">release of Occasional Rsnapshot</a>, going through the README and trying it out.<br />
<br />
Feedback and bug reports are welcome.<br />
Patches are welcomed with a 'thank you'.<br />
<a href="https://github.com/eddyp/occasional_rsnapshot/pulls">Pull requests</a> are eagerly awaited :) .eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-3838593907727406062015-02-09T23:40:00.002+02:002015-02-09T23:40:54.114+02:00uClibc based toolchain using Gentoo for NSLU2I was asked in <a href="http://ramblingfoo.blogspot.com/2015/02/using-gentoo-to-create-cross-toolchain.html" target="_blank">the previous post</a> why I didn't use the Debian armel port for my NSLU2.<br />
<br />
My intention was to create a uclibc based system (and a uclibc based toolchain) for my NSLU2. This was not obvious in the post because building the uclibc based toolchain resulted in this error:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><b>crossdev armv5-softfloat-linux-uclibceabi</b></span></span></blockquote>
<blockquote>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">[..]</span></span></span></span><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">/var/tmp/portage/cross-armv5-softfloat-linux-uclibceabi/gcc-4.9.2/work/gcc-4.9.2/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:67:21: fatal error: wordexp.h: No such file or directory</span></span><br /><b><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> #include &lt;wordexp.h&gt;</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> ^</span></span></b><br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">compilation terminated.</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Makefile:416: recipe for target 'sanitizer_platform_limits_posix.lo' failed</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">make[4]: *** [sanitizer_platform_limits_posix.lo] Error 1</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">make[4]: *** Waiting for unfinished jobs....</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"></span></span></blockquote>
<br />
So, yesterday, to not forget what I did, I tried building a glibc based toolchain and posted that.<br />
<br />
Today, after looking at the offending error, it seems gcc 4.9.2 assumes wordexp.h is always available on non-Android platforms, and Gentoo does not make that file available when installing uclibc.<br />
<br />
<br />
<strike>I think the problem was introduced in gcc in 2013 with <a href="https://gcc.gnu.org/viewcvs/gcc/trunk/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc?r1=204367&r2=204368&" target="_blank">this</a> commit, but I haven't checked in detail. </strike>What I know for sure is that gcc 4.8.3 works at this moment with uclibc and the buildroot guys are still using gcc 4.8.4 in their default uClibc based toolchain. So here is the command that generated the uClibc based toolchain:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"><b>crossdev --g 4.8.3 armv5-softfloat-linux-uclibceabi</b></span><br />
<br />
I hope this helps other people. Yes, I know I should report the issue to GCC/Gentoo after further investigation.eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-82619109208589248392015-02-07T21:07:00.000+02:002015-02-07T21:07:26.258+02:00Using Gentoo to create a cross toolchain for the old NSLU2 systems (armv5te)<i>This is mostly written so I don't forget how to create a custom (Arm) toolchain the Gentoo way (in a Gentoo chroot).</i><br />
<br />
I have been a Debian user since 2001, and I like it a lot. Yet I have had my share of problems with it, mostly because, due to lack of time, I have very little disposition to track unstable or testing, so I am forced to use stable.<br />
<br />
This led me to be a fan of <a href="http://www.eyrie.org/~eagle/software/scripts/backport.html" target="_blank">Russ Albery's backport</a> script and to create a lot of local backports of packages that are already in unstable or testing.<br />
<br />
But this does not help when packages are simply missing from Debian, or for something like creating an arm uclibc based system that should be kept up to date from a security PoV.<br />
<br />I have experience with <a href="http://buildroot.org/" target="_blank">Buildroot</a> and I must say I like it a lot for creating custom root filesystems and even toolchains. It allows a lot of flexibility that binary distros like Debian don't offer, and it does its designated job of creating root filesystems well. But Buildroot is not appropriate for a system that should be kept up to date, because it lacks a mechanism to update to new versions of packages without recompiling the entire rootfs.<br />
<br />
So I was hearing from the guys from the <a href="http://www.jupiterbroadcasting.com/show/linuxactionshow/" target="_blank">Linux Action Show</a> (and <a href="http://www.jupiterbroadcasting.com/show/linuxun/" target="_blank">Linux Unplugged</a> - <span style="font-size: xx-small;">by the way, Jupiter Broadcast, why do I need scripts enabled from several sites just to see the links for the shows?</span>) how Arch is great and all, that it is a binary rolling release, and that you can customize packages by building your own packages from source using <a href="https://wiki.archlinux.org/index.php/Makepkg" target="_blank">makepkg</a>. I tried it, but Arm support is provided only for some specific (modern) devices, my venerable Linksys NSLU2's (I have 2 of them) not being among them.<br />
<br />
So I tried Arch in a chroot, then dropped it in favour of a Gentoo chroot, since I was under the impression that running Arch from a chroot wasn't such a great idea and I didn't want to install Arch on my SSD.<br />
<br />
I successfully used Gentoo in the past to create an arm-unknown-linux-gnueabi chroot back in 2008, and I have always liked the idea of Gentoo's USE flags, so I knew I could do this.<br />
<br />
<br />
So here it goes:<br />
<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"># create a local portage overlay - necessary for cross tools</span><br />
<span style="font-family: "Courier New",Courier,monospace;">export LP=/usr/local/portage</span><br />
<span style="font-family: "Courier New",Courier,monospace;">mkdir -p $LP/{metadata,profiles}</span><br />
<span style="font-family: "Courier New",Courier,monospace;">echo 'mycross' > $LP/profiles/repo_name</span><br />
<span style="font-family: "Courier New",Courier,monospace;">echo 'masters = gentoo' > $LP/metadata/layout.conf</span><br />
<span style="font-family: "Courier New",Courier,monospace;">chown -R portage:portage $LP</span><br />
<span style="font-family: "Courier New",Courier,monospace;">echo 'PORTDIR_OVERLAY="'$LP' ${PORTDIR_OVERLAY}"' >> /etc/portage/make.conf</span><br />
<span style="font-family: "Courier New",Courier,monospace;">unset LP</span><br />
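A stray or misnamed file in the overlay can be hard to debug later, so the skeleton can be rehearsed in a scratch directory first; a sketch (the mktemp path stands in for the real /usr/local/portage):

```shell
# Build the same two-file overlay skeleton in a throwaway directory
LP=$(mktemp -d)
mkdir -p "$LP/metadata" "$LP/profiles"
echo 'mycross' > "$LP/profiles/repo_name"
echo 'masters = gentoo' > "$LP/metadata/layout.conf"

# The overlay needs exactly these two files before crossdev can use it
cat "$LP/profiles/repo_name"       # mycross
cat "$LP/metadata/layout.conf"     # masters = gentoo
```

Once the layout looks right, repeat it under /usr/local/portage with the portage:portage ownership as above.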
<span style="font-family: "Courier New",Courier,monospace;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace;"># install crossdev, setup for the desired target, build toolchain</span><br />
<span style="font-family: "Courier New",Courier,monospace;">emerge crossdev</span><br />
<span style="font-family: "Courier New",Courier,monospace;">crossdev --init-target -t arm-</span><span style="font-family: "Courier New",Courier,monospace;"><span style="font-family: "Courier New",Courier,monospace;">softfloat</span>-linux-gnueabi -oO /usr/local/portage/mycross</span><br />
<span style="font-family: "Courier New",Courier,monospace;">crossdev -t arm-softfloat-linux-gnueabi</span><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;"> </span><br />
<br />
<br />eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com1tag:blogger.com,1999:blog-1134723925265805427.post-43595158405823637272013-10-28T23:46:00.002+02:002013-10-29T09:52:38.559+02:00The stupidest trend in laptop design is...... numpads on laptop keyboards.<br />
<br />
Just because a very, very, very restricted segment of the population is into accounting or other jobs needing frequent numeric input, almost all laptop manufacturers feel the stupid urge to place a numpad on laptops with screens over 14''.<br />
<br />
<b>Reasons why a numpad on a laptop is a bad idea</b>:<br />
<ul>
<li>can lead to health issues: it forces the user to assume a bad position in front of the screen; this can affect the eyes and the backbone; It's bad enough many people have a bad posture in front of the computer as it is, no need to pump up Scoliosis' position in the laundry list of modern day health risks</li>
</ul>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-VlghglAst_k/Um7WEO0i7jI/AAAAAAAAAbg/eK-HqmIQv5o/s1600/54398-hp-envy-17-box.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="http://1.bp.blogspot.com/-VlghglAst_k/Um7WEO0i7jI/AAAAAAAAAbg/eK-HqmIQv5o/s1600/54398-hp-envy-17-box.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The numpad forces eye focus and users' hands to be way off center.<br />
The touchpad looks as if it was thrown to the side by accident<br />
Note how the designer tried to offset the misalignment forced<br />
by the numpad by "positioning" the touchpad closer to center,<br />
but not in line with the space bar (and the keyboard)</td></tr>
</tbody></table>
<br />
<ul>
<li>ugly design: it simply looks ugly; why do people think Apple didn't jump on this stupid bandwagon? Because it's ugly design!</li>
</ul>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-uwZXEK-HI_E/Um7ZjRGP-1I/AAAAAAAAAbs/HqSW-V32yzI/s1600/52274-dell-inspiron-xps-box.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="264" src="http://4.bp.blogspot.com/-uwZXEK-HI_E/Um7ZjRGP-1I/AAAAAAAAAbs/HqSW-V32yzI/s1600/52274-dell-inspiron-xps-box.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Without numpad the position is almost perfect!<br />
The symmetry of the laptop improves on the design.<br />
Notice how the touchpad is also centered.</td><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<br />
<ul>
<li>the numpad is useless for the vast majority of people, and those who need numpads already use them on a desktop keyboard or can buy a standalone numpad</li>
<li>it's more expensive to manufacture (more moving parts, more complex wiring and more expensive materials - plastic is cheaper than plastic+copper+rubber)</li>
<li>less resilient: more ways to fail, higher risk to get liquids inside</li>
<li>bad mechanical design: wider keyboard means less distance from the edge to the keyboard, which can mean a more fragile case</li>
<li>makes the laptop heavier: the plastic or material that covers the keys would be enough to cover the insides; the extra rubber, wiring, support etc, add extra weight</li>
<li>it kills the opportunity to use the real estate for other useful things (or things which improve the overall design): speakers, volume keys, finger print readers, other special function keys, LEDs or other output devices</li>
</ul>
What's worse is that most resellers do not have filters for this particular mis-feature, so you can't easily exclude laptops corrupted by this horrendous idea.<br />
<span style="font-size: x-large;"><br /></span>
<span style="font-size: x-large;">So, if you're a laptop designer, please stop putting numpads on laptops.</span><br />
<br />
It will make for better looking laptops and you'll have the opportunity to be the one stating the obvious: numpads are generally useless! Even on desktop keyboards!eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com46tag:blogger.com,1999:blog-1134723925265805427.post-62176260944401506612013-09-02T00:46:00.001+03:002013-09-02T00:51:28.306+03:00Integrating Beyond Compare with Semanticmerge<i>Note: This post will probably not be to the liking of those who think free software is always preferable to closed source software, so if you are such a person, please take this article as an invitation to implement better open source alternatives that can realistically compete with the closed source applications I am mentioning here. I am not going to mention here where the open source alternatives are not up to the same level as the commercial tools, I'll leave that for the readers or for another article.</i><br />
<br />
<br />
<br />
<a href="http://www.semanticmerge.com/" target="_blank">Semanticmerge</a> is a merge tool that attempts to do the right thing when it comes to merging source code. It is language aware and it currently supports Java and C#. Just today the creators of the software have <a href="http://plasticscm.uservoice.com/forums/196398-mergebegins/suggestions/3866314-c-support?tracking_code=a086fa1a750914b9a523a22d6eed3ec8" target="_blank">started working on the support for C</a>.<br />
<br />
Recently they added Debian packages, so I installed it on my system. For open source development Codice Software, the creators of Semanticmerge, offers free licenses, so I decided to ask for one today, and, although it was Sunday, I received an answer and I will get my license on Monday.<br />
<br />
When a method is moved from one place to another and changed in a conflicting way in two parallel development lines, Semanticmerge can isolate the offending method and can pass all its incarnations (base, source and destination or, if you prefer, base, mine and theirs) to a text based merge tool to allow the developer to decide how to resolve the merge. On Linux, the Semanticmerge samples are using <a href="http://kdiff3.sourceforge.net/" target="_blank">kdiff3</a> as the text-based merge tool, which is nice, but I don't use kdiff3, I use <a href="http://meldmerge.org/" target="_blank">Meld</a>, another open source visual tool for merges and comparisons.<br />
<br />
<br />
OTOH, <a href="http://www.scootersoftware.com/" rel="nofollow" target="_blank">Beyond Compare</a> is a merge and compare tool made by Scooter Software which provides a very good text based 3-way merge with a 3 sources + 1 result pane, and can compare both files and directories. Two of its killer features are the ability to split differences into important and non-important ones according to the syntax of the compared/merged files, and the ability to easily change or add to the syntax rules in a very user-friendly way. This makes it easy to ignore changes in comments, but also basic refactoring such as variable renaming or other trivial code-wide changes, which allows the developer to focus on the important changes/differences during merges or code reviews.<br />
<br />
Syntax support for usual file formats like C, Java, shell, Perl etc. is built in (but can be modified, which is a good thing) and new file types with their syntaxes can be added via the GUI from scratch or based on existing rules.<br />
<br />
I evaluated Beyond Compare at my workplace and we decided it would be a good investment to purchase licenses for it for the people in our department.<br />
<br />
<br />
Having these two tools separate is good, but having them integrated with each other would be even better. So I decided to try to see how it could be done. I installed Beyond Compare on my system, too, and looked through the examples.<br />
<br />
<br />
The first thing I discovered is that the main assumption of the Semanticmerge developers was that the application would be called via the SCM when merges are to be done, so passing lots of parameters would not be a problem. I realised that when I saw how one of the samples' start scripts invoked Semanticmerge:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"" -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\"" -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""</span></blockquote>
Can you see the problem? It seems Semanticmerge has no persistent knowledge of the user preferences with regards to the text-based merge tool and exports the issue to the SCM, at the price of overcomplicating the command line. I already mentioned this issue in my license request mail and added <a href="http://plasticscm.uservoice.com/forums/196398-mergebegins/suggestions/4371367-external-merge-tool-and-diff-tool-should-be-persis" target="_blank">the issue</a> and my fix suggestion in their voting system of features to be implemented.<br />
<br />
The upside was that by comparing the command line for kdiff3 invocations, the <a href="http://kdiff3.sourceforge.net/doc/documentation.html#id2488370" target="_blank">kdiff3 documentation</a> and, by comparison, the Beyond Compare SCM <a href="http://www.scootersoftware.com/support.php?zz=kb_vcs" target="_blank">integration information</a>, I could deduce what is the command line necessary for Semanticmerge to use Beyond Compare as an external merge and diff tool.<br />
<br />
The -edt, -emt and -e2mt options are the ones which specify how the external diff tool, external 3-way merge tool and external 2-way merge tool are to be called. Once I understood that, I split the problem into its obvious parts: each invocation had to be mapped from kdiff3 options to Beyond Compare options, adding the occasional bell and whistle, if possible.<br />
<br />
The parts to figure out, ordered by complexity, were:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;">
</span>
<br />
<ol><span style="font-family: "Courier New",Courier,monospace;">
<li><span style="font-size: x-small;">-edt="kdiff3 \"#sourcefile\" \"#destinationfile\"</span></li>
<span style="font-size: x-small;">
</span><span style="font-size: x-small;"><br />
</span>
<li><span style="font-size: x-small;">-e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""</span></li>
<span style="font-size: x-small;">
</span><span style="font-size: x-small;"><br />
</span>
<li><span style="font-size: x-small;">-emt="kdiff3
\"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1
\"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\"
-o \"#output\""</span></li>
</span></ol>
</blockquote>
<br />
SemanticMerge integrates with kdiff3 in diff mode via the <span style="font-family: "Courier New",Courier,monospace;">-edt</span> option. This was easy to map to Beyond Compare; I just replaced kdiff3 with bcompare:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">-edt="</span>bcompare \"#sourcefile\" \"#destinationfile\"</span><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">"</span></span></blockquote>
Integration for 2-way merges was also quite easy; the mapping to Beyond Compare was:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">-e2mt="bcompare \"#sourcefile\" \"#destinationfile\" <span style="font-size: x-small;">-</span>savetarget=\"#output\""</span></span> </blockquote>
For the 3-way merge I was a little confused, because the Beyond Compare documentation and options were inconsistent between Windows and Linux. On Windows, for some of the SCMs, the options that set the titles for the panes are '/title1', '/title2', '/title3' and '/title4' (way too descriptive for my taste /sarcasm), but for some others they are '/lefttitle', '/centertitle', '/righttitle', '/outputtitle', while on Linux the options are of the more explicit kind, but with a '-' instead of a '/'.<br />
<br />
The basic things were easy: ordering the parameters as 'source, destination, base, output' instead of kdiff3's 'base, source, destination, -o output'. Then I wanted to add the bells and whistles, since it really makes more sense for the developer to see something like 'Destination: [method] readOptions' instead of '/tmp/tmp4327687242.tmp', and because that's exactly what is necessary for SemanticMerge when merging methods, since on conflicts the various versions of the functions are placed in temporary files whose names don't mean anything.<br />
<br />
So, after some digging into the examples from Beyond Compare and kdiff3 documentation, I ended up with:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">-emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'"</span></span></blockquote>
<br />
Sadly, I wasn't able to identify the symbolic name for the output, so I added the hard-coded 'merge result'. If the Codice people would like to help with this information (or if it already exists), I would be more than willing to update the information and make the necessary changes.<br />
<br />
Then I added the bells and whistles for the -edt and -e2mt options, so I ended up with an even more complicated command line. The end result was this monstrosity:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="bcompare \"#sourcefile\" \"#destinationfile\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'" -emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'" -e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"</span></blockquote>
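One way to tame such a command line (a sketch of my own, not from the SemanticMerge documentation; $sm_dir, the tool names and the option strings are the same assumptions as in the example above) is to compose the long external-tool options in shell variables before invoking the tool:

```shell
#!/bin/sh
# Sketch: build the external-tool option strings in variables for
# readability, then pass them to semanticmergetool in one invocation.
titles="-lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
edt="bcompare \"#sourcefile\" \"#destinationfile\" $titles"
e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" $titles"
emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" $titles -centertitle='#basesymbolic' -outputtitle='merge result'"

semanticmergetool -s="$sm_dir/src.java" -b="$sm_dir/base.java" \
    -d="$sm_dir/dst.java" -r=/tmp/semanticmergetoolresult.java \
    -edt="$edt" -emt="$emt" -e2mt="$e2mt"
```

The composed strings are identical to the one-liner above; the script form just makes it obvious which option belongs to which tool mode.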
So when I 3-way merge a function I get something like this (sorry for the high resolution; lower resolutions don't do justice to the tools):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-PeFa5WysZW4/UiO011ZpFKI/AAAAAAAAAas/fvAob4aFFb4/s1600/semanticmerge_beyondcompare2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="212" src="http://3.bp.blogspot.com/-PeFa5WysZW4/UiO011ZpFKI/AAAAAAAAAas/fvAob4aFFb4/s400/semanticmerge_beyondcompare2.png" width="400" /></a></div>
<br />
<br />
I don't expect this post to remain relevant for too long, because after I sent my feedback to Codice, they were open to my suggestion to have persistent settings for the external tool integration, so, in the future, the command line could probably be as simple as:<br />
<blockquote class="tr_bq">
<span style="font-family: "Courier New",Courier,monospace;"><span style="font-family: "Courier New",Courier,monospace; font-size: x-small;">semanticmergetool
-s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java
-r=/tmp/semanticmergetoolresult.java</span></span></blockquote>
And the integration could be done via the GUI, while the command line can become a way to override the defaults. eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com0tag:blogger.com,1999:blog-1134723925265805427.post-24367345491609854672013-09-01T11:37:00.000+03:002013-09-02T01:44:05.642+03:00HOWTO add a shell script wrapper and preserve quoting for parametersIf you ever find yourself in the situation where you have to add a shell script wrapper over a command, but the parameters' quoting gets lost and you end up with the wrong parameters in the wrapped command/tool, you might want to read this post.<br />
<br />
On my system I have some command line tools which are Windows-only. In order to easily use the same build system on my Linux machine as on Windows, I added a wrapper script which invokes wine on the commands and made symlinks to the wrapper, named after the tools but without the '.exe' suffix.<br />
<br />
Of course, I wanted to properly pass the parameters through the wrapper to the tools so I wrote (note the bold text):<br />
<blockquote>
<span style="font-family: "Courier New",Courier,monospace;">#!/bin/sh<br />
wine $0.exe <b>"$@"</b></span></blockquote>
So the answer is: use $@, quoted exactly as in the code above, and the parameters will be passed through correctly.<br />
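To see the difference, here is a quick sketch (a hypothetical demo wrapper named wrap.sh, not one of the wine tools) that just prints each argument it receives on its own line:

```shell
#!/bin/sh
# wrap.sh - hypothetical demo: print each received argument on its own line.
# With "$@" every parameter survives intact, spaces included.
printf '%s\n' "$@"
```

Running <span style="font-family: "Courier New",Courier,monospace;">./wrap.sh "one arg" two</span> prints two lines, "one arg" and "two"; had the script used an unquoted $@, the same call would print three lines, because "one arg" would be re-split on the space.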
<br />
<hr />
<br />
<br />
Update: stbuehler suggested using exec to replace the shell process with wine, with this construct:
<br />
<blockquote>
Use:</blockquote>
<blockquote>
<span style="font-family: "Courier New",Courier,monospace;">#!/bin/sh</span><br />
<span style="font-family: "Courier New",Courier,monospace;">exec wine $0.exe "$@"</span></blockquote>
<br />
Thanks for the suggestion.
eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com2tag:blogger.com,1999:blog-1134723925265805427.post-39267371823966293522013-07-24T18:02:00.002+03:002013-07-24T18:02:51.225+03:00HOWTO: git - change branch without touching working copy (at all)Did you ever need, in a git repository, to change to another branch without altering the working copy AT ALL, and wonder how that's done?<br />
<br />
Usual use cases might be when you made some changes to the working copy thinking you were on another branch, or when you double-track in git a directory which is also tracked by another VCS (e.g. ClearCase).<br />
<br />
What you need, in fact, is to update the index and not touch the working copy. The command that does that is<br />
<br />
<blockquote class="tr_bq">
<b><span style="font-family: "Courier New",Courier,monospace;">git read-tree otherbranch</span></b></blockquote>
If you also need to commit the state of your working tree to otherbranch, you have to tell git to actually associate the current HEAD with the branch you just switched to:<br />
<blockquote class="tr_bq">
<b><span style="font-family: "Courier New",Courier,monospace;">git symbolic-ref HEAD refs/heads/otherbranch</span></b></blockquote>
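Put together, a minimal sketch of the whole sequence (assuming an existing repository that already has an otherbranch):

```shell
# Inside an existing git repository that already has 'otherbranch':
git read-tree otherbranch                     # update only the index
git symbolic-ref HEAD refs/heads/otherbranch  # point HEAD at otherbranch
# The files in the working copy are exactly as they were before;
# 'git status' now shows them as changes against otherbranch,
# ready to be committed there.
```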
I use this approach at my workplace* to develop/experiment with possible code improvements on my machine before considering a merge into the official code.<br />
<br />
<span style="font-size: x-small;">* The preferred VCS is (Base) ClearCase, and I keep a git repository over the relevant part of the project in the ClearCase Dynamic View. For synchronisation, the files in the working copy are updated by ClearCase, and from time to time I have to resync my git branch (clearcaseint) to follow the latest official code, so that I can pull the clearcaseint branch into the git repository on my local disk and merge it with my experimental changes in my git feature branches. </span><br />
<br />
<span style="font-size: x-small;">If people are curious about how I work with ClearCase and git, I can expand on this in another post. </span>eddyphttp://www.blogger.com/profile/13986125106284142716noreply@blogger.com1