<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Marcin Juszkiewicz - ubuntu</title><link href="https://marcin.juszkiewicz.com.pl/" rel="alternate"/><link href="https://marcin.juszkiewicz.com.pl/tag/ubuntu/feed/" rel="self"/><id>https://marcin.juszkiewicz.com.pl/</id><updated>2023-04-17T12:36:00+02:00</updated><entry><title>“Ten” years at Linaro</title><link href="https://marcin.juszkiewicz.com.pl/2023/04/17/ten-years-at-linaro/" rel="alternate"/><published>2023-04-17T12:36:00+02:00</published><updated>2023-04-17T12:36:00+02:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2023-04-17:/2023/04/17/ten-years-at-linaro/</id><summary type="html">Three + seven = ten.&amp;nbsp;Right?</summary><content type="html">&lt;p&gt;Some time ago was a day when I reached &amp;#8220;ten&amp;#8221; years at Linaro. Why &amp;#8220;&amp;#8221;? Because it
was 3 + 7 rather than 10 years straight. The first three years as a Canonical
contractor, now seven years as a Red Hat employee assigned as a Member&amp;nbsp;Engineer.&lt;/p&gt;
&lt;h3&gt;My first three years at&amp;nbsp;Linaro&lt;/h3&gt;
&lt;h4&gt;NewCo or NewCore? Or Ubuntu on &lt;span class="caps"&gt;ARM&lt;/span&gt;?&lt;/h4&gt;
&lt;p&gt;In 2010 I signed a contract with Canonical as a &amp;#8220;Foundation &lt;span class="caps"&gt;OS&lt;/span&gt; Engineer&amp;#8221;. Once there
I signed another paper which moved me to the NewCo project (also called NewCore, but
NewCo is the name on the paper I&amp;nbsp;signed).&lt;/p&gt;
&lt;p&gt;On 30th April 2010 I got a &amp;#8220;Welcome to Linaro&amp;#8221;&amp;nbsp;e-mail.&lt;/p&gt;
&lt;!--MORE--&gt;

&lt;p&gt;Then &lt;a href="/2010/05/14/uds-continues/"&gt;&lt;span class="caps"&gt;UDS&lt;/span&gt;-M happened&lt;/a&gt; where we were hiding under
the &amp;#8220;Ubuntu on Arm&amp;#8221; name (despite the fact that Ubuntu already had such a&amp;nbsp;team).&lt;/p&gt;
&lt;h4&gt;Ah, it is Linaro now&amp;nbsp;:D&lt;/h4&gt;
&lt;p&gt;On 3rd June 2010 Linaro was officially announced.
No more hiding, we went public with the&amp;nbsp;name.&lt;/p&gt;
&lt;h4&gt;My&amp;nbsp;team&lt;/h4&gt;
&lt;p&gt;I became part of the Developer Platform team and worked mostly on toolchain
packages for Ubuntu and later Debian. There were funny moments at
sprints/summits/connects when I joked that I had two managers to listen to (one
from the Toolchain Working Group and one from Developer&amp;nbsp;Platform).&lt;/p&gt;
&lt;p&gt;There were Debian/Ubuntu developers and people from other&amp;nbsp;environments.&lt;/p&gt;
&lt;p&gt;At some point it was renamed to &amp;#8220;Base and Baselines&amp;#8221;. Or &amp;#8220;Bed and Breakfast&amp;#8221;, as
most of the time &amp;#8220;&lt;span class="caps"&gt;BB&lt;/span&gt;&amp;#8221; was used instead of the full&amp;nbsp;name.&lt;/p&gt;
&lt;h4&gt;AArch64 bring&amp;nbsp;up&lt;/h4&gt;
&lt;p&gt;In 2012 I dusted off my OpenEmbedded knowledge and started working on the
AArch64 architecture bring up. A lot of not-yet-public patches were in use. The fun
of seeing a &amp;#8220;Hello world!&amp;#8221; message in an emulator, printed by an &lt;span class="caps"&gt;OS&lt;/span&gt; image I built from
scratch, was something I hope to never&amp;nbsp;forget.&lt;/p&gt;
&lt;p&gt;Each time I choose a mug for my coffee I see the Pac-Man one I bought during
the Linaro/Arm AArch64 sprint we had in October&amp;nbsp;2012.&lt;/p&gt;
&lt;figure id="__yafg-figure-1"&gt;
&lt;img alt="My AArch64 mug" loading="lazy" src="/files/2023/04/IMG_20230417_140623-700x.jpg" title="My AArch64 mug"&gt;
&lt;figcaption&gt;My AArch64 mug&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h4&gt;The&amp;nbsp;End&lt;/h4&gt;
&lt;p&gt;There was a lot of noise in 2011 about the deal between Canonical and Linaro. Several
engineers at Linaro came from Canonical and there was some messy situation
related to&amp;nbsp;money.&lt;/p&gt;
&lt;p&gt;It ended with people leaving every quarter. Some moved back to Canonical,
some changed jobs and got hired directly by Linaro. There were also people who
moved to Linaro member companies and stayed in their Linaro positions. Some
people left both companies and went to other&amp;nbsp;jobs.&lt;/p&gt;
&lt;p&gt;I was supposed to &lt;a href="/2012/10/26/so-long-and-thanks-for-all-the-fish/"&gt;leave Linaro in 2012&lt;/a&gt;
but it was postponed by half a year. So &lt;a href="/2013/05/31/my-time-at-linaro-is-over/"&gt;I left&lt;/a&gt; after about 37&amp;nbsp;months.&lt;/p&gt;
&lt;h3&gt;Second&amp;nbsp;round&lt;/h3&gt;
&lt;p&gt;Time passed and I was working at Red Hat on making AArch64 a first-class citizen in
the &lt;span class="caps"&gt;RHEL&lt;/span&gt; and Fedora Linux distributions. And one day my manager asked &lt;a href="/2016/01/25/i-may-go-back-to-linaro/"&gt;whether I wanted to
work at Linaro again&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I did some research, discussed it with friends at Linaro, and on &lt;a href="/2016/04/08/back-linaro-org/"&gt;8th April 2016 I
was back&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;My team &lt;span class="caps"&gt;II&lt;/span&gt;&lt;/h4&gt;
&lt;p&gt;This time I became part of &lt;span class="caps"&gt;LEG&lt;/span&gt;: Linaro Enterprise Group. Servers, data centres
etc. There were several teams to choose from and I ended up in the &lt;span class="caps"&gt;SDI&lt;/span&gt; (Software Defined
Infrastructure) one. We were behind the &lt;span class="caps"&gt;LDC&lt;/span&gt; (Linaro Developer Cloud)&amp;nbsp;project.&lt;/p&gt;
&lt;p&gt;At some point &lt;span class="caps"&gt;LEG&lt;/span&gt; became &lt;span class="caps"&gt;LDCG&lt;/span&gt; (Linaro Datacenter and Cloud Group). Some years
later &lt;span class="caps"&gt;LDCG&lt;/span&gt; lost the &amp;#8220;and Cloud&amp;#8221; part, as &lt;span class="caps"&gt;AWS&lt;/span&gt; and other cloud providers started
offering AArch64 systems so we did not have to deal with it any&amp;nbsp;more.&lt;/p&gt;
&lt;h4&gt;OpenStack all&amp;nbsp;over&lt;/h4&gt;
&lt;p&gt;The first version of &lt;span class="caps"&gt;LDC&lt;/span&gt; was Debian based with OpenStack &amp;#8216;liberty&amp;#8217;. Then came weird
months when we had to reinvent the deployment several times. It was a mess most of the&amp;nbsp;time.&lt;/p&gt;
&lt;p&gt;So we abandoned our own solutions and went with OpenStack Kolla. I quickly became
one of the core developers there. &lt;span class="caps"&gt;LDC&lt;/span&gt; moved to being container&amp;nbsp;based.&lt;/p&gt;
&lt;p&gt;In 2022 we stopped working on OpenStack. &lt;span class="caps"&gt;LDC&lt;/span&gt; is now used only for internal&amp;nbsp;projects.&lt;/p&gt;
&lt;h4&gt;Building&amp;nbsp;stuff&lt;/h4&gt;
&lt;p&gt;With my &amp;#8220;give me software and I will build it&amp;#8221; mantra I also ended up as a kind of &lt;span class="caps"&gt;CI&lt;/span&gt;
jobs developer for the &lt;span class="caps"&gt;LEG&lt;/span&gt; teams. Apache Bigtop, Apache Arrow, TensorFlow, &lt;span class="caps"&gt;EDK2&lt;/span&gt; and
several other projects. Some used containers, some ran shell scripts,
some&amp;nbsp;Ansible.&lt;/p&gt;
&lt;h4&gt;&lt;span class="caps"&gt;SBSA&lt;/span&gt; Reference&amp;nbsp;Platform&lt;/h4&gt;
&lt;p&gt;I was involved in some work around getting &lt;span class="caps"&gt;QEMU&lt;/span&gt; to emulate the &lt;span class="caps"&gt;SBSA&lt;/span&gt; Reference
Platform (the &amp;#8220;sbsa-ref&amp;#8221; machine). I created some &lt;span class="caps"&gt;CI&lt;/span&gt; jobs to run test suites, build
firmware images etc. After running the test suites I created a bunch of issues in
Jira so we could track how things&amp;nbsp;go.&lt;/p&gt;
&lt;p&gt;In recent months I have become more involved. I am testing patches, running them
through both the &lt;span class="caps"&gt;SBSA&lt;/span&gt; and &lt;span class="caps"&gt;BSA&lt;/span&gt; Arm Compliance Suites and reporting&amp;nbsp;results.&lt;/p&gt;
&lt;p&gt;I have my own &lt;a href="https://github.com/hrw/sbsa-ref-status"&gt;set of scripts&lt;/a&gt; to handle
logs, making it easier to track where things stand&amp;nbsp;now.&lt;/p&gt;
&lt;h4&gt;Summary&lt;/h4&gt;
&lt;p&gt;At the last few Linaro Connect events there was always a moment when they announced
people who had worked at Linaro for 5 (or, later, 10) years. I have to admit &amp;#8212; I felt
envious several&amp;nbsp;times.&lt;/p&gt;
&lt;p&gt;And when I hit 5 years straight at Linaro we had the &lt;span class="caps"&gt;COVID&lt;/span&gt;-19 pandemic so there was
no&amp;nbsp;event.&lt;/p&gt;</content><category term="misc"/><category term="linaro"/><category term="development"/><category term="virtualization"/><category term="fedora"/><category term="ubuntu"/><category term="debian"/><category term="aarch64"/><category term="arm"/></entry><entry><title>EBBR on EspressoBin</title><link href="https://marcin.juszkiewicz.com.pl/2021/02/15/ebbr-on-espressobin/" rel="alternate"/><published>2021-02-15T21:53:00+01:00</published><updated>2021-02-15T21:53:00+01:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2021-02-15:/2021/02/15/ebbr-on-espressobin/</id><summary type="html">Takes some time but EspressoBin can be &lt;span class="caps"&gt;EBBR&lt;/span&gt;&amp;nbsp;too.</summary><content type="html">&lt;blockquote&gt;
&lt;p&gt;&lt;span class="caps"&gt;SBBR&lt;/span&gt; or &lt;span class="caps"&gt;GTFO&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Me.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Yeah, right. But the world is not so nice and there are many cheap &lt;span class="caps"&gt;SBC&lt;/span&gt;s on the market
which are not &lt;span class="caps"&gt;SBBR&lt;/span&gt; compliant and probably never will be. And with a small amount of
work they can do &lt;span class="caps"&gt;EBBR&lt;/span&gt; (&lt;a href="https://github.com/ARM-software/ebbr/"&gt;Embedded Base Boot Requirements&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;&lt;span class="caps"&gt;NOTE&lt;/span&gt;: I have similar post about &lt;a href="/2020/06/17/ebbr-on-rockpro64/"&gt;&lt;span class="caps"&gt;EBBR&lt;/span&gt; on RockPro64 board&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;&lt;span class="caps"&gt;WTH&lt;/span&gt; is &lt;span class="caps"&gt;EBBR&lt;/span&gt;?&lt;/h3&gt;
&lt;p&gt;It is a specification for devices which are not servers and do not pretend to be
such. U-Boot is all they have, and with a properly configured one they get a
subset of &lt;span class="caps"&gt;EFI&lt;/span&gt; Boot/Runtime Services to load a distribution bootloader (usually
grub-efi), like it is done on&amp;nbsp;servers.&lt;/p&gt;
&lt;p&gt;&lt;span class="caps"&gt;ACPI&lt;/span&gt; is not required but may be present. DeviceTree is perfectly fine. You may
provide both or one of&amp;nbsp;them.&lt;/p&gt;
&lt;p&gt;Firmware can be stored wherever you wish. Even &lt;span class="caps"&gt;MBR&lt;/span&gt; partitioning is available if
really&amp;nbsp;needed.&lt;/p&gt;
&lt;h3&gt;A few words about the board&amp;nbsp;itself&lt;/h3&gt;
&lt;p&gt;EspressoBin has &lt;span class="caps"&gt;4MB&lt;/span&gt; of &lt;span class="caps"&gt;SPI&lt;/span&gt; flash on board. Less than on RockPro64 but still
enough for storing firmware (U-Boot takes less than &lt;span class="caps"&gt;1MB&lt;/span&gt;).&lt;/p&gt;
&lt;p&gt;This &lt;span class="caps"&gt;SBC&lt;/span&gt; is nothing new &amp;#8212; the first version was released in 2016. There were
several revisions with different memory size, number and type of &lt;span class="caps"&gt;RAM&lt;/span&gt; chips
(&lt;span class="caps"&gt;DDR3&lt;/span&gt; or &lt;span class="caps"&gt;DDR4&lt;/span&gt;), &lt;span class="caps"&gt;CPU&lt;/span&gt; speed and some more&amp;nbsp;changes.&lt;/p&gt;
&lt;p&gt;I got an EspressoBin revision 5 with &lt;span class="caps"&gt;1GB&lt;/span&gt; of &lt;span class="caps"&gt;DDR3&lt;/span&gt; &lt;span class="caps"&gt;RAM&lt;/span&gt; in 2 chips. And a 1GHz&amp;nbsp;processor.&lt;/p&gt;
&lt;p&gt;It may sound silly that I repeat that information but it matters when you start
building firmware for that&amp;nbsp;board.&lt;/p&gt;
&lt;h3&gt;So let us build fresh&amp;nbsp;firmware&lt;/h3&gt;
&lt;p&gt;This is Marvell so abandon all hope for&amp;nbsp;sanity.&lt;/p&gt;
&lt;p&gt;Thanks to the Arm Trusted Firmware authors there is &lt;a href="https://trustedfirmware-a.readthedocs.io/en/latest/plat/marvell/armada/build.html"&gt;good documentation on how to
build firmware for EspressoBin&lt;/a&gt;
which guides you step by step and explains all the arguments you need. For me it was&amp;nbsp;several &lt;code&gt;git clone&lt;/code&gt; calls and then&amp;nbsp;two &lt;code&gt;make&lt;/code&gt; calls:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;make -C u-boot CROSS_COMPILE=aarch64-linux-gnu- \
mvebu_espressobin-88f3720_defconfig u-boot.bin -j12

make -C trusted-firmware-a CROSS_COMPILE=aarch64-linux-gnu- \
CROSS_CM3=arm-none-linux-gnueabihf- PLAT=a3700 \
CLOCKSPRESET=CPU_1000_DDR_800 DDR_TOPOLOGY=2 \
MV_DDR_PATH=$PWD/marvell/mv-ddr-marvell/ \
WTP=$PWD/marvell/A3700-utils-marvell/ \
CRYPTOPP_PATH=$PWD/marvell/cryptopp/ \
BL33=$PWD/u-boot/u-boot.bin \
mrvl_flash -j12
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And I had to install a cross toolchain for 32-bit Arm because the one I had was
for building kernels/bootloaders&amp;nbsp;only.&lt;/p&gt;
&lt;h3&gt;Is your U-Boot friendly or&amp;nbsp;not?&lt;/h3&gt;
&lt;p&gt;First you need to check which version of U-Boot and which hardware you have. Then
check whether it recognizes the &lt;span class="caps"&gt;SPI&lt;/span&gt; flash or&amp;nbsp;not:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Marvell&amp;gt;&amp;gt; sf probe
SF: unrecognized JEDEC id bytes: 9d, 70, 16
Failed to initialize SPI flash at 0:0 (error -2)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I had bad luck as my board used an &lt;span class="caps"&gt;SPI&lt;/span&gt; chip not recognized by any official U-Boot&amp;nbsp;build&amp;#8230;&lt;/p&gt;
&lt;h3&gt;Armbian to the&amp;nbsp;rescue&lt;/h3&gt;
&lt;p&gt;I asked in a few places whether anyone had some experience with this board. One of
them was the #debian-arm &lt;span class="caps"&gt;IRC&lt;/span&gt; channel, where I got a hint from Xogium that Armbian may
have U-Boot&amp;nbsp;builds.&lt;/p&gt;
&lt;p&gt;And they have a &lt;a href="https://www.armbian.com/espressobin/"&gt;whole page about the EspressoBin&lt;/a&gt;.
With information on how to choose firmware files&amp;nbsp;etc.&lt;/p&gt;
&lt;p&gt;So I downloaded an archive with the proper files for &lt;a href="http://wiki.espressobin.net/tiki-index.php?page=Bootloader+recovery+via+UART"&gt;&lt;span class="caps"&gt;UART&lt;/span&gt; recovery&lt;/a&gt;.
The important thing to remember is that once you move the jumpers and load all
firmware files over serial, they are not written to &lt;span class="caps"&gt;SPI&lt;/span&gt; flash, so a reset of the board
means you start&amp;nbsp;over.&lt;/p&gt;
&lt;p&gt;A quick check whether the &lt;span class="caps"&gt;SPI&lt;/span&gt; flash is&amp;nbsp;detected:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Marvell&amp;gt;&amp;gt; sf probe
SF: Detected is25wp032 with page size 256 Bytes, erase size 4 KiB, total 4 MiB
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Yeah! Now I can start &lt;span class="caps"&gt;USB&lt;/span&gt; and flash my own firmware&amp;nbsp;build:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Marvell&amp;gt;&amp;gt; bubt flash-image.bin spi usb
Burning U-Boot image &amp;quot;flash-image.bin&amp;quot; from &amp;quot;usb&amp;quot; to &amp;quot;spi&amp;quot;
Bus usb@58000: Register 2000104 NbrPorts 2
Starting the controller
USB XHCI 1.00
Bus usb@5e000: USB EHCI 1.00
scanning bus usb@58000 for devices... 1 USB Device(s) found
scanning bus usb@5e000 for devices... 2 USB Device(s) found
Image checksum...OK!
SF: Detected is25wp032 with page size 256 Bytes, erase size 4 KiB, total 4 MiB
Erasing 991232 bytes (242 blocks) at offset 0 ...Done!
Writing 990944 bytes from 0x6000000 to offset 0 ...Done!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A quick reset and the board boots to fresh, mainline&amp;nbsp;U-Boot:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;TIM-1.0
WTMI-devel-18.12.1-1a13f2f
WTMI: system early-init
SVC REV: 5, CPU VDD voltage: 1.108V
NOTICE:  Booting Trusted Firmware
NOTICE:  BL1: v2.4(release):v2.4-345-g04c122310 (Marvell-devel-18.12.2)
NOTICE:  BL1: Built : 17:11:19, Feb 15 2021
NOTICE:  BL1: Booting BL2
NOTICE:  BL2: v2.4(release):v2.4-345-g04c122310 (Marvell-devel-18.12.2)
NOTICE:  BL2: Built : 17:11:20, Feb 15 2021
NOTICE:  BL1: Booting BL31
NOTICE:  BL31: v2.4(release):v2.4-345-g04c122310 (Marvell-devel-18.12.2)
NOTICE:  BL31: Built : 18:07:02, Feb 15 2021


U-Boot 2021.01 (Feb 15 2021 - 19:25:41 +0100)

DRAM:  1 GiB
Comphy-0: USB3_HOST0    5 Gbps    
Comphy-1: PEX0          2.5 Gbps  
Comphy-2: SATA0         5 Gbps    
SATA link 0 timeout.
AHCI 0001.0300 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
flags: ncq led only pmp fbss pio slum part sxs 
PCIE-0: Link down
MMC:   sdhci@d0000: 0, sdhci@d8000: 1
Loading Environment from SPIFlash... SF: Detected is25wp032 with page size 256 Bytes, erase size 4 KiB, total 4 MiB
OK
Model: Globalscale Marvell ESPRESSOBin Board
Card did not respond to voltage select! : -110
Net:   eth0: neta@30000
Hit any key to stop autoboot:  0 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Final&amp;nbsp;steps&lt;/h3&gt;
&lt;p&gt;&lt;span class="caps"&gt;OK&lt;/span&gt;, so the &lt;span class="caps"&gt;SBC&lt;/span&gt; has fresh, mainline firmware. Nice. But some stuff still needs to be&amp;nbsp;done.&lt;/p&gt;
&lt;p&gt;First, note the &lt;span class="caps"&gt;MAC&lt;/span&gt; addresses of the Ethernet ports.&amp;nbsp;Use the &lt;code&gt;printenv&lt;/code&gt; command to check
the stored environment and note a few&amp;nbsp;variables:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;eth1addr=f0:ad:4b:aa:97:01
eth2addr=f0:ad:4b:aa:97:02
eth3addr=f0:ad:4b:aa:97:03
ethaddr=f0:ad:4e:72:10:ef
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Of course you may also skip that step and rely on random ones or choose your own
(I had a router with C0:&lt;span class="caps"&gt;FF&lt;/span&gt;:&lt;span class="caps"&gt;EE&lt;/span&gt;:C0:&lt;span class="caps"&gt;FF&lt;/span&gt;:&lt;span class="caps"&gt;EE&lt;/span&gt; in the&amp;nbsp;past).&lt;/p&gt;
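If you go the &#8220;choose your own&#8221; route, a locally administered address is the safe pick: set the local bit and clear the multicast bit in the first octet, and it cannot collide with vendor-assigned addresses. A small shell sketch (not part of the original workflow; the `gen_mac` helper is a hypothetical name):

```shell
# Sketch: print a random locally administered unicast MAC address.
# First octet 02 = locally-administered bit set, multicast bit clear.
gen_mac() {
    # five random octets from /dev/urandom, formatted as hex pairs
    od -An -N5 -tu1 /dev/urandom | \
        xargs printf '02:%02x:%02x:%02x:%02x:%02x\n'
}
gen_mac
```

The printed address can then be used directly with `setenv ethaddr` as shown in the next step.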
&lt;p&gt;Then reset the environment to the default values stored in the U-Boot binary and set those
&lt;span class="caps"&gt;MAC&lt;/span&gt; addresses by&amp;nbsp;hand:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;=&amp;gt; env default -a -f
=&amp;gt; setenv eth1addr f0:ad:4b:aa:97:01
=&amp;gt; setenv eth2addr f0:ad:4b:aa:97:02
=&amp;gt; setenv eth3addr f0:ad:4b:aa:97:03
=&amp;gt; setenv ethaddr f0:ad:4e:72:10:ef
=&amp;gt; saveenv
Saving Environment to SPIFlash... Erasing SPI flash...Writing to SPI flash...done
OK
=&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;What does &lt;span class="caps"&gt;EBBR&lt;/span&gt;&amp;nbsp;bring?&lt;/h3&gt;
&lt;p&gt;Now your board is ready to boot Debian, Fedora and several other distributions&amp;#8217;
install media with two&amp;nbsp;commands:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;=&amp;gt; set boot_targets usb0
=&amp;gt; boot
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It will find the &lt;span class="caps"&gt;EFI&lt;/span&gt; bootloader and start it. Just like on any other boring
&lt;span class="caps"&gt;SBBR&lt;/span&gt;/&lt;span class="caps"&gt;EBBR&lt;/span&gt;&amp;nbsp;system.&lt;/p&gt;
&lt;p&gt;Distributions with an old-style &amp;#8216;boot.scr&amp;#8217; script (like OpenWrt for example) will
also work so no functionality&amp;nbsp;loss.&lt;/p&gt;</content><category term="misc"/><category term="aarch64"/><category term="firmware"/><category term="sbc"/><category term="debian"/><category term="fedora"/><category term="ubuntu"/></entry><entry><title>Standards are boring</title><link href="https://marcin.juszkiewicz.com.pl/2021/01/20/standards-are-boring/" rel="alternate"/><published>2021-01-20T17:33:00+01:00</published><updated>2021-01-20T17:33:00+01:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2021-01-20:/2021/01/20/standards-are-boring/</id><summary type="html">Your Arm board can be compliant&amp;nbsp;too.</summary><content type="html">&lt;blockquote&gt;
&lt;p&gt;We have made Arm servers&amp;nbsp;boring.&lt;/p&gt;
&lt;p&gt;Jon&amp;nbsp;Masters&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Standards are boring. Satisfied users may not want to migrate to other boards
the market tries to sell&amp;nbsp;them.&lt;/p&gt;
&lt;p&gt;So the Arm market is flooded with piles of single board computers (&lt;span class="caps"&gt;SBC&lt;/span&gt;). Often they
are compliant with standards only when it comes to&amp;nbsp;connectors.&lt;/p&gt;
&lt;h3&gt;But our hardware is not&amp;nbsp;standard&lt;/h3&gt;
&lt;p&gt;It is not a matter of &amp;#8216;let us produce &lt;span class="caps"&gt;UEFI&lt;/span&gt;-ready hardware&amp;#8217; but rather &amp;#8216;let us write
&lt;span class="caps"&gt;EDK2&lt;/span&gt; firmware for boards we already&amp;nbsp;have&amp;#8217;.&lt;/p&gt;
&lt;p&gt;Look at the Raspberry Pi then. It is shitty hardware but it got popular. And a group of
people wrote &lt;span class="caps"&gt;UEFI&lt;/span&gt; firmware for it. Probably even without vendor&amp;nbsp;support.&lt;/p&gt;
&lt;h3&gt;Start with &lt;span class="caps"&gt;EBBR&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;Each new board should be &lt;span class="caps"&gt;EBBR&lt;/span&gt; compliant from the start. Which is easy &amp;#8212; build &amp;#8216;whatever
hardware&amp;#8217; and put a properly configured U-Boot on it. Upstreaming support for your
small device should not be hard, as you often base it on some already existing&amp;nbsp;hardware.&lt;/p&gt;
&lt;p&gt;Add &lt;span class="caps"&gt;16MB&lt;/span&gt; of &lt;span class="caps"&gt;SPI&lt;/span&gt; flash to store the firmware. Your users will be able to boot an &lt;span class="caps"&gt;ISO&lt;/span&gt;
without wondering where on the boot media they need to write&amp;nbsp;bootloaders.&lt;/p&gt;
&lt;p&gt;Then work on &lt;span class="caps"&gt;EDK2&lt;/span&gt; for the board. Do &lt;span class="caps"&gt;SMBIOS&lt;/span&gt; (easy) and keep your existing Device
Tree. You are still &lt;span class="caps"&gt;EBBR&lt;/span&gt;. Remember to upstream your work &amp;#8212; some people
will complain, some will improve your&amp;nbsp;code.&lt;/p&gt;
&lt;h3&gt;Add &lt;span class="caps"&gt;ACPI&lt;/span&gt;, go &lt;span class="caps"&gt;SBBR&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;The next step is moving from Device Tree to &lt;span class="caps"&gt;ACPI&lt;/span&gt;. It may take some time to understand
why there are so many tables and what &lt;span class="caps"&gt;ASL&lt;/span&gt; is. But as several other systems show,
it can be&amp;nbsp;done.&lt;/p&gt;
&lt;p&gt;And this brings you to &lt;span class="caps"&gt;SBBR&lt;/span&gt; compliance. Or SystemReady &lt;span class="caps"&gt;ES&lt;/span&gt; if you like&amp;nbsp;marketing.&lt;/p&gt;
&lt;h3&gt;&lt;span class="caps"&gt;SBSA&lt;/span&gt; for future&amp;nbsp;design&lt;/h3&gt;
&lt;p&gt;Doing a new SoC tends to be &amp;#8220;let us take the previous one and improve it a bit&amp;#8221;. So this
time change the routine a bit and make your next SoC compliant with &lt;span class="caps"&gt;SBSA&lt;/span&gt; level 3. All
the needed components are probably already included in your Arm&amp;nbsp;license.&lt;/p&gt;
&lt;p&gt;Grab the &lt;span class="caps"&gt;EDK2&lt;/span&gt; support you did for the previous board. Look at the &lt;span class="caps"&gt;QEMU&lt;/span&gt; &lt;span class="caps"&gt;SBSA&lt;/span&gt; Reference
Platform support, look at other &lt;span class="caps"&gt;SBSA&lt;/span&gt; compliant hardware. Copy and reuse their
drivers, their&amp;nbsp;code.&lt;/p&gt;
&lt;h3&gt;Was it worth&amp;nbsp;it?&lt;/h3&gt;
&lt;p&gt;In the end you will have &lt;span class="caps"&gt;SBSA&lt;/span&gt; compliant hardware running &lt;span class="caps"&gt;SBBR&lt;/span&gt; compliant&amp;nbsp;firmware.&lt;/p&gt;
&lt;p&gt;Congratulations, your board is SystemReady &lt;span class="caps"&gt;SR&lt;/span&gt; compliant. Your marketing team may
write that you are on the same list as Ampere with their Altra&amp;nbsp;server.&lt;/p&gt;
&lt;p&gt;Users buy your hardware and can install whatever &lt;span class="caps"&gt;BSD&lt;/span&gt; or Linux distribution they
want. Some will experiment with Microsoft Windows. Others may work on porting
Haiku or another exotic operating&amp;nbsp;system.&lt;/p&gt;
&lt;p&gt;But none of them will have to think &amp;#8220;how do I get this shit running&amp;#8221;. And they
will tell their friends that your device is as boring as it should be when it comes to
running &lt;span class="caps"&gt;OS&lt;/span&gt; on it == more&amp;nbsp;sales.&lt;/p&gt;</content><category term="misc"/><category term="aarch64"/><category term="debian"/><category term="fedora"/><category term="ubuntu"/><category term="development"/></entry><entry><title>So long, and thanks for all the fun</title><link href="https://marcin.juszkiewicz.com.pl/2020/12/10/so-long-and-thanks-for-all-the-fun/" rel="alternate"/><published>2020-12-10T12:33:00+01:00</published><updated>2020-12-10T12:33:00+01:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2020-12-10:/2020/12/10/so-long-and-thanks-for-all-the-fun/</id><summary type="html">My Mustang is no&amp;nbsp;more.</summary><content type="html">&lt;p&gt;During last days I tried to get my Applied Micro Mustang running again. And it
looks like it is no more. Like that Norwegian Blue&amp;nbsp;parrot.&lt;/p&gt;
&lt;h3&gt;Tried some&amp;nbsp;things&lt;/h3&gt;
&lt;p&gt;By default the Mustang outputs information on the serial console. Mine does not.
I checked serial cables and serial-to-&lt;span class="caps"&gt;USB&lt;/span&gt; dongles.&amp;nbsp;Nothing.&lt;/p&gt;
&lt;p&gt;I tried to &lt;a href="/2015/11/18/unbricking-apm-mustang/"&gt;load firmware from an &lt;span class="caps"&gt;SD&lt;/span&gt; card&lt;/a&gt;
instead of the on-board flash.&amp;nbsp;Nope.&lt;/p&gt;
&lt;p&gt;Time to put it to&amp;nbsp;rest.&lt;/p&gt;
&lt;h3&gt;How it&amp;nbsp;looked&lt;/h3&gt;
&lt;p&gt;When &lt;a href="/2014/06/10/aarch64-is-in-the-house/"&gt;I got it in June 2014&lt;/a&gt; it came in a 1U
server case. With several loud fans, including one on the &lt;span class="caps"&gt;CPU&lt;/span&gt; heatsink. So I took
the board out and put it into a &lt;span class="caps"&gt;PC&lt;/span&gt; tower case. I also replaced the 50mm processor fan with an
80mm&amp;nbsp;one:&lt;/p&gt;
&lt;figure id="__yafg-figure-1"&gt;
&lt;img alt="Top view of Mustang" loading="lazy" src="/files/2020/12/IMG_20201210_220604-700x.jpg" title="Top view of Mustang"&gt;
&lt;figcaption&gt;Top view of Mustang&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure id="__yafg-figure-2"&gt;
&lt;img alt="Side view" loading="lazy" src="/files/2020/12/IMG_20201210_220612-700x.jpg" title="Side view"&gt;
&lt;figcaption&gt;Side view&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3&gt;All that&amp;nbsp;development&amp;#8230;&lt;/h3&gt;
&lt;p&gt;I did several things on&amp;nbsp;it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="/2014/10/30/usb-on-mustang/"&gt;tested &lt;span class="caps"&gt;USB&lt;/span&gt;&amp;nbsp;support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;tested &lt;span class="caps"&gt;PCI&lt;/span&gt; Express&amp;nbsp;support&lt;/li&gt;
&lt;li&gt;&lt;a href="/2015/01/14/started-x11-on-aarch64-hardware-this-time/"&gt;run X11 on&amp;nbsp;hardware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="/2015/09/11/how-to-get-xserver-running-out-of-box-on-aarch64/"&gt;debugged why X11 does not start out of the&amp;nbsp;box&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="/2015/11/18/unbricking-apm-mustang/"&gt;wrote new instruction how to unbrick&amp;nbsp;it&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ran 64 and 32-bit virtual&amp;nbsp;machines&lt;/li&gt;
&lt;li&gt;used it as a desktop (&lt;a href="/2015/09/21/aarch64-desktop-day-one/"&gt;day 1&lt;/a&gt;, &lt;a href="/2015/09/22/aarch64-desktop-day-two/"&gt;day 2&lt;/a&gt;, &lt;a href="/2015/09/25/aarch64-desktop-last-day/"&gt;last day&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;tested&amp;nbsp;OpenStack&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some of them were done for the first time on&amp;nbsp;AArch64.&lt;/p&gt;
&lt;p&gt;The board gave me a lot of fun. I built countless software packages on it for CentOS,
Debian, Fedora and &lt;span class="caps"&gt;RHEL&lt;/span&gt;. I tested the installers of each of&amp;nbsp;them.&lt;/p&gt;
&lt;p&gt;I was running OpenStack on it since &amp;#8216;liberty&amp;#8217; (especially after moving from &lt;span class="caps"&gt;16GB&lt;/span&gt;
to &lt;span class="caps"&gt;32GB&lt;/span&gt; of&amp;nbsp;&lt;span class="caps"&gt;RAM&lt;/span&gt;).&lt;/p&gt;
&lt;h3&gt;What&amp;nbsp;next?&lt;/h3&gt;
&lt;p&gt;I am going to frame it. With a few other devices which helped me during
&lt;a href="/2020/02/11/my-whole-career-is-built-on-foss/"&gt;my career&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Replacement?&lt;/h3&gt;
&lt;p&gt;It would be nice to replace the Mustang with some newer AArch64 hardware. From what
is available on the mass market, the SolidRun HoneyComb looks closest. But I will wait
for something with Armv8.4 cores to be able to play with nested&amp;nbsp;virtualization.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;span class="caps"&gt;UPDATE&lt;/span&gt;:&lt;/strong&gt; I got it working&amp;nbsp;again.&lt;/p&gt;</content><category term="misc"/><category term="aarch64"/><category term="debian"/><category term="fedora"/><category term="ubuntu"/><category term="linaro"/><category term="red hat"/><category term="mustang"/></entry><entry><title>8 years of my work on AArch64</title><link href="https://marcin.juszkiewicz.com.pl/2020/09/23/8-years-of-my-work-on-aarch64/" rel="alternate"/><published>2020-09-23T17:33:00+02:00</published><updated>2020-09-23T17:33:00+02:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2020-09-23:/2020/09/23/8-years-of-my-work-on-aarch64/</id><summary type="html">I spent half of my Arm life on&amp;nbsp;AArch64.</summary><content type="html">&lt;p&gt;Back in 2012 AArch64 was something new, unknown yet. There was no toolchain
support (so no gcc, binutils or glibc). And I got assigned to get some stuff
running around&amp;nbsp;it.&lt;/p&gt;
&lt;h3&gt;OpenEmbedded&lt;/h3&gt;
&lt;p&gt;As there was no hardware, cross compilation was the only way. Which meant
OpenEmbedded, as we wanted to have a wide selection of software&amp;nbsp;available.&lt;/p&gt;
&lt;p&gt;I learnt how to use modern &lt;span class="caps"&gt;OE&lt;/span&gt; (with &lt;span class="caps"&gt;OE&lt;/span&gt; Core and layers) by building images for
ARMv7 and checking them on some boards I had floating around my&amp;nbsp;desk.&lt;/p&gt;
&lt;h4&gt;Non-public toolchain&amp;nbsp;work&lt;/h4&gt;
&lt;p&gt;Some time later the first non-public patches for binutils and gcc arrived in my
inbox. Then the eglibc ones. So I started building, and on 12th September 2012 I was
able to build a&amp;nbsp;helloworld:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;12:38 hrw@puchatek:aarch64-oe-linux$ ./aarch64-oe-linux-gcc ~/devel/sources/hello.c -o hello
12:38 hrw@puchatek:aarch64-oe-linux$ file hello
hello: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.39, not stripped
12:39 hrw@puchatek:aarch64-oe-linux$ objdump -f hello

hello:     file format elf64-littleaarch64
architecture: aarch64, flags 0x00000112: 
EXEC_P, HAS_SYMS, D_PAGED 
start address 0x00000000004003e0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then images followed. Several people at Linaro (and outside) used those images
to test misc&amp;nbsp;things.&lt;/p&gt;
&lt;p&gt;At that moment we ran ARMv8 Fast Models (a quite slow system emulator from Arm).
There was a joke that Arm developers formed a queue for single-core 10 GHz x86-64
&lt;span class="caps"&gt;CPU&lt;/span&gt;s to get AArch64 running&amp;nbsp;faster.&lt;/p&gt;
&lt;h4&gt;Toolchain became&amp;nbsp;public&lt;/h4&gt;
&lt;p&gt;Then 1st October 2012 came. I entered the Linaro office in Cambridge for an AArch64
meeting and was greeted with the &amp;#8220;glibc patches went to a public &lt;span class="caps"&gt;ML&lt;/span&gt;&amp;#8221; news. So I
rebased my OpenEmbedded repository, updated the patches, removed any traces of the
non-public ones and published the whole&amp;nbsp;work.&lt;/p&gt;
&lt;h4&gt;Building on&amp;nbsp;AArch64&lt;/h4&gt;
&lt;p&gt;My work above added support for AArch64 as a target architecture. But can it be
used as a host? One day I decided to check and ran &lt;a href="/2014/02/14/aarch64-can-build-openembedded/"&gt;OpenEmbedded on
AArch64&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;After one small patch it worked&amp;nbsp;fine.&lt;/p&gt;
&lt;h3&gt;X11&amp;nbsp;anyone?&lt;/h3&gt;
&lt;p&gt;As I had access to an Arm Fast Model I was able to play with graphics. So one day
in January 2013 I did a build and &lt;a href="/2013/01/07/started-x11-on-aarch64/"&gt;started Xorg&lt;/a&gt;.
Through the next years I had fun when people wrote that they got X11 running on
their AArch64 devices&amp;nbsp;;D&lt;/p&gt;
&lt;p&gt;Two years later I had an Applied Micro Mustang at home (I still have it). Once it had
working &lt;span class="caps"&gt;PCI&lt;/span&gt; Express support I added a graphics card and &lt;a href="/2015/01/14/started-x11-on-aarch64-hardware-this-time/"&gt;started X11 on hardware&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Then I went debugging why Xorg required a configuration file, and one day, with help
from Dave Airlie, Mark Salter and Matthew Garrett, &lt;a href="/2015/09/11/how-to-get-xserver-running-out-of-box-on-aarch64/"&gt;I got two solutions for the
problem&lt;/a&gt;. I do not
remember whether either of them went upstream, but some time later the problem was&amp;nbsp;solved.&lt;/p&gt;
&lt;p&gt;A few years later I met Dave Airlie at Linux Plumbers. We introduced ourselves
and he said &amp;#8220;ah, you are the ‘arm64 + radeon guy’&amp;#8221;&amp;nbsp;;D&lt;/p&gt;
&lt;h4&gt;AArch64 Desktop&amp;nbsp;week&lt;/h4&gt;
&lt;p&gt;One day in September 2015 I had an idea. PCIe worked, &lt;span class="caps"&gt;USB&lt;/span&gt; too. So I did an AArch64
desktop week: connected monitors, keyboard, mouse, speakers and used the Mustang
instead of my x86-64&amp;nbsp;desktop.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="/2015/09/21/aarch64-desktop-day-one/"&gt;day&amp;nbsp;one&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="/2015/09/22/aarch64-desktop-day-two/"&gt;day&amp;nbsp;two&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="/2015/09/25/aarch64-desktop-last-day/"&gt;last&amp;nbsp;day&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It was&amp;nbsp;fun.&lt;/p&gt;
&lt;h3&gt;Distributions&lt;/h3&gt;
&lt;p&gt;First we had nothing. Then I added the AArch64 target into&amp;nbsp;OpenEmbedded.&lt;/p&gt;
&lt;p&gt;The same month &lt;a href="/2012/10/25/aarch64-for-everyone/"&gt;Arm released the Foundation model&lt;/a&gt; so
anyone was able to play with an AArch64 system. No screen, just storage, serial and
network, but it was enough for some to even start building whole distributions
like Debian, Fedora, OpenSUSE,&amp;nbsp;Ubuntu.&lt;/p&gt;
&lt;p&gt;At that moment several patches were shared by all distributions, as that was a faster
way than waiting for upstreams. I saw multiple versions of some of them during
my journey of fixing packages in some&amp;nbsp;distributions.&lt;/p&gt;
&lt;h4&gt;Debian and&amp;nbsp;Ubuntu&lt;/h4&gt;
&lt;p&gt;In February 2013 the &lt;a href="/2013/02/27/aarch64-port-of-debianubuntu-is-alive/"&gt;Debian/Ubuntu team presented their AArch64 port&lt;/a&gt;.
It was their first architecture bootstrapped without using external toolchains.
The work was done in Ubuntu due to its different approach to development than Debian&amp;#8217;s.
All the work was merged back, so some time later Debian also had an AArch64&amp;nbsp;port.&lt;/p&gt;
&lt;h4&gt;Fedora&lt;/h4&gt;
&lt;p&gt;The Fedora team started early &amp;#8212; in October 2012, right after the toolchain became public.
They used Fedora 17 packages and switched to Fedora 19 during the&amp;nbsp;work.&lt;/p&gt;
&lt;p&gt;When I joined Red Hat in September 2013 one of my duties was fixing packages in
Fedora to get them built on&amp;nbsp;AArch64.&lt;/p&gt;
&lt;h4&gt;OpenSUSE&lt;/h4&gt;
&lt;p&gt;In January 2014 the first versions of &lt;span class="caps"&gt;QEMU&lt;/span&gt; support arrived and people moved away from
using the Foundation model. In March/April the OpenSUSE team did a massive amount of builds
to get their distribution built that&amp;nbsp;way.&lt;/p&gt;
&lt;h4&gt;&lt;span class="caps"&gt;RHEL&lt;/span&gt;&lt;/h4&gt;
&lt;p&gt;The Fedora bootstrap also meant a &lt;span class="caps"&gt;RHEL&lt;/span&gt; 7 bootstrap. When I joined Red Hat there were
images ready to use in models. My work was testing them and fixing packages.
There were multiple times when an AArch64 fix also helped builds on the ppc64le and
s390x&amp;nbsp;architectures.&lt;/p&gt;
&lt;h3&gt;Hardware I played&amp;nbsp;with&lt;/h3&gt;
&lt;p&gt;The first Linux-capable hardware was announced in June 2013. I got access to it at
Red Hat. Building and debugging was much faster than using Fast Models&amp;nbsp;;D&lt;/p&gt;
&lt;h4&gt;Applied Micro&amp;nbsp;Mustang&lt;/h4&gt;
&lt;p&gt;Soon Applied Micro Mustangs were everywhere. Distributions used them to build
packages etc. Even without support for half of the hardware (no &lt;span class="caps"&gt;PCI&lt;/span&gt; Express, no
&lt;span class="caps"&gt;USB&lt;/span&gt;).&lt;/p&gt;
&lt;p&gt;I got one in June 2014, running &lt;span class="caps"&gt;UEFI&lt;/span&gt; firmware out of the box. In the first months I
had a feeling that the firmware was developed at Red Hat, as we often had fresh versions
right after the first patches for missing hardware functionality were written.
In reality it was maintained by Applied Micro, and we had access to the sources, so
there were some internal changes in testing (that&amp;#8217;s why I had firmware versions
like&amp;nbsp;&amp;#8216;0.12-rh&amp;#8217;).&lt;/p&gt;
&lt;p&gt;I collected all those graphics cards to test how &lt;span class="caps"&gt;PCI&lt;/span&gt; Express worked. I tested
&lt;span class="caps"&gt;USB&lt;/span&gt; before support was even merged into the mainline Linux kernel. I used virtualization
to develop armhf build fixes (8 cores, 12 gigabytes of RAM and plenty of
storage beat all the ARMv7 hardware I&amp;nbsp;had).&lt;/p&gt;
&lt;p&gt;I stopped using the Mustang around 2018. It is still under my&amp;nbsp;desk.&lt;/p&gt;
&lt;p&gt;For those who still use one: make sure you have the 3.06.25&amp;nbsp;firmware.&lt;/p&gt;
&lt;h4&gt;96boards&lt;/h4&gt;
&lt;p&gt;In February 2015 Linaro announced the 96boards initiative. The plan was to make
small, unified &lt;span class="caps"&gt;SBC&lt;/span&gt;s with different Arm chips. Both 32- and 64-bit&amp;nbsp;ones.&lt;/p&gt;
&lt;p&gt;The first ones were &amp;#8216;Consumer Edition&amp;#8217;: small, limited to basic connectivity. Now
there are tens of them. 32-bit, 64-bit, &lt;span class="caps"&gt;FPGA&lt;/span&gt; etc. Choose your poison&amp;nbsp;;D&lt;/p&gt;
&lt;p&gt;The second ones were &amp;#8216;Enterprise Edition&amp;#8217;. A few attempts existed; most of them did
not survive the prototype phase. There was a joke that the full-length &lt;span class="caps"&gt;PCI&lt;/span&gt; Express slot
and two &lt;span class="caps"&gt;USB&lt;/span&gt; port requirements were there because I wanted to have an AArch64
desktop&amp;nbsp;;D&lt;/p&gt;
&lt;p&gt;Too bad that nothing worth using came from the &lt;span class="caps"&gt;EE&lt;/span&gt;&amp;nbsp;spec.&lt;/p&gt;
&lt;h4&gt;Servers&lt;/h4&gt;
&lt;p&gt;As a Linaro assignee I have access to several servers from Linaro members. Some
are mass-market ones, some never made it to market. We had over a hundred X-Gene1
based systems (mostly as m400 cartridges in HPE Moonshot chassis) and shut
them down in 2018 as they were getting more and more&amp;nbsp;obsolete.&lt;/p&gt;
&lt;p&gt;The main system I use for development is one of those &amp;#8216;never went to mass-market&amp;#8217;
ones. 46 &lt;span class="caps"&gt;CPU&lt;/span&gt; cores and 96 &lt;span class="caps"&gt;GB&lt;/span&gt; of RAM make it a nice machine for building container
images, Debian packages or running virtual machines in&amp;nbsp;OpenStack.&lt;/p&gt;
&lt;h4&gt;Desktop&lt;/h4&gt;
&lt;p&gt;For some time I was waiting for desktop-class hardware to have a development
box more up-to-date than the Mustang. Months turned into years. I no longer wait, as
it looks like there will be no such&amp;nbsp;thing.&lt;/p&gt;
&lt;p&gt;SolidRun has made some attempts in this area. First with &lt;a href="https://www.solid-run.com/product-tag/macchiatobin-double-shot/"&gt;Macchiatobin&lt;/a&gt; and later
with &lt;a href="https://shop.solid-run.com/product/SRLX216S00D00GE064H06CH/"&gt;Honeycomb&lt;/a&gt;.
I have not used either of&amp;nbsp;them.&lt;/p&gt;
&lt;h4&gt;Cloud&lt;/h4&gt;
&lt;p&gt;When I (re)joined Linaro in 2016 I became part of the team working on getting
OpenStack running on AArch64 hardware. We used the Liberty, Mitaka and Newton releases,
then changed the way we worked and started contributing more. And more. Kolla,
Nova, Dib and other projects. We added aarch64 nodes to the OpenDev &lt;span class="caps"&gt;CI&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;The effect of it was the Linaro Developer Cloud, used by
hundreds of projects to speed up their aarch64 porting, by tens of projects hosting
their &lt;span class="caps"&gt;CI&lt;/span&gt; systems&amp;nbsp;etc.&lt;/p&gt;
&lt;p&gt;Two years later Amazon started offering aarch64 nodes in &lt;span class="caps"&gt;AWS&lt;/span&gt;.&lt;/p&gt;
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;I spent half of my Arm life on AArch64. I had great moments, like building
helloworld as one of the first people outside of Arm Ltd. I got involved in far more
projects than I ever thought I would. I met new friends and visited several places in the
world I would probably never have gone&amp;nbsp;otherwise.&lt;/p&gt;
&lt;p&gt;I also got grumpy and complained far too many times that the AArch64 market is
&amp;#8216;cheap but limited SBCs or fast but expensive servers and nearly nothing in
between&amp;#8217;. I wrote some posts about missing systems targeting software developers
and lost hope that such a thing will&amp;nbsp;happen.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;span class="caps"&gt;NOTE&lt;/span&gt;&lt;/strong&gt;: It is 8 years of my work on AArch64. I work with Arm since&amp;nbsp;2004.&lt;/p&gt;</content><category term="misc"/><category term="aarch64"/><category term="debian"/><category term="fedora"/><category term="ubuntu"/><category term="linaro"/><category term="red hat"/><category term="openembedded"/><category term="openstack"/></entry><entry><title>EBBR on RockPro64</title><link href="https://marcin.juszkiewicz.com.pl/2020/06/17/ebbr-on-rockpro64/" rel="alternate"/><published>2020-06-17T17:53:00+02:00</published><updated>2020-06-17T17:53:00+02:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2020-06-17:/2020/06/17/ebbr-on-rockpro64/</id><summary type="html">Few minutes of work and no more boot firmware issues on&amp;nbsp;RockPro64.</summary><content type="html">&lt;blockquote&gt;
&lt;p&gt;&lt;span class="caps"&gt;SBBR&lt;/span&gt; or &lt;span class="caps"&gt;GTFO&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Me.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;But the Arm world no longer ends at &amp;#8220;&lt;span class="caps"&gt;SBBR&lt;/span&gt; compliant or complete mess&amp;#8221;. For over a
year there has been a new specification called &lt;span class="caps"&gt;EBBR&lt;/span&gt; (&lt;a href="https://github.com/ARM-software/ebbr/"&gt;Embedded Base Boot Requirements&lt;/a&gt;).&lt;/p&gt;
&lt;h3&gt;&lt;span class="caps"&gt;WTH&lt;/span&gt; is &lt;span class="caps"&gt;EBBR&lt;/span&gt;?&lt;/h3&gt;
&lt;p&gt;In short, it is a kind of &lt;span class="caps"&gt;SBBR&lt;/span&gt; for devices which cannot comply. You still need
to have some subset of &lt;span class="caps"&gt;UEFI&lt;/span&gt; Boot/Runtime Services, but it can be provided by
whatever bootloader you use. So U-Boot is fine as long as its &lt;span class="caps"&gt;EFI&lt;/span&gt; implementation
is&amp;nbsp;enabled.&lt;/p&gt;
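&lt;p&gt;In U-Boot this is a build-time option. A minimal config fragment sketch (&lt;code&gt;CONFIG_EFI_LOADER&lt;/code&gt; and &lt;code&gt;CONFIG_CMD_BOOTEFI&lt;/code&gt; are the real Kconfig symbols; many arm64 boards already enable them by&amp;nbsp;default):&lt;/p&gt;
&lt;pre&gt;
# provide the UEFI Boot/Runtime Services subset in U-Boot
CONFIG_EFI_LOADER=y
# the 'bootefi' command used to start EFI binaries
CONFIG_CMD_BOOTEFI=y
&lt;/pre&gt;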
&lt;p&gt;&lt;span class="caps"&gt;ACPI&lt;/span&gt; is not required but may be present. DeviceTree is perfectly fine. You may
provide both or one of&amp;nbsp;them.&lt;/p&gt;
&lt;p&gt;Firmware can be stored wherever you wish. Even &lt;span class="caps"&gt;MBR&lt;/span&gt; partitioning is available if
really&amp;nbsp;needed.&lt;/p&gt;
&lt;h3&gt;Make it the nice&amp;nbsp;way&lt;/h3&gt;
&lt;p&gt;RockPro64 has &lt;span class="caps"&gt;16MB&lt;/span&gt; of &lt;span class="caps"&gt;SPI&lt;/span&gt; flash on board. This is far more than needed for
storing firmware (I remember times when that was enough for palmtop&amp;nbsp;Linux).&lt;/p&gt;
&lt;p&gt;During the last month I sent a bunch of patches to U-Boot to make this board as
comfortable to use as possible, including storing all firmware parts in the
on-board &lt;span class="caps"&gt;SPI&lt;/span&gt;&amp;nbsp;flash.&lt;/p&gt;
&lt;h4&gt;Needed&amp;nbsp;files&lt;/h4&gt;
&lt;p&gt;To have U-Boot in the &lt;span class="caps"&gt;SPI&lt;/span&gt; flash you need to fetch two&amp;nbsp;files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;idbloader.img (&lt;span class="caps"&gt;SPL&lt;/span&gt; + &lt;span class="caps"&gt;TPL&lt;/span&gt;)&lt;/li&gt;
&lt;li&gt;u-boot.itb (U-Boot&amp;nbsp;itself)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Store them as files on a &lt;span class="caps"&gt;USB&lt;/span&gt; pen&amp;nbsp;drive.&lt;/p&gt;
&lt;h4&gt;Flashing RockPro64&amp;nbsp;board&lt;/h4&gt;
&lt;p&gt;&lt;span class="caps"&gt;NOTE&lt;/span&gt;: I assume that you already have a way to boot your board to U-Boot shell
(most common is to use microSD card with U-Boot on&amp;nbsp;it).&lt;/p&gt;
&lt;p&gt;Reboot board to U-Boot shell. Plug &lt;span class="caps"&gt;USB&lt;/span&gt; pen drive into any of RockPro64 &lt;span class="caps"&gt;USB&lt;/span&gt;&amp;nbsp;ports.&lt;/p&gt;
&lt;p&gt;Next do this set of commands to update&amp;nbsp;U-Boot:&lt;/p&gt;
&lt;pre&gt;
Hit any key to stop autoboot:  0 
=&gt; usb start

=&gt; ls usb 0:1
   163807   idbloader.img
   867908   u-boot.itb

2 file(s), 0 dir(s)

=&gt; sf probe
SF: Detected gd25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB

=&gt; load usb 0:1 ${fdt_addr_r} idbloader.img
163807 bytes read in 16 ms (9.8 MiB/s)

=&gt; sf update ${fdt_addr_r} 0 ${filesize}
device 0 offset 0x0, size 0x27fdf
163807 bytes written, 0 bytes skipped in 2.93s, speed 80066 B/s

=&gt; load usb 0:1 ${fdt_addr_r} u-boot.itb
867908 bytes read in 53 ms (15.6 MiB/s)

=&gt; sf update ${fdt_addr_r} 60000 ${filesize}
device 0 offset 0x60000, size 0xd3e44
863812 bytes written, 4096 bytes skipped in 11.476s, speed 77429 B/s
&lt;/pre&gt;

&lt;p&gt;And reboot the&amp;nbsp;board.&lt;/p&gt;
&lt;p&gt;After this your RockPro64 will have firmware stored in the on-board &lt;span class="caps"&gt;SPI&lt;/span&gt; flash. No
more wondering which offsets to use to store it on an &lt;span class="caps"&gt;SD&lt;/span&gt; card&amp;nbsp;etc.&lt;/p&gt;
&lt;h3&gt;Booting installation&amp;nbsp;media&lt;/h3&gt;
&lt;p&gt;The nicest part of it is that you no longer need to mess with installation
media. Fetch a Debian/Fedora installer &lt;span class="caps"&gt;ISO&lt;/span&gt;, write it to a &lt;span class="caps"&gt;USB&lt;/span&gt; pen drive, plug it into a
port and reboot the&amp;nbsp;board.&lt;/p&gt;
&lt;p&gt;It should work with any generic AArch64 installation media. Of course the kernel on the
media needs to support the RockPro64 board. I played with Debian &amp;#8216;testing&amp;#8217;,
Fedora 32 and rawhide, and they all booted&amp;nbsp;fine.&lt;/p&gt;
&lt;h3&gt;Testing U-Boot on&amp;nbsp;microSD&lt;/h3&gt;
&lt;p&gt;By default RockPro64 loads U-Boot from the &lt;span class="caps"&gt;SPI&lt;/span&gt; flash. If you need/want to boot from
microSD then follow the &lt;a href="https://wiki.pine64.org/wiki/ROCKPro64#Disable_SPI_.28while_booting.29"&gt;instructions from the official wiki&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you mess-up your &lt;span class="caps"&gt;SPI&lt;/span&gt; and are unable to boot, jumpering pins 23 (&lt;span class="caps"&gt;CLK&lt;/span&gt;) and 25
pin (&lt;span class="caps"&gt;GND&lt;/span&gt;) on the &lt;span class="caps"&gt;PI&lt;/span&gt;-2-bus header will disable the &lt;span class="caps"&gt;SPI&lt;/span&gt; as a boot&amp;nbsp;device.&lt;/p&gt;
&lt;p&gt;You have to remove the jumper 2 seconds after having started your &lt;span class="caps"&gt;RP64&lt;/span&gt; (before
the white &lt;span class="caps"&gt;LED&lt;/span&gt; turns &lt;span class="caps"&gt;ON&lt;/span&gt;) otherwise the &lt;span class="caps"&gt;SPI&lt;/span&gt; will be missing and you won&amp;#8217;t be
able to flash&amp;nbsp;it. &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;My setup has a &amp;#8216;disable &lt;span class="caps"&gt;SPI&lt;/span&gt;&amp;#8217; button next to a &amp;#8216;disconnect Tx serial line&amp;#8217; switch &amp;#8212;
both on a small breadboard next to the&amp;nbsp;board.&lt;/p&gt;
&lt;figure id="__yafg-figure-1"&gt;
&lt;img alt="RockPro64 setup on my desk" loading="lazy" src="/files/2020/06/board-700x.jpg" title="RockPro64 setup on my desk"&gt;
&lt;figcaption&gt;RockPro64 setup on my desk&lt;/figcaption&gt;
&lt;/figure&gt;</content><category term="misc"/><category term="aarch64"/><category term="firmware"/><category term="sbc"/><category term="rockpro64"/><category term="debian"/><category term="fedora"/><category term="ubuntu"/></entry><entry><title>OpenDev CI speed-up for AArch64</title><link href="https://marcin.juszkiewicz.com.pl/2020/06/15/opendev-ci-speed-up-for-aarch64/" rel="alternate"/><published>2020-06-15T17:53:00+02:00</published><updated>2020-06-15T17:53:00+02:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2020-06-15:/2020/06/15/opendev-ci-speed-up-for-aarch64/</id><summary type="html">Weeks of changes, patches ended with visible&amp;nbsp;speed-ups.</summary><content type="html">&lt;p&gt;I work with OpenDev &lt;span class="caps"&gt;CI&lt;/span&gt; for a while. My first Kolla patches were over three years
ago. We (Linaro) added AArch64 nodes a few times &amp;#8212; some nodes were taken down,
some were replaced, some&amp;nbsp;added.&lt;/p&gt;
&lt;h4&gt;Speed or lack of&amp;nbsp;it&lt;/h4&gt;
&lt;p&gt;Whenever you want to install some Python package&amp;nbsp;using &lt;code&gt;pip&lt;/code&gt;, it is downloaded
from PyPI (directly or via a mirror). If there is a binary package for your platform then you get it; if
not, a &amp;#8220;noarch&amp;#8221; package is&amp;nbsp;fetched.&lt;/p&gt;
&lt;p&gt;In the worst case a source tarball is downloaded and the whole build process starts. You
need to have all the required compilers installed, development headers for Python
and all required libraries, and the rest of the needed tools. And then wait. And wait, as
some packages require a lot of&amp;nbsp;time.&lt;/p&gt;
&lt;p&gt;And then repeat it again and again, as you are not allowed to upload packages
to PyPI for projects you do not&amp;nbsp;own.&lt;/p&gt;
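&lt;p&gt;The difference is easy to see from the command line. A sketch using &lt;code&gt;pip download&lt;/code&gt; (protobuf is just an example package&amp;nbsp;here):&lt;/p&gt;
&lt;pre&gt;
# take a wheel if one matches this platform, fail otherwise
pip download --only-binary :all: --no-deps protobuf

# force the worst case: fetch the source tarball and build locally
pip download --no-binary :all: --no-deps protobuf
&lt;/pre&gt;
&lt;p&gt;On x86-64 the first command usually succeeds right away; on an architecture without uploaded wheels you end up on the second&amp;nbsp;path.&lt;/p&gt;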
&lt;h4&gt;Argh you,&amp;nbsp;protobuf&lt;/h4&gt;
&lt;p&gt;There was a new release of the protobuf package. The OpenStack bot picked it up, sent a
patch for review and it got&amp;nbsp;merged.&lt;/p&gt;
&lt;p&gt;And all AArch64 &lt;span class="caps"&gt;CI&lt;/span&gt; jobs&amp;nbsp;failed&amp;#8230;&lt;/p&gt;
&lt;p&gt;It turned out that protobuf 3.12.0 was released with x86 wheels only. No source
tarball. At&amp;nbsp;all.&lt;/p&gt;
&lt;p&gt;This turned out to be a new maintainer&amp;#8217;s mistake &amp;#8212; after 2-3 weeks it was fixed in the
3.12.2&amp;nbsp;release.&lt;/p&gt;
&lt;h4&gt;Another &lt;span class="caps"&gt;CI&lt;/span&gt; job&amp;nbsp;then&lt;/h4&gt;
&lt;p&gt;So I started looking at the &amp;#8216;requirements&amp;#8217; project and created a new &lt;span class="caps"&gt;CI&lt;/span&gt; job for it,
to check whether new package versions are available for AArch64. It took some time and
several side updates as well (yak shaving all the way&amp;nbsp;again).&lt;/p&gt;
&lt;p&gt;Stuff got merged and works&amp;nbsp;now.&lt;/p&gt;
&lt;h4&gt;Wheels&amp;nbsp;cache&lt;/h4&gt;
&lt;p&gt;While working on the above &lt;span class="caps"&gt;CI&lt;/span&gt; job I had a discussion with the OpenDev infra team about how to
make it work properly. It turned out that there were old jobs doing exactly what I
wanted: building wheels and caching them for the next &lt;span class="caps"&gt;CI&lt;/span&gt;&amp;nbsp;tasks.&lt;/p&gt;
&lt;p&gt;It took several talks and patches from Ian Wienand, Clark Boylan, Jeremy &amp;#8216;fungi&amp;#8217;
Stanley and others. Several &lt;span class="caps"&gt;CI&lt;/span&gt; jobs got renamed, some were moved from one
project to another, servers got configuration changes&amp;nbsp;etc.&lt;/p&gt;
&lt;p&gt;Now we have wheels built for both x86-64 and AArch64 architectures, covering
CentOS 7/8, Debian &amp;#8216;buster&amp;#8217; and the Ubuntu &amp;#8216;xenial/bionic/focal&amp;#8217; releases, for
OpenStack &amp;#8216;master&amp;#8217; and a few stable&amp;nbsp;branches.&lt;/p&gt;
&lt;h4&gt;Effect&lt;/h4&gt;
&lt;p&gt;The requirements project has a quick &amp;#8216;check-uc&amp;#8217; job running on AArch64 to make sure
that all packages are available for both architectures. All OpenStack projects
profit from&amp;nbsp;it.&lt;/p&gt;
&lt;p&gt;In Kolla the &amp;#8216;openstack-base&amp;#8217; image went from 23:49 to just 5:21 minutes. A whole
Debian/source build now takes 57 minutes instead of 2 hours 20&amp;nbsp;minutes.&lt;/p&gt;
&lt;p&gt;Nice result, isn&amp;#8217;t&amp;nbsp;it?&lt;/p&gt;</content><category term="misc"/><category term="aarch64"/><category term="centos"/><category term="debian"/><category term="ubuntu"/><category term="python"/><category term="openstack"/><category term="development"/></entry><entry><title>CirrOS 0.5.0 released</title><link href="https://marcin.juszkiewicz.com.pl/2020/03/04/cirros-050-released/" rel="alternate"/><published>2020-03-04T11:53:00+01:00</published><updated>2020-03-04T11:53:00+01:00</updated><author><name>Marcin Juszkiewicz</name></author><id>tag:marcin.juszkiewicz.com.pl,2020-03-04:/2020/03/04/cirros-050-released/</id><summary type="html">Someone may say that I am the main reason why the CirrOS project does&amp;nbsp;releases.</summary><content type="html">&lt;p&gt;Someone may say that I am the main reason why the CirrOS project does&amp;nbsp;releases.&lt;/p&gt;
&lt;p&gt;In 2016 I got a task at Linaro to get it running on AArch64. More details are in
my &lt;a href="/2016/07/22/my-work-on-changing-cirros-images/"&gt;blog post &amp;#8216;my work on changing CirrOS images&amp;#8217;&lt;/a&gt;.
The result was the 0.4.0&amp;nbsp;release.&lt;/p&gt;
&lt;p&gt;Last year I got another task at Linaro. So &lt;a href="https://github.com/cirros-dev/cirros/releases/tag/0.5.0"&gt;we released the 0.5.0 version&lt;/a&gt;&amp;nbsp;today.&lt;/p&gt;
&lt;p&gt;But that&amp;#8217;s not how it&amp;nbsp;happened.&lt;/p&gt;
&lt;h3&gt;Multiple&amp;nbsp;contributors&lt;/h3&gt;
&lt;p&gt;Since the 0.4.0 release there have been changes from several&amp;nbsp;developers.&lt;/p&gt;
&lt;p&gt;Robin H. Johnson took care of kernel modules: added new ones, updated names and
also added several new&amp;nbsp;features.&lt;/p&gt;
&lt;p&gt;Murilo Opsfelder Araujo fixed the build on Ubuntu 16.04.3, as gcc changed its
preprocessor&amp;nbsp;output.&lt;/p&gt;
&lt;p&gt;Jens Harbott took care of the lack of space for data read from the&amp;nbsp;config-drive.&lt;/p&gt;
&lt;p&gt;Paul Martin upgraded the CirrOS build system to Buildroot 2019.02.1 and bumped
the kernel/grub&amp;nbsp;versions.&lt;/p&gt;
&lt;p&gt;Maciej Józefczyk took care of metadata&amp;nbsp;requests.&lt;/p&gt;
&lt;p&gt;Marcin Sobczyk fixed the starting of Dropbear and dropped the creation of the &lt;span class="caps"&gt;DSS&lt;/span&gt; ssh key,
which was no longer&amp;nbsp;supported.&lt;/p&gt;
&lt;h3&gt;My Linaro&amp;nbsp;work&lt;/h3&gt;
&lt;p&gt;At Linaro I got a Jira card with the &amp;#8220;Upgrade CirrOS&amp;#8217; kernel to Ubuntu 18.04&amp;#8217;s
kernel&amp;#8221;&amp;nbsp;title.&lt;/p&gt;
&lt;p&gt;This was needed as the 4.4 kernel was far too old and gave us several booting
issues. Internally we had builds with a 4.15 kernel, but it should be done properly
and&amp;nbsp;upstream.&lt;/p&gt;
&lt;p&gt;So I fetched the code, did some test builds and started looking at how to improve the
situation. I spoke with Scott Moser (the owner of the CirrOS project) and he told me about
his plans to migrate from Launchpad to GitHub. So we did that in December 2019
and then the fun&amp;nbsp;started.&lt;/p&gt;
&lt;h3&gt;Continuous&amp;nbsp;Integration&lt;/h3&gt;
&lt;p&gt;GitHub has several ways of adding &lt;span class="caps"&gt;CI&lt;/span&gt; to projects. First we tried GitHub Actions,
but it turned out to be a paid service. We looked around and then I decided to go
with Travis &lt;span class="caps"&gt;CI&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;Scott generated all the required keys and the integration started. Soon we had every
pull request going through &lt;span class="caps"&gt;CI&lt;/span&gt;. Then I added a simple script (bin/test-boot) so
each image was booted after the build. Scott improved the script and fixed a Power boot&amp;nbsp;issue.&lt;/p&gt;
&lt;p&gt;The next step was caching downloads and ccache files. This was a huge&amp;nbsp;improvement!&lt;/p&gt;
&lt;p&gt;In the meantime Travis bumped the free service to 5 simultaneous builders, which made our
builds even&amp;nbsp;faster.&lt;/p&gt;
&lt;p&gt;CirrOS supports building only under Ubuntu &lt;span class="caps"&gt;LTS&lt;/span&gt;. But I use Fedora, so we merged
two changes to make sure that the proper &amp;#8216;grub(2)-mkimage&amp;#8217; command is&amp;nbsp;used.&lt;/p&gt;
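&lt;p&gt;The check boils down to picking whichever binary the build host provides. A minimal sketch (the command names are the real Fedora/Debian ones; the loop itself is mine, not the actual CirrOS&amp;nbsp;code):&lt;/p&gt;
&lt;pre&gt;
# Fedora ships grub2-mkimage, Debian/Ubuntu ship grub-mkimage;
# use whichever one is actually installed
MKIMAGE=""
for candidate in grub2-mkimage grub-mkimage; do
    if command -v "$candidate" &gt;/dev/null 2&gt;&amp;1; then
        MKIMAGE=$candidate
        break
    fi
done
[ -n "$MKIMAGE" ] || { echo "no grub mkimage tool found" &gt;&amp;2; exit 1; }
echo "using $MKIMAGE"
&lt;/pre&gt;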
&lt;h3&gt;Kernel&amp;nbsp;changes&lt;/h3&gt;
&lt;p&gt;The 4.4 kernel had to go. The first idea was to move to 4.18 from the Ubuntu 18.04 release.
But if we upgrade, then why not go for the &lt;span class="caps"&gt;HWE&lt;/span&gt; one? I checked the 5.0 and 5.3
versions. As both worked fine we decided to go with the newer&amp;nbsp;one.&lt;/p&gt;
&lt;h3&gt;Modules&amp;nbsp;changes&lt;/h3&gt;
&lt;p&gt;During the start of a CirrOS image several kernel modules are loaded. But there were
several &amp;#8220;no kernel module found&amp;#8221;-like messages for built-in&amp;nbsp;ones.&lt;/p&gt;
&lt;p&gt;We took care of it by querying the /sys/module/ directory, so now module loading is
a quiet process. At the end a list of the loaded ones is&amp;nbsp;printed.&lt;/p&gt;
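&lt;p&gt;The trick is that a module which is already loaded or built into the kernel shows up as a directory under /sys/module/, so there is no point in calling modprobe for it. A rough sketch of the idea (the helper name is mine, not the actual CirrOS&amp;nbsp;code):&lt;/p&gt;
&lt;pre&gt;
# success when the module is already loaded or built-in
module_present() {
    [ -d "/sys/module/$1" ]
}

for mod in virtio_blk virtio_net; do
    if module_present "$mod"; then
        echo "$mod already available, skipping modprobe"
    else
        modprobe "$mod" 2&gt;/dev/null || echo "$mod not found"
    fi
done
&lt;/pre&gt;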
&lt;h3&gt;VirtIO&amp;nbsp;changes&lt;/h3&gt;
&lt;p&gt;A lot of things have happened since the 4.4 kernels. So we added several VirtIO&amp;nbsp;modules.&lt;/p&gt;
&lt;p&gt;One of the results is a working graphical console on AArch64, thanks to &amp;#8216;virtio-gpu&amp;#8217;
providing a framebuffer and &amp;#8216;hid-generic&amp;#8217; handling USB input&amp;nbsp;devices.&lt;/p&gt;
&lt;p&gt;As lack of entropy is a common issue in &lt;span class="caps"&gt;VM&lt;/span&gt; instances, we added the &amp;#8216;virtio-rng&amp;#8217; module.
No more &amp;#8216;uninitialized urandom read&amp;#8217; messages from the&amp;nbsp;kernel.&lt;/p&gt;
&lt;h3&gt;Final&amp;nbsp;words&lt;/h3&gt;
&lt;p&gt;Yesterday Scott created the 0.5.0 tag and &lt;span class="caps"&gt;CI&lt;/span&gt; built all the release images. Then I wrote
the release notes (based on the ones from the pre-releases). The Kolla project got a patch to move
to the new&amp;nbsp;version.&lt;/p&gt;
&lt;p&gt;When is the next release? Looking at the history someone may say 2023, as the previous one was
in 2016. But who knows. Maybe we will get someone with a &amp;#8220;please add s390x
support&amp;#8221; question&amp;nbsp;;D&lt;/p&gt;</content><category term="misc"/><category term="aarch64"/><category term="cirros"/><category term="openstack"/><category term="ubuntu"/><category term="kolla"/><category term="virtualization"/></entry></feed>