<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xml:base="http://levlafayette.com"  xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
 <title>Lev Lafayette blogs</title>
 <link>http://levlafayette.com/blog</link>
 <description></description>
 <language>en</language>
<item>
 <title>National Supercomputing Centre and Sunway TaihuLight </title>
 <link>http://levlafayette.com/node/808</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;&lt;img src=&quot;http://levlafayette.com/files/lev-sunway.jpg&quot; align=&quot;left&quot; vspace=&quot;5&quot; hspace=&quot;5&quot; /&gt;Wuxi is home to China&#039;s &lt;a href=&quot;https://baike.baidu.com/en/item/National%20Supercomputing%20Center/20982&quot;&gt; National Supercomputing Centre&lt;/a&gt; and the Sunway TaihuLight supercomputer. This system came to global attention in June 2016, when it &lt;a href=&quot;https://www.top500.org/lists/top500/2016/06/&quot;&gt;topped the Top500 list&lt;/a&gt; of the world&#039;s most powerful supercomputers, far surpassing the second system (also Chinese, the Tianhe-2), let alone the third (Titan, from the United States). Not only that, Sunway TaihuLight held the top position for two years in succession, until the US system, Summit, hosted at Oak Ridge National Laboratory, took the top position in the June 2018 metrics. Nevertheless, almost ten years later, Sunway TaihuLight not only remains a Top500 supercomputer, but still sits in the top 25 systems (placed #24 in November 2025) without any change to its original configuration in all that time, a blunt indication of how advanced it was at the time.&lt;/p&gt;
&lt;p&gt;The Sunway TaihuLight consists of 10,649,600 cores, with an Rmax of 93.01 petaflops and an Rpeak of 125.44 petaflops. With various import restrictions in place (those who preach free trade don&#039;t like practising it with a competitor), the processors are a home-grown variety, the &lt;a href=&quot;http://www.netlib.org/utk/people/JackDongarra/PAPERS/sunway-report-2016.pdf&quot;&gt;Sunway SW26010 260C&lt;/a&gt;, a 64-bit RISC design running at 1.45GHz. The clock rate might seem low, but it matches the requirements of a manycore processor and, as the manufacturer code suggests, the Sunway SW26010 consists of a truly impressive 260 cores per processor, arranged as four clusters of 64 Compute-Processing Elements (CPEs), each in an eight-by-eight array. These CPEs support SIMD instructions, making the chip (at a very high level) seem like something between a traditional CPU and a GPU architecture. Each CPE cluster also has a more conventional general-purpose core, the Management Processing Element (MPE), that provides supervisory functions. Each node has one such 260-core processor; nodes are grouped into &quot;supernodes&quot; of 256 nodes, and each cabinet holds 4 supernodes. There are 40 cabinets in total, providing over 10 million cores. Sunway has its own interconnect, with a five-level integrated hierarchy: (i) computing node, (ii) computing board, (iii) super-node, (iv) cabinet, and (v) complete system, with a network link bandwidth of 16GB/s and total I/O bandwidth of 288 GB/s.&lt;/p&gt;
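&lt;p&gt;As a back-of-the-envelope check (a minimal sketch, using only the figures quoted above), the published component counts do multiply out to the headline core count:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
# cabinets x supernodes x nodes x cores-per-processor&lt;br /&gt;
$ echo $((40 * 4 * 256 * 260))&lt;br /&gt;
10649600&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;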
&lt;p&gt;The operating system is also custom-built: Sunway Raise OS, based on Linux. Common compilers (C, C++, Fortran), an automatic vectorisation tool, basic math libraries, and a customised version of OpenACC are available. &lt;a href=&quot;https://link.springer.com/article/10.1007/s11432-016-5588-7&quot;&gt;The software build system&lt;/a&gt; is also specialised, targeting the Sunway processor. Whilst the aim has been to port applications with minimal modifications, those designed for GPUs have proven &quot;significantly more challenging&quot;. Nevertheless, dozens of applications have been written which can, in theory, scale to use the entire system, with early tests of atmospheric simulations scaling effectively to eight million cores. Other early simulations of note include atomistic simulations of silicon nanowires and ultra-high-resolution global ocean surface wave numerical simulations. Parallelisation across nodes generally uses MPI. For the four core groups (CGs, the CPE clusters described above) within the same processor, software can use either MPI or OpenMP, but within each CG, Sunway OpenACC is used. Sunway OpenACC uses the OpenACC 2.0 syntax but targets the CPE clusters and includes parallel task management, heterogeneous code extraction, and data transfer descriptions. Syntax extensions from the original OpenACC 2.0 standard include finer control over multi-dimensional array buffering, and packing distributed variables for data transfer.&lt;/p&gt;
&lt;p&gt;The Sunway TaihuLight will be remembered alongside other systems that are &quot;giants&quot; in history for their performance, architecture, lasting impact, and contributions to science, such as ENIAC, UNIVAC, the CDC 6600, the Cray-1, Beowulf clusters, and Roadrunner. A system as powerful, innovative, and novel as this comes along perhaps once a decade, and after ten years of operation, Sunway TaihuLight has earned its place in computing history. All the engineers and administrators who have built, operated, and maintained this system deserve respect for their contributions. What is especially remarkable is that, due to the political climate, this system had the additional requirement of a novel, home-grown design. Finally, I have been informed by a trustworthy source to &quot;watch this space&quot;; there are plans for a system ten times as powerful in the very near future.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Mon, 06 Apr 2026 12:34:25 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">808 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/808#comments</comments>
</item>
<item>
 <title>Guizhou National Data Centre and Radio Telescope</title>
 <link>http://levlafayette.com/node/807</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;&lt;img src=&quot;http://levlafayette.com/files/lev-FAST.jpg&quot; align=&quot;left&quot; vspace=&quot;5&quot; hspace=&quot;5&quot; /&gt;A few weeks ago, I had the opportunity to visit Guizhou, China&#039;s &lt;a href=&quot;https://baike.baidu.com/en/item/National%20Big%20Data%20%28Guizhou%29%20Comprehensive%20Pilot%20Zone/58456&quot;&gt;National Big Data Comprehensive Pilot Zone&lt;/a&gt;. At first blush, the choice of Guizhou seems an unusual one. With a population of 38 million, it ranks 17th among China&#039;s administrative divisions by population and 18th by population density, and it is even further behind in terms of GDP (22nd). From the principle that it is most efficient to conduct compute near where data is located, one would expect such a data centre to be located in provinces with a greater population and economic activity, such as Guangdong, Jiangsu, or Shandong, or perhaps in accord with scientific output, which would also include Beijing and Shanghai. Instead, Guizhou, with its rugged karst formations, dense forests, and lower level of economic development (fourth lowest in GDP per capita in 2020), has been part of a &quot;Big Data Guizhou&quot; strategic plan, launched in 2014 by then governor Chen Min&#039;er, which takes a threefold approach: &quot;Big Data&quot;, &quot;Big Poverty Reduction&quot;, and &quot;Big Ecology&quot;. &lt;/p&gt;
&lt;p&gt;With low energy costs and a consistently cool climate, Guizhou has established the Guizhou Cloud, sponsored by the Guizhou Big Data Development Administration and supervised by the Board of Supervisors of Guizhou State-Owned Enterprises. This in turn has attracted major national corporations to its &lt;a href=&quot;https://english.news.cn/20231030/8e6f320963d147d2bbd509c5afaf056d/c.html&quot;&gt;numerous data centres&lt;/a&gt;, notably Apple&#039;s iCloud China and Huawei&#039;s Cloud Service Base, along with Tencent, Alibaba Cloud, and the AI firm SenseTime; the province has also hosted the annual &lt;a href=&quot;https://english.news.cn/20240828/93b5a770bb874a37957c8d8f096c296e/c.html&quot;&gt;China International Big Data Industry Expo&lt;/a&gt; since 2015. These corporate decisions are notable enough in their own right, but what really makes Guizhou such a distinctive location for a national data hub is the presence of the Five-hundred-meter Aperture Spherical Telescope &lt;a href=&quot;https://fast.bao.ac.cn/&quot;&gt;(FAST)&lt;/a&gt; which, as is evident from the name, has a 500m diameter dish (I call it a &quot;wok&quot;), making it the world&#039;s largest single-dish telescope. &lt;/p&gt;
&lt;p&gt;Radio telescopes are essentially antennas and receivers for radio waves, with frequencies ranging from around 20 kHz to around 300 GHz, just as an optical telescope collects data from the visible portion of the spectrum. Primary local sources of radio waves include the Sun, Jupiter (due to its magnetosphere), and Jupiter&#039;s moon, Ganymede. The Galactic Centre of the Milky Way is an especially powerful source, as are supernova remnants, such as Cassiopeia A, and neutron stars, such as pulsars and Rotating Radio Transients (RRATs). Primordial black holes and extraterrestrial intelligence communications are two other speculative, currently unobserved sources. The main point is that to derive information about these radio sources, one needs to collect radio wave data, and the more data one collects, the greater the chance of finding something interesting. Thus, to collect more data, one wants a larger receiver. &lt;/p&gt;
&lt;p&gt;FAST is a &lt;i&gt;very&lt;/i&gt; big receiver and it collects &lt;i&gt;a lot&lt;/i&gt; of data, roughly 100TB per day. Construction began in 2011, testing began in 2016, and it became fully operational in 2020. Even before becoming operational it discovered &lt;a href=&quot;https://gbtimes.com/chinas-huge-new-fast-radio-telescope-discovers-two-new-pulsars&quot;&gt;two pulsars&lt;/a&gt; and by 2021 it had discovered &lt;a href=&quot;http://en.people.cn/n3/2021/1216/c90000-9933484.html&quot;&gt;an incredible 500&lt;/a&gt;. Hardware innovations are continuing; late last year, the China Environment for Network Innovation (CENI) announced that it had conducted a (somewhat contrived) data transfer test of 72TB from FAST to Huazhong University of Science and Technology in Central China&#039;s Hubei province. The transfer, which would normally take 699 days, was completed in &lt;a href=&quot;https://en.cae.cn/cae/html/en/col2005/2025-12/29/20251229105835807729582_1.html&quot;&gt;a mere 1.6 hours&lt;/a&gt;. Further, China has announced that it will build an additional 24 radio dishes of 40m diameter around FAST, creating an array that will mimic a massive 10km diameter dish and boost telescope &lt;a href=&quot;https://interestingengineering.com/science/chinas-fast-to-become-30-times-more-powerful&quot;&gt;resolution by 30 times&lt;/a&gt;. &lt;/p&gt;
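&lt;p&gt;To put that transfer test in perspective (a rough calculation, assuming decimal terabytes), 72TB in 1.6 hours corresponds to a sustained rate of about 100 gigabits per second, whilst the 699-day baseline implies a link of roughly 10 megabits per second:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ awk &#039;BEGIN { bits = 72 * 10^12 * 8;&lt;br /&gt;
    print bits / (1.6 * 3600) / 10^9, &quot;Gbps&quot;;&lt;br /&gt;
    print bits / (699 * 86400) / 10^6, &quot;Mbps&quot; }&#039;&lt;br /&gt;
100 Gbps&lt;br /&gt;
9.53744 Mbps&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;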
&lt;p&gt;Mention must also be made of the rest of the &lt;a href=&quot;https://baike.baidu.com/en/item/China%20Sky%20Eye%20Scenic%20Area/989361&quot;&gt;Tianyan Scenic Area&lt;/a&gt;, which hosts the FAST system. Apart from the stunning natural beauty of the region and the FAST Observation Platform (complete with bungy jump during holiday weeks), the site also hosts an impressive Astronomy Experience Hall, the Astronomical Space-Time Tower (at 99.999 metres), the Nan Rendong Memorial Hall (named for the astronomer who was the driving force behind FAST), the Dome Flight Cinema, and an Aerospace Science and Technology Museum. As with many massive engineering projects in China, the site has been turned into an informative destination for national and international visitors.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Mon, 06 Apr 2026 11:14:33 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">807 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/807#comments</comments>
</item>
<item>
 <title>Critical Issues for the Global Climate</title>
 <link>http://levlafayette.com/node/804</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;This presentation covers the science of Earth&#039;s &quot;energy budget&quot; of heat inputs and losses, and describes the overall climate system. It continues with a description of the Industrial Age, the introduction of direct temperature measurement, and the resulting temperature rise from burning fossil fuels. This is followed by a description of how Greenhouse Gases operate on a molecular level, the increase since the pre-industrial period, and the carbon cycle. After this, human activity and projections are considered, followed by changes to species&#039; habitats and the possibility of an Anthropocene Extinction Event, then energy trajectories and future global policy directions. Concluding remarks identify climate change as a critical issue and one subject to &quot;race conditions&quot;, and note that the policy route, whilst necessary, is currently falling short of requirements.&lt;/p&gt;
&lt;p&gt;This was a presentation to &lt;a href=&quot;https://www.scifuture.org/events/future-day-2026/&quot;&gt;Future Day 2026&lt;/a&gt;, March 2-4. A transcript is provided along with the accompanying slide deck.&lt;/p&gt;
&lt;p&gt;Transcript:&lt;br /&gt;
&lt;a href=&quot;http://levlafayette.com/files/2026FutureDayGlobalClimateTranscript.pdf&quot;&gt;http://levlafayette.com/files/2026FutureDayGlobalClimateTranscript.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Slides:&lt;br /&gt;
&lt;a href=&quot;http://levlafayette.com/files/2026FutureDay_GlobalClimate.pdf&quot;&gt;http://levlafayette.com/files/2026FutureDay_GlobalClimate.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-upload field-type-file field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;table class=&quot;sticky-enabled&quot;&gt;
 &lt;thead&gt;&lt;tr&gt;&lt;th&gt;Attachment&lt;/th&gt;&lt;th&gt;Size&lt;/th&gt; &lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
 &lt;tr class=&quot;odd&quot;&gt;&lt;td&gt;&lt;span class=&quot;file&quot;&gt;&lt;img class=&quot;file-icon&quot; alt=&quot;PDF icon&quot; title=&quot;application/pdf&quot; src=&quot;/modules/file/icons/application-pdf.png&quot; /&gt; &lt;a href=&quot;http://levlafayette.com/files/2026FutureDayGlobalClimateTranscript.pdf&quot; type=&quot;application/pdf; length=89221&quot;&gt;2026FutureDayGlobalClimateTranscript.pdf&lt;/a&gt;&lt;/span&gt;&lt;/td&gt;&lt;td&gt;87.13 KB&lt;/td&gt; &lt;/tr&gt;
 &lt;tr class=&quot;even&quot;&gt;&lt;td&gt;&lt;span class=&quot;file&quot;&gt;&lt;img class=&quot;file-icon&quot; alt=&quot;PDF icon&quot; title=&quot;application/pdf&quot; src=&quot;/modules/file/icons/application-pdf.png&quot; /&gt; &lt;a href=&quot;http://levlafayette.com/files/2026FutureDay_GlobalClimate.pdf&quot; type=&quot;application/pdf; length=1432979&quot;&gt;2026FutureDay_GlobalClimate.pdf&lt;/a&gt;&lt;/span&gt;&lt;/td&gt;&lt;td&gt;1.37 MB&lt;/td&gt; &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Tue, 03 Mar 2026 06:40:13 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">804 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/804#comments</comments>
</item>
<item>
 <title>The Environmental Management of Artificial Intelligence</title>
 <link>http://levlafayette.com/node/802</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;Spanning at least three disciplines of interest - energy and climatology, public economics, and high-performance computing - is the issue of whether current trends in artificial intelligence are environmentally sustainable. The following is a basic sketch of electricity usage, needs, and costings.&lt;/p&gt;
&lt;p&gt;The promise of artificial intelligence is as old as computing itself, and, in some ways, it is difficult to distinguish from computing in general. As the old joke goes, it is no match for natural stupidity, an issue that became all too evident to Charles Babbage when he developed the idea of a programmable computer:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;
On two occasions I have been asked, - &quot;Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?&quot; ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.&lt;br /&gt;
-- Passages from the Life of a Philosopher (1864), ch. 5 &quot;Difference Engine No. 1&quot;
&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Of course, we now have computational devices that can attempt to solve problems with incorrect inputs and perhaps provide a correct answer. That, if anything, is what distinguishes classical computation from contemporary artificial intelligence. Time does not permit a thorough exploration of the rise and fall of several attempts to implement AI; however, the most recent wave of the last decade, which involves the application of transformer deep learning and the use of Graphics Processing Units, continues to attract investment and interest. &lt;/p&gt;
&lt;p&gt;Transformer architectures for artificial neural networks are a fascinating topic in their own right; attention (pun intended) is directed to the use of GPUs as the main issue. Whilst the physical architecture of GPUs makes them particularly suitable for graphics processing, it was also realised that they could be used for a variety of vector processing, providing massive data parallelism, i.e., &quot;general purpose (computing on) graphics processing units&quot;, GPGPUs. However, physics gets in the way of the pure mathematical potential of GPUs; they generate a significant amount of heat and require substantial electricity, and that&#039;s where the environmental question arises. &lt;/p&gt;
&lt;p&gt;Data centres currently account for approximately 1.5% of global electricity consumption, according to the International Energy Agency. However, that is expected to reach 3.0% by 2030, &lt;a href=&quot;https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai&quot;&gt;primarily due to the growth of AI&lt;/a&gt;, which would include not just the GPUs themselves, but also the proportional contributions of CPU hosts, cooling, transport, installation, and so forth. A doubling of energy consumption over a period of a few years (from c400TWh in 2024 to c900TWh in 2030) is very significant and, if the estimates prove to be even roughly correct, rising energy utilisation should be expected as an ongoing trajectory for at least another two decades as the technology becomes ubiquitous.&lt;/p&gt;
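&lt;p&gt;As a rough check on the scale of that claim (a sketch using the IEA&#039;s round figures), growing from c400TWh in 2024 to c900TWh in 2030 implies a compound growth rate of around 14.5% per year:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ awk &#039;BEGIN { print ((900/400)^(1/6) - 1) * 100, &quot;percent per year&quot; }&#039;&lt;br /&gt;
14.4714 percent per year&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;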
&lt;p&gt;There are essentially two ways of managing energy used in production from an environmental perspective, given a particular policy. One approach is high-energy and high-production, concentrating on renewables or non-GHG energy sources. The other is a reduced-energy, high-efficiency approach that concentrates on better outcomes, &quot;doing more with less&quot;. More important than either of these, in my opinion, is the incorporation of externalised costs into the internal price of an energy source. One graphic example of this is the &lt;a href=&quot;https://ourworldindata.org/grapher/death-rates-from-energy-production-per-twh&quot;&gt;deaths per terawatt-hour&lt;/a&gt; by energy source. Solar, for example, is more than three orders of magnitude safer than coal.&lt;/p&gt;
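&lt;p&gt;That claim is easily checked with the figures reported in the linked chart (approximately 24.6 deaths per TWh for coal against 0.02 for solar; treat these figures as indicative rather than exact):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ awk &#039;BEGIN { print 24.6 / 0.02 }&#039;&lt;br /&gt;
1230&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;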
&lt;p&gt;To satisfy existing and expected demand, the AI industry is, in part, &lt;a href=&quot;https://www.nytimes.com/2025/09/27/business/dealbook/why-dont-data-centers-use-more-green-energy.html&quot;&gt;turning to nuclear&lt;/a&gt; for its energy needs. Data centres tend to be located within population centres, partially due to latency reasons, whereas renewables like wind and solar require a significant amount of land area. Additionally, where existing nuclear power plants and infrastructure are already in place, it is relatively inexpensive, even compared to battery technologies. Nuclear provides sustained power generation not just throughout the day, but across months and seasons. With approximately 5% of generation &lt;a href=&quot;https://www.eia.gov/tools/faqs/faq.php?id=105&amp;amp;t=3&quot;&gt;lost in transmission&lt;/a&gt;, nearby power sources are more efficient. &lt;/p&gt;
&lt;p&gt;The main weakness of nuclear power is the time and cost associated with the construction of new plants, and in this regard, the big data centre and technology groups are taking a gamble. They assume that there will be sufficient demand for AI and that they can generate enough income over the next decade to cover the costs. They are very likely to be correct in this assessment, and certainly the choice of nuclear is preferable to the fossil fuel sources that currently drive most data centres (e.g., methane gas in the United States, coal in China). &lt;/p&gt;
&lt;p&gt;As for demand-side considerations, these can include the energy efficiency of the data centres themselves, the way models are designed, and the way AI is utilised. Cooling is an especially interesting case; as mentioned, GPUs run quite hot, and to avoid catastrophic failure, they require effective cooling. This is usually done with evaporative cooling, which entails significant water loss, or by chillers, which lose little water but require a huge amount of it in circulation. A third option is dielectric liquids, such as mineral oil, which results in a data centre that is quiet and at room temperature, while servers operate at an optimal temperature. The main disadvantage is the messy and time-consuming procedures for upgrading system units. &lt;/p&gt;
&lt;p&gt;The model design also presents some opportunities for improvement. The typical approach is to train neural networks with large quantities of data; however, the more indiscriminate the data collection is, the greater the possibility of conventional error. As some critics suggest, an LLM is essentially a language interface that sits in front of a search engine. A smaller but better-curated collection of data can yield more accurate results, as well as being less resource-intensive to train on in the first place. Instead of a monolithic approach, &lt;a href=&quot;https://archive.is/KaMGd&quot;&gt;a number of smaller models&lt;/a&gt; can operate with connective software for matters outside the initial model&#039;s scope.&lt;/p&gt;
&lt;p&gt;Finally, there is the matter of what AIs are being used for. Certainly, there are some powerful and important success stories, such as the key designers behind AlphaFold winning &lt;a href=&quot;https://www.nobelprize.org/prizes/chemistry/2024/press-release/&quot;&gt;the 2024 Nobel Prize in Chemistry&lt;/a&gt; for protein structure prediction and, as many contemporary workers (especially in computer science) are all too aware, the ability of AIs to produce code is quite good, assuming the developer knows how to structure the questions with care and engages in thorough testing. In contrast, the increasing application of these technologies in robotics and autonomous vehicles is disconcerting, as illustrated by the predictive and plausible video &lt;a href=&quot;https://www.youtube.com/watch?v=HipTO_7mUOw&quot;&gt;&quot;Slaughterbots&quot;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On the consumer level, an AI can perform tasks at which a human is less efficient. So rather than simply asking &quot;how many tonnes of a GHG does AI cause&quot;, a net emissions question should be asked, appending &quot;... compared to human activity&quot;, that is, productivity substitution. However, with effectiveness comes the lure of convenience, which tempts users to extend AI to everything, even when human energy usage would be less than that of an AI-mediated task. Ultimately, it is a combination of human failings, namely laziness (always choosing convenience), wilful ignorance (neither knowing nor caring about energy efficiency), distractibility (extending AI to trivialities rather than tasks of importance), and powerlust (commercial or political), that presents a continuing challenge to the prospect of an environmentally-sustainable development of artificial intelligence. &lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 07 Dec 2025 02:21:32 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">802 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/802#comments</comments>
</item>
<item>
 <title>Against Default Modules</title>
 <link>http://levlafayette.com/node/799</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;Environment Modules allow users to dynamically modify $PATH and related variables in their shell environment. In particular, they provide the ability to switch between different software applications and versions. It is a very handy tool for developers who want to test against multiple versions or compilation options of an application, and for users on a multi-user system (e.g., high performance computing) where both consistency and the opportunity to introduce newer features need to exist for different research groups. The two major environment modules systems are the older, Tcl-based system and the newer Lua-based system (Lmod). Both allow a default version of a software application to be set, which is placed in the $PATH when the module is invoked without a version. This can cause problems and should be disabled.&lt;/p&gt;
&lt;p&gt;With a standard module system, one can see the modules for an application. For example, using a LMod system:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ module av GCC/&lt;br /&gt;
------------------------------------ /apps/easybuild-2022/easybuild/modules/all/Core ----------------------------&lt;br /&gt;
   GCC/11.3.0 (L)    GCC/12.3.0     GCC/13.3.0 (D)&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This illustrates that, from the GNU Compiler suite (GCC), the module GCC/11.3.0 is loaded (the (L) marker; in this case as part of the login environment) and that GCC/13.3.0 is the default (the (D) marker) as part of a wider toolchain. That is, if the command &lt;/p&gt;
&lt;pre&gt;module load GCC&lt;/pre&gt;&lt;p&gt; is invoked, the existing environment $PATH will change so that it points to the binaries, libraries, etc. for GCC/13.3.0 instead of GCC/11.3.0. In the older, Tcl-based modules system, it was possible to have both versions loaded simultaneously; the most recent version would come first in the search path, but if a particular library, for example, couldn&#039;t be found there, the search would continue through whatever else was in the $PATH. That could be an interesting issue for the replication of results.&lt;/p&gt;
&lt;p&gt;Having a default obviously saves a few keystrokes, but problems can result. The major issue is that when new software is introduced by the local friendly HPC engineers, then the default changes. For example, the latest version of GCC, at the time of writing, is 15.1 (April 2025). If that was installed on a system, one would witness:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ module av GCC/&lt;br /&gt;
------------------------------------ /apps/easybuild-2022/easybuild/modules/all/Core ----------------------------&lt;br /&gt;
   GCC/11.3.0 (L)    GCC/12.3.0     GCC/13.3.0 	GCC/15.1.0 (D)&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Instead of loading GCC/13.3.0 from the &lt;/p&gt;
&lt;pre&gt;module load GCC&lt;/pre&gt;&lt;p&gt; command, LMod will add GCC/15.1.0 to the $PATH. Is this a problem? Most certainly, because when software (version, compiler) changes, the results can change. For example, &lt;a href=&quot;https://gcc.gnu.org/gcc-13/changes.html&quot;&gt;the release of GCC/13&lt;/a&gt; included the statement: &quot;-Ofast, -ffast-math and -funsafe-math-optimizations will no longer add startup code to alter the floating-point environment when producing a shared object with -shared.&quot; In a nutshell, this means that one can get different numerical results by compiling &lt;i&gt;the same code&lt;/i&gt; with GCC/12 and GCC/13, due to different ways the compiler handles rounding, precision, and exceptions in floating-point calculations. This could be very interesting if one cares about that supposed hallmark of science, &quot;reproducibility&quot;.&lt;/p&gt;
&lt;p&gt;To avoid this, at a user-level, one should always use the fully-qualified version of a software application. For example, instead of loading a module with &lt;/p&gt;
&lt;pre&gt;module load GCC&lt;/pre&gt;&lt;p&gt;, whether in an interactive session or job script, one should use the specific version, e.g., &lt;/p&gt;
&lt;pre&gt;module load GCC/13.3.0&lt;/pre&gt;&lt;p&gt;. Of course, sometimes users don&#039;t follow the friendly and sensible advice from their system engineers, and that&#039;s where a blunter tool needs to be invoked.&lt;/p&gt;
&lt;p&gt;Site-wide LMod configuration (and let&#039;s face it, LMod is the overwhelmingly dominant implementation of environment modules these days) occurs in the lmod_config.lua file, &lt;/p&gt;
&lt;pre&gt;$LMOD_DIR/../init/lmod_config.lua&lt;/pre&gt;&lt;p&gt;. Adding the following lines is sufficient:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
-- Do not automatically pick a version when multiple options are available&lt;br /&gt;
always_load_default = false&lt;br /&gt;
-- Do not automatically swap modules from the same family&lt;br /&gt;
auto_swap = false&lt;br /&gt;
-- Disallow default-version resolution and force the user to specify which version&lt;br /&gt;
site_defaults = {&lt;br /&gt;
    load_default = false,&lt;br /&gt;
}&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Another option is to export options in &lt;/p&gt;
&lt;pre&gt;/etc/profile.d/lmod.sh&lt;/pre&gt;&lt;p&gt; or equivalent. This avoids making changes to the lmod_config file.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
export LMOD_DISABLE_SAME_NAME_AUTOSWAP=yes&lt;br /&gt;
export LMOD_EXACT_MATCH=yes&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If a system is using EasyBuild for software installs (and many of us are), EasyBuild will create &lt;/p&gt;
&lt;pre&gt;.version&lt;/pre&gt;&lt;p&gt; files which include the default; this can be removed by changing the &lt;/p&gt;
&lt;pre&gt;easybuild.cfg&lt;/pre&gt;&lt;p&gt; file to have &lt;/p&gt;
&lt;pre&gt;module_install_defaults = False&lt;/pre&gt;&lt;p&gt;.&lt;/p&gt;
&lt;p&gt;One issue that arises from the above, from a support perspective, is that existing user expectations and job submission scripts may come into conflict with a new policy. For example, if a job submission script included &lt;/p&gt;
&lt;pre&gt;module load GCC&lt;/pre&gt;&lt;p&gt;, the job would fail, as LMod would require the user to specify which particular version of GCC they wish to load.&lt;/p&gt;
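&lt;p&gt;Before enforcing this, it is worth auditing existing job scripts for unqualified loads; a minimal sketch (the search path is illustrative):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
# List &quot;module load&quot; lines that lack a version (no &quot;/&quot; after the name)&lt;br /&gt;
$ grep -rn &quot;module load&quot; /home/*/job-scripts/ | grep -vE &quot;load +[^ ]+/&quot;&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;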
&lt;p&gt;Such failures are a valid concern in the short term, but in the longer term, the benefits of providing consistent environment variables, reproducibility, and educating users far outweigh the costs. Further, the longer one persists with the policy of allowing application defaults, the greater the damage.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Tue, 21 Oct 2025 05:57:21 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">799 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/799#comments</comments>
</item>
<item>
 <title>Command-Line CD Extraction and Formatting</title>
 <link>http://levlafayette.com/node/798</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;&lt;a href=&quot;https://lame.sourceforge.io/index.php&quot;&gt;LAME&lt;/a&gt; (a recursive acronym: LAME Ain&#039;t an MP3 Encoder) has been an exceptional application for creating MP3 audio files from other formats (e.g., WAV, FLAC, MPEG-1, MPEG-2) or even for re-encoding existing MP3 files. Creating MP3 files is popular due to their extensive use in portable media players, as they combine an impressive reduction in file size with minimal perceptible loss relative to uncompressed audio. LAME was first released 27 years ago, has been in regular development since, and is now bundled with the very popular audio editor &lt;a href=&quot;https://www.audacityteam.org/&quot;&gt;Audacity&lt;/a&gt; and other excellent audio conversion applications like &lt;a href=&quot;https://ffmpeg.org/&quot;&gt;FFmpeg&lt;/a&gt;. The following are several examples of how to use LAME on the command line, convert files, and automate this process, along with the use of cdparanoia and mp3wrap. This might be handy if you want to (for example) back up a collection of CDs.&lt;/p&gt;
&lt;p&gt;To extract files from existing media, use &lt;a href=&quot;https://xiph.org/paranoia/&quot;&gt;cdparanoia&lt;/a&gt;, a delightfully old and stable tool (the last release was September 2008) that acts as the front-end for libparanoia, a library also used in the cdrtools suite for the creation of audio and data CDs. To use cdparanoia to extract files from a disc, first query what drives you have, i.e., &lt;code&gt;$ cdparanoia -vsQ&lt;/code&gt;, a verbose search and query. Then, to extract the files, simply use &lt;code&gt;$ cdparanoia -B&lt;/code&gt;, batch mode, saving each track to a separate file. The default format is WAV; if one wants RAW, use &lt;code&gt;$ cdparanoia -rB&lt;/code&gt;. &lt;/p&gt;
&lt;p&gt;Using LAME to convert files, the simplest action is to convert a file with no options, e.g., &lt;code&gt;$ lame input.wav output.mp3&lt;/code&gt;. This can be extended with the numerous options offered by LAME. Perhaps the most common is -V (variable bit rate), e.g., &lt;code&gt;$ lame -V0 input.wav output.mp3&lt;/code&gt;. Values run from 0 to 9 (fractional values such as 9.999 are accepted), where 0 is the highest quality (but largest file size), which is good for very high-fidelity recordings, whilst 9.999 is the lowest, which is fine for low-fidelity needs (e.g., a monophonic voice recording); the default is 4. Run a test on a WAV file to see if you can detect the difference in audio quality and check the file size differences. Note that despite being a &lt;i&gt;good&lt;/i&gt; encoder, LAME is not recommended for archives; use a lossless audio codec (e.g., FLAC).&lt;/p&gt;
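&lt;p&gt;A quick way to run the suggested test (a minimal sketch; input.wav is a placeholder name):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
# Encode the same source at several VBR levels, then compare by ear and by size&lt;br /&gt;
$ for v in 0 4 9; do lame -V$v input.wav test-V$v.mp3; done&lt;br /&gt;
$ ls -lh test-V*.mp3&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;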
&lt;p&gt;If one has multiple files to convert, a short loop is an effective strategy (e.g., &lt;code&gt;$ for item in *.wav; do lame &quot;$item&quot; &quot;${item%.wav}.mp3&quot;; done&lt;/code&gt;). This is certainly preferable to opening files with a GUI application and running the conversion one file at a time. If one has a number of large files, then use of GNU Parallel is an excellent choice (e.g., &lt;code&gt;$ parallel lame -V0 {} {.}.mp3 ::: *.wav&lt;/code&gt;). On a sample CD, the loop conversion took 57.201 seconds; with parallel, 13.264 seconds; it is fairly clear how this makes a difference if you have a large number of CDs. Note that with a smaller number and size of files, the use of GNU Parallel will actually take a longer time than a loop due to the overhead of splitting up the tasks. Certainly, if you are converting the WAV files from a CD, use GNU Parallel. Note that GNU Parallel will use all the cores available on a system; the number of concurrent jobs can be limited with the -j option (e.g., &lt;code&gt;parallel -j2 lame -V0 {} {.}.mp3 ::: *.wav&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Finally, there is a great little tool called &lt;a href=&quot;https://mp3wrap.sourceforge.net/&quot;&gt;mp3wrap&lt;/a&gt;, which wraps multiple MP3 files into a single MP3 whilst keeping filenames and metadata, so the result can later be split back into the original files (using the mp3splt command). The output file will be named &lt;code&gt;OUTPUTFILE_MP3WRAP.mp3&lt;/code&gt;; keep the MP3WRAP string so that the split program detects that the file is wrapped with this utility. To create a single file, simply use &lt;code&gt;$ mp3wrap album.mp3 track*.mp3&lt;/code&gt;.&lt;/p&gt;
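&lt;p&gt;Putting the pieces together, a complete rip-and-encode session might look like the following (a minimal sketch; drive checks and error handling are omitted, and -V2 is an arbitrary quality choice):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ cdparanoia -vsQ                          # query the drive and table of contents&lt;br /&gt;
$ cdparanoia -B                            # rip each track to a separate WAV file&lt;br /&gt;
$ parallel lame -V2 {} {.}.mp3 ::: *.wav   # encode all tracks in parallel&lt;br /&gt;
$ mp3wrap album.mp3 track*.mp3             # optionally bundle into a single file&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;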
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 12 Oct 2025 00:47:48 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">798 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/798#comments</comments>
</item>
<item>
 <title>Research Software Engineering New Zealand Conference 2025</title>
 <link>http://levlafayette.com/node/797</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;The 2025 New Zealand Research Software Engineering Conference was held on September 23-24. It has run for almost a decade, starting in 2016 as the CRI Coding Conference before becoming the Science Coding Conference, a title it kept until 2020, when it became the Research Software Engineering Conference. Throughout this time, it has focused on science and engineering, high performance computing, and cloud computing. It is aimed at coders, sysadmins, software engineers, data analysts, and IT managers from public research institutions, and has the endorsement of the &lt;a href=&quot;https://rse-aunz.org/&quot;&gt;RSE Association&lt;/a&gt; of Australia and New Zealand.&lt;/p&gt;
&lt;p&gt;As &lt;a href=&quot;https://bpb-ap-se2.wpmucdn.com/blogs.auckland.ac.nz/dist/8/725/files/2025/09/RSENZ-2025-Programme-website-3.pdf&quot;&gt;the programme&lt;/a&gt; indicates, this year&#039;s selection of presentations and BoFs had a strong emphasis on machine learning and developments in artificial intelligence. This was evident right from the start with Nick Jones&#039; keynote address on the first day, and with numerous presentations throughout the conference. A concern must be raised when this pivot to AI/ML involves recursive comparative testing against other AI/ML systems. Ultimately, the validity of computational modelling must come not only from the quality of the inputs but from the real-world predictive (and hindcasting) value of the outputs. &lt;/p&gt;
&lt;p&gt;Thankfully, there was little in the way of &quot;AI/ML marketing hype&quot; at this conference, which really was firmly dedicated to actual computational practice, development of skills and knowledge, and research outputs. Further, there was only a moderate amount of theory, and that&#039;s primarily for foundations, as it should be. Being in Aotearoa New Zealand, it is perhaps unsurprising that there were several presentations on Earth sciences, but also of note was the emphasis on climatology, oceanography, and new developments in forensics.&lt;/p&gt;
&lt;p&gt;Interestingly, Australian research software development and education were present in a number of presentations, including speakers from WEHI and CSIRO. Research Computing Services at the University of Melbourne received a surprising highlight, with my own presentation, &lt;a href=&quot;https://levlafayette.com/files/2025NZRSE.pdf&quot;&gt;&quot;Programming Principles in a High Performance Computing Environment&quot;&lt;/a&gt;, being the first presentation of the conference and Daniel Tosello&#039;s &quot;VSCode on the node&quot; the third. It bodes well for current and future Trans-Tasman collaboration.&lt;/p&gt;
&lt;p&gt;Finally, the Research Software Engineering New Zealand Conference included a small number of explicit community-building presentations. In conferences such as these, individual presentations are often of special interest, but a much wider and latent benefit is awareness of the directions that other institutions and individuals are taking, along with the strengthening of professional connections, a critical requirement not only for an individual&#039;s development, but also for raising the collective knowledge of the institutions to which they belong.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Tue, 30 Sep 2025 06:49:44 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">797 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/797#comments</comments>
</item>
<item>
 <title>Parallel wget with xargs or parallel</title>
 <link>http://levlafayette.com/node/796</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;The following is an illustration of how to use xargs to conduct parallel operations on single-threaded applications, specifically wget.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://git.savannah.gnu.org/cgit/wget.git&quot;&gt;GNU wget&lt;/a&gt; is a great tool for downloading content from websites. The wget command is a non-interactive network downloader; by &quot;non-interactive&quot;, what is meant is that it can be run in the background. Some very handy options include -c (continue, for partially downloaded files), -m (mirror, for an entire website), and -r --no-parent (recursive, no parent, to download part of a website and its subdirectories). The cURL application supports a wider range of protocols and includes upload options, but is non-recursive. &lt;/p&gt;
&lt;p&gt;Recently, I had the need to download a small number of PDF files. The wildcard-based approach would be:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$ wget -r -nd --no-parent -A &#039;rpgreview_*.pdf&#039; http://rpgreview.net/files/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The -r and --no-parent options have already been explained. The -nd option allows one to save all files to the current directory, without a hierarchy of directories. The -A option (&#039;accept&#039;; or -R, &#039;reject&#039;) allows one to specify comma-separated lists of file name suffixes or patterns to accept or reject. Note that if any of the wildcard characters (*, ?, or ranges in []) appear in an element of the acclist or rejlist, it will be treated as a pattern rather than a suffix.&lt;/p&gt;
&lt;p&gt;Running the above has the following time:&lt;/p&gt;
&lt;p&gt;real	2m19.353s&lt;br /&gt;
user	0m0.836s&lt;br /&gt;
sys	0m2.998s&lt;/p&gt;
&lt;p&gt;An alternative, looping through each file one at a time, would have been something like:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
for issue in {1..53}&lt;br /&gt;
do&lt;br /&gt;
    wget &quot;https://rpgreview.net/files/rpgreview_$issue.pdf&quot;&lt;br /&gt;
done&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;(Just for the record, wget &lt;a href=&quot;https://unix.stackexchange.com/questions/117988/wget-with-wildcards-in-http-downloads&quot;&gt;can get a bit gnarly&lt;/a&gt; when dealing with http requests because for some webservers there is no requirement for path delimiters to match directory delimiters. For the purposes of this discussion it is assumed that we&#039;re dealing with a rational being where the two are equivalent.)&lt;/p&gt;
&lt;p&gt;Using the printf command, a list of the URLs can be constructed, which is then passed to xargs, which can split the list to run in parallel. &lt;/p&gt;
&lt;p&gt;By itself, xargs simply reads items from standard input, delimited by blanks or newlines, and executes a command with those items as arguments. This is somewhat different to the pipe operator which, by itself, sends the output of one command as the input stream of another. In contrast, xargs takes data from standard input and executes a command to which, by default, the data is appended as arguments at the end. However, the data can be inserted anywhere in the command using a placeholder for the input; the typical placeholder is {}.&lt;/p&gt;
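&lt;p&gt;A trivial illustration of the placeholder:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ printf &quot;%s\n&quot; one two three | xargs -I {} echo &quot;item: {}&quot;&lt;br /&gt;
item: one&lt;br /&gt;
item: two&lt;br /&gt;
item: three&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;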
&lt;p&gt;In the example below, the value -P 8 (up to eight parallel processes) is entirely arbitrary and should be modified according to available resources. The -n 1 option ensures that only one argument is passed per invocation. Adding wget&#039;s -nc (no-clobber) option prevents the same file being downloaded more than once (without it, wget will not overwrite an existing file, but rather save a new copy with a .1 suffix, etc.).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;printf &quot;https://rpgreview.net/files/rpgreview_%d.pdf\n&quot; {1..53} | xargs -n 1 -P 8 wget -q -nc&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The time of the above comes to:&lt;/p&gt;
&lt;p&gt;real	1m23.534s&lt;br /&gt;
user	0m1.567s&lt;br /&gt;
sys	0m2.732s&lt;/p&gt;
&lt;p&gt;Yet another choice is to use GNU parallel and seq.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;seq 53 | parallel -j8 wget &quot;https://rpgreview.net/files/rpgreview_{}.pdf&quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
real	1m57.647s&lt;br /&gt;
user	0m1.830s&lt;br /&gt;
sys	0m4.214s&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;A final option, most common in high-performance computing systems with job schedulers, is to make use of a job array. Assuming resource availability, this is a very effective option if each task in the array takes more than a couple of minutes (given that there is an overhead involved in constructing the job, submitting it to the queue, etc.). In Slurm, the script directives and code would look like the following:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=&quot;file-array&quot;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
#SBATCH --array=1-53&lt;br /&gt;
wget &quot;https://rpgreview.net/files/rpgreview_${SLURM_ARRAY_TASK_ID}.pdf&quot;&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 14 May 2025 02:12:12 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">796 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/796#comments</comments>
</item>
<item>
 <title>Quantum Computing and Quantum Computers</title>
 <link>http://levlafayette.com/node/795</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;&lt;img src=&quot;http://levlafayette.com/files/system1.jpg&quot; align=&quot;left&quot; vspace=&quot;10&quot; hspace=&quot;10&quot; height=&quot;50%&quot; width=&quot;50%&quot; /&gt;&lt;br /&gt;
It is appropriate, on &lt;a href=&quot;https://worldquantumday.org/&quot;&gt;World Quantum Day&lt;/a&gt;, to talk about quantum computing and quantum computers, as the two are often confused. Quantum computing is any method to generate quantum effects whereby qubit states can exist in superposition (0, 1, or both) rather than binary states (0, 1). Binary states are represented in classical computing in low-level software as logical 0s and 1s, but in hardware as high and low-voltage states.&lt;/p&gt;
&lt;p&gt;The typical system to do quantum computing, or at least simulate it, is usually High Performance Computing (HPC). That works; it is a proven technology with a rate of return of &lt;a href=&quot;https://www.hpcwire.com/2020/09/07/the-roi-on-hpc-44-in-profit-for-every-1-in-hpc/&quot;&gt;$44 per $1 invested&lt;/a&gt; - and higher when COVID research is considered. The development of HPC clusters with message passing is one of the most successful technological developments in computing in the last thirty years.&lt;/p&gt;
&lt;p&gt;In contrast, a quantum computer directly uses a quantum mechanical system and requires appropriate specialised hardware. For example, GENCI in France uses a photonic computer, LRZ in Germany uses superconducting qubits, PSNC in Poland uses trapped ions, etc. David P. DiVincenzo outlines &lt;a href=&quot;https://arxiv.org/abs/quant-ph/0002077&quot;&gt;the most significant physical challenges&lt;/a&gt; that face quantum computers, regardless of what technology is used; these include scaling qubits, initialisation of values, developing a universal set of gates for the construction of quantum operations, developing gates that are faster than decoherence from a quantum state due to environmental interactions, and reading qubits (especially considering that reading can alter the quantum state).&lt;/p&gt;
&lt;p&gt;As a result, classical computers outperform quantum computers in all real-world applications. Not only that, there is a serious issue of whether quantum computers will ever be able to outperform classical computers. Mikhail Dyakonov &lt;a href=&quot;https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing&quot;&gt;points out&lt;/a&gt; that the rudimentary qubits used in quantum computing systems are insufficient for useful calculations.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;
&quot;Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That&#039;s a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.&quot;
&lt;/p&gt;&lt;/blockquote&gt;
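&lt;p&gt;That figure is easy to verify on the command line: 2^1,000 has 302 decimal digits, i.e., roughly 10^301, confirming the astronomic order of magnitude in the quote (bc wraps long output with backslash-newlines, hence the tr):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
$ echo &quot;2^1000&quot; | bc | tr -d &#039;\\\n&#039; | wc -c&lt;br /&gt;
302&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;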
&lt;p&gt;The promise of quantum computers is, of course, very significant in theory. In theory, a quantum computer can perform some calculations incredibly fast, and the larger the task, the more impressive the result, to the extent that common secure encryption systems could be broken, alongside the more prosaic use in quantum simulations. In reality, the physical implementation has been more than challenging, to put it mildly. Classical computers can, in principle, solve the same problems as quantum computers; for a classical computer, the problem is the sheer quantity of time required, whilst for quantum computers, the problem is the implementation in reality. For the time being, and for the foreseeable future, it seems that quantum computing will continue to be done on classical computers. &lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-upload field-type-file field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;table class=&quot;sticky-enabled&quot;&gt;
 &lt;thead&gt;&lt;tr&gt;&lt;th&gt;Attachment&lt;/th&gt;&lt;th&gt;Size&lt;/th&gt; &lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
 &lt;tr class=&quot;odd&quot;&gt;&lt;td&gt;&lt;span class=&quot;file&quot;&gt;&lt;img class=&quot;file-icon&quot; alt=&quot;Image icon&quot; title=&quot;image/jpeg&quot; src=&quot;/modules/file/icons/image-x-generic.png&quot; /&gt; &lt;a href=&quot;http://levlafayette.com/files/system1.jpg&quot; type=&quot;image/jpeg; length=194369&quot; title=&quot;system1.jpg&quot;&gt;IBM System 1 Quantum Computer&lt;/a&gt;&lt;/span&gt;&lt;/td&gt;&lt;td&gt;189.81 KB&lt;/td&gt; &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Mon, 14 Apr 2025 13:13:36 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">795 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/795#comments</comments>
</item>
<item>
 <title>Scheduler Parallelisation</title>
 <link>http://levlafayette.com/node/794</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot;&gt;&lt;p&gt;The standard computing model uses single-threaded instructions and data, with automation through looping and conditional branching. Automation is encouraged, as it results in the computer doing the work that it is designed for. However, this can be inefficient when using a multicore system. An alternative in HPC systems is to make use of job arrays, which use a parent job to allocate resources to sub-jobs that can be individually controlled, whether directed toward instruction sets or datasets. Further, job arrays can be combined with job dependencies, allowing for conditional chains of job submission and runs. Finally, job arrays can be simulated through the use of heredocs with looped submission, as sketched below. This may even allow more familiar control with shell scripting.&lt;/p&gt;
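&lt;p&gt;As an illustration of that last point, a minimal sketch of heredoc-based looped submission under Slurm (the filenames, resource requests, and the process script are placeholders):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Submit one job per input file; sbatch reads each heredoc as a job script on stdin&lt;br /&gt;
for input in data_*.csv&lt;br /&gt;
do&lt;br /&gt;
sbatch &lt;&lt;EOF&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=&quot;${input}&quot;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
./process.sh &quot;${input}&quot;&lt;br /&gt;
EOF&lt;br /&gt;
done&lt;br /&gt;
&lt;/code&gt;&lt;/p&gt;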
&lt;p&gt;This slidedeck is derived from a presentation to the University of Melbourne&#039;s &quot;Spartan Champions&quot; group on March 7, 2025.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://levlafayette.com/files/2025Champions_Arrays.pdf&quot;&gt;https://levlafayette.com/files/2025Champions_Arrays.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Mon, 17 Mar 2025 11:51:34 +0000</pubDate>
 <dc:creator>lev_lafayette</dc:creator>
 <guid isPermaLink="false">794 at http://levlafayette.com</guid>
 <comments>http://levlafayette.com/node/794#comments</comments>
</item>
</channel>
</rss>
