<?xml version="1.0" encoding="UTF-8" standalone="no"?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0"><channel><title>IEE/CSE 598: Bio-Inspired AI and Optimization</title><description>Archived lectures from a graduate course on nature-inspired metaheuristics given at Arizona State University by Ted Pavlic </description><managingEditor>noreply@blogger.com (Ted Pavlic)</managingEditor><pubDate>Thu, 7 May 2026 13:55:50 -0700</pubDate><generator>Blogger http://www.blogger.com</generator><openSearch:totalResults xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">117</openSearch:totalResults><openSearch:startIndex xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">1</openSearch:startIndex><openSearch:itemsPerPage xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">25</openSearch:itemsPerPage><link>https://asu-iee598-bioinspired.blogspot.com/search/label/podcast</link><language>en-us</language><itunes:explicit>no</itunes:explicit><copyright>Copyright (c) 2020 by Theodore P. Pavlic</copyright><itunes:image href="https://www.dropbox.com/s/dl/wlt5o25b3rwqhd9/2000px-Newton_optimization_vs_grad_descent.svg-cropped.png"/><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords><itunes:summary>Graduate-level survey of a variety of nature-inspired metaheuristics for optimization, as well as some physically embodied multi-agent systems techniques (such as stochastic robotics). Course (IEE/CSE 598) taught by Theodore Pavlic at Arizona State University.</itunes:summary><itunes:subtitle>IEE/CSE 598@ASU: Bio-Inspired AI and Optimization</itunes:subtitle><itunes:category text="Education"><itunes:category text="Higher Education"/></itunes:category><itunes:author>Theodore P. 
Pavlic</itunes:author><itunes:owner><itunes:email>ted@tedpavlic.com</itunes:email><itunes:name>Theodore P. Pavlic</itunes:name></itunes:owner><item><title>Lecture 8A (2026-04-30): Complex Systems Models of Computation – Cellular Automata and Neighbors</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/blog-post.html</link><category>podcast</category><pubDate>Thu, 30 Apr 2026 23:16:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-2494641692250125646</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;In this (bonus) lecture, we discuss distributed, spatially explicit models of computation that come from the complex systems community. We start with a brief introduction to interacting particle systems (IPS), with a specific focus on the voter model. The voter model is simultaneously a model of neutral evolution (genetic drift leading to fixation) and a basic model of consensus/agreement in opinion dynamics. We discuss the voter model in 1, 2, and 3+ dimensions. To analyze these dynamics, we introduce a dual of the voter model that focuses on "contact tracing" of opinion provenance, which leads to a time-reversed set of coalescing Markov chains. From this perspective, studying the probability of consensus is equivalent to studying the probability of Markov chains intersecting (Pólya's theorem). This implies that while 1D and 2D voter models are guaranteed to come to consensus, the same cannot be said of 3D or higher. After this result, we pivot to introducing cellular automata, and specifically 1D elementary cellular automata (ECA). We discuss how ECAs are named and operate, we highlight several key ECA rules and their properties, and we close by using lessons learned from ECAs to connect back to the niching methods for GAs that we introduced in our first unit.
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Interactive demonstrations referenced in this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Voter Model (and Consensus Dynamics) Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/voter_model.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/voter_model.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Elementary Cellular Automaton Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/eca_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/eca_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/q9z3tqmv1wq57ki3kg1q5/IEE598-Lecture8A-2026-04-30-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=x4zwop6e7swkdugkejyx7s1st&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/q9z3tqmv1wq57ki3kg1q5/IEE598-Lecture8A-2026-04-30-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=x4zwop6e7swkdugkejyx7s1st&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/I6Kw1ynzbqs" width="320" youtube-src-id="I6Kw1ynzbqs"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/t1e0pz5f2tu80oicdr64o/IEE598-Lecture8A-2026-04-30-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-audio_only.mp3?rlkey=5n5md14n17ig2cdinejoxsnv0&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/I6Kw1ynzbqs/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this (bonus) lecture, we discuss distributed, spatially explicit models of computation that come from the complex systems community. We start with a brief introduction to interacting particle systems (IPS), with a specific focus on the voter model. The voter model is simultaneously a model of neutral evolution (genetic drift leading to fixation) and a basic model of consensus/agreement in opinion dynamics. We discuss the voter model in 1, 2, and 3+ dimensions. To analyze these dynamics, we introduce a dual of the voter model that focuses on "contact tracing" of opinion provenance, which leads to a time-reversed set of coalescing Markov chains. From this perspective, studying the probability of consensus is equivalent to studying the probability of Markov chains intersecting (Pólya's theorem). This implies that while 1D and 2D voter models are guaranteed to come to consensus, the same cannot be said of 3D or higher. After this result, we pivot to introducing cellular automata, and specifically 1D elementary cellular automata (ECA). We discuss how ECAs are named and operate, we highlight several key ECA rules and their properties, and we close by using lessons learned from ECAs to connect back to the niching methods for GAs that we introduced in our first unit. Interactive demonstrations referenced in this lecture can be found at: Voter Model (and Consensus Dynamics) Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/voter_model.html Elementary Cellular Automaton Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/eca_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/q9z3tqmv1wq57ki3kg1q5/IEE598-Lecture8A-2026-04-30-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=x4zwop6e7swkdugkejyx7s1st&amp;dl=0</itunes:subtitle><itunes:author>Theodore P. 
Pavlic</itunes:author><itunes:summary>In this (bonus) lecture, we discuss distributed, spatially explicit models of computation that come from the complex systems community. We start with a brief introduction to interacting particle systems (IPS), with a specific focus on the voter model. The voter model is simultaneously a model of neutral evolution (genetic drift leading to fixation) and a basic model of consensus/agreement in opinion dynamics. We discuss the voter model in 1, 2, and 3+ dimensions. To analyze these dynamics, we introduce a dual of the voter model that focuses on "contact tracing" of opinion provenance, which leads to a time-reversed set of coalescing Markov chains. From this perspective, studying the probability of consensus is equivalent to studying the probability of Markov chains intersecting (Pólya's theorem). This implies that while 1D and 2D voter models are guaranteed to come to consensus, the same cannot be said of 3D or higher. After this result, we pivot to introducing cellular automata, and specifically 1D elementary cellular automata (ECA). We discuss how ECAs are named and operate, we highlight several key ECA rules and their properties, and we close by using lessons learned from ECAs to connect back to the niching methods for GAs that we introduced in our first unit. 
Interactive demonstrations referenced in this lecture can be found at: Voter Model (and Consensus Dynamics) Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/voter_model.html Elementary Cellular Automaton Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/cellular_automata/eca_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/q9z3tqmv1wq57ki3kg1q5/IEE598-Lecture8A-2026-04-30-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=x4zwop6e7swkdugkejyx7s1st&amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7G (2026-04-30): Spiking Neural Networks and Neuromorphic Computing</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7g-2026-04-30-spiking-neural.html</link><category>podcast</category><pubDate>Thu, 30 Apr 2026 19:05:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-6693149017920587693</guid><description>&lt;p&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;In this lecture, we introduce spiking neural networks and neuromorphic computing, starting with a refresher of the biological neuron and an introduction to Carver Mead, one of the founders of modern neuromorphic computing. We discuss the Leaky Integrate-and-Fire (LIF) model for a spiking neuron and spike-timing-dependent plasticity (STDP) for (unsupervised) learning of these neurons (temporary/working memory). We focus on rate coding and show examples of rate-coded signals as inputs and outputs from LIF neurons. 
We introduce SNN implementations from SpiNNaker to IBM TrueNorth to Intel Loihi and a memristor crossbar array example published in 2017 that shows unsupervised STDP learning. We then pivot to show that Hebbian updating in traditional ANNs can also perform this task (albeit possibly not as efficiently as an SNN implementation). We close with some comments about the possible future of SNNs.
&lt;/span&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;Interactive widgets referenced in this lecture can be found at:&lt;/span&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;Spiking Neural Network Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/spiking_neural_networks/snn_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/spiking_neural_networks/snn_explorer.html&lt;/a&gt;

&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;Memristor Crossbar Array Unsupervised STDP Learning (with Latent Inhibition): &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html&lt;/a&gt;

&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;ANN Unsupervised STDP Learning (with Latent Inhibition): &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/7fpnrrriu4ez0sbfneyhm/IEE598-Lecture7G-2026-04-30-Spiking_Neural_networks_and_Neuromorphic_Computing-Notes.pdf?rlkey=9mdotvp12ka5g9j4dzoi9qloi&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/7fpnrrriu4ez0sbfneyhm/IEE598-Lecture7G-2026-04-30-Spiking_Neural_networks_and_Neuromorphic_Computing-Notes.pdf?rlkey=9mdotvp12ka5g9j4dzoi9qloi&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;div&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/mUucpWMPIlo" width="320" youtube-src-id="mUucpWMPIlo"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;
&lt;/span&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/p&gt;&lt;div&gt;&lt;span face="Roboto, Noto, sans-serif" style="background-color: white; color: #0d0d0d; font-size: 15px; white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/he4tum4c3jks37b8fnbf4/IEE598-Lecture7G-2026-04-30-Spiking_Neural_networks_and_Neuromorphic_Computing-audio_only.mp3?rlkey=vwpemrob47fsscy9lj46wu92r&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/mUucpWMPIlo/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we introduce spiking neural networks and neuromorphic computing, starting with a refresher of the biological neuron and an introduction to Carver Mead, one of the founders of modern neuromorphic computing. We discuss the Leaky Integrate-and-Fire (LIF) model for a spiking neuron and spike-timing-dependent plasticity (STDP) for (unsupervised) learning of these neurons (temporary/working memory). 
We focus on rate coding and show examples of rate-coded signals as inputs and outputs from LIF neurons. We introduce SNN implementations from SpiNNaker to IBM TrueNorth to Intel Loihi and a memristor crossbar array example published in 2017 that shows unsupervised STDP learning. We then pivot to show that Hebbian updating in traditional ANNs can also perform this task (albeit possibly not as efficiently as an SNN implementation). We close with some comments about the possible future of SNNs. Interactive widgets referenced in this lecture can be found at: Spiking Neural Network Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/spiking_neural_networks/snn_explorer.html Memristor Crossbar Array Unsupervised STDP Learning (with Latent Inhibition): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html ANN Unsupervised STDP Learning (with Latent Inhibition): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7fpnrrriu4ez0sbfneyhm/IEE598-Lecture7G-2026-04-30-Spiking_Neural_networks_and_Neuromorphic_Computing-Notes.pdf?rlkey=9mdotvp12ka5g9j4dzoi9qloi&amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we introduce spiking neural networks and neuromorphic computing, starting with a refresher of the biological neuron and an introduction to Carver Mead, one of the founders of modern neuromorphic computing. We discuss the Leaky Integrate-and-Fire (LIF) model for a spiking neuron and spike-timing-dependent plasticity (STDP) for (unsupervised) learning of these neurons (temporary/working memory). We focus on rate coding and show examples of rate-coded signals as inputs and outputs from LIF neurons. 
We introduce SNN implementations from SpiNNaker to IBM TrueNorth to Intel Loihi and a memristor crossbar array example published in 2017 that shows unsupervised STDP learning. We then pivot to show that Hebbian updating in traditional ANNs can also perform this task (albeit possibly not as efficiently as an SNN implementation). We close with some comments about the possible future of SNNs. Interactive widgets referenced in this lecture can be found at: Spiking Neural Network Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/spiking_neural_networks/snn_explorer.html Memristor Crossbar Array Unsupervised STDP Learning (with Latent Inhibition): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html ANN Unsupervised STDP Learning (with Latent Inhibition): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7fpnrrriu4ez0sbfneyhm/IEE598-Lecture7G-2026-04-30-Spiking_Neural_networks_and_Neuromorphic_Computing-Notes.pdf?rlkey=9mdotvp12ka5g9j4dzoi9qloi&amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7F (2026-04-28): Predictive Coding, Latent Learning, and Self-Supervised Learning</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7f-2026-04-28-predictive-coding.html</link><category>podcast</category><pubDate>Tue, 28 Apr 2026 20:54:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-8138496081907577125</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;In this lecture, we pivot from our discussion of the 
autoencoder as an example of unsupervised learning to an introduction to predictive coding, latent learning, and ultimately self-supervised learning (like pre-trained transformers, including BERT and GPT). A key historical example described is the case of Tolman's rats and their "latent learning" of a "cognitive map" that allowed them to more quickly learn the location of a reward when it was presented in a later trial. We connect this with the modern pre-training of large language models (LLMs), which gives them the ability to make later inferences that benefit from long-range relationships they learned (by way of complex attention heads) without retraining. We close with some remarks about large multimodal models and their connection with embedding spaces like CLIP (which we introduced earlier as we transitioned from the opening example of the autoencoder).
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Interactive widgets mentioned/used in this lecture can be found at:&lt;br /&gt;&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Autoencoder Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Transformer Architecture Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Toward Multimodal AI: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a 
href="https://www.dropbox.com/scl/fi/pihkdryix5e4ynqy5zotx/IEE598-Lecture7F-2026-04-28-Predictive_Coding_Latent_Learning_and_Self_Supervised_Learning-Notes.pdf?rlkey=b0ejd4usvqn4fpievga9sbl4m&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/pihkdryix5e4ynqy5zotx/IEE598-Lecture7F-2026-04-28-Predictive_Coding_Latent_Learning_and_Self_Supervised_Learning-Notes.pdf?rlkey=b0ejd4usvqn4fpievga9sbl4m&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/8Bg6M4OSBk8" width="320" youtube-src-id="8Bg6M4OSBk8"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;/span&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/9aiyyb28dsyxu3935ixk4/IEE598-Lecture7F-2026-04-28-Predictive_Coding_Latent_Learning_and_Self_Supervised_Learning-audio_only.mp3?rlkey=vvuj8n9u5ez8b0jp9uo8ghu3u&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/8Bg6M4OSBk8/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we pivot from our discussion of the autoencoder as an example of unsupervised learning to an introduction to predictive coding, latent learning, and ultimately self-supervised learning (like pre-trained transformers, including BERT and GPT). A key historical example described is the case of Tolman's rats and their "latent learning" of a "cognitive map" that allowed them to more quickly learn the location of a reward when it was presented in a later trial. We connect this with the modern pre-training of large language models (LLMs), which gives them the ability to make later inferences that benefit from long-range relationships they learned (by way of complex attention heads) without retraining. We close with some remarks about large multimodal models and their connection with embedding spaces like CLIP (which we introduced earlier as we transitioned from the opening example of the autoencoder). Interactive widgets mentioned/used in this lecture can be found at: Autoencoder Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html Transformer Architecture Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html Toward Multimodal AI: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/pihkdryix5e4ynqy5zotx/IEE598-Lecture7F-2026-04-28-Predictive_Coding_Latent_Learning_and_Self_Supervised_Learning-Notes.pdf?rlkey=b0ejd4usvqn4fpievga9sbl4m&amp;dl=0</itunes:subtitle><itunes:author>Theodore P. 
Pavlic</itunes:author><itunes:summary>In this lecture, we pivot from our discussion of the autoencoder as an example of unsupervised learning to an introduction to predictive coding, latent learning, and ultimately self-supervised learning (like pre-trained transformers, including BERT and GPT). A key historical example described is the case of Tolman's rats and their "latent learning" of a "cognitive map" that allowed them to more quickly learn the location of a reward when it was presented in a later trial. We connect this with the modern pre-training of large language models (LLMs), which gives them the ability to make later inferences that benefit from long-range relationships they learned (by way of complex attention heads) without retraining. We close with some remarks about large multimodal models and their connection with embedding spaces like CLIP (which we introduced earlier as we transitioned from the opening example of the autoencoder). Interactive widgets mentioned/used in this lecture can be found at: Autoencoder Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html Transformer Architecture Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html Toward Multimodal AI: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/pihkdryix5e4ynqy5zotx/IEE598-Lecture7F-2026-04-28-Predictive_Coding_Latent_Learning_and_Self_Supervised_Learning-Notes.pdf?rlkey=b0ejd4usvqn4fpievga9sbl4m&amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7E (2026-04-23): Natural Learning Experiences – Reinforcement and Unsupervised 
Learning</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7e-2026-04-23-natural-learning.html</link><category>podcast</category><pubDate>Thu, 23 Apr 2026 22:58:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-8049269631686231103</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;In this lecture, we introduce Temporal Difference (TD) Q-learning and Deep Q Networks, starting with an analogy to how ants encode estimates of reward for state–action pairs in pheromone trails in the environment (another way to store a "Q" table in a network). We then pivot to discussing unsupervised learning – including both clustering and multi-dimensional scaling. After discussing PCA and t-SNE (briefly), we pivot to describing the deep autoencoder and show an example of its use in an MNIST-like clustering task.
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Interactive demonstrations mentioned in this lecture include:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;* Marginal Value Theorem Explorer (to better understand discount rate): &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/optimal_foraging_theory/mvt_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/optimal_foraging_theory/mvt_explorer.html&lt;/a&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;* Autoencoder Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html&lt;/a&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/assv5cheln8xqp2tvzj1k/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes.pdf?rlkey=a3iyshlufzgkyfxl7gby85pe2&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/assv5cheln8xqp2tvzj1k/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes.pdf?rlkey=a3iyshlufzgkyfxl7gby85pe2&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;An unabridged version of the whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/lgibsff4lhh0ezb1lnm41/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes-Full.pdf?rlkey=jayejujed8ervq8zsrddi1vcr&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/lgibsff4lhh0ezb1lnm41/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes-Full.pdf?rlkey=jayejujed8ervq8zsrddi1vcr&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/DPIbp4e5kAo" width="320" youtube-src-id="DPIbp4e5kAo"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;p&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/vj3m3iectpuwnwijvwd1f/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-audio_only.mp3?rlkey=opo5lm8gxwwiuoapwzdotnvzs&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/DPIbp4e5kAo/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we introduce Temporal Difference (TD) Q-learning and Deep Q Networks, starting with an analogy to how ants encode estimates of reward for state–action pairs in pheromone trails in the environment (another way to store a "Q" table in a network). We then pivot to discussing unsupervised learning – including both clustering and multi-dimensional scaling. After discussing PCA and t-SNE (briefly), we pivot to describing the deep autoencoder and show an example of its use in an MNIST-like clustering task. Interactive demonstrations mentioned in this lecture include: * Marginal Value Theorem Explorer (to better understand discount rate): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/optimal_foraging_theory/mvt_explorer.html * Autoencoder Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/assv5cheln8xqp2tvzj1k/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes.pdf?rlkey=a3iyshlufzgkyfxl7gby85pe2&amp;amp;dl=0 An unabridged version of the whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/lgibsff4lhh0ezb1lnm41/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes-Full.pdf?rlkey=jayejujed8ervq8zsrddi1vcr&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we introduce Temporal Difference (TD) Q-learning and Deep Q Networks, starting with an analogy to how ants encode estimates of reward for state–action pairs in pheromone trails in the environment (another way to store a "Q" table in a network). We then pivot to discussing unsupervised learning – including both clustering and multi-dimensional scaling. 
After discussing PCA and t-SNE (briefly), we pivot to describing the deep autoencoder and show an example of its use in an MNIST-like clustering task. Interactive demonstrations mentioned in this lecture include: * Marginal Value Theorem Explorer (to better understand discount rate): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/optimal_foraging_theory/mvt_explorer.html * Autoencoder Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/unsupervised_learning/autoencoder_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/assv5cheln8xqp2tvzj1k/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes.pdf?rlkey=a3iyshlufzgkyfxl7gby85pe2&amp;amp;dl=0 An unabridged version of the whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/lgibsff4lhh0ezb1lnm41/IEE598-Lecture7E-2026-04-23-Natural_Learning_Experiences-Reinforcement_and_Unsupervised_Learning-Notes-Full.pdf?rlkey=jayejujed8ervq8zsrddi1vcr&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7D (2026-04-21): RNN's – Backpropagation Through Time (BPTT), Long Short Term Memory (LSTM), and Reservoir Computing/Echo State Networks (ESNs)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7d-2026-04-21-rnns.html</link><category>podcast</category><pubDate>Tue, 21 Apr 2026 13:36:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5831696967126153210</guid><description>&lt;p&gt;In this lecture, we continue our discussion of Recurrent Neural Networks (RNN's) as generalized forms of Time Delay Neural Network (TDNN) that can do time-series classification (and prediction) using an inductive bias that can pull in information from a wide 
range of times (well beyond the simple size of the neural network, due to the use of output feedback to maintain state). We discuss how these networks can be trained with Backpropagation Through Time (BPTT) and some limitations of this approach. This motivates the more constrained Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures, which mitigate some issues with training general RNN's. We then pivot to a different approach entirely -- using recurrent neural networks as untrained "reservoirs" whose outputs are dynamical encoders that spread out temporal patterns into spatial ones that can be learned with a single-layer perceptron. We demonstrate this using an Echo State Network (ESN) and walk through how even small networks can provide significant separability for time series. We also have a discussion of how these approaches can be used for predicting chaotic time series, with applications in finance as well as digital twins (e.g., for manufacturing systems).&lt;/p&gt;&lt;p&gt;Interactive demonstrations connected to this lecture can be found at:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;Multi-Layer Perceptron and Backpropagation Explainer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/multi_layer_perceptron/mlp_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/multi_layer_perceptron/mlp_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Recurrent Neural Networks (and BPTT and LSTM) Explainer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Reservoir Computing/Echo State Networks Explorer: &lt;a 
href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/reservoir_computing/esn_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/reservoir_computing/esn_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/jdwe24zsmevhmaxl7x954/IEE598-Lecture7D-2026-04-21-RNNs-BPTT_LSTM_and_Reservoir_Computing-Notes.pdf?rlkey=22dv6950zcsjl98e0o96de11q&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/jdwe24zsmevhmaxl7x954/IEE598-Lecture7D-2026-04-21-RNNs-BPTT_LSTM_and_Reservoir_Computing-Notes.pdf?rlkey=22dv6950zcsjl98e0o96de11q&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/gkVV8hfTT3w" width="320" youtube-src-id="gkVV8hfTT3w"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/var0m4hsrfqlq0t9vhttf/IEE598-Lecture7D-2026-04-21-RNNs-BPTT_LSTM_and_Reservoir_Computing-audio_only.mp3?rlkey=y9sarb5ln4pbdhu7r4o7tsigw&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/gkVV8hfTT3w/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we continue our discussion of Recurrent Neural Networks (RNN's) as generalized forms of Time Delay Neural Network (TDNN) that can do time-series classification (and prediction) using an inductive bias that can pull in information from a wide range of times (well beyond the simple size of the neural network, due to the use of output feedback to maintain state). We discuss how these networks can be trained with Backpropagation Through Time (BPTT) and some limitations of this approach. This motivates the more constrained Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures, which mitigate some issues with training general RNN's. We then pivot to a different approach entirely -- using recurrent neural networks as untrained "reservoirs" whose outputs are dynamical encoders that spread out temporal patterns into spatial ones that can be learned with a single-layer perceptron. We demonstrate this using an Echo State Network (ESN) and walk through how even small networks can provide significant separability for time series. We also have a discussion of how these approaches can be used for predicting chaotic time series, with applications in finance as well as digital twins (e.g., for manufacturing systems). 
Interactive demonstrations connected to this lecture can be found at: Multi-Layer Perceptron and Backpropagation Explainer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/multi_layer_perceptron/mlp_explorer.html Recurrent Neural Networks (and BPTT and LSTM) Explainer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html Reservoir Computing/Echo State Networks Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/reservoir_computing/esn_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/jdwe24zsmevhmaxl7x954/IEE598-Lecture7D-2026-04-21-RNNs-BPTT_LSTM_and_Reservoir_Computing-Notes.pdf?rlkey=22dv6950zcsjl98e0o96de11q&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we continue our discussion of Recurrent Neural Networks (RNN's) as generalized forms of Time Delay Neural Network (TDNN) that can do time-series classification (and prediction) using an inductive bias that can pull in information from a wide range of times (well beyond the simple size of the neural network, due to the use of output feedback to maintain state). We discuss how these networks can be trained with Backpropagation Through Time (BPTT) and some limitations of this approach. This motivates the more constrained Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures, which mitigate some issues with training general RNN's. We then pivot to a different approach entirely -- using recurrent neural networks as untrained "reservoirs" whose outputs are dynamical encoders that spread out temporal patterns into spatial ones that can be learned with a single-layer perceptron. We demonstrate this using an Echo State Network (ESN) and walk through how even small networks can provide significant separability for time series. 
We also have a discussion of how these approaches can be used for predicting chaotic time series, with applications in finance as well as digital twins (e.g., for manufacturing systems). Interactive demonstrations connected to this lecture can be found at: Multi-Layer Perceptron and Backpropagation Explainer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/multi_layer_perceptron/mlp_explorer.html Recurrent Neural Networks (and BPTT and LSTM) Explainer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html Reservoir Computing/Echo State Networks Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/reservoir_computing/esn_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/jdwe24zsmevhmaxl7x954/IEE598-Lecture7D-2026-04-21-RNNs-BPTT_LSTM_and_Reservoir_Computing-Notes.pdf?rlkey=22dv6950zcsjl98e0o96de11q&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7C (2026-04-16): Recurrent Networks and Temporal Supervision</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7c-2026-04-16-recurrent.html</link><category>podcast</category><pubDate>Thu, 16 Apr 2026 20:21:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-200768809115866681</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;In this lecture, we finish up our coverage of supervised learning of feedforward multi-layer perceptrons with a discussion of how the Convolutional Neural Network imposes an inductive bias that simplifies training and pays off for images but may not work so well for text strings. 
We then shift our focus to recurrent networks with temporal supervision, which may help to provide a solution when highly local inductive biases aren't effective (as with text and time-series analysis). We discuss several coincidence detectors from neuroscience in the context of hearing and vision, and we use them to motivate Time Delay Neural Networks (TDNNs) as our bridge to Recurrent Neural Networks (RNNs). This allows for analogies to be made to Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters. We close by transitioning from a basic output-feedback configuration to a generic RNN with hidden states but effectively no "layers." We will pick up next time with backpropagation-through-time (BPTT), Long Short Term Memory (LSTM), reservoir computing (Echo State Networks, ESN's), and an introduction to reinforcement learning.
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Interactive demonstration widgets related to this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Toward Multimodal AI: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;RNN Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html&lt;/a&gt;
&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/pi8vxjrn6gbftpdab977w/IEE598-Lecture7C-2026-04-16-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=1xltyg1ttcpyqhvtdquczxjr5&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/pi8vxjrn6gbftpdab977w/IEE598-Lecture7C-2026-04-16-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=1xltyg1ttcpyqhvtdquczxjr5&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/m6t2S84NNdQ" width="320" youtube-src-id="m6t2S84NNdQ"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/ahfueaja8w5h4518het14/IEE598-Lecture7C-2026-04-16-Recurrent_Networks_and_Temporal_Supervision-audio_only.mp3?rlkey=m6fgv0ursoyrbvc0lzdhb03rx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/m6t2S84NNdQ/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we finish up our coverage of supervised learning of feedforward multi-layer perceptrons with a discussion of how the Convolutional Neural Network imposes an inductive bias that simplifies training and pays off for images but may not work so well for text strings. We then shift our focus to recurrent networks with temporal supervision, which may help to provide a solution when highly local inductive biases aren't effective (as with text and time-series analysis). We discuss several coincidence detectors from neuroscience in the context of hearing and vision, and we use them to motivate Time Delay Neural Networks (TDNNs) as our bridge to Recurrent Neural Networks (RNNs). This allows for analogies to be made to Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters. We close by transitioning from a basic output-feedback configuration to a generic RNN with hidden states but effectively no "layers." We will pick up next time with backpropagation-through-time (BPTT), Long Short Term Memory (LSTM), reservoir computing (Echo State Networks, ESN's), and an introduction to reinforcement learning. Interactive demonstration widgets related to this lecture can be found at: Toward Multimodal AI: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html RNN Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/pi8vxjrn6gbftpdab977w/IEE598-Lecture7C-2026-04-16-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=1xltyg1ttcpyqhvtdquczxjr5&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. 
Pavlic</itunes:author><itunes:summary>In this lecture, we finish up our coverage of supervised learning of feedforward multi-layer perceptrons with a discussion of how the Convolutional Neural Network imposes an inductive bias that simplifies training and pays off for images but may not work so well for text strings. We then shift our focus to recurrent networks with temporal supervision, which may help to provide a solution when highly local inductive biases aren't effective (as with text and time-series analysis). We discuss several coincidence detectors from neuroscience in the context of hearing and vision, and we use them to motivate Time Delay Neural Networks (TDNNs) as our bridge to Recurrent Neural Networks (RNNs). This allows for analogies to be made to Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters. We close by transitioning from a basic output-feedback configuration to a generic RNN with hidden states but effectively no "layers." We will pick up next time with backpropagation-through-time (BPTT), Long Short Term Memory (LSTM), reservoir computing (Echo State Networks, ESN's), and an introduction to reinforcement learning. 
Interactive demonstration widgets related to this lecture can be found at: Toward Multimodal AI: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html RNN Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/recurrent_neural_networks/rnn_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/pi8vxjrn6gbftpdab977w/IEE598-Lecture7C-2026-04-16-Recurrent_Networks_and_Temporal_Supervision-Notes.pdf?rlkey=1xltyg1ttcpyqhvtdquczxjr5&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7B (2026-04-14): Feeding Forward from Neurons to Networks (SLP, RBFNN, MLP, and CNN)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7b-2026-04-14-feeding-forward.html</link><category>podcast</category><pubDate>Tue, 14 Apr 2026 16:24:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-8834132012386014012</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;In this lecture, we move from the basics of learning foundations from the last lecture into models of neurons that can be combined to form machine learning tools. We start with the single-layer perceptron (SLP), explain where the term "weights" comes from, and describe how it can linearly separate a space. We then introduce a hidden layer of receptive field units (RFU's) and discuss how Radial Basis Function Neural Networks use Gaussian or Logistic RBF's as nonlinear projections into high-dimensional space that Cover's theorem suggests should be more likely to be linearly separable. 
After demonstrating how RBFNN's work, we then introduce Cybenko's Universal Approximation Theorem (UAT) and use it to motivate looking for other (and deeper) latent structures. That leads us to the Multi-Layer Perceptron (MLP), backpropagation, and the Convolutional Neural Network.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Interactive widgets referenced in this lecture include:&lt;/span&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Single Layer Perceptron: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Radial Basis Function Neural Network: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/radial_basis_function_nn/rbfnn_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/radial_basis_function_nn/rbfnn_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Toward Multimodal AI (for visualizing CNN receptive fields): &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; 
font-size: 15px; white-space: pre-wrap;"&gt;Transformer Architecture Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space: pre-wrap;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/t2aoepucn0swlkvisococ/IEE598-Lecture7B-2026-04-14-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=s5pr1zdrnup2ca1nthf7zxp3n&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/t2aoepucn0swlkvisococ/IEE598-Lecture7B-2026-04-14-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=s5pr1zdrnup2ca1nthf7zxp3n&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;div&gt;&lt;span style="color: #0d0d0d; font-family: Roboto, Noto, sans-serif;"&gt;&lt;span style="font-size: 15px; white-space-collapse: preserve;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/span&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/VENaeu519h0" width="320" youtube-src-id="VENaeu519h0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/o7y2n3njg53kedb1rj6yt/IEE598-Lecture7B-2026-04-14-Feeding_Forward_from_Neurons_to_Networks-audio_only.mp3?rlkey=s62okn4qxqxauhlzetc3uk3t0&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/VENaeu519h0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">4.841734352280465 -147.09625540000002 62.009286447719532 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we move from the basics of learning foundations from the last lecture into models of neurons that can be combined to form machine learning tools. We start with the single-layer perceptron (SLP), explain where the term "weights" comes from, and describe how it can linearly separate a space. We then introduce a hidden layer of receptive field units (RFU's) and discuss how Radial Basis Function Neural Networks use Gaussian or Logistic RBF's as nonlinear projections into high-dimensional space that Cover's theorem suggests should be more likely to be linearly separable. After demonstrating how RBFNN's work, we then introduce Cybenko's Universal Approximation Theorem (UAT) and use it to motivate looking for other (and deeper) latent structures. That leads us to the Multi-Layer Perceptron (MLP), backpropagation, and the Convolutional Neural Network. Interactive widgets referenced in this lecture include: Single Layer Perceptron: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.html Radial Basis Function Neural Network: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/radial_basis_function_nn/rbfnn_explorer.html Toward Multimodal AI (for visualizing CNN receptive fields): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.html Transformer Architecture Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/t2aoepucn0swlkvisococ/IEE598-Lecture7B-2026-04-14-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=s5pr1zdrnup2ca1nthf7zxp3n&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. 
Pavlic</itunes:author><itunes:summary>In this lecture, we move from the basics of learning foundations from the last lecture into models of neurons that can be combined to form machine learning tools. We start with the single-layer perceptron (SLP), explain where the term "weights" comes, and describe how it can linearly separate a space. We then introduce a hidden layer of receptive field units (RFU's) and discuss how Radial Basis Function Neural Networks use Gaussian or Logistic RBF's as nonlinear projections into high-dimensional space that Cover's theorem suggests should be more likely to e linearly separable. After demonstrating how RBFNN's work, we then introduce Cybenko's Universal Approximation Theorem (UAT) and use it to motivate looking for other (and deeper) latent structures. That leads us to the Multi-Layer Perceptron (MLP), backpropagation, and the Convolutional Neural Network. Interactive widgets referenced in this lecture include:Single Layer Perceptron: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.htmlRadial Basis Function Neural Network: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/radial_basis_function_nn/rbfnn_explorer.htmlToward Multimodal AI (for visualizing CNN receptive fields): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/toward_multimodal_AI.htmlTransformer Architecture Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/transformers/transformer_explorer.htmlWhiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/t2aoepucn0swlkvisococ/IEE598-Lecture7B-2026-04-14-Feeding_Forward_from_Neurons_to_Networks-Notes.pdf?rlkey=s5pr1zdrnup2ca1nthf7zxp3n&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7A (2026-04-09): Neural 
Foundations of Learning</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-7a-2026-04-09-neural.html</link><category>podcast</category><pubDate>Tue, 14 Apr 2026 16:00:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-365900057756453344</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;In this lecture, we prepare to discuss artificial and spiking neural networks -- bio-inspired information processing mechanisms inspired by the central nervous system and models of learning in psychology. We open with a discussion of the relationship between learning, memory, and neuroplasticity and then introduce a canonical model of a neuron that is the basis of the mechanisms thought to underlie neuroplasticity. We discuss the different ways in which neuroplasticity supports working, short-term, and long-term memory. We introduce Hebbian learning (and briefly mention spike-timing-dependent plasticity, STDP) as a foundational learning paradigm that, when combined with neuromodulation and specialized circuits, can implement all forms of learning described in the lecture. Those forms of learning include non-associative learning (habituation and sensitization), associative learning (classical and operant conditioning), and latent learning. We map each of those to machine learning paradigms including unsupervised learning, self-supervised learning/pre-training, reinforcement learning, and supervised learning. In the next lecture, we will directly model the canonical neuron with a single-layer perceptron and start to build statistical models based on this artificial neuron model.
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Interactive demonstrations mentioned in this video:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;SLP: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/single_layer_perceptron/slp_explainer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Hebbian Learning: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/hebbian_learning/hebbian_competitive_clustering.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Memristor-based STDP Learning: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/memristors/memristor_stdp_array.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/x4t0y6q9rblrn78o8ns2r/IEE598-Lecture7A-2026-04-09-Neural_Foundations_of_Learning-Notes.pdf?rlkey=im6unlrptbfppqeds2y9gpga7&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/x4t0y6q9rblrn78o8ns2r/IEE598-Lecture7A-2026-04-09-Neural_Foundations_of_Learning-Notes.pdf?rlkey=im6unlrptbfppqeds2y9gpga7&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/8VUihQGpSzM" width="320" youtube-src-id="8VUihQGpSzM"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/8VUihQGpSzM/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author></item><item><title>Lecture 6B (2026-04-07): Bacterial Foraging Optimization and Ant Colony Optimization</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-6b-2026-04-07-bacterial.html</link><category>podcast</category><pubDate>Sun, 5 Apr 2026 20:55:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-4746374330515540243</guid><description>&lt;p&gt;Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect communication through modifications of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. BFO solves continuous optimization problems similarly to PSO but uses attractants and repellants to modify the environment as opposed to directly informing others about discovered solutions. The repellants in BFO along with its reproduction and elimination–dispersal phases help to ensure it searches globally over a space as opposed to the more concentrated search of PSO. ACO also uses chemical coordination, but it is developed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP.
We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can provide the ability to track changing feeder distributions and how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax).&lt;/p&gt;&lt;p&gt;Interactive demonstrations referenced in this lecture can be found at:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;Particle Swarm Optimization: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Bacterial Foraging Optimization: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Ant Colony Optimization: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Case Study for More Realistic Ant Recruitment Dynamics: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Softmax Exploration: &lt;a 
href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/bHM8Ek5OMpA" width="320" youtube-src-id="bHM8Ek5OMpA"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/oytzja58t0tq5z08qqnzu/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-audio_only.mp3?rlkey=606fk2h4f9n3pxshuhmo855rx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/bHM8Ek5OMpA/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect communication through modifications of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. BFO solves continuous optimization problems similarly to PSO but uses attractants and repellants to modify the environment as opposed to directly informing others about discovered solutions. The repellants in BFO along with its reproduction and elimination–dispersal phases help to ensure it searches globally over a space as opposed to the more concentrated search of PSO. ACO also uses chemical coordination, but it is developed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP. We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can provide the ability to track changing feeder distributions and how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax).
Interactive demonstrations referenced in this lecture can be found at: Particle Swarm Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Bacterial Foraging Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html Ant Colony Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html Case Study for More Realistic Ant Recruitment Dynamics: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html Softmax Exploration: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect communication through modifications of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. BFO solves continuous optimization problems similarly to PSO but uses attractants and repellants to modify the environment as opposed to directly informing others about discovered solutions. The repellants in BFO along with its reproduction and elimination–dispersal phases help to ensure it searches globally over a space as opposed to the more concentrated search of PSO.
ACO also uses chemical coordination, but it is developed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP. We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can provide the ability to track changing feeder distributions and how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax). Interactive demonstrations referenced in this lecture can be found at: Particle Swarm Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.htmlBacterial Foraging Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.htmlAnt Colony Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.htmlCase Study for More Realistic Ant Recruitment Dynamics: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.htmlSoftmax Exploration: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 5E/6A (2026-04-04): Parallel Tempering and Swarm Intelligence through Social Cohesion 
(Particle Swarm Optimization)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-5e6a-2026-04-04-parallel.html</link><category>podcast</category><pubDate>Thu, 2 Apr 2026 14:38:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7332476527007878850</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple, parallel Metropolis–Hastings MCMC samplers each with different temperatures (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problem sets, like optimizing high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics). We also show that PSO really depends on social information but, under its influence, tends to very quickly purge the diversity in its solution candidates.
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Online interactive demonstration modules associated with this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Simulated Annealing: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html&lt;/a&gt;
&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Parallel Tempering: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Reynolds' Boids Collective Motion Model: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Vicsek Collective Motion Model: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Particle Swarm Optimization (PSO): &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html&lt;/a&gt;
&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/YsJnOcBOkxk" width="320" youtube-src-id="YsJnOcBOkxk"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/s6fzjl6xsqtj1fhvg3qui/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-audio_only.mp3?rlkey=mn2hey41pm6dgjxr76bzn92dx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/YsJnOcBOkxk/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple, parallel Metropolis–Hastings MCMC samplers each with different temperatures (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problem sets, like optimizing high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics). We also show that PSO really depends on social information but, under its influence, tends to very quickly purge the diversity in its solution candidates.
Online interactive demonstration modules associated with this lecture can be found at: Simulated Annealing: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Reynolds' Boids Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html Vicsek Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html Particle Swarm Optimization (PSO): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple, parallel Metropolis–Hastings MCMC samplers each with different temperatures (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problem sets, like optimizing high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics).
We also show that PSO really depends on social information but, under its influence, tends to very quickly purge the diversity in its solution candidates. Online interactive demonstration modules associated with this lecture can be found at: Simulated Annealing: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Reynolds' Boids Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html Vicsek Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html Particle Swarm Optimization (PSO): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 5D (2026-03-31): Metropolis–Hastings Markov Chain Monte Carlo and Simulated Annealing/Parallel Tempering</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-5d-2026-03-31.html</link><category>podcast</category><pubDate>Tue, 31 Mar 2026 15:29:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7612509158319692342</guid><description>&lt;p&gt;In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a
temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs distribution is much more practical. This gives us the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs distribution and more. After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture.&lt;/p&gt;&lt;p&gt;On-line simulations referenced in this lecture can be found at:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;Boltzmann–Gibbs distribution: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;SoftMax Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Monte Carlo Estimation/Integration Explorer:&amp;nbsp;&lt;a
href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Simulated Annealing Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Parallel Tempering Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/0aPqZH2_03w" width="320" youtube-src-id="0aPqZH2_03w"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/hc4i24nrnv5bitpljqs4g/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-audio_only.mp3?rlkey=1px1c6i51ypfceqfzarpletsz&amp;ext=.mp3"/><media:thumbnail 
xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/0aPqZH2_03w/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs is much more practical. This gives the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs and more. After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture. 
On-line simulations referenced in this lecture can be found at: Boltzmann–Gibbs distribution: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html SoftMax Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Monte Carlo Estimation/Integration Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html Simulated Annealing Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs is much more practical. This gives the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs and more. 
After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture. On-line simulations referenced in this lecture can be found at: Boltzmann–Gibbs distribution: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html SoftMax Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Monte Carlo Estimation/Integration Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html Simulated Annealing Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 5B (2026-03-24): From Entropy to Maximum Entropy (MaxEnt) Methods</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-5b-2026-03-24-from-entropy-to.html</link><category>podcast</category><pubDate>Tue, 24 Mar 2026 13:53:00 -0700</pubDate><guid 
isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7556797077888999019</guid><description>&lt;p&gt;In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity which measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and not any additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities on how to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods once popular in NLP that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/0EYihuzYYC0" width="320" youtube-src-id="0EYihuzYYC0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" 
url="https://dl.dropboxusercontent.com/scl/fi/8i0p21lf9jhbvhduas2q2/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-audio_only.mp3?rlkey=8hf4fzry4avdlooen0xhhruhr&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/0EYihuzYYC0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity which measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and not any additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities on how to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods once popular in NLP that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity which measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and not any additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities on how to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods once popular in NLP that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 4D/5A (2026-03-19): Distributed and Parallel GA's and Introduction to Simulated Annealing (SA)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-4d5a-2026-03-19-distributed-and.html</link><category>podcast</category><pubDate>Thu, 19 Mar 2026 15:17:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1999569807343496995</guid><description>&lt;p&gt;In this lecture, we wrap up our units on evolutionary algorithms, closing on Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic content grammar for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. 
We will use this SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit.&lt;/p&gt;&lt;p&gt;Shifting-Balance Theory visualizer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Simulated Annealing explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/sb4wiitdWpI" width="320" youtube-src-id="sb4wiitdWpI"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/2q7fnxw2g93acqby7qntd/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-audio_only.mp3?rlkey=q6bdtbj0mqfyd1sqzg2r7hj0v&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/sb4wiitdWpI/default.jpg" width="72"/><thr:total 
xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we wrap up our units on evolutionary algorithms, closing on Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic content grammar for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. We will use this SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit. 
Shifting-Balance Theory visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html Simulated Annealing explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we wrap up our units on evolutionary algorithms, closing on Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic content grammar for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. We will use this SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit. 
Shifting-Balance Theory visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html Simulated Annealing explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 4C (2026-03-17): From Niches to Meta-Populations: Toward Distributed and Parallel Genetic Algorithms (DGA/PGA)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-4c-2026-03-17-from-niches-to.html</link><category>podcast</category><pubDate>Tue, 17 Mar 2026 18:17:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-8533144338673972650</guid><description>&lt;p&gt;In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We had covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start on crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. 
This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start out with in the next lecture.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/s7P_gYlRU4s" width="320" youtube-src-id="s7P_gYlRU4s"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/muzyualbin1ft5jfbg7tf/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-audio_only.mp3?rlkey=6ib1tfd1754lq8wxcvir0dhyk&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/s7P_gYlRU4s/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We had covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start on crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start out with in the next lecture. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We had covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start on crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start out with in the next lecture. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 4B (2026-03-05): Niching Methods for Diversity Preservation in Multi-Objective and Multi-Modal Evolutionary Algorithms</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-4b-2026-03-05-niching-methods.html</link><category>podcast</category><pubDate>Thu, 5 Mar 2026 13:54:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1581837957135219612</guid><description>&lt;p&gt;In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. We start with an overall goal to create "negative frequency-dependent selection" (or density dependence) that has the potential to be able to stabilize different subpopulations coexisting with each other. We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable co-existence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches very closely to the IFD) and various other fitness-modification methods that each have different computational costs and diversity benefits. 
We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/5p6DKJOr9Y4" width="320" youtube-src-id="5p6DKJOr9Y4"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/b3e1roolrawadtq078acy/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-audio_only.mp3?rlkey=ww0z7qmnv2ylxzlhplizzmb6t&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/5p6DKJOr9Y4/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. We start with an overall goal to create "negative frequency-dependent selection" (or density dependence) that has the potential to be able to stabilize different subpopulations coexisting with each other. We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable co-existence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches very closely to the IFD) and various other fitness-modification methods that each have different computational costs and diversity benefits. We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. We start with an overall goal to create "negative frequency-dependent selection" (or density dependence) that has the potential to be able to stabilize different subpopulations coexisting with each other. 
We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable co-existence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches very closely to the IFD) and various other fitness-modification methods that each have different computational costs and diversity benefits. We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 3D/4A (2026-03-03): From Multi-Objective to Multi-Modal Optimization</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-3d4a-2026-03-03-from-multi.html</link><category>podcast</category><pubDate>Tue, 3 Mar 2026 21:54:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-3919942192414025083</guid><description>&lt;p&gt;In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. 
We close by starting to set up background that motivates the particular designs of niche-preserving methods.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/qS6yjHmp-8E" width="320" youtube-src-id="qS6yjHmp-8E"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/5i8sq6kf9m91ayg32q8wa/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-audio_only.mp3?rlkey=xfhrd4glnytnvigy04yi530zx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/qS6yjHmp-8E/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. We close by starting to set up background that motivates the particular designs of niche-preserving methods. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. We close by starting to set up background that motivates the particular designs of niche-preserving methods. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 3C (2026-02-26): Multi-Objective EA’s from Linearization to Pareto Ranking and Beyond</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-3c-2026-02-26-multi-objective.html</link><category>podcast</category><pubDate>Thu, 26 Feb 2026 13:56:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5147815842547983122</guid><description>&lt;p&gt;In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). 
We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/e8NMFq7VUmo" width="320" youtube-src-id="e8NMFq7VUmo"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/unflgwo5ey9pqcxhh9uck/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-audio_only.mp3?rlkey=zcx8rdwo0vvw75b0y42350iyh&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/e8NMFq7VUmo/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. 
This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 3B (2026-02-24): Multi-Objective Optimality and Introduction to Multi-Objective Evolutionary Algorithms</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-3b-2026-02-24-multi-objective.html</link><category>podcast</category><pubDate>Tue, 24 Feb 2026 13:41:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-3837405829805907403</guid><description>&lt;p&gt;In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. 
We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking).&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/f1dtwIox6nk" width="320" youtube-src-id="f1dtwIox6nk"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" 
url="https://dl.dropboxusercontent.com/scl/fi/z2ispfdj66x6w86l900zl/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-audio_only.mp3?rlkey=072qsr1rdl2r9al3x4npl673t&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/f1dtwIox6nk/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking). 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking). 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2D/3A (2026-02-19): From Immunocomputing to Games and Multi-Objective Optimization (MOO)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2d3a-2026-02-19-from.html</link><category>podcast</category><pubDate>Thu, 19 Feb 2026 17:09:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-2363111649392013881</guid><description>&lt;p&gt;In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. 
Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&amp;amp;dl=0&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/OoWI2zbjUaQ" width="320" youtube-src-id="OoWI2zbjUaQ"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/gsebqg3tgszu25ydzbrwu/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-audio_only.mp3?rlkey=xry90blrqhqr6xuac87u4ebr0&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/OoWI2zbjUaQ/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. 
In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2C (2026-02-17): Genetic Programming and Artificial Immune Systems</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2c-2026-02-17-genetic.html</link><category>podcast</category><pubDate>Tue, 17 Feb 2026 13:39:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7983084346526223061</guid><description>&lt;p&gt;In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. 
We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit).&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/vkoPQEerKJk" width="320" youtube-src-id="vkoPQEerKJk"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/27ca6mix89b52n2xyarvl/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-audio_only.mp3?rlkey=87sljb5whlvydfzjc6gnch3z8&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/vkoPQEerKJk/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit). Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit). 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2B (2026-02-12): Evolutionary and Linear Genetic Programming</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2b-2026-02-12-evolutionary-and.html</link><category>podcast</category><pubDate>Thu, 12 Feb 2026 15:32:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1996745117436853982</guid><description>&lt;p&gt;In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD).&amp;nbsp; This paints a picture of complementarity and not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and the SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be used to write software (where code comes from a discrete decision space off-limits to CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (where assembly languages fit within this class). 
We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/BzOuZpfP7L4" width="320" youtube-src-id="BzOuZpfP7L4"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/ipypvufnsbw719plxispx/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-audio_only.mp3?rlkey=vgidbd2kkf11funzoe7646yko&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/BzOuZpfP7L4/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box 
xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD). This paints a picture of complementarity and not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and the SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be used to write software (where code comes from a discrete decision space off-limits to CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (where assembly languages fit within this class). We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD). This paints a picture of complementarity and not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and the SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be used to write software (where code comes from a discrete decision space off-limits to CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (where assembly languages fit within this class). We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2A (2026-02-10): Evolution Strategies and Covariance Adaptation (ES, NES, CMA-ES)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2a-2026-02-10-evolution.html</link><category>podcast</category><pubDate>Tue, 10 Feb 2026 11:15:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5674405916478864958</guid><description>&lt;p&gt;In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to be able to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). 
We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/qlsFd2FHun0" width="320" youtube-src-id="qlsFd2FHun0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/jd1ns9c17njugb722qlti/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-audio_only.mp3?rlkey=py1qv1jg6qbmtgpvx2ldywzhl&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/qlsFd2FHun0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). 
Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. 
We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1H (2026-02-05): Genetic Algorithm (GA) Hyperparameter Tuning</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-1h-2026-02-05-genetic-algorithm.html</link><category>podcast</category><pubDate>Thu, 5 Feb 2026 13:19:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-795254014848423688</guid><description>&lt;p&gt;In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. 
We start with a discussion of the effects of Stochastic Universal Sampling (SUS) over roulette-wheel selection and how the effective drop in variance in the number of parents eliminates the fixation-causing effects of drift while also continuing to leave a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to have better control over selective pressure, but we mention that tournament selection ultimately is a stronger choice computationally when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide choice of different hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. 
Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we will start with in the next unit.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/iX6SFkYMwo0" width="320" youtube-src-id="iX6SFkYMwo0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/zcgy26fcdtr4m52r57751/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-audio_only.mp3?rlkey=52rd3vacvrayqqy8cl14bb68b&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/iX6SFkYMwo0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. We start with a discussion of the effects of Stochastic Universal Sampling (SUS) over roulette-wheel selection and how the effective drop in variance in the number of parents eliminates the fixation-causing effects of drift while also continuing to leave a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to have better control over selective pressure, but we mention that tournament selection ultimately is a stronger choice computationally when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide choice of different hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we will start with in the next unit. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. 
We start with a discussion of the effects of Stochastic Universal Sampling (SUS) over roulette-wheel selection and how the effective drop in variance in the number of parents eliminates the fixation-causing effects of drift while also continuing to leave a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to have better control over selective pressure, but we mention that tournament selection ultimately is a stronger choice computationally when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide choice of different hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we will start with in the next unit. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1G (2026-02-03): GA Wrap Up – Crossover, Mutation, &amp; Tuning GA Operator Choices</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-1g-2026-02-03-ga-wrap-up.html</link><category>podcast</category><pubDate>Tue, 3 Feb 2026 14:00:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-814591260940829834</guid><description>&lt;p&gt;In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Universal Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. 
We will also transition to Unit 2, where we will start by introducing ES and CMA-ES.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/lCQEFw_KNHg" width="320" youtube-src-id="lCQEFw_KNHg"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/35c4ts5ljfvrplttn27xr/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-audio_only.mp3?rlkey=cxq3a28hjqd1tn0e6251264g4&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/lCQEFw_KNHg/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. 
We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Universal Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. We will also transition to Unit 2, where we will start by introducing ES and CMA-ES. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Universal Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. We will also transition to Unit 2, where we will start by introducing ES and CMA-ES. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1F (2026-01-29): Operators of the Genetic Algorithm</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/lecture-1f-2026-01-29-operators-of.html</link><category>podcast</category><pubDate>Thu, 29 Jan 2026 15:41:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1127447417534147311</guid><description>&lt;p&gt;In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. 
After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/OUm-t5ODr54" width="320" youtube-src-id="OUm-t5ODr54"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/7sjmiz8gvg3x53tci62mh/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-audio_only.mp3?rlkey=o63eectpvss0e9l6jb5yea1bs&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/OUm-t5ODr54/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. 
After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item></channel></rss>