<?xml version="1.0" encoding="UTF-8" standalone="no"?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0"><channel><title>IEE/CSE 598: Bio-Inspired AI and Optimization</title><description>Archived lectures from a graduate course on nature-inspired metaheuristics given at Arizona State University by Ted Pavlic </description><managingEditor>noreply@blogger.com (Ted Pavlic)</managingEditor><pubDate>Sun, 5 Apr 2026 20:56:17 -0700</pubDate><generator>Blogger http://www.blogger.com</generator><openSearch:totalResults xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">109</openSearch:totalResults><openSearch:startIndex xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">1</openSearch:startIndex><openSearch:itemsPerPage xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">25</openSearch:itemsPerPage><link>https://asu-iee598-bioinspired.blogspot.com/search/label/podcast</link><language>en-us</language><itunes:explicit>no</itunes:explicit><copyright>Copyright (c) 2020 by Theodore P. Pavlic</copyright><itunes:image href="https://www.dropbox.com/s/dl/wlt5o25b3rwqhd9/2000px-Newton_optimization_vs_grad_descent.svg-cropped.png"/><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords><itunes:summary>Graduate-level survey of a variety of nature-inspired metaheuristics for optimization, as well as some physically embodied multi-agent systems techniques (such as stochastic robotics). Course (IEE/CSE 598) taught by Theodore Pavlic at Arizona State University.</itunes:summary><itunes:subtitle>IEE/CSE 598@ASU: Bio-Inspired AI and Optimization</itunes:subtitle><itunes:category text="Education"><itunes:category text="Higher Education"/></itunes:category><itunes:author>Theodore P. 
Pavlic</itunes:author><itunes:owner><itunes:email>ted@tedpavlic.com</itunes:email><itunes:name>Theodore P. Pavlic</itunes:name></itunes:owner><item><title>Lecture 6B (2026-04-07): Bacterial Foraging Optimization and Ant Colony Optimization</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-6b-2026-04-07-bacterial.html</link><category>podcast</category><pubDate>Sun, 5 Apr 2026 20:55:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-4746374330515540243</guid><description>&lt;p&gt;Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect coordination through modification of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. Like PSO, BFO solves continuous optimization problems, but it uses attractants and repellants to modify the environment rather than directly informing others about discovered solutions. The repellants in BFO, along with its reproduction and elimination–dispersal phases, help to ensure that it searches globally over a space, as opposed to the more concentrated search of PSO. ACO also uses chemical coordination, but it is designed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP. 
We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can provide the ability to track changing feeder distributions and how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax).&lt;/p&gt;&lt;p&gt;Interactive demonstrations referenced in this lecture can be found at:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;Particle Swarm Optimization: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Bacterial Foraging Optimization: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Ant Colony Optimization: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Case Study for More Realistic Ant Recruitment Dynamics: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Softmax Exploration: &lt;a 
href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/bHM8Ek5OMpA" width="320" youtube-src-id="bHM8Ek5OMpA"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/oytzja58t0tq5z08qqnzu/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-audio_only.mp3?rlkey=606fk2h4f9n3pxshuhmo855rx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/bHM8Ek5OMpA/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect coordination through modification of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. Like PSO, BFO solves continuous optimization problems, but it uses attractants and repellants to modify the environment rather than directly informing others about discovered solutions. The repellants in BFO, along with its reproduction and elimination–dispersal phases, help to ensure that it searches globally over a space, as opposed to the more concentrated search of PSO. ACO also uses chemical coordination, but it is designed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP. We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can provide the ability to track changing feeder distributions and how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax). 
Interactive demonstrations referenced in this lecture can be found at: Particle Swarm Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Bacterial Foraging Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html Ant Colony Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html Case Study for More Realistic Ant Recruitment Dynamics: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html Softmax Exploration: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>Closing out the Swarm Intelligence unit, this lecture pivots from Particle Swarm Optimization (PSO) to two examples of stigmergic swarm optimization – Bacterial Foraging Optimization (BFO) and Ant Colony Optimization (ACO). Stigmergy is indirect coordination through modification of the environment, as in leaving chemical trails or depositing chemical gradients, as opposed to direct communication between one individual and another. Like PSO, BFO solves continuous optimization problems, but it uses attractants and repellants to modify the environment rather than directly informing others about discovered solutions. The repellants in BFO, along with its reproduction and elimination–dispersal phases, help to ensure that it searches globally over a space, as opposed to the more concentrated search of PSO. 
ACO also uses chemical coordination, but it is designed for combinatorial optimization problems. Although ACO was originally developed for the Traveling Salesman Problem (TSP), we discuss ACO first in a simpler layered model that better matches the foraging paths of real ants before briefly discussing the application to the TSP. We close with a brief mention of more complex recruitment dynamics in real ants, where trail laying plus noise can provide the ability to track changing feeder distributions and how one-on-one recruitment by some ants and bees can lead to different distributions of recruits across options (similar to changing the temperature in a softmax). Interactive demonstrations referenced in this lecture can be found at: Particle Swarm Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Bacterial Foraging Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/bacterial_foraging_optimization/bfo_explorer.html Ant Colony Optimization: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/ant_colony_optimization/aco_explorer.html Case Study for More Realistic Ant Recruitment Dynamics: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_behavior/ant_foraging_explorer.html Softmax Exploration: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/fqm4jcfr1mkxsnz8ng61r/IEE598-Lecture6B-2026-04-07-Bacterial_Foraging_Optimization_and_Ant_Colony_Optimization-Notes.pdf?rlkey=q4omc6oyot9vrq8nnq3etx6k4&amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 5E/6A (2026-04-04): Parallel Tempering and Swarm Intelligence through Social Cohesion 
(Particle Swarm Optimization)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/04/lecture-5e6a-2026-04-04-parallel.html</link><category>podcast</category><pubDate>Thu, 2 Apr 2026 14:38:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7332476527007878850</guid><description>&lt;p&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple parallel Metropolis–Hastings MCMC samplers, each with a different temperature (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problem sets, like optimizing high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics). We also show that what PSO really depends on is social information but that, under the influence of social information, it tends to very quickly purge the diversity in its solution candidates.
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Online interactive demonstration modules associated with this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Simulated Annealing: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html&lt;/a&gt;
&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Parallel Tempering: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Reynolds' Boids Collective Motion Model: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Vicsek Collective Motion Model: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Particle Swarm Optimization (PSO): &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html&lt;/a&gt;
&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;Whiteboard notes for this lecture can be found at:&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;
&lt;/span&gt;&lt;span style="background-color: white; color: #0d0d0d; font-family: Roboto, Noto, sans-serif; font-size: 15px; white-space-collapse: preserve;"&gt;&lt;a href="https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;amp;dl=0&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/YsJnOcBOkxk" width="320" youtube-src-id="YsJnOcBOkxk"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/s6fzjl6xsqtj1fhvg3qui/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-audio_only.mp3?rlkey=mn2hey41pm6dgjxr76bzn92dx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/YsJnOcBOkxk/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple parallel Metropolis–Hastings MCMC samplers, each with a different temperature (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problem sets, like optimizing high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics). We also show that what PSO really depends on is social information but that, under the influence of social information, it tends to very quickly purge the diversity in its solution candidates. 
Online interactive demonstration modules associated with this lecture can be found at: Simulated Annealing: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Reynolds' Boids Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html Vicsek Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html Particle Swarm Optimization (PSO): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we finish our unit on physics-inspired ML and optimization by covering Parallel Tempering (PT), which combines multiple parallel Metropolis–Hastings MCMC samplers, each with a different temperature (rather than using an annealing schedule, as in Simulated Annealing (SA)). We then pivot toward motivating why certain problem sets, like optimizing high-dimensional weights of neural networks, may not be well served by the optimization metaheuristics discussed so far in the course. We use this as an opportunity to introduce Swarm Intelligence and the Particle Swarm Optimization (PSO) algorithm, which is particularly good at finding and exploring local optima in spaces with many similarly performing local optima. We explore how PSO was inspired by the Boids Model from Craig Reynolds (in computer graphics) and how it overlaps with the Vicsek model (from statistical physics). 
We also show that what PSO really depends on is social information but that, under the influence of social information, it tends to very quickly purge the diversity in its solution candidates. Online interactive demonstration modules associated with this lecture can be found at: Simulated Annealing: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Reynolds' Boids Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/boids_explorer.html Vicsek Collective Motion Model: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/collective_motion/vicsek_explorer.html Particle Swarm Optimization (PSO): https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/particle_swarm_optimization/pso_explorer.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/7jwuytadieywwilqazjq5/IEE598-Lecture5E_6A-2026-04-02-Parallel_Tempering_and_Particle_Swarm_Optimization-Notes.pdf?rlkey=p1pr7cs241okovkgjnevvhdp5&amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 5D (2026-03-31): Metropolis–Hastings Markov Chain Monte Carlo and Simulated Annealing/Parallel Tempering</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-5d-2026-03-31.html</link><category>podcast</category><pubDate>Tue, 31 Mar 2026 15:29:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7612509158319692342</guid><description>&lt;p&gt;In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a 
temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs distribution is much more practical. This gives the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs distribution and more. After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture.&lt;/p&gt;&lt;p&gt;On-line simulations referenced in this lecture can be found at:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style="text-align: left;"&gt;&lt;li&gt;Boltzmann–Gibbs distribution: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;SoftMax Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Monte Carlo Estimation/Integration Explorer:&amp;nbsp;&lt;a 
href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Simulated Annealing Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Parallel Tempering Explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/0aPqZH2_03w" width="320" youtube-src-id="0aPqZH2_03w"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/hc4i24nrnv5bitpljqs4g/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-audio_only.mp3?rlkey=1px1c6i51ypfceqfzarpletsz&amp;ext=.mp3"/><media:thumbnail 
xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/0aPqZH2_03w/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs distribution is much more practical. This gives the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs distribution and more. After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture. 
On-line simulations referenced in this lecture can be found at: Boltzmann–Gibbs distribution: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html SoftMax Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Monte Carlo Estimation/Integration Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html Simulated Annealing Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start with a reminder that the Boltzmann–Gibbs distribution is the maximal entropy (MaxEnt) distribution of physical microstates when the average energy is fixed at a temperature at thermal equilibrium. We then move toward motivations where it would be useful to sample microstates from such a distribution. First, we introduce Monte Carlo methods for parameter estimation, and we pivot toward applications of Monte Carlo sampling for numerical integration. This leads us back to physics applications where integration using the Boltzmann–Gibbs distribution is much more practical. This gives the opportunity to introduce Metropolis–Hastings Markov Chain Monte Carlo (MCMC) sampling, which allows for sampling from the Boltzmann–Gibbs distribution and more. 
After discussing connections to importance sampling (from stochastic simulation) and Bayesian/MCMC statistics, we introduce Simulated Annealing, which combines Metropolis–Hastings sampling with an annealing schedule for temperature. We close with a very brief introduction to Parallel Tempering, which swaps out the annealing schedule for parallel MCMC samplers that periodically swap states based on their relative energies. We will pick up with Parallel Tempering in the next lecture. On-line simulations referenced in this lecture can be found at: Boltzmann–Gibbs distribution: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/boltzmann_maxent/boltzmann_maxent_random_exchange.html SoftMax Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/softmax/softmax_temperature_explorer.html Monte Carlo Estimation/Integration Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/monte_carlo/mc_explorer.html Simulated Annealing Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Parallel Tempering Explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/parallel_tempering/parallel_tempering.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/s5dcgqrvm4qzz4y0fs64a/IEE598-Lecture5D-2026-03-31-Markov_Chain_Monte_Carlo_Metropolis_and_Simulated_Annealing_Parallel_Tempering-Notes.pdf?rlkey=v2m33lhh7sjhwogffotbyq3k7&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 5B (2026-03-24): From Entropy to Maximum Entropy (MaxEnt) Methods</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-5b-2026-03-24-from-entropy-to.html</link><category>podcast</category><pubDate>Tue, 24 Mar 2026 13:53:00 -0700</pubDate><guid 
isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7556797077888999019</guid><description>&lt;p&gt;In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity which measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and not any additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities on how to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods once popular in NLP that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/0EYihuzYYC0" width="320" youtube-src-id="0EYihuzYYC0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" 
url="https://dl.dropboxusercontent.com/scl/fi/8i0p21lf9jhbvhduas2q2/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-audio_only.mp3?rlkey=8hf4fzry4avdlooen0xhhruhr&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/0EYihuzYYC0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity which measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and not any additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities on how to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods once popular in NLP that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we pivot from our motivation from the Simulated Annealing optimization metaheuristic to thinking about how to sample from microstates within the physically inspired search process. This requires us to introduce the concept of entropy, a quantity which measures the number of microstates in a coarse-grained "macrostate" description of a system. Within the constraints of a system, we seek a distribution of microstates that represents only those constraints and not any additional information. This is the maximal entropy distribution for those constraints. We provide a few formalities on how to make this a little more rigorous and then introduce Maximum Entropy (MaxEnt) methods once popular in NLP that remain popular in Species Distribution Modeling and archaeology. We will use MaxEnt to help us define the Boltzmann–Gibbs distribution (and Monte Carlo methods to sample from it) next time. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/01pfdkj3d3ilk7wiyu79a/IEE598-Lecture5B-2026-03-24-From_Entropy_to_Maximum_Entropy_MaxEnt_Methods-Notes.pdf?rlkey=xfe1pie4sxu0qklg871czuc05&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 4D/5A (2026-03-19): Distributed and Parallel GA's and Introduction to Simulated Annealing (SA)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-4d5a-2026-03-19-distributed-and.html</link><category>podcast</category><pubDate>Thu, 19 Mar 2026 15:17:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1999569807343496995</guid><description>&lt;p&gt;In this lecture, we wrap up our units on evolutionary algorithms, closing on Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic content grammar for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. 
We will use this SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit.&lt;/p&gt;&lt;p&gt;Shifting-Balance Theory visualizer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Simulated Annealing explorer: &lt;a href="https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html"&gt;https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/sb4wiitdWpI" width="320" youtube-src-id="sb4wiitdWpI"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/2q7fnxw2g93acqby7qntd/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-audio_only.mp3?rlkey=q6bdtbj0mqfyd1sqzg2r7hj0v&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/sb4wiitdWpI/default.jpg" width="72"/><thr:total 
xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we wrap up our units on evolutionary algorithms, closing on Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic content grammar for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. We will use this SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit. 
Shifting-Balance Theory visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html Simulated Annealing explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we wrap up our units on evolutionary algorithms, closing on Distributed (Island Model) and Parallel Genetic Algorithms. We describe the basic population structure and migration approaches in Distributed GA's and explore whether Sewall Wright's shifting-balance theory (SBT) can explain DGA's success on certain landscapes. We then pivot to a new unit on physics-inspired ML and optimization approaches, where Simulated Annealing (SA) is one of the key topics. We introduce Simulated Annealing and discuss how hardware annealers can solve a broad set of combinatorial problems that can be QUBO (Quadratic Unconstrained Binary Optimization) encoded. We set up the basic content grammar for the unit by introducing macrostate, microstate, temperature, and energy, and then we give an animated outline of how the basic SA algorithm works. We will use this SA to motivate our explorations into entropy, MaxEnt, Boltzmann sampling, and more in future lectures in this unit. 
Shifting-Balance Theory visualizer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/shifting_balance_theory/sbt_four_peaks.html Simulated Annealing explorer: https://tpavlic.github.io/asu-bioinspired-ai-and-optimization/simulated_annealing/simulated_annealing_demo.html Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/b8v78jmem4j9spju7sa8k/IEE598-Lecture4D_5A-2026-03-19-Distributed_and_Parallel_GAs_and_Introduction_to_Simulated_Annealing_SA-Notes.pdf?rlkey=qfh29uk7ckfb8aphn1k645r9e&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 4C (2026-03-17): From Niches to Meta-Populations: Toward Distributed and Parallel Genetic Algorithms (DGA/PGA)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-4c-2026-03-17-from-niches-to.html</link><category>podcast</category><pubDate>Tue, 17 Mar 2026 18:17:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-8533144338673972650</guid><description>&lt;p&gt;In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We had covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start on crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. 
This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start out with in the next lecture.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/s7P_gYlRU4s" width="320" youtube-src-id="s7P_gYlRU4s"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/muzyualbin1ft5jfbg7tf/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-audio_only.mp3?rlkey=6ib1tfd1754lq8wxcvir0dhyk&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/s7P_gYlRU4s/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We had covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start on crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start out with in the next lecture. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we close out our discussion of "niching" diversity-preservation approaches for multi-modal and multi-objective evolutionary algorithms. We had covered clearing/clustering algorithms in the past lecture (Lecture 4B), and so we start on crowding algorithms, including Restricted Tournament Selection (RTS), briefly introduce the Species Conserving Genetic Algorithm (SCGA), and then close with a discussion of islanding approaches. This sets up an introduction to distributed (and parallel) genetic algorithms, which we will start out with in the next lecture. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/ngcurzxer85i4oft1qn68/IEE598-Lecture4C-2026-03-17-From_Niches_to_Meta_Populations-Distributed_and_Parallel_GA-Notes.pdf?rlkey=x8mb0bn5d56lhwtjftjx323u6&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 4B (2026-03-05): Niching Methods for Diversity Preservation in Multi-Objective and Multi-Modal Evolutionary Algorithms</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-4b-2026-03-05-niching-methods.html</link><category>podcast</category><pubDate>Thu, 5 Mar 2026 13:54:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1581837957135219612</guid><description>&lt;p&gt;In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. We start with an overall goal to create "negative frequency-dependent selection" (or density dependence) that has the potential to stabilize different subpopulations coexisting with each other. We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable co-existence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches very closely to the IFD) and various other fitness-modification methods that each have different computational costs and diversity benefits. 
We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/5p6DKJOr9Y4" width="320" youtube-src-id="5p6DKJOr9Y4"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/b3e1roolrawadtq078acy/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-audio_only.mp3?rlkey=ww0z7qmnv2ylxzlhplizzmb6t&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/5p6DKJOr9Y4/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. We start with an overall goal to create "negative frequency-dependent selection" (or density dependence) that has the potential to stabilize different subpopulations coexisting with each other. We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable co-existence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches very closely to the IFD) and various other fitness-modification methods that each have different computational costs and diversity benefits. We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we cover several of the different "niching methods" used for diversity preservation in both multi-objective and multi-modal evolutionary algorithms. We start with an overall goal to create "negative frequency-dependent selection" (or density dependence) that has the potential to stabilize different subpopulations coexisting with each other. 
We start by discussing how evolutionary models like Hawk–Dove ("Chicken") have mixed Nash equilibria that can represent stable co-existence of discrete phenotypes (due to negative frequency dependence). But then we pivot to habitat selection models, with particular focus on the Ideal Free Distribution (IFD), as a better match for the diversity-preservation problem in MOEA's and MMEA's. That allows us to introduce "fitness sharing" (which matches very closely to the IFD) and various other fitness-modification methods that each have different computational costs and diversity benefits. We close by introducing selection-based approaches, such as breaking tournament-selection ties by crowding distance. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/d2ucw5j4lqtlj6hzue2mk/IEE598-Lecture4B-2026-03-05-Niching_Methods_for_Diversity_Preservation_in_MOEA_and_MMO-Notes.pdf?rlkey=rvvs5xy2qmbva7xl1glsmhnja&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 3D/4A (2026-03-03): From Multi-Objective to Multi-Modal Optimization</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/03/lecture-3d4a-2026-03-03-from-multi.html</link><category>podcast</category><pubDate>Tue, 3 Mar 2026 21:54:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-3919942192414025083</guid><description>&lt;p&gt;In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. 
We close by starting to set up background that motivates the particular designs of niche-preserving methods.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/qS6yjHmp-8E" width="320" youtube-src-id="qS6yjHmp-8E"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/5i8sq6kf9m91ayg32q8wa/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-audio_only.mp3?rlkey=xfhrd4glnytnvigy04yi530zx&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/qS6yjHmp-8E/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. We close by starting to set up background that motivates the particular designs of niche-preserving methods. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we wrap up our discussion of Pareto ranking for Multi-Objective Evolutionary Algorithms (MOEA's) and then introduce the topic of diversity-preservation methods ("niching" methods) that maintain diversity across the Pareto frontier. We then pivot to introducing Multi-Modal Optimization (MMO), which also requires "niching" methods to populate the different peaks of the optimization objective. We close by starting to set up background that motivates the particular designs of niche-preserving methods. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/blmmcyw7e0uf2ivjh1lkk/IEE598-Lecture3D_4A-2026-03-03-From_Multi_Objective_to_Multi_Modal_Optimization-Notes.pdf?rlkey=q95a982to30ovnv6izej1cd5y&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 3C (2026-02-26): Multi-Objective EA’s from Linearization to Pareto Ranking and Beyond</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-3c-2026-02-26-multi-objective.html</link><category>podcast</category><pubDate>Thu, 26 Feb 2026 13:56:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5147815842547983122</guid><description>&lt;p&gt;In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). 
We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/e8NMFq7VUmo" width="320" youtube-src-id="e8NMFq7VUmo"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/unflgwo5ey9pqcxhh9uck/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-audio_only.mp3?rlkey=zcx8rdwo0vvw75b0y42350iyh&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/e8NMFq7VUmo/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we review the concept of Pareto optimality (Pareto improvements, Pareto efficiency, Pareto-efficient sets of non-dominated solutions, and the Pareto frontier/front) and then start laying the foundations of building multi-objective evolutionary algorithms to find the Pareto front. 
This starts with introducing historical MOEA's – like WBGA-MO, RWGA, and VEGA – which are all based on a linear scalarization of multi-objective problems. We then show that these methods not only have trouble promoting diversity along the discovered samples of the Pareto frontier, but they completely miss non-convex portions of the Pareto frontier. To address these issues, we introduce Pareto ranking (from SPGA, MOGA, and NSGA) and the general concept of the community ecology of multi-objective optimization (where fitness is inversely proportional to distance to the Pareto frontier, and diversity is maintained in coexisting "niches" along the community of similar-fitness individuals). We will pick up with this idea and transition to multi-modal optimization (and the various diversity-preserving "niching" methods that enable it) next time. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/5umaoz9i6bt1w0fkeuw37/IEE598-Lecture3C-2026-02-26-Multi_Objective_EA-s_from_Linearization_to_Pareto_Ranking_and_Beyond-Notes.pdf?rlkey=2cixolbjafhd61r88055rxr3r&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 3B (2026-02-24): Multi-Objective Optimality and Introduction to Multi-Objective Evolutionary Algorithms</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-3b-2026-02-24-multi-objective.html</link><category>podcast</category><pubDate>Tue, 24 Feb 2026 13:41:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-3837405829805907403</guid><description>&lt;p&gt;In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. 
We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking).&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/f1dtwIox6nk" width="320" youtube-src-id="f1dtwIox6nk"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" 
url="https://dl.dropboxusercontent.com/scl/fi/z2ispfdj66x6w86l900zl/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-audio_only.mp3?rlkey=072qsr1rdl2r9al3x4npl673t&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/f1dtwIox6nk/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking). 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start with a review of equilibrium and efficiency/dominance concepts from game theory – specifically the Nash equilibrium, Pareto efficiency, and payoff and risk dominance. We apply these for both a discrete game (the Stag Hunt) and a generic continuous game. That allows us to introduce Variational Inequalities as a more general numerical problem set that includes the Nash equilibrium as a member (for continuous games). We then pivot to Multi-Objective Optimization (MOO) and motivate the concept of Pareto improvements, Pareto efficiency, Pareto-efficient sets, and Pareto frontiers/fronts. We close with discussions about scalarization approaches to solve MOO problems, including linear scalarization, targets, satisficing, and Chebyshev/weighted minimax. We discuss problems with these approaches and then hint that we will move forward toward fitness concepts that do not require weighting/scalarization. We will pick up with that point in the next lecture, where we introduce several different forms of Multi-Objective Evolutionary Algorithms (and Pareto ranking). 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/hqfni3lxa8z09c8kzkfi9/IEE598-Lecture3B-2026-02-24-Multi_Objective_Optimality_and_Intro_to_Multi_Objectivce_Genetic_Algrithms-Notes.pdf?rlkey=si10th7dvglfj25wcv2kyoqrh&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2D/3A (2026-02-19): From Immunocomputing to Games and Multi-Objective Optimization (MOO)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2d3a-2026-02-19-from.html</link><category>podcast</category><pubDate>Thu, 19 Feb 2026 17:09:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-2363111649392013881</guid><description>&lt;p&gt;In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. 
Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&amp;amp;dl=0&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/OoWI2zbjUaQ" width="320" youtube-src-id="OoWI2zbjUaQ"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/gsebqg3tgszu25ydzbrwu/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-audio_only.mp3?rlkey=xry90blrqhqr6xuac87u4ebr0&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/OoWI2zbjUaQ/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start with a description of two major classes of Artificial Immune System strategies – negative selection and clonal selection – along with the biological processes in the acquired/adaptive/specific immune system in vertebrates that inspired these algorithms. We focus on how both approaches maintain useful diversity, and we frame clonal selection as a form of multi-modal optimization (which will be discussed in more detail in Unit 4). This allows us to pivot to multi-objective optimization. 
In the last section of the lecture, we start outlining fundamentals of thinking about systems with multiple competing objectives – focusing first on game theory and the concept of the Nash equilibrium. Next time, we will define Pareto efficiency and start to introduce classical algorithms for finding Pareto-efficient sets. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/p8ly7l88ouvyc7m60lq8v/IEE598-Lecture3A-2026-02-19-Multicriteria_Decision_Making_Pareto_Optimality_and_Intro_to_Multiobjective_Evolutionary_Algorithms_MOEAs-Notes.pdf?rlkey=yhb60lgihm2mv0w1eid1nxas1&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2C (2026-02-17): Genetic Programming and Artificial Immune Systems</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2c-2026-02-17-genetic.html</link><category>podcast</category><pubDate>Tue, 17 Feb 2026 13:39:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7983084346526223061</guid><description>&lt;p&gt;In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. 
We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit).&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/vkoPQEerKJk" width="320" youtube-src-id="vkoPQEerKJk"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/27ca6mix89b52n2xyarvl/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-audio_only.mp3?rlkey=87sljb5whlvydfzjc6gnch3z8&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/vkoPQEerKJk/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit). Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we review the core principles of Genetic Programming, starting with Linear Genetic Programming (LGP) and transitioning to tree-based Genetic Programming (GP) that incorporates Abstract Syntax Trees (AST's) as its genotypes. We cover the different mutation operators and selection operators for these forms of GP and typical application spaces that use GP. We then close the lecture with an introduction to Immunocomputing and Artificial Immune Systems (AIS), which mimic the acquired/adaptive/specific immune system of (jawless) vertebrates. We will continue our discussion of immunocomputing/AIS in the next lecture and use it to pivot to multi-objective optimization (the subject of the next unit). 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/men7ns3f44783gqk20b64/IEE598-Lecture2C-2026-02-17-Genetic_Programming_and_Artificial_Immune_Systems-Notes.pdf?rlkey=yyjbn4sm6wssd8no5qvnqs8ft&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2B (2026-02-12): Evolutionary and Linear Genetic Programming</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2b-2026-02-12-evolutionary-and.html</link><category>podcast</category><pubDate>Thu, 12 Feb 2026 15:32:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1996745117436853982</guid><description>&lt;p&gt;In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD).&amp;nbsp; This paints a picture of complementarity and not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and the SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be able to be used to write software (where code comes from a discrete decision space off limits from CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be able to be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (where assembly languages fit within this class). 
We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/BzOuZpfP7L4" width="320" youtube-src-id="BzOuZpfP7L4"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/ipypvufnsbw719plxispx/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-audio_only.mp3?rlkey=vgidbd2kkf11funzoe7646yko&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/BzOuZpfP7L4/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box 
xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD).&amp;nbsp; This paints a picture of complementarity and not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and the SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be able to be used to write software (where code comes from a discrete decision space off limits from CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be able to be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (where assembly languages fit within this class). We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start by reviewing the strengths and weaknesses of the GA, CMA-ES (with adaptive restarts), and Stochastic Gradient Descent (SGD).&amp;nbsp; This paints a picture of complementarity and not competition. Each algorithm fits within its own niche, and the algorithms can be used together to help compensate for weaknesses and find better solutions more efficiently. Whereas CMA-ES and the SGD require continuous-valued decision spaces, the GA does not, and so we then pivot to thinking about how a GA might be able to be used to write software (where code comes from a discrete decision space off limits from CMA-ES and SGD). We start this exploration with an introduction to the Evolutionary Programming of the 1960's -- which focuses on the evolution of populations of Finite State Machines (FSM's) using discrete mutation and no crossover. We then think about how GA's with crossover might be able to be applied to lines of code. We start with Linear Genetic Programming (LGP), which restricts the programming language to one without multi-line control/logic blocks (where assembly languages fit within this class). We demonstrate how One-Instruction Set Computers (like Subtract and Branch if Negative, SBN) are well suited for Linear Genetic Programming (with both mutation and crossover), and we talk about how the presence of "introns" can speed up convergence in LGP (with possible implications for understanding the presence of introns in biological systems/DNA). In the next lecture, we will complete the story with Genetic Programming based on abstract syntax trees (AST's) and then introduce Artificial Immune Systems. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/qsv6necfvgkny8rrxqvuw/IEE598-Lecture2B-2026-02-12-Evolutionary_and_Linear_Genetic_Programming-Notes.pdf?rlkey=aux54wr6rnju9o4nlsvcptft0&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 2A (2026-02-10): Evolution Strategies and Covariance Adaptation (ES, NES, CMA-ES)</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-2a-2026-02-10-evolution.html</link><category>podcast</category><pubDate>Tue, 10 Feb 2026 11:15:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5674405916478864958</guid><description>&lt;p&gt;In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to be able to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). 
We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/qlsFd2FHun0" width="320" youtube-src-id="qlsFd2FHun0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/jd1ns9c17njugb722qlti/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-audio_only.mp3?rlkey=py1qv1jg6qbmtgpvx2ldywzhl&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/qlsFd2FHun0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). 
Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to be able to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we introduce a fundamentally different family of evolution-inspired search algorithms, the Evolution Strategies (ES). Rather than treating a population as a set of hypothetical good solutions that must be retained or discarded, as in the GA, the Evolution Strategies adapt the search process itself by allowing different decision variables to be able to mutate using different step sizes, and the resulting adaptive step sizes reflect the curvature of the underlying fitness landscape. 
We discuss how this heuristic idea was formalized in Natural Evolution Strategies (NES), which leverage the information-theoretic natural gradient to learn productive directions to climb, and then how that was made more practical and effective via Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We close with a discussion of how CMA-ES facilitates adaptive restarts, making CMA-ES not only a good tool for high-resolution search of a single fitness peak but also a candidate for global optimization – seeking out new peaks in a sort of "depth-first" order (in contrast to the "breadth-first" order of the GA). We then put the GA, ES, and conventional (stochastic) gradient descent together as complementary tools for complex optimization. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/tnq6ol9soph5cxjcqnf73/IEE598-Lecture2A-2026-02-10-Introduction_to_Evolution_Strategies_and_CMA-ES-Notes.pdf?rlkey=9brna7e54fkh9ljf00uexxmjk&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1H (2026-02-05): Genetic Algorithm (GA) Hyperparameter Tuning</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-1h-2026-02-05-genetic-algorithm.html</link><category>podcast</category><pubDate>Thu, 5 Feb 2026 13:19:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-795254014848423688</guid><description>&lt;p&gt;In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. 
We start with a discussion of the effects of Stochastic Universal Sampling (SUS) over roulette-wheel selection and how the effective drop in variance in the number of parents eliminates the fixation-causing effects of drift while also continuing to leave a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to have better control over selective pressure, but we mention that tournament selection ultimately is a stronger choice computationally when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide choice of different hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. 
Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we will start with in the next unit.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/iX6SFkYMwo0" width="320" youtube-src-id="iX6SFkYMwo0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/zcgy26fcdtr4m52r57751/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-audio_only.mp3?rlkey=52rd3vacvrayqqy8cl14bb68b&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/iX6SFkYMwo0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. We start with a discussion of the effects of Stochastic Universal Sampling (SUS) over roulette-wheel selection and how the effective drop in variance in the number of parents eliminates the fixation-causing effects of drift while also continuing to leave a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to have better control over selective pressure, but we mention that tournament selection ultimately is a stronger choice computationally when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide choice of different hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we will start with in the next unit. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we complete our coverage of the Genetic Algorithm (GA) by discussing how to improve the function of selection operators and, in general, how to tune hyperparameters to improve the performance of the GA for a given problem. 
We start with a discussion of the effects of Stochastic Universal Sampling (SUS) over roulette-wheel selection and how the effective drop in variance in the number of parents eliminates the fixation-causing effects of drift while also continuing to leave a barrier on precision in place. We also discuss how to use exponential ranking in ranking selection to have better control over selective pressure, but we mention that tournament selection ultimately is a stronger choice computationally when rank-based selection is desired. We discuss a framework that puts the 5 major hyperparameters (M, R, E, Pm, and Pc [as well as selection pressure]) on one graph to help guide choice of different hyperparameters based on context. We draw connections between the two types of selection operator (fitness-proportionate and rank-based) and Generalized Linear Modeling (GLM; continuous and ordinal response variables) and discuss connections between the number of parents and the number of samples/statistical power in a GLM. Finally, we close with a brief introduction to Evolution Strategies (ES), which will be the topic we will start with in the next unit. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/8z1we6jycmealo2ww2ik0/IEE598-Lecture1H-2026-02-05-GA_Hyperparameter_Tuning-Notes.pdf?rlkey=gosm6672x8v9c66zdf6ho09mb&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1G (2026-02-03): GA Wrap Up – Crossover, Mutation, &amp; Tuning GA Operator Choices</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/02/lecture-1g-2026-02-03-ga-wrap-up.html</link><category>podcast</category><pubDate>Tue, 3 Feb 2026 14:00:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-814591260940829834</guid><description>&lt;p&gt;In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Universal Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. 
We will also transition to Unit 2, where we will start by introducing ES and CMA-ES.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/lCQEFw_KNHg" width="320" youtube-src-id="lCQEFw_KNHg"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/35c4ts5ljfvrplttn27xr/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-audio_only.mp3?rlkey=cxq3a28hjqd1tn0e6251264g4&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/lCQEFw_KNHg/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. 
We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Universal Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. We will also transition to Unit 2, where we will start by introducing ES and CMA-ES. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we almost finish our discussion of the canonical Genetic Algorithm (GA) by covering different crossover and mutation operator choices. We discuss how mutation and crossover rates might change over time. We then end by returning to the selection operator to introduce Stochastic Universal Sampling, a stratified sampling approach that reduces the variance in the number of offspring selected per high-fitness individual without affecting the mean. Next time, we will discuss how the five major hyperparameters and selection pressure work together to determine the effectiveness of the GA for a particular objective. We will also transition to Unit 2, where we will start by introducing ES and CMA-ES. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/rvv4bkrsiz4ixrhgitt39/IEE598-Lecture1G-2026-02-03-GA_Wrap_Up-Crossover_Mutation_and_Tuning_GA_Operator_Choices-Notes.pdf?rlkey=vcgleumzuv85moizkqibdnjhw&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1F (2026-01-29): Operators of the Genetic Algorithm</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/lecture-1f-2026-01-29-operators-of.html</link><category>podcast</category><pubDate>Thu, 29 Jan 2026 15:41:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1127447417534147311</guid><description>&lt;p&gt;In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. 
After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/OUm-t5ODr54" width="320" youtube-src-id="OUm-t5ODr54"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/7sjmiz8gvg3x53tci62mh/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-audio_only.mp3?rlkey=o63eectpvss0e9l6jb5yea1bs&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/OUm-t5ODr54/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we dive deeper into the basic Genetic Algorithm by describing the three major operators in any GA iteration – the selection operator, the crossover operator, and the mutation operator. We describe different forms of selection (fitness proportionate, ranking, and tournament) and how they vary in their ability to control selection pressure. We also discuss several forms of crossover (from single point to multi-point to uniform to taking random linear combinations) and their function as they move individuals around fitness landscapes. We will finish with the mutation operator next time, but that content is also covered in the pre-written slide notes linked below. 
After discussing the mutation operator and some optimizations of the GA itself, we will transition next to evolutionary computing/programming. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/sa96xjlv5d8q3mc8l0bde/IEE598-Lecture1F-2026-01-29-Operators_of_the_Genetic_Algorithm-Notes.pdf?rlkey=54eow2a79g1437r7g19be7gjy&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1E (2026-01-27): Structure of the Basic Genetic Algorithm</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/lecture-1e-2026-01-27-structure-of.html</link><category>podcast</category><pubDate>Tue, 27 Jan 2026 13:16:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7476402717170279388</guid><description>&lt;p&gt;In this lecture, we reveal the basic architecture of the simple GA. We start with defining how to concretely implement chromosomes/genomes, genes, alleles, characters, and traits numerically within an Engineering Design Optimization context. We then move on to a general definition of multi-objective fitness (which we will return to in Unit 3 when we study multi-objective evolutionary algorithms) and show how fitness functions can be scaled not only to meet the assumptions on fitness functions but also to adjust selective pressure as desired. 
We close with a flowchart of the steps of a basic genetic algorithm, highlighting operators (selection, crossover, and mutation) that we will discuss in detail in the next lecture.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/zdvfjtp88fl8omly7sd6u/IEE598-Lecture1E-2026-01-27-Structure_of_the_Basic_Genetic_Algorithm-Notes.pdf?rlkey=f0t4apfkyj0v9ketqxoy1xew1&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/zdvfjtp88fl8omly7sd6u/IEE598-Lecture1E-2026-01-27-Structure_of_the_Basic_Genetic_Algorithm-Notes.pdf?rlkey=f0t4apfkyj0v9ketqxoy1xew1&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/tQxta7uzvW0" width="320" youtube-src-id="tQxta7uzvW0"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/te9wq6v7kem8zrab5dnvp/IEE598-Lecture1E-2026-01-27-Structure_of_the_Basic_Genetic_Algorithm-audio_only.mp3?rlkey=31re6xy8k18zqhr2s977btitp&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/tQxta7uzvW0/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we reveal the basic architecture of the simple GA. 
We start with defining how to concretely implement chromosomes/genomes, genes, alleles, characters, and traits numerically within an Engineering Design Optimization context. We then move on to a general definition of multi-objective fitness (which we will return to in Unit 3 when we study multi-objective evolutionary algorithms) and show how fitness functions can be scaled not only to meet the assumptions on fitness functions but also to adjust selective pressure as desired. We close with a flowchart of the steps of a basic genetic algorithm, highlighting operators (selection, crossover, and mutation) that we will discuss in detail in the next lecture. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/zdvfjtp88fl8omly7sd6u/IEE598-Lecture1E-2026-01-27-Structure_of_the_Basic_Genetic_Algorithm-Notes.pdf?rlkey=f0t4apfkyj0v9ketqxoy1xew1&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we reveal the basic architecture of the simple GA. We start with defining how to concretely implement chromosomes/genomes, genes, alleles, characters, and traits numerically within an Engineering Design Optimization context. We then move on to a general definition of multi-objective fitness (which we will return to in Unit 3 when we study multi-objective evolutionary algorithms) and show how fitness functions can be scaled not only to meet the assumptions on fitness functions but also to adjust selective pressure as desired. We close with a flowchart of the steps of a basic genetic algorithm, highlighting operators (selection, crossover, and mutation) that we will discuss in detail in the next lecture. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/zdvfjtp88fl8omly7sd6u/IEE598-Lecture1E-2026-01-27-Structure_of_the_Basic_Genetic_Algorithm-Notes.pdf?rlkey=f0t4apfkyj0v9ketqxoy1xew1&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1D (2025-01-22): The Four Forces of Evolution and The Drift Barrier</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/lecture-1d-2025-01-22-four-forces-of.html</link><category>podcast</category><pubDate>Thu, 22 Jan 2026 13:37:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5806291925624409198</guid><description>&lt;p&gt;In this lecture, we review the four forces of evolution -- mutation, migration/gene flow, genetic drift, and natural selection -- and the contribution that each one makes to either increasing or decreasing variance in a population over time. So, a more complete picture of evolution is a tense combination of these forces, each of them leading to different kinds of effects on the distribution of alleles (strategies) in a population. We discuss the so-called "drift barrier" -- how the tendency for natural selection to produce higher quality solutions is ultimately limited by genetic drift that dominates when populations have low fitness diversity (low selective pressure) -- and we discuss how this sets up a speed–accuracy tradeoff between mutation (which counteracts drift in a way that does not require more time for convergence but makes it impossible to fine-tune solutions) and population size (which can fine-tune solutions but requires a longer time to converge to a good solution). 
Selection operators and evolutionary hyper-parameters should be chosen with these pressures and tradeoffs in mind.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/g4sqsadzh22p2lay05n18/IEE598-Lecture1D-2025-01-22-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-Notes.pdf?rlkey=ky9ol5itw1ipuehdmuhmylqd1&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/g4sqsadzh22p2lay05n18/IEE598-Lecture1D-2025-01-22-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-Notes.pdf?rlkey=ky9ol5itw1ipuehdmuhmylqd1&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/nCMMux4cHJA" width="320" youtube-src-id="nCMMux4cHJA"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/y5phvtfpdbd8rqfauncnc/IEE598-Lecture1D-2025-01-22-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-audio_only.mp3?rlkey=or5qrg0oo4q34otzwz51wlp1n&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/nCMMux4cHJA/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we review the four forces of evolution -- mutation, migration/gene flow, genetic drift, and natural selection -- and the contribution that each one makes to either increasing or decreasing variance in a population over time. So, a more complete picture of evolution is a tense combination of these forces, each of them leading to different kinds of effects on the distribution of alleles (strategies) in a population. We discuss the so-called "drift barrier" -- how the tendency for natural selection to produce higher quality solutions is ultimately limited by genetic drift that dominates when populations have low fitness diversity (low selective pressure) -- and we discuss how this sets up a speed–accuracy tradeoff between mutation (which counteracts drift in a way that does not require more time for convergence but makes it impossible to fine-tune solutions) and population size (which can fine-tune solutions but requires a longer time to converge to a good solution). Selection operators and evolutionary hyper-parameters should be chosen with these pressures and tradeoffs in mind. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/g4sqsadzh22p2lay05n18/IEE598-Lecture1D-2025-01-22-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-Notes.pdf?rlkey=ky9ol5itw1ipuehdmuhmylqd1&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we review the four forces of evolution -- mutation, migration/gene flow, genetic drift, and natural selection -- and the contribution that each one makes to either increasing or decreasing variance in a population over time. So, a more complete picture of evolution is a tense combination of these forces, each of them leading to different kinds of effects on the distribution of alleles (strategies) in a population. 
We discuss the so-called "drift barrier" -- how the tendency for natural selection to produce higher quality solutions is ultimately limited by genetic drift that dominates when populations have low fitness diversity (low selective pressure) -- and we discuss how this sets up a speed–accuracy tradeoff between mutation (which counteracts drift in a way that does not require more time for convergence but makes it impossible to fine-tune solutions) and population size (which can fine-tune solutions but requires a longer time to converge to a good solution). Selection operators and evolutionary hyper-parameters should be chosen with these pressures and tradeoffs in mind. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/g4sqsadzh22p2lay05n18/IEE598-Lecture1D-2025-01-22-The_Four_Forces_of_Evolution_and_The_Drift_Barrier-Notes.pdf?rlkey=ky9ol5itw1ipuehdmuhmylqd1&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1C (2026-01-20): Population Genetics of Evolutionary Algorithms</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/lecture-1c-2026-01-20-population.html</link><category>podcast</category><pubDate>Tue, 20 Jan 2026 19:32:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1225333296103143365</guid><description>&lt;p&gt;In this lecture, we start by reviewing the basics of our motivation to solve Engineering Design Optimization problems with evolutionary metaheuristics (a form of population-based direct search approach). To prepare to introduce the Genetic Algorithm (GA), one of the most well-known Evolutionary Algorithms, we spend most of this lecture covering foundational topics from population and quantitative genetics that will give us the necessary vocabulary for discussing the GA. 
In particular, we introduce concepts of qualitative and quantitative traits, characters, phenotypes, genes, chromosomes, genomes, and genotypes. We also discuss the "GxE to P" relationship between genotype and phenotype and the connection between phenotype and fitness. We close with a discussion of the four forces of evolution (mutation, gene flow/migration, natural selection, and genetic drift). Next time, we will discuss the constant tension between natural selection and genetic drift (and mutation) and how to manage (and sometimes harness) this tension in an evolutionary metaheuristic.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/0llubbbjkxxconlia235z/IEE598-Lecture1C-2026-01-20-Population_Genetics_of_Evolutionary_Algorithms-Notes.pdf?rlkey=pyc5jxvcmbjfietop5jchgiyp&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/0llubbbjkxxconlia235z/IEE598-Lecture1C-2026-01-20-Population_Genetics_of_Evolutionary_Algorithms-Notes.pdf?rlkey=pyc5jxvcmbjfietop5jchgiyp&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/YwSLqO58y88" width="320" youtube-src-id="YwSLqO58y88"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/5fv8hhtxh1svw6h6q145b/IEE598-Lecture1C-2026-01-20-Population_Genetics_of_Evolutionary_Algorithms-audio_only.mp3?rlkey=rxzrn47mdev9fr6xwrpqtv84s&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/YwSLqO58y88/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point 
xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we start by reviewing the basics of our motivation to solve Engineering Design Optimization problems with evolutionary metaheuristics (a form of population-based direct search approach). To prepare to introduce the Genetic Algorithm (GA), one of the most well-known Evolutionary Algorithms, we spend most of this lecture covering foundational topics from population and quantitative genetics that will give us the necessary vocabulary for discussing the GA. In particular, we introduce concepts of qualitative and quantitative traits, characters, phenotypes, genes, chromosomes, genomes, and genotypes. We also discuss the "GxE to P" relationship between genotype and phenotype and the connection between phenotype and fitness. We close with a discussion of the four forces of evolution (mutation, gene flow/migration, natural selection, and genetic drift). Next time, we will discuss the constant tension between natural selection and genetic drift (and mutation) and how to manage (and sometimes harness) this tension in an evolutionary metaheuristic. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/0llubbbjkxxconlia235z/IEE598-Lecture1C-2026-01-20-Population_Genetics_of_Evolutionary_Algorithms-Notes.pdf?rlkey=pyc5jxvcmbjfietop5jchgiyp&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we start by reviewing the basics of our motivation to solve Engineering Design Optimization problems with evolutionary metaheuristics (a form of population-based direct search approach).
To prepare to introduce the Genetic Algorithm (GA), one of the most well-known Evolutionary Algorithms, we spend most of this lecture covering foundational topics from population and quantitative genetics that will give us the necessary vocabulary for discussing the GA. In particular, we introduce concepts of qualitative and quantitative traits, characters, phenotypes, genes, chromosomes, genomes, and genotypes. We also discuss the "GxE to P" relationship between genotype and phenotype and the connection between phenotype and fitness. We close with a discussion of the four forces of evolution (mutation, gene flow/migration, natural selection, and genetic drift). Next time, we will discuss the constant tension between natural selection and genetic drift (and mutation) and how to manage (and sometimes harness) this tension in an evolutionary metaheuristic. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/0llubbbjkxxconlia235z/IEE598-Lecture1C-2026-01-20-Population_Genetics_of_Evolutionary_Algorithms-Notes.pdf?rlkey=pyc5jxvcmbjfietop5jchgiyp&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1B (2026-01-15): Evolutionary Approach to Engineering Design Optimization</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/iee-598-lecture-1b-2026-01-15.html</link><category>podcast</category><pubDate>Thu, 15 Jan 2026 13:40:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-4530465575795440635</guid><description>&lt;p&gt;In this lecture, we formally introduce the Engineering Design Optimization (EDO) problem and several application spaces where it may apply. 
We then discuss classical approaches for using computational methods to solve this difficult optimization problem -- including both gradient-based and direct search methods. This allows us to introduce the categories of trajectory and local search methods (like tabu search and simulated annealing) and population-based methods (like the genetic algorithm, ant colony optimization, and particle swarm optimization). We then start down the path of exploring evolutionary algorithms, a special (but very large) set of population-based methods. In the next lecture, we will connect this discussion to population genetics and a basic Genetic Algorithm (GA).&lt;/p&gt;&lt;p&gt;The whiteboard notes taken during this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/kslpzf961mp4viwj557ed/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-Notes.pdf?rlkey=xb0zoc1h74kbl5m1je7jb1p1c&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/kslpzf961mp4viwj557ed/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-Notes.pdf?rlkey=xb0zoc1h74kbl5m1je7jb1p1c&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/4fLejT9IQhM" width="320" youtube-src-id="4fLejT9IQhM"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/m9bhjjznxhqqrm6wqi6n1/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-audio_only.mp3?rlkey=6i31g0nu1uyz6ou8pr2sl9dsr&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/4fLejT9IQhM/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename 
xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>In this lecture, we formally introduce the Engineering Design Optimization (EDO) problem and several application spaces where it may apply. We then discuss classical approaches for using computational methods to solve this difficult optimization problem -- including both gradient-based and direct search methods. This allows us to introduce the categories of trajectory and local search methods (like tabu search and simulated annealing) and population-based methods (like the genetic algorithm, ant colony optimization, and particle swarm optimization). We then start down the path of exploring evolutionary algorithms, a special (but very large) set of population-based methods. In the next lecture, we will connect this discussion to population genetics and a basic Genetic Algorithm (GA). The whiteboard notes taken during this lecture can be found at: https://www.dropbox.com/scl/fi/kslpzf961mp4viwj557ed/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-Notes.pdf?rlkey=xb0zoc1h74kbl5m1je7jb1p1c&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>In this lecture, we formally introduce the Engineering Design Optimization (EDO) problem and several application spaces where it may apply. We then discuss classical approaches for using computational methods to solve this difficult optimization problem -- including both gradient-based and direct search methods. 
This allows us to introduce the categories of trajectory and local search methods (like tabu search and simulated annealing) and population-based methods (like the genetic algorithm, ant colony optimization, and particle swarm optimization). We then start down the path of exploring evolutionary algorithms, a special (but very large) set of population-based methods. In the next lecture, we will connect this discussion to population genetics and a basic Genetic Algorithm (GA). The whiteboard notes taken during this lecture can be found at: https://www.dropbox.com/scl/fi/kslpzf961mp4viwj557ed/IEE598-Lecture1B-2026-01-15-Evolutionary_Approach_to_Engineering_Design_Optimization-Notes.pdf?rlkey=xb0zoc1h74kbl5m1je7jb1p1c&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 1A (2026-01-13): Introduction to Course Policies and Motivations</title><link>https://asu-iee598-bioinspired.blogspot.com/2026/01/lecture-1a-2026-01-13-introduction-to.html</link><category>podcast</category><pubDate>Tue, 13 Jan 2026 13:34:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-5442067174469570292</guid><description>&lt;p&gt;This lecture introduces the main policies of the course and an outline of its content. We close with an introduction to the concepts of heuristics, metaheuristics, and hyperheuristics in the context of Engineering Design Optimization specifically and optimization more generally.
We hint at the idea that nature provides templates for heuristics at all three levels, and this class aims to understand how these natural systems work and what can be taken from them in the design of heuristics for engineered systems.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at: &lt;a href="https://www.dropbox.com/scl/fi/gza807hargomj414wo7fz/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-Notes.pdf?rlkey=bl3tx0oa1vbaz79vxahrxipmi&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/gza807hargomj414wo7fz/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-Notes.pdf?rlkey=bl3tx0oa1vbaz79vxahrxipmi&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/FSCN45BaDCc" width="320" youtube-src-id="FSCN45BaDCc"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/tytv2s3y5ekqvjihryk0h/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-audio_only.mp3?rlkey=5urxtrkwbo2a6v5v448740amq&amp;ext=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/FSCN45BaDCc/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P.
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>This lecture introduces the main policies of the course and an outline of its content. We close with an introduction to the concepts of heuristics, metaheuristics, and hyperheuristics in the context of Engineering Design Optimization specifically and optimization more generally. We hint at the idea that nature provides templates for heuristics at all three levels, and this class aims to understand how these natural systems work and what can be taken from them in the design of heuristics for engineered systems. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/gza807hargomj414wo7fz/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-Notes.pdf?rlkey=bl3tx0oa1vbaz79vxahrxipmi&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>This lecture introduces the main policies of the course and an outline of its content. We close with an introduction to the concepts of heuristics, metaheuristics, and hyperheuristics in the context of Engineering Design Optimization specifically and optimization more generally. We hint at the idea that nature provides templates for heuristics at all three levels, and this class aims to understand how these natural systems work and what can be taken from them in the design of heuristics for engineered systems.
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/gza807hargomj414wo7fz/IEE598-Lecture1A-2026-01-13-Introduction_to_Course_Policies_and_Motivations-Notes.pdf?rlkey=bl3tx0oa1vbaz79vxahrxipmi&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 8A (2025-04-29): Complex Systems Models of Computation – Cellular Automata and Neighbors</title><link>https://asu-iee598-bioinspired.blogspot.com/2025/04/lecture-8a-2025-04-29-complex-systems.html</link><category>podcast</category><pubDate>Mon, 28 Apr 2025 22:44:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-7924302317383181685</guid><description>&lt;p&gt;This lecture introduces approaches for understanding (and building) computational systems that emerge out of Complex Adaptive Systems (CAS). It first motivates the idea that systems of many interconnected parts that each are relatively easy to understand in isolation can come together in a system whose network of interactions leads to emergent global phenomena that cannot be predicted from the properties or behaviors of any individual component. We then focus on the role of space in the functions and properties that emerge at a global level. We do this through the example of the Interacting Particle System (IPS) known as the "voter model", which can be viewed as a model for neutral evolution in spatially structured populations. We show that the dual process for the Voter Model is a time-reversed set of coalescing random walkers for which consensus in the model corresponds to whether walkers are sure to coalesce into a single walker in the past of the dual process.
This lets us apply Pólya's recurrence theorem and show that consensus is guaranteed (with probability 1) for 1- and 2-dimensional lattices but not guaranteed for lattices of 3 dimensions or higher. This implies that neutral evolution (for example) in a 3D spatial structure may not always lead to fixation on one genotype. We then pivot to introducing Elementary Cellular Automata (ECA) and describe a few rules that demonstrate how they work. We close the regular lecture by connecting CAs back to neural networks (the previous unit) and evolutionary algorithms (the first unit), thus introducing the Cellular Evolutionary Algorithm (cEA). We then extend the lecture a little longer than usual in order to do a demonstration of several ECAs in NetLogo, including how to combine two ECA rules to generate a reliable density classifier.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/gfna3a2aj49fq2a4sovst/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=mo5jag4axljxrbpnq6wkk9egh&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/gfna3a2aj49fq2a4sovst/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=mo5jag4axljxrbpnq6wkk9egh&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/x08dR8Yvlso" width="320" youtube-src-id="x08dR8Yvlso"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/dgfk992n46f42nm1yxmfo/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-audio_only.mp3?rlkey=z6or0c0r7ydn7fsw039dsvef8&amp;extension=.mp3"/><media:thumbnail
xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/x08dR8Yvlso/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>This lecture introduces approaches for understanding (and building) computational systems that emerge out of Complex Adaptive Systems (CAS). It first motivates the idea that systems of many interconnected parts that each are relatively easy to understand in isolation can come together in a system whose network of interactions leads to emergent global phenomena that cannot be predicted from the properties or behaviors of any individual component. We then focus on the role of space in the functions and properties that emerge at a global level. We do this through the example of the Interacting Particle System (IPS) known as the "voter model", which can be viewed as a model for neutral evolution in spatially structured populations. We show that the dual process for the Voter Model is a time-reversed set of coalescing random walkers for which consensus in the model corresponds to whether walkers are sure to coalesce into a single walker in the past of the dual process. This lets us apply Pólya's recurrence theorem and show that consensus is guaranteed (with probability 1) for 1- and 2-dimensional lattices but not guaranteed for lattices of 3 dimensions or higher. This implies that neutral evolution (for example) in a 3D spatial structure may not always lead to fixation on one genotype.
We then pivot to introducing Elementary Cellular Automata (ECA) and describe a few rules that demonstrate how they work. We close the regular lecture by connecting CAs back to neural networks (the previous unit) and evolutionary algorithms (the first unit), thus introducing the Cellular Evolutionary Algorithm (cEA). We then extend the lecture a little longer than usual in order to do a demonstration of several ECAs in NetLogo, including how to combine two ECA rules to generate a reliable density classifier. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/gfna3a2aj49fq2a4sovst/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=mo5jag4axljxrbpnq6wkk9egh&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>This lecture introduces approaches for understanding (and building) computational systems that emerge out of Complex Adaptive Systems (CAS). It first motivates the idea that systems of many interconnected parts that each are relatively easy to understand in isolation can come together in a system whose network of interactions leads to emergent global phenomena that cannot be predicted from the properties or behaviors of any individual component. We then focus on the role of space in the functions and properties that emerge at a global level. We do this through the example of the Interacting Particle System (IPS) known as the "voter model", which can be viewed as a model for neutral evolution in spatially structured populations. We show that the dual process for the Voter Model is a time-reversed set of coalescing random walkers for which consensus in the model corresponds to whether walkers are sure to coalesce into a single walker in the past of the dual process.
This lets us apply Pólya's recurrence theorem and show that consensus is guaranteed (with probability 1) for 1- and 2-dimensional lattices but not guaranteed for lattices of 3 dimensions or higher. This implies that neutral evolution (for example) in a 3D spatial structure may not always lead to fixation on one genotype. We then pivot to introducing Elementary Cellular Automata (ECA) and describe a few rules that demonstrate how they work. We close the regular lecture by connecting CAs back to neural networks (the previous unit) and evolutionary algorithms (the first unit), thus introducing the Cellular Evolutionary Algorithm (cEA). We then extend the lecture a little longer than usual in order to do a demonstration of several ECAs in NetLogo, including how to combine two ECA rules to generate a reliable density classifier. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/gfna3a2aj49fq2a4sovst/IEE598-Lecture8A-2025-04-29-Complex_Systems_Models_of_Computation-Cellular_Automata_and_Neighbors-Notes.pdf?rlkey=mo5jag4axljxrbpnq6wkk9egh&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7F (2025-04-24): Spiking Neural Networks and Neuromorphic Computation</title><link>https://asu-iee598-bioinspired.blogspot.com/2025/04/lecture-7f-2025-04-24-spiking-neural.html</link><category>podcast</category><pubDate>Thu, 24 Apr 2025 16:20:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-1049214640021036770</guid><description>&lt;p&gt;This lecture explores how real and artificial brains learn using spikes. We begin by reviewing the structure and behavior of spiking neurons, focusing on the Leaky Integrate-and-Fire (LIF) model and the efficiency of sparse, event-driven temporal coding.
We then introduce Spike-Timing-Dependent Plasticity (STDP), a biologically inspired learning rule that adjusts synaptic strength based on the relative timing of spikes. From there, we survey major neuromorphic hardware platforms—SpiNNaker, TrueNorth, and Loihi—highlighting their architectural differences and support for learning. We then examine memristor-based crossbar arrays as an analog substrate for STDP, including a case study from Boyn et al. (2017). Finally, we return to Hebbian learning as a conceptual foundation ("fire together, wire together") and explore how simple local decentralized unsupervised Hebbian-like learning rules for conventional ANNs can also produce meaningful clustering behavior. We close with a discussion of future directions, including neuromodulation, synaptic adaptability, and recent research on using sleep-inspired replay to prevent catastrophic forgetting in spiking neural networks.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/8mqjreoitin3qadzk9ofm/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-Notes.pdf?rlkey=l83a286aig0fpibafuvofr0hc&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/8mqjreoitin3qadzk9ofm/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-Notes.pdf?rlkey=l83a286aig0fpibafuvofr0hc&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/BY1ck2QnnHM" width="320" youtube-src-id="BY1ck2QnnHM"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" 
url="https://dl.dropboxusercontent.com/scl/fi/bp2m3htcfpvq6gt779eu6/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-audio_only.mp3?rlkey=tz6h7yer9erb94bc6p0vq027m&amp;extension=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/BY1ck2QnnHM/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>This lecture explores how real and artificial brains learn using spikes. We begin by reviewing the structure and behavior of spiking neurons, focusing on the Leaky Integrate-and-Fire (LIF) model and the efficiency of sparse, event-driven temporal coding. We then introduce Spike-Timing-Dependent Plasticity (STDP), a biologically inspired learning rule that adjusts synaptic strength based on the relative timing of spikes. From there, we survey major neuromorphic hardware platforms—SpiNNaker, TrueNorth, and Loihi—highlighting their architectural differences and support for learning. We then examine memristor-based crossbar arrays as an analog substrate for STDP, including a case study from Boyn et al. (2017). Finally, we return to Hebbian learning as a conceptual foundation ("fire together, wire together") and explore how simple local decentralized unsupervised Hebbian-like learning rules for conventional ANNs can also produce meaningful clustering behavior. 
We close with a discussion of future directions, including neuromodulation, synaptic adaptability, and recent research on using sleep-inspired replay to prevent catastrophic forgetting in spiking neural networks. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/8mqjreoitin3qadzk9ofm/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-Notes.pdf?rlkey=l83a286aig0fpibafuvofr0hc&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>This lecture explores how real and artificial brains learn using spikes. We begin by reviewing the structure and behavior of spiking neurons, focusing on the Leaky Integrate-and-Fire (LIF) model and the efficiency of sparse, event-driven temporal coding. We then introduce Spike-Timing-Dependent Plasticity (STDP), a biologically inspired learning rule that adjusts synaptic strength based on the relative timing of spikes. From there, we survey major neuromorphic hardware platforms—SpiNNaker, TrueNorth, and Loihi—highlighting their architectural differences and support for learning. We then examine memristor-based crossbar arrays as an analog substrate for STDP, including a case study from Boyn et al. (2017). Finally, we return to Hebbian learning as a conceptual foundation ("fire together, wire together") and explore how simple local decentralized unsupervised Hebbian-like learning rules for conventional ANNs can also produce meaningful clustering behavior. We close with a discussion of future directions, including neuromodulation, synaptic adaptability, and recent research on using sleep-inspired replay to prevent catastrophic forgetting in spiking neural networks. 
Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/8mqjreoitin3qadzk9ofm/IEE598-Lecture7F-2025-04-24-Spiking_Neural_Networks_and_Neuromorphic_Computation-Notes.pdf?rlkey=l83a286aig0fpibafuvofr0hc&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item><item><title>Lecture 7E (2025-04-22): Learning without a Teacher – Unsupervised and Self-Supervised Learning</title><link>https://asu-iee598-bioinspired.blogspot.com/2025/04/lecture-7e-2025-04-22-learning-without.html</link><category>podcast</category><pubDate>Tue, 22 Apr 2025 14:36:00 -0700</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-4814069442588278591.post-4620650381750112782</guid><description>&lt;p&gt;This lecture covers unsupervised and self-supervised learning, focusing on how both brains and machines discover structure without external labels or rewards (akin to non-associative learning). It begins with examples of unsupervised learning, including clustering, principal component analysis, and autoencoders, and then explores how biological systems like the olfactory pathway in insects organize complex sensory input into compressed, low-dimensional codes. We take a detailed look at the structure of the honeybee brain, examining how floral odors are transformed through the antennal lobe’s glomerular code into organized neural representations. We then transition into self-supervised learning (akin to latent learning) by introducing predictive coding and sensorimotor prediction, highlighting how brains use internal models to anticipate and correct sensory input. 
Finally, we close by discussing how modern AI systems like GPT (and BERT) leverage self-supervised objectives to build rich internal representations from raw data.&lt;/p&gt;&lt;p&gt;Whiteboard notes for this lecture can be found at:&lt;br /&gt;&lt;a href="https://www.dropbox.com/scl/fi/qwezfleqplmxtiobfpoew/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-Notes.pdf?rlkey=4k5o8j8no3s9x7xc5di676qz3&amp;amp;dl=0"&gt;https://www.dropbox.com/scl/fi/qwezfleqplmxtiobfpoew/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-Notes.pdf?rlkey=4k5o8j8no3s9x7xc5di676qz3&amp;amp;dl=0&lt;/a&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/3MG_JDlP6Xw" width="320" youtube-src-id="3MG_JDlP6Xw"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;br /&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</description><enclosure length="0" type="audio/mpeg" url="https://dl.dropboxusercontent.com/scl/fi/3yqv4akgxj9igoptxzlv2/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-audio_only.mp3?rlkey=y2nfex5vucm35ybp4iq257f3h&amp;extension=.mp3"/><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/3MG_JDlP6Xw/default.jpg" width="72"/><thr:total xmlns:thr="http://purl.org/syndication/thread/1.0">0</thr:total><georss:featurename xmlns:georss="http://www.georss.org/georss">Tempe, AZ, USA</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">33.4255104 -111.9400054</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">5.1152765638211548 -147.09625540000002 61.735744236178846 -76.7837554</georss:box><author>ted@tedpavlic.com (Theodore P. 
Pavlic)</author><itunes:explicit>no</itunes:explicit><itunes:subtitle>This lecture covers unsupervised and self-supervised learning, focusing on how both brains and machines discover structure without external labels or rewards (akin to non-associative learning). It begins with examples of unsupervised learning, including clustering, principal component analysis, and autoencoders, and then explores how biological systems like the olfactory pathway in insects organize complex sensory input into compressed, low-dimensional codes. We take a detailed look at the structure of the honeybee brain, examining how floral odors are transformed through the antennal lobe’s glomerular code into organized neural representations. We then transition into self-supervised learning (akin to latent learning) by introducing predictive coding and sensorimotor prediction, highlighting how brains use internal models to anticipate and correct sensory input. Finally, we close by discussing how modern AI systems like GPT (and BERT) leverage self-supervised objectives to build rich internal representations from raw data. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/qwezfleqplmxtiobfpoew/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-Notes.pdf?rlkey=4k5o8j8no3s9x7xc5di676qz3&amp;amp;dl=0</itunes:subtitle><itunes:author>Theodore P. Pavlic</itunes:author><itunes:summary>This lecture covers unsupervised and self-supervised learning, focusing on how both brains and machines discover structure without external labels or rewards (akin to non-associative learning). It begins with examples of unsupervised learning, including clustering, principal component analysis, and autoencoders, and then explores how biological systems like the olfactory pathway in insects organize complex sensory input into compressed, low-dimensional codes. 
We take a detailed look at the structure of the honeybee brain, examining how floral odors are transformed through the antennal lobe’s glomerular code into organized neural representations. We then transition into self-supervised learning (akin to latent learning) by introducing predictive coding and sensorimotor prediction, highlighting how brains use internal models to anticipate and correct sensory input. Finally, we close by discussing how modern AI systems like GPT (and BERT) leverage self-supervised objectives to build rich internal representations from raw data. Whiteboard notes for this lecture can be found at: https://www.dropbox.com/scl/fi/qwezfleqplmxtiobfpoew/IEE598-Lecture7E-2025-04-22-Learning_without_a_Teacher-Unsupervised_and_Self-Supervised_Learning-Notes.pdf?rlkey=4k5o8j8no3s9x7xc5di676qz3&amp;amp;dl=0</itunes:summary><itunes:keywords>optimization,metaheuristics,AI,optimization,nature,inspired,bio,inspired,genetic,algorithms,evolutionary,algorithms,ant,colony,optimization</itunes:keywords></item></channel></rss>