<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Tautvidas Sipavičius</title>
    <description>Tautvidas Sipavičius on programming, building software systems and experimenting with cool tech</description>
    <link>https://www.tautvidas.com/blog/</link>
    <atom:link href="https://www.tautvidas.com/blog/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sat, 26 Dec 2020 21:16:20 +0200</pubDate>
    <lastBuildDate>Sat, 26 Dec 2020 21:16:20 +0200</lastBuildDate>
    <generator>Jekyll v4.2.0</generator>
    
      <item>
        <title>My engineering and management reading list of 2020</title>
<description>&lt;p&gt;We all have stacks of books we bought and never read - information overload pressures us to carefully choose what we devote our scarce attention to.
With 2020 coming to a close I thought I’d share several books I’ve enjoyed or found valuable this year - perhaps you will find a few useful too.&lt;/p&gt;

&lt;h3 id=&quot;1-high-output-management-by-andrew-grove&quot;&gt;1. &lt;a href=&quot;https://www.penguinrandomhouse.com/books/72467/high-output-management-by-andrew-s-grove-former-chairman-and-ceo-of-intel/&quot;&gt;High Output Management&lt;/a&gt; by Andrew Grove&lt;/h3&gt;

&lt;p&gt;Andrew Grove (former Intel CEO and Chairman) provides guidance on how to be effective in a management role by focusing on 4 key areas: business operations, the manager’s role, scaling beyond a single team, and team members as individuals.&lt;/p&gt;

&lt;p&gt;The book gave me food for thought on how I fit into my current organization, as well as ideas I had not considered before, such as the “information gathering and sharing, nudging, decision making” activities managers perform, and thinking about processes and bottlenecks as a manufacturer would.
If, like myself, you’ve unintentionally stumbled into a leadership or managerial role, “High Output Management” will provide you with a framework for success and deepen your perspective. I would strongly recommend this book.&lt;/p&gt;

&lt;h3 id=&quot;2-debugging-teams-better-productivity-through-collaboration-by-brian-fitzpatrick-and-ben-collins-sussman&quot;&gt;2. &lt;a href=&quot;https://www.oreilly.com/library/view/debugging-teams/9781491932049/&quot;&gt;Debugging Teams: Better Productivity through Collaboration&lt;/a&gt; by Brian Fitzpatrick and Ben Collins-Sussman&lt;/h3&gt;

&lt;p&gt;While Robert Martin’s &lt;a href=&quot;https://www.amazon.com/Clean-Coder-Conduct-Professional-Programmers/dp/0137081073/&quot;&gt;The Clean Coder&lt;/a&gt; focuses more on how to “play nice with others” from the individual engineer’s perspective, Fitzpatrick and Sussman extend the same idea to teams (engineering and non-engineering too) and users.&lt;/p&gt;

&lt;p&gt;The book builds teamwork on top of 3 pillars: Humility, Respect and Trust, and suggests that most social conflicts arise from a lack of one of these components - I’m sure you’ve heard of or (sadly) even experienced horror stories caused by poor company culture or problematic managers and team members.
While it’s not a silver bullet, the book explains how and why to build team culture, offers guidance for leaders (losing the ego, providing a safe place for failure), shares several tips for dealing with poisonous people, and covers fitting within a larger company (e.g. seeking friends, navigating politics).&lt;/p&gt;

&lt;h3 id=&quot;3-outliers-the-story-of-success-by-malcolm-gladwell&quot;&gt;3. &lt;a href=&quot;https://www.penguinrandomhouse.com/books/577716/outliers-fuera-de-serieoutliers-the-story-of-success-by-malcolm-gladwell/&quot;&gt;Outliers: The Story of Success&lt;/a&gt; by Malcolm Gladwell&lt;/h3&gt;

&lt;p&gt;The author explores why some people achieve more than others. One of the key points popularized by the book is that it takes about 10 000 hours of practice to master a skill.
Of course, success is more than that (e.g. the book notes opportunity is also a significant contributor to being successful), but as the saying goes, “hard work does not guarantee success, but makes it more likely”.&lt;/p&gt;

&lt;p&gt;The book altered my mindset for learning new skills: I now look for more (enjoyable) ways to put in the necessary hours of deliberate practice. If we can find enjoyment in practice, we’re more likely to stick with it - and as an added benefit, if others view similar practice as work, in the long term we’ll have a competitive edge.
E.g. I’m currently going through the &lt;a href=&quot;https://www.amazon.com/Deep-Learning-Coders-fastai-PyTorch/dp/1492045527&quot;&gt;FastAI Deep Learning book&lt;/a&gt;, and its authors too recommend practicing in domains you’re already an expert in. I find it difficult to internalize some of the standard datasets (e.g. Kaggle’s Credit Card Fraud), which impedes my progress; I find it more enjoyable to practice deep learning by building game bots.&lt;/p&gt;

&lt;p&gt;On a similar note, the chess streamer &lt;a href=&quot;https://www.twitch.tv/gothamchess&quot;&gt;GothamChess&lt;/a&gt; (IM Levy Rozman) recently talked about how in tournament play it’s common to prepare for specific opponents by reviewing their openings and targeting the opponent’s less-explored lines in an attempt to gain an edge.
Being well-rounded is valuable in the real world too, so deliberate practice needs to evolve as well.&lt;/p&gt;

&lt;p&gt;I find similar ideas resonating in the &lt;a href=&quot;https://www.hachette.co.uk/titles/david-kushner/masters-of-doom/9780749924898/&quot;&gt;Masters of Doom&lt;/a&gt; book, which described how John Carmack and John Romero built Id Software and its legendary games (such as Doom, Quake, Commander Keen).
Both Johns had created tens or hundreds of games and attained mastery even before their paths crossed, and both found enough satisfaction in their work to keep pushing their skillsets and the field’s boundaries on arduous schedules for years.&lt;/p&gt;

&lt;h3 id=&quot;4-the-phoenix-project-a-novel-about-it-devops-and-helping-your-business-win-by-gene-kim-kevin-behr-george-spafford&quot;&gt;4. &lt;a href=&quot;https://itrevolution.com/the-phoenix-project/&quot;&gt;The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win&lt;/a&gt; by Gene Kim, Kevin Behr, George Spafford&lt;/h3&gt;

&lt;p&gt;The book introduces us to the concepts of Lean Manufacturing and the “Three ways” (fast flow of work, enabling fast and constant feedback, continuous learning and experimentation) in the form of a novel, where the story follows Bill - an IT manager tasked with sorting out the mess in the IT department of Parts Unlimited.&lt;/p&gt;

&lt;p&gt;I enjoyed this storytelling approach - you get to understand and cringe at the problems (and think “hey, are the authors writing about our organization?”), and then the mentor in the book gradually guides the derivation and understanding of solutions, while the main hero stumbles through mistakes and realizations.
It provides better context for why the DevOps movement does what it does within a business, and shows the problems it solves.&lt;/p&gt;

&lt;p&gt;There is a second book - &lt;a href=&quot;https://itrevolution.com/the-unicorn-project/&quot;&gt;The Unicorn Project&lt;/a&gt; - however I found it more difficult to relate to, to the point of being distracting: from what I gathered, the characters are portrayed as all-knowing, flawless folks at the tippy-top of the field, effortlessly accomplishing all of their tasks, who got into the mess only because of the incompetence of, or being undermined by, the company, their managers or other teams. Even the mentor from “The Phoenix Project” ended up being a “yes-man”, only praising the characters’ ideas rather than actually mentoring.&lt;/p&gt;

&lt;h3 id=&quot;5-the-design-of-everyday-things-by-don-norman&quot;&gt;5. &lt;a href=&quot;https://mitpress.mit.edu/books/design-everyday-things&quot;&gt;The Design of Everyday Things&lt;/a&gt; by Don Norman&lt;/h3&gt;

&lt;p&gt;The book explores usability design in the real world. While the examples are mostly physical (e.g. how do (or don’t) we know how to open a door?), the underlying principles (discoverability, feedback, affordances) are apt in the software world as well.
If you build something for other people to use - this is one of those must-read books.&lt;/p&gt;

&lt;h3 id=&quot;6-clean-architecture-a-craftsmans-guide-to-software-structure-and-design-by-robert-martin&quot;&gt;6. &lt;a href=&quot;https://www.pearson.ch/Informatik/PrenticeHall/EAN/9780134494166/Clean-Architecture&quot;&gt;Clean Architecture: A Craftsman’s Guide to Software Structure and Design&lt;/a&gt; by Robert Martin&lt;/h3&gt;

&lt;p&gt;Robert Martin’s other “Clean” books (Clean Code, The Clean Coder) are great, and “Clean Architecture” is no different. The book does a deep dive into software architecture, covering basic software design principles like SRP and LSP, component design guidelines (e.g. cohesion, coupling) and more architectural topics (layers and boundaries, business rule handling, details).&lt;/p&gt;

&lt;h3 id=&quot;7-designing-data-intensive-applications-the-big-ideas-behind-reliable-scalable-and-maintainable-systems-by-martin-klepmann&quot;&gt;7. &lt;a href=&quot;https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/&quot;&gt;Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems&lt;/a&gt; by Martin Kleppmann&lt;/h3&gt;

&lt;p&gt;The scale of data modern systems utilize continues to grow, requiring an understanding of how to build scalable and maintainable systems.
The book covers 3 major parts: data system fundamentals (reliability, scalability, maintainability, different data storage engines and data evolution over time), distributing data over multiple machines and the challenges involved, and building systems for deriving data out of existing datasets.&lt;/p&gt;

&lt;h3 id=&quot;8-hackers--painters-big-ideas-from-the-computer-age-by-paul-graham&quot;&gt;8. &lt;a href=&quot;http://paulgraham.com/hackpaint.html&quot;&gt;Hackers &amp;amp; Painters: Big Ideas from the Computer Age&lt;/a&gt; by Paul Graham&lt;/h3&gt;

&lt;p&gt;You likely already know Paul Graham from Y Combinator (the startup accelerator) or &lt;a href=&quot;https://news.ycombinator.com/&quot;&gt;Hacker News&lt;/a&gt; (the news community for hackers it operates). PG is also an avid writer; you can find his essays &lt;a href=&quot;http://paulgraham.com/articles.html&quot;&gt;here&lt;/a&gt;.
The book is mostly a compilation of his essays and covers a variety of topics, including fighting spam, how to think about and create wealth, and how to think about programming languages and how they shape our work.&lt;/p&gt;

&lt;h3 id=&quot;9-the-healthy-programmer-by-joe-kutner&quot;&gt;9. &lt;a href=&quot;http://healthyprog.com/&quot;&gt;The Healthy Programmer&lt;/a&gt; by Joe Kutner&lt;/h3&gt;

&lt;p&gt;Software engineers tend to be stereotyped as mostly sedentary, pizza-and-coke-fueled coding machines frequently working late into the night - not the healthiest. The book is especially relevant in today’s COVID world, when most of us work from home and barely leave the house, as it provides recommendations on how to set oneself up for a lifelong marathon of staying fit, sharp and healthy, covering topics like habit building, injury prevention and pain management, nutrition and exercise.&lt;/p&gt;

&lt;h3 id=&quot;other-interesting-mentions&quot;&gt;Other interesting mentions&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://markmanson.net/books/subtle-art&quot;&gt;The Subtle Art of Not Giving a Fuck&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.simonandschuster.com/books/Why-We-Sleep/Matthew-Walker/9781501144325&quot;&gt;Why We Sleep: Unlocking the Power of Sleep and Dreams&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://wwnorton.com/books/9781324002185&quot;&gt;A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.penguin.co.uk/authors/1008430/richard-p-feynman.html&quot;&gt;The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://sre.google/books/building-secure-reliable-systems/&quot;&gt;Building Secure and Reliable Systems: Best Practices for Designing, Implementing and Maintaining Systems&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://itrevolution.com/book/making-work-visible/&quot;&gt;Making Work Visible: Exposing Time Theft to Optimize Work &amp;amp; Flow&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Sat, 26 Dec 2020 00:00:00 +0200</pubDate>
        <link>https://www.tautvidas.com/blog/2020/12/my-engineering-and-management-reading-list-of-2020/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2020/12/my-engineering-and-management-reading-list-of-2020/</guid>
        
        <category>reading</category>
        
        <category>books</category>
        
        
        <category>Books</category>
        
      </item>
    
      <item>
        <title>Sending Arduino sensor data via Bluetooth to a Python app</title>
        <description>&lt;p&gt;I wanted to build a solution to allow dropping off multiple battery-powered sensors (e.g. temperature, humidity, accelerometer) spread out over an area, collecting and processing their data in near real-time.
One approach would be to use networking over WiFi, but it is not always available. A while back I wrote about using &lt;a href=&quot;#TODO: Add a post link&quot;&gt;JY-MCU Bluetooth module&lt;/a&gt; to communicate with purpose-built Bluetooth utilities. Now I want to take a step further and set up Bluetooth communication with my own Python app.&lt;/p&gt;

&lt;p&gt;The project will look as follows:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Arduino will collect sensor data, in this case - temperature and humidity data, but it could easily be data from other sensors;&lt;/li&gt;
  &lt;li&gt;Arduino will send the sensor data over the JY-MCU Bluetooth module to a Python application we’ll build;&lt;/li&gt;
  &lt;li&gt;The Python listener will listen for sensor data and write it to the InfluxDB time-series database;&lt;/li&gt;
  &lt;li&gt;Grafana will be used to graph the sensor data from InfluxDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;1-arduino-setup&quot;&gt;1. Arduino setup&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bluetooth-python-2020/arduino-connections.jpg&quot; alt=&quot;Arduino Hardware configuration&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For this proof-of-concept I’ll use the &lt;a href=&quot;https://wiki.seeedstudio.com/Grove-Beginner-Kit-For-Arduino/&quot;&gt;Grove Beginner Kit from Seeed&lt;/a&gt; that they graciously sent over.
While the kit is intended for beginners and education, it’s also convenient for small scale prototyping, as the kit comes with multiple pre-wired sensors (including the temperature and humidity sensor used here) and uses the &lt;a href=&quot;https://wiki.seeedstudio.com/Grove_System/&quot;&gt;modular Grove connector system&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the Bluetooth module I’ll use the JY-MCU Bluetooth module from DealExtreme; however, DX no longer sells it.
I have found a nearly identical version of it sold as the &lt;a href=&quot;https://www.taydaelectronics.com/breakout-boards/hc-06-bluetooth-transceiver-module.html&quot;&gt;HC-06&lt;/a&gt; on TaydaElectronics: the chips and the model look the same, just the screenprinted markings differ slightly.&lt;/p&gt;

&lt;p&gt;On my kit the temperature/humidity sensor is already connected to the Arduino, unless you break the sensors out of the PCB.
Unfortunately my Bluetooth module is not Grove-compatible (although with a soldering iron it would be fairly easy to make your own Grove-compatible connector), so I’ll make the connections with 4 jumper wires on a breadboard as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;JY-MCU RXD &amp;lt;-&amp;gt; Arduino Pin 9&lt;/li&gt;
  &lt;li&gt;JY-MCU TXD &amp;lt;-&amp;gt; Arduino Pin 10 (&lt;strong&gt;the Arduino-side RX pin must support pin change interrupts for SoftwareSerial&lt;/strong&gt;)&lt;/li&gt;
  &lt;li&gt;JY-MCU GND &amp;lt;-&amp;gt; Arduino GND&lt;/li&gt;
  &lt;li&gt;JY-MCU VCC &amp;lt;-&amp;gt; Arduino 5V&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the picture above the board is powered by a USB powerbank over a Micro USB connection.&lt;/p&gt;

&lt;p&gt;Seeedstudio provide setup instructions and the sample code on how to use the sensor: &lt;a href=&quot;https://wiki.seeedstudio.com/Grove-Beginner-Kit-For-Arduino/#lesson-8-detecting-surrounding-temperature-humidity&quot;&gt;wiki&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re curious, the library code can be found here: &lt;a href=&quot;https://github.com/Seeed-Studio/Grove_Temperature_And_Humidity_Sensor&quot;&gt;Github&lt;/a&gt;.
Sensor reads are fairly slow, taking 250ms or more each, but for this use case it’s good enough.&lt;/p&gt;

&lt;p&gt;For the Bluetooth part I’ve used the Arduino IDE built-in &lt;a href=&quot;https://www.arduino.cc/en/Reference/SoftwareSerial&quot;&gt;SoftwareSerial library&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, for convenience’s sake, I used the &lt;a href=&quot;https://arduinojson.org/&quot;&gt;ArduinoJson&lt;/a&gt; library to send measurements as JSON over Bluetooth.
While JSON wastes some bandwidth on verbose data structures, it helps keep the data processing straightforward, as we’ll need to deal with partial data chunks later on, which look as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Received chunk b'{'
Received chunk b'&quot;temperat'
Received chunk b'ure&quot;:22,&quot;'
Received chunk b'humidity&quot;:'
Received chunk b'34}'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Even when trying to read 1kB at a time we get the data in chunks - in this case it took five reads to receive the full record.&lt;/p&gt;
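
&lt;p&gt;The reassembly the Python listener will need can be sketched as a standalone simulation (a throwaway snippet, not part of the project code), feeding in the chunks shown above:&lt;/p&gt;

```python
import json

# Simulated Bluetooth reads: one JSON record split across five chunks,
# mirroring the sample output above.
chunks = [b'{', b'"temperat', b'ure":22,"', b'humidity":', b'34}']

buffer = ""
records = []
for chunk in chunks:
    buffer += chunk.decode("ascii")
    try:
        # Parsing succeeds only once a complete payload has accumulated
        records.append(json.loads(buffer))
        buffer = ""
    except json.JSONDecodeError:
        continue

print(records)  # [{'temperature': 22, 'humidity': 34}]
```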

&lt;p&gt;The final code looks as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;#include &amp;lt;ArduinoJson.h&amp;gt;
#include &amp;lt;SoftwareSerial.h&amp;gt;
#include &quot;DHT.h&quot;

#define DHTPIN 3 // Temperature/Humidity sensor pin
#define DHTTYPE DHT11 // DHT 11
DHT dht(DHTPIN, DHTTYPE);

const int RX_PIN = 10; // Bluetooth RX digital pin (must support pin change interrupts)
const int TX_PIN = 9; // Bluetooth TX digital pin
const int BLUETOOTH_BAUD_RATE = 9600;

SoftwareSerial bluetooth(RX_PIN, TX_PIN);

void setup() {
   dht.begin();
   Serial.begin(9600);
   bluetooth.begin(BLUETOOTH_BAUD_RATE);
}

void loop() {
  StaticJsonDocument&amp;lt;100&amp;gt; doc;

  float measurements[2];
  dht.readTempAndHumidity(measurements);

  doc[&quot;humidity&quot;] = measurements[0];
  doc[&quot;temperature&quot;] = measurements[1];

  serializeJson(doc, bluetooth);
  serializeJson(doc, Serial);
  Serial.println();
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;No surprises. DHT, SoftwareSerial and ArduinoJson all provide straightforward interfaces to use.
This code will read the temperature and humidity sensor data and will write the resulting JSON to Bluetooth and Serial connections.&lt;/p&gt;

&lt;p&gt;Serial output looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bluetooth-python-2020/arduino-serial-output.jpg&quot; alt=&quot;Arduino serial output&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;2-python-setup&quot;&gt;2. Python setup&lt;/h2&gt;

&lt;p&gt;For Bluetooth communication I used &lt;a href=&quot;https://github.com/pybluez/pybluez/&quot;&gt;Pybluez&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pybluez can scan for available Bluetooth clients, connect to them and communicate with them.
However, Pybluez does not provide any utilities for pairing with a client, which &lt;a href=&quot;https://stackoverflow.com/a/39666806&quot;&gt;a StackOverflow answer&lt;/a&gt; explains:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;PyBluez does not support cross-platform pairing management. This is because some operating systems, like Windows, require super-user level permissions to programatically pair devices.
On most operating systems, when PyBluez attempts to connect to the socket using the connect method, the operating system will try to pair (often asking the user for permission).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Inconvenient - so in the end I hardcoded the target address and paired the PC with the sensor manually, using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bluetoothctl&lt;/code&gt; command from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bluez&lt;/code&gt; package. Not usable for production, but good enough for a prototype.&lt;/p&gt;

&lt;p&gt;Secondly, after receiving each chunk of data we try to parse everything received so far as JSON. This is because data arrives in chunks, and it may take several reads to receive a valid JSON payload:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Received chunk b'{'
Received chunk b'&quot;temperat'
Received chunk b'ure&quot;:22,&quot;'
Received chunk b'humidity&quot;:'
Received chunk b'34}'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For InfluxDB I used the &lt;a href=&quot;https://github.com/influxdata/influxdb-python&quot;&gt;influxdb-python&lt;/a&gt; library, basing the data logging function on their &lt;a href=&quot;https://github.com/influxdata/influxdb-python#examples&quot;&gt;example&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The final code looks like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import bluetooth
import json
from influxdb import InfluxDBClient
import datetime

sensor_address = '07:12:05:15:60:07'
socket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
socket.connect((sensor_address, 1))

influx = InfluxDBClient('localhost', 8086, 'root', 'root', 'example')
influx.create_database('example')

def log_stats(client, sensor_address, stats):
    json_body = [
        {
            &quot;measurement&quot;: &quot;sensor_data&quot;,
            &quot;tags&quot;: {
                &quot;client&quot;: sensor_address,
            },
            &quot;time&quot;: datetime.datetime.now(datetime.timezone.utc).isoformat(),
            &quot;fields&quot;: {
                &quot;temperature&quot;: stats['temperature'],
                &quot;humidity&quot;: stats['humidity'],
            }
        }
    ]

    client.write_points(json_body)

buffer = &quot;&quot;

while True:
    data = socket.recv(1024)
    print(&quot;Received chunk&quot;, data)
    buffer += str(data, encoding='ascii')
    try:
        record = json.loads(buffer)
        log_stats(influx, sensor_address, record)
        buffer = &quot;&quot;
    except json.JSONDecodeError as e:
        continue

socket.close()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;3-influxdb-and-grafana&quot;&gt;3. InfluxDB and Grafana&lt;/h2&gt;

&lt;p&gt;While not necessary, it is convenient to visualize the sensor data, and here I’ve used &lt;a href=&quot;https://www.influxdata.com/&quot;&gt;InfluxDB&lt;/a&gt; (a time-series database) and &lt;a href=&quot;https://grafana.com/&quot;&gt;Grafana&lt;/a&gt; (a monitoring solution).
With one sensor it’s not terribly necessary, but if multiple sensor units were to be used in different locations - graphing the data all in one place makes it much easier to follow and interpret.&lt;/p&gt;

&lt;p&gt;I’ve used &lt;a href=&quot;https://docs.docker.com/compose/&quot;&gt;Docker Compose&lt;/a&gt; to run both Grafana and InfluxDB for prototyping purposes. Of course, in a production environment one would want to set up persistence, proper auth, etc.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;version: '2.0'
services:
  influx:
    image: influxdb
    ports:
    - &quot;8086:8086&quot;
    environment:
      INFLUXDB_ADMIN_USER: admin
      INFLUXDB_ADMIN_PASSWORD: admin
  grafana:
    image: grafana/grafana
    ports:
    - &quot;3000:3000&quot;
    links:
    - influx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-compose up&lt;/code&gt; will start InfluxDB on port &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;8086&lt;/code&gt;, and Grafana on port &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;3000&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ docker-compose ps
           Name                     Command           State           Ports         
------------------------------------------------------------------------------------
arduinobluetooth_grafana_1   /run.sh                  Up      0.0.0.0:3000-&amp;gt;3000/tcp
arduinobluetooth_influx_1    /entrypoint.sh influxd   Up      0.0.0.0:8086-&amp;gt;8086/tcp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I won’t go through all of the setup steps, but the necessary ones are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Create a new default DB with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl -G http://localhost:8086/query --data-urlencode &quot;q=CREATE DATABASE example&quot;&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Set up InfluxDB connection as a source for Grafana&lt;/li&gt;
  &lt;li&gt;Create Grafana dashboards based on sensor data:
    &lt;ul&gt;
      &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;SELECT max(&quot;temperature&quot;) FROM &quot;sensor_data&quot; WHERE $timeFilter GROUP BY time(10s) fill(null)&lt;/code&gt;&lt;/li&gt;
      &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;SELECT max(&quot;humidity&quot;) FROM &quot;sensor_data&quot; WHERE $timeFilter GROUP BY time(10s)&lt;/code&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;4-results&quot;&gt;4. Results&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bluetooth-python-2020/sensor-data-in-grafana.png&quot; alt=&quot;Grafana graphs showing logged data&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Putting it all together, here’s how the results look in Grafana. You’ll note that both temperature and humidity are rounded to the nearest integer - this appears to be the sensor’s behavior.
The three spikes in humidity are from me briefly putting my finger on top of the sensor.&lt;/p&gt;

&lt;p&gt;Overall it seems pretty easy to get one device to send data to another. However, collecting data from multiple Arduinos would be trickier, as it would require juggling multiple connections, and my JY-MCU Bluetooth module was unable to communicate with multiple Bluetooth clients.&lt;/p&gt;

&lt;p&gt;Still, if the Bluetooth module connections were tidied up, it would be fairly easy to break the Arduino and the sensor out of the PCB, add a USB powerbank, put it all into an enclosure and have an easily portable data collector.&lt;/p&gt;

&lt;p&gt;It was rather convenient to use the Grove Beginner Kit for small-scale prototyping like this, as it comes with all the sensors already connected on the same PCB, as well as a bunch of sample code. If you’d like to pick one up, you can find it &lt;a href=&quot;https://www.seeedstudio.com/Grove-Beginner-Kit-for-Arduino-p-4549.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Sat, 09 May 2020 00:00:00 +0300</pubDate>
        <link>https://www.tautvidas.com/blog/2020/05/arduino-bluetooth-communication-with-python-influxdb-and-grafana/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2020/05/arduino-bluetooth-communication-with-python-influxdb-and-grafana/</guid>
        
        <category>arduino</category>
        
        <category>bluetooth</category>
        
        <category>timeseries</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>Overcomplicating meal planning with the Z3 Constraint Solver</title>
<description>&lt;p&gt;When trying to get fit, proper nutrition is essential. Among other things, one needs to watch calorie intake based on &lt;a href=&quot;https://www.bodybuilding.com/fun/calculate-your-total-daily-energy-expenditure-tdee.html&quot;&gt;TDEE and activity&lt;/a&gt; and watch the macronutrients (&lt;a href=&quot;https://www.bodybuilding.com/fun/macronutrients_calculator.htm&quot;&gt;macro calculator&lt;/a&gt;) while ensuring you’re getting sufficient micronutrients.
In other words, a giant PITA to plan around.&lt;/p&gt;

&lt;p&gt;Commercial tools like &lt;a href=&quot;https://www.eatthismuch.com/&quot;&gt;EatThisMuch.com&lt;/a&gt; are incredibly helpful in meeting the requirements above while adapting to your tastes.
However, its pricing is not accurate for what I can get locally, and it does not have all of the recipes I like.
I’m sure I could add my recipes to the tool, but why use their complicated solution when I can reinvent my own overcomplicated solution?&lt;/p&gt;

&lt;h2 id=&quot;the-approach&quot;&gt;The approach&lt;/h2&gt;

&lt;p&gt;With nutrition you get a calorie budget. We’ll use 2000 kcal for our example.
A common macro split I’ve seen is 34% proteins (4 kcal per 1g), 33% carbohydrates (4 kcal per 1g), 33% fats (9 kcal per 1g), which for our example would result in 170g protein, 165g carbs, 73g fats targets.&lt;/p&gt;
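
&lt;p&gt;The arithmetic behind those gram targets is easy to sanity-check in a few lines of Python (a throwaway sketch; the function name is my own):&lt;/p&gt;

```python
# Converting a calorie budget and a macro percentage split into gram targets,
# using 4 kcal/g for protein and carbs and 9 kcal/g for fats.
KCAL_PER_GRAM = {"protein": 4, "carbs": 4, "fat": 9}

def macro_targets(calorie_budget, split):
    """Return rounded gram targets for each macro in the split."""
    return {
        macro: round(calorie_budget * share / KCAL_PER_GRAM[macro])
        for macro, share in split.items()
    }

print(macro_targets(2000, {"protein": 0.34, "carbs": 0.33, "fat": 0.33}))
# {'protein': 170, 'carbs': 165, 'fat': 73}
```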

&lt;p&gt;I want to minimize the price of the food as well, so our objective is to get as close as we can to the target calories and macros (e.g. within 5-10%) while minimizing the cost - a good objective for a constraint solver to take on.
The following implementation will skip micronutrient calculations, just because I don’t have them precalculated for my recipes, but it should be easy to extend the program to support them in the future.&lt;/p&gt;

&lt;p&gt;A while back I used Microsoft’s &lt;a href=&quot;https://github.com/Z3Prover/z3&quot;&gt;Z3 Theorem Prover’s&lt;/a&gt; constraint solving capabilities to &lt;a href=&quot;https://www.tautvidas.com/blog/2019/02/solving-a-puzzle-game-with-z3-theorem-prover/&quot;&gt;solve the PuzLogic puzzle game&lt;/a&gt;, and it handles this use case well too.&lt;/p&gt;

&lt;p&gt;The optimization task can be written down as a system of equations:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;total_price = meal[0].quantity * meal[0].price + meal[1].quantity * meal[1].price + ... + meal[n].quantity * meal[n].price
total_protein = meal[0].quantity * meal[0].protein + meal[1].quantity * meal[1].protein + ... + meal[n].quantity * meal[n].protein
total_carbs = meal[0].quantity * meal[0].carbs + meal[1].quantity * meal[1].carbs + ... + meal[n].quantity * meal[n].carbs
total_fats = meal[0].quantity * meal[0].fats + meal[1].quantity * meal[1].fats + ... + meal[n].quantity * meal[n].fats
lower_protein_limit &amp;lt;= total_protein &amp;lt;= upper_protein_limit
lower_carbs_limit &amp;lt;= total_carbs &amp;lt;= upper_carbs_limit
lower_fats_limit &amp;lt;= total_fats &amp;lt;= upper_fats_limit
minimize(total_price)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The equations are straightforward to translate for Z3.&lt;/p&gt;

&lt;h2 id=&quot;implementation&quot;&gt;Implementation&lt;/h2&gt;

&lt;p&gt;First we define the targets and some sample meals:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;targets = {
    'calories': { 'ideal': 1997, 'lower_limit': 0.8, 'upper_limit': 1.2 },
    'protein': { 'ideal': 170, 'lower_limit': 0.95, 'upper_limit': 1.2 },
    'carbs': { 'ideal': 165, 'lower_limit': 0.9, 'upper_limit': 1.2 },
    'fat': { 'ideal': 73, 'lower_limit': 0.9, 'upper_limit': 1.2 },
    'price': 'minimize',
}

meals = {
    'cottage_cheese': { 'price': 1.4, 'calories': 483.2, 'fat': 11.59, 'carbs': 40.276, 'protein': 55.24, },
    # ... more recipes
    'tortilla_pizza': { 'price': 0.99, 'calories': 510.25, 'fat': 32.04, 'carbs': 26.71, 'protein': 26.66, },
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Z3 provides convenient objects for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Real&lt;/code&gt; variables, as well as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Sum&lt;/code&gt; operations in its Python bindings. We’ll use the optimizer for this program.
In its most basic form, we can write the optimizing program as:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;from z3 import *

opt = Optimize()
total_price, total_calories, total_protein, total_carbs, total_fat = Reals('total_price total_calories total_protein total_carbs total_fat')
portions_cottage_cheese, portions_tortilla_pizza = Reals('portions_cottage_cheese portions_tortilla_pizza')

# Sum up price, protein, carb and fat totals between our meals
opt.add(total_price == Sum([portions_cottage_cheese*meals['cottage_cheese']['price'], portions_tortilla_pizza*meals['tortilla_pizza']['price']]))
opt.add(total_calories == Sum([portions_cottage_cheese*meals['cottage_cheese']['calories'], portions_tortilla_pizza*meals['tortilla_pizza']['calories']]))
opt.add(total_protein == Sum([portions_cottage_cheese*meals['cottage_cheese']['protein'], portions_tortilla_pizza*meals['tortilla_pizza']['protein']]))
opt.add(total_carbs == Sum([portions_cottage_cheese*meals['cottage_cheese']['carbs'], portions_tortilla_pizza*meals['tortilla_pizza']['carbs']]))
opt.add(total_fat == Sum([portions_cottage_cheese*meals['cottage_cheese']['fat'], portions_tortilla_pizza*meals['tortilla_pizza']['fat']]))

# Add limits that our meals have to sum up to
opt.add(And([
    total_calories &amp;gt;= targets['calories']['ideal'] * targets['calories']['lower_limit'],
    total_calories &amp;lt;= targets['calories']['ideal'] * targets['calories']['upper_limit']]))
opt.add(And([
    total_protein &amp;gt;= targets['protein']['ideal'] * targets['protein']['lower_limit'],
    total_protein &amp;lt;= targets['protein']['ideal'] * targets['protein']['upper_limit']]))
opt.add(And([
    total_carbs &amp;gt;= targets['carbs']['ideal'] * targets['carbs']['lower_limit'],
    total_carbs &amp;lt;= targets['carbs']['ideal'] * targets['carbs']['upper_limit']]))
opt.add(And([
    total_fat &amp;gt;= targets['fat']['ideal'] * targets['fat']['lower_limit'],
    total_fat &amp;lt;= targets['fat']['ideal'] * targets['fat']['upper_limit']]))

# Ensure that portions are non-negative
opt.add(And([portions_cottage_cheese &amp;gt;= 0, portions_tortilla_pizza &amp;gt;= 0]))

# Let Z3 pick the meal portions while minimizing the price
opt.minimize(total_price)

if opt.check() == sat:
    m = opt.model()
    print(m)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Which outputs:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[portions_cottage_cheese = 150154650/49043707,
 portions_tortilla_pizza = 46250910/49043707,
 total_fat = 657/10,
 total_carbs = 297/2,
 total_protein = 47637960633/245218535,
 total_calories = 192308507415/98087414,
 total_price = 2560049109/490437070]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Or in a more readable format:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[portions_cottage_cheese = 3.061649683210121,
 portions_tortilla_pizza = 0.9430549366914699,
 total_fat = 65.7,
 total_carbs = 148.5,
 total_protein = 194.26737311272169,
 total_calories = 1960.5829083739532,
 total_price = 5.219933943818725]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I had to broaden the upper acceptable range of all of the macros to 1.2x for Z3 to find a solution with so few possible recipes to choose from, but it did find a satisfactory one.
About 3 portions of cottage cheese and about one portion of tortilla pizza gets us fairly close to the target TDEE and macro requirements.&lt;/p&gt;

&lt;p&gt;However, if you run this program multiple times, the result will always be the same, because Z3 only looks for the first satisfactory solution.
If we want Z3 to look for more solutions, we have to invalidate the ones it has already found. That is fairly easy - we only need to change the model displaying part:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;variants_to_show = 3
for i in range(variants_to_show):
    if opt.check() != sat:
        break
    if i &amp;gt;= 1:
        print('=' * 80)

    m = opt.model()
    for d in m.decls():
        print(d.name(), m[d].as_decimal(3))

    opt.add(Or([
        portions_cottage_cheese != m[portions_cottage_cheese],
        portions_tortilla_pizza != m[portions_tortilla_pizza]]))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Resulting in:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;portions_cottage_cheese 3.061?
portions_tortilla_pizza 0.943?
total_fat 65.7
total_carbs 148.5
total_protein 194.267?
total_calories 1960.582?
total_price 5.219?
================================================================================
portions_cottage_cheese 2.561?
portions_tortilla_pizza 1.697?
total_fat 84.061?
total_carbs 148.5
total_protein 186.747?
total_calories 2103.685?
total_price 5.266?
================================================================================
portions_cottage_cheese 3.275?
portions_tortilla_pizza 0.865?
total_fat 65.7
total_carbs 155.034?
total_protein 204
total_calories 2024.325?
total_price 5.442?
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now we can get Z3 to output a bunch of meal plans for us to choose from.&lt;/p&gt;

&lt;p&gt;There are some issues with the code above: it does not scale well as more meals are added, the meal portions are calculated in fractions while we’d rather work with full portions, and the code could be cleaned up further. But the idea is there.
I ended up with this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;opt = Optimize()

expressions = dict((name, []) for name in targets.keys())
variables = {}

for meal_name, meal in meals.items():
    portions = variables[f'portions_{meal_name}'] = Real(f'portions_{meal_name}')
    opt.add(Or([portions == option for option in [0, 1, 2]])) # Limit to up to 2 full portions per meal

    for target_name in targets.keys():
        expressions[target_name].append(portions * meal[target_name])

expr_to_minimize = None
for name, value in targets.items():
    total = variables[f'total_{name}'] = Real(f'total_{name}')

    if value == 'minimize':
        opt.add(total == Sum(expressions[name]))
        expr_to_minimize = total
    else:
        opt.add(total == Sum(expressions[name]))
        opt.add(total &amp;gt;= value['ideal'] * value['lower_limit'])
        opt.add(total &amp;lt;= value['ideal'] * value['upper_limit'])

optimized = opt.minimize(expr_to_minimize)
variants_to_show = 5
for i in range(variants_to_show):
    if opt.check() != sat:
        break
    if i &amp;gt;= 1:
        print('=' * 80)

    m = opt.model()
    for name, variable in variables.items():
        if name.startswith('portions_') and m[variable] == 0:
            continue

        if m[variable].is_real():
            print(name, m[variable].as_decimal(3))
        elif m[variable].is_int():
            print(name, m[variable].as_long())
        else:
            print(name, m[variable])

    # Ignore already found variants
    opt.add(Or([variable != m[variable] for name, variable in variables.items() if name.startswith('portions_')]))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;results&quot;&gt;Results&lt;/h2&gt;

&lt;p&gt;With a bigger set of recipes to choose from, Z3 came up with the following meal plans:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;portions_peanuts 2
portions_protein_pancakes 2
portions_chicken_quesadilla_w_cheese 1
total_calories 1994.24
total_protein 169.76
total_carbs 164.76
total_fat 77
total_price 3
================================================================================
portions_peanuts 1
portions_protein_pancakes 2
portions_pulled_pork_quesadilla 2
total_calories 2088.18
total_protein 168.39
total_carbs 180.74
total_fat 79.38
total_price 3.11
================================================================================
portions_chicken_pilaf 1
portions_peanuts 2
portions_protein_pancakes 1
portions_chicken_quesadilla_w_cheese 1
total_calories 2051.22
total_protein 168.23
total_carbs 172.24
total_fat 78.6
total_price 3.46
================================================================================
portions_chicken_quesadilla 1
portions_protein_pancakes 2
portions_tortilla_pizza 1
total_calories 1917.45
total_protein 162.78
total_carbs 171.85
total_fat 66.5
total_price 3.61
================================================================================
portions_almonds 1
portions_peanuts 1
portions_protein_pancakes 2
portions_chicken_quesadilla_w_cheese 1
total_calories 1979.74
total_protein 166.91
total_carbs 162.91
total_fat 80
total_price 3.77
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And with a bigger bank of recipes, this beats me having to plan it all out by hand. Now I can let it generate hundreds of plans for me to choose from, and they’ll all be composed of recipes I like and based on prices at my grocery store.
Granted, some of the suggestions are not great - some are not doable for time reasons or personal taste, and the plans lack micronutrients (e.g. they should definitely include more veggies) - but it’s a decent starting point.
With a selection of better recipes the meal plans would be better too.&lt;/p&gt;

&lt;p&gt;You can play around with the full implementation on &lt;a href=&quot;https://colab.research.google.com/drive/1T5gU3EEx1oSJhzizYZKVzWLyp_FIhjBF#scrollTo=dUdDZ9u0xJru&quot;&gt;Google’s Colaboratory&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Sun, 12 Apr 2020 00:00:00 +0300</pubDate>
        <link>https://www.tautvidas.com/blog/2020/04/overcomplicating-meal-planning-with-z3-constraint-solver/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2020/04/overcomplicating-meal-planning-with-z3-constraint-solver/</guid>
        
        <category>fitness</category>
        
        <category>nutrition</category>
        
        <category>z3</category>
        
        <category>constraint solver</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>Creating the always rising Shepard tone with Sonic-Pi</title>
        <description>&lt;p&gt;I’ve recently been toying with &lt;a href=&quot;http://sonic-pi.net/&quot;&gt;Sonic-Pi&lt;/a&gt; - a code based music creation tool created by Sam Aaron.
It’s written in Ruby on top of SuperCollider and is designed to handle timing, threading and synchronization specifically for music creation.&lt;/p&gt;

&lt;p&gt;An older music creation library Overtone, authored by Sam Aaron as well, included the famous &lt;a href=&quot;https://www.youtube.com/watch?v=FWkJ86JqlPA&quot;&gt;THX Surround Test sound&lt;/a&gt; simulation example: &lt;a href=&quot;https://github.com/overtone/overtone/blob/f6d414f884f1b6d3166195b49276174efddf2cf2/src/overtone/examples/instruments/thx.clj&quot;&gt;Code&lt;/a&gt;.
This inspired me to try to replicate a different interesting sound - the Shepard tone - using Sonic-Pi.&lt;/p&gt;

&lt;h2 id=&quot;shepard-tone&quot;&gt;Shepard tone&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Shepard_tone&quot;&gt;Shepard tone&lt;/a&gt; has an interesting property, in that it feels like it is always increasing in pitch. The effect can be achieved by playing multiple tones with an increasing pitch at the same time, one octave (or a double in frequency) apart. Each tone starts at a low frequency and fades-in in volume, gradually increasing in pitch, and fades-out upon reaching a high pitch as another tone takes its place.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://upload.wikimedia.org/wikipedia/commons/4/48/Shepard_Tones_spectrum_linear_scale.png&quot; alt=&quot;Shepard tone spectrum&quot; /&gt;
The spectrum diagram by Grin (&lt;a href=&quot;https://creativecommons.org/licenses/by-sa/4.0&quot;&gt;CC BY-SA&lt;/a&gt;) illustrates the construction well. Note that the red lines are going up over time in pitch, the thickness of a line - volume - increases as it fades in at a low pitch, and decreases as it fades out at a high pitch.&lt;/p&gt;
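
&lt;p&gt;To make the “one octave apart” spacing concrete, here’s a small sketch (plain Python, purely an illustration) of the component frequencies, using an assumed 55 Hz base:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Each component tone is one octave - a doubling in frequency - above the previous one
def component_frequencies(base_hz, tones):
    return [base_hz * 2 ** i for i in range(tones)]

print(component_frequencies(55.0, 4))
# [55.0, 110.0, 220.0, 440.0]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;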

&lt;h2 id=&quot;constructing-the-tone&quot;&gt;Constructing the tone&lt;/h2&gt;

&lt;p&gt;Sonic-Pi has a neat sliding mechanism, allowing us to start from one pitch and slide it up to another pitch:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;synth = play note(:C3), note_slide: 1
control synth, note: note(:C4)
sleep 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here it starts by playing a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt; note using a sine wave synthesizer, and slides it up by one octave to a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C4&lt;/code&gt; note over one beat (or 1 second at a 60bpm default tempo):
&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/rising-pitch-example-spectrogram.png&quot; alt=&quot;Rising pitch example spectrogram&quot; /&gt;&lt;/p&gt;

&lt;audio controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/rising-pitch-example.ogg&quot; type=&quot;audio/ogg&quot; /&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/rising-pitch-example.mp3&quot; type=&quot;audio/mpeg&quot; /&gt;
  Your browser does not support the audio element.
&lt;/audio&gt;

&lt;p&gt;Extending this example would allow us to build a tone with an increasing pitch, but it plays always at max volume.&lt;/p&gt;

&lt;p&gt;The second building block is volume control. The tone needs to fade in at the beginning, reach a maximum volume, and fade out at some high pitch.
Sonic-Pi provides an ADSR envelope generator we can use for this purpose.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Envelope_(music)&quot;&gt;ADSR&lt;/a&gt; stands for Attack, Decay, Sustain, Release and describes how sound changes over time, or in this case - its volume.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://upload.wikimedia.org/wikipedia/commons/e/ea/ADSR_parameter.svg&quot; alt=&quot;ADSR Envelope diagram&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://commons.wikimedia.org/wiki/File:ADSR_parameter.svg&quot;&gt;Abdull’s diagram on Wikipedia&lt;/a&gt; illustrates the use well.
Imagine pressing a piano key. If you press it quickly and forcefully, it reaches the full volume quickly (the Attack stage), then decays to some sustainable volume (the Decay stage) that is sustained for a while (the Sustain stage), after you release the key - the sound quickly diminishes to zero.&lt;/p&gt;
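
&lt;p&gt;As a rough model (plain Python, not Sonic-Pi), the envelope can be described as a piecewise-linear amplitude function of time:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Piecewise-linear ADSR envelope: amplitude (0..1) at time t
def adsr(t, attack, decay, sustain, release, sustain_level=1.0):
    end = attack + decay + sustain + release
    if t &amp;lt;= 0 or t &amp;gt;= end:
        return 0.0
    if t &amp;lt; attack:                    # fade in
        return t / attack
    if t &amp;lt; attack + decay:            # settle down to the sustain level
        return 1.0 - (1.0 - sustain_level) * (t - attack) / decay
    if t &amp;lt; attack + decay + sustain:  # hold
        return sustain_level
    return sustain_level * (end - t) / release  # fade out

# attack: 1, decay: 0, sustain: 3, release: 1 - mid-attack, sustained, mid-release
print([adsr(t, 1, 0, 3, 1) for t in (0.5, 2.0, 4.5)])
# [0.5, 1.0, 0.5]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;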

&lt;p&gt;This mechanism can be conveniently used to build the fade-in and fade-out of our sine wave:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;synth = play note(:C3), note_slide: 5, attack: 1, decay: 0, sustain: 3, release: 1
control synth, note: note(:C4)
sleep 5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I’ve increased the duration of the wave to 5 beats to illustrate the example. But following the ADSR envelope diagram above, the sound will go from 0 to 100% volume in 1 beat.
Decay is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0&lt;/code&gt;, so we can ignore it. It will stay at maximum volume for 3 beats, and then go from 100% to 0% volume in 1 beat. All the while the pitch of the sound is increasing from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C4&lt;/code&gt;.
&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/fade-in-out-example-amplitude-over-time.png&quot; alt=&quot;Fade-in and out amplitude over time&quot; /&gt;&lt;/p&gt;

&lt;audio controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/fade-in-out-example.ogg&quot; type=&quot;audio/ogg&quot; /&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/fade-in-out-example.mp3&quot; type=&quot;audio/mpeg&quot; /&gt;
  Your browser does not support the audio element.
&lt;/audio&gt;

&lt;p&gt;The last building block is to play multiple tones at the same time.
Sonic-Pi handles threading for us very well, and commands are executed in parallel. E.g. if we try to play two tones at once, the code would look as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;play note(:C3)
play note(:C4)
sleep 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Both notes &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C4&lt;/code&gt; would be played at the exact same time for 1 second.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/multiple-simultaneous-notes-spectrogram.png&quot; alt=&quot;Multiple simultaneous notes - spectrogram&quot; /&gt;&lt;/p&gt;

&lt;audio controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/multiple-simultaneous-notes.ogg&quot; type=&quot;audio/ogg&quot; /&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/multiple-simultaneous-notes.mp3&quot; type=&quot;audio/mpeg&quot; /&gt;
  Your browser does not support the audio element.
&lt;/audio&gt;

&lt;p&gt;In a similar vein, the idea is to sequentially start multiple of the waves we created above in independent threads, sleeping in between so that a new wave starts just as the earlier ones reach the next octave.
This can be done roughly as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;total_waves = 10
total_waves.times do
  in_thread do
    synth = play note(:C3), note_slide: 10, attack: 1, decay: 0, sustain: 8, release: 1
    control synth, note: note(:C5)
    sleep 10
  end
  sleep 5
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I’ve further increased the duration of a single sine wave to 10 seconds, and the pitch now rises from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C5&lt;/code&gt;, or two octaves.
The code above plays 10 such sine waves, with the start of each separated by 5 beats. That way:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;At second 0 the first sine wave will start playing at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt;, fading-in in volume&lt;/li&gt;
  &lt;li&gt;At second 5, the first sine wave would have reached &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C4&lt;/code&gt; note at full volume, then a second wave would start playing a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt; note also fading-in in volume;&lt;/li&gt;
  &lt;li&gt;At second 10, the first wave would have reached &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C5&lt;/code&gt; and would’ve finished fading out. The second wave would have reached the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C4&lt;/code&gt; note, and a third wave would’ve started playing a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;C3&lt;/code&gt; note, and so on.&lt;/li&gt;
&lt;/ul&gt;
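
&lt;p&gt;The timing above can be sketched numerically (plain Python, assuming 2 octaves over a 10-beat wave and 5-beat spacing, as in the snippet):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# For wave i (started at 5*i beats, lasting 10 beats), how many octaves
# above C3 has it risen by beat t?
def wave_octave(i, t, spacing=5, duration=10, octaves=2):
    age = t - i * spacing
    if age &amp;lt; 0 or age &amp;gt; duration:
        return None  # not playing yet / already finished
    return octaves * age / duration

# At beat 10: the first wave finishes at C5, the second is at C4, a third starts at C3
print([wave_octave(i, 10) for i in (0, 1, 2)])
# [2.0, 1.0, 0.0]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;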

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/multiple-rising-waves-spectrogram.png&quot; alt=&quot;Multiple rising waves spectrogram&quot; /&gt;&lt;/p&gt;

&lt;audio controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/multiple-rising-waves.ogg&quot; type=&quot;audio/ogg&quot; /&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/multiple-rising-waves.mp3&quot; type=&quot;audio/mpeg&quot; /&gt;
  Your browser does not support the audio element.
&lt;/audio&gt;

&lt;p&gt;All 3 building blocks put together, cleaned up and synchronized using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cue&lt;/code&gt;/&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sync&lt;/code&gt; mechanism, resulted in the following final snippet:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;starting_note = note(:C2)
total_octaves = 4
octave_duration = 10
total_waves = 20
wave_duration = octave_duration * total_octaves
fade_in_out_duration = wave_duration * 0.2

target_notes = octs(starting_note, total_octaves + 1).drop(1)

in_thread do
  total_waves.times do
    sync :new_octave
    in_thread do
      with_synth :sine do
        instrument = play starting_note,
          note_slide: octave_duration,
          attack: fade_in_out_duration, release: fade_in_out_duration,
          decay: 0, sustain: (wave_duration - 2 * fade_in_out_duration)

        total_octaves.times { |octave|
          cue :new_octave
          control instrument, note: target_notes[octave]
          sleep octave_duration
        }
      end
    end
    sleep octave_duration
  end
end

cue :new_octave
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And the end result would sound like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/shepard-tone-spectrogram.png&quot; alt=&quot;Shepard tone spectrogram&quot; /&gt;&lt;/p&gt;

&lt;audio controls=&quot;&quot;&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/shepard-tone.ogg&quot; type=&quot;audio/ogg&quot; /&gt;
  &lt;source src=&quot;https://www.tautvidas.com/blog/images/posts/shepard-tone-2020/shepard-tone.mp3&quot; type=&quot;audio/mpeg&quot; /&gt;
  Your browser does not support the audio element.
&lt;/audio&gt;

&lt;p&gt;Overall I’m rather impressed by Sonic-Pi’s ability to handle threading, synchronization and sliding. It resulted in a fairly short piece of code, and could likely be made smaller still by replacing the octave-target loop with a single control call to linearly slide the starting note to the highest note the sine wave is supposed to reach.&lt;/p&gt;
</description>
        <pubDate>Mon, 20 Jan 2020 00:00:00 +0200</pubDate>
        <link>https://www.tautvidas.com/blog/2020/01/creating-the-shepard-tone-with-sonic-pi/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2020/01/creating-the-shepard-tone-with-sonic-pi/</guid>
        
        <category>sonic-pi</category>
        
        <category>ruby</category>
        
        <category>music</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>Building a Reinforcement Learning bot for Bubble Shooter with Tensorflow and OpenCV</title>
        <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Here is some resulting gameplay from this Dueling Double Deep Q-learning network based bot:&lt;/p&gt;

&lt;div style=&quot;position:relative;height:0;padding-bottom:56.25%&quot;&gt;&lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/9K343IWO2N4?ecver=2&quot; width=&quot;640&quot; height=&quot;360&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture&quot; style=&quot;position:absolute;width:100%;height:100%;left:0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;

&lt;p&gt;For a while I wanted to explore Reinforcement learning, and this time I’ve done so by building a Q-Learning based bot for the &lt;a href=&quot;https://www.kongregate.com/games/Paulussss/bubbles-shooter&quot;&gt;Bubble Shooter&lt;/a&gt; game - an Adobe Flash based game running in the browser.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bubble-shooter/gameplay.png&quot; alt=&quot;Bubble shooter&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The game works as follows: the game board contains bubbles of 6 different colors. Shooting a connected group of same colored bubbles with 3 or more bubbles in it destroys all of them. The goal of the game is to destroy all bubbles on the board.&lt;/p&gt;

&lt;p&gt;I chose the game for simplicity of its UIs and 1-click gameplay, which was reasonably straightforward to automate.&lt;/p&gt;

&lt;h3 id=&quot;what-is-q-learning&quot;&gt;What is Q-Learning?&lt;/h3&gt;

&lt;p&gt;You can learn about it in more detail by reading &lt;a href=&quot;https://en.wikipedia.org/wiki/Q-learning&quot;&gt;this Wikipedia article&lt;/a&gt; or watching &lt;a href=&quot;https://www.youtube.com/watch?v=aCEvtRtNO-M&quot;&gt;this Siraj Raval’s explanatory video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But in short: in Q-Learning, given a state, the agent tries to evaluate how good it is to take each possible action.
The “Q” (or “Quality”) value depends on the reward the agent expects to get by taking that action, as well as the rewards it expects in future states.&lt;/p&gt;

&lt;p&gt;E.g. maybe the agent shoots a bubble in one place and doesn’t destroy any bubbles - probably not a great move. Perhaps it shoots a bubble and manages to destroy 10 other bubbles - better! Or perhaps it does not destroy any bubbles on the first shot, but this enables the agent to destroy 20 bubbles with the second shot and win the game.&lt;/p&gt;

&lt;p&gt;The agent chooses its moves by calculating Q-values for every possible action, and choosing the one with the largest Q-value - the move it thinks will provide it with the most reward.&lt;/p&gt;

&lt;p&gt;For simple cases with few unique game states one can get away with a table of Q-values, but state of the art architectures switched to using Deep Neural Networks to learn and predict the Q-values.&lt;/p&gt;
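
&lt;p&gt;For intuition, the tabular variant boils down to a single update rule - a minimal sketch (the state and action names are made up for illustration):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# One tabular Q-learning update: move Q(s, a) toward the received reward
# plus the discounted best Q-value of the next state
def q_update(q, s, a, reward, next_s, actions, alpha=0.5, gamma=0.9):
    best_next = max(q.get((next_s, b), 0.0) for b in actions)
    target = reward + gamma * best_next
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))

q = {}
q_update(q, 'board_A', 'shot_12', 1.0, 'board_B', actions=['shot_12', 'shot_13'])
print(q[('board_A', 'shot_12')])
# 0.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;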

&lt;h3 id=&quot;dueling-double-q-network-architecture&quot;&gt;Dueling Double Q Network architecture&lt;/h3&gt;

&lt;p&gt;The simplest case is a Deep Q Network - a single deep neural net that attempts to predict the Q-value of every step. But apparently it is prone to overestimation, resulting in the network thinking the moves are better than they really are.&lt;/p&gt;

&lt;p&gt;An improvement of that was Double Q Network (&lt;a href=&quot;https://arxiv.org/pdf/1509.06461.pdf&quot;&gt;Arxiv paper: Deep Reinforcement Learning with Double Q-learning&lt;/a&gt;), which has two neural networks: an online one and a target one.
The online network gets actively trained, while the target one doesn’t, so the target network “lags behind” in its Q-value evaluations.
The key is that the online network gets to pick the action, while the target network performs the evaluation of the action.
The target network gets updated every so often by copying the Neural Network weights over from the online network.&lt;/p&gt;
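
&lt;p&gt;That division of labor can be sketched as follows (the Q-value lists stand in for the two networks’ outputs - an illustration, not the bot’s code):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Double DQN target: the online network picks the next action,
# the lagging target network evaluates it
def double_q_target(reward, done, online_next_q, target_next_q, gamma=0.99):
    if done:
        return reward  # terminal states carry no future reward
    best_action = online_next_q.index(max(online_next_q))
    return reward + gamma * target_next_q[best_action]

# The online net prefers action 1, which the target net values at 2.0:
print(double_q_target(1.0, False, [0.3, 0.9], [1.5, 2.0]))
# 2.98
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;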

&lt;p&gt;A further improvement still was a Dueling Q network (&lt;a href=&quot;https://arxiv.org/pdf/1511.06581.pdf&quot;&gt;Arxiv paper: Dueling Network Architectures for Deep Reinforcement Learning&lt;/a&gt;).
The idea here is that it splits out the network into 2 streams: advantage and value, results of which are later combined into Q-values. This allows the network to learn which states are valuable without having to learn the effect of every action in the state, allowing it to generalize better.&lt;/p&gt;
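
&lt;p&gt;The step combining the two streams back into Q-values can be sketched as (following the formula from the dueling paper, with made-up numbers):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Dueling combine: Q(s, a) = V(s) + (A(s, a) - mean of A over all actions)
def combine(value, advantages):
    mean_adv = sum(advantages) / len(advantages)
    return [value + adv - mean_adv for adv in advantages]

print(combine(2.0, [0.0, 1.0, -1.0]))
# [2.0, 3.0, 1.0]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;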

&lt;p&gt;In the end I’ve ended up combining all of the techniques above into the following network:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bubble-shooter/dddqn-architecture.png&quot; alt=&quot;Model architecture&quot; /&gt;&lt;/p&gt;

&lt;p&gt;It’s based on the architecture mentioned in one of the papers: several convolutional layers followed by several fully connected layers.&lt;/p&gt;

&lt;h3 id=&quot;game-states-and-rewards&quot;&gt;Game states and rewards&lt;/h3&gt;

&lt;p&gt;DeepMind were able to get their model successfully &lt;a href=&quot;https://arxiv.org/pdf/1312.5602v1.pdf&quot;&gt;playing Atari 2600 games&lt;/a&gt; just by taking game screenshots as an input to the neural net, but for me the network simply failed to learn anything. Well, aside from successfully ending the game in the fewest number of steps.&lt;/p&gt;

&lt;p&gt;So I ended up using Computer Vision to parse the game board into a 35x15 matrix.
35 ball halves fit horizontally, and 15 balls fit vertically into a game board, so every cell in a matrix represents a half-ball horizontally and a full-ball vertically.&lt;/p&gt;

&lt;p&gt;What the current agent actually sees though are 4 different snapshots of the game board:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Positions of half-balls that match the current ball’s color&lt;/li&gt;
  &lt;li&gt;Positions of half-balls that don’t match the current ball’s color&lt;/li&gt;
  &lt;li&gt;Positions of half-balls that match the next ball’s color&lt;/li&gt;
  &lt;li&gt;Positions of half-balls that don’t match the next ball’s color&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bubble-shooter/parsed-game-board.png&quot; alt=&quot;Parsed game board&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The idea was to start with at least learning the game for the current and the following ball colors to simplify the rules.
I tried having layers for balls of every color, but wasn’t able to make it work yet.&lt;/p&gt;
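
&lt;p&gt;Given a parsed color matrix, such match/no-match layers are just boolean masks - a simplified sketch of the idea (the actual parsing code isn’t shown here):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Build match / no-match masks for one color over the parsed board (0 = empty cell)
def color_masks(board, color):
    match = [[1 if cell == color else 0 for cell in row] for row in board]
    mismatch = [[1 if cell not in (0, color) else 0 for cell in row] for row in board]
    return match, mismatch

board = [[1, 2, 0],
         [2, 2, 1]]
match, mismatch = color_masks(board, 2)
print(match)     # [[0, 1, 0], [1, 1, 0]]
print(mismatch)  # [[1, 0, 0], [0, 0, 1]]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;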

&lt;p&gt;For rewards I’ve settled on two: destroying more than one ball gives a reward of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1&lt;/code&gt;, and every other action gives a reward of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0&lt;/code&gt;. With less restrictive reward schemes I found that Q-values gradually grow to infinity.&lt;/p&gt;
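
&lt;p&gt;This reward scheme boils down to a one-liner; a sketch (the function name is illustrative):&lt;/p&gt;

```python
# Binary reward: 1 when a shot destroys more than one ball, 0 otherwise.
# Clipping rewards this way helps keep Q-values bounded.
def reward(balls_destroyed):
    return 1 if balls_destroyed > 1 else 0
```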

&lt;p&gt;As for possible actions: the game board is 500x500 px, meaning there are 250000 possible actions for every state - far too many to compute.
To simplify, I’ve narrowed the action space down to 35 different actions, all placed on a single horizontal line ~100px from the bottom of the game board.&lt;/p&gt;

&lt;p&gt;This makes some shots impossible due to the reduced accuracy, but it seems good enough to start with.&lt;/p&gt;
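
&lt;p&gt;A sketch of what this discretization might look like - the exact offsets here are assumptions for illustration, not the bot’s actual values:&lt;/p&gt;

```python
# Map each of the 35 discrete actions to click coordinates on a single
# horizontal aiming line roughly 100 px above the bottom of the board.
BOARD_W, BOARD_H = 500, 500
NUM_ACTIONS = 35

def action_to_point(action):
    """Return (x, y) pixel coordinates for a discrete action index."""
    x = int((action + 0.5) * BOARD_W / NUM_ACTIONS)  # center of the slot
    y = BOARD_H - 100                                # fixed aiming line
    return x, y
```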

&lt;h3 id=&quot;prioritized-experience-replay&quot;&gt;Prioritized experience replay&lt;/h3&gt;

&lt;p&gt;The first option is to use no replay at all: the agent takes a step and learns from it only once. But this is a poor option, as important experiences may be forgotten, and adjacent states correlate with each other.&lt;/p&gt;

&lt;p&gt;An improvement over that is to keep some recent experiences (e.g. 1 million of them), and to train the neural network on randomly sampled batches of experiences. This avoids the correlation between adjacent states, but important experiences may still be revisited only rarely.&lt;/p&gt;
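
&lt;p&gt;Such a uniform replay buffer can be sketched in a few lines - a simplified illustration, not the implementation used here:&lt;/p&gt;

```python
import random
from collections import deque

# Keep only the most recent experiences and train on uniformly sampled
# batches to break the correlation between adjacent states.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old experiences fall off

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```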

&lt;p&gt;A further improvement is to keep a buffer of prioritized experiences. Experiences are prioritized by their Q-value error (the difference between the predicted and the actual Q-value), and the neural network is trained on batches sampled from different error ranges.
That way error-prone but rarely occurring states have a better chance of being retrained upon.&lt;/p&gt;
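
&lt;p&gt;A sketch of how a Q-value error might be converted into a sampling priority, using the memory epsilon and alpha hyperparameters listed in the results below (epsilon keeps zero-error experiences sampleable, alpha controls how strongly errors skew sampling):&lt;/p&gt;

```python
# Standard PER priority formula: (|error| + epsilon) ** alpha.
# The constants mirror the hyperparameters used for this bot.
MEMORY_EPSILON = 0.01
MEMORY_ALPHA = 0.6

def priority(q_value_error):
    return (abs(q_value_error) + MEMORY_EPSILON) ** MEMORY_ALPHA
```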

&lt;p&gt;I ended up using Jaromiru’s reference implementation of Prioritized Experience Replay (&lt;a href=&quot;https://github.com/jaromiru/AI-blog/blob/master/SumTree.py&quot;&gt;SumTree&lt;/a&gt;, &lt;a href=&quot;https://github.com/jaromiru/AI-blog/blob/master/Seaquest-DDQN-PER.py#L88&quot;&gt;Memory&lt;/a&gt;), based on &lt;a href=&quot;https://jaromiru.com/2016/11/07/lets-make-a-dqn-double-learning-and-prioritized-experience-replay/&quot;&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;An experience is a tuple of:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The current state&lt;/li&gt;
  &lt;li&gt;The action taken&lt;/li&gt;
  &lt;li&gt;The reward received&lt;/li&gt;
  &lt;li&gt;The next resulting state&lt;/li&gt;
  &lt;li&gt;Whether the episode is “done”&lt;/li&gt;
&lt;/ul&gt;
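
&lt;p&gt;Such a tuple maps naturally onto a namedtuple; a minimal sketch with illustrative field names:&lt;/p&gt;

```python
from collections import namedtuple

# One replay-buffer entry, matching the fields listed above.
Experience = namedtuple('Experience', ['state', 'action', 'reward', 'next_state', 'done'])

e = Experience(state=[0, 1], action=17, reward=1, next_state=[1, 1], done=False)
```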

&lt;h3 id=&quot;training-in-parallel&quot;&gt;Training in parallel&lt;/h3&gt;

&lt;p&gt;Originally I started with just a single instance of the game for training, but Machine Learning really shines with lots and lots of data (millions or billions of samples - the more the better), so at 2-5 seconds per step it simply took too long to get any substantial amount of training done.&lt;/p&gt;

&lt;p&gt;So I used Selenium+Firefox and Python’s multiprocessing module to run 24 isolated game instances at once, topping out at about 4 steps/second. Even at this low number of instances it’s quite CPU-heavy and makes my poor Ryzen 2700X struggle, slowing the games down.&lt;/p&gt;

&lt;p&gt;You can find an explanation of how it was done in &lt;a href=&quot;https://www.tautvidas.com/blog/2019/08/parallel-and-isolated-game-agents-for-reinforcement-learning/&quot;&gt;my last post&lt;/a&gt;, or find the code &lt;a href=&quot;https://github.com/flakas/bubble-shooter-bot&quot;&gt;on Github&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;results&quot;&gt;Results&lt;/h3&gt;

&lt;p&gt;The most successful hyperparameters so far were:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Exploration probability (epsilon): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.99&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Minimum exploration probability: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.1&lt;/code&gt;, gradually reduced to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.01&lt;/code&gt; with every training session&lt;/li&gt;
  &lt;li&gt;Exploration probability decay every episode: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.999&lt;/code&gt;, gradually reduced to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.9&lt;/code&gt; with every training session&lt;/li&gt;
  &lt;li&gt;Gamma: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.9&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Learning rate: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.00025&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Replay frequency: every &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;4&lt;/code&gt; steps&lt;/li&gt;
  &lt;li&gt;Target network update frequency: every &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1000&lt;/code&gt; steps&lt;/li&gt;
  &lt;li&gt;Memory epsilon: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.01&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Memory alpha: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.6&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Experience memory buffer: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;20000&lt;/code&gt; experiences&lt;/li&gt;
  &lt;li&gt;Batch size: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;32&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
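
&lt;p&gt;The exploration schedule implied by these numbers can be sketched as a simple per-episode decay with a floor (a simplified illustration):&lt;/p&gt;

```python
# Epsilon-greedy schedule: start almost fully random and decay per episode
# down to a minimum, so the agent never stops exploring entirely.
def decay_epsilon(epsilon, decay=0.999, minimum=0.1):
    return max(minimum, epsilon * decay)

epsilon = 0.99
for episode in range(10000):
    epsilon = decay_epsilon(epsilon)  # eventually settles at the floor
```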

&lt;p&gt;In total the model was trained about 6 times for 200-300k steps - roughly 24 hours each time.&lt;/p&gt;

&lt;p&gt;During training the model on average takes ~40 steps (~70 max), and is able to destroy at least 2 bubbles ~19 times in an episode (~39 max).&lt;/p&gt;

&lt;div style=&quot;position:relative;height:0;padding-bottom:56.25%&quot;&gt;&lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/9K343IWO2N4?ecver=2&quot; width=&quot;640&quot; height=&quot;360&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture&quot; style=&quot;position:absolute;width:100%;height:100%;left:0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;

&lt;p&gt;The resulting gameplay is not terribly impressive: the agent passes up some clear shots even when there’s nothing better to do. Its knowledge also seems concentrated around the starting state - it doesn’t do well with empty spaces closer to the top or bottom of the board.
But it does play the game to some extent, and if the game’s RNG is favorable it may even go pretty far.&lt;/p&gt;

&lt;p&gt;As a next step I might try different network architectures and hyperparameters, and, to speed things up, pretrain them offline on the bank of experiences acquired from this model’s gameplay.&lt;/p&gt;

&lt;p&gt;If you’re interested in the code, you can find it &lt;a href=&quot;https://github.com/flakas/bubble-shooter-bot&quot;&gt;on Github&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Tue, 06 Aug 2019 00:00:00 +0300</pubDate>
        <link>https://www.tautvidas.com/blog/2019/08/q-learning-bot-for-bubble-shooter/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2019/08/q-learning-bot-for-bubble-shooter/</guid>
        
        <category>python</category>
        
        <category>bubble shooter</category>
        
        <category>game</category>
        
        <category>reinforcement learning</category>
        
        <category>dueling networks</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>How to build parallel and isolated browser game agents for Reinforcement Machine Learning</title>
        <description>&lt;p&gt;Many Reinforcement Learning tutorials use the &lt;a href=&quot;https://gym.openai.com/&quot;&gt;OpenAI Gym&lt;/a&gt; environment and its supported games to run different algorithms. But what if you want to use Reinforcement Learning on an environment of your own?
Usually you will no longer have ready-built APIs to interact with it, and the environment itself may be slower than Gym’s games, making it difficult to take the &lt;a href=&quot;https://arxiv.org/pdf/1312.5602v1.pdf&quot;&gt;millions of steps for weight updates&lt;/a&gt; necessary to train agent models.&lt;/p&gt;

&lt;p&gt;Lately I’ve been exploring RL for a &lt;a href=&quot;https://www.kongregate.com/games/Paulussss/bubbles-shooter&quot;&gt;Bubble Shooter&lt;/a&gt; game - an Adobe Flash based game running in the browser.
Previously I would run the game in my primary browser and use the &lt;a href=&quot;https://pypi.org/project/pynput/&quot;&gt;Pynput&lt;/a&gt; library to control my computer’s mouse and/or keyboard.
This works for a single game instance, but it is slow - one step takes 2-5 seconds. Running multiple game instances is necessary to speed up learning, but the original approach of controlling the device’s primary mouse or keyboard does not scale easily.&lt;/p&gt;

&lt;p&gt;What helped to speed up the learning process was to use Selenium+Firefox to run the isolated game instances, allowing for straightforward controls for each instance without blocking the device’s primary controls, and running every game instance and the RL agent in separate processes.&lt;/p&gt;

&lt;p&gt;Selenium provides some key functionality necessary for this to work:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;An easy way to take screenshots of DOM elements for the Vision module to use&lt;/li&gt;
  &lt;li&gt;An easy way to send mouse/keyboard inputs to a specific DOM element&lt;/li&gt;
  &lt;li&gt;The ability to run Flash based applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;game-instance-isolation-with-selenium-and-firefox&quot;&gt;Game instance isolation with Selenium and Firefox&lt;/h3&gt;

&lt;p&gt;Selenium Python driver can be installed with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pip install selenium&lt;/code&gt;.
Firefox will also require a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;geckodriver&lt;/code&gt; package to be available in your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$PATH&lt;/code&gt;: &lt;a href=&quot;https://askubuntu.com/questions/870530/how-to-install-geckodriver-in-ubuntu#871077&quot;&gt;How to install geckodriver&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Flash is abandoned and modern browsers no longer include it. On Ubuntu &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;browser-plugin-freshplayer-pepperflash&lt;/code&gt; needs to be installed to get flash running: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo apt-get install browser-plugin-freshplayer-pepperflash&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the Selenium’s Firefox webdriver instance Flash can be enabled with a profile preference:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;from selenium import webdriver
# ...
profile = webdriver.FirefoxProfile()
profile.set_preference(&quot;plugin.state.flash&quot;, 2)
driver = webdriver.Firefox(profile)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The webpage with the flash game can be opened with:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;driver.get('https://www.kongregate.com/games/Paulussss/bubbles-shooter')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The correct DOM element can be found with:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;game_board_element = driver.find_element_by_tag_name('embed')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Mouse movements and clicks are performed using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ActionChains&lt;/code&gt; mechanism. It’s a convenient wrapper and will wait for actions to complete before returning:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;target_x, target_y = (50, 50) # some x, y coordinates
action_chains = webdriver.common.action_chains.ActionChains(driver)
action_chains.move_to_element_with_offset(game_board_element, target_x, target_y)
action_chains.click()
action_chains.perform()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Game board screenshots can be taken with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;game_board_element.screenshot_as_png&lt;/code&gt;.
This returns binary PNG data, which we can turn into an OpenCV-compatible Numpy array:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;from collections import namedtuple

import numpy as np
import cv2
# ...
png_data = game_board_element.screenshot_as_png
file_bytes = np.asarray(bytearray(png_data), dtype=np.uint8)
decoded_image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
rect = game_board_element.rect
(x, y, w, h) = (int(rect['x']), int(rect['y']), int(rect['width']), int(rect['height']))
cropped_image = decoded_image[y:y+h, x:x+w]

Board = namedtuple('Board', ['x', 'y', 'w', 'h', 'screen'])

board = Board(x, y, w, h, cropped_image)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That finds the position of the game board, its dimensions and its contents - all that’s needed for further processing and interaction.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bubble-shooter/gameplay.png&quot; alt=&quot;Bubble shooter&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;parallelizing-game-instances-with-multiprocessing&quot;&gt;Parallelizing game instances with multiprocessing&lt;/h3&gt;

&lt;p&gt;Using threads is an option, but because the goal is to run many game instances (24 in my case), I went with Processes instead to make use of all CPU cores.&lt;/p&gt;

&lt;p&gt;The distribution is as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Main Supervisor, responsible for starting the agents and workers, managing queues&lt;/li&gt;
  &lt;li&gt;Agent process, responsible for running and training the RL model&lt;/li&gt;
  &lt;li&gt;N trainer/player processes, responsible for running Selenium and game instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inter-process communication will be performed through Queue instances, as inspired by Akka’s Actors.&lt;/p&gt;

&lt;h4 id=&quot;1-agent-worker&quot;&gt;1. Agent worker&lt;/h4&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import queue
from multiprocessing import Process, Queue

class AgentProcess:
  def __init__(self, config, my_queue, worker_queues):
    self.agent = Agent(config) # create our agent
    self.my_queue = my_queue
    self.worker_queues = worker_queues

  def send_message(self, worker, message):
    self.worker_queues[worker].put_nowait(message)

  def work(self):
    try:
      while True:
        message = self.my_queue.get(block=True, timeout=60)
        if message['command'] == 'act':
          self.send_message(message['worker'], self.agent.act(message['state']))
        else:
          pass # Add other Agent actions
    except queue.Empty:
      return False

def agent_worker(config, my_queue, worker_queues):
  agent = AgentProcess(config, my_queue, worker_queues)
  return agent.work()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The idea is that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Agent&lt;/code&gt; process will keep waiting for messages from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; workers with new states of the game board.
Then it will ask the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Agent&lt;/code&gt; to make a move, and will respond back to the correct &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; worker about which action it should take.&lt;/p&gt;

&lt;p&gt;In this case the Agent has one queue to receive messages from all &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; workers, but every &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; will have their own queues to receive messages from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Agent&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Since processes are isolated, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Agent&lt;/code&gt; process will be able to have its own Tensorflow model instance and related resources.&lt;/p&gt;

&lt;h4 id=&quot;2-player-worker&quot;&gt;2. Player worker&lt;/h4&gt;

&lt;p&gt;The player worker is responsible for running the game environment, and just needs to ask the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Agent&lt;/code&gt; which action it should make at certain points.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;class PlayerProcess:
  def __init__(self, agent_queue, my_queue, worker_name):
    self.agent_queue = agent_queue
    self.my_queue = my_queue
    self.worker_name = worker_name

  def send_to_agent(self, message={}):
    message['worker'] = self.worker_name
    self.agent_queue.put_nowait(message)

  def get_from_agent(self):
    return self.my_queue.get(block=True)

  def start(self):
    self.browser = SeleniumBrowser()
    self.browser.setup()

  def stop(self):
    self.browser.cleanup()

  def play(self, episodes, steps):
    for e in range(episodes):
      for step in range(steps):
        state = self.browser.get_game_board() # refresh the board state every step
        self.send_to_agent({ 'command': 'act', 'state': state })
        action = self.get_from_agent()

        self.browser.move_to(action, 400)

def player_worker(config, agent_queue, my_queue, my_name):
  player = PlayerProcess(agent_queue, my_queue, my_name)
  player.start()
  player.play(config['episodes'], config['steps'])
  player.stop()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PlayerProcess&lt;/code&gt; is simplified to show the main ideas.&lt;/p&gt;

&lt;p&gt;In plain English, all it does is: get the game board state from the Selenium instance, ask the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Agent&lt;/code&gt; to choose an action based on that state, and have the Selenium instance perform the action.&lt;/p&gt;

&lt;h4 id=&quot;3-training-supervisor&quot;&gt;3. Training supervisor&lt;/h4&gt;

&lt;p&gt;The training supervisor needs to start all of the relevant processes and queues. It also starts training, and releases all resources afterwards.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;class Supervisor:
  def __init__(self, config, total_players=2):
    self.total_players = total_players
    self.agent_queue = Queue()
    self.worker_queues = [Queue() for _ in range(total_players)]
    self.config = config
    self.agent = Process(target=agent_worker, args=(config, self.agent_queue, self.worker_queues))
    self.workers = [Process(target=player_worker, args=(config, self.agent_queue, self.worker_queues[name], name)) for name in range(total_players)]

  def start(self):
    self.agent.start()
    for worker in self.workers:
      worker.start()

  def wait_to_finish(self):
    for worker in self.workers:
      worker.join()
    self.agent.terminate()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And to actually run the supervisor, spin up all player instances and start the playing process:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;config = ... # your training config, e.g. with 'episodes' and 'steps' keys
if __name__ == '__main__':
  supervisor = Supervisor(config, total_players=10)
  supervisor.start()
  supervisor.wait_to_finish()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once executed, this should start a bunch of browsers:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/bubble-shooter/many-browsers.png&quot; alt=&quot;Many browsers running&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I’m using this approach to run 24 game instances for my Q-learning process for Bubble Shooter. If you’d like to see it in action, you can &lt;a href=&quot;https://github.com/flakas/bubble-shooter-bot/blob/8b5304c047a6f0a25d58c04389247a7bd30d345d/train_multiprocess.py#L193&quot;&gt;find it on GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Mon, 05 Aug 2019 00:00:00 +0300</pubDate>
        <link>https://www.tautvidas.com/blog/2019/08/parallel-and-isolated-game-agents-for-reinforcement-learning/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2019/08/parallel-and-isolated-game-agents-for-reinforcement-learning/</guid>
        
        <category>python</category>
        
        <category>bubble shooter</category>
        
        <category>game</category>
        
        <category>reinforcement learning</category>
        
        <category>selenium</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>How to solve a puzzle game with Python and Z3 Theorem prover</title>
        <description>&lt;p&gt;This post shows how Microsoft’s &lt;a href=&quot;https://github.com/Z3Prover/z3&quot;&gt;Z3 Theorem prover&lt;/a&gt; can be used to solve the Puzlogic puzzle game. We will replace the previous brute-force approach used in my puzlogic-bot with one that uses Z3.&lt;/p&gt;

&lt;p&gt;Over the &lt;a href=&quot;https://www.tautvidas.com/blog/2018/09/solving-puzlogic-puzzle-game-with-python-and-opencv-part-1/&quot;&gt;last few posts&lt;/a&gt; we walked through building up puzlogic-bot: building the basic solver, the vision component using OpenCV, and hooking up mouse controls with Pynput.
As I’m learning to use Z3, implementing a Z3-based puzzle solver seemed like an interesting practical exercise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR: you can find the final solver code &lt;a href=&quot;https://github.com/flakas/puzlogic-bot/blob/a4d97992aa017df97dc5613851bbe95a8d4eb449/puzbot/solvers/z3.py&quot;&gt;on Github&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Z3 is a theorem prover made by Microsoft. The SMT (&lt;a href=&quot;https://en.wikipedia.org/wiki/Satisfiability_modulo_theories&quot;&gt;Satisfiability modulo theories&lt;/a&gt;) family of solvers builds on the SAT (&lt;a href=&quot;https://en.wikipedia.org/wiki/Boolean_satisfiability_problem&quot;&gt;Boolean satisfiability problem&lt;/a&gt;) family, providing capabilities to deal with numbers, lists, bit vectors and so on, instead of just boolean formulas.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/level-3.png&quot; alt=&quot;Level 3&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.kongregate.com/games/ejbarreto/puzlogic?haref=HP_HNG_puzlogic&quot;&gt;Puzlogic&lt;/a&gt; is a number-based puzzle game. The goal is to insert all given numbers into the board so that all rows and columns contain unique numbers and, where a target sum is given, the row or column adds up to that value.&lt;/p&gt;

&lt;h3 id=&quot;basics-of-z3&quot;&gt;Basics of Z3&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://yurichev.com/writings/SAT_SMT_by_example.pdf&quot;&gt;“SAT/SMT by Example”&lt;/a&gt; by Dennis Yurichev is a great book to help beginners pick up Z3, and it was the inspiration for this post.&lt;/p&gt;

&lt;p&gt;At school we all worked through equation systems like:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;
&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;0.5&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Z3 can be thought of as a constraint solver. Given a set of constraints, it will try to find a set of variable values that satisfies them.
If it can find a valid set of values, the constraints (and thus the theorem) are satisfiable. If not, they are unsatisfiable.&lt;/p&gt;

&lt;p&gt;The most basic example given in Yurichev’s book is:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;kn&quot;&gt;from&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;z3&lt;/span&gt; &lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Real&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'x'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Real&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'y'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Real&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;'z'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Solver&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;0.5&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;check&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;())&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;model&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Executing it outputs:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;sat&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;z&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here each equation can be considered a constraint: the variable values have to satisfy every equation in the system.
Z3 is way more powerful than that, as is evident from the &lt;a href=&quot;https://z3prover.github.io/api/html/namespacez3py.html&quot;&gt;Z3py reference&lt;/a&gt;, but for the Puzlogic puzzle this is sufficient.&lt;/p&gt;

&lt;p&gt;Next we need a way to describe a given puzzle as a set of constraints Z3 can work with.&lt;/p&gt;

&lt;h3 id=&quot;solving-puzlogic-puzzles&quot;&gt;Solving Puzlogic puzzles&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/level-6.png&quot; alt=&quot;Puzlogic - Level 6&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To quickly recap, the rules of the game are as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;All numbers on a given row or column must be unique&lt;/li&gt;
  &lt;li&gt;When a sum target is given for a row or column, all cells in it must add up to that target&lt;/li&gt;
  &lt;li&gt;All purple pieces at the bottom of the game board must be used to fill the game board&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As established in the previous post, the game board is defined as a sparse matrix: a list of cells with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x, y&lt;/code&gt; coordinates and a value. E.g.:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;board&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# (row, column, value)
&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A value of zero is treated as an empty cell.&lt;/p&gt;

&lt;p&gt;Purple game pieces are expressed just as a list of numbers: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[1, 2, 3, 4, 5, 6]&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;While target sums can be expressed as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;(dimension, index, target_sum)&lt;/code&gt; tuples:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;target_sums&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# (dimension, index, target_sum)
&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;12&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# second row sums up to 12
&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# third row sums up to 10
&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;11&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# first column sums up to 11
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Dimension of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0&lt;/code&gt; marks a row, while &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1&lt;/code&gt; marks a column.&lt;/p&gt;
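&lt;p&gt;To make this convention concrete, here is a stdlib-only sketch (the helper name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cells_in_line&lt;/code&gt; is my own, not from the bot's code) that selects the cells a target sum applies to:&lt;/p&gt;

```python
# Pick out the cells a (dimension, index, target_sum) tuple refers to:
# dimension 0 matches the row coordinate, dimension 1 the column coordinate.
def cells_in_line(board, dimension, index):
    return [cell for cell in board if cell[dimension] == index]

board = [
    (0, 0, 0), (0, 3, 6),
    (1, 0, 2), (1, 1, 0), (1, 3, 0),
    (2, 0, 0), (2, 2, 5), (2, 3, 0),
    (3, 0, 0), (3, 3, 1),
]

print(cells_in_line(board, 0, 1))  # second row: [(1, 0, 2), (1, 1, 0), (1, 3, 0)]
print(cells_in_line(board, 1, 0))  # first column: [(0, 0, 0), (1, 0, 2), (2, 0, 0), (3, 0, 0)]
```

&lt;p&gt;The same &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cell[dimension] == index&lt;/code&gt; filter appears again below when generating the sum constraints.&lt;/p&gt;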

&lt;p&gt;For the implementation there are 5 constraints to cover:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Limit pre-filled cells to their given values. Otherwise Z3 could assign them other values.&lt;/li&gt;
  &lt;li&gt;Allow empty cells to take only the values of available pieces.&lt;/li&gt;
  &lt;li&gt;Require that there be no duplicate values in any row or column.&lt;/li&gt;
  &lt;li&gt;If any sum targets are specified for rows or columns, require them to be satisfied.&lt;/li&gt;
  &lt;li&gt;Require that each purple game piece be used only once.&lt;/li&gt;
&lt;/ol&gt;
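&lt;p&gt;Before expressing these in Z3, the same five constraints can be cross-checked with a brute-force search over piece permutations, using only the standard library. This is my own sanity-check sketch, not part of the bot: constraints 1, 2 and 5 are implicit in assigning each piece to exactly one empty cell, while 3 and 4 are checked explicitly.&lt;/p&gt;

```python
import itertools

def solve_brute_force(board, pieces, target_sums):
    """Try each assignment of pieces to empty cells; return the first filled
    grid satisfying the uniqueness and sum-target rules, or None."""
    empty = [(row, col) for (row, col, value) in board if value == 0]
    prefilled = {(row, col): value for (row, col, value) in board if value != 0}
    for perm in itertools.permutations(pieces, len(empty)):
        grid = dict(prefilled)
        grid.update(zip(empty, perm))
        # Constraint 3: no duplicate values in any row or column
        rows, cols = {}, {}
        for (row, col), value in grid.items():
            rows.setdefault(row, []).append(value)
            cols.setdefault(col, []).append(value)
        lines = list(rows.values()) + list(cols.values())
        if any(len(line) != len(set(line)) for line in lines):
            continue
        # Constraint 4: every (dimension, index, target_sum) tuple is satisfied
        if all(sum(v for (r, c), v in grid.items() if (r, c)[dim] == idx) == target
               for (dim, idx, target) in target_sums):
            return grid
    return None

board = [
    (0, 0, 0), (0, 3, 6),
    (1, 0, 2), (1, 1, 0), (1, 3, 0),
    (2, 0, 0), (2, 2, 5), (2, 3, 0),
    (3, 0, 0), (3, 3, 1),
]
solution = solve_brute_force(board, [1, 2, 3, 4, 5, 6],
                             [(0, 1, 12), (0, 2, 10), (1, 0, 11)])
print(solution[(0, 0)], solution[(3, 0)])  # 1 5
```

&lt;p&gt;With six pieces this searches at most 720 assignments, so it is only practical for tiny boards; the constraint-solver approach scales much further.&lt;/p&gt;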

&lt;h4 id=&quot;0-z3-variables&quot;&gt;0. Z3 variables&lt;/h4&gt;
&lt;p&gt;First, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;board&lt;/code&gt; is extended by creating a Z3 variable for each cell to make it easier to apply constraints later on:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;cell_name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;'c_%d_%d'&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;%&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;solve&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;solver&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Solver&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Create z3 variables for each cell
&lt;/span&gt;  &lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Int&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cell_name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)))&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Both default values and the purple game pieces are integers, hence the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Int&lt;/code&gt; variable type.&lt;/p&gt;

&lt;h4 id=&quot;1-pre-filled-variables&quot;&gt;1. Pre-filled variables&lt;/h4&gt;

&lt;p&gt;So far Z3 could assign any integer to any of the variables we created.
We constrain pre-filled cells to keep the values they already have.&lt;/p&gt;

&lt;p&gt;As mentioned before, in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;board&lt;/code&gt; sparse matrix a value of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0&lt;/code&gt; indicates an empty cell, so we fix values only for cells whose value is greater than 0:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;is_cell_empty&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;set_prefilled_cell_values&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;not&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;is_cell_empty&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cell == value&lt;/code&gt; comparison generates an expression object of type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;z3.z3.BoolRef&lt;/code&gt;, since &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cell&lt;/code&gt; here is an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Int&lt;/code&gt; variable we created above and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;value&lt;/code&gt; is a constant.&lt;/p&gt;

&lt;p&gt;E.g. for the cell &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;(0, 3, 6)&lt;/code&gt; the expression would be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Int('c_0_3') == 6&lt;/code&gt;.&lt;/p&gt;

&lt;h4 id=&quot;2-empty-cells-can-gain-a-value-from-any-of-the-pieces&quot;&gt;2. Empty cells can gain a value from any of the pieces&lt;/h4&gt;

&lt;p&gt;Having available pieces &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[1, 2, 3, 4, 5, 6]&lt;/code&gt;, we want the following constraint for each of the empty cells: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cell == 1 OR cell == 2 OR cell == 3 OR cell == 4 OR cell == 5 OR cell == 6&lt;/code&gt;&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;set_possible_target_cell_values&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;

  &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;is_cell_empty&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
      &lt;span class=&quot;n&quot;&gt;any_of_the_piece_values&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;piece&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;piece&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
      &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;append&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;any_of_the_piece_values&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt;

  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This returns a list of constraints in the format:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# ...
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You may notice that this still allows cells on the same row or column to share a value; constraint 3 will fix that.
It also allows the same piece to be used in multiple cells, which constraint 5 addresses.&lt;/p&gt;

&lt;h4 id=&quot;3-no-duplicates-in-rows-or-columns&quot;&gt;3. No duplicates in rows or columns&lt;/h4&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;require_unique_row_and_column_cells&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c2&lt;/span&gt;
      &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;itertools&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;combinations&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
      &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x2&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;or&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;itertools.combinations()&lt;/code&gt;, given a list &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[A, B, C, ...]&lt;/code&gt;, will output the pairs &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[AB, AC, BC, ...]&lt;/code&gt; without repetition.
The generated constraints then look like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;c_0_0 != c_0_3&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;c_0_0 != c_1_0&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;c_0_0 != c_2_0&lt;/code&gt; and so on.&lt;/p&gt;
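&lt;p&gt;A quick stdlib illustration of this pair filter, using plain tuples and strings in place of Z3 variables (my sketch, not the bot's code):&lt;/p&gt;

```python
import itertools

# Keep only pairs of cells that share a row (first element) or a column
# (second element), mirroring the filter in require_unique_row_and_column_cells.
cells = [(0, 0, 'c_0_0'), (0, 3, 'c_0_3'), (1, 0, 'c_1_0'), (2, 2, 'c_2_2')]

pairs = [(n1, n2)
         for ((x1, y1, n1), (x2, y2, n2)) in itertools.combinations(cells, 2)
         if x1 == x2 or y1 == y2]
print(pairs)  # [('c_0_0', 'c_0_3'), ('c_0_0', 'c_1_0')]
```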

&lt;p&gt;We use it to make sure that no two cells sharing a row or column index have the same value.
Instead of manually iterating over each cell combination, we could have used the &lt;a href=&quot;https://z3prover.github.io/api/html/namespacez3py.html#a9eae89dd394c71948e36b5b01a7f3cd0&quot;&gt;Distinct(args) expression&lt;/a&gt; for each row and column.&lt;/p&gt;

&lt;h4 id=&quot;4-satisfying-sum-targets&quot;&gt;4. Satisfying sum targets&lt;/h4&gt;

&lt;p&gt;Any row or column may have a sum target that its cells must add up to.&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;match_sum_requirements&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sum_requirements&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dimension&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;target_sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sum_requirements&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;n&quot;&gt;relevant_cells&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dimension&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
      &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;append&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;relevant_cells&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;target_sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It’s a fairly straightforward sum of the relevant cells. E.g. for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;(0, 1, 12)&lt;/code&gt; sum target, the constraint expression would be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Sum(c_1_0, c_1_1, c_1_3) == 12&lt;/code&gt;.&lt;/p&gt;

&lt;h4 id=&quot;5-all-pieces-are-used-once&quot;&gt;5. All pieces are used once&lt;/h4&gt;

&lt;p&gt;Here this is ensured by requiring that the values placed in empty cells sum up to the same total as the pieces given originally.
Just checking for unique values would be insufficient, as some levels contain multiple pieces with the same value, e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[4, 5, 6, 4, 5, 6]&lt;/code&gt;.&lt;/p&gt;
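&lt;p&gt;A plain-Python illustration of the difference (my sketch, not the bot's code): with a duplicate piece set, a uniqueness check wrongly rejects a valid assignment, while a multiset or sum comparison accepts it.&lt;/p&gt;

```python
from collections import Counter

pieces = [4, 5, 6, 4, 5, 6]      # duplicate values are legal on some levels
assignment = [6, 4, 5, 6, 5, 4]  # one piece per empty cell

# A naive "all values unique" check wrongly rejects this assignment:
print(len(set(assignment)) == len(assignment))  # False

# Comparing multisets (or, as in the post, the sums) accepts it:
print(Counter(assignment) == Counter(pieces))   # True
print(sum(assignment) == sum(pieces))           # True
```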

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;target_cells_use_all_available_pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;empty_cells&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;board&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;is_cell_empty&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)]&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;empty_cells&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sum&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;putting-it-all-together&quot;&gt;Putting it all together&lt;/h4&gt;

&lt;p&gt;Now that we have methods that generate constraint expressions, we can add them all to the solver and format the output to make it look more readable:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; \
  &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;set_prefilled_cell_values&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; \
  &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;set_possible_target_cell_values&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; \
  &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;require_unique_row_and_column_cells&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; \
  &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;match_sum_requirements&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sum_requirements&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; \
  &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;target_cells_use_all_available_pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pieces&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;constraint&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;constraints&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;solver&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;constraint&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;solver&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;check&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sat&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;model&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;solver&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;model&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
    &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;model&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;].&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;as_long&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;())&lt;/span&gt;
      &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;row&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;column&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cell&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;extended_board&lt;/span&gt;
      &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;is_cell_empty&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;else&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It’s worth noting that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;solver.check()&lt;/code&gt; may return &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;z3.sat&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;z3.unsat&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;z3.unknown&lt;/code&gt;. If the constraints are satisfiable, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;solver.model()&lt;/code&gt; will contain a satisfying assignment of values:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# solver.model()
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_3_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_2_2&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
 &lt;span class=&quot;n&quot;&gt;c_0_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The values, however, are still Z3 objects, so e.g. an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Int()&lt;/code&gt; object value may need to be converted into a primitive &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;int&lt;/code&gt; via the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;as_long()&lt;/code&gt; method.&lt;/p&gt;
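
&lt;p&gt;As a minimal standalone sketch of that conversion (assuming the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;z3-solver&lt;/code&gt; package is installed; the variable and constraint here are made up for illustration):&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;from z3 import Int, Solver, sat

x = Int('x')             # a Z3 integer variable
solver = Solver()
solver.add(x * 2 == 10)  # a single constraint

assert solver.check() == sat
value = solver.model()[x].as_long()  # converts the Z3 IntNumRef into a primitive int, here 5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;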

&lt;p&gt;Once the model is solved, we format the return values in the form of:&lt;/p&gt;
&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;solution&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# (row, column, expected_piece_value)
&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Performed by a human, these steps will solve the puzzle.&lt;/p&gt;

&lt;p&gt;The final set of constraints for level 6 was:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# 1. Pre-filled cell values
&lt;/span&gt;  &lt;span class=&quot;n&quot;&gt;c_0_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_2&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# 2. Possible values for empty cells
&lt;/span&gt;  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;Or&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# 3. Unique values in rows and columns
&lt;/span&gt;  &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_0_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_2_2&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# 4. Row/column sum targets
&lt;/span&gt;  &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;12&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_2&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;11&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# 5. Ensuring that all pieces have been used once
&lt;/span&gt;  &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_0_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_1_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_2_3&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c_3_0&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;21&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;final-thoughts&quot;&gt;Final thoughts&lt;/h3&gt;

&lt;p&gt;The &lt;a href=&quot;https://github.com/flakas/puzlogic-bot&quot;&gt;puzlogic-bot&lt;/a&gt; project is able to use the Z3-based solver only for the first few levels, but that’s because the Vision component is not able to read the row/column target sums correctly.
As we have seen in the example, the Z3-based solver itself has no issues dealing with target sums in later levels.&lt;/p&gt;

&lt;p&gt;I found it rather surprising just how approachable Z3 was for a newcomer. The puzzle itself was perhaps primitive, but the main difficulty was not getting lost in the constraint definitions the puzzle required.&lt;/p&gt;

&lt;p&gt;You can find the final code of the solver &lt;a href=&quot;https://github.com/flakas/puzlogic-bot/blob/a4d97992aa017df97dc5613851bbe95a8d4eb449/puzbot/solvers/z3.py&quot;&gt;on Github&lt;/a&gt;.
If you are curious how &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;puzlogic-bot&lt;/code&gt; was built, I have &lt;a href=&quot;https://www.tautvidas.com/blog/blog/2018/09/solving-puzlogic-puzzle-game-with-python-and-opencv-part-1/&quot;&gt;a series of posts&lt;/a&gt; on it too.&lt;/p&gt;

&lt;p&gt;How do you use, or how would you use, constraint solvers?&lt;/p&gt;
</description>
        <pubDate>Mon, 11 Feb 2019 00:00:00 +0200</pubDate>
        <link>https://www.tautvidas.com/blog/2019/02/solving-a-puzzle-game-with-z3-theorem-prover/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2019/02/solving-a-puzzle-game-with-z3-theorem-prover/</guid>
        
        <category>python</category>
        
        <category>puzlogic</category>
        
        <category>game</category>
        
        <category>z3</category>
        
        <category>smt</category>
        
        <category>sat</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>Debugging TLA+ specifications with state dumps</title>
        <description>&lt;p&gt;&lt;a href=&quot;http://lamport.azurewebsites.net/tla/tla.html&quot;&gt;TLA+&lt;/a&gt; (Temporal Logic of Actions) is a formal specification language, used for specifying and checking systems and algorithms for corectness.
As I’m further exploring it (see my last post about &lt;a href=&quot;https://www.tautvidas.com/blog/blog/2017/12/experimenting-with-tla-and-pluscal-3-throttling-multiple-senders/&quot;&gt;modelling subscription throttling&lt;/a&gt;), I’m ocasionally running into situations where I think there may be an issue with the current specification as they unexpectedly do not terminate, but without a clear reason.&lt;/p&gt;

&lt;p&gt;In programming languages it is common to fire up a debugger and step through the program execution, inspecting instruction flow and variable values.
With TLC there is no clear way to do so, or at least I was unable to find one. TLC will provide an error trace for errors it does find, but if TLC execution does not terminate, that is of little use.&lt;/p&gt;

&lt;p&gt;In &lt;a href=&quot;https://lamport.azurewebsites.net/tla/tlc-options.html&quot;&gt;TLC Command-line options&lt;/a&gt; I found the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-dump &amp;lt;file&amp;gt;&lt;/code&gt; option, which dumps out every reached state into the specified file.
There is no equivalent GUI option in “TLA+ Toolbox’s” TLC options, so it has to be specified as a TLC command line parameter in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Model &amp;gt; Advanced Options &amp;gt; TLC Options &amp;gt; TLC command line parameters&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/tla-2019/dump-command-line-parameter.png&quot; alt=&quot;TLC command line parameters&quot; /&gt;&lt;/p&gt;
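
&lt;p&gt;If you run TLC from the command line instead of the Toolbox (assuming &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tla2tools.jar&lt;/code&gt; is available locally; the spec name is illustrative), the same option can be passed directly:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;java -cp tla2tools.jar tlc2.TLC -dump states.txt MySpec
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;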

&lt;p&gt;The dump file itself consists of plaintext state values, so it is certainly human-readable. Take this tautology-checking spec, run with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-dump states.txt&lt;/code&gt; parameter:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;VARIABLES P, Q

F1(A, B) == A =&amp;gt; B
F2(A, B) == ~A \/ B

ValuesEquivalent == F1(P, Q) &amp;lt;=&amp;gt; F2(P, Q)

Init ==
    /\ P \in BOOLEAN
    /\ Q \in BOOLEAN
    
Next ==
    /\ P' \in BOOLEAN
    /\ Q' \in BOOLEAN
    
Spec ==
    Init /\ [][Next]_&amp;lt;&amp;lt;P, Q&amp;gt;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The dump file in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;SPEC_PATH&amp;gt;/&amp;lt;SPEC_NAME&amp;gt;.toolbox/&amp;lt;MODEL_NAME&amp;gt;/states.txt.dump&lt;/code&gt; contains:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;State 1:
/\ P = FALSE
/\ Q = FALSE

State 2:
/\ P = FALSE
/\ Q = TRUE

State 3:
/\ P = TRUE
/\ Q = FALSE

State 4:
/\ P = TRUE
/\ Q = TRUE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It’s a low-tech approach, and it is still difficult to follow execution in cases with many states, but it has helped me get a better grasp of which states TLC is exploring and to clarify my suspicions about issues.
One thing to note is that, depending on the size of a state and the number of states, the file size may quickly grow into the gigabytes range, so in certain cases it may be impractical to dump all distinct states, requiring early termination of TLC.&lt;/p&gt;

&lt;p&gt;This is a very basic method, but if you have better ways of debugging specs themselves - please do let me know!&lt;/p&gt;
</description>
        <pubDate>Wed, 02 Jan 2019 00:00:00 +0200</pubDate>
        <link>https://www.tautvidas.com/blog/2019/01/debugging-tla-specifications-with-state-dumps/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2019/01/debugging-tla-specifications-with-state-dumps/</guid>
        
        <category>formal methods</category>
        
        <category>tla</category>
        
        <category>tlc</category>
        
        <category>debugging</category>
        
        <category>toolbox</category>
        
        
        <category>Programming</category>
        
      </item>
    
      <item>
        <title>Adding controls and bot logic - Part 3 of Solving Puzlogic puzzle game with Python and OpenCV</title>
        <description>&lt;p&gt;In the &lt;a href=&quot;https://www.tautvidas.com/blog/blog/2018/11/adding-basic-vision-part-2-of-solving-puzlogic-with-python-and-opencv/&quot;&gt;the second part&lt;/a&gt; of the series we added the Vision component for the &lt;a href=&quot;https://www.kongregate.com/games/ejbarreto/puzlogic?haref=HP_HNG_puzlogic&quot;&gt;Puzlogic puzzle&lt;/a&gt; solving bot.&lt;/p&gt;

&lt;p&gt;In this post we will add a mouse controller as well as basic bot logic to get the bot to actually solve some of the beginner puzzles.&lt;/p&gt;

&lt;h3 id=&quot;planning&quot;&gt;Planning&lt;/h3&gt;

&lt;p&gt;For controls, as before in the Burrito Bison bot, we only need the mouse. So I’ll reuse that component, which uses &lt;a href=&quot;https://pypi.org/project/pynput/&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pynput&lt;/code&gt;&lt;/a&gt; for mouse controls.&lt;/p&gt;

&lt;p&gt;For the bot logic, we’ll need to use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Vision&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Controls&lt;/code&gt; components, making use of all 3 to solve a given puzzle.
In this case the bot will rely on a human to set up the game level and handle game UIs, while the bot itself will be responsible only for finding the right solution and moving game pieces to the target cells.&lt;/p&gt;

&lt;h3 id=&quot;implementation&quot;&gt;Implementation&lt;/h3&gt;

&lt;h4 id=&quot;mouse-controller&quot;&gt;Mouse Controller&lt;/h4&gt;

&lt;p&gt;As mentioned before, we’ll use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Pynput&lt;/code&gt; for mouse controls. We start with the basic outline:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import time
from pynput.mouse import Button, Controller as MouseController

class Controller:
    def __init__(self):
        self.mouse = MouseController()

    def move_mouse(self, x, y):
        pass  # implemented below

    def left_mouse_drag(self, start, end):
        pass  # implemented below
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;move_mouse(x, y)&lt;/code&gt;, it needs to move the mouse to the given coordinate on screen.&lt;/p&gt;

&lt;p&gt;Pynput is able to move the mouse in a single frame just by changing the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mouse.position&lt;/code&gt; attribute, but for me that caused problems in the past, as the game would simply not keep up and might handle such mouse movements unpredictably (e.g. not registering the movement at all, or registering it only partially).
And in a way that makes sense: human mouse movements normally take several hundred milliseconds, not under 1ms.&lt;/p&gt;

&lt;p&gt;To mimic such gestures I’ve added a way to smooth out the mouse movement over some time period, e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;100ms&lt;/code&gt;, by taking a step every so often (e.g. every &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;2.5ms&lt;/code&gt; in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;40&lt;/code&gt; steps).&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def move_mouse(self, x, y):
    def set_mouse_position(x, y):
        self.mouse.position = (int(x), int(y))
    def smooth_move_mouse(from_x, from_y, to_x, to_y, speed=0.1):
        steps = 40
        sleep_per_step = speed / steps  # true division: 0.1s / 40 steps = 2.5ms per step
        x_delta = (to_x - from_x) / steps
        y_delta = (to_y - from_y) / steps
        for step in range(steps):
            new_x = x_delta * (step + 1) + from_x
            new_y = y_delta * (step + 1) + from_y
            set_mouse_position(new_x, new_y)
            time.sleep(sleep_per_step)
    return smooth_move_mouse(
        self.mouse.position[0],
        self.mouse.position[1],
        x,
        y
    )
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
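&lt;p&gt;As a quick sanity check of the interpolation above (with hypothetical start and end coordinates), the last of the 40 steps should land exactly on the target coordinate:&lt;/p&gt;

```python
# Hypothetical coordinates to check the interpolation arithmetic
steps = 40
from_x, to_x = 100, 300
x_delta = (to_x - from_x) / steps

# Position after the final step (step index 39), as computed in the loop
final_x = x_delta * ((steps - 1) + 1) + from_x
assert final_x == to_x
```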

&lt;p&gt;The second necessary operation is mouse dragging, which means pressing the left mouse button down, moving the mouse to the right position and releasing the left mouse button.
Again, all those steps could be performed programmatically in under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1ms&lt;/code&gt;, but not all games can keep up, so we’ll spread the gesture actions out over roughly a second.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def left_mouse_drag(self, start, end):
    delay = 0.2
    self.move_mouse(*start)
    time.sleep(delay)
    self.mouse.press(Button.left)
    time.sleep(delay)
    self.move_mouse(*end)
    time.sleep(delay)
    self.mouse.release(Button.left)
    time.sleep(delay)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And that should be sufficient for the bot to reasonably interact with the game.&lt;/p&gt;

&lt;h4 id=&quot;bot-logic&quot;&gt;Bot logic&lt;/h4&gt;

&lt;p&gt;The bot component will need to coordinate the other 3 components.
For bot logic there are 2 necessary steps:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt; component needs to get information from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Vision&lt;/code&gt; about available cells, their contents, available pieces in order for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt; to provide a solution.&lt;/li&gt;
  &lt;li&gt;Take the solution from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt;, and map its results into real-life actions, by moving the mouse via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Controls&lt;/code&gt; to the right positions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We start by defining the structure:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;class Bot:
    &quot;&quot;&quot; Needs to map vision coordinates to solver coordinates &quot;&quot;&quot;

    def __init__(self, vision, controls, solver):
        self.vision = vision
        self.controls = controls
        self.solver = solver
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then we need a way to feed information to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt;. But we can’t just feed vision information straight into the solver (at least not in the current setup), as the two work with slightly differently structured data.
So we first need to map vision information into structures that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt; can understand.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_moves(self):
    return self.solver.solve(self.get_board(), self.get_pieces(), self.get_constraints())
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;self.vision.get_cells()&lt;/code&gt; returns cell information as a list of cells in the format &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Cell(x, y, width, height, content)&lt;/code&gt;, but &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt; expects a list of cells in the format &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;(row, column, piece)&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_board(self):
    &quot;&quot;&quot; Prepares vision cells for solver &quot;&quot;&quot;
    cells = self.vision.get_cells()

    return list(map(lambda c: (c.x, c.y, c.content), cells))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;self.get_pieces()&lt;/code&gt;, the solver expects a list of integers representing the available pieces. However, vision returns &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Cell(x, y, width, height, content)&lt;/code&gt; records. So we need to map those as well:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_pieces(self):
    &quot;&quot;&quot; Prepares vision pieces for solver &quot;&quot;&quot;
    return list(map(lambda p: p.content, self.vision.get_pieces()))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;self.get_constraints()&lt;/code&gt; is currently not implemented in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Vision&lt;/code&gt; component and instead returns an empty list &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[]&lt;/code&gt;. We can just pass that along to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Solver&lt;/code&gt; unchanged for now, but will likely have to change this once constraints are implemented.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_constraints(self):
    &quot;&quot;&quot; Prepares vision constraints for solver &quot;&quot;&quot;
    return self.vision.get_constraints()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now that we have the solution in the form &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[(target_row, target_column, target_piece_value), ...]&lt;/code&gt;, we need to map that back into something that could work for the graphical representation.
We already treat &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;y&lt;/code&gt; of each cell as the “row” and “column”, which works because all cells are arranged in a grid anyway.
Now we only need to find which available pieces to take from which cell, and move those to respective target cells.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def do_moves(self):
    moves = self.get_moves()
    board = self.vision.get_game_board()
    available_pieces = self.vision.get_pieces()

    def get_available_piece(piece, pieces):
        target = list(filter(lambda p: p.content == piece, pieces))[0]
        remaining_pieces = list(filter(lambda p: p != target, pieces))
        return (target, remaining_pieces)

    for (to_x, to_y, required_piece) in moves:
        (piece, available_pieces) = get_available_piece(required_piece, available_pieces)

        # Offset of the game screen within a window + offset of the cell + center of the cell
        move_from = (board.x + piece.x + piece.w/2, board.y + piece.y + piece.h/2)
        move_to = (board.x + to_x + piece.w/2, board.y + to_y + piece.h/2)
        print('Moving', move_from, move_to)

        self.controls.left_mouse_drag(
            move_from,
            move_to
        )
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get_available_piece&lt;/code&gt; takes the first piece with the required value from vision’s set of pieces, uses it as the source of the move, and removes it from the pool of remaining pieces.&lt;/p&gt;
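&lt;p&gt;To illustrate the behavior with hypothetical &lt;code&gt;Cell&lt;/code&gt; values (mirroring the tuples the vision component returns):&lt;/p&gt;

```python
from collections import namedtuple

# Hypothetical cells, in the shape the vision component produces
Cell = namedtuple('Cell', ['x', 'y', 'w', 'h', 'content'])
pieces = [Cell(353, 517, 46, 46, 1), Cell(401, 517, 46, 46, 2)]

def get_available_piece(piece, pieces):
    # Pick the first piece carrying the required value...
    target = list(filter(lambda p: p.content == piece, pieces))[0]
    # ...and drop it from the pool so it is not reused
    remaining_pieces = list(filter(lambda p: p != target, pieces))
    return (target, remaining_pieces)

# Request a piece with value 2: the second cell is picked,
# and only the value-1 piece remains available
target, remaining = get_available_piece(2, pieces)
```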

&lt;p&gt;As for handling coordinates, one thing not to forget is that coordinates point to the top left corner of the cell, and are not always absolute in relation to the OS screen.
E.g. a piece’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x&lt;/code&gt; coordinate is relative to the game screen’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x&lt;/code&gt; coordinate, so to find its absolute coordinate we need to sum the two: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;absolute_x = board.x + piece.x&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The top left corner of the piece is also not always usable - the game may not respond to it. Targeting the center of the cell, however, works reliably.
So we offset the absolute coordinate by half of the cell’s width or height: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;absolute_x = board.x + piece.x + piece.width/2&lt;/code&gt;.&lt;/p&gt;
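&lt;p&gt;With some hypothetical numbers (a game area starting at x=391 and a 46px-wide piece at x=353 within it), the arithmetic works out as follows:&lt;/p&gt;

```python
# Hypothetical offsets: game area within the OS screen, piece within the game area
board_x = 391
piece_x, piece_w = 353, 46

# Window offset + cell offset + half the cell width = absolute center
absolute_x = board_x + piece_x + piece_w / 2
assert absolute_x == 767.0
```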

&lt;h3 id=&quot;finally-running-the-bot&quot;&gt;Finally running the bot!&lt;/h3&gt;

&lt;p&gt;Now we have all the tools for the bot to solve basic puzzles of Puzlogic. The last remaining piece is to build a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;run.py&lt;/code&gt; script putting it all together:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;from puzbot.vision import ScreenshotSource, Vision
from puzbot.bot import Bot
from puzbot.solver import Solver
from puzbot.controls import Controller

source = ScreenshotSource()
vision = Vision(source)
solver = Solver()
controller = Controller()
bot = Bot(vision, controller, solver)

print('Checking out the game board')
bot.refresh()

print('Game board:', vision.get_cells())
print('Game pieces:', vision.get_pieces())

print('Calculating the solution')
print('Suggested solution:', bot.get_moves())
print('Performing the solution')
bot.do_moves()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now we can run the bot with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;python run.py&lt;/code&gt;, making sure that the game window is also visible on the screen. Here’s a demo video:&lt;/p&gt;

&lt;div style=&quot;position:relative;height:0;padding-bottom:56.25%&quot;&gt;&lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/Z8l1TRdUUSM?ecver=2&quot; width=&quot;640&quot; height=&quot;360&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture&quot; style=&quot;position:absolute;width:100%;height:100%;left:0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;

&lt;p&gt;You can see that the bot needs human help to get through the interfaces - on its own it only solves the puzzles themselves.
Even then, it is able to solve the first 4 basic puzzles, and only by chance solves the 5th one, as that level contains sum constraints - right now the bot cannot handle sum constraints in vision. So it does fail on the 6th puzzle.&lt;/p&gt;

&lt;p&gt;Full code mentioned in this post can be found on Github: &lt;a href=&quot;https://github.com/flakas/puzlogic-bot/blob/ef3922837b417d138e73f2f3ed783f379cfdb029/puzbot/bot.py&quot;&gt;flakas/puzlogic-bot - bot.py&lt;/a&gt;, &lt;a href=&quot;https://github.com/flakas/puzlogic-bot/blob/28a91cf8256f563273860bafe93645be89c36fd3/puzbot/controls.py&quot;&gt;flakas/puzlogic-bot - controls.py&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are interested in the full project, you can also find it on Github: &lt;a href=&quot;https://github.com/flakas/puzlogic-bot&quot;&gt;flakas/puzlogic-bot&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Sat, 10 Nov 2018 00:00:00 +0200</pubDate>
        <link>https://www.tautvidas.com/blog/2018/11/adding-controls-and-bot-logic-part-3-of-solving-puzlogic-with-python-and-opencv/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2018/11/adding-controls-and-bot-logic-part-3-of-solving-puzlogic-with-python-and-opencv/</guid>
        
        <category>opencv</category>
        
        <category>python</category>
        
        <category>game</category>
        
        <category>puzlogic</category>
        
        
        <category>Projects</category>
        
      </item>
    
      <item>
        <title>Adding basic vision - Part 2 of Solving Puzlogic puzzle game with Python and OpenCV</title>
        <description>&lt;p&gt;In the &lt;a href=&quot;https://www.tautvidas.com/blog/blog/2018/09/solving-puzlogic-puzzle-game-with-python-and-opencv-part-1/&quot;&gt;previous post&lt;/a&gt; we looked into building a basic solver of the &lt;a href=&quot;https://www.kongregate.com/games/ejbarreto/puzlogic?haref=HP_HNG_puzlogic&quot;&gt;Puzlogic puzzle&lt;/a&gt;. Now let’s take a look at creating the Vision component of the bot.&lt;/p&gt;

&lt;p&gt;The vision component needs to be able to:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Identify the game board, which cells are already filled in, which ones may have pieces placed into them;&lt;/li&gt;
  &lt;li&gt;Identify the set of available pieces a level provides;&lt;/li&gt;
  &lt;li&gt;Identify values of the available pieces and filled-in game cells.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;planning&quot;&gt;Planning&lt;/h3&gt;

&lt;p&gt;As before, we’ll use &lt;a href=&quot;https://opencv.org/&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;OpenCV&lt;/code&gt;&lt;/a&gt; for vision, &lt;a href=&quot;https://pypi.org/project/mss/&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mss&lt;/code&gt;&lt;/a&gt; for taking screenshots of the window, and &lt;a href=&quot;https://pypi.org/project/Pillow/&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Pillow&lt;/code&gt;&lt;/a&gt; (fork of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PIL&lt;/code&gt;) to convert screenshots into OpenCV compatible images.
For character recognition of piece values we’ll use &lt;a href=&quot;https://pypi.org/project/pytesseract/&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Pytesseract&lt;/code&gt;&lt;/a&gt; bindings for &lt;a href=&quot;https://github.com/tesseract-ocr/tesseract&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Tesseract&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’ll implement &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get_game_board()&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get_cells()&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get_pieces()&lt;/code&gt; public methods, as well as a way to OCR piece values.&lt;/p&gt;

&lt;h4 id=&quot;taking-screenshots&quot;&gt;Taking screenshots&lt;/h4&gt;

&lt;p&gt;Let’s start by building an easier way to capture screenshots of the window.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import cv2
import numpy as np
import time
import pytesseract
from PIL import Image
from mss import mss
from collections import namedtuple

class ScreenshotSource:
    def __init__(self):
        self.monitor = {'top': 0, 'left': 0, 'width': 1920, 'height': 1080}
        self.screen = mss()
        self.image = None

    def get(self):
        if self.image is None:
            self.image = self.refresh()

        return self.image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here I’m hardcoding the monitor’s dimensions to 1920x1080 - the common Full HD screen. It would be better to pass it in as a constructor argument, but for now that’ll do.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get&lt;/code&gt; method returns the screenshot for further processing. It takes a screenshot of the full screen, including OS toolbars, the browser window and, hopefully, the game window.
The image itself gets cached for improved performance until we actually need to refresh the frame.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def refresh(self):
    source_image = self.screen.grab(self.monitor)
    rgb_image = Image.frombytes('RGB', source_image.size, source_image.rgb)
    rgb_image = np.array(rgb_image)
    bgr_image = self.convert_rgb_to_bgr(rgb_image)

    return bgr_image

def convert_rgb_to_bgr(self, img):
    return img[:, :, ::-1]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We take a screenshot with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mss&lt;/code&gt;, convert the binary output to RGB with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PIL&lt;/code&gt;, then convert the RGB (Red, Green, Blue) image into BGR (Blue, Green, Red) color space.
This is important, as &lt;a href=&quot;https://www.learnopencv.com/why-does-opencv-use-bgr-color-format/&quot;&gt;OpenCV internally works with BGR images&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;RGB to BGR conversion works by keeping the X and Y coordinates the same (the first two slice arguments), but reversing the order of the third dimension, turning RGB values into BGR.&lt;/p&gt;
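&lt;p&gt;A pure-Python sketch of the same channel reversal may make it clearer - NumPy’s &lt;code&gt;img[:, :, ::-1]&lt;/code&gt; does the equivalent, just vectorized:&lt;/p&gt;

```python
# A tiny 1x2 "image": one row of two pixels, channels in RGB order
rgb = [[[255, 0, 0], [0, 0, 255]]]  # a red pixel and a blue pixel

# Reverse the channel axis of every pixel, keeping x and y intact
bgr = [[list(reversed(pixel)) for pixel in row] for row in rgb]

assert bgr == [[[0, 0, 255], [255, 0, 0]]]  # same pixels, channels now in BGR order
```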

&lt;h4 id=&quot;finding-game-area&quot;&gt;Finding game area&lt;/h4&gt;

&lt;p&gt;Now that we have the screenshot in the bot’s memory, we can start by identifying where within the window the game area is.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;class Vision:
    def __init__(self, source):
        self.source = source

    def get_game_board(self):
        &quot;&quot;&quot; Detects the game window area within a computer screen &quot;&quot;&quot;

        screen_image = self.source.get()
        original_screen_image = screen_image.copy()
        grayscale = cv2.cvtColor(screen_image, cv2.COLOR_BGR2GRAY)

        # Find black background around the game screen
        ret, mask = cv2.threshold(grayscale, 1, 255, cv2.THRESH_BINARY)
        binary_grayscale = cv2.bitwise_not(mask)

        # Eliminate noise and smaller elements
        kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        dilated = cv2.dilate(binary_grayscale, kernel, iterations=1)

        _, contours, _ = cv2.findContours(dilated, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

        Board = namedtuple('Board', ['x', 'y', 'w', 'h', 'screen'])

        for contour in contours:
            # get rectangle bounding contour
            [x, y, w, h] = cv2.boundingRect(contour)

            # Discard small pieces, we're looking for a game window roughly 800x600
            if w &amp;lt; 700 or h &amp;lt; 500 or w &amp;gt; 800:
                continue

            cropped = original_screen_image[y:y+h, x:x+w]

            return Board(x, y, w, h, cropped)

        return False
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The basic idea is to turn the image into binary grayscale for easier management (pure white or pure black, turns the 3D array into 2D one), run a dilation kernel on the full image to eliminate smaller elements and noise (as that increases the number of contours), and then look for a rectangular contour that is close in size to the target game area (800x600 px).&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;We start with the original image.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/1-original.png&quot; alt=&quot;Source image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Turn the image to grayscale color space.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/2-grayscale.png&quot; alt=&quot;Grayscale&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Further simplify the image to binary grayscale, doing so eliminates most of the tiny details.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/3-binary-grayscale.png&quot; alt=&quot;Binary grayscale&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Dilation further decreases the size of small details. Now we can start looking for contours.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/4-dilated.png&quot; alt=&quot;Dilated&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;As you can see, quite a few contours are found - that’s why I tried to simplify the image as much as possible.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/5-all-contours-dilated.png&quot; alt=&quot;All contours on dilated image&quot; /&gt;
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/6-all-contours-original.png&quot; alt=&quot;All contours on original image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Further filtering by the bounding box’s dimensions allows us to identify the game screen, as pictured above.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/7-target-contour-original.png&quot; alt=&quot;Game screen marked on the original image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;And here it is cropped to a usable image for further processing:
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/game-board/8-game-screen.png&quot; alt=&quot;Cropped game screen&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final result for this image is: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Board(x=391, y=255, w=800, h=600, screen=np.array(..., dtype=uint8))&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Lastly, for future processing we take only the area that actually is the game screen. We can ignore the rest of the screen for vision purposes, but we still have to keep in mind the offsets of the game area so we can position mouse movements correctly later.&lt;/p&gt;

&lt;h4 id=&quot;finding-all-cells&quot;&gt;Finding all cells&lt;/h4&gt;

&lt;p&gt;Now that we have the game area as a cropped image, we can start looking for game cells within it.
Both the game board cells (the area where game pieces should be placed) and the pieces themselves are squares of a similar size, just colored slightly differently.
So we can write the same method to look for all cells, and then separate target cells from game pieces.&lt;/p&gt;

&lt;p&gt;The pattern is similar:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Convert image to binary grayscale&lt;/li&gt;
  &lt;li&gt;Dilate to eliminate some of the noise&lt;/li&gt;
  &lt;li&gt;Find all contours, filter them to ones roughly matching the target size&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_visible_cells(self):
    board = self.get_game_board()

    grayscale = cv2.cvtColor(board.screen, cv2.COLOR_BGR2GRAY)

    ret, mask = cv2.threshold(grayscale, 100, 255, cv2.THRESH_BINARY)
    binary_grayscale = cv2.bitwise_not(mask)

    # Erase smaller artifacts on screen, hopefully leaving only
    # larger contours to detect.
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    dilated = cv2.dilate(binary_grayscale, kernel, iterations=1)

    _, contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    Cell = namedtuple('Cell', ['x', 'y', 'w', 'h', 'content'])

    # [x, y, w, h, img]
    bounding_boxes = map(lambda c: list(cv2.boundingRect(c)), contours)
    candidates = filter(lambda b: 40 &amp;lt; b[2] &amp;lt; 60 and 40 &amp;lt; b[3] &amp;lt; 60, bounding_boxes)
    cells = map(lambda c: Cell(c[0], c[1], c[2], c[3], self._recognize_number(board.screen[c[1]:c[1]+c[3], c[0]:c[0]+c[2]])), candidates)
    return list(cells)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I’ve set contour sizes for cells to be bigger than 40px, but smaller than 60px. It was done mostly by trial and error.
I think the actual size of the game piece is around 48px, but the game cell is slightly bigger, and we’ve done some dilation as well.&lt;/p&gt;

&lt;p&gt;To calculate &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cells&lt;/code&gt;, we crop the source image based on coordinates of the contour’s bounding box, taking the box’s width and height into account.
Note that here I’m also using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;self._recognize_number(...)&lt;/code&gt; call, which does OCR of cell’s contents, turning it into an actual number, or returning &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;False&lt;/code&gt;. We’ll get to it in a minute.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;It all starts with the full game screen.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/cells/1-original.png&quot; alt=&quot;Original image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;This is how it looks after turning the image into binary grayscale with some dilation. You can see that most of the details have been all but eliminated.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/cells/2-binary-grayscale-dilated.png&quot; alt=&quot;Dilated binary grayscale image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;We find all contours and filter them based on expected contour dimensions. After some trial and error that leaves only the cells.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/cells/3-marked-on-dilated.png&quot; alt=&quot;Marked game cells on dilated image&quot; /&gt;
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/cells/4-marked-on-original.png&quot; alt=&quot;Marked game cells on original image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Method’s result for this picture is:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[
  Cell(x=401, y=517, w=46, h=46, content=2),
  Cell(x=353, y=517, w=46, h=46, content=1),
  Cell(x=353, y=277, w=46, h=46, content=2),
  Cell(x=401, y=229, w=46, h=46, content=False),
  Cell(x=353, y=229, w=46, h=46, content=False),
  Cell(x=401, y=181, w=46, h=46, content=1)
]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can see that it was able to find all game board cells as well as all game pieces.&lt;/p&gt;

&lt;h4 id=&quot;finding-game-pieces&quot;&gt;Finding game pieces&lt;/h4&gt;

&lt;p&gt;Now that we have all the cells identified, we can use this information to figure out which ones are movable game pieces.&lt;/p&gt;

&lt;p&gt;Different maps may provide a different number of pieces in different arrangements: for example, two rows with a different number of pieces in each, two rows with the same number of pieces, or a single row of pieces.&lt;/p&gt;

&lt;p&gt;The common theme I saw in the first few levels was that there were at most 2 rows of pieces, so I’ll assume the same holds for all levels in the game. The other nice thing is that the distance between the lowest game board cell and the highest piece is fairly large, which we can use to differentiate board cells from pieces.&lt;/p&gt;

&lt;p&gt;Based on that information, we can find the lowest cell in the set of cells and assume that it is actually a game piece. Furthermore, I’ll assume that anything vertically within 3 cell heights of the lowest cell is also a piece.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_pieces(self):
    cells = self.get_visible_cells()
    lowest_cell = max(cells, key=lambda c: c.y)

    # Expect available pieces to be lower than the lowest row of game board cells
    return list(filter(lambda c: abs(lowest_cell.y - c.y) &amp;lt; lowest_cell.h*3, cells))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With the original image this method will return:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[
  Cell(x=401, y=517, w=46, h=46, content=2),
  Cell(x=353, y=517, w=46, h=46, content=1)
]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;finding-game-board-cells&quot;&gt;Finding game board cells&lt;/h4&gt;

&lt;p&gt;Now that we have identified which cells are actually movable pieces, any other cell must then be a game board cell!&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def get_cells(self):
    cells = self.get_visible_cells()
    pieces = self.get_pieces()

    return list(set(cells) - set(pieces))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The result of this method would be:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[
  Cell(x=401, y=181, w=46, h=46, content=1),
  Cell(x=353, y=229, w=46, h=46, content=False),
  Cell(x=353, y=277, w=46, h=46, content=2),
  Cell(x=401, y=229, w=46, h=46, content=False)
]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;recognizing-digits-within-cells&quot;&gt;Recognizing digits within cells&lt;/h4&gt;

&lt;p&gt;Above we identified cells which may be either game board cells or game pieces.
So a cell may or may not contain a number.&lt;/p&gt;

&lt;p&gt;The procedure is similar to what we’ve done before, but there are some differences.&lt;/p&gt;

&lt;p&gt;For OCR we will use Pytesseract bindings for Google’s Tesseract.
Tesseract works best when we feed in images containing just black text on white background with as little noise and unnecessary elements as possible.&lt;/p&gt;

&lt;p&gt;Because of that, the first step is to remove the black border that some cells may have, as I found OCR recognizes the digit more reliably without it.&lt;/p&gt;

&lt;p&gt;Next the image needs to be converted to binary grayscale, as the cell may contain other colors (like the purple background of a piece, or the colored backgrounds of cells).
At this point we should have either black text on a white background, or white text on a black background.&lt;/p&gt;

&lt;p&gt;The previous steps may have left the edges of digits slightly pixelated, so blurring the image makes the digit more solid and slightly bigger, giving OCR a better chance of recognizing it.&lt;/p&gt;

&lt;p&gt;As the last step before feeding the image to OCR, we need to make sure the image contains black text on a white background.
I tried feeding in white text on a black background, but it didn’t work well, and at this point the image may be either one.&lt;/p&gt;

&lt;p&gt;To make sure the image is actually black text on a white background, we check the color of the first pixel. If it is white (has a value of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;255&lt;/code&gt;), I assume that it’s black text on a white background.
But if the pixel is black (has a value of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0&lt;/code&gt;), I assume it’s white text on a black background, so we have to invert the image’s colors.&lt;/p&gt;

&lt;p&gt;Lastly we feed in the image to pytesseract and attempt to convert the result into an integer. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--psm 10&lt;/code&gt; argument for Tesseract instructs that the image should contain just a single character inside.
In other modes I found that Tesseract returned mostly unusable garbage.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def _recognize_number(self, candidate_tile_image):
    &quot;&quot;&quot; Attempts to OCR the number within a game tile image &quot;&quot;&quot;

    borderless_image = candidate_tile_image[5:-5, 5:-5]
    grayscale = cv2.cvtColor(borderless_image, cv2.COLOR_BGR2GRAY)

    ret, mask = cv2.threshold(grayscale, 100, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    binary_grayscale = cv2.bitwise_not(mask)

    # Blur to help text show up better for OCR
    ocr_image = cv2.medianBlur(binary_grayscale, 3)

    if ocr_image[0, 0] == 0:
        # OCR needs black text on white background
        black_text_on_white_background = cv2.bitwise_not(ocr_image)
        ocr_image = black_text_on_white_background

    # Use single-character segmentation mode for Tesseract
    character = pytesseract.image_to_string(ocr_image, config='--psm 10')
    try:
        return int(character)
    except ValueError:
        return False
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;We start with the image of an isolated cell of the original game screen
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/digit-ocr/1-original.png&quot; alt=&quot;Original image&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;This is how the cell looks after turning it into a binary grayscale image and removing its borders. Note that the background disappears as well.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/digit-ocr/2-grayscale.png&quot; alt=&quot;Binary grayscale&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Here’s how the image looks after slight blurring. The digit looks slightly bolder, though for this particular image blurring may not have been strictly necessary.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/digit-ocr/3-blur.png&quot; alt=&quot;After blurring&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Because the image was white text on black background, the last step was to invert the colors and feed the resulting black-on-white image into Tesseract for OCR.
  &lt;img src=&quot;https://www.tautvidas.com/blog/images/posts/puzlogic-bot-2018/part-2/digit-ocr/4-black-on-white.png&quot; alt=&quot;After color inversion&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the final result of the method is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1&lt;/code&gt;: a parsed integer digit.&lt;/p&gt;

&lt;h4 id=&quot;performance-tuning&quot;&gt;Performance tuning&lt;/h4&gt;

&lt;p&gt;Now that we have most of the basic vision done, we can run some tests on the actual game, and it does work.
However, when performing such commands in rapid succession, it may take around 10 seconds on my machine for all calls to respond.&lt;/p&gt;

&lt;p&gt;That’s mostly because some calls rely on others, so there’s quite a bit of unnecessary re-processing happening.
To improve the situation we’ll add basic caching: cache the result of each class method for the given frame, and clear the cache on the next frame.&lt;/p&gt;

&lt;p&gt;For caching I’ve used decorators: set a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cache&lt;/code&gt; dict on the object, and add return values to it as computations happen.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def cache_until_refresh(func):
    def wrapper(self):
        if func in self.cache:
            return self.cache[func]

        result = func(self)
        self.cache[func] = result
        return result

    return wrapper

class Vision:
    def __init__(self, source):
        self.source = source
        self.cache = {}

    def refresh(self):
        self.cache = {}
        self.source.refresh()

    @cache_until_refresh
    def get_game_board(self):
        # ...

    @cache_until_refresh
    def get_pieces(self):
        # ...

    @cache_until_refresh
    def get_cells(self):
        # ...

    @cache_until_refresh
    def get_visible_cells(self):
        # ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now, instead of having to wait around 10 seconds for full results, responses come in under 2 seconds.&lt;/p&gt;
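&lt;p&gt;To sanity-check the caching behavior, here is a small self-contained demo of the same decorator with a stub class (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;FakeVision&lt;/code&gt; and its call counter are illustrative, not part of the bot): the decorated method’s body runs once per frame, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;refresh()&lt;/code&gt; forces recomputation.&lt;/p&gt;

```python
def cache_until_refresh(func):
    # Same idea as the decorator above: memoize per frame, keyed by function
    def wrapper(self):
        if func in self.cache:
            return self.cache[func]
        result = func(self)
        self.cache[func] = result
        return result
    return wrapper

class FakeVision:
    """Stub standing in for Vision; the counter tracks expensive calls."""
    def __init__(self):
        self.cache = {}
        self.calls = 0

    def refresh(self):
        # New frame: throw away everything computed for the old one
        self.cache = {}

    @cache_until_refresh
    def get_visible_cells(self):
        self.calls += 1  # stands in for the slow OpenCV processing
        return ['cell']

vision = FakeVision()
vision.get_visible_cells()
vision.get_visible_cells()  # served from cache, no recomputation
vision.refresh()
vision.get_visible_cells()  # recomputed for the new frame
```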

&lt;h3 id=&quot;closing-thoughts&quot;&gt;Closing thoughts&lt;/h3&gt;

&lt;p&gt;So now we have a basic way to identify the game area, which pieces are available, where the target cells for game pieces are, and which number each cell contains.&lt;/p&gt;

&lt;p&gt;Of course, the methods used above are quite brittle, at least as implemented here. Changes in the UI, scaling, or the addition of more complex or simply different elements could easily break the bot.
But you can also see how powerful these simple-once-abstracted techniques are: convert to grayscale, dilate/blur, find contours/bounding boxes, and one can identify objects without providing a reference picture of them. This is a more sophisticated approach than the &lt;a href=&quot;https://www.tautvidas.com/blog/blog/2018/02/automating-basic-tasks-in-games-with-opencv-and-python/&quot;&gt;image pattern matching in the Burrito Bison bot&lt;/a&gt; I wrote about previously.&lt;/p&gt;

&lt;p&gt;If you are interested in the full code mentioned in this post, you can find it on &lt;a href=&quot;https://github.com/flakas/puzlogic-bot/blob/ef3922837b417d138e73f2f3ed783f379cfdb029/puzbot/vision.py&quot;&gt;Github - vision.py&lt;/a&gt;.
The full project’s code, possibly in a more advanced state, can be found on &lt;a href=&quot;https://github.com/flakas/puzlogic-bot&quot;&gt;Github - flakas/puzlogic-bot&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the next post we’ll look into adding mouse controls and tying it all together to get the bot to actually solve several puzzles. A demo video will follow too.&lt;/p&gt;
</description>
        <pubDate>Sat, 10 Nov 2018 00:00:00 +0200</pubDate>
        <link>https://www.tautvidas.com/blog/2018/11/adding-basic-vision-part-2-of-solving-puzlogic-with-python-and-opencv/</link>
        <guid isPermaLink="true">https://www.tautvidas.com/blog/2018/11/adding-basic-vision-part-2-of-solving-puzlogic-with-python-and-opencv/</guid>
        
        <category>opencv</category>
        
        <category>python</category>
        
        <category>game</category>
        
        <category>puzlogic</category>
        
        
        <category>Projects</category>
        
      </item>
    
  </channel>
</rss>
