fn/🌐+e or control+cmd+space) on Firefox: instead of opening, the emoji picker briefly flashes and disappears:
If you use the “Edit… Emoji & Symbols” menu, the picker works - but it’s annoying to reach for the mouse whenever you need an emoji or special character! I thought such an inconvenient bug would be fixed quickly in a minor Firefox or macOS update, but months passed and the bug was still there.
Between annoyed and curious, I dug into the source code a bit and wrote a patch that fixes it, and also a secondary problem with the fn/🌐+e shortcut (introduced in Monterey as a replacement/alternative for control+cmd+space): it works sometimes, but when it does, it also writes the letter “e” where the cursor is, which is equally irksome.
For reasons that I will explain below, Mozilla did not accept the fix. They are working on another solution, but it will take a while to be released. Since many people have the problem right now, I decided to share some details about my fix here, along with instructions for applying the patch to the official Firefox source code or downloading my patched version - which I rebranded “EmojiFox” to avoid confusion and respect Mozilla’s trademarks/license.
(skip to the next section if you are not interested in the programming details and just want a workaround for the bug)
One thing that caught my attention was that this bug also happened with Chrome after the macOS Sonoma update. They quickly produced a fix - a surprisingly short one:
if (is_a_system_shortcut_event) {
  [[NSApp mainMenu] performKeyEquivalent:theEvent];
+
+  // Behavior changed in macOS Sonoma - now it's important we early-out
+  // rather than allow the code to reach
+  // _hostHelper->ForwardKeyboardEventWithCommands(). Go with the existing
+  // behavior for prior versions because we know it works for them.
+  if (base::mac::MacOSVersion() >= 14'00'00) {
+    _currentKeyDownCode.reset();
+    _host->EndKeyboardEvent();
+    return;
+  }
} else {
  [self interpretKeyEvents:@[ theEvent ]];
}
It gives some hints on the root cause: the app is forwarding the system shortcut event instead of stopping it. That caused no harm before Sonoma, but now it results in the shortcut triggering twice, and the second trigger closes the picker. You can verify this by pressing the shortcut twice in quick succession in any other (non-buggy) app: it causes the exact behavior of the bug!
With that in mind, I downloaded the Firefox source code and started poking at it. It’s quite a large (but well-documented) codebase, but I didn’t need to understand it all - I just had to find a suitable place to stop the event processing. Once I found one, I produced a proof-of-concept, which crudely detected the key combinations and abruptly stopped processing:
if (((anEvent.modifierFlags & NSEventModifierFlagControl) &&
(anEvent.modifierFlags & NSEventModifierFlagCommand) &&
anEvent.keyCode == 49) ||
((anEvent.modifierFlags & NSEventModifierFlagFunction) &&
anEvent.keyCode == 14)) {
return;
}
break;
That worked, but wouldn’t be very useful, because users can redefine the emoji keyboard shortcut. The only way to figure out whether an event is the shortcut I want to prevent is to go through the menus and find an item that matches the event keys and triggers the action of opening the emoji picker. Chrome already had code for that, so I had to add something equivalent to Firefox:
// Determines whether the key event matches the shortcut assigned to the Emoji &
// Symbols menu item, so we can avoid dispatching it (and closing the picker).
//
// It works by looking for a second-level menu item that triggers the picker AND
// matches the shortcut, skipping any top-level menus that don't contain picker
// triggers (avoiding potentially long menus such as Bookmarks).
//
// It handles fn-E (the standard shortcut), ^⌘Space (which appears in a hidden
// menu when the standard is not redefined) and custom shortcuts (created via
// System Settings > Keyboard > Keyboard Shortcuts > App Shortcuts), save for a
// few complex key combos that send incorrect modifiers.
static bool IsKeyEventEmojiAndSymbols(NSEvent* event, NSMenu* menu) {
SEL targetAction = @selector(orderFrontCharacterPalette:);
for (NSMenuItem* topLevelItem in menu.itemArray) {
if (topLevelItem.hasSubmenu &&
[topLevelItem.submenu indexOfItemWithTarget:nil
andAction:targetAction] != -1) {
for (NSMenuItem* item in topLevelItem.submenu.itemArray) {
if (item.action == targetAction) {
NSString* itemCharacters = [[item keyEquivalent] lowercaseString];
NSUInteger itemModifiers = [item keyEquivalentModifierMask];
NSString* eventCharacters =
[[event charactersIgnoringModifiers] lowercaseString];
NSUInteger eventModifiers =
[event modifierFlags] &
NSEventModifierFlagDeviceIndependentFlagsMask;
if ([itemCharacters isEqualToString:eventCharacters] &&
itemModifiers == eventModifiers) {
return true;
}
}
}
}
}
return false;
}
(a nice bonus of doing that was discovering how Apple managed to replace control+cmd+space with fn/🌐+e on the menu, yet the old shortcut still worked: they keep two menu items for “Emoji & Symbols”, but the one linked to the control+cmd+space shortcut is hidden)
With that in place, the fix is as simple as Chrome’s:
// Early exit if Emoji & Symbols shortcut is pressed; fixes bug 1855346
// (first seen on macOS Sonoma) and bug 1833923 (first seen on Monterey)
if ((nsCocoaFeatures::OnMontereyOrLater()) &&
IsKeyEventEmojiAndSymbols(aNativeEvent, [NSApp mainMenu])) {
return currentKeyEvent->IsDefaultPrevented();
}
It turns out that doing this in the right place fixes not only the “flashing” bug, but the “e” one as well, because the latter is also caused by the event being processed twice - but fn/🌐 being such a special key on the Mac, sometimes the second occurrence loses the fn, acting as if the e key were pressed right after fn/🌐+e. That’s why I only apply the check on Monterey or later (after confirming that the “e” bug was introduced in Monterey, and the “flashing” one in Sonoma).
Of course, it took me quite some time and learning to get there, and the true heroes are the Mozilla developers who kindly gave me feedback and pointed me in the right direction at every step. In the end, we had a patch that fully fixed both problems, addressing all performance and compatibility concerns with the menu traversal.
However, those same Mozilla developers reasoned it would be better to prevent the event from trickling down at all, instead of catching it in the TextInputHandler (and to catch any system shortcut events, not just the emoji picker one, which would avoid the need to traverse menus altogether), and wrote a different patch in that direction.
I was happy with that: I learned a lot, helped raise awareness and research the cleanest solution, and - most importantly - the bug would soon be fixed for good. But a couple of months passed, Firefox got a few major version updates, yet the keyboard shortcut was still broken!
So I asked around, and it seems the cleaner patch fixes the “flashing” issue, but not the “e” bug. They are actively working on a second patch for that and will release both together - which is technically the best approach, but it will take a while to reach Firefox + Mac users.
If you are affected by this bug, you can:
Wait for the official fix. This is the simplest and safest option, but considering the release calendar, my best guess is that a fix won’t come before Firefox 125 (due April 16).
Apply my patch to the Firefox source code and build it. This is also very safe: you don’t need to trust anyone but Mozilla, since you are using their code (which you already trust as a Firefox user) and my patch (which is public and you can review). But it requires familiarity with the command line and a lot of time to compile the browser.
Download and install EmojiFox, that is, Firefox Nightly/Unofficial with my patch applied. The downsides: you have to trust me and it won’t auto-update - you should throw it away as soon as the official fix is released.
The first step is to download and build Firefox on macOS by following the official instructions. Just keep in mind that:
Before running ./mach build, you should add ac_add_options --enable-release to your .mozconfig file (and comment out any debug-related options - pretty much anything but this one).
./mach build takes a few hours, even on a beefy machine.
Once you are running Nightly, confirm the bug is still there, then apply the patch with:
curl -L https://phabricator.services.mozilla.com/D193328?download=true | patch
and run ./mach build again (don’t worry, it will be much faster, since it only recompiles the files that changed), then ./mach run again. You should see the bug fixed now:
You are not done yet - even though there is a Nightly.app that you can copy, it will be bound to assets that will disappear once you clean up your build environment. So now run:
./mach package
That will generate a .dmg file in your obj-x86_64-apple-darwin23.1.0/dist folder (or obj-arm64/dist if you are using an Apple Silicon Mac). That .dmg contains a Nightly.app that you can copy to your Applications folder and use as your main browser.
I have been using Firefox 121 with this patch since December, and recently re-applied it to the nightly build of Firefox 124. You can download this patched version (which I rebranded as “EmojiFox”) as long as you keep in mind that:
Open the file and drag EmojiFox to your Applications folder, as usual. But when you run the app, it will say it’s from an unidentified developer (I currently don’t do enough macOS development to justify the yearly USD 99 for an Apple Developer Program membership that would allow me to sign it).
The trick here is to hold control while clicking the app icon, then select “Open” from the context menu. It will also show a warning, but now there will be an Open button that will open the app. You only need to do this once - from then on, it will open normally.
Verify that both the control+cmd+space and fn/🌐+e shortcuts work as expected.
Regardless of whether you built or downloaded your fixed browser, you will want to bring your existing bookmarks, history, extensions, etc. to it, and the easiest way to do that is to configure it to use your existing profile (instead of the one it just created).
To do so, ensure the original Firefox is closed, and type about:profiles in the new browser’s address bar. Click Set as default profile on your Firefox profile, restart the browser, and you should see your bookmarks, history, extensions, etc.
(if you have multiple profiles and don’t know which is the right one, just try Launch profile in new browser on any candidates)
The only issue I had with this build so far: unlocking the 1Password extension doesn’t work with Touch ID (I suppose because it isn’t an official release); you can still unlock with your password, and/or use Touch ID in the 1Password application. It’s a minor trade-off for people who, like myself, type emoji much more often than they log in to websites.
If you try any of these solutions, please let me know in the comments below whether it worked for you or not. And let’s 🤞 for Mozilla to release the official fix soon (they are working on it) so we don’t need these workarounds anymore!
In this age of controversial social media platforms, having a blog is one of the few remaining opportunities to keep ownership over your content. There are several good solutions around to publish and host one, but Jekyll and GitHub Pages are a great (and free) combination for people like myself who are happy hacking a little bit - except for not providing a comment system out of the box.
For years, I filled that gap with Disqus - a service that hosts your comments in exchange for a bit of advertising space. It was great at first, but over time the ads became heavier, and users were pushed to create accounts and abusively tracked. Moreover, hosting comments externally affects search engine indexing, and over time all of this caused people to comment less and less, so I decided to bring the comments back to my blog.
A comment system isn’t a very complicated app, but it would be another database that I’d have to care for, and a departure from Jekyll’s static generation model that served me so well. The ideal solution would be to store comments in the same place I store posts: a trusty GitHub repository. Jekyll can read data files to show the comments, and all I needed was to host an app somewhere that would create those files when a new comment is written.
I almost coded that app myself, but Eduardo Bouças wrote and kindly shared Staticman, which does precisely that. Sure, I still had to host and configure it, adapt the blog to send it the comments (and read them from the repository files), and migrate the old comments from Disqus. These things combined took me a couple of days, so I thought I’d share the process here.
It’s a good idea to first familiarize yourself with how Staticman works, but the gist is that your blog’s “new comment” form POSTs to Staticman (instead of to the blog itself); Staticman has a GitHub API key that allows it to add a data file containing the comment to your website’s repository. That triggers a rebuild (in the same way a new blog post would), and Jekyll shows the new comment.
If you want to moderate the comments (like I do), it can create a pull request instead of merging the data directly. You review the pull request and merge it to approve, or discard to reject - a very familiar environment for most programmers these days. It supports other git providers such as GitLab, but I’ll focus on GitHub.
You will need to host it somewhere. It’s a lightweight, database-less Node.js app, so there are lots of options and not a lot of configuration involved. My choice is a DigitalOcean droplet (you can check my recent blog post on cost-effective hosting for details).
The official instructions are clear once you figure out the moving parts. Your server will contain two RSA keys: a GitHub API key (so the server can act on your behalf), and a private key that (I suppose) is used to store local secrets.
A few gotchas I ran into:
There are two configuration files: the API configuration (config.production.json) and the site configuration (staticman.yml). The first contains secrets such as API keys and should only reside on your Staticman server; the other goes in your blog’s repository, telling Staticman what to do when it receives a comment, and can be public (here is mine).
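For reference, here is a minimal sketch of what that site configuration can look like - the keys come from the Staticman docs, but the specific values (branch name, fields, path) are assumptions you would adapt to your own blog:

```yaml
comments:
  allowedFields: ["name", "email", "message", "replying_to_uid"]
  requiredFields: ["name", "message"]
  branch: "main"                          # branch Staticman commits/PRs against
  commitMessage: "New comment in {options.slug}"
  filename: "comment-{@timestamp}"        # {@timestamp} is ms since the epoch
  format: "yaml"
  moderation: true                        # open a PR instead of pushing directly
  path: "_data/comments/{options.slug}"   # one folder of comments per post
  transforms:
    email: md5                            # store a Gravatar-ready hash, not the address
```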
The docs currently state that the GitHub Application ID key in config.production.json is githubAppId; actually, it’s gitHubAppID.
Both RSA keys were triggering a node-rsa error. In order to fix it, I changed the code (here and here).
Thanks to GitHub’s support for Let’s Encrypt, my blog runs over HTTPS (TLS), which means it cannot post data to a plain HTTP server. My go-to solution for those cases is to run the application behind nginx, configuring it to terminate the secure connection using the certificates that Let’s Encrypt provides for free.
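The relevant nginx piece boils down to a server block like this sketch - the hostname, local port, and certificate paths are assumptions (Certbot writes the certificates under /etc/letsencrypt/live by default):

```nginx
server {
  listen 443 ssl;
  server_name staticman.example.com;  # placeholder hostname

  # Certificates maintained by Certbot / Let's Encrypt
  ssl_certificate     /etc/letsencrypt/live/staticman.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/staticman.example.com/privkey.pem;

  location / {
    proxy_pass http://127.0.0.1:8080;  # Staticman's local port (assumption)
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
  }
}
```

nginx terminates the TLS connection and forwards plain HTTP to the Node.js app, so Staticman itself never needs to know about certificates.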
If you use Ansible (or are comfortable reading Ansible files), here is the playbook that installs and configures Staticman and nginx, with Supervisor to keep the app running and Certbot to keep the certificates up to date.
At this point you should have a working Staticman server, so the next step is to add a form to your blog that sends the comment to it. The form should have the same fields that Staticman expects, and you can use JavaScript to send the data to the server and show the comment immediately after it’s created.
I based mine on a few examples I saw online, most notably this one. It uses jQuery to send the data to the server and show the comment - not my choice in 2024, but I already have legacy jQuery code on the blog anyway, so I rolled with it.
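For anyone avoiding jQuery, the same submission logic can be sketched with plain fetch - the host, repository coordinates and field names below are placeholders, and the URL shape follows Staticman’s v2 endpoint format:

```javascript
// Build the Staticman v2 endpoint and a form-encoded body for one comment.
// The URL shape is /v2/entry/{user}/{repo}/{branch}/{property}; all concrete
// names here (host, user, repo, etc.) are placeholders - adapt to your setup.
function buildCommentRequest(host, { user, repo, branch, property }, fields) {
  const url = `https://${host}/v2/entry/${user}/${repo}/${branch}/${property}`;
  const body = new URLSearchParams();
  for (const [name, value] of Object.entries(fields)) {
    body.append(`fields[${name}]`, value); // Staticman expects fields[...] keys
  }
  return { url, body };
}

// Usage sketch - in the browser you would then POST it:
//   fetch(url, { method: "POST", body })
//     .then((res) => (res.ok ? showThanks() : showError()));
const { url, body } = buildCommentRequest(
  "staticman.example.com",
  { user: "you", repo: "blog", branch: "main", property: "comments" },
  { name: "Ada", message: "Nice post!" }
);
```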
You will know it is working when a comment submission results in a pull request on your blog’s repository like this one. Merging it will add the comment to your blog’s _data directory, and the next step is to show it on the post’s page.
Again, I borrowed a lot from Avglinux’s example, fixing a couple of issues with the threaded replies and adjusting it to my blog’s style. I also replaced the Liquid strip_html filter with a custom one that sanitizes the input instead, so I can allow some HTML tags alongside the Markdown while still keeping the blog safe from JavaScript injection, cross-site scripting attacks and the like.
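My filter is a Ruby Liquid plugin (as Jekyll filters are), but the underlying idea is simple enough to sketch in JavaScript: escape everything, then re-allow an explicit list of harmless tags. The tag list here is an illustrative assumption, not my actual one:

```javascript
// Escape-then-allowlist sanitizer sketch. Because everything is escaped
// first, attributes and unknown tags (like <script>) can never survive.
const ALLOWED_TAGS = ["b", "i", "em", "strong", "code", "blockquote"];

function sanitize(html) {
  let out = html
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  for (const tag of ALLOWED_TAGS) {
    // Re-allow only bare <tag> and </tag>, with no attributes.
    out = out
      .replace(new RegExp(`&lt;${tag}&gt;`, "gi"), `<${tag}>`)
      .replace(new RegExp(`&lt;/${tag}&gt;`, "gi"), `</${tag}>`);
  }
  return out;
}
```

Running it on `<b>hi</b><script>alert(1)</script>` keeps the bold tag but leaves the script escaped as harmless text.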
This PR contains all the code mentioned above; feel free to peruse and copy any of it, ideally checking for newer versions as this post gets older.
With this in place, all that was left to do was to migrate the comments from Disqus.
Disqus allows you to export the comments to an XML file (documented here), but in order to import them anywhere else, a conversion is needed. I found a few recipes (1, 2, 3, 4) online, but none of them worked for me, so I threw together some JavaScript code that does the job:
You can just run it, making the needed adjustments for your staticman.yml configuration (e.g., if you changed the filename structure or added other fields that you want to import or generate), and put the generated comments directory under the _data directory in your blog’s repository, like I did here.
The code documents some of the shenanigans I found (odd terminology, invalid characters, etc.). It’s worth noting that not every bit of information needed by Staticman is available in the XML, so a few choices were made:
I kept the comment _id as its original Disqus ID (instead of generating a UUID, which would change the values on each migration run and require an extra lookup for comment replies). Doing so makes the replying_to_uid field name a bit odd, but it will correctly point to the _id of the comment being replied to, and Staticman is fine with that.
createdAt is an ISO 8601 date with second precision, which is easy to convert to the date Staticman field (by default, a Unix time), but the comment filenames are based on the timestamp in milliseconds. To improve uniqueness in the case of same-second comments, I filled the milliseconds using the _id (once again keeping successive migration runs idempotent).
My blog uses Gravatar to display user pictures (their chosen one, if they have a Gravatar account; a generated pattern otherwise) based on a hash of their e-mail. Unfortunately, Disqus doesn’t export users’ emails, so instead of leaving the field blank (which would give all users the same pattern), I hash the Disqus username, so the same user will always get the same pattern across the site.
As I said before, it took me a while to figure out all these pieces, but I’m happy with how it turned out: I own the comments (which I can keep hosting if I ever switch away from Jekyll), they are indexed by search engines, and I can moderate them in a familiar environment (pull requests).
There is the burden/cost of hosting the server, but I share it with other apps, so it’s effectively free for me. I have not (yet) set up email notifications for replies to users’ comments or a spam filter, but that can be done with Mailgun and Akismet - both of which have generous free tiers.
The only caveat is that Staticman doesn’t seem to be actively maintained, despite its numerous forks/users. That is a sign of maturity, but it also makes me wary of yet-undiscovered vulnerabilities. Still, with its minimal code (and thus attack surface), and with Dependabot on my fork warning me about vulnerabilities found in its dependencies, I think it’s worth the risk. If worst comes to worst, it can always be replaced by a custom solution, since the comments are no longer locked in a proprietary system - something I’ll never give up again.
Having a relatively short office closure week for the holidays led my wife and me to look for a sunny break from the Canadian winter, and we noticed that a lot of Canadians chose Cuba for that.
Between the proximity (it’s a 3-hour flight from Toronto), the relative affordability and our desire to understand the country beyond the usual left/right political narratives, we decided to give it a try. Here are some highlights of our experience and a few things we wish we knew before going.
Accommodations: Typical (that is, US-based) travel agency and homestay websites don’t list accommodations in Cuba. Expedia is a notable exception, so we booked and prepaid our stays through it. You usually have two options: all-inclusive resorts (usually close to a beach, where people stay through their whole vacation) and homestays (for those wanting to see the country and talk to the locals, like us). In any case, the websites ask an odd question about your reason for visiting Cuba (and none of the answers matches “tourism”) - that’s because they are required to do so by the US government, so just pick one of the options and move on.
Money: Visa/MasterCard don’t operate there, so you will use cash for everything. Restaurants, stores and experiences will accept US dollars, so bring those. USD 30 should cover three basic meals for one person, and USD 10 gets you around within the same city, so budgeting at least USD 50/person/day (with prepaid stays) is a good idea. Don’t buy pesos (CUP) - you would get the official rate (1 USD = 120 CUP), whereas purchases go for 200-250 CUP per USD. Change for your USD will usually come in CUP anyway - so bring small bills (5/10/20) and use the pesos from change on small purchases, like a coffee or pastry from a street vendor. Don’t count on converting back to USD or spending them at the airport, so spend or donate your pesos before leaving.
Food and medication: Be prepared to focus on local specialities such as the Ropa Vieja (shredded beef). You may find other food, but ingredients and spices will be scarce, so don’t expect sophistication. Bring some snack bars and biscuits if you have dietary needs or are a picky eater, and also any medication, seasoning (e.g.: sweeteners) and toiletries you can imagine yourself needing.
Clothing: Winter was mostly warm (above 21ºC with a single rainy day); summer must be 🔥, so pack accordingly. You may consider donating some of your clothes/shoes once you use them. Speaking of which…
Donations: Blame the embargo, the corruption, both or anything else; it doesn’t matter: fact is that Cuba is suffering severe shortages, and the locals will appreciate any donations you can bring (details below). Think dollar store daily essentials that would be hard to get in an isolated country (toiletries, clothing, reading glasses, pens and pencils, crosswords/coloring books, etc.) and fill the empty spaces in your luggage with those, if you can and want.
Media and Internet: Check below for details, but in short: before you go, get a VPN (to access sites and services that are blocked for Cuba) and plan for mobile internet access: you can either get a local SIM card at the airport or use roaming from your phone operator (but triple check the rates and limits and keep an eye while you are there). Of course, download offline maps on Google Maps and any other apps and media you may want there, and pre-book (and pre-pay) any tours and experiences you can before traveling.
Advance Travel Information (D’VIAJEROS): Not sure if it’s mandatory, but it’s easy: a day or two before the trip, go to the website, select your language and fill in the information on what you are (not) bringing. You get a PDF with a QR code that you print or save to your phone, and show it when asked at the airport. Unlike Canada’s ArriveCAN, each passenger needs to fill their own form and get their own QR code.
Visa: Any visitor needs a visa (often referred to as a “tourist card”) to enter the country, but it’s inexpensive (around USD 20) and airlines often provide one as part of the ticket purchase; check with yours. Air Transat gave us the card inside the plane (hints: bring a pen and avoid mistakes; you can’t cross anything out, and they will charge you for another form). Just fill in (twice) your name, passport number and date of birth, show it on your way in, don’t lose it, return it on your way out, and you are good.
Non-Americans / Non-Canadians: If you are not Canadian or American, and plan to visit the United States in the future, keep in mind that, as of January 2024, visiting Cuba prevents you from using the ESTA (“electronic visa”) program and you will need to apply for a regular US visa; so you may want to get your ESTA before visiting Cuba (or plan to get a US visa instead). This was a farewell gift from Trump that hopefully will change in the future, but keep it in mind for now.
Our arrival and return flights were in Varadero (VAR), which is a very popular destination for Canadians - quite a few get all-inclusive resorts and stay inside them for the entirety of their trips. That wasn’t really our thing - sure, we wanted a break from the Canadian winter, but we also wanted to see the country and talk to the locals, so we split our time between Varadero itself (for the beaches), Havana (the urbanized capital) and Boca de Camarioca (a smaller village closer to the Varadero airport).
The beaches in Varadero are magnificent and the water was warm and clear. We were lucky to have great weather during our stay - in fact, winter brings the sun down to levels that my caucasian skin can withstand. You can rent chairs for the day and/or drink coconut water for a few dollars, get lunch at a nearby restaurant or have it brought to you. If you want to just relax and enjoy the sun, that’s where you should go.
The airport in Varadero was functional, albeit minimalistic - they used inexpensive webcams to take our pictures, and we even had to lend one of the officers a pen. Customs asks few questions (fewer if you are Canadian), but we were randomly stopped after getting our luggage. We were asked about our itinerary (it helped to have all the dates and addresses at hand) and, after acknowledging that cannabis is legal in Canada and stating it was “not a judgement question”, they asked whether we had ever used it; we said no, and that was it.
Speaking Spanish (or, in our case, Portuñol) was quite helpful, but you can get around with English and some effort. We had a great recommendation for a taxi driver, with whom we pre-arranged pickup and drop-off at the airport and the transportation between our destinations according to distances. Going from Varadero to Havana and back typically costs around USD 100 for each trip (the price is for the car, regardless of the number of persons, but always ask). Getting from/to the airport will hover around USD 30.
Cuba is famous for the old-style cars used for tourists (other than those, most of what we saw were really old cars, nearly falling apart, plus the occasional luxury one, heavily contrasting with everything around it). The old-style cars do tours (a popular one being visiting Havana from Varadero and vice-versa). There are also horse-drawn carriages, pedal-powered taxis and buses, but we had no problems just using taxis for long distances and walking everywhere else.
An interesting fact: the first letter in the license plate identifies the car’s assigned usage. Sources differ on the meaning, but a few that seem to be consistent are:
| First Letter | Meaning / Usage |
|---|---|
| A/B | Government-assigned (by Transportation Ministry) |
| C/D/E | Diplomatic (Consular / Diplomatic / Embassy?) |
| F | Revolutionary Army Forces |
| K | Temporary residents / foreign companies doing government business (later with “Cuba” written on a blue banner) |
| M | Ministry of the Interior |
| P | Private |
| S/W/Z | Religious institutions, cooperatives (Z/S = bikes, W = others) |
| T | Tourism |
Even during our beach time, we walked around and talked to the locals (as we started giving some of the things we brought as gifts). At first, most people would tell how they were happy with the government and how things were good, but as we got closer to the less tourist-y places, we started to hear (and witness) a different story.
Outside of Cuba, people tend to align their opinions on the country with their left/right political leanings. For example, when Bolsonaro was Brazil’s president, his supporters would often tell you to “go to Cuba” when they ran out of arguments (or even unprovoked, if you simply happened to not idolize him), while left-wing people will often overly praise the country’s performance on safety, education and healthcare, turning a blind eye to the human rights violations, the lack of basic freedoms and the biased origins of those performance metrics.
This is a very complex subject, and I don’t expect to cover it in a single blog post from a week-long visit, but I would like to share some of the tidbits we learned from talking to the locals, from the guided tours (led by a sociologist and a geographer) and from consuming their local media and online content.
We highly recommend the tours above - as much as the guides have their personal opinions, they brought us lots of perspective on the country’s history and current situation, including the 2021 protests. In particular, they helped us understand the extent of the US assets that were seized, which explains why private lobbying will never allow the US government to relent until those parties are compensated - and why the local government has no hope of negotiating a compensation, as neither the assets nor the wealth they created are available.
The thing that stands out the most is the shortages. Food, medicine, toiletries, electronics, you name it: everything is in short supply and rationed. The bodegas - government-sanctioned stores for the locals - will check their libreta (ration book) and only sell the amounts they are allowed to buy (when there are supplies at all).
We witnessed a semi-scheduled evening blackout of city lights around the public squares; locals are so used to it that our guide didn’t realize at first how scared we were. Sure, it is deemed safe to walk around at night, but it must be hard to go without electricity for hours when it hits residences (and it does).
As much as salaries are nearly the same for everyone, those with access to foreign currency (from working with tourism, for example) or resources (from relatives abroad) have a visibly different standard of living. They can buy things at the markets that are not available to the general population (at much higher prices), but they need to first get the foreign currency through a local bank card.
We noticed that all the things we brought as regalitos (“little gifts”, but they were actually donations) were gladly accepted, even by people who would obviously not use them - because they could be traded in alternative markets for the things they need and cannot get at the government stores. These trades are organized in the community by word of mouth, but also in WhatsApp groups. Seeing that activity was an eye-opening experience.
As for the causes: the embargo (and the US in general) definitely plays a role. Even those who blame the government admit things were better in the Obama years, and they are very afraid of a second Trump term. But the locals are quick to point out how many of the shortages can’t be explained by the embargo, as many things are produced in Cuba and should be available in sufficient quantities.
Those conversations happened in a hush-hush tone, as the government doesn’t take criticism as well as it says it does, and the electoral process is quite convoluted; at the higher levels, anything but sanctioning the central administration’s decisions is said to be not tolerated. Locals are proud of their country, but aware that they can’t speak freely about it, and that they aren’t compensated by social welfare as some external supporters would like to believe.
We learned that doctors who work abroad (e.g., in Brazil, where remote communities saw significant social improvements thanks to them, until the right-wing government dismantled the program without offering a replacement) are paid a fraction of what the host country pays for their services, with the rest going to the government. This sounds absurd in a capitalist logic, but it might be understandable if the government used that money to improve the lives of the locals (and maybe justify Cuba’s reputation for its education, safety and health systems).
We wanted to check that in loco, but between the school break and our unwillingness to disturb hospitals in any way, we had to rely on the locals. They recognized and were proud of those achievements, but pointed out (and showed a bit of) how the lack of investment in people and supplies has degraded service in schools and hospitals in recent years. As for safety, crime is indeed low with nearly no weapons around, but locals are very afraid of the police (even more so with so many people still incarcerated or disappeared after the 2021 protests).
Of course, these are random impressions I picked up; if you want hard data with proper methodologies, I invite you to check the reports (in English) from the Observatorio de Derechos Sociales - the Spanish version was highly recommended by academics to whom we spoke. The report is quite critical of the government, but also very transparent about its methodologies and data sources, so you can draw your own conclusions.
As Portuguese native speakers, we were able to communicate with locals and consume the local media. Television included some Latin American channels (seemingly unsupervised, but we were not sure how accessible they were to non-tourists). Government-run Cubavisión seemed available everywhere, and Segundo Sol, a 2018 Brazilian soap opera dubbed in Spanish as “Nuevo Sol”, is popular among locals.
Local shows were, as expected, heavily loaded with unapologetic government propaganda - a youth-oriented show that presented itself as an educational block against media manipulation was basically accusing international media of doing that, while presenting the government’s point of view as the only unbiased truth.
The official newspaper is a daily of around 8 pages that costs 20 CUP and limits itself to reporting policy changes and occasional cultural articles. There are some libraries and pop-up bookstores (the latter mostly for tourists), but the selection is very limited.
Internet access is available, mostly through smartphones. Locals complain about the speed more often than about the price (at least among those who actually had a smartphone). Some websites are indeed blocked by the government, but from our point of view, most of the blocking happened on the side of US sites/services, which either block you outright, or don’t allow you to create accounts or handle payments from Cuba. A VPN works around that (and I recommend getting one before you go, for the reasons stated).
To get internet as a foreigner, you can either buy an ETECSA SIM (or e-SIM) right at the airport, or get roaming from your phone operator. I usually avoid the latter option, as it’s quite easy to rack up a huge bill, but my provider (Freedom Mobile) has a CAD$ 30 package that gives you 5GB for 90 days in a large list of countries that includes Cuba. I kept an eye on usage to make sure I wouldn’t go over it, and it worked fine, while my wife went with the e-SIM, so we had a failover in case one of them didn’t work.
One unexpected bug/feature: while her e-SIM was subject to all the restrictions when outside a VPN, mine wasn’t. I got a British Columbia IP address (despite living in Ontario) and was able to access all the sites and services that were blocked for her. Of course I didn’t abuse that, but it was quite helpful, as we were able to book a couple of tours that were only available online.
When visiting the library, I found a book that explains how the internet (and computers in general) work in Cuba, which (as a retrocomputing and computer history nerd lucky enough to read Spanish) I found quite interesting. It’s very deferential to the government mantras (even on contradictory points, such as being proud of their pioneering mainframe work while noting it was kickstarted by nationalized IBM assets), but it’s interesting to see how Cubans managed to get their online infrastructure working in such a challenging environment.
Definitely yes! It is a great place to get a breather from the Canadian winter, with friendly people and relatively affordable accommodations, but you should be aware of the situation and be prepared to see a lot of poverty and people struggling to get by. If you’re not comfortable with that, you should probably stick to the all-inclusive resorts.
On the other hand, if you want to check for yourself, get an unbiased perspective and/or help the locals, by all means spend some time out of the resorts, and get to know the real Cuba. You will be able to see the beautiful landmarks and the old cars, but also the real people and their struggles.
Most important: free your mind both from the demonizing and the idolizing of the country, the government system and the people running it, and try to understand the situation from the locals’ perspective. You won’t regret it.
]]>People get surprised when I tell them I keep all my personal software projects running on a single server, and that it costs me roughly a cup of (fancy) coffee per month. This post explains how I do it, and how you can do it too!
Every now and then I build a personal project that requires hosting on a server - sometimes it’s a website, sometimes a back-end API for a watchface, or even a multiplayer game server. In any case, it is tempting to use a free service or self-host, but eventually some of those prove useful to other people (or myself) in the long run, requiring a more robust solution.
Sure, one can always embrace the DevOps culture and build the projects with tooling that describes their infrastructure needs in code, deploying on a cloud provider. That works, but comes with a price tag that might discourage building new projects, and you don’t learn a lot about how things work behind the scenes.
In contrast, configuring and keeping a server up in a cost-effective, secure and robust way is a fun challenge in itself and a great learning experience. Even if you have a team or a service that does it all for you, getting some hands-on experience with servers makes you build better software, as you’ll be more aware of the constraints and trade-offs involved when it is deployed.
There are many ways to host a project. Some of those are:
Self-hosting: You run the server yourself, on your own hardware. It’s cheap, but requires a good connection, a static IP address (or some shenanigans like dynamic DNS), and dealing with the physical aspects of a server. Currently I only use it for home automation stuff, which runs on a Raspberry Pi - silent, with modest space and power requirements. It can be fun for a while, but may become a chore, and I have enough of those at home.
Platform-specific solutions: There are many services that allow you to host a project written in a specific technology (Java Web Applications and PHP-based sites come to mind); you just configure some things, drop your files there and your app is visible to the world. They are free/cheap for small projects, but can get expensive as things grow. I’ve used some of those in the past, but they don’t fit the things I do nowadays (and are unlikely to fit new projects for anyone, as they are a bit out of fashion).
Serverless: Instead of running a server, people that choose this route use a service that runs code on demand. I am skeptical of those in professional settings (where you can at least justify the gymnastics with the cost flexibility), but for personal projects it’s just too much hassle (and often $) for too little benefit (unless your chosen tooling is already serverless).
Dedicated servers: They are reliable, but buying or renting them is too expensive for personal projects, in particular because they will be underused - unless you intend to build your own personal cloud, but that isn’t my case.
Containers: As mentioned above, you can build your app with the likes of Docker and Kubernetes, and run it on a cloud provider. It is a good option for professional projects (and I particularly love the “cattle, not pets” approach), but it is overkill for personal ones, and it is a bit more expensive than the option below.
Virtual private servers: On those, you rent a virtual machine (VM) from a provider. Like a dedicated server, you manage it yourself, but it is much cheaper (because you are sharing the hardware with other users). It is what I use for most of my projects, and I’ll detail it below.
There are many VPS providers around, and it is tempting to go with the likes of Amazon Web Services (AWS) or Microsoft Azure, but they are overkill for personal projects. My VPS provider of choice is DigitalOcean - besides the reliability and great tooling, they charge a fixed monthly rate for each “droplet” (their VMs) - which is important since any project may have an unexpected surge, and I’m not a startup that converts engagement into VC money.
It supports several Linux distributions and you can even supply your own, I suppose. I personally prefer Ubuntu Server, as it builds upon the robustness, stability and familiarity of Debian but puts practicality above purity (e.g., by including non-free drivers and codecs by default). I use the long-term support (LTS) versions, so I can stay a few years just adding security patches, and rebuild the server from scratch when support ends or I need something from the newer versions - sounds radical, but is actually easy to do (more on that below).
In theory, you can just access the server by typing its IP address in the browser (or client tool), but if you want to show it to others, it is much nicer to have a domain name. Some people like to register a domain for each hobby project (and for a few, registering domain names is the hobby project). A more suitable route IMHO is to register a single domain (that represents you) and use subdomains for the projects, which is what I do nowadays: I registered chester.me
, and use the likes of totransit.chester.me
, minitruco.chester.me
, etc, for my projects.
A registrar is the company or entity that will register the domain name for you. Which one to choose depends on the domain suffix, but my general go-to is Dynadot, which is reasonably priced and has a nice user interface that allows you to easily direct all access to the server, or set up even fancier arrangements. For example, I pointed chester.me
DNS entries to GitHub Pages, which hosts the blog you are reading right now, and all the subdomains to go to my DigitalOcean droplet.
Another reason to keep all your things on a single top-level domain is that it makes it easier to set up encryption (https), which requires a certificate for each domain. Having a single one, you can get a wildcard certificate for it, and use it for all subdomains (instead of registering a new one for each new project). You can and should get a free one from Let’s Encrypt and use it with Certbot, which automates the process of getting and renewing the certificates.
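With Certbot, a wildcard certificate is issued via the DNS-01 challenge. As a hedged sketch (the domain is the one from this post; Certbot's DNS plugins for supported providers can automate the TXT-record step that the manual flow asks for):

```shell
# Sketch: requesting a wildcard certificate via the DNS-01 challenge (run as root).
# The --manual flow will prompt you to create a TXT record at your registrar.
certbot certonly --manual --preferred-challenges dns \
  -d 'chester.me' -d '*.chester.me'
# Certificates land under /etc/letsencrypt/live/chester.me/
```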
Even with a cost-effective VPS, it is tempting to create one server for each project, but that would become expensive pretty quickly - and here comes the main point of my personal project hosting strategy: I host them all on a single (virtual) server. Doing so allows me to share system resources, bandwidth and storage between projects (reducing the cost), and I can always split one of them to a new server if it grows too much or misbehaves.
The trick is to configure them all on the same machine, each in a separate process (tree). That is trivial for services that run on different ports; for those that need to share a port, I use nginx as a reverse proxy to route requests to the right app.
For example, I have two apps that run internally on ports 3000 and 3001 (but you can’t reach those ports externally, see the Security part below). Each of them has its own domain name, so it’s easy to set up nginx to listen on the default web port (80) and route requests to the right app. For example, the configuration for one of the apps looks like this:
# /etc/nginx/sites-available/cruzalinhas.com
# Define a new upstream server, which is the app running on port 3000
upstream cruzalinhas_rails { server 127.0.0.1:3000; }
server {
  listen 80;
  # Requests to cruzalinhas.com...
  server_name cruzalinhas.com;
  location / {
    # ...are forwarded to the upstream server defined above
    proxy_pass http://cruzalinhas_rails;
  }
}
Other niceties that nginx provides are SSL termination (so you can have your apps “speak” plain HTTP, and it converts the incoming HTTPS requests from port 443 to HTTP requests to the upstream server), and the ability to serve static files (e.g., static HTML, images, video) directly, without having to go through the app. The docs show how to do it, but as you will see below, I use Ansible to automate most of those configs.
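As an illustrative sketch of those two niceties (not my actual config - the subdomain, paths and upstream port are made up), a server block doing SSL termination with the wildcard certificate and serving static files directly might look like:

```nginx
server {
    listen 443 ssl;
    server_name minitruco.chester.me;
    ssl_certificate     /etc/letsencrypt/live/chester.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chester.me/privkey.pem;

    # Static assets are served by nginx without touching the app
    location /assets/ {
        root /srv/minitruco/public;
    }

    # Everything else is forwarded, as plain HTTP, to the app
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```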
In order to keep those services alive, I use Supervisor, a very configurable tool that allows you to start and stop your different projects, monitoring their processes and auto-(re)starting any of them as needed.
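A Supervisor program entry is a small INI section; as a sketch (the program name and paths here are made up for illustration, not my actual setup):

```ini
; Illustrative /etc/supervisor/conf.d/minitruco.conf
[program:minitruco]
command=/usr/bin/java -jar /home/minitruco/server.jar
directory=/home/minitruco
user=minitruco
autostart=true
autorestart=true
stderr_logfile=/var/log/minitruco.err.log
```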
The cheapest DigitalOcean droplets provide more than enough disk space and transfer bandwidth for my projects, but the memory is a bit tight. I could just upgrade to a more expensive droplet, but I found a better solution: a swapfile.
Usually sysadmins cringe at the idea of swap files on servers - I’ve worked at places where they were forbidden, or where swap activity above a certain threshold would page the on-call person! But for personal projects like mine, the server sits idle most of the time, and the actual working set of RAM is much smaller than what is allocated, so it’s perfectly fine to let the allocated-but-unused pages swap out. On top of that, the “disk” is actually an SSD, so even when some swap activity happens, the performance hit isn’t that bad, if perceptible at all.
You can start with the cheapest droplet, install your things and check htop
to see how much physical memory is being used. If it is close to the limit, upgrade to the next tier (a matter of a few clicks on DigitalOcean’s web interface), rinse, repeat. I have all my current projects (two active Rails apps and one Java multiplayer game server back-end) running on the second-cheapest droplet, which has 1GB of RAM. It has a 2GB swapfile, rarely used above 10%, with no thrashing observed.
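For reference, creating a swapfile like that takes only a handful of standard commands (run as root; the 2GB size is just an example):

```shell
fallocate -l 2G /swapfile         # reserve the space
chmod 600 /swapfile               # only root may read/write it
mkswap /swapfile                  # format it as swap
swapon /swapfile                  # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # keep it across reboots
```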
Sure, you can manually set up your server (and in fact you should do that every now and then for learning purposes, IMHO), but doing so is not a good idea for a production environment: you may want to move to a different provider, the server may crash and need to be set up again, or you may want to split a project that grew too much into its own server… any of those will require you to reconfigure from scratch, and it’s very likely that you will forget something.
One way of doing so is to have a shell script containing all the commands you’d typically run to set up a server. That is better than nothing (and I’ve seen some startups doing exactly that), but it has a few drawbacks: you need to handle secrets (e.g., passwords, API keys); if the config needs changes, you have to reconfigure manually and keep the script updated (and likely can’t run it again on a production server); it gets complicated with different profiles (e.g., development vs. production, or different apps); and so on.
A better approach is to use a tool that allows you to describe your server’s configuration in a file, and then run that tool to set up the server. There are many tools for that, but I use Ansible. It is free and open-source, does not require installing anything on the server (it just needs SSH access), and is very flexible.
Ansible’s central idea is that you describe your server’s desired state in a YAML file containing all the configuration steps - this file is called a “playbook”. A playbook can be written to be idempotent, that is, it can be run multiple times on an existing server, and it will only change what needs to be changed.
Most important: those configuration steps are not unlike the commands you’d manually issue (they are only wrapped in so-called “tasks”, which automate all that checking). Even if there isn’t a task for a given command, you can always run a shell command directly (as long as you ensure it is idempotent, as described above).
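As a hedged sketch of what such a playbook looks like (the host group, package and user names here are illustrative, not my actual provisioning): running it twice changes nothing the second time, because each task only acts when the desired state isn't already there.

```yaml
# sketch.yml - illustrative idempotent playbook
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed        # no-op if already installed
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure the app user exists       # no-op if the user is there
      ansible.builtin.user:
        name: cruzalinhas
        shell: /bin/bash
```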
For example, my server has a provisioning.yml
playbook that sets up the necessary Unix users (both mine for manual ssh-ing, and the ones needed by application servers), tightens the security of the server and does some additional configuration.
It also has one playbook for each specific project (e.g., this one that sets up a Ruby on Rails app, which installs the necessary packages, sets up the database, installs the app, and so on). Sure, right now all my apps run on the same server, but if I ever need to split one of them, I can just provision the new server and run the playbook for the new app.
The best thing is that spinning a new server is easy and non-disruptive. For example, when I want to upgrade Ubuntu to a new version (usually a risky operation in any operating system or distribution), I just create a new droplet on DigitalOcean and run the playbooks on it. Then I edit my local /etc/hosts
file and point all subdomains to the new server’s IP (from the point of view of my computer) and test it. Once I’m sure it’s working fine, I revert that edit, flip the DNS to the new server and destroy the old one.
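For illustration, the temporary /etc/hosts entry might look like this (203.0.113.42 is a placeholder for the new droplet’s IP):

```
# Point the subdomains at the new droplet, for this machine only
203.0.113.42  totransit.chester.me minitruco.chester.me
```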
Each tech stack has its own way of deploying and redeploying apps; quite a few have more than one blessed way. You have to consider things like how much downtime your project can tolerate between deploys; whether you need to preserve any state (data, connections) between deploys, and so on.
For personal projects, I try to make the simplest possible thing that works. For example, most of my Rails-based apps just update the code on the server straight from GitHub (the existing instance won’t be affected by that, as the code is already loaded and production configs don’t auto-reload) and restart it.
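In script form, that kind of minimal redeploy boils down to something like this (a sketch - the paths and the Supervisor program name are illustrative):

```shell
cd /srv/cruzalinhas/app
git pull origin main                     # running instance keeps serving the old code
bundle install                           # pick up any new gems
RAILS_ENV=production bundle exec rails assets:precompile
sudo supervisorctl restart cruzalinhas   # Supervisor brings up the new version
```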
For the game server, I wrote a more complex script that starts the server software, then monitors its .jar
file for changes. When one happens, the script instructs the running server to stop accepting connections, and immediately starts another process with the new version (which accepts all new connections). Meanwhile, the old server drops any player that isn’t currently in a game until no one is left, then shuts itself down. In practice, that gives everyone the opportunity to finish their games, while anyone not in a game simply connects to the new server - no downtime or interrupted games.
When you manage your own server, it’s important to keep it secure. There are many things you can do, but here are some of the most important ones:
- Keep the system updated (Ubuntu ships with unattended-upgrades, but you need to configure it to actually apply the updates; I recommend focusing on security updates here, and manually updating playbooks or rebuilding the server on a new OS version for everything else);
- Set up a firewall (I use ufw, a friendly wrapper for Linux’s built-in packet filtering), only allowing inbound traffic on the necessary ports. My provisioning playbook opens ports 22 (SSH), 80 (HTTP) and 443 (HTTPS), and any app that needs a different port will have that configured in its own playbook;
- Install fail2ban, which automatically blocks IPs that try to brute-force SSH or HTTP logins by using the same firewall structure managed by ufw, thus adding no additional load to the server.

You can see these configs on the security part of my provisioning setup.
Keep in mind your apps’ needs (for example, when opening the port used by my multiplayer game server on ufw, I also had to increase fail2ban’s allowance for multiple connections from the same IP, because friends playing together may well share a Wi-Fi network and therefore an external IP). Always keep things closed by default, and open them on a case-by-case basis.
If you tighten it enough, the next vector for security issues will be… your own code! Aside from the usual security best practices (e.g., sanitizing user input), you should also keep your dependencies up to date. Keeping the code on GitHub allows Dependabot to check for that in supported languages (and even suggest updates), and the tooling there will also monitor for leaked secrets (e.g., API keys) and alert you if it finds any. As a former GitHub employee I may be biased, but I can’t recommend it enough.
It isn’t a home banking system, yet you want the service to stay alive when you sleep. I already mentioned Supervisor above, but other small things worth considering are:
- Logs can quietly grow until they fill the disk; use logrotate to avoid that - it has sensible defaults but is easy to customize.

Honestly, having any sort of persistent data in a personal project is a hassle - both from a technical perspective (you have to back it up, take it with you when you rebuild your server, etc.) and from a legal one (you have to comply with privacy laws, etc.). It’s ok to have persisted data, as long as you can easily rebuild and start anew.
If you really need a database for important data, consider those operations when choosing. For example, SQLite is easy to back up (it’s just a file) if you can stop the server - or at least writes - during backups (otherwise you risk copying a corrupt backup); MySQL has more backup options, but you need to set them up. An external service like Amazon RDS or DigitalOcean’s Managed Databases may be a good idea, but it will add to the cost.
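One detail worth knowing: the sqlite3 command-line tool has a .backup dot-command that uses SQLite’s online backup API, so the copy stays consistent even if writes happen while it runs (file names below are illustrative):

```shell
# Take a consistent snapshot of the database without stopping the app
sqlite3 production.sqlite3 ".backup backup.sqlite3"
```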
Whatever you choose, make sure you restore the backups when provisioning a new server, and test the backups from time to time, not only when you need them. In fact, if you restore when provisioning, a good (and cheap) test is to just provision a new server and see if the data is there. The only caveat is that once you confirm it is fine, users may have written new data to the soon-to-become old server, so you’ll likely need some downtime to make a new backup/restore cycle… as I said, avoid this at all costs!
My server configuration is published on GitHub. It may sound reckless, but good security should never be done by obscurity; instead, it gives the good guys a chance to alert me about bad practices, while not showing much that the bad guys couldn’t find out by themselves. Please check it for details on the techniques I describe here.
Things that should actually be kept secret can be stored in Ansible Vault, which encrypts them with a password that you supply when running the playbooks, or in some service like GitHub Secrets, that are only visible to the repository owner (but they require some GitHub Actions gymnastics for provisioning into a server, which is why I currently stay with Ansible Vault).
This isn’t a universal guide on how to host personal projects, but rather a description of how I have been doing it (which I may update as I find or remember other interesting things), and how other people can do it too. The dynamics of hosting change all the time; the main thing is to design and iterate around them, automating just enough so that you don’t need to keep an eye on it. That’s what has worked for me so far!
]]>My front door is as old school as my doorbell: just an electric lock downstairs, and a button inside my house that opens it. The voltage/current there is certainly beyond what an ESP8266 GPIO pin can handle, so the solution is to connect a relay to it.
I usually prefer discrete circuits over “shields” (boards that go over an Arduino/Raspberry Pi/etc.) because they are simpler, cheaper and more fun/educational to build; but I found an inexpensive and convenient shield containing the relay and all needed electronics.
I measured the current/voltage when the button was pressed, but forgot to write it down for this post - it was well within what the relay shield can handle, though.
There are no schematics this time: I just soldered the shield pins over the ESP pins. The relay has three pins: common (C), normally closed (NC) and normally open (NO). In order to “press” the wall unlock button with the relay, we have to connect the C and NO pins to its terminals. But before that…
It’s easier to test before you stick things to the wall. Once again I used ESPHome within Home Assistant. This time we want to present the relay as a push button, so something like this does the job:
switch:
- platform: gpio
name: "Front door"
id: "front_door"
pin: GPIO5
on_turn_on:
- delay: 4s
- switch.turn_off: front_door
icon: "mdi:door"
The relay is controlled by the GPIO5 pin, and the on_turn_on
section is what makes it behave as a push button: it doesn’t matter what turns it on in Home Assistant (user interface, automations, etc), it will turn itself off automatically after the specified interval (4 seconds). During that time, the electric lock buzzes, signaling the visitor that they can enter.
Once you write ESPHome to the board with the configuration above, you can add that switch to any card on the “Overview” page (if your Home Assistant isn’t set up to automatically add new switches). When you click/touch the virtual button to turn it on, you should hear a “click” on the relay, and, after 4s, another “click” as the virtual button returns to the “off” position.
Using the continuity test on a multimeter, double-check that the “NO” and common (C) pins only connect when the relay is activated. Those are the pins you should connect to the wall unlock button’s terminals, without disconnecting the wires already there (so the manual button continues to work).
There was plenty of space on the wall for the ESP8266 and the relay shield, so I just stuck them there, wrapping all the metal parts with electrical tape to avoid short circuits. I could have used a case, but this was good enough.
You’ll need to supply power to the ESP8266. This time I didn’t have an outlet nearby, but there was some leftover coaxial cable from an older cable installation (the landlord even offered to remove it, since the current provider comes in from a different place altogether). It isn’t the most appropriate cable for USB power, but it was already there, and it works.
Now the sky is the limit. I could, for example, just unlock the door from any automation by adding an action like this (no need to turn the switch off, as the configuration above does that automatically):
action:
- service: switch.turn_on
target:
entity_id: switch.porta_da_casa
I actually did a more complex setup: since I also have a smart lock controlled by Home Assistant on the internal door, I created an “I’m home” button on the UI that unlocks both the front and the internal door (which can even be linked to Siri/Google Assistant/etc. so I can just say “Hey Siri, I’m Home” and enter).
I’ve also created an “incoming delivery” switch that I can turn on when I’m expecting a delivery, and it will unlock just the front door when the doorbell rings (but only during daytime):
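That automation looks roughly like this - a sketch where the entity ids are illustrative, and the “incoming delivery” switch is assumed to be an input_boolean helper toggled from the UI:

```yaml
alias: Unlock front door for expected deliveries
trigger:
  - platform: state
    entity_id: binary_sensor.doorbell
    from: "off"
    to: "on"
condition:
  - condition: state
    entity_id: input_boolean.incoming_delivery
    state: "on"
  - condition: sun          # daytime only
    after: sunrise
    before: sunset
action:
  - service: switch.turn_on
    target:
      entity_id: switch.porta_da_casa
mode: single
```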
I’m considering hooking the Mail and Packages integration to automatically turn on that switch on days I’m expecting a delivery, but I first need to ensure it works with the emails sent by Canadian retailers. In any case, this setup is pretty robust and easy to customize, all done for a ridiculously low price. I’m pretty happy with it!
]]>Sure, I could make it ring louder, but it would be annoying for anyone on the lower floor. And I also want to automate other hurdles related to package delivery, so I decided to first get the doorbell to ring into my Home Assistant setup (where I can trigger all sorts of automations).
The easiest thing would be to replace the front door pushbutton with a four-contact one (so it would close both the doorbell circuit and mine), but this being a rental discourages me from making outdoor modifications; instead, I got my trusty multimeter to pry into the doorbell, and found that when it rings, a ~12V AC current flows through the exposed contacts.
That kind of signal isn’t suitable for triggering my home automation stuff, which usually relies on Arduino/Raspberry Pi/etc. GPIO pins (which operate at lower voltages, with DC current). But the internet is wonderful - I found a circuit that pulls down (that is, connects to GND) a GPIO pin whenever a 12V AC current is present. Here is a reproduction:
As usual, I’m learning the basics of the electronics involved. This time the big mystery was the AC->DC conversion.
The other new thing was the opto-coupler - a tiny chip that contains an LED and a photo-sensitive transistor. Feeding the (now) DC current to the LED closes the transistor, which is a smart way to pull down the GPIO pin while keeping the ESP8266 100% isolated from the doorbell. 🏆
Before progressing further, I decided to test the circuit above on a breadboard, using an Arduino and a slightly modified version of the 02-Digital/Button
example that comes with the Arduino IDE; it pulls up the pin and flips the built-in LED when the pin is pulled down, that is, when the doorbell rings:
void setup() {
  Serial.begin(9600);        // the loop below prints the sensor value
  pinMode(2, INPUT_PULLUP);  // doorbell input (LOW when ringing)
  pinMode(13, OUTPUT);       // built-in LED
}
void loop() {
  int sensorVal = digitalRead(2);
  Serial.println(sensorVal);
  if (sensorVal == HIGH) {
    digitalWrite(13, HIGH);
  } else {
    digitalWrite(13, LOW);
  }
}
I could connect that circuit to the Raspberry Pi 3B+ that runs my Home Assistant, but, as I said, it’s a big house, so I got a WeMos D1 Mini - a cheap and tiny ESP8266-based board that, in short, behaves like an Arduino with built-in Wi-Fi.
In the past I used the OpenMQTTGateway software to turn such boards into sensors and triggers. But as much as I love the community around that software, its source-code-based approach and reliance on MQTT make it quite hard to set up and update.
Enter ESPHome, which is focused on Home Assistant and thus much easier to work with: you just describe what your sensor needs to do in a tiny YAML file straight from Home Assistant; it builds the firmware and updates the board (wirelessly, and quite securely, once you do the first flash via USB).
I know, I know: all the cool kids either make their own PCBs, or design them and use one of the small-volume PCB services that sponsor half of the electronics videos on YouTube. Not wanting to deal with the chemicals (and wanting to iterate quickly), I went with my usual solution: cutting a piece of perfboard and making the connections with solder and, when needed, jumpers.
After botching a first attempt (in which I just threw the components at Fritzing’s Breadboard view), I realized I could reproduce the circuit in the Schematic view, get all the components connected in the Breadboard view, and then lay them out on a perfboard (the “Stripboard” component) to figure out the needed solder/jumper points, error-free - not unlike how people use the PCB view to design their boards, I suppose.
I used a white 5V USB charger to power the boards, sticking them with mount tape to the side of the charger (which will be facing down and thus invisible when plugged in):
A pair of white wires connects this to the doorbell. It’s not the most finished setup, but it is discreet enough to be ignored (in particular because there are several other white cables running over those walls) and easy to remove when I return this rental.
Here is the link to download the Fritzing (.fzz) schematics:
Using the code above we can check the doorbell state on the ESP8266 LED, but what we really want is to have it available in Home Assistant via ESPHome. The easiest way is to configure it directly from Home Assistant, with the ESPHome add-on installed.
The specific config that allowed me to get the doorbell state on Home Assistant is:
binary_sensor:
- platform: gpio
name: "doorbell"
pin:
number: 2
inverted: true
mode:
input: true
pullup: true
It’s pretty straightforward: we’re using the GPIO pin 2, which is the one we connected to the opto-coupler output, and we’re telling ESPHome to treat it as a binary sensor (i.e. a switch that can be either on or off). The inverted: true
is needed because the opto-coupler is pulling down the pin when the doorbell rings, and we want the sensor to be on when the doorbell rings.
Once that configuration is built and installed, we can add/see the sensor on Home Assistant’s dashboard:
I already had the Telegram bot set up (from this other post), so I just created an automation whose trigger is the doorbell sensor changing from “off” to “on” and whose action is to send a message to my Telegram bot.
I did it mostly on the web UI, resulting in something like this:
alias: Send Telegram message when doorbell rings
description: ""
trigger:
  - platform: state
    entity_id:
      - binary_sensor.doorbell
    from: "off"
    to: "on"
condition: []
action:
  - service: telegram_bot.send_message
    data:
      message: 🔔 Ding dong
      target:
        - 11111111
        - 22222222
        - ....
mode: single
The `target`s should be the ID(s) of the user(s) to which the bot will send messages. To get the numeric ID of a Telegram user, you can start a chat with GetIDsBot (source).
Now when someone rings my doorbell, I get a message:
As seen above, I often get more than one message, which means that either people are a bit nervous or I need to set up some debouncing; fortunately, ESPHome has options for that.
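For the debouncing part, one option is a sketch using ESPHome’s binary sensor `delayed_on` filter (the 100ms value below is just a guess that would need tuning), which only reports “on” after the signal has been stable for a moment:

```yaml
binary_sensor:
  - platform: gpio
    name: "doorbell"
    pin:
      number: 2
      inverted: true
      mode:
        input: true
        pullup: true
    filters:
      # Only report "on" after the pin has been active for 100ms,
      # filtering out spurious re-triggers (the value needs tuning)
      - delayed_on: 100ms
```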
I also want to figure out a way to check who is at the door (maybe snapping a picture from a security camera and sending it alongside the message), and the icing on the cake would be opening the door (e.g., by interacting with the robot).
Not sure if I’ll do all that (depends on how long I’ll stay in this house), but if I do, it will mean no more packages returned or stolen because I wasn’t there to receive them.
]]>NFC (the tech used in mobile phones for contactless payments and contact exchanges) and RFID (used in product identification/tracking, building access cards and many other things) are found everywhere these days. I played a little bit with cheap tags that can be used to interact with phones, but implants are getting more practical, so I decided to give one of them a go!
(I know, I know: technically, NFC “is” RFID - or, more specifically, a set of protocols built upon a subset of the RFID ones - but I’m going with the commonplace usage of the terms: “RFID” for the unregulated “low frequency” 120-150 kHz tags that use all sorts of proprietary protocols, and “NFC” for the “high frequency” 13.56 MHz devices that specifically use NFC)
I didn’t want to limit myself to a single technology (or to go with two implants), but Dangerous Things (yes, that’s the company name 😅) sells the NExT: an implant with both an RFID chip (that can simulate - or “clone” - fobs and tags in the wild) and an NFC chip (which can store 888 bytes of data, accessible to any NFC reader I touch, including smartphones).
There are some limitations: its NFC can’t be used for payments (like, for example, the Walletmor), and the RFID can’t emulate super advanced security, or hold multiple tags, but the combination is enough for a lot of uses, plus it’s a field-tested set of chips that should be operational for years and years, so I chose it.
Now that I knew what implant I wanted, the issue was how to get it implanted. Being quite afraid of something going wrong, I would prefer to get a physician to do it, but I didn’t feel like bothering my family doctor with an elective procedure during a pandemic.
After some digging, I found the installation partners section on the Dangerous Things website, which led me to Midway Tattoo and Piercing, a studio in Kensington (where else?), Toronto where Matt Graziano kindly gave me a quote and scheduled a date.
I won’t lie: I was quite afraid, but Matt - a biohacker himself who was quite excited with his (literally) shiny xSIID NFC + LED implant - put me at ease. The entire procedure took a couple minutes, and honestly, was nearly painless (just a bit of blood). I even managed to film it with the other hand:
The beauty of the NExT is flexibility: its high-frequency NTAG216 chip allows me to store several different classes of data (not unlike, say, a QR code) and have it read by any NFC-enabled phone that touches the right spot on my hand. Since that’s the easiest thing to do, I tried it first.
There are several Android and iOS apps that allow writing NFC data to the chip; I already had NFC Tools PRO, so I used it to store a Linktree address, turning my hand into a virtual business card:
@chesterbr I am now a #cyborg 😁 #bodymodification #nfc #chipimplant #electronics ♬ original sound - Chester
The URL sending thing was fun (although NFC support in phones in the wild is hit-and-miss, at least here in Canada), but the other part of the dual chip (the low-frequency T5577) allows some other interesting applications, including the one I really wanted: getting rid of building access “fobs”.
Unlike the NTAG216 (which can be programmed with any NFC-enabled smartphone), specialized hardware is needed to read the fobs and write them to the T5577. My kit came with the Proxmark 3 Easy, a somewhat outdated member of the Proxmark platform of open-source RFID hacking tools, but very useful nonetheless for reading, writing and cracking several types of RFID devices.
Using it requires a client on your computer and a matching firmware on the board. There are several builds of those around - my board came with a really old version of “Iceman”, one of the most feature-complete builds. Updating it requires downloading and compiling the client and firmware as a set.
EDIT: previously this required some file editing; now you just follow the instructions to get the dependencies (e.g., these for macOS), then `make` as usual.
Any errors here can be fixed with web searching (for example, I had to do a small fix to my Homebrew-installed openssl). Once that is done, it’s a good idea to start by upgrading the board’s bootloader with `./pm3-flash-bootrom`. That requires putting the board into bootloader mode by pressing its single button while plugging it into the computer. My version was so old that I had to keep the button pressed through the process; otherwise it was quick and easy.
Once the bootloader is on a decent version, the same process (plugging the board in while pressing the button, though now you can release it) can be used to update the firmware, this time running `./pm3-flash-fullimage`.
With the board fully updated, typing `./pm3` invokes the client - which will detect the board, connect to it, and that’s where the fun begins!
These videos go a long way in showing how to navigate the client menus; the most useful commands I learned are `lf search` and `hf search`, which try to read tags on the LF (RFID) and HF (NFC) antennas, respectively. If they identify a tag, they will suggest further commands.
In my case, the command identified the tag as a Kantech ioProx, which also uses a chip from the T55xx family and had no particularly advanced protection. Yours is likely to be different, so do an image search to confirm you identified it correctly.
These tags are identified with a number in the format `XSF(XX)YY:CN`, where `XX` is the “version”, `YY` is the “facility” and `CN` is the “card number”. These are important because they are the parameters for the command that will reprogram the implant (or some other T55xx card/tag) to behave just like the tag, effectively “cloning” it.
With the right numbers in hand, you can use `lf io clone --vn xx --fn yy --cn CN` (`xx` and `yy` are the `XX` and `YY` from above, converted to decimal) to write the tag to the implant. Of course, other manufacturers/models of tags will require different commands (this one is for the ioProx, hence the `io`), but the overall process is the same.
Just be careful: notice the version and facility numbers are hexadecimal - they need to be converted to decimal before cloning. To top it off, I also wrote the original fob (they are often writable!) with the wrong info, getting both hand and fob rejected by the building on the first attempt 🤦♂️.
Fortunately I keep a long scrollback on my terminal, so I was able to figure out the change and fix the mess. Lesson learned: save the info returned by `lf search` - in particular, the `[+] IO Prox - XSF(XX)YY:ZZZZZ, Raw: NNNNNNNNNNNNNNNN (ok)` line. The last number is a composition of the others, so you can use it to verify the tag.
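Since the hex-to-decimal conversion is what bit me, here is a quick sanity check of that step (the tag numbers below are made up for illustration):

```python
# Hypothetical `lf search` result: XSF(01)4B:12345
version_hex, facility_hex, card_number = "01", "4B", 12345

# `lf io clone` wants the version and facility in decimal:
vn = int(version_hex, 16)   # 1
fn = int(facility_hex, 16)  # 75

print(f"lf io clone --vn {vn} --fn {fn} --cn {card_number}")
```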
Anyway, with that sorted out, I was able to record the implant with the correct information, and then access my building just with my bare hand~s~:
@chesterbr Used a Proxmark 3 Easy to copy the building fob to the NExT chip in my hand #biohacking #RFID #NFC #dangerousthings ♬ original sound - Chester
That was all cool, yet feels like I just scratched the surface on what can be done with this dual chip implant. My bundle included (among other things) these cool hacking tools:
I am tempted to use some of this stuff to open my apartment’s door (since I already have an electronic lock securely linked to a Home Assistant-controlled Raspberry Pi), but I am not sure how safe that would be (building fobs are ridiculously easy to copy, as seen above). On the other hand 🥁, it’s not like a regular key can’t be trivially copied too. I’ll have to think about it.
The idea of upgrading one’s body with cybernetic parts is mesmerizing, I must admit. Surely, I’m not lifting cars like The Six Million Dollar Man, but mine didn’t cost anywhere near that mark (which, to be fair, would be more like 34 Million these days), and I’m having my share of fun with it.
Anyway, RFID readers and fob cards/stickers/rings can be bought online, and are cheaper and less risky than an implant - but those can always be lost, so it’s your call. Implant or no implant, one thing is true: there is a lot of fun to be had with NFC and RFID!
]]>It has been a wild ride, and it finally comes to an end. Here are the final tweaks I added to this project, which result in a mostly working Atari 2600. Most importantly, I learned a ton and had a lot of fun. If you made it this far, I hope you did too!
It’s time to play, and for that we need a joystick! I could have used a couple of DB-9 connectors, but it was more fun to just add a few push buttons and precariously play by pressing them directly on the motherboard.
The Atari joystick connector has a pin for each direction and the button, open by default. When you move or press, the controller closes the connection between that pin and ground. A simple design that requires no electronics on the joystick itself (paddles and auto-fire are another story, of course).
Directions are handled by RIOT pins 12-15 (PA4-PA7), so I just connected them to the push buttons (black wiring). The fire button is handled by TIA pin 36 (I4), but it requires what seems to be a debouncing circuit (a high-value resistor pulls it up, and a capacitor/low-value resistor ensures consistent registering of button presses).
I did not connect the buttons for a player 2, but it would be the same thing, using RIOT pins 8-11 (PA0-PA3) and TIA pin 35 (I5). I did have to add the pull-up resistor to the TIA pin, otherwise player 2 would fire constantly. Likewise, I don’t care about the paddle pins, so I just added capacitors to TIA pins 37-40 to avoid spurious signals.
The Atari 2600 had quite a few switches on the console: two difficulty switches (one for each player), TV Type, Game Select and Game Reset. They are all handled by a subset of RIOT pins PB0-PB7.
For this experiment, I only needed Game Reset to play, so I added a push button between that one (pin 24/PB0) and ground, with a (debouncing?) cap like in the original schematics, and left all the others open. It’s the leftmost black button seen in the picture above.
This was the last “feature”, but I still had a couple things to fix:
One thing that bothered me was that the breadboard Atari would never power on cleanly - most games would just crash, and I’d have to push the 6502 reset button a couple times to make it start properly.
That happens because the 65xx family of processors doesn’t have a built-in startup circuit - when powered on, the CPU just runs with whatever (usually bad) state the silicon happens to be in.
The original schematics show a 4.7µF capacitor alongside a pull-up resistor doing the job of triggering the reset once the capacitor charges, which not only ensures proper initialization, but also gives the CPU time to start its work at an optimal power level.
I had the pull-up resistor in my drawer, but not a capacitor at that exact value. It didn’t matter: a higher value just delays power-on by an amount too small for a human to notice, so I threw in the 100µF I had. Now the Atari turns on consistently whenever I power it. For good measure, I added the equivalent circuit for the RIOT (6532), since it also shows up on the schematics:
In previous posts, I tried several composite circuit variations, but I’d always get a scrolling image, like the TV wasn’t able to figure out when each frame (“screen”) started. After a lot of head-scratching, I finally found the issue: TIA pin 6 (BLK), the “Vertical Blank Output” - that is, the signal that is output at the beginning of every frame of an Atari game, is not connected anywhere.
For some odd reason, it doesn’t appear on the Atari schematics. Once I joined that pin with pin 2 (SYNC), a stable image appeared! Horizontal position and weird colors still need adjustment, but that can be solved by fiddling with resistor values on the composite circuit. For my study purposes, it’s good enough™️:
I also got a pair of RCA jacks for breadboards from Omega MCU Systems - they have breadboard connectors for all sorts of things, making connections to the outside world much sturdier.
With that setup, all my carts worked pretty well - including the Harmony Cart, which allowed me to test all sorts of games, including modern homebrew!
Below is the final board (and here is a larger photo). It required connecting three breadboards together:
Wires are somewhat color-coded: red/blue for power/ground, yellow for data lines, green for address lines, white for logic signals (clock, chip select, read/write, etc.) and black for input/output devices (audio/video/controller).
My main motivation for this project was that I wanted to reproduce Ben Heck’s hand-soldered Atari, but he did not publish any diagrams (like he usually does), just referring people to the original Atari schematics (which were nearly incomprehensible to me at the time).
My plan was to create drawings of the breadboard on Fritzing as I progressed, so anyone wanting to rebuild an Atari could do so without resorting to the circuit diagram. And I did it for the first few parts, but at a certain point it became impossible - the drawing got too complicated and Fritzing started to crash.
Anyway, after tackling the project in small steps (and looking at other diagrams, partial circuits, articles and discussions), I learned enough about electronics and the Atari 2600 to grasp its original diagram - and finally understood why Ben did not find it necessary to publish anything else.
So if you want to (re)build your own Atari, be it soldered or on a breadboard, you can either follow the diagram like Ben, or, if it’s too complicated, just go step by step.
In any case, I hope you have as much fun (and learn as much) as I did!
]]>Now that I got Hello, World! running, I feel confident this project may actually succeed! 😅 The next step is to run an actual game, which requires wiring the last chip (and, due to the poor video I have so far, a sound circuit).
MOS’s catalog included several support chips compatible with its successful 6502 “family” of CPUs (of which our Atari’s 6507 is a member). Atari picked the 6532 to supply the missing pieces (RAM, Input/Output channels and Timers) that give this chip the nickname “RIOT”.
Wiring it was no different than wiring the TIA. Pretty much:
The chip also reads directional/fire buttons from joysticks, SELECT, START and both difficulty switches, but I’ll wire those later. All I want is to give games what they need to run!
Sound was actually part of the TIA, but I did not bother adding it before because the only software I had that would work without RAM and Timers (the “Hello, World” program) was silent. But now it will work (and be invaluable, since my composite output is still 💩), so I added a 1K resistor and a 1µF cap to both audio pins; the resistor goes to 5V and the cap goes to the center pin of the audio jack (the outer part is ground). It may be improved (e.g., doubling the circuit for “stereo” sound), but it’s good for now.
Incredibly, this setup worked the first time (notwithstanding the video). One can hear (and almost see) Pac-Man going through the circle of life:
Unfortunately, I still can’t get a good composite video image. I tried every composite circuit under the sun, and the results are still wonky. The best I got so far was with the circuit mentioned in the last post, only replacing a resistor that was a bit out-of-range:
In fact, only one of my TVs got me any image at all (as previously mentioned, modern TVs are more picky with the signals they get). People often get around that running the signal through a VCR, but I had a much smaller option: this composite-to-HDMI converter. It’s not as tolerant as the VCR (or a tube TV), but it picks up even this bad signal, and was quite cheap when I bought it.
I guess I should now add a joystick and SELECT/START (either a connector, or some push buttons, depending on what I have on my drawer), then fix the video (assuming it’s not a TIA defect). Then I’ll transcribe all the things back to a Fritzing drawing (so anyone - including myself in the future - can reproduce), and I guess I will be done with this experiment.
]]>In the previous post, I had the CPU, cartridge and TIA wired and tested, but still needed the Arduino to make them tick and check the results. All those hex numbers were fun to debug, but let’s get to the real deal: plugging it into the TV.
In order to show the image of a game on a TV set, the video chip (TIA) needs to generate 60 frames (screens) per second, which, in turn, requires its clock to receive 3,579,545 “pulses” per second. What makes that magic work is a crystal oscillator, and fortunately that particular frequency (3.579545 MHz) is so common in TV-related applications that one can easily buy such a crystal online.
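As a sanity check on those numbers, here is a quick sketch - it assumes the standard NTSC Atari timing of 228 color clocks per scanline and 262 scanlines per frame (figures not mentioned in this post, but well documented):

```python
colorburst_hz = 3_579_545    # "pulses" per second fed to the TIA
clocks_per_scanline = 228    # standard NTSC Atari 2600 timing
scanlines_per_frame = 262

fps = colorburst_hz / (clocks_per_scanline * scanlines_per_frame)
print(round(fps, 2))  # about 59.92, close to the nominal 60 frames per second
```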
The crystal won’t do the job on its own. We need a circuit that amplifies and cleans the wave it produces, so the TIA gets a more regular signal. Each Atari 2600 model has a slight variation of that circuit, and after fiddling with a couple, I settled on the one used in the Atari 2600 Jr., following this diagram by Kevin Horton:
Testing that circuit was a bit of a pickle: the Arduino could not monitor such a signal. Usually, these tests are done with an oscilloscope. Those appliances were always out of my budget as a hobbyist, but these days there are quite a few “USB oscilloscopes”: they contain the electronics to do the measurement, but output their data to a computer, which makes them much cheaper. Professionals frown a bit on them, but the Hantek 6022BE was within my price range, so I decided to give it a shot.
It was a great purchase - in particular due to OpenHantek, an open-source alternative to the included software that future-proofs the investment. The user guide got me to the point where I could measure the clock of a working Atari, then compare it with my circuit (click to zoom).
In both cases, I got a steady wave, pulsing once each ~277ns, which the software computes to be a frequency of 3.60 MHz. The actual values should be 279.365115ns and the 3.579545 MHz mentioned above, but I had to mark the period manually, and this is as precise as I could get; maybe there is a way for the software to auto-zoom on a single cycle - I have to figure that out.
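For reference, the expected period is just the reciprocal of the crystal frequency, and the manually-marked ~277ns maps back to a frequency in the same ballpark:

```python
f_hz = 3_579_545
period_ns = 1e9 / f_hz
print(round(period_ns, 3))     # 279.365 (ns per cycle)

measured_mhz = 1e9 / 277 / 1e6
print(round(measured_mhz, 2))  # 3.61 - within eyeballing error of the real value
```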
That was enough for me to organize and shorten the components into a tidier circuit. Notice the long horizontal cable connecting the clock output to TIA pin 11, where the Arduino used to provide a much slower pulse.
At that point I had a clock that probably puts the CPU/TIA to work at full speed, so now it’s all about hooking it up to the TV, right? Not so fast: the TIA generates a few separate signals that need to be combined properly into a format that the TV can understand.
The Atari 2600 does so by generating an RF (antenna) signal, which is complicated, prone to interference, and hard for most modern TVs to handle - so a better option for modern TVs is composite video. Roughly speaking, it combines a sync wave (that tells the TV when each frame starts) with the black-and-white and color components (aka “luminance” and “chrominance”) of each pixel (in a non-trivial way that ensures backwards compatibility with black-and-white TVs).
Modern-day Atari owners often install a “composite mod” on their consoles: a small circuit that extracts and combines the signals mentioned above from the Atari main board. I tried using a few mods, but they all depend partially on existing components on the Atari board.
I found a circuit on the AtariAge forum (based on a chroma/luma one from the Atari FAQ) that includes the Atari parts (the ones on the blue border). It’s slightly more complex than others, but was the only one that generated anything on my TV.
Like I did in the previous post, I used the Atari 2600 Hello, World program that I created for an Atari 2600 Programming presentation, because we still don’t have the RAM, timers or inputs that any regular game would require. Also, it’s kinda fun to start with a “Hello, World”.
My first attempt (which I livestreamed in Portuguese) was an epic failure (didn’t check for shorts and smell of burning ensued), but the second one worked… if you ignore the lack of color and the rolling screen 😅
This often happens when Atari software fails to “race the beam”, but that piece of code is, to put it shortly, too minimalistic to fail. After checking the TV menus, I found the issue: the TV was recognizing the signal as PAL, not NTSC.
Those systems expect different frame sizes, which explains the scrolling (the software starts sending the next screen before the TV is done with the previous one).
TVs of that time were electrical devices driving a cathode ray tube, tied to (among other things) the electrical frequency of the outlets (60Hz in countries that use NTSC, 50Hz in those that use PAL - which relates directly to the 50 vs. 60 frames per second). If you used the wrong frequency, scrolling would indeed happen (if you want details, click the image for a great Technology Connections video about that):
In that context, my modern(ish) TV is more like an “emulator”, trying to identify and decode the signal into its fixed array of LCD pixels. That is hit-and-miss (of the three TVs I tested, this was the only one that showed any image at all). I tried adding a CD4050 chip as a buffer (like the FAQ circuit does), with no improvement.
What actually worked was “cleaning up” the circuit - trimming the components and fitting them properly (like I did with the clock). That convinced the TV to recognize the signal as NTSC. Swapping the crystal also gave it more stability (maybe the livestream incident broke it? Glad I bought a pack of 3 😅).
Anyway, I finally got to the first milestone I envisioned when I started this journey, years ago: `HELLO WORLD`! 🎉
Even with the fixed circuit, the image is a bit unstable, colors are wrong and there is an odd bar on the right. But instead of debugging these, I’ll add the RIOT (the last chip), which should allow me to run actual games and fine-tune the circuit.
I also want to update the Fritzing sketches with all those circuits once I settle them (so anyone wanting to rebuild a custom Atari can have a more readable starting point), add or build a controller… we’ll see!
]]>In the previous posts I made the CPU work on the breadboard, then added a cartridge connector, all using jump wires - which can be easily reconnected, labeled, etc., but have a downside: they disconnect easily. Coupled with the equally flimsy cart connector, all my attempts at moving on with the project would result in failures.
After seeing Ben Eater’s beautiful breadboard computers I decided to rewire the boards I already had. For that, I’d have to rethink my cartridge connector: instead of having the jumper cables going out of it (left), I got some long pin female headers that extended the pins so the connector now fits the board like any other chip (right):
Armed with that and Ben’s video with tips, I rewired everything I had (minus the “pause” button, as it will be replaced on the next step). Other than the Arduino (which is temporary), everything is now nice, tidy and color-coded (data is yellow, addresses are green, +5V is red and ground is blue):
The Television Interface Adapter (TIA) is the only custom chip in the Atari. It was designed to generate the video and audio signals, and, to reduce costs, it has nearly no video memory, requiring programmers to sync their code with the TV signal - with microsecond precision. To aid in that task, the TIA controls the 6507 CPU in two ways:

- It provides the CPU clock, sending one pulse to the 6507 for every three pulses it receives.
- When the software writes to the `WSYNC` address, the TIA halts the CPU (using the `RDY` pin) until the current scanline in the video finishes drawing.

For that magic to work, we need to connect the 6507 clock pins (`ø0` and `ø2`; the latter is an output that seems to help keep things in sync) and the `RDY` pin to their matching TIA pins (keeping the pull-up resistor); we’ll also connect the `R/W` pin between them (so the TIA can know whether the 6507 wants to read or write to it).

And, of course, connect the data pins (`D0`-`D7`) and the lower address pins (`A0`-`A5`), so the data can flow from/to the proper addresses. Finally, connect two address pins and two fixed voltages to the “chip selector” TIA pins (`CS0`-`CS3`), so the TIA knows when the CPU is talking to it (as opposed to the cart or the other upcoming chip).
I found some TIA pin descriptions online, but they had a few inaccuracies; so I relied mostly on the original schematics and built this table for the TIA-to-CPU connections:
| Pin Name | TIA # | 6507 # |
|---|---|---|
| `ø0` | 4 | 27 |
| `ø2` | 26 | 28 |
| `RDY` | 3 | 3 |
| `R/W` | 25 | 36 |
| `D0` | 14 | 25 |
| `D1` | 15 | 24 |
| `D2` | 16 | 23 |
| `D3` | 17 | 22 |
| `D4` | 18 | 21 |
| `D5` | 19 | 20 |
| `D6` | 33 | 19 |
| `D7` | 34 | 18 |
| `A0` | 32 | 5 |
| `A1` | 31 | 6 |
| `A2` | 30 | 7 |
| `A3` | 29 | 8 |
| `A4` | 28 | 9 |
| `CS0`/`A12` | 24 | 17 |
| `CS3`/`A7` | 21 | 12 |
and one for the direct-to-power connections:
| Pin Name | TIA # | +/- |
|---|---|---|
| `CS1` | 23 | +5V |
| `CS2` | 22 | GND |
| `VSS` | 1 | +5V |
| `VCC` | 20 | GND |
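With CS0 wired to A12, CS3 wired to A7, and CS1/CS2 tied to fixed voltages (per the connections above), the TIA ends up answering for the low addresses. A quick sketch of that decoding (the example addresses are mine, following the standard Atari memory map):

```python
def tia_selected(addr):
    """With CS0 on A12 and CS3 on A7 (CS1/CS2 satisfied by fixed
    voltages), the TIA is selected when both address lines are low."""
    a12 = (addr >> 12) & 1
    a7 = (addr >> 7) & 1
    return a12 == 0 and a7 == 0

print(tia_selected(0x0002))  # True  - a TIA register address
print(tia_selected(0xF000))  # False - that one goes to the cartridge
```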
Throwing a 100nF capacitor close to the `VSS` pin gives the TIA clean power (like I did for the CPU). I stacked a second breadboard on the first to add the TIA, using the same color conventions as before for power, data and address signals, adding white for non-address/non-data TIA-to-CPU connections.
I aligned the TIA with the breadboard numbers, which proved to be a mistake: several cables needed to go through the left side and had to hang outside of the board. Other than that, it looks pretty nice:
Once again, I’ll plug in an Arduino for testing, connecting pins 2-9 to the data lines as in the previous post. This time, I’ll connect the Arduino-generated clock pin to the TIA clock input (`OSC`, pin 11), and slightly reduce the delays to 10ms, speeding the clock up to roughly 50Hz (still around 24 times slower than the real Atari):
// Turns an Arduino into a 50Hz clock generator (on pin A5)
// and a monitor for an 8-bit data bus (pins 2-9)
void setup() {
  for(int pin = 2; pin <= 9; pin++) {
    pinMode(pin, INPUT);
  }
  pinMode(A5, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  // Clock pulse
  digitalWrite(A5, HIGH);
  delay(10);
  digitalWrite(A5, LOW);
  delay(10);

  // Print current data bus (pins 2-9)
  int data_value = 0;
  int power_of_two = 1;
  for(int bit = 0; bit <= 7; bit++) {
    data_value += digitalRead(bit + 2) * power_of_two;
    power_of_two *= 2;
  }
  if (data_value < 0x10) { Serial.print("0"); }
  Serial.println(data_value, HEX);
}
Instead of an actual game, this time I loaded my Harmony Cart with the Atari 2600 Hello, World program that I created once for an Atari 2600 Programming presentation, because we don’t have RAM or timers yet, and that software doesn’t use either.
Once again, let’s have the Stella disassembly handy:
and check the Arduino output (right after we reset the CPU with the button on the breadboard). This time I’ll remove the timestamps, group the numbers together and put them in a Gist, to make it easier to analyze:
First thing to notice: almost all numbers print three times. That is expected: the TIA sends one clock pulse for each three it receives, and we monitor the data lines every time we send a clock pulse to the TIA.
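That triplication is easy to filter out after the fact; a quick sketch (the sample log is made up, and it assumes the capture starts at a CPU cycle boundary):

```python
def collapse(samples, ratio=3):
    """Keep one data-bus reading per CPU cycle from a log captured
    at the (three times faster) TIA clock rate."""
    return samples[::ratio]

# Made-up excerpt in the style of the Arduino output:
log = ["00", "00", "00", "A9", "A9", "A9", "02", "02", "02"]
print(collapse(log))  # ['00', 'A9', '02']
```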
When we account for the triple vision, the output is not unlike the one in the previous post: line 1 shows the 6502 reading the two-byte location of the reset vector on the cart (`F000`, read in reverse order). Line 2 matches the instruction at that address (`LDA #$02`, that is, `A9 02` in machine code), and line 3 is the following instruction (`STA VSYNC`, or `85 00`). The `STA` writes the value `02` to memory, which takes an extra cycle (line 4, with the `02` forming at the end of the cycle).
Things get odd at lines 7, 10 and 13, where we see a lot of repeated numbers. What gives? Well, the lines above them are `STA WSYNC`, and as stated above, when we write to that address, the TIA uses the `RDY` pin to “freeze” the 6502 until an entire scanline is produced; which is why the data bus doesn’t change until we total 228 cycles.
If you pay attention, you may notice that we have a little fewer than that number of bytes on lines 10 and 13, and even fewer on line 7; that is because we have to discount the cycles “spent” since the current scanline started.
Anyway, this more than proves that our TIA is up and running!
One may wonder why I don’t plug this into the TV, given there is a video chip. We’re missing a couple of things that I plan to add in the next post, in which I expect to finally generate some image!
]]>A long time ago I grabbed the three chips from a broken Atari 2600 (Jr.), to see if I could build an Atari with them on a solder-less “breadboard”. My first attempt (post here) was to drive the CPU with an Arduino, which showed the chip advancing through what it believes to be memory, but is actually just a single “no operation” (`NOP`) hard-wired instruction:
It took some time (between finding the right connector, 3D-printing a part, figuring out the wiring and fixing the Arduino software), but I finally moved on to the next step: plugging a real Atari cart and seeing some actual code running!
An Atari cartridge contains a ROM (Read-Only Memory) chip, meaning we’ll only read data from it. The 6507 CPU can request any single byte from the cart by setting that byte’s memory address, in binary form, on a given set of its pins (the address bus); the cart responds with the byte at that position on another set of pins (the data bus).
In fact, these buses are used to both read and write bytes to all other parts of the system, but for now we only care about the cart. My first step was to get the cart pinout, which you can find in several places, but people often forget to mention the orientation of the pins, whether we are seeing the cart or the slot, etc., so I went with the original Atari schematics, which shows the connector as seen by someone looking directly at the console:
From looking at it, the connector consists pretty much of address (`A0`-`A12`) and data (`D1`-`D8`) lines (plus a +5V and two GND pins), so connecting those to the matching CPU and voltage pins on our board should do the job - no extra electronics needed this time.
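The bus transaction described above can be sketched like this (hypothetical ROM contents; assuming a plain 4K cart, which simply answers to its low address lines):

```python
rom = bytes([0xA9, 0x02, 0x85, 0x00]) + bytes(4092)  # hypothetical 4K cart image

def cart_read(addr):
    """The cart only sees its address lines, so a plain 4K ROM answers
    with the byte at the requested address modulo 0x1000 (4096)."""
    return rom[addr & 0x0FFF]

print(hex(cart_read(0xF000)))  # 0xa9 - the byte at the start of the ROM
```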
In non-Atari lingo, the socket is a 24-pin “edge” connector - which is just like a computer peripheral card “slot”, only smaller. It isn’t a trivial size, but with the right name you can find it online (or use the link above).
Unfortunately you can’t just plug a cart into the connector, because carts (or, at least, the Atari-manufactured ones) have a sliding plastic protector that only opens when the cart is inserted in the matching plastic guide - and that one isn’t manufactured anymore.
I was lucky not to be the only one with this problem. In particular, people hacking Atari Flashback mini-consoles to add a cart slot also required one, so they created a model that I could download and 3D-print (of course, there are other options you may consider):
The fit wasn’t perfect, but with some epoxy and a bit of drilling, I managed to fix the connector to the guide. I connected some female-to-male jumper wires (hint: use longer ones), inserting a toothpick to keep them firm, then labeled them according to the schematics:
Starting with the breadboard from the first post, I removed the hard-wired NOP instruction, connected the address/data lines to the matching pins on the 6507, and wired the +5V (socket pin 23) and ground (socket pins 12 & 24) to the power lines.
One thing to notice: the Atari schematics refer to data pins as D1-D8, whereas the 6507 names them D0-D7 (starting from 0 like the address pins, and like bits are usually numbered). But at least they are aligned on the chip, so it wasn’t (too) confusing.
Another thing to pay attention to: for some reason, A10 and A11 are flipped on the connector - the sequence, looking left-to-right from outside the console, is A8, A9, A11, A10 and A12. Remember to flip them as you connect the wires.
Speaking of pins, the previous method of monitoring the address lines worked fine when addresses were just growing sequentially, but monitoring an actual program this way was too difficult, so I switched to wiring the Arduino to the data lines instead. That will show the actual ROM bytes as they were requested by the CPU for execution (as long as we tweak the monitoring program, which I had to do anyway, see below).
Here is the updated drawing, with the Arduino connected to the data lines, and the cart connected to data and address. I included the power connections as well, so everything needed is there. I recommend opening the .fzz file on Fritzing, which has the pin names on the cart connector (it doesn’t resemble the connector a lot, I know; but it’s the first time I customized a part in the software).
The test program generates a (slow - 10Hz) clock pulse to keep the processor running. At each pulse, it prints the hex value from the data bus in the Arduino IDE serial monitor (set the speed to 115200). It is much smaller than the original code, and several issues (such as use of serial I/O pins and wonky binary conversion) were fixed.
// Turns an Arduino into a 10Hz clock generator (on pin A5)
// and a monitor for an 8-bit data bus (pins 2-9)
void setup() {
  for(int pin = 2; pin <= 9; pin++) {
    pinMode(pin, INPUT);
  }
  pinMode(A5, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  // Clock pulse
  digitalWrite(A5, HIGH);
  delay(50);
  digitalWrite(A5, LOW);
  delay(50);

  // Print current data bus (pins 2-9)
  int data_value = 0;
  int power_of_two = 1;
  for(int bit = 0; bit <= 7; bit++) {
    data_value += digitalRead(bit + 2) * power_of_two;
    power_of_two *= 2;
  }
  if (data_value < 0x10) { Serial.print("0"); }
  Serial.println(data_value, HEX);
}
To test it, I used a cart with 2048 2600 since, as the author, I’m familiar with the code. The Stella screenshot below shows the initialization code, and we’ll be looking for the bytes (opcodes) on the right side:
To be precise: once we press and release the “reset” push button, we expect the 6507 to read the address of that code (F914) from its standard location on the cartridge, then start reading the opcodes above (78, D8, A2, 00, …) in sequence, until the BNE instruction at F91D loops back to reading again from a few lines above (CA, 9A, 48, …), and repeats that a bunch of times.
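That two-byte read follows the 6502 family’s little-endian convention (low byte first); assembling the vector into an address looks like this in Python:

```python
# 6502-family vectors are little-endian: the low byte comes first.
def vector_to_address(lo, hi):
    return (hi << 8) | lo

# Reading 14 then F9 from the RESET vector points the CPU at F914
print(hex(vector_to_address(0x14, 0xF9)))  # -> 0xf914
```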
That will be enough to show us whether this mess of wires is working - and indeed it is! Check this snippet of the serial monitor output (comments and disassembly after #), compared to the values above:
...
00 # Some gibberish here, until 6507 resets
00
14 # 6507 reads the contents of RESET vector: the lowest byte (14)...
F9 # ...then the highest (F9) of F914, which is where our code starts
78 # SEI # read from address F914
D8 # CLD # read from address F915
D8
A2 # LDX #$ # read from address F916
A2
00 # 00
8A # TXA # read from address F918
A8 # TAY # read from address F919
A8
CA # DEX # read from address F91A
CA
9A # TXS # read from address F91B
9A
48 # PHA # read from address F91C
48
D0 # maybe a premature read of next instruction?
00 # the value that would be sent to the stack - if we had RAM :-)
D0 # BNE
FB # F91A # address calculated as "5 bytes before"; FB here means -5
A9
CA # DEX # again from address F91A
9A # TXS # again from address F91B
9A # ...
48
48
D0
00
D0
FB
A9
CA
9A
...
We print the value on the data bus once per clock cycle - since instructions take a different number of clock cycles to run, we see the uneven repetitions. But overall, we are running the code in the cart - mission accomplished! 🎉
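One detail from the log worth unpacking: the FB after the D0 (BNE) opcode is a signed 8-bit offset in two’s complement, applied after the program counter has moved past the 2-byte instruction. A quick Python check:

```python
# Branch offsets are signed bytes, relative to the address right
# after the 2-byte branch instruction.
def branch_target(branch_address, offset_byte):
    offset = offset_byte - 0x100 if offset_byte >= 0x80 else offset_byte
    return branch_address + 2 + offset

# The BNE at F91D with offset FB (-5) lands back on the DEX at F91A
print(hex(branch_target(0xF91D, 0xFB)))  # -> 0xf91a
```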
Hope I don’t take as long as I did this time to continue with this experiment. I wonder if I can add the TIA in some capacity without going to a full-speed clock (which will be hard to monitor without an oscilloscope, so I’m deferring that as much as I can). I’ll see as I tinker. Stay tuned!
I sold that one years ago, but having some floor space and time now, I decided to buy a “new” one on eBay. Not having a Playstation these days, I planned to use StepMania (the open-source DDR clone), but my mat was missing the USB adaptor. A Playstation-to-USB one gets recognized, but the arrows do not register correctly.
The adaptor I needed would plug into the XBox connector (classic XBox controllers are quite close to USB in nature, as we’ll see below). They are near-impossible to find, but it seems the breakaway cable that came with the controller can be converted into such an adaptor.
The operation is trivial:
I was going to solder them together and tape (like the video), but it seemed too flimsy for me, so I soldered the wires into a protoboard, then fixed the set on a small box with my trusty Durepoxy.
Ugly, I know, but sturdy. Once the box was closed and finished with some cleanup and electrical tape, it looked much better:
As for the software, I used this macOS XBox Controller Driver. To test it (and Mac controllers in general), I recommend Controllers Lite.
With that set, I downloaded StepMania, added some songs from the usual places, and spent a great afternoon playing! 🕺
A positive surprise was that no remapping was needed: Controllers Lite shows the arrows registering as both axis and button presses, and StepMania recognizes the arrows as it should, the A/B buttons as OK/back, and even the cancellation shortcuts with select+start.
P.S.: If you can’t find the original XBox breakaway cable, this page says an XBox 360 one (which is already USB on the other end) will do, as long as you sand the connector a bit to make it fit on the mat/controller. I’m curious to try that.
Photos: Raquel Camargo
There is just one problem: Home Assistant doesn’t know whether the TV is on or off. If it is already on when I start casting, sending the command will turn it off - the opposite of what I want. Also, I would like to turn the TV off when not using the Chromecast (something it doesn’t seem to do, even with HDMI-CEC).
I am not the only one: people have already asked around, and one of the ideas was to use the USB port (that a lot of sets have for playing media or firmware upgrades). A quick multimeter test on mine showed that it is only powered when the TV is on, so it’s just a question of monitoring it and forwarding the info to Home Assistant.
The simplest idea for monitoring the state would be to connect the power output of the TV to a GPIO pin on the Raspberry Pi. However, those pins expect 3.3V, and USB operates at 5V, so a direct connection would fry the RPi. I knew I’d need what is known as a “voltage divider” - a setup that (in its simplest form) uses two resistors to extract a lower voltage from a higher one.
The good news: someone had already done the homework for me, noticing that the Raspberry Pi already provides one of the resistors and calculating the value of the other. So it was as simple as:
You can reuse any cable with a USB connector - they are usually color-coded, red being the 5v, and the silver, non-isolated wire is GND. Raspberry Pi pinout is here, but here is my wiring for this setup:
To test it, we can invoke python2 and type some Python code:
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
while True:
    print GPIO.input(11)
    time.sleep(1)
This prints 1 when the TV is on and 0 when it is off. Here is a test I did, plugging and unplugging the USB to a power adaptor:
On the actual TV, it takes some time to pick up the “off” state because the TV slowly reduces the output instead of cutting it straight (I checked with a multimeter, it takes quite a few seconds to go from ~5V to ~0).
Anyway, it shows that the hardware works, so we can move on to exposing it in Home Assistant. It will appear on the panel as a binary_sensor just by adding these lines to the binary_sensor section of configuration.yaml (creating one if you don’t have it):
binary_sensor:
  - platform: rpi_gpio
    ports:
      17: PIR TV  # rpi_gpio uses BCM notation => physical pin 11 = GPIO17
    pull_mode: DOWN
And it almost works 🥺. Even though the sensor shows up on the interface (alongside the Chromecast, on the screenshot) and switches to on when I turn the TV on, it does not switch to off when I turn the TV off. Never.
It happens that (unsurprisingly) Home Assistant’s code is more efficient than mine, using threaded callbacks instead of checking the state every second (details).
So I changed my test code to match Home Assistant’s:
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def cb(port):
    print GPIO.input(port)

GPIO.add_event_detect(11, GPIO.BOTH, callback=cb, bouncetime=1000)
Instead of continuously printing, it just outputs the current state when it changes, and works as expected when plugging/unplugging from the power adaptor.
When connected to the TV, turning it on produces a “1”, but turning it off also produces a “1”. It isn’t a bounce issue (I tried changing bouncetime, to no avail).
My other guess was that the state returned by GPIO.input isn’t yet updated when the cb function is fired by the callback, likely due to the slow discharge. To confirm that, I included a little pause (200ms) in the function before reading the state, and, lo and behold, that fixed the problem. The code below consistently prints “0” when I turn the TV off:
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def cb(port):
    time.sleep(0.2)  # Pause for 200ms
    print GPIO.input(port)

GPIO.add_event_detect(11, GPIO.BOTH, callback=cb, bouncetime=1000)
I could change the Home Assistant code on the Pi to do that (maybe accepting an optional delay parameter in the same way it accepts a bounce time), but I had a hard time running their tests, so it will be a while before I can submit a contribution to the project (which may or may not be accepted). For now, I went with a different approach:
To be honest, I haven’t been using the Raspberry Pi GPIO for a while, precisely because of this type of issue: running I/O on a non-realtime (or at least not very predictable) OS leads to inconsistent reads. Instead, I’ve switched all the I/O in my home automation to a separate board (a NodeMcu Lua ESP8266, which behaves like an Arduino, but is more compact and has built-in Wi-Fi).
The board runs OpenMQTTGateway, a software that makes it easy to forward hardware events to the Raspberry Pi (here is how I set it up to work with Home Assistant) and brings us the best of two worlds: the stability of the microcontroller and the software flexibility of the Pi.
For this setup, we don’t have the Raspberry-Pi-provided pull-down, so we’ll need two resistors to provide the voltage divider (i.e., bring the TV USB 5V down to the 3.3V the board can monitor).
There is a formula you can use to find a pair of resistors, but I was lazy and just used this calculator, entering 5V as the voltage input and playing with values until I got a pair I had lying around (R1 = 75Ω and R2 = 150Ω) that gave an approximately 3.3V output.
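The calculation itself is a one-liner (Vout = Vin × R2 / (R1 + R2)); here it is in Python, confirming the pair above:

```python
# Voltage divider output: Vout = Vin * R2 / (R1 + R2)
def divider_out(vin, r1, r2):
    return vin * r2 / (r1 + r2)

# 5V through R1 = 75 ohms and R2 = 150 ohms:
print(round(divider_out(5.0, 75, 150), 2))  # -> 3.33
```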
Here is how I wired them (you must choose a pin you are not using for some other I/O):
Opening OpenMQTTGateway’s source code in the Arduino IDE, I enabled the monitoring by removing the // at the start of this line in User_config.h:
#define ZsensorGPIOInput "GPIOInput" //ESP8266, Arduino, ESP32
and in config_GPIOInput.h, I configured the pin I’m using by adding it to the first #define in the block below PIN_DEFINITIONS (the correct number depends on your board). On the one depicted above, D5 means GPIO14, so we’d go with:
#if defined(ESP8266) || defined(ESP32)
#define GPIOInput_PIN 14
#else
#define GPIOInput_PIN 7
#endif
Of course there are other configurations you may want to change to ensure the software connects to your Wi-Fi network, and that the Raspberry Pi can subscribe to the events published by OpenMQTTGateway (see the docs and my previous post).
Once everything is set up, it is possible to ssh into the Raspberry Pi and monitor the queue with:
mosquitto_sub -t \# -v
As the TV is turned on and off, the following events appear:
home/OpenMQTTGateway/GPIOInputtoMQTT {"gpio":"HIGH"}
home/OpenMQTTGateway/GPIOInputtoMQTT {"gpio":"LOW"}
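In other words, the binary state is just a matter of parsing the JSON payload - something like this Python sketch (the function name is made up; the actual mapping is done via the Home Assistant config):

```python
import json

# Maps an OpenMQTTGateway GPIO payload to a binary on/off state
# (illustrative sketch; Home Assistant does this via payload_on/payload_off)
def tv_state(payload):
    gpio = json.loads(payload)["gpio"]
    return "on" if gpio == "HIGH" else "off"

print(tv_state('{"gpio":"HIGH"}'))  # -> on
print(tv_state('{"gpio":"LOW"}'))   # -> off
```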
That allowed me to add a binary_sensor to Home Assistant’s configuration.yaml. Like I did with the sensors in the aforementioned post, I used the mqtt platform, telling it to watch for the messages above:
binary_sensor:
  - platform: mqtt
    name: Living Room TV Power
    state_topic: "home/OpenMQTTGateway/GPIOInputtoMQTT"
    payload_on: '{"gpio":"HIGH"}'
    payload_off: '{"gpio":"LOW"}'
    device_class: power
That makes the sensor appear, and this time it reacts to both on and off!
The final goal is to monitor the state of my Chromecast (media_player.living_room_tv). When it changes from off to anything else, I want to send a power toggle command to my TV (which I defined as switch.tv when I set up IR) - but only if the sensor we just installed says the TV is off.
In Home Assistant language, that translates to these lines in automations.yaml:
- alias: tv_on_when_start_casting
  trigger:
    platform: state
    entity_id: media_player.living_room_tv
    from: 'off'
  condition:
    condition: state
    entity_id: binary_sensor.living_room_tv_power
    state: 'off'
  action:
    - service: switch.toggle
      entity_id: switch.tv
Conversely, if I want it to turn off the TV when I’m done with the Chromecast (and again, only if I haven’t turned it off already):
- alias: tv_off_when_stop_casting
  trigger:
    platform: state
    entity_id: media_player.living_room_tv
    to: 'off'
  condition:
    condition: state
    entity_id: binary_sensor.living_room_tv_power
    state: 'on'
  action:
    - service: switch.toggle
      entity_id: switch.tv
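Taken together, the two automations implement one rule: toggle the TV whenever the Chromecast state and the power sensor disagree. A Python sketch of that logic (hypothetical helper, not Home Assistant code):

```python
# Toggle only when the Chromecast and the TV power sensor disagree
def should_toggle(chromecast_state, tv_power):
    casting = chromecast_state != "off"
    tv_on = tv_power == "on"
    return casting != tv_on

print(should_toggle("playing", "off"))  # -> True (started casting, TV off)
print(should_toggle("off", "on"))       # -> True (stopped casting, TV on)
print(should_toggle("playing", "on"))   # -> False (nothing to do)
```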
A few quirks still remain. For example, if I switch sources without giving the setup a few seconds to catch up, the TV will turn off, but not on again. Worse: if I switch to another HDMI source without disconnecting, the Chromecast will become idle after a while, and turn the TV off at the worst possible moment.
But those are likely fixable by tweaking the automations, and in general I just start casting and everything works!
However, this one had names like Victor Trucco (one of the most respected Brazilian retrocomputing hackers) and Rick Dickinson (industrial designer behind several Sinclair computer cases, who sadly passed away before it was finished) behind it, so in May 2017 I gave it a shot and backed the campaign in exchange for a unit.
Expected to ship in January 2018, it was delayed for more than two years, but for good reasons: the people behind the project would not accept anything but the best quality, continuously pressuring manufacturers to stay on-spec. And it was worth the wait - the computer is sturdy and gorgeous:
The ZX Spectrum was one of the most influential computers of the 80s. It matched the (relatively) low price of its Sinclair predecessors with capabilities like color graphics, sound, and enough RAM to make it capable of all sorts of tasks - in particular, games.
In Brazil (where, at the time, it was legal to clone any foreign computer) we had the TK90X, a clone of the ZX Spectrum 48K, which I was lucky enough to have access to during my formative years. Here is one (from my retrocollector days), with a few software titles on cassette tapes, and a homemade sound chip expansion module:
(I did eventually own a ZX Spectrum +2 as a retrocomputing enthusiast, but that’s another story entirely. Back to the ZX Spectrum Next!)
Computer boxes at that time were neither the unimaginative packaging of typical PCs, nor the sterile whiteness of Apple ones. They used to showcase what the computer could do, and the Next goes with that idea, but with a modern look. I loved it.
The manual is another highlight: like manuals of the era, it covers everything from handling the hardware to teaching you BASIC - in this case, a souped-up version that unleashes the new hardware features, yet feels like the classic.
I am not a huge fan of unboxing videos, but the packaging of this computer deserved some special attention, so here it is:
Between the included material and the online portal, there is a lot covering the Next, so I decided to just post a couple of videos made right when I unboxed it.
The first one shows me turning it on and (after the one-time configuration screens) typing the classic “Hello World” program.
In the second one, I recreate a prototype “game” that draws a glyph on the screen and moves it in four directions when the directional keys are pressed. It’s a bit long, and the result won’t beat Fortnite in popularity any time soon, but it shows how fun it is to just play freely in BASIC. I was happy doing it back then, and I’m happy doing it now!
Video and stills by Raquel Camargo
TK90x photo by Carlos Eduardo Nogueira
One problem that I was having with them: the custom URL did not work inside my network, only from outside. That happens because my router does not support NAT loopback, blocking any requests from the internal network to the external IP (which is what my custom domain points to).
On a computer, there is an easy fix. Let’s assume your custom domain is foobar.duckdns.org, and the internal IP (the one you configured the router to forward a given port to) is 1.2.3.4. Adding a line like this to /etc/hosts (and commenting it out with # if the computer ever leaves the house) does it:
1.2.3.4 foobar.duckdns.org
For mobile devices, it isn’t that easy: neither Android nor iOS exposes /etc/hosts. And those devices enter and leave the house all the time, making it impractical anyway.
The workaround: since I already have the Raspberry Pi lying around, I installed dnsmasq (a lightweight DNS server) on it:
sudo apt-get install dnsmasq
and opened the firewall for its port, so my mobile devices can use that DNS when they are inside the network:
sudo ufw allow dns
Several online tutorials suggest config changes (such as enabling DHCP on dnsmasq - a no-no for my otherwise working network). I just kept the default config, added this line to /etc/dnsmasq.conf, and restarted the service:
address=/foobar.duckdns.org/1.2.3.4
With that, the DNS server will respond with the internal IP for the custom domain:
$ dig @1.2.3.4 foobar.duckdns.org
; <<>> DiG 9.10.6 <<>> @1.2.3.4 foobar.duckdns.org
...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10291
;; flags: qr aa rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
...
;; ANSWER SECTION:
foobar.duckdns.org. 0 IN A 1.2.3.4
...
and everything else will go to the normal DNS server for normal resolution:
$ dig @1.2.3.4 google.com
; <<>> DiG 9.10.6 <<>> @1.2.3.4 google.com
...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55137
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
...
;; ANSWER SECTION:
google.com. 171 IN A 172.217.165.14
...
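This split-horizon behavior (local overrides first, upstream for everything else) can be modeled in a few lines of Python (a toy resolver for illustration, not dnsmasq code):

```python
# Toy split-horizon resolver: local overrides win, everything else
# falls through to the upstream DNS server.
OVERRIDES = {"foobar.duckdns.org": "1.2.3.4"}

def resolve(name, upstream):
    return OVERRIDES.get(name) or upstream(name)

# Stand-in for a real upstream lookup:
fake_upstream = lambda name: "172.217.165.14"
print(resolve("foobar.duckdns.org", fake_upstream))  # -> 1.2.3.4
print(resolve("google.com", fake_upstream))          # -> 172.217.165.14
```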
The final step was to edit the Wi-Fi connection on the devices, so they ask dnsmasq for names when inside the network. iOS was the easiest: you just tap Configure DNS, switch to Manual, remove the auto-configured value(s) and add 1.2.3.4. Android requires changing the whole IP settings to Static (which also means you’ll need an IP assigned to your mobile phone on your router) and entering the information manually.
But that is a minor nuisance (that I only had to go through once). After that, the devices work inside and outside the house (which is great for Home Assistant, in particular if you have security things you want to check while away). The downside: if the Pi is down, the devices won’t have internet (which is why I kept the /etc/hosts solution on the computer), but you can always switch back (or to mobile data) if that happens. Overall, I’m happy now.
When I saw these $3 leak detectors on eBay, I decided to give them a shot. Not only for the low price, but also because they used 433MHz RF - the same tech I use to voice-control my lights from a Raspberry Pi.
Once they arrived, I ran RFSniffer and indeed, when they get wet, the Raspberry Pi prints a different value for each sensor - so it should be easy to wire up an alert system… right?
Well, it wasn’t. I found it odd that Home Assistant doesn’t have a 433MHz input integration (despite having a few for output). The reason is that “sniffing” from Raspbian can be clunky (you need a daemon running) and unreliable (due to its non-realtime nature).
technicalpickles suggested I should check out MQTT, so I did. It is a “publish/subscribe message transport” - something I’m familiar with (having worked with a few pub-sub systems, including the venerable IBM MQSeries from which the “MQ” in “MQTT” comes), but honestly, it felt like over-engineering at first.
Eventually, I realized it is more of a divide-and-conquer approach, with these parts:
Splitting things like that (and using MQTT as the glue) means I don’t have to write any new software:
I got a new Arduino Uno for the project (my other Arduino clones/models didn’t meet the memory requirements of OpenMQTTGateway). Since it needs to send the events to the network, I added an Ethernet Shield, and for RF I used my newest Long Range 433MHz RF Receiver module. The long range and built-in antenna made it my receiver of choice over the more popular RF modules - and it’s just as cheap as those.
By bending the data output pin on the RF receiver a little bit, the receiver can be inserted directly on the shield, just matching the VCC and GND pins with 5V and GND on the board (respectively). The data pin can be then connected to Arduino digital input 3 with a small breadboard jumper wire.
The final result is a bit taller than I wished, but hey, no soldering required:
Assuming you are using Raspbian Buster or later, just log on to the Pi and type:
sudo apt-get install mosquitto mosquitto-clients
and you are good. Seriously, that’s it.
Technically you just need the mosquitto package, but the other one allows you to test your installation by running:
mosquitto_sub -t \# -v
which prints all messages published to the broker. You can publish a message by opening a second terminal window and running a command like this:
mosquitto_pub -t "some/test/topic" -m "hi this is the message"
The first terminal will show some/test/topic hi this is the message, indicating that hi this is the message was published under the topic some/test/topic.
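The \# in the subscription is MQTT’s multi-level wildcard (the backslash just stops the shell from interpreting the #): topic filters are matched level by level, with + matching exactly one level and # matching everything from that point on. A toy matcher in Python (real brokers handle more edge cases):

```python
# Minimal MQTT topic filter matching: "+" matches one level,
# "#" matches all remaining levels.
def topic_matches(topic_filter, topic):
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True
        if i >= len(t_parts) or (f != "+" and f != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("#", "some/test/topic"))             # -> True
print(topic_matches("some/+/topic", "some/test/topic"))  # -> True
print(topic_matches("some/test", "some/test/topic"))     # -> False
```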
This project’s wiki contains several configuration guides, including one for my setup, that is, an Arduino reading RF signals and publishing to a MQTT broker.
Here are the changes I made to User_config.h (after downloading the “CODE-“ release and moving the lib folder as explained on the wiki):
In the section DEFINE THE MODULES YOU WANT BELOW, uncomment (that is, remove the // from):

#define ZgatewayRF "RF" //ESP8266, Arduino, ESP32

Leave all other #define Z... lines commented out (with //).
In the line char mqtt_server[40] = "...", put the IP address of the Raspberry Pi (between the quotes).
Replace the zeros in the line const byte ip[] = { 0, 0, 0, 0 }; //ip adress with an IP address for the Ethernet Shield that is compatible with your network (even though the comments say the software supports DHCP, it wasn’t working for me).
IMPORTANT: Uncomment the line below so each sensor publishes events to a specific MQTT topic (instead of a single topic for all of them, which results in a barrage of No matching payload found for entity: ... with state_topic: ... errors):

#define valueAsASubject true
Uploading a sketch with those changes will make the Arduino connect to your broker (the Serial Monitor on the IDE will help you debug; fiddle with the baud speed until you see text instead of garbage).
If you still have mosquitto_sub -t \# -v open, you should see something like:
home/OpenMQTTGateway/LWT online
home/OpenMQTTGateway/version 0.9.1
and whenever a sensor gets wet:
home/OpenMQTTGateway/433toMQTT/VALUE {"value":VALUE,"protocol":...,"length":...,"delay":...}
where VALUE is (hopefully) unique for each of your sensors. In fact, you’ll see those events for everything else transmitting on the 433MHz frequency in your vicinity - I get a couple every minute in my apartment.
The first thing is to make Home Assistant aware of your new broker. You can do it on the UI (clicking +, selecting “MQTT” and setting localhost as the broker address), or by adding this to configuration.yaml:
mqtt:
broker: localhost
That will make Home Assistant subscribe to the broker, but you need to expose the events. There are two ways:
The trigger was tempting for my goal (getting notifications on my computer/phone when a potential leak is detected), but putting the sensors in the UI allows for richer integrations with other elements in the home. It also allows configuring fine details - for example, defining that a sensor is “on” when you get the message, but only gets “off” after X seconds without a message, so I went with it.
Just add these values to configuration.yaml, with one - section for each sensor (replacing “11111111”, “22222222”, etc. with the VALUEs from mosquitto_sub or RFSniffer):
binary_sensor:
  - platform: mqtt
    name: Washroom Leak Sensor
    payload_on: "11111111"
    value_template: "{{ value_json.value }}"
    off_delay: 10
    device_class: moisture
    state_topic: "home/OpenMQTTGateway/433toMQTT/11111111"
  - platform: mqtt
    name: Kitchen Sink Leak Sensor
    payload_on: "22222222"
    value_template: "{{ value_json.value }}"
    off_delay: 10
    device_class: moisture
    state_topic: "home/OpenMQTTGateway/433toMQTT/22222222"
  - platform: mqtt
    name: Some Other Leak Sensor
    ...
Once you restart, the sensors should be available on your Home Assistant main dashboard. I configure mine manually, so I built a nice little card with them:
The page updates dynamically, so you should see them flip as you wet the sensors, then go back after 10s (or however long you set the off_delay above):
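Under the hood, off_delay behaves like a retriggerable timer; here is a small Python sketch of the idea (illustrative, not Home Assistant’s implementation):

```python
# A sensor that turns "on" on a message and reports "off" once
# off_delay seconds pass without another message.
class LeakSensor:
    def __init__(self, off_delay=10):
        self.off_delay = off_delay
        self.last_wet = None

    def message(self, now):
        # A matching MQTT payload arrived at time `now` (seconds)
        self.last_wet = now

    def state(self, now):
        if self.last_wet is None:
            return "off"
        return "on" if now - self.last_wet < self.off_delay else "off"

s = LeakSensor(off_delay=10)
s.message(now=0)
print(s.state(now=5))   # -> on  (within the window)
print(s.state(now=15))  # -> off (10s elapsed with no new message)
```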
No one looks at a dashboard all the time (well, I don’t 😁), so I needed a way to send notifications to my mobile phone. The Telegram integration is a great way to do it.
Just open BotFather on the app, send it a /newbot command to create a bot for you, and get its TOKEN. Send any message to the newly-created bot, then visit https://api.telegram.org/botTOKEN/getUpdates (replace TOKEN accordingly) to get the ID of your personal user for that bot.
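While you are at it, the same Bot API lets you test the bot from a browser or curl with a sendMessage call; this little helper builds the URL (TOKEN and ID are placeholders, like in the text):

```python
from urllib.parse import quote

# Builds a Telegram Bot API sendMessage URL; open it in a browser
# (or curl it) to confirm the bot can reach you before touching
# Home Assistant. TOKEN/ID below are placeholders.
def send_message_url(token, chat_id, text):
    return ("https://api.telegram.org/bot%s/sendMessage"
            "?chat_id=%s&text=%s" % (token, chat_id, quote(text)))

print(send_message_url("TOKEN", "ID", "hi there"))
# -> https://api.telegram.org/botTOKEN/sendMessage?chat_id=ID&text=hi%20there
```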
Then add these lines to configuration.yaml, replacing TOKEN and ID with yours:
telegram_bot:
  - platform: polling
    api_key: TOKEN
    allowed_chat_ids:
      - ID

notify:
  - name: my_telegram
    platform: telegram
    chat_id: ID
Another Home Assistant restart, and you should be able to send messages to your Telegram app by clicking on “Services” and calling notify.my_telegram with something like { "message": "hi" }. The result:
The grand finale: sending the notification when a sensor turns on (wet) or off (dry). Here is where Home Assistant shines - something like this in automations.yaml does the trick:
- alias: kitchen_sink_leak_sensor_state_change_alert
  trigger:
    platform: state
    entity_id: binary_sensor.kitchen_sink_leak_sensor
  action:
    service: notify.my_telegram
    data:
      message: "Kitchen Sink Leak Sensor is {{ states('binary_sensor.kitchen_sink_leak_sensor') }}"
I had to add one for each sensor, but it was worth it. Here is my phone, telling me I should check the pipes under my sink:
How cool is that? Not at all? Well… I find it cool. 🤓
I’m quite happy with the results so far, but here are some things that could be improved:
The cause is a plastic latch that erodes slightly with use (or maybe after it gets jammed on the backplate when you mis-connect it). There is a cheap replacement for that latch that you can order here.
The replacement latch is made of metal (which should have been Nintendo’s original choice), and several videos (like this one) show how to replace it. No soldering is required, just some careful screw removal and disassembly.
With the usual caveats (you do it on your own risk, it voids your warranty, etc.), here is how it went for me.
Nintendo things often use tri-wing screws (in addition to the more common Phillips ones), so the kit comes with one of each screwdriver. It would be handy - if they weren’t horrible. Seriously, do yourself a favour and get a decent tri-wing.
Once you open the joy-con, be careful so you don’t damage the ribbon cables connecting the two halves.
Remove the screws that connect the black board to the back half, and put the latter aside.
After you get to the black board, lift the metal blade that holds the latch in its place.
Once the blade is removed, just replace the plastic latch with a metal one from the kit, fitting it over the old one’s spring.
When removing the old latch, be careful so the spring doesn’t fly off. Don’t worry if the spring or any buttons fall out of place - just put them back as you reassemble the controller.
It takes a little time, but it’s worth it: now my joy-con stays firmly in place, only sliding off when I press the button on the back.
With so much of my routine depending on that setup, backup became a concern. I’d make an occasional copy of the SD card with dd, but that isn’t a good long-term solution. Ideally, I wanted to rebuild my setup easily, should the card get corrupted, slow, or just wrecked by my ongoing hacking.
Enter Ansible. Sysadmins use it to write “playbooks” that represent the changes they would manually apply to a server. If done right, such playbooks can be applied to an existing server (fixing any broken configs), or a brand new one (to recreate its services).
The Raspberry Pi is just a (tiny) server - meaning hobbyists can use Ansible as well!
I’m not an Ansible expert (there are better places to learn about it), but my Ansible configs and these notes may be useful for anyone interested in automating Raspberry Pi setups (for home automation or anything else).
Raspberry Pi setup is typically done by downloading Raspbian and writing it to a (micro)SD card. I usually download the latest “Lite” version, so I can install just what I need and keep it snappy.
With that as a starting point, I created two Ansible playbooks:
The main playbook can be run as many times as needed - it will only configure things that aren’t already set up (Ansible peeps call that an “idempotent playbook”, I’ve heard).
Every server needs a couple passwords and keys, and since my playbooks are public, I encrypt those secrets using Ansible Vault. That works nicely for everything… except Home Assistant.
In theory, you can provide Home Assistant secrets on a separate file and just encrypt it, provision manually, etc. I have tried that, but every time I built the system from zero, I realized something was stored outside the standard config files (e.g., logins), or even scattered in binary datafiles (dynamic device information, some configs made on the UI, etc.).
After lots of frustration, I went with a different plan: I set up an encrypted daily backup of the Home Assistant configuration to a network drive (just a thumb drive on my router), and made the playbook restore the latest backup when a config-less install is detected.
The main downside is that my automations aren’t easily shared. But I can always write a post if I ever come up with something useful (so far, they are all pretty boring 😅).
What a horrible programmer I must be, because, you know, reuse is good™️… right? 😁
I tried to use Ansible Galaxy. Really. But the roles I found were often not generic enough (like almost doing what I wanted) and didn’t always support Raspbian. Galaxy also lacks a robust package management system (which may not even be feasible, given the free-form, “script-y” nature of Ansible that I like so much), so I went solo.
Of course, I took inspiration from a lot of third-party roles and playbooks (on Galaxy and around the web), and highly recommend doing so.
Good question! Hass.io is a prebuilt SD card image that manages a minimal OS with Home Assistant baked in, automatically updated with Docker.
I personally found it a bit too slow (at least on earlier-gen Raspberry Pi models), and I feel more comfortable with a Debian-based system that I can poke with a stick. But hey, all the cool kids are using Docker 😜. Seriously though, if it works for you, awesome - you’ll save yourself a lot of trouble.
The automatic Home Assistant updates are appealing, but with my solution, I can just axe the application directory (or the whole SD card, for that matter) and run the playbook, and the latest version will be there, with my configs unchanged.
Oh yes I do! With gusto. The main point of having a custom-made solution (other than cost and security) is tinkering. Ansible makes me confident that I can rebuild the whole thing quickly if I screw up, but yes, that requires me to keep the Ansible file up-to-date.
That’s actually easier than it sounds: once I’m happy with my changes, I type history
and figure out which steps (installed packages, changed config files) are really needed, then add those to the playbook. Run it a few times, undo some changes, check that it does nothing when the changes are already there… and that’s it.
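For instance, if the manual change was a line added to `/boot/config.txt`, the corresponding task could look like this (the specific line is just an example, not necessarily one from my playbooks):

```yaml
# lineinfile only touches the file if the line is missing, so the
# playbook reports "changed" once and "ok" on every later run.
- name: Enable the I2C bus on the Pi
  lineinfile:
    path: /boot/config.txt
    line: "dtparam=i2c_arm=on"
  become: true
```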
If the change was super complex and/or I’m afraid I forgot something, I can always run the playbooks against a fresh card, pop it in and kick the tires - it’s a great opportunity to get a fresh, snappy OS install for my tiny computer!
The console generates a TV signal, which the TV has to tune in just like a normal over-the-air channel. That was quite convenient at the time (and the quality was good enough for the TV sets we had), but modern TVs show degradation - and some can’t even pick up the faint signal. I had to hook mine to a VCR that would decode the signal into A/V.
That quirky setup led me to make the popular A/V conversion (“mod”) - and, while at it, replace the power adaptor (with one that I can actually keep on the wall without fear of burning down the house) and capacitors (something that should be done to any vintage electronics that you want to keep humming).
There are different types of mods, varying in how they mix (or split) the video and audio signals and what sort of output they generate. I opted to get an A/V output from the signals mixed by the Atari board (but before they get modulated into a TV signal), with the video signal amplified by a single transistor and a couple of resistors.
I based my mod on this version, which throws in a third 75Ω resistor that adjusts the impedance. Following the schematics there, I aligned the components like this (transistor is a 2N3904, flat side up; resistors are, from left to right, 75Ω, 3K3Ω and 2K2Ω):
There are ready-made circuit boards, but I just cut a piece of protoboard. Hint: don’t solder the cables like I did - instead, follow the “strain relief” tip here to secure them better.
The layout reduced the number of connections, so I could just throw an extra gob of solder over the terminals before cutting to make the jumper connections. Maybe I could have used a little less solder, but heh, it worked.
You need to pick up Video In and +5V/GND from the Atari board. Mine was a Rev 16, which has those three right on the connector of the RF box. That box had to be removed anyway, and doing so opened some space for the protoboard.
Audio comes straight from the Atari board into the inner part of the audio jack. I strongly recommend checking this mod assembly guide to figure out where to pick it up in your particular model/board revision.
The guide also helps in figuring out which components to remove in order to reduce interference. I was able to test before removing anything from the board (I just disconnected the RF box, something easy to revert if it didn’t work). In the end, I cut just one resistor (R209) and one transistor (Q202).
This is a good moment to replace the capacitors. Again, each model has its own set, but this thread has it figured out. I could not find a bulky C243 (I guess the technology for electrolytics improved), but stretching the legs on the modern one allowed me to solder it in.
You can find a lot of Atari 2600 power brick replacements online. But most of them have short cables, so I reused the original’s loooong cord on a high-quality 2A power supply with the proper adapter. Just ensure that it supplies 9V and a minimum of 0.5A, with polarity as labeled on the original (⊖ outside, ⊕ inside).
The most unexpected improvement was in audio quality. Even without a second audio jack, I get much better sound now than what I had with RF. The image has almost no artifacts now (it seems the occasional faint vertical line/shadow is a fact of life unless you go with more sophisticated mods).
Personally, I’m pretty happy with the results I got:
My friends were also super happy:
One of them got enough points in Frostbite to earn an Activision Patch 😁. Too bad we are a few decades late to send a picture to Activision, but here are the proof and her honorary patch.