The NeoSmart Files – Recovery software and more
https://neosmart.net/blog
Tue, 24 Oct 2023 18:42:11 +0000

Embed only the video from another post on X or Twitter
https://neosmart.net/blog/embed-only-the-video-from-another-post-on-x-or-twitter/
Sun, 22 Oct 2023 17:33:56 +0000

Twitter has a new-ish feature that lets you embed only the video from another post or tweet in a post/tweet of your own (without quote-replying the source tweet itself). Only the video is then embedded in your post, and a small attribution appears at the bottom identifying where the video came from:

In the screenshot above, Sarah is sharing a video that was originally shared by Luc, but she’s not embedding/quoting Luc’s tweet itself – only the video. This post will cover how to do that yourself, both on the desktop/web and in the iOS Twitter app on iPhone.

All of Twitter’s features are really just special-cased handling of URLs, and video embedding is no different. If you want to quote-reply, you are actually just posting something followed by the URL of the original tweet you want to quote. For example,

Look at the size of this crowd!
https://twitter.com/LucAuffret/status/1716085946016252251

ends up with the following quote reply:

And similarly, embedding just the video from a tweet is as simple as appending /video/1 to the URL of the source tweet. In this case:

Look at the size of that crowd! #LibérezPalestine
https://twitter.com/LucAuffret/status/1716085946016252251/video/1

becomes

On iOS, in the Twitter/X app, this is all done for you automatically. Just long-press on a video and choose the “Post Video” option from the menu that pops up, and Twitter will compose a new post for you with the full tweet URL (with /video/1 already appended) pasted in:

If the source tweet/post contains more than one video, you can change the 1 in /video/1 to a different number in order to embed a video other than the first one in the post.
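Since the whole trick is just URL manipulation, it’s easy to script too. Here’s a tiny illustrative Python helper (the function name is ours for illustration, not part of any Twitter API) that builds the video-only embed URL from a tweet’s URL:

```python
def video_embed_url(tweet_url: str, index: int = 1) -> str:
    # Appending /video/N to a tweet's URL tells Twitter/X to embed only
    # the Nth video from that post (N starts at 1).
    return f"{tweet_url.rstrip('/')}/video/{index}"

print(video_embed_url("https://twitter.com/LucAuffret/status/1716085946016252251"))
# prints https://twitter.com/LucAuffret/status/1716085946016252251/video/1
```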

Liked this post? Follow me on twitter @mqudsi or subscribe to new posts via email from the sidebar to the right!

Increment only numbers matching regex in Vim
https://neosmart.net/blog/increment-only-numbers-matching-regex-in-vim/
Fri, 13 Oct 2023 17:47:14 +0000

Long-time vim or neovim users are probably already aware that visually selecting a block of text then pressing CTRL + A in vim will result in any numbers in the selected block of text being incremented by 1. This works even if the block contains non-numeric text: each group of digits gets treated as a number and is incremented.1

For example, here’s a video that shows what happens when you select some text in vim and then use CTRL + A to increment the values:

(It’s also a fact that a lot of vim users learned about this functionality the terribly hard way: accidentally pressing CTRL + A then later realizing that all the numbers in their document were off-by-one for some unknown reason.)

But in all honesty, this isn’t a very useful mapping on its own, because it’s rare (at least in the programming world) to have numeric and text content completely separate: you usually have numbers in certain key places while also having numbers intermixed with the rest of the text in the document. More often than not, we need to increment only certain numeric values and leave the rest alone.

Here’s how you can increment only the numbers matching a regular expression (including multiple numbers on the same line) while leaving the rest intact:

  1. Write a regex that matches only the numbers you want to change. With incremental search enabled, vim highlights the matches as we edit the regular expression, letting us visually confirm that the regex matches the numbers we want to increment. To do this, just use our trusty old friend :s/foo (you can match numeric content by using \d\+ to select a run of consecutive digits).
  2. For the second half of the :s/foo/bar/ expression (bar, the replacement value), we’ll use the magic of a vim expression to increment (or otherwise manipulate) the matched value. Remember that in vim’s regex syntax, capture groups are written \(match here\), match group 0 is the entirety of the match, and our manually captured groups (via the escaped parentheses) are counted from left-to-right from number 1 onward. The magic bit is \=submatch(n)+1, which replaces the match with the value of the nth match group incremented by one.

Here’s an example where we want to insert some text in the middle of a numbered/indexed structured body of text then update all the indexes afterwards by bumping them up by one:

We have some subtitles in the SRT format and we want to insert a new caption in the middle, then update all the caption numbers but not the timestamps to reflect the insertion in the middle of the list. We have this text to start with:

1
00:00:03,400 --> 00:00:06,177
In this episode, we'll be talking about
the importance of strong typing in programming.

2
00:00:10,000 --> 00:00:11,200
Strongly-typed languages have many benefits over
their loosely-typed counterparts.

3
00:00:11,500 --> 00:00:13,655
Using strongly-typed languages can actually make
you more productive.

And we want to insert the following subtitles between 1 and 2, but not have to increment all the indexes that come after one-by-one by hand, which is time-consuming, error-prone, and a chore:

00:00:06,600 --> 00:00:09,220
Hang on to your hats because this is going to be fun!

We’ll do this by pasting the text where we want it to go, selecting the remainder of the text (where we need to increment the subtitle index number), and then using the vim expression :s/^\d\+$/\=submatch(0)+1/g to match a line that contains only numeric content (so it’ll match the subtitle index number but not the timestamps, which we absolutely don't want to inadvertently increment in the process):

As you can see, it’s simply a matter of selecting the text you want to perform the replacement in (in our case, everything past the caption we just entered) and then coming up with a regex that matches only the numbers we want to change and none of the numbers we don’t. If we had used CTRL + A here instead, we would have incremented the first timestamp in each caption in our selection in addition to incrementing the index.

I think the syntax for this one is easy enough to remember that you probably don’t need a plugin or a custom key mapping to do it for you. The trickiest part is the regex itself, and in most cases judicious application of ^ (start of line), $ (end of line), and whitespace will suffice to get you a regex that matches only the values you need. Unlike some other vim expressions with really inscrutable names or incantations, using \=submatch(0) (or \=submatch(4) or whatever) a few times is probably all it will take for you to memorize the syntax, and soon enough it’ll be second nature.
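If you ever need the same transformation outside of vim, the substitution is easy to sketch in another language. Here’s an illustrative Python equivalent (not part of any vim tooling): re.MULTILINE makes ^ and $ match per-line, much like vim’s :s applied over a visual range:

```python
import re

srt = """1
00:00:03,400 --> 00:00:06,177
First caption.

2
00:00:10,000 --> 00:00:11,200
Second caption.
"""

# ^\d+$ matches only lines made up entirely of digits (the caption indexes);
# the timestamps contain ':' and ',' and are therefore left untouched.
bumped = re.sub(r"^\d+$", lambda m: str(int(m.group(0)) + 1), srt, flags=re.MULTILINE)
print(bumped)
```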

If you enjoyed this tip, consider subscribing to blog posts via RSS or via email from the sidebar to the right and follow me on twitter @mqudsi for more fun hacking or programming stuff! (If you’re an emacs user, it’s highly unlikely I’ll have any text editing hacks for you at any time, unfortunately!)


  1. To be pedantic, only the first group-of-digits/number on each line gets incremented; like many vim commands, this only works on the first match per line of text unless some sort of /g global modifier is used.

tcpproxy 0.4 released
https://neosmart.net/blog/tcpproxy-0-4-released/
Sun, 08 Oct 2023 18:53:36 +0000


Image courtesy of Hack A Day

This blog post was a bit delayed in the pipeline, but a new release of tcpproxy, our educational async (tokio) rust command line proxy project, is now available for download (precompiled binaries or install via cargo).

I was actually surprised to find that we haven’t written about tcpproxy before (you can see our other rust-related posts here), but it’s a command line tcp proxy “server” written with two purposes in mind: a) serving as a real-world example of an async (tokio-based) rust networking project, and b) serving as a minimal but-still-useful tcp proxy you can run and use directly from the command line, without needing complex installation or configuration procedures. (You can think of it as being like Minix, but for rust and async networking.)

The tcpproxy project has been around for quite some time, originally published in 2017 before rust’s async support was even stabilized. At the time, it manually chained futures to achieve scalability without relying on the thread-per-connection model – but today its codebase is a lot easier to follow and understand thanks to rust’s first-class async/await support.

That doesn’t mean that there aren’t “gotchas” that rust devs need to be aware of when developing long-lived async-powered applications, and tcpproxy’s purpose here is to serve as a real-world illustration of the correct way to handle some of the thornier issues, such as tying the lifetimes of various connections (or halves of connections) to one another and aborting all remaining tasks when the first terminates (without blocking or polling).

The 0.4.0 release doesn’t contain any major changes, but it tweaks a number of things to improve the usability of the application and to model the correct way of handling a few things (such as not using an Arc<T> to share state that remains alive (and static) for the duration of the program’s execution1).

One of the user-visible changes in this release is that ECONNRESET and ECONNABORTED are no longer treated as exceptional, meaning that tcpproxy proceeds as if the connection in question were closed normally and uneventfully. While a compliant TCP client shouldn’t just abort a tcp connection (and a server shouldn’t reset one), these things happen quite often in the real world, and since all tcpproxy connections are stateless, there’s really no reason to handle these any differently from a normal, compliant tcp connection tear-down. Since we don’t report a connection error in these cases, tcpproxy prints (when executed in debug -d mode, that is) the normal messages about the number of bytes proxied in each direction, hopefully leading to less confusion.
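The idea translates to any language’s socket layer. Here’s a rough Python sketch of the behavior (an illustration of the approach, not tcpproxy’s actual rust code): copy bytes until the stream ends, and treat a peer reset or abort exactly like an orderly close while still reporting the byte count:

```python
def pump(src, dst) -> int:
    """Copy bytes from src to dst, treating a connection reset or abort
    as a normal, uneventful end of the stream."""
    total = 0
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break  # orderly shutdown by the peer
            dst.sendall(chunk)
            total += len(chunk)
    except (ConnectionResetError, ConnectionAbortedError):
        pass  # happens often in the real world; proceed as a normal close
    return total
```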

For those of you hearing about the tcpproxy project for the first time, I invite you to look over the core event loop which remains fairly small even when correctly handling all the cases we need to account for and synchronizing lifetimes the way we like. If you spot something that’s wrong, not quite right, or could be done in a more idiomatic way, please do leave a comment, send an email, or open an issue – tcpproxy is an open source project and it takes a village to raise and nurture even the smallest of projects to a healthy state!

You can follow me on twitter @mqudsi or sign up below for our rust-only mailing list to receive a heads-up when new rust educational content or rust open source crates are released. If you’re in a position to do so, I am also experimenting with accepting sponsors on my Patreon page and would greatly appreciate your patronage and support!

If you would like to receive a notification the next time we release a rust library, publish a crate, or post some rust-related developer articles, you can subscribe below. Note that you'll only get notifications relevant to rust programming and development by NeoSmart Technologies. If you want to receive email updates for all NeoSmart Technologies posts and releases, please sign up in the sidebar to the right instead.


  1. In cases like this, the recommendation is to actually just leak the memory instead to reduce cache coherency traffic in the MESI or MOESI protocols that is caused when each new task increments or decrements the shared reference count bits in the Arc<T>. If you know the value is going to live until the end of the application’s lifetime anyway, there’s no need to incur that cost and any future (read-only) access to the shared variable from any thread on any core will be ~free. 

CallerArgumentExpression and extension methods don’t mix
https://neosmart.net/blog/callerargumentexpression-and-extension-methods-dont-mix/
Mon, 11 Sep 2023 17:17:55 +0000

This post is for the C# developers out there and takes a look at the interesting conjunction of [CallerArgumentExpression] and static extension methods – a mix that at first seems too convenient to pass up.

A quick recap: [CallerArgumentExpression] landed as part of the C# 10.0 language update and helps to reduce the (often brittle!) boilerplate involved in, among other uses, creating useful error messages capturing the names of variables or the text of expressions. You tag an optional string method parameter with [CallerArgumentExpression("argName")] where argName is the name of the method argument you want stringified, and the compiler does the rest.

Here’s a quick demo of how [CallerArgumentExpression] works:

using System;
using System.Runtime.CompilerServices;

public class Program
{
    static string Stringify(object obj,
        [CallerArgumentExpression("obj")] string expr = "")
    {
        return expr;
    }

    public static class Foo
    {
        public static string Bar = "bar";
    }

    public static void Main()
    {
        var expr = Stringify(Foo.Bar);
        Console.WriteLine(expr); // prints "Foo.Bar"
        expr = Stringify(Foo.Bar + Foo.Bar);
        Console.WriteLine(expr); // prints "Foo.Bar + Foo.Bar"
    }
}

And you can try it online yourself in this .NET Fiddle.

It’s really cool and it opens the door to a lot of possibilities (though I’m still stuck trying to figure some of them out, such as reliably setting/clearing model binding errors that involve array expressions).

As mentioned, this shipped with C# 10. And of course, C# 8 shipped “the big one”: nullable reference types. Since then, the following pattern has become familiar in many a codebase while devs figure out where variables actually can or can’t be null:

using System;
using System.Diagnostics.CodeAnalysis;
using System.Runtime.CompilerServices;

static class Extensions
{
    public static T ThrowIfNull<T>([NotNull] this T? value, string expr)
    {
        if (value is null) {
            throw new ArgumentNullException(expr);
        }
        return value;
    }
}

This does exactly what you think it does: it verifies that a value isn’t null or throws an exception if it is. And it lets the compiler know that downstream of this call, the passed-in value is non-null. To make it useful, it’s common enough to extend it with more caller attribute magic:

using System;
using System.Diagnostics.CodeAnalysis;
using System.Runtime.CompilerServices;

static class Extensions
{
    public static T ThrowIfNull<T>(
        [NotNull] this T? value,
        string expr,
        [CallerMemberName] string callerName = "",
        [CallerFilePath] string filePath = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        if (value is null) {
            throw new InvalidOperationException(
                $"{expr} unexpectedly null in {callerName} "
                + $"at {filePath}:{lineNumber}");
        }
        return value;
    }
}

Now we get useful exceptions that we’ll hopefully log and revisit to help us find any places in our codebase where we are assuming a value can’t be null but it turns out that, in fact, it can be.

But what if we try to add our new best buddy [CallerArgumentExpression] here, to get rid of the need to manually specify the text of the argument via the expr parameter in our ThrowIfNull()?

using System;
using System.Diagnostics.CodeAnalysis;
using System.Runtime.CompilerServices;

static class Extensions
{
    public static T ThrowIfNull<T>(
        [NotNull] this T? value,
        [CallerArgumentExpression("value")] string expr = "",
        [CallerMemberName] string callerName = "",
        [CallerFilePath] string filePath = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        if (value is null) {
            throw new InvalidOperationException(
                $"{expr} unexpectedly null in {callerName} "
                + $" at {filePath}:{lineNumber}");
        }
        return value;
    }
}

At first blush, this works great. Use it with a single variable directly, as in foo.ThrowIfNull(), and everything will work swimmingly and it’ll do exactly what it says on the tin. But try using it in a more complicated setting, say foo?.bar?.ThrowIfNull(), and you’ll see what I mean: here, expr captures only the last token in the chain, so expr ends up as bar and not foo.bar!

It’s actually not particularly surprising behavior. Even without knowing what Roslyn desugars the above code to, you could logically think of it as being an expression (conditionally) invoked on/with the final variable bar itself – after all, T here would have been bar.GetType(), so it’s not a huge stretch of the imagination to guess that expr might only span bar as well.1

Indeed, when you look at what the code compiles to, you’ll see why. For the following code fragment:

public class Foo {
    public string? Bar;
}

public class C {
    public void M(Foo? foo) {
        foo?.Bar.ThrowIfNull();
    }
}

We get

public class Foo
{
    [System.Runtime.CompilerServices.Nullable(2)]
    public string Bar;
}

public class C
{
    [System.Runtime.CompilerServices.NullableContext(2)]
    public void M(Foo foo)
    {
        if (foo != null)
        {
            Extensions.ThrowIfNull(foo.Bar, ".Bar");
        }
    }
}

Which, while still helpful, is not exactly what we want. Although as C# developers we are somewhat allergic to calling static helper utilities directly instead of cleverly turning them into their more ergonomic extension method counterparts, in this case we don’t have any other choice.

When we change ThrowIfNull() from an extension method to a regular static method though, we get the result we really wanted:

public static class Utils
{
    public static T ThrowIfNull<T>(
        [NotNull] T? value,
        [CallerArgumentExpression("value")] string? expr = null) 
    {
        if (value is null) {
            throw new ArgumentNullException(expr);
        }
        return value;
    }
}

public class Foo
{
    public string? Bar;
}

public class C
{
    public void M(Foo? foo)
    {
        Utils.ThrowIfNull(foo?.Bar);
    }
}

Desugaring to:

public class C
{
    [System.Runtime.CompilerServices.NullableContext(2)]
    public void M(Foo foo)
    {
        Utils.ThrowIfNull((foo != null) ? foo.Bar : null, "foo?.Bar");
    }
}

Liked this post? Follow me on twitter @mqudsi and like this tweet for more .NET awesomeness!

If you would like to receive a notification the next time we release a nuget package for .NET or release resources for .NET Core and ASP.NET Core, you can subscribe below. Note that you'll only get notifications relevant to .NET programming and development by NeoSmart Technologies. If you want to receive email updates for all NeoSmart Technologies posts and releases, please sign up in the sidebar to the right instead.


  1. Except expr is actually not bar but rather .bar

Implementing truly safe semaphores in rust
https://neosmart.net/blog/implementing-truly-safe-semaphores-in-rust/
Mon, 03 Oct 2022 20:11:34 +0000

Discuss this article on r/rust or on Hacker News.

Low-level or systems programming languages generally strive to provide libraries and interfaces that enable developers, boost productivity, enhance safety, provide resistance to misuse, and more — all while trying to reduce the runtime cost of such initiatives. Strong type systems turn runtime safety/sanity checks into compile-time errors, optimizing compilers try to reduce an enforced sequence of api calls into a single instruction, and library developers think up clever hacks to completely erase any trace of an abstraction from the resulting binaries. And as anyone that’s familiar with them can tell you, the rust programming language and its developers/community have truly embraced this ethos of zero-cost abstractions, perhaps more so than any other.

I’m not going to go into detail about what the rust language and standard library do to enable zero-cost abstractions or spend a lot of time going over the many examples of zero-cost interfaces available to rust programmers, though I’ll quickly mention a few of my favorites: iterators and all the methods the Iterator trait exposes have to be at the top of every list, given the amount of black magic voodoo the compiler has to do to turn these into their loop-based equivalents; zero-sized types make developing embedded firmware in rust a dream, and it’s really crazy to see how all the various peripheral abstractions can be completely erased, giving you small firmware blobs despite all the safety abstractions; and no list is complete without the newest member of the team, async/await, and how rust manages to turn an entire web server api into a single state machine and event loop. (And to think this can be used even on embedded, without a relatively heavy async framework like tokio, and with zero allocations to boot!)

But the tricky thing with abstractions is that the relative price you pay scales rather unfairly with the size of the interface you are abstracting over. While a byte here and a byte there may mean nothing when we’re talking framework-scale interfaces, when you are modeling smaller and finer-grained abstractions, every byte and every instruction begin to count.

A couple of weeks ago, we released an update to rsevents, our crate that provides a rusty cross-platform equivalent to WIN32 events for signaling between threads and writing your own synchronization primitives, and rsevents-extra, a companion crate that provides a few handy synchronization types built on top of the manual- and auto-reset events from the rsevents crate. Aside from the usual awesome helpings of performance improvements, ergonomics enhancements, and more, this latest version of rsevents-extra includes a Semaphore synchronization primitive – something that the rust standard library surprisingly lacks… but not without good reason.

What makes a semaphore a semaphore?

Semaphores are well-documented and fairly well-understood underpinnings of any concurrency library or framework and essential Computer Science knowledge. So why doesn’t the rust standard library have a semaphore type?

Unlike the synchronization types that the rust standard library currently provides (such as Mutex<T> and RwLock<T>), a semaphore is somewhat harder to model, as it doesn’t restrict concurrent access to a single object or variable so much as it limits concurrency within a region of code.

Of course it can be argued that in traditional programming a semaphore is just a more general case of a mutex, and just as mutexes traditionally protected a region of code from concurrent access1 but were converted into synchronization primitives owning the data they protect and marshalling access to it, there’s no reason a rust semaphore couldn’t do the same. But therein lies the problem: a mutex and a read-write lock can both be understood in terms of readers and writers,2 but a semaphore makes no such guarantees. And rust is quite fundamentally built on the concept of read ^ write: it needs to know whether a thread/scope is reading or writing a variable or memory location in order to uphold its most basic memory safety guarantee: there can either be multiple “live” read-only references to an object or a single write-enabled (&mut) reference to the same — but a semaphore doesn’t make that distinction!

While a strictly binary semaphore (max concurrency == 1) can guarantee that there will never be multiple writers accessing a memory region, there’s not much theoretical benefit to such a binary semaphore over a mutex – in fact, they’re interchangeable. What makes a semaphore truly special is that it can be created (or even dynamically modified) with a concurrency limit n and then uphold its core precondition, guaranteeing that at any given time there will never be more than n threads/stacks3 accessing a semaphore-protected region at any given time.

The problem is that with n > 1, there’s no concept of a “privileged” owning thread: all threads that have “obtained” the semaphore do so equally. Therefore, a rust semaphore can only ever provide read-only (&T) access to an underlying resource, limiting its usefulness almost to the point of having no utility at all. As such, the only safe “owning” semaphore with read-write access that can exist in the rust world would be Semaphore<()>,4 one that actually owns no data and can only be used for its side effect of limiting concurrency while the semaphore is “owned,” so to speak.5 (Actual mutation of accessed resources within the concurrency-limited region, if needed, would continue to be marshalled via Mutex<T> or RwLock<T> on a fine-grained level.)

Ok, so this explains why the rust standard library doesn’t contain a Semaphore<T> type to mirror Mutex<T> and its friends, but then what’s so hard about shipping a non-owning std::sync::Semaphore instead?

Designing a safe Semaphore for rust

To answer this, we need to look at what a semaphore API generally looks like in other languages. While the names and calling semantics differ, a semaphore is generally found as a type that provides the following, starting with the most fundamental properties and moving to the merely de facto ones:

  • It is a type that can be used to limit concurrency to a resource or region of code, up to a dev-defined limit n.
  • It has a concept of “currently available concurrency,” which represents and tracks the remaining number of threads/stacks that can “obtain” the semaphore; obtaining it reduces the available concurrency and generally gives the calling thread access to the concurrency-limited region.
  • A semaphore can be created/declared with an “initially available concurrency” and a “maximum possible concurrency,” which may differ (indeed, the “initially available concurrency” is often zero).
  • Semaphores don’t generally have a concept of ownership, meaning any thread can increment (up to the pre-defined limit) or decrement (down to zero) the available concurrency for a semaphore without having “obtained” or “created” it. (This is necessary because otherwise it’d be impossible to initialize a semaphore with a lower initial concurrency limit than its maximum – no thread could then increase it.)
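The properties above can be seen in the conventional semaphore APIs of most languages; here’s a quick illustration using Python’s threading.Semaphore (shown only to demonstrate the API shape, not rsevents’ interface — note that Python’s plain Semaphore doesn’t even enforce a maximum; BoundedSemaphore caps releases at the initial count):

```python
import threading

# Created with zero initially-available concurrency (less than any "max").
sem = threading.Semaphore(0)

# No concurrency tokens exist yet, so a non-blocking acquire fails.
assert sem.acquire(blocking=False) is False

# A thread that never "obtained" the semaphore can still raise the
# available concurrency by releasing it -- there is no owning thread.
sem.release()
sem.release()

# Two callers may now enter the concurrency-limited region; a third cannot.
assert sem.acquire(blocking=False) is True
assert sem.acquire(blocking=False) is True
assert sem.acquire(blocking=False) is False
```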

It’s the last of these points that makes a semaphore so tricky to model in any language that prides itself on safety. A semaphore can act strictly as a variable-occupancy mutex (i.e. the initial concurrency equals the max concurrency, and each time the semaphore is obtained it must subsequently be released by the same thread that obtained it), but that’s not generally a requirement that semaphores impose, and imposing it would considerably limit the utility a semaphore could offer.

Let’s look at some ways we might design such a semaphore in rust, some of which we actually tried while prototyping rsevents_extra::Semaphore.

Before anything else, let’s get the hard part out of the way by introducing you to rsevents::AutoResetEvent, a one-byte6 synchronization primitive that takes care of putting threads to sleep when the event isn’t signalled/available and allowing one-and-only-one waiting thread to either consume the event (if it isn’t already asleep) or to wake up (if it is asleep waiting for the event) when the event is signalled (after which the event is atomically reset to a “not signalled” state). It doesn’t even have any spurious wakeups, making it really nice and easy to work with in a safe fashion. All of our Semaphore implementations will use this auto-reset event to take care of the synchronization, and we’ll omit the details of when and where to call AutoResetEvent::set() and AutoResetEvent::reset() for now.
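To make the semantics concrete, here’s a rough condition-variable sketch of an auto-reset event in Python (rsevents’ actual implementation is a lock-free, one-byte rust type; this sketch only mirrors the behavior described above):

```python
import threading

class AutoResetEvent:
    """Sketch of an auto-reset event: set() makes the event available and
    wakes at most one waiter; a successful wait() atomically consumes it."""

    def __init__(self, initial_state: bool = False):
        self._cond = threading.Condition()
        self._set = initial_state

    def set(self):
        with self._cond:
            self._set = True
            self._cond.notify(1)  # wake at most one waiting thread

    def wait(self):
        with self._cond:
            # Either consume an already-available event immediately, or
            # sleep until set() is called; reset on the way out.
            while not self._set:
                self._cond.wait()
            self._set = False

event = AutoResetEvent()
event.set()
event.wait()  # returns immediately, consuming the event
```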

So here’s what our initial semaphore skeleton looks like. We know we need an internal count of some integral type to keep track of the current concurrency (since we already established that it’s going to be variable and not just zero or one), and we know that at minimum a semaphore’s interface needs to provide a way to “obtain” the semaphore (decrementing the available concurrency for future callers) and a way to “release” the semaphore (at least to be used by a thread that has already obtained a “concurrency token” to re-increment the count after it is done and wants to give up its access to the concurrency-restricted region