Kami 2 is an iOS game where the player folds colored paper with the goal of making the whole screen have the same color. Each level has a set of colors and the player can pick any of them, color a section, then repeat for a limited number of steps:

Some levels are fairly tricky. For example I got stuck on this one:

While the app can provide hints, a more interesting exercise is to see if we can solve this with an algorithm. We can look at this as a graph problem. Each colored section represents a node in the graph and we consider adjacent areas as connected by edges:

Here is a more abstract representation of the same level:

The only information we care about is the color of each node and what other nodes it is connected with.

A step in the game consists of choosing a color, picking a node to get colored with the chosen color, then merging the colored node with adjacent nodes of the same (new) color. The game is won if the nodes get merged down to a single node within the step limit.

A Python representation of the graph would look like this:

```
class Graph:
    def __init__(self, nodes=None, edges=None):
        # Use None sentinels to avoid Python's shared-mutable-default pitfall
        self.nodes = nodes if nodes is not None else {}
        self.edges = edges if edges is not None else []
```

We represent nodes as a dictionary where the key is the node id and the value is the color of the node, and edges as a list of pairs of ids. The level diagrammed above would be represented as:

```
g = Graph(
    nodes = {
        1: "Purple",
        2: "Yellow",
        3: "Red",
        4: "Yellow",
        5: "White",
        6: "Purple",
        7: "Yellow",
        8: "Purple",
        9: "Yellow",
        10: "Red",
        11: "Purple",
        12: "White",
        13: "Purple",
        14: "Purple",
        15: "White",
        16: "Yellow",
        17: "Red",
        18: "Purple",
    },
    edges = [
        (1, 2),
        (1, 4),
        (2, 3),
        (2, 5),
        (2, 6),
        (3, 6),
        (3, 7),
        (4, 5),
        (4, 8),
        (5, 6),
        (5, 8),
        (6, 7),
        (6, 9),
        (7, 9),
        (7, 12),
        (8, 9),
        (8, 10),
        (9, 11),
        (9, 12),
        (10, 11),
        (10, 13),
        (10, 15),
        (12, 13),
        (12, 14),
        (12, 16),
        (13, 15),
        (13, 16),
        (13, 18),
        (15, 18),
        (16, 17),
        (16, 18),
        (17, 18),
    ])
```

We need a function that, for a given node id, enumerates all the connected nodes. This is a member function of the graph:

```
def connected(self, node):
    for edge in self.edges:
        if edge[0] == node:
            yield edge[1]
        elif edge[1] == node:
            yield edge[0]
```

We also need a function that colors a node and merges it with adjacent nodes of the same color. We can make this function return a new graph instance with the applied updates. Its signature would be:

```
def color(self, node, color):
```

First it would have to determine the set of nodes that need to be merged after coloring: the node that just got colored, plus any adjacent nodes that already have the new color. By convention, when we merge nodes we keep the smallest id:

```
to_merge = [node]
to_merge += [n for n in self.connected(node) if self.nodes[n] == color]
new_n = min(to_merge)
```

The nodes of the new graph would be the same nodes as the old one, minus any node in to_merge. Nodes in the to_merge list would be represented by the node new_n:

```
new_nodes = { new_n: color }
for node in self.nodes:
    if node not in to_merge:
        new_nodes[node] = self.nodes[node]
```

We also need to build the new list of edges. We do this as follows: for each edge, if both nodes are in to_merge, the edge does not exist in the new graph so we discard it. If one node is in to_merge, we create a new edge where the node in to_merge is replaced by new_n. If neither node is in to_merge, we keep the edge:

```
new_edges = set()
for edge in self.edges:
    if edge[0] in to_merge and edge[1] in to_merge:
        continue
    elif edge[0] in to_merge:
        new_edges.add(tuple(sorted((new_n, edge[1]))))
    elif edge[1] in to_merge:
        new_edges.add(tuple(sorted((edge[0], new_n))))
    else:
        new_edges.add(edge)
```

We keep the edge tuples sorted by node id to avoid duplication (for example having both a (1, 3) and a (3, 1)). We return a graph consisting of new_nodes and new_edges. The full implementation of color is:

```
def color(self, node, color):
    to_merge = [node]
    to_merge += [n for n in self.connected(node) if self.nodes[n] == color]
    new_n = min(to_merge)
    new_nodes = { new_n: color }
    for node in self.nodes:
        if node not in to_merge:
            new_nodes[node] = self.nodes[node]
    new_edges = set()
    for edge in self.edges:
        if edge[0] in to_merge and edge[1] in to_merge:
            continue
        elif edge[0] in to_merge:
            new_edges.add(tuple(sorted((new_n, edge[1]))))
        elif edge[1] in to_merge:
            new_edges.add(tuple(sorted((edge[0], new_n))))
        else:
            new_edges.add(edge)
    return Graph(new_nodes, new_edges)
```
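To sanity-check the merge logic, here is the class restated in a self-contained form and exercised on a tiny made-up three-node graph (the example graph is mine, not a Kami level):

```python
class Graph:
    def __init__(self, nodes=None, edges=None):
        self.nodes = nodes if nodes is not None else {}
        self.edges = edges if edges is not None else []

    def connected(self, node):
        # Enumerate all nodes sharing an edge with the given node
        for a, b in self.edges:
            if a == node:
                yield b
            elif b == node:
                yield a

    def color(self, node, color):
        # Color a node, merge it with same-colored neighbors, keep smallest id
        to_merge = [node]
        to_merge += [n for n in self.connected(node) if self.nodes[n] == color]
        new_n = min(to_merge)
        new_nodes = {new_n: color}
        for n in self.nodes:
            if n not in to_merge:
                new_nodes[n] = self.nodes[n]
        new_edges = set()
        for a, b in self.edges:
            if a in to_merge and b in to_merge:
                continue
            elif a in to_merge:
                new_edges.add(tuple(sorted((new_n, b))))
            elif b in to_merge:
                new_edges.add(tuple(sorted((a, new_n))))
            else:
                new_edges.add((a, b))
        return Graph(new_nodes, new_edges)

# Red - Blue - Red chain: coloring the middle node Red merges all three
g = Graph(nodes={1: "Red", 2: "Blue", 3: "Red"}, edges=[(1, 2), (2, 3)])
g2 = g.color(2, "Red")
print(g2.nodes)  # {1: 'Red'}
print(g2.edges)  # set()
```

Coloring node 2 red puts nodes 1, 2, and 3 into to_merge, so the whole chain collapses into node 1 and no edges remain.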

To solve a level we try coloring all of the nodes then recursively solve for the new graph. If our graph has one node, we found a solution. If we run out of steps, our candidate solution is not good so we backtrack:

```
def solve(graph, steps, n):
    if len(graph.nodes) == 1:
        print(steps)
        exit()
    if n == 0:
        return
    for node in graph.nodes:
        colors = list(set([graph.nodes[n] for n in graph.connected(node)]))
        for color in colors:
            g = graph.color(node, color)
            solve(g, steps + [(node, color)], n - 1)
```

Here graph is the graph we are trying to solve, steps is the list of actions in our solution, consisting of pairs of node id and color, and n is the remaining number of steps.

Note that we don’t attempt to color a node with any random color, rather we want
to color it with the color of one of its adjacent nodes. The reason for this is
that such a coloring guarantees *some* nodes will get merged so we reduce the
total number of nodes with this step. If we were to color a node with a color
none of its adjacent nodes has, there would be nothing to merge so we would
waste a step without reducing the graph.

This solution works but is rather slow. One optimization we can do is to more aggressively prune our search space: if at any point our graph has more colors than the number of remaining steps + 1, we know we are down the wrong path and need to backtrack. As an example, if we have four colors on the board: blue, red, yellow, white, but we only have 2 steps left, no matter how the areas are connected, we can never end up with a single color in 2 steps as we need to recolor 3 areas.

We can implement this optimization by updating the Graph constructor to keep track of the number of unique colors and update our solve function to backtrack if we have more colors than steps + 1:

```
class Graph:
    def __init__(self, nodes=None, edges=None):
        # Use None sentinels to avoid Python's shared-mutable-default pitfall
        self.nodes = nodes if nodes is not None else {}
        self.edges = edges if edges is not None else []
        self.colors = len(set(self.nodes.values()))
```

The update is the last line, which maintains the count of unique values in the nodes dictionary. Updated solve looks like this:

```
def solve(graph, steps, n):
    if len(graph.nodes) == 1:
        print(steps)
        exit()
    if n == 0:
        return
    if graph.colors > n + 1:
        return
    for node in graph.nodes:
        colors = list(set([graph.nodes[n] for n in graph.connected(node)]))
        for color in colors:
            g = graph.color(node, color)
            solve(g, steps + [(node, color)], n - 1)
```

We introduced a new if statement that returns if graph.colors > n + 1.

Running this yields the following solution for the level:

```
[(10, 'Purple'), (8, 'White'), (5, 'Yellow'), (2, 'Purple'), (1, 'Red')]
```

So coloring node 10 with purple, then node 8 with white and so on solves the level.

Another potential optimization which I did not implement could improve pruning further by relying on the fact that coloring a node and merging it with adjacent nodes removes at most two edges from a path. So if the shortest path between two nodes in the graph is longer than twice the number of remaining steps, we would again not be able to find a solution from the current state.
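A sketch of that check (the shortest_path helper is hypothetical, not part of the solver above): compute the shortest path between a pair of nodes with a breadth-first search, and backtrack when it exceeds twice the number of remaining steps:

```python
from collections import deque

def shortest_path(edges, start, goal):
    # Breadth-first search over an undirected edge list;
    # returns the hop count, or None if goal is unreachable
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# A chain of five nodes: the ends are 4 hops apart, so a state with
# only 1 step left (2 * 1 < 4) could be pruned immediately
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(shortest_path(edges, 1, 5))  # 4
```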

The full source code is available on GitHub.

This blog post looks at a few algorithms to generate Fibonacci numbers. For a much better treatment of these algorithms, I recommend From Mathematics to Generic Programming. The implementations are provided in poorly written Rust, as I’m just learning the language.

Learning Rust and going through The Rust Programming Language,
I got to the end of chapter 3,
where there are a few simple exercises. One of them is *Generate the nth
Fibonacci number*.

The Fibonacci sequence is defined as the sequence \(F_n = F_{n-1} + F_{n-2}\) with the seed values \(F_0 = 0\) and \(F_1 = 1\).

The first few values of the sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ….

Directly translating the definition above into an algorithm to compute the n-th Fibonacci number yields:

```
fn fib(n: i32) -> i32 {
    if n == 0 { return 0; }
    if n == 1 { return 1; }
    fib(n - 1) + fib(n - 2)
}
```

This algorithm works for very small n values but it is extremely inefficient as it has exponential time complexity and linear space complexity (based on stack depth). Since fib(n - 1) = fib(n - 2) + fib(n - 3), a call like fib(n - 1) + fib(n - 2) is equivalent to (fib(n - 2) + fib(n - 3)) + fib(n - 2), so the same elements of the sequence end up being computed over and over again.
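To make the blowup concrete, here is an instrumented version of the naive recursion (in Python for brevity; the counter is my addition, not part of the original Rust code):

```python
# Count how many recursive calls the naive definition makes
calls = 0

def fib(n):
    global calls
    calls += 1
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

fib(20)
print(calls)  # 21891 recursive calls just to compute fib(20)
```

The call count itself grows like the Fibonacci sequence, which is exactly the exponential behavior described above.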

A better way to generate the nth Fibonacci number is to build it bottom-up, starting from \(F_0\) and \(F_1\):

```
fn fib2(n: i32) -> i32 {
    if n == 0 { return 0; }
    let mut a = 0;
    let mut b = 1;
    for _ in 1..n {
        let c = a + b;
        a = b;
        b = c;
    }
    b
}
```

Here we start with \(a = F_0, b = F_1\) and with each iteration of the loop we advance from \(a = F_k, b = F_{k+1}\) to \(a = F_{k+1}, b = F_{k+2}\) until b becomes \(F_n\).

Compared to the first algorithm, this is highly efficient, as it has linear complexity and requires constant space. There are faster ways to compute the nth Fibonacci number though.

The Fibonacci sequence can also be described in matrix form as follows:

\[\begin{split}\begin{bmatrix}
F_{k+2} \\
F_{k+1}
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}
*
\begin{bmatrix}
F_{k+1} \\
F_k
\end{bmatrix}\end{split}\]

Note that the next pair of numbers in the sequence, \(F_{k+3}, F_{k+2}\) can be expressed as:

\[\begin{split}\begin{bmatrix}
F_{k+3} \\
F_{k+2}
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}
*
\begin{bmatrix}
F_{k+2} \\
F_{k+1}
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}
*
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}
*
\begin{bmatrix}
F_{k+1} \\
F_k
\end{bmatrix}\end{split}\]

Thus we have the formula:

\[\begin{split}\begin{bmatrix}
F_n \\
F_{n-1}
\end{bmatrix}
=
\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}^{n-1}
*
\begin{bmatrix}
F_1 \\
F_0
\end{bmatrix}\end{split}\]

Since \(F_0 = 0\) and \(F_1 = 1\), \(F_n\) is the element at index \((0, 0)\) in:

\[\begin{split}\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}^{n-1}\end{split}\]
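We can spot-check this identity numerically; a small Python sketch (the helper names are mine):

```python
def mat_mul(a, b):
    # Standard 2x2 row-major matrix product
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def mat_pow(m, n):
    # Naive exponentiation: multiply the identity by m, n times
    result = [[1, 0], [0, 1]]
    for _ in range(n):
        result = mat_mul(result, m)
    return result

def fib_iter(n):
    # Bottom-up reference implementation, as in fib2
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

m = [[1, 1], [1, 0]]
assert all(mat_pow(m, n - 1)[0][0] == fib_iter(n) for n in range(1, 15))
print(mat_pow(m, 9)[0][0])  # 55, the 10th Fibonacci number
```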

Assuming we have a function for exponentiating 2x2 matrices exp2x2, we can implement an algorithm to compute the nth Fibonacci number like this:

```
fn fib3(n: i32) -> i32 {
    if n == 0 { return 0; }
    if n == 1 { return 1; }
    let a = [
        [1, 1],
        [1, 0],
    ];
    exp2x2(a, n - 1)[0][0]
}
```

The complexity of this algorithm is given by the complexity of exp2x2. A simple implementation of matrix exponentiation given a matrix multiplication function mul2x2 is:

```
fn exp2x2(a: [[i32; 2]; 2], n: i32) -> [[i32; 2]; 2] {
    let mut result = [
        [1, 0],
        [0, 1],
    ];
    for _ in 0..n {
        result = mul2x2(result, a);
    }
    result
}
```

This function computes a^n by starting with the identity matrix and multiplying it by a, n times. The function for multiplying two 2x2 matrices is trivial:

```
fn mul2x2(a: [[i32; 2]; 2], b: [[i32; 2]; 2]) -> [[i32; 2]; 2] {
    [
        [a[0][0] * b[0][0] + a[1][0] * b[0][1], a[0][0] * b[1][0] + a[1][0] * b[1][1]],
        [a[0][1] * b[0][0] + a[1][1] * b[0][1], a[0][1] * b[1][0] + a[1][1] * b[1][1]],
    ]
}
```

With mul2x2 and exp2x2, our fib3 algorithm has linear complexity, which is determined by the number of times we call mul2x2 in our exponentiation function. There is a faster way to do exponentiation though: observe that \(x^7 = x^4 * x^2 * x\). In general, any number n can be decomposed into a sum of powers of two. So we can implement a fast_exp2x2 which works as follows:

```
if n == 1, return a
if n is even, return fast_exp2x2(a * a, n / 2)
if n is odd, return fast_exp2x2(a * a, (n - 1) / 2) * a
```

We stop when our exponent is 1 and return a. If n is even, we square the base and halve the exponent (for example \(x^8 = (x*x)^4\)). If n is odd, we do the same but multiply by the base (for example \(x^9 = (x*x)^4 * x\)). This is a recursive algorithm which halves n at each step, so we have logarithmic time and space complexity.

```
fn fast_exp2x2(a: [[i32; 2]; 2], n: i32) -> [[i32; 2]; 2] {
    if n == 1 { return a; }
    let mut result = fast_exp2x2(mul2x2(a, a), n >> 1);
    if n & 1 == 1 {
        result = mul2x2(a, result);
    }
    result
}
```
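The same halving recursion works for plain integer exponents; a quick Python sanity check (the function name is mine):

```python
def fast_exp(x, n):
    # Exponentiation by squaring: square the base, halve the exponent
    if n == 1:
        return x
    result = fast_exp(x * x, n >> 1)
    if n & 1:
        result *= x
    return result

print(fast_exp(3, 7))  # 2187, i.e. 3^7 = 3^4 * 3^2 * 3
```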

This is a very efficient way to compute the nth Fibonacci number. But where does this fast exponentiation algorithm come from?

The ancient Egyptian multiplication algorithm comes from the Rhind Papyrus, from around 1500 BC. The idea is very similar to our fast exponentiation algorithm: we can implement a fast multiplication algorithm by relying on addition and doubling (e.g. \(x * 7 = x * 4 + x * 2 + x\)). The steps of our egyptian_mul algorithm are:

```
if n == 1, return a
if n is even, return egyptian_mul(a + a, n / 2)
if n is odd, return egyptian_mul(a + a, (n - 1) / 2) + a
```

An implementation in Rust is:

```
fn egyptian_mul(a: i32, n: i32) -> i32 {
    if n == 1 { return a; }
    let mut result = egyptian_mul(a + a, n >> 1);
    if n & 1 == 1 {
        result += a;
    }
    result
}
```

This multiplication algorithm relies only on addition, and multiplies a by n in \(\log(n)\) steps.

egyptian_mul and fast_exp2x2 algorithms have the same structure since they are fundamentally the same: they provide an efficient way to implement an operation defined as applying another operation n times. Multiplication is, by definition, repeated addition. Similarly, exponentiation is, by definition, repeated multiplication. We can generalize these to an algorithm that given an initial value a of any type T, an operation op(T, T) -> T, and n, the number of times to apply op, provides an efficient computation using doubling and halving:

```
fn op_n_times<T, Op>(a: T, op: &Op, n: i32) -> T
    where T: Copy,
          Op: Fn(T, T) -> T {
    if n == 1 { return a; }
    let mut result = op_n_times::<T, Op>(op(a, a), &op, n >> 1);
    if n & 1 == 1 {
        result = op(a, result);
    }
    result
}
```

We can express Egyptian multiplication (egyptian_mul) as addition applied n times:

```
fn egyptian_mul(a: i32, n: i32) -> i32 {
    op_n_times(a, &std::ops::Add::add, n)
}
```

Similarly, we can express fast matrix exponentiation (fast_exp2x2) as matrix multiplication applied n times:

```
fn fast_exp2x2(a: [[i32; 2]; 2], n: i32) -> [[i32; 2]; 2] {
    op_n_times(a, &mul2x2, n)
}
```

I wanted to benchmark the two interesting implementations: fib2 and fib4 (fib3 with fast_exp2x2 swapped in for exp2x2). The first, exponential-complexity implementation is highly inefficient: even for small values of N (e.g. N = 50) it takes a long time to complete. fib3 has linear complexity like fib2, but while fib2 just performs additions and assignments on each iteration, fib3 performs matrix multiplications, which are more expensive. So fib2 and fib4 are the more interesting ones to look at.

Turns out that the Fibonacci sequence grows quite fast: the 100th Fibonacci number is 354224848179261915075, which does not fit in an i32. So let’s update the implementations to use num::BigUint, an arbitrary precision number. First is fib2:

```
extern crate num;
use num::bigint::{BigUint, ToBigUint};

fn fib2(n: i32) -> BigUint {
    if n == 0 { return 0.to_biguint().unwrap(); }
    let mut a = 0.to_biguint().unwrap();
    let mut b = 1.to_biguint().unwrap();
    for _ in 1..n {
        let c = &a + &b;
        a = b;
        b = c;
    }
    b
}
```
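As a quick cross-check of the value quoted above: Python integers are arbitrary precision out of the box, so the iterative algorithm can verify the 100th Fibonacci number directly (this snippet is mine, separate from the Rust code):

```python
def fib(n):
    # Bottom-up iteration; a holds F(n) after n loop iterations
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(100))  # 354224848179261915075, far outside the i32 range
```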

For fib4, we need to update mul2x2 to work with BigUint array references, so we don’t copy BigUint arrays:

```
fn mul2x2(a: &[[BigUint; 2]; 2], b: &[[BigUint; 2]; 2]) -> [[BigUint; 2]; 2] {
    [
        [&a[0][0] * &b[0][0] + &a[1][0] * &b[0][1], &a[0][0] * &b[1][0] + &a[1][0] * &b[1][1]],
        [&a[0][1] * &b[0][0] + &a[1][1] * &b[0][1], &a[0][1] * &b[1][0] + &a[1][1] * &b[1][1]],
    ]
}
```

We also need to update our op_n_times so the operation now takes &T instead of T. Note this version still works with i32 arrays and numbers, but now the operation is expected to take two references instead of two values. On the other hand we no longer require that T has the Copy trait:

```
fn op_n_times<T, Op>(a: T, op: &Op, n: i32) -> T
    where Op: Fn(&T, &T) -> T {
    if n == 1 { return a; }
    let mut result = op_n_times::<T, Op>(op(&a, &a), &op, n >> 1);
    if n & 1 == 1 {
        result = op(&a, &result);
    }
    result
}
```

Then we can update our fib4 implementation to use BigUint:

```
fn fast_exp2x2(a: [[BigUint; 2]; 2], n: i32) -> [[BigUint; 2]; 2] {
    op_n_times(a, &mul2x2, n)
}

fn fib4(n: i32) -> BigUint {
    if n == 0 { return 0.to_biguint().unwrap(); }
    if n == 1 { return 1.to_biguint().unwrap(); }
    let a = [
        [1.to_biguint().unwrap(), 1.to_biguint().unwrap()],
        [1.to_biguint().unwrap(), 0.to_biguint().unwrap()],
    ];
    fast_exp2x2(a, n - 1)[0][0].clone()
}
```

These two implementations work with arbitrarily large numbers, for example fib4(10_000) is:

33644764876431783266621612005107543310302148460680063906564769974680081442166662368155595513633734025582065332680836159373734790483865268263040892463056431887354544369559827491606602099884183933864652731300088830269235673613135117579297437854413752130520504347701602264758318906527890855154366159582987279682987510631200575428783453215515103870818298969791613127856265033195487140214287532698187962046936097879900350962302291026368131493195275630227837628441540360584402572114334961180023091208287046088923962328835461505776583271252546093591128203925285393434620904245248929403901706233888991085841065183173360437470737908552631764325733993712871937587746897479926305837065742830161637408969178426378624212835258112820516370298089332099905707920064367426202389783111470054074998459250360633560933883831923386783056136435351892133279732908133732642652633989763922723407882928177953580570993691049175470808931841056146322338217465637321248226383092103297701648054726243842374862411453093812206564914032751086643394517512161526545361333111314042436854805106765843493523836959653428071768775328348234345557366719731392746273629108210679280784718035329131176778924659089938635459327894523777674406192240337638674004021330343297496902028328145933418826817683893072003634795623117103101291953169794607632737589253530772552375943788434504067715555779056450443016640119462580972216729758615026968443146952034614932291105970676243268515992834709891284706740862008587135016260312071903172086094081298321581077282076353186624611278245537208532365305775956430072517744315051539600905168603220349163222640885248852433158051534849622434848299380905070483482449327453732624567755879089187190803662058009594743150052402532709746995318770724376825907419939632265984147498193609285223945039707165443156421328157688908058783183404917434556270520223564846495196112460268313970975069382648706613264507665074611512677522748621598642530711298441182622661057163515069260029861704945425047491378115154139941550671256271197133
252763631939606902895650288268608362241082050562430701794976171121233066073310059947366875

We can benchmark these implementations using Rust’s built-in benchmarking:

```
#[bench]
fn fib4_bench(b: &mut Bencher) {
    b.iter(|| fib4(N));
}

#[bench]
fn fib2_bench(b: &mut Bencher) {
    b.iter(|| fib2(N));
}
```

On my Surface Book, for N = 100, we have:

```
test tests::fib2_bench ... bench: 7,805 ns/iter (+/- 3,042)
test tests::fib4_bench ... bench: 6,140 ns/iter (+/- 356)
```

For N = 1_000:

```
test tests::fib2_bench ... bench: 89,131 ns/iter (+/- 28,346)
test tests::fib4_bench ... bench: 16,307 ns/iter (+/- 2,087)
```

For N = 10_000:

```
test tests::fib2_bench ... bench: 2,121,140 ns/iter (+/- 132,448)
test tests::fib4_bench ... bench: 184,625 ns/iter (+/- 12,184)
```

For N = 100_000:

```
test tests::fib2_bench ... bench: 128,769,418 ns/iter (+/- 5,198,789)
test tests::fib4_bench ... bench: 7,176,026 ns/iter (+/- 364,400)
```

While fib2 and fib4 start with about the same performance at N = 100, for N = 100_000, fib4 is significantly faster. The benchmark results don’t seem to reflect the algorithmic complexity of fib2 (linear) and fib4 (logarithmic), I suspect because of the introduction of BigUint and operations on large numbers. Still, the algorithm relying on fast exponentiation performs many times faster on large Ns.

This blog post covered:

- Algorithms to generate Fibonacci numbers: naïve recursive (exponential), bottom-up (linear), matrix exponentiation (linear or logarithmic, depending on the matrix exponentiation algorithm).
- Ancient Egyptian multiplication and fast matrix exponentiation are the same algorithm applied to different operations.
- A generic algorithm of efficiently applying an operation n times.
- Algorithms to generate Fibonacci numbers implemented with BigUint for arbitrary precision numbers.
- Benchmarking the fib2 and fib4 algorithms shows fib4 to be much better as N increases.

My humble conclusion is that generating Fibonacci numbers is more than an introductory exercise.

I am not a huge fan of “pure” OOP. In this post I will cover a few non-pure OOP concepts: subtyping without inheritance, mixins, free functions, and types without invariants. I will make a case for why a multi-paradigm approach is needed and how using a wider variety of concepts enables us to build simpler systems.

If it walks like a duck and it quacks like a duck, then it must be a duck.

Let’s say we have a Duck class. A Duck quacks and waddles:

```
class Duck
{
public:
    void Quack() { }
    void Waddle() { }
};
```

We have a function that uses a duck:

```
void foo(Duck& duck)
{
    duck.Quack();
    duck.Waddle();
}
```

The object-oriented way to implement subtyping is to inherit from the base class:

```
class UglyDuckling : public Duck { };
// ...
UglyDuckling uglyDuckling;
foo(uglyDuckling);
```

We can call foo on an UglyDuckling since UglyDuckling inherits from
Duck. We have an *is-a* relationship, so we can substitute an
UglyDuckling for a Duck. The problem with this approach is that whenever
we want something that quacks and waddles, we need to inherit from Duck.
More generally, this type of polymorphism is achieved by implementing a set of
interfaces like, for example, IComparable, ICloneable, IDisposable
and so on. This makes things slightly complicated: what if we need something
that waddles, but we don’t care about quacking? Do we separate our duck into two
different interfaces? In general, do we add an interface for each behavior and
then pull groups of interfaces together to form more refined types?

```
struct IQuack
{
    virtual void Quack() = 0;
};

struct IWaddle
{
    virtual void Waddle() = 0;
};

class Duck : public IQuack, public IWaddle
{
public:
    void Quack() override { }
    void Waddle() override { }
};

class Penguin : public IWaddle
{
public:
    void Waddle() override { }
};
```

This works, but has combinatorial complexity and we end up with deep hierarchies which are difficult to reason about. There is another way to achieve this though, using generic programming:

```
class UglyDuckling // No inheritance
{
public:
    void Quack() { }
    void Waddle() { }
};

template <typename Duck>
void foo(Duck& duck)
{
    duck.Quack();
    duck.Waddle();
}

// ...
UglyDuckling uglyDuckling;
foo(uglyDuckling);
```

foo here is a templated function which only cares that the type passed in has a Quack and a Waddle member function. There is no inheritance involved, but we can still substitute an UglyDuckling for a Duck. This gets rid of all the interfaces (we don’t need our Penguin to explicitly implement an IWaddle interface, we just need it to provide a Waddle member function). Our model becomes simpler - as long as a type supports the behavior required by a function, it can be used with that function.
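For comparison, this is the default behavior in a dynamically typed language; a Python sketch of the same shape (the names are mine):

```python
class Duck:
    def quack(self):
        return "quack"
    def waddle(self):
        return "waddle"

class UglyDuckling:  # no inheritance, just the right methods
    def quack(self):
        return "quack"
    def waddle(self):
        return "waddle"

def foo(duck):
    # Works with any object exposing quack() and waddle()
    return duck.quack(), duck.waddle()

print(foo(UglyDuckling()))  # ('quack', 'waddle')
```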

Lore has it that multiple inheritance is bad and it is by design not supported
in Java, C#, and such. On the other hand, mixins are extremely useful, and it
is a pity that we usually have to express them via inheritance. A mixin is a
type that provides some behavior which is *mixed in* or *included* into another
type. For example, if we use intrusive reference counting, we can isolate the
reference-counting behavior into its own type:

```
class RefCounted
{
public:
    void AddRef() { ++m_refCount; }
    void Release() { if (--m_refCount == 0) { delete this; } }
    virtual ~RefCounted() = default;

private:
    std::atomic<int> m_refCount = 1;
};
```

Then we can have other types for which we want intrusive reference counting simply mixing in this behavior:

```
class Foo : public RefCounted { };
```

Now Foo has AddRef and Release functions which can be called by a generic smart pointer that expects managed types to expose these member functions. While technically Foo inherits from RefCounted, conceptually we only care that it includes the reference counting behavior. In such cases it is perfectly fine to mix and match and include behavior defined across multiple other types.

What is the difference between the following two Print functions?

```
class Foo
{
public:
    void Print() { std::cout << this->Data(); }
    const char* Data() { /* ... */ }
};

void Print(const Foo& foo)
{
    std::cout << foo.Data();
}
```

The first is a member function, called with an implicit this argument which points to the object instance, while the second is a free function called with an explicit reference to a Foo object.

The member function approach leads to bloated objects as whenever we need some
additional processing of the type, we would have to add new member functions.
This contradicts the *Single Responsibility Principle* which states that each
class should have a single responsibility. Adding member functions like
ToString, Serialize etc. needlessly bloats a class.

In general, we only need member functions when these functions access private members of the type. If Data were private in the above example, the free-function version wouldn’t have worked. As long as we can implement a function that operates on a type without having to access its private members, that function should not belong to the type. Depending on the language, we have several options. We could put such functions in “helper” types:

```
class FooPrinter
{
    public static void Print(Foo foo) { /* ... */ }
}
```

C# provides extension methods as syntax sugar for this, which allow us to call foo.Print() even though we implement the Print function as an extension method:

```
static class FooPrinter
{
    public static void Print(this Foo foo) { /* ... */ }
}
```

Still, the simplest thing to do is have a free function:

```
void Print(const Foo& foo) { /* ... */ }
```

Being forced to group everything inside classes yields messy code. Steve Yegge’s Kingdom of Nouns is a classic on the topic.

Because a purely object-oriented language forces developers to think in classes, we more often than not end up with managers and utility classes, both being horrible replacements for free-standing functions.

Managers usually show up once we have a nice object model for the problem space but we need to implement a set of operations on said object model. Managers tend to be singletons. For example, we have a Connection type that models a connection to a peer:

```
class Connection
{
    // Open, Close, Send, Receive etc.
}
```

We also want someone to open new connections and close all opened connections. Here is a purely object oriented ConnectionManager:

```
class ConnectionManager
{
    private static ConnectionManager _instance = new ConnectionManager();

    private ConnectionManager() { }

    public static ConnectionManager GetInstance()
    {
        return _instance;
    }

    private List<Connection> _connections = new List<Connection>();

    public Connection Make()
    {
        var connection = new Connection();
        _connections.Add(connection);
        return connection;
    }

    public void CloseAll()
    {
        _connections.ForEach(connection => connection.Close());
    }
}
```

This maintains the list of connections and can close all of them with a call to CloseAll(). Besides being verbose to use (ConnectionManager.GetInstance().Make(), ConnectionManager.GetInstance().CloseAll()), this class does not make much sense. A non-OOP implementation would look like this:

```
// In .h file
class Connection
{
    // Open, Close, Send, Receive etc.
};

Connection& Make();
void CloseAll();

// In .cpp file
namespace
{
    std::vector<Connection> connections;
}

Connection& Make()
{
    connections.emplace_back(Connection{});
    return connections.back();
}

void CloseAll()
{
    for (auto&& connection : connections)
    {
        connection.Close();
    }
}
```

Make() and CloseAll() do not need to be grouped in some manager. They can be free functions living next to the Connection type, which is the only context within which they make sense. The list of connections can be stored in a variable scoped to the implementation .cpp file. “Managers” rarely make sense.

Utility classes are even worse: while a manager is usually tightly coupled to the type it “manages”, “Utils” classes end up being dumping grounds of functions that don’t seem to belong anywhere else. The biggest problem is that each of these functions usually depends on some other component:

```
class FooUtils
{
    public static void DoBar() { /* Dependency on Bar */ }
    public static void DoBaz() { /* Dependency on Baz */ }
}
```

Now whoever takes a dependency on FooUtils, transitively takes a dependency on both Bar and Baz too, even if they only really needed one of them. If DoBar() and DoBaz() were free functions, taking a dependency on DoBar() would transitively take a dependency on Bar only. “Utility” types make layering a nightmare.

I am a big believer in multi-paradigm. If our only tool is a hammer, we can only hammer things. While pure functional languages are elegant, they are too far removed from the machine they run on (for example we can’t implement an in-place reverse if all data is immutable). Similarly, if everything is an object, we end up with too many classes and too many complicated relationships. Procedural languages usually provide some way to group data via struct or record types, so when are classes useful?

The answer is *for encapsulating* - classes enable us to declare private data
and control access to it. This is useful when the class needs to maintain
invariants, which could potentially be broken if external entities would be able
to change an object’s state. Let’s use a Date type as a made up example.
Made up because dates are usually implemented as a number representing a tick
count since some set start date, and information like *day*, *month*, and *year*
is derived from that. But let’s assume we have separate *day*, *month*, and
*year* fields. This type should maintain an invariant that it represents a valid
date, so we can’t have, for example, a June 31st. It’s hard to enforce the
invariant with:

```
struct Date
{
    uint8_t day;
    uint8_t month;
    uint8_t year;
};
```

Alternatively, we can implement a class with a constructor which ensures only valid dates can be created:

```
class Date
{
public:
    Date(uint8_t year, uint8_t month, uint8_t day)
        : m_day(day), m_month(month), m_year(year)
    {
        if (month == 0 || month > 12) throw /* ... */
        /* Additional checks to ensure a valid date... */
    }

    uint8_t year() const noexcept { return m_year; }
    uint8_t month() const noexcept { return m_month; }
    uint8_t day() const noexcept { return m_day; }

private:
    uint8_t m_day;
    uint8_t m_month;
    uint8_t m_year;
};
```

If we want to add an AddDays function, we would create a member function [1] implementing the algorithm that knows when adding a number of days increments the month and when incrementing the month increments the year, so that the invariant of always having a valid date is maintained.

On the other hand, a type which doesn’t need to maintain an invariant, say a point in the plane, should not be implemented as a class:

```
struct Point
{
    int64_t x;
    int64_t y;
};
```

Inheritance is rarely warranted, and when used, it should mostly be used in the context of mixins - with the intention of including behavior rather than deriving and extending. Interfaces are sometimes useful at a component boundary, though static, template-based polymorphism is preferred. A good design consists of a set of independent classes which maintain invariants, and free functions that operate on them. Structure (or record) types should be used when there is no invariant to be maintained. Generic functions should be used when algorithms can be generalized to multiple types as long as they satisfy some requirements (as in the Duck Typing section above). This encourages reusable code and systems of loosely-coupled components which can be more easily reasoned about in isolation and reused when needed.

- Generic programming/compile-time polymorphism yields less complex models than inheritance
- While multiple inheritance is frowned upon, mixins provide a great way to add behavior to a type. The catch is that including this behavior is usually syntactically identical to inheritance.
- Free functions are great. Managers and Utils are bad and should be avoided.
- Classes are useful when invariants need to be enforced. Encapsulation and member functions maintain invariants.
- A good design consists of loosely-coupled components and generic functions, which can be reasoned about in isolation and freely combined to create complex behavior.

[1] Or better yet, a free function which takes a Date and returns a new instance - immutability seems like a good idea in this case.

One of my go-to interview questions goes like this:

Given an array of numbers, make it so the even numbers come before the odd ones.

For example, for { 1, 2, 3, 4, 5, 6, 7, 8 }, a possible output would be { 8, 2, 6, 4, 5, 3, 7, 1 }.

This is not a trick question by any means, it is a straightforward problem with
a couple of straightforward solutions. Note the *possible output* wording and
the fact that evens and odds in the output do not preserve the relative order
they had in the input.

An easy solution is to traverse the input once and store all even numbers encountered during the traversal into a separate array, then traverse it again and do the same for the odd numbers:

```
auto evens_before_odds(const std::vector<int>& numbers)
{
    std::vector<int> result;
    result.reserve(numbers.size());
    for (int i : numbers)
        if (i % 2 == 0)
            result.push_back(i);
    for (int i : numbers)
        if (i % 2 != 0)
            result.push_back(i);
    return result;
}
```

This solves the problem in O(n) linear time (two traversals of the input array) and O(n) linear space - result is as large as input, so additional space required grows linearly with the size of the input.

There are more efficient ways of doing this - in linear time and constant space. One such algorithm goes like this:

Find the first odd number. Stop if we reached the end of the array.

Find the first even number after that odd number. Stop if we reached the end of the array.

Swap them.

Repeat.

Our loop invariant is that all numbers before the first odd number (which we update during each iteration) are already in the right place. With each iteration, we find another even number that appears after the first odd number so we swap them, putting the even in the right place. We stop when we run out of numbers to swap, either odd or even. An implementation of this algorithm looks like this:

```
void evens_before_odds(std::vector<int>& numbers)
{
    int i = 0, j, n = numbers.size();
    for (;;)
    {
        // Find first odd number
        while (i != n && numbers[i] % 2 == 0)
            ++i;
        // If we reached the end we're done
        if (i == n)
            return;
        // Find the first even number after the first odd one
        j = i + 1;
        while (j != n && numbers[j] % 2 != 0)
            ++j;
        // If we reached the end we're done
        if (j == n)
            return;
        // Swap and continue
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}
```

While the algorithm is fairly straightforward, the devil is in the details - we need to perform multiple checks to make sure we don’t run off the end of the array. While interviewing, I’ve seen many bugs come up due to missing some of these checks.

An interesting observation is that once we have found and swapped the first pair of odd and even numbers, the new first odd number is right after the even we just swapped, so we can hoist the first while loop out of the main loop - we only need to find the first odd number once, then we just increment its position after each swap:

```
void evens_before_odds(std::vector<int>& numbers)
{
    int i = 0, n = numbers.size();
    // Find the first odd number
    while (i != n && numbers[i] % 2 == 0)
        ++i;
    // If we reached the end we're done
    if (i == n)
        return;
    // Start after the first odd and go until we reach the end
    for (int j = i + 1; j != n; ++j)
    {
        // If it's an even number
        if (numbers[j] % 2 == 0)
        {
            // Swap with the first odd
            std::swap(numbers[i], numbers[j]);
            // Increment first odd position
            ++i;
        }
    }
}
```

Another algorithm goes like this:

Find the first odd number.

From the back, find the last even number.

Stop if the first odd number appears after the last even number.

Swap and repeat.

Our loop invariant is that all numbers before the first odd and all numbers after the last even are already in place. With each iteration, we move the first odd and last even. We stop when the first odd appears after the last even, which means all evens appear before the odds. Here is a possible implementation:

```
void evens_before_odds(std::vector<int>& numbers)
{
    int i = 0, j = numbers.size();
    for (;;)
    {
        // Find the first odd number
        while (i != j && numbers[i] % 2 == 0)
            ++i;
        // If the first odd occurs after the last even, stop
        if (i == j)
            return;
        // Find the last even number
        --j;
        while (i != j && numbers[j] % 2 != 0)
            --j;
        // If the first odd occurs after the last even, stop
        if (i == j)
            return;
        // Swap and continue
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}
```

Both of the above algorithms solve the problem in linear time and constant space.

Some interesting test cases to validate the implementations:

- Our example input { 1, 2, 3, 4, 5, 6, 7, 8 }
- An empty vector { }
- A vector with a single even number { 2 }
- A vector with a single odd number { 1 }
- A vector consisting of all even numbers { 2, 4, 6 }
- A vector consisting of all odd numbers { 1, 3, 5 }
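These cases can be wired into a small harness. Here is a sketch using the in-place first-odd/last-even implementation from above; the evens_first and run_tests helpers are my own additions, not part of the interview question:

```
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// In-place implementation (the first-odd/last-even variant from above).
void evens_before_odds(std::vector<int>& numbers)
{
    std::size_t i = 0, j = numbers.size();
    for (;;)
    {
        while (i != j && numbers[i] % 2 == 0) ++i;
        if (i == j) return;
        --j;
        while (i != j && numbers[j] % 2 != 0) --j;
        if (i == j) return;
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}

// Checks the postcondition: no even number appears after the first odd one.
bool evens_first(const std::vector<int>& v)
{
    auto firstOdd = std::find_if(v.begin(), v.end(), [](int i) { return i % 2 != 0; });
    return std::none_of(firstOdd, v.end(), [](int i) { return i % 2 == 0; });
}

// Exercises each of the test cases listed above.
bool run_tests()
{
    std::vector<std::vector<int>> cases = {
        { 1, 2, 3, 4, 5, 6, 7, 8 }, { }, { 2 }, { 1 }, { 2, 4, 6 }, { 1, 3, 5 },
    };
    for (const auto& c : cases)
    {
        auto output = c;  // keep the original input for comparison
        evens_before_odds(output);
        if (!evens_first(output)) return false;
        // The result must be a permutation of the input.
        if (!std::is_permutation(c.begin(), c.end(), output.begin())) return false;
    }
    return true;
}
```

Note the harness checks the postcondition (evens first, same multiset of elements) rather than one specific output, matching the *possible output* wording of the problem.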

My follow-up question is:

What if we also want the ability to put odd numbers before even ones? How would we extend our code?

An answer I’m **not** looking for is *we copy/paste the function, rename it to*
odds_before_evens *and update the checks*.

A clever answer (which I’m also not looking for) is *we provide an*
odds_before_evens *which internally calls* evens_before_odds*, then
reverses the output*:

```
void odds_before_evens(std::vector<int>& numbers)
{
    evens_before_odds(numbers);
    std::reverse(numbers.begin(), numbers.end());
}
```

A common answer is *we add a flag*:

```
void arrange_numbers(std::vector<int>& numbers, bool evensFirst)
{
    int i = 0, j = numbers.size();
    for (;;)
    {
        while (i != j && ((evensFirst && numbers[i] % 2 == 0)
            || (!evensFirst && numbers[i] % 2 != 0)))
            ++i;
        if (i == j)
            return;
        --j;
        while (i != j && ((evensFirst && numbers[j] % 2 != 0)
            || (!evensFirst && numbers[j] % 2 == 0)))
            --j;
        if (i == j)
            return;
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}
```

This kind of works, but the condition becomes very complicated.

What if we also want to move prime numbers before non-prime numbers, given some bool is_prime(int) primality-testing function?

We can keep adding flags and extending the if conditions:

```
enum class Arrangement
{
    EvensBeforeOdds,
    OddsBeforeEvens,
    PrimesBeforeNonPrimes,
};

void arrange_numbers(std::vector<int>& numbers, Arrangement arrangement)
{
    int i = 0, j = numbers.size();
    for (;;)
    {
        while (i != j && ((arrangement == Arrangement::EvensBeforeOdds && numbers[i] % 2 == 0)
            || (arrangement == Arrangement::OddsBeforeEvens && numbers[i] % 2 != 0)
            || (arrangement == Arrangement::PrimesBeforeNonPrimes && is_prime(numbers[i]))))
            ++i;
        if (i == j)
            return;
        --j;
        while (i != j && ((arrangement == Arrangement::EvensBeforeOdds && numbers[j] % 2 != 0)
            || (arrangement == Arrangement::OddsBeforeEvens && numbers[j] % 2 == 0)
            || (arrangement == Arrangement::PrimesBeforeNonPrimes && !is_prime(numbers[j]))))
            --j;
        if (i == j)
            return;
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}
```

This doesn’t scale very well though. What we actually want to do here is abstract away the predicate based on which we move elements around:

```
template <typename Pred>
void arrange_numbers(std::vector<int>& numbers, Pred pred)
{
    int i = 0, j = numbers.size();
    for (;;)
    {
        while (i != j && pred(numbers[i]))
            ++i;
        if (i == j)
            return;
        --j;
        while (i != j && !pred(numbers[j]))
            --j;
        if (i == j)
            return;
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}

void evens_before_odds(std::vector<int>& numbers)
{
    arrange_numbers(numbers, [](int i) { return i % 2 == 0; });
}

void odds_before_evens(std::vector<int>& numbers)
{
    arrange_numbers(numbers, [](int i) { return i % 2 != 0; });
}

void primes_before_non_primes(std::vector<int>& numbers)
{
    arrange_numbers(numbers, is_prime);
}
```

Note the algorithm remains the same: we have the exact same steps and loop invariants, but we can parameterize the condition. With this abstraction, the code actually becomes smaller and more readable.

This is about as far as I can get during an interview.

This is actually a well-known algorithm called a *partitioning algorithm*. A
partitioning algorithm moves elements that satisfy a predicate before elements
that don’t satisfy it. Let’s start with the above implementation:

```
template <typename Pred>
void partition(std::vector<int>& numbers, Pred pred)
{
    int i = 0, j = numbers.size();
    for (;;)
    {
        while (i != j && pred(numbers[i]))
            ++i;
        if (i == j)
            return;
        --j;
        while (i != j && !pred(numbers[j]))
            --j;
        if (i == j)
            return;
        std::swap(numbers[i], numbers[j]);
        ++i;
    }
}
```

This works for vectors, but what if we want to partition a doubly-linked list?
Can we abstract away the data structure we are partitioning? The answer is
*yes*. We can use iterators to access the data structure:

```
template <typename It, typename Pred>
void partition(It first, It last, Pred pred)
{
    for (;;)
    {
        while (first != last && pred(*first))
            ++first;
        if (first == last)
            return;
        --last;
        while (first != last && !pred(*last))
            --last;
        if (first == last)
            return;
        std::swap(*first, *last);
        ++first;
    }
}
```

The implementation is virtually the same. We get rid of i and j, as we use the iterators provided as arguments for traversal. The implementation does not increase in complexity, but is now usable beyond vectors. For example, we can now partition a C-style array:

```
void evens_before_odds(int arr[], int n)
{
    partition(arr, arr + n, [](int i) { return i % 2 == 0; });
}
```

A useful return for our algorithm is the *partition point* - the position of the
first element that does not satisfy our predicate. We have this implicitly and
callers might be interested in it. To avoid making callers have to recompute it,
we should return it:

```
template <typename It, typename Pred>
auto partition(It first, It last, Pred pred)
{
    for (;;)
    {
        while (first != last && pred(*first))
            ++first;
        if (first == last)
            return first;
        --last;
        while (first != last && !pred(*last))
            --last;
        if (first == last)
            return first;
        std::swap(*first, *last);
        ++first;
    }
}
```

For example, partition is a key ingredient in quicksort:

```
template <typename It, typename Comp>
void quick_sort(It first, It last, Comp comp)
{
    // Stop if we have no elements or one element
    auto dist = std::distance(first, last);
    if (dist < 2) return;
    // Swap pivot with last element
    auto pivot = first + dist / 2;
    std::iter_swap(pivot, --last);
    // Partition around pivot
    auto p = partition(first, last, [&](auto&& i) {
        return comp(i, *last);
    });
    // Move pivot back in place
    std::iter_swap(p, last);
    // Recursively sort left and right sides of the pivot
    quick_sort(first, p, comp);
    quick_sort(p + 1, ++last, comp);
}
```
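As a sanity check (my own usage sketch, not from the original), here is quick_sort driven end to end, with the partition-point-returning partition from the previous listing included for self-containment. The calls are ::-qualified so that argument-dependent lookup doesn’t also find std::partition:

```
#include <algorithm>
#include <cassert>
#include <functional>
#include <iterator>
#include <vector>

template <typename It, typename Pred>
It partition(It first, It last, Pred pred)
{
    for (;;)
    {
        while (first != last && pred(*first)) ++first;
        if (first == last) return first;
        --last;
        while (first != last && !pred(*last)) --last;
        if (first == last) return first;
        std::swap(*first, *last);
        ++first;
    }
}

template <typename It, typename Comp>
void quick_sort(It first, It last, Comp comp)
{
    auto dist = std::distance(first, last);
    if (dist < 2) return;
    auto pivot = first + dist / 2;
    std::iter_swap(pivot, --last);
    // Qualified call to pick our partition, not std::partition.
    auto p = ::partition(first, last, [&](auto&& i) { return comp(i, *last); });
    std::iter_swap(p, last);
    ::quick_sort(first, p, comp);
    ::quick_sort(p + 1, ++last, comp);
}
```

Sorting a vector then reduces to ::quick_sort(v.begin(), v.end(), std::less<int>{}).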

The partition algorithm we ended up with is fairly efficient, but it’s worth taking a look at some of the highly-optimized STL implementations. This is the MSVC STL implementation:

```
template <typename It, typename Pred>
auto partition(It first, It last, Pred pred)
{
    for (;; ++first)
    {
        for (; first != last && pred(*first); ++first);
        if (first == last) break;
        for (; first != --last && !pred(*last););
        if (first == last) break;
        iter_swap(first, last);
    }
    return first;
}
```

Note this performs the fewest possible operations. It also seems to favor for loops. Contrast this with the LLVM libc++ implementation, which seems to favor while loops:

```
template <typename It, typename Pred>
auto partition(It first, It last, Pred pred)
{
    while (true)
    {
        while (true)
        {
            if (first == last) return first;
            if (!pred(*first)) break;
            ++first;
        }
        do
        {
            if (first == --last) return first;
        } while (!pred(*last));
        swap(*first, *last);
        ++first;
    }
}
```

We focused on the second algorithm presented, which finds the first odd, last
even, and swaps them. We had another algorithm which was looking for *the first
even after the first odd* during each iteration. Let’s provide a generic
implementation for it too:

```
template <typename It, typename Pred>
auto partition(It first, It last, Pred pred)
{
    while (first != last && pred(*first))
        ++first;
    if (first == last)
        return first;
    for (It next = std::next(first); next != last; ++next)
        if (pred(*next))
        {
            std::swap(*first, *next);
            ++first;
        }
    return first;
}
```

What is the difference?

The difference is that this algorithm only ever increments the iterators. That
means it only requires a ForwardIterator, as opposed to the other algorithm,
which finds the *last even* number starting from the last iterator, which
requires a BidirectionalIterator.

In other words, the algorithm requiring only a ForwardIterator works on a singly-linked list (forward_list), while the other one can’t (we can only traverse a singly-linked list forward in O(1) time, not backwards).
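To illustrate (my own sketch), the ForwardIterator version from above can partition a std::forward_list directly; the call is ::-qualified so argument-dependent lookup doesn’t also pick up std::partition:

```
#include <forward_list>
#include <iterator>
#include <utility>

// The ForwardIterator version from above; it only ever moves forward.
template <typename It, typename Pred>
It partition(It first, It last, Pred pred)
{
    while (first != last && pred(*first))
        ++first;
    if (first == last)
        return first;
    for (It next = std::next(first); next != last; ++next)
        if (pred(*next))
        {
            std::swap(*first, *next);
            ++first;
        }
    return first;
}

// Partitions a singly-linked list in place and returns the partition point.
std::forward_list<int>::iterator evens_before_odds(std::forward_list<int>& numbers)
{
    return ::partition(numbers.begin(), numbers.end(), [](int i) { return i % 2 == 0; });
}
```

The bidirectional version could not be used here: forward_list provides no way to step an iterator backwards.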

The MSVC STL implementation of the forward-iterator algorithm is:

```
template <typename It, typename Pred>
auto partition(It first, It last, Pred pred)
{
    while (first != last && pred(*first))
        ++first;
    if (first == last)
        return first;
    for (It next = std::next(first); next != last; ++next)
        if (pred(*next))
            iter_swap(first++, next);
    return first;
}
```

The libc++ one is:

```
template <typename It, typename Pred>
auto partition(It first, It last, Pred pred)
{
    while (true)
    {
        if (first == last)
            return first;
        if (!pred(*first))
            break;
        ++first;
    }
    for (It next = first; ++next != last;)
    {
        if (pred(*next))
        {
            swap(*first, *next);
            ++first;
        }
    }
    return first;
}
```

The reason both implementations are provided is that the ForwardIterator version, while more generally applicable, is slightly less efficient. The BidirectionalIterator version moves any element at most once, and since the move is a swap, it means it performs at most N / 2 swaps where N is the number of elements. The ForwardIterator version might perform more swaps, up to N. For example, for the input 1 2 4, during the first step, it would swap 1 with 2, ending up with 2 1 4, then during the next step it would swap 1 with 4, ending up with 2 4 1.
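The difference can be observed directly with a pair of instrumented sketches (my own; the swap counters are the only addition to the two algorithms):

```
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// The bidirectional version (first odd / last even), with a swap counter.
int bidir_partition_swaps(std::vector<int> v)
{
    int swaps = 0;
    std::size_t first = 0, last = v.size();
    for (;;)
    {
        while (first != last && v[first] % 2 == 0) ++first;
        if (first == last) return swaps;
        --last;
        while (first != last && v[last] % 2 != 0) --last;
        if (first == last) return swaps;
        std::swap(v[first], v[last]);
        ++swaps;
        ++first;
    }
}

// The forward version (swap evens up past the first odd), with a swap counter.
int forward_partition_swaps(std::vector<int> v)
{
    int swaps = 0;
    std::size_t first = 0, n = v.size();
    while (first != n && v[first] % 2 == 0) ++first;
    for (std::size_t next = first + 1; next < n; ++next)
        if (v[next] % 2 == 0)
        {
            std::swap(v[first], v[next]);
            ++swaps;
            ++first;
        }
    return swaps;
}
```

On the input 1 2 4 from the example above, the bidirectional version performs a single swap while the forward version performs two.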

Partitioning is not specific to the C++ language. The same implementation can be used, for example, in C#, short of abstracting away the data structure traversal:

```
public static class IListPartition
{
    public static int Partition<T>(this IList<T> self, Func<T, bool> pred)
    {
        int first = 0, last = self.Count;
        for (;;)
        {
            while (first != last && pred(self[first]))
                ++first;
            if (first == last)
                return first;
            --last;
            while (first != last && !pred(self[last]))
                --last;
            if (first == last)
                return first;
            var temp = self[first];
            self[first] = self[last];
            self[last] = temp;
            ++first;
        }
    }
}
```

The .NET IEnumerator does not allow us to mutate the data structure we are enumerating over, so we cannot provide a generic IEnumerable<T> partition algorithm that works in-place. Otherwise the implementation is pretty much identical to the C++ one, as the algorithm is the same.

- Moving even numbers before odd ones in a given array of numbers is an instance of partition.
- The algorithm can be generalized to work with an arbitrary predicate.
- The algorithm can be generalized to work across any data structure as long as it can be traversed with at least a ForwardIterator.
- A BidirectionalIterator version performs at most N / 2 swaps (and N applications of the predicate).
- A ForwardIterator version performs at most N swaps (and N applications of the predicate).
- Both versions of the algorithm are part of the standard library (std::partition algorithm).
- The same algorithm can be implemented in other languages, as generic as the available abstractions allow.
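For completeness, a small sketch of the standard algorithm in action (the count_evens_in_front wrapper is my own):

```
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// std::partition returns the partition point; since C++11 it requires
// only forward iterators, and implementations typically dispatch to the
// cheaper bidirectional variant when the iterator category allows it.
std::size_t count_evens_in_front(std::vector<int>& v)
{
    auto p = std::partition(v.begin(), v.end(), [](int i) { return i % 2 == 0; });
    return static_cast<std::size_t>(p - v.begin());
}
```

The returned partition point doubles as a count: its distance from the beginning is the number of elements satisfying the predicate.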

Given a set of objects A, a binary relation R on the set is defined as a
subset of A x A. The *characteristic function* r for R is the
function r : A x A -> bool such that r(x, y) is true if
(x, y) in R, and false if (x, y) not in R. For a more natural
notation, we can use x ~ y to denote r(x, y).

More generally, a binary relation can be defined on a pair of sets A x B but to keep things simple, we’ll only cover binary relations over a single set.

Binary relations may have several properties. A few interesting ones are:

- A binary relation is *reflexive* if for any x in A, x ~ x
- A binary relation is *strict* or *irreflexive* if there is no x in A for which x ~ x
- A binary relation is *symmetric* if for any x, y in A, x ~ y implies y ~ x
- A binary relation is *antisymmetric* if for any x, y in A, x ~ y and y ~ x implies x = y
- A binary relation is *transitive* if for any x, y, z in A, if x ~ y and y ~ z, then x ~ z
- A binary relation is *total* if for any x, y in A, either x ~ y, y ~ x, or both (in other words, for any x, y, ~ imposes some relation between them)

The relation *is in the subtree rooted at* is a reflexive relation where A
is the set of nodes of a tree. For any pair of nodes x and y, we can
establish whether x is in the subtree rooted at y or not, and for any
x, x ~ x is true.

The relation *is parent of* in a tree is a strict relation: for any x in the
set of tree nodes A, x cannot be a parent of itself.

The relation *edge between* over the vertices of an undirected graph is a
symmetric relation: for any x and y vertices of the graph, if there is
an edge from x to y, the same edge exists from y to x, in other
words, if x ~ y then y ~ x.

The *is in the subtree rooted at* relation above is also antisymmetric: if for
a pair of nodes we can say x is in the subtree rooted at y and also
y is in the subtree rooted at x, it’s obvious that both x and y
are, in fact, the root of the subtree, thus x = y.

The relation *is reachable from* over the vertices of a directed graph is a
transitive relation: if x is reachable from y and y is reachable
from z, then x is reachable from z.

Not every relation is total. An example of a non-total relation is *is
ancestor of* in a tree. x can be an ancestor of y, in
which case x ~ y, or y can be an ancestor of x, in which case
y ~ x, but it could also be that x and y are in different subtrees,
so neither x ~ y nor y ~ x holds.

A *preorder* is a relation which is reflexive and transitive.

A preorder which is also symmetric is an *equivalence*. A preorder which is
antisymmetric is a *partial order*. More on those below.

An example of a preorder is the *is reachable from* relation over a directed
graph in the example above. This relation is obviously reflexive and transitive,
but it is neither symmetric nor antisymmetric. If x is reachable from y,
it doesn’t mean that y is reachable from x, so symmetry is not
guaranteed. Similarly, if x is reachable from y and y is reachable
from x, it does not mean that y equals x.

An *equivalence* relation ~ is a binary relation that is reflexive,
symmetric, and transitive. In other words, it is a preorder which also has the
symmetric property.

Such a relation partitions the set over which it is
defined into *equivalence classes* - groups of objects that are equivalent
based on the relation.

An example of equivalence is *same month* over a set of dates. This relation
is reflexive, since a date d has the same month as itself (d ~ d); is
symmetric, since if d1 has the same month as d2, then d2 has the
same month as d1 (d1 ~ d2 => d2 ~ d1); and transitive, since if d1 ~
d2 and d2 ~ d3 => d1 ~ d3.

This relation partitions our set of dates in the equivalence classes
corresponding to *dates in January*, *dates in February*, and so on. Note that
the dates for which the relation holds are equivalent, but not necessarily equal.

An *equality* relation is an equivalence relation which partitions the set A
consisting of n objects into exactly n equivalence classes. In other
words, for any x in A, only x ~ x is true.

A *partial order* relation <= is a binary relation that is reflexive,
antisymmetric, and transitive. In other words, it is a preorder which also has
the antisymmetric property.

An example of a partial order is the *is subset of* relation. It is reflexive
(A is a subset of A), antisymmetric (if A is a subset of B and
B is a subset of A, then A = B), and transitive (if A is a
subset of B and B is a subset of C, then A is a subset of
C).

A *total order* relation is a partial order that is also total. The above
example relation *is subset of* is not total - there could be a pair of sets
A and B such that neither is the subset of the other.

An example of a total order relation is *less than or equal to* for integers.

A *weak order* relation ~ is a binary relation that is transitive and total.
This implies reflexivity (for any x and y, either x ~ y, y ~ x,
or both, so for x and x we have x ~ x). In other words, it is a
preorder which is also total.

An example of a weak order is *less than or equal absolute value* for complex
numbers. For any two complex numbers c1 and c2, either c1 ~ c2 (
|c1| <= |c2|), c2 ~ c1 (|c2| <= |c1|), or both, so ~ is total.
We also have c1 ~ c2 and c2 ~ c3 implies c1 ~ c3 (|c1| <= |c2|
and |c2| <= |c3| implies |c1| <= |c3|). Unlike a total order though, the
relation is not antisymmetric. We can have c1 ~ c2, c2 ~ c1, with
c1 and c2 distinct complex numbers (any two numbers with the same
absolute value but different components).

A *strict weak order* relation < is a binary relation that is transitive and
strict (irreflexive).

An example of strict weak order is *less than* for integers.

Most programming languages provide a way to customize equality, inequality, and
comparison operators (==, !=, <, <=, >, >=). There is an
interesting point to be made about what equality *means* in this context. For
some types, this can simply mean comparing the bits and if they are the same,
the objects are equal. But we also have *logical equality* - two objects can
have different bitwise values but still be considered equal. Even more so for
comparing objects - comparing bit representations usually does not translate to
a meaningful comparison of objects.

Note though that any other function bool r(const T& m1, const T& m2) or member function bool r(const T& other) of T denotes a binary relation on T.

Different algorithms require different types of relations to exist between objects.

For example, we need at least a partial order relation to perform a topological sort. That is, in a directed acyclic graph, we can sort the vertices such that for every edge from a to b, a precedes b in the order. This can be used, for example, on the dependency graph in a makefile to determine how to sequence work.

Having an equivalence relation (eg. ==), we can implement a linear search algorithm to traverse a data structure and find an object equivalent to a given object. The C++ standard library algorithm find is an example of such an algorithm.

Having a total order relation (eg. <=) or a strict weak order (eg. <), allows us to implement binary search over an ordered set of objects. A total order or strict weak order relation also enables comparison sort algorithms.

Similarly, we need a total order or strict weak order to be able to determine a minimum or a maximum element from a set of objects (the min_element and max_element algorithms in C++).
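As a concrete illustration (my own sketch), the *less than absolute value* relation on complex numbers from earlier is a strict weak order, which is all that std::sort and std::min_element require:

```
#include <algorithm>
#include <cassert>
#include <complex>
#include <vector>

// A strict weak order on complex numbers: irreflexive and transitive,
// with distinct numbers of the same modulus comparing equivalent.
bool less_abs(const std::complex<double>& a, const std::complex<double>& b)
{
    return std::abs(a) < std::abs(b);
}
```

Passing less_abs to std::sort orders complex numbers by modulus even though complex numbers have no natural total order.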

- A binary relation R on a set A is a subset of A x A, denoted by a characteristic function r : A x A -> bool.
- A binary relation on a type T is denoted by either a free function of the form bool r(const T&, const T&) or a member function bool r(const T&).
- A binary relation may have several properties: it can be reflexive or strict, symmetric or antisymmetric, transitive, total etc.
- Depending on the properties it has, a relation can be, for example:
- A preorder (reflexive and transitive)
- An equivalence (reflexive, symmetric, and transitive)
- A partial order (reflexive, antisymmetric, and transitive)
- A weak order (reflexive, transitive, and total)
- A strict weak order (irreflexive and transitive)

- Certain algorithms require the types they operate on to have relations with certain properties.

This post covers my view on unit testing: why it is important, how to make time for it, how much to test, and why I don’t believe in TDD. It draws from my personal experience working on multiple software projects, small and large, on both new code and legacy code, the practices I tried to apply, and what worked well and not so well. While it is more high-level and I do not provide code snippets, this is by no means a purely theoretical essay.

Engineers and engineering teams who are not bought in on unit testing use the excuse that there is not enough time to write unit tests. There is always a deadline or pressure to ship, and unit tests, not being “product” code, get lower priority.

The problem is that we never get it right the first time - as we get a better understanding of our problem space, we need to refine our solution, which includes refactoring to better structure the code and redesign to accommodate for new requirements. This is where unit tests become invaluable in ensuring that such radical alterations of the codebase can be made safely and easily.

What I noticed first hand is that a team that is not disciplined about testing starts by churning out a significant amount of code, but in surprisingly little time development slows to a crawl: there is a lot of manual testing involved in validating any change, nasty bugs come up, and since refactoring is now scary (who knows what will break?), engineering debt keeps building up. On the other hand, teams that author unit tests from the very beginning can maintain a steady development pace indefinitely.

The funny thing is that engineers who worked in a code base with good test coverage could never go back - they immediately see the benefits and are sold on the practice - while engineers who haven’t done it know the theory, pay lip service to it, but never have time to actually implement tests.

There is, unfortunately, never enough time to do the right thing. My advice is
to make time: unit tests are part of feature development, so they should be
accounted for as such. Do not have separate tasks for implementing the feature
and writing the unit tests - these unit testing tasks are prime candidates to be
deprioritized and cut, after all, the feature *works*, right? Instead consider
testing as part of implementation, create a single task, and adjust estimates
accordingly. There is always pressure to ship, the job of a good engineer is to
not cave under this pressure, set expectations, and deliver a robust solution.
As mentioned above, the ability to keep a steady development pace makes the
average cost of authoring tests over time seem like nothing compared to the
alternative - a steady drop in development agility.

Advice to managers is to encourage a culture of quality and best practices. Strategically, shipping next week vs the week after is not as important as shipping in a couple of months vs shipping in a year, which is where the brittleness of the codebase becomes a major factor. Reward good engineering practices and you end up with well-engineered code.

That being said, sometimes we *do* need to ship next week.

In the old waterfall development days, we had several major milestones, each spanning months of development: M1, M2 etc. As the ship date came near, pressure increased, and shortcuts were taken more often. In the end, ship dates were met, but with a lot of compromises. What followed right after, when the team was burned out from the final stretch, was the so-called “quality milestone” or MQ. Here, engineers were free to reduce debt while project managers went off to define the future version of the product.

I personally love the concept of MQ. While I don’t doubt the existence of
*purely agile* teams where everything is delivered after week-long sprints with
high quality, most businesses make promises to customers and must meet
deadlines. Sometimes the pressure increases enough that we knowingly take
engineering shortcuts:

If I had a week, I’d do it the right way, but this works for now. I’ll come back and fix it later.

- Every programmer in the world at some point

After a hard deadline it’s the perfect time to schedule a mini-MQ - spend a week or two recovering from burnout and reducing debt.

This expands beyond unit tests to things like refactoring and rearchitecting code, automating manual processes, writing documentation etc.

The other extreme is test-driven development. The premise of test-driven development is that turning requirements into unit tests, then writing code to make those tests pass is a solid approach to engineering. This sounds great in theory but falls flat in practice.

Good software is correct, efficient, and maintainable. These qualities come from a good understanding of the problem being solved and a thought-through solution, not by making a set of test cases pass. An anecdote I like to reference when discussing this is the Sudoku solver. Ron Jeffries, one of the founders of extreme programming, wrote several blog posts in which he attempted to implement a Sudoku solver using test-driven development here, here, here, here, here, and here. The attempt failed. Around the same time, Peter Norvig implemented a Sudoku solver and wrote a blog post with a beautiful explanation of a thorough approach to analyzing the problem and coming up with a good algorithm to solve it. The point here is that a set of unit tests, no matter how comprehensive, will not design an algorithm for you. The algorithm comes from stepping back and thinking about the problem, which a test-centric approach actively discourages.

The one good thing that TDD encourages is writing tests which initially fail, then providing the implementation to make them pass, which ensures the tests themselves are correct. We can always create a test that exercises a function and then asserts a tautology (Assert.IsTrue(true)), which covers the code, makes the test pass, but provides zero value. Having a test that fails when invoked with a stub and passes when invoked with the real implementation avoids this issue.

Almost forgotten across the industry nowadays is that software can be formally proven correct. A given number of passing tests can only guarantee that for that particular set of inputs, we get the expected output - which, in case of test bugs, might not mean anything. The way to be 100% confident that the code does what we think it does is to prove this fact formally. This is not always feasible at scale, but for critical pieces of functionality, formalism is better than test cases.

That doesn’t mean tests are not needed - as soon as a line of code changes (bug
fix, optimization, etc.), the formalism must be re-evaluated, and, outside
fringe programming languages, we can’t automatically detect when a proof no
longer holds. The point is that tests are about behavior not about design - we
design to solve the problem, we test to make sure that our solution does what we
expect it to do. **Design comes first, tests come second, implementation is
third.**

Unit tests become valuable when we can make deep changes within our code and ensure there is no observable change in output. This is invaluable to engineering velocity.

In terms of code coverage, I believe something around 90% can be achieved with a minimum of effort. Full coverage is unrealistic because the code always has some interaction with the world - making network calls, relying on time, random numbers, IO etc. These are all interactions that can sporadically fail, and unit tests, by definition, must be 100% reliable.

A testable design abstracts all the world interactions under interfaces that can be mocked during testing. This way, we end up with a thin layer that implements these interfaces and forwards to the real OS/library functions. This thin layer should not contain any logic beyond forwarding arguments, since it is not really testable - attempting to write unit tests against it ends up with an ongoing cost of analyzing random test breaks due to failures in components outside of our control. The other place where ROI is small is testing trivial code like getters/setters. This is wasted engineering effort and provides questionably little value. That being said, this layer should be at most 10% of the code base, more likely somewhere in the 1-2% range for larger projects. Everything else should be covered by unit tests.
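A minimal sketch of this idea (the names are illustrative, not from any particular codebase): the clock sits behind an interface, production code forwards to the OS, and tests substitute a deterministic fake:

```
#include <chrono>

// World interaction (reading the clock) hidden behind an interface.
struct Clock
{
    virtual ~Clock() = default;
    virtual std::chrono::system_clock::time_point now() const = 0;
};

// The thin, logic-free forwarding layer - not worth unit testing.
struct SystemClock : Clock
{
    std::chrono::system_clock::time_point now() const override
    {
        return std::chrono::system_clock::now();
    }
};

// Fully deterministic fake, safe to use in 100%-reliable unit tests.
struct FakeClock : Clock
{
    std::chrono::system_clock::time_point value{};
    std::chrono::system_clock::time_point now() const override { return value; }
};

// Logic under test depends only on the interface, never on the OS call.
bool is_expired(const Clock& clock, std::chrono::system_clock::time_point deadline)
{
    return clock.now() > deadline;
}
```

In tests, is_expired is exercised with a FakeClock set to whatever instant the test case needs; only SystemClock, the forwarding layer, goes uncovered.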

There is also an interesting distinction between explicit and implicit testing - a function can be covered explicitly, by writing unit tests against it, or implicitly, by writing unit tests against other functions that end up calling it. A good rule of thumb is to test against the interface, not against the private implementation. If you can’t reach the same amount of code coverage by testing the public interface as you can by testing the implementation details, you have dead code in the implementation - code that cannot be reached from the public interface for any possible input. This code should be removed, not tested. Unit tests have a cost themselves - if we have tests exercising a function and, during a refactoring, we change the signature of that function, we have to go update all of those tests. If every refactoring breaks unit tests and requires us to fix them, the engineering cost of maintaining test coverage increases needlessly.

Ideally, we should break and have to update tests when we break the interface (the unit’s contract to the outside world). We should be able to freely move the implementation guts around, as keeping tests green in this case is the ultimate purpose of unit testing - ensuring output through the contract doesn’t break during internal changes. A couple of gotchas here: if we feel we need to test an implementation detail because it’s scarily complex, we have a code smell - that implementation detail should be split into multiple, less scary pieces; if we have a lot of implementation logic underneath a thin interface, we have another smell - the component (unit) is too clever and should be split into multiple components, which would necessarily pull some of the code to the interface level.

The bottom line is that we can achieve over 90% test coverage without taking dependencies on implementation details.

Unit testing must be easy.

Authoring unit tests should be cheap. Running unit tests should be fast and 100% reliable. Unit tests should be part of the engineering inner-loop - code/compile/unit test. Code coverage should be easy to measure. Mocking should be easy. If any of these points fall short, test coverage suffers. Good infrastructure makes it easy to author and execute unit tests. This is key in encouraging a team to use good engineering practices.

The other aspect of testing cost is design - code that is well componentized is easily testable. Monolithic code, code that implements lots of branching for various conditions, code that directly calls components outside of our control (network, UI etc.), are all hard to test. This is not an excuse to bypass testing, it’s a smell of the code itself.

It’s easy to agree with all of the above but resign yourself to the fact that in your organization things are different - the infrastructure is not there, the culture is not there, there is no time. I believe that the most successful and long-lived software projects have a codebase riddled with compromises and outdated software practices. This is not a symptom of any problem; it’s the result of implementing a successful business. It is our duty as software craftsmen to remove the compromises and update the outdated practices, to question the status quo and strive to make things better. Write unit tests!

I recently learned about the 24 game. There are several variants of it, but the version I learned goes like this:

Take out the face cards and jokers from a deck and deal four cards (aces or number cards). Aces can be used either as 1 or as 11. Using addition, subtraction, multiplication, and division, with any grouping of operations (parentheses can be added anywhere), try to come up with an expression that uses all four cards once and equals 24. Division is fractional, so 5 / 2 is 2.5.

For example for A 4 5 8 we have \((1 + 5) * (8 - 4)\).

An interesting problem is implementing an algorithm that finds a solution given the cards as input.

A simple solution is to try all permutations of cards, all possible operations, and all possible groupings.

In the general case, there are 24 ways to arrange the cards - the number of permutations of 4 elements, which is \(4!\). Each ace doubles this number, as we need to consider both the case in which we use it as 1 and the case in which we use it as 11.

There are 64 ways to combine operations, since we have 4 operations in each of 3 positions, which means \(4^3\) total.
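Both counts are easy to confirm with the same itertools helpers the solver below uses:

```
from itertools import permutations, product

# 4! orderings of four distinct cards
assert len(list(permutations([2, 4, 7, 8]))) == 24

# 4 possible operators in each of 3 positions: 4^3 combinations
assert len(list(product('+-*/', repeat=3))) == 64
```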

We also have 5 possible groupings:

\[ \begin{align}\begin{aligned}(x_0 \odot x_1) \oplus (x_2 \otimes x_3)\\((x_0 \odot x_1) \oplus x_2) \otimes x_3\\(x_0 \odot (x_1 \oplus x_2)) \otimes x_3\\x_0 \odot ((x_1 \oplus x_2) \otimes x_3)\\x_0 \odot (x_1 \oplus (x_2 \otimes x_3))\end{aligned}\end{align} \]

where \(\odot, \oplus, \otimes\) are placeholders for any operators (they could potentially be the same operator).
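As a side note (not needed for the solver), five is exactly the number of binary parse trees over four operands, i.e. the Catalan number \(C_3\):

```
from math import comb

def catalan(n):
    # Number of ways to fully parenthesize n + 1 operands
    # (equivalently, binary trees with n internal nodes)
    return comb(2 * n, n) // (n + 1)

assert catalan(3) == 5  # four operands, three operators
```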

A simple solver implementation looks like this:

```
from itertools import permutations, product
from sys import argv

# Transform arguments to numbers, replace 'A' and 'a' with [1, 11]
args = sum([[1, 11] if arg in 'aA' else
            [int(arg)] for arg in argv[1:5]], [])

# For every permutation of 4 arguments
for xs in permutations(args, 4):
    # If we have more 1s and 11s than aces, ignore this permutation
    if sum(n == 1 or n == 11 for n in xs) > len(args) - 4:
        continue
    # For every possible combination of 3 operators
    for ops in product('+-*/', repeat=3):
        # For every possible grouping
        for exp in ['({} {} {}) {} ({} {} {})',
                    '(({} {} {}) {} {}) {} {}',
                    '({} {} ({} {} {})) {} {}',
                    '{} {} ({} {} ({} {} {}))',
                    '{} {} (({} {} {}) {} {})']:
            # Place operands and operators in expression
            exp = exp.format(xs[0], ops[0], xs[1], ops[1], xs[2],
                             ops[2], xs[3])
            try:
                # If expression evaluates to 24, we found a solution
                if eval(exp) == 24:
                    print(exp)
                    exit()
            except ZeroDivisionError:
                # Ignore division by zero errors
                pass

# If we get here we tried all combinations and couldn't
# find any solution
print('No solution')
```

We have ten possible cards (ace and number cards), and taking combinations with repetition of 4 cards, we have \(\frac{(n + r - 1)!}{r! * (n - 1)!}\) for \(n = 10, r = 4\), so a total of \(\frac{(10 + 4 - 1)!}{4! * (10 - 1)!} = 715\) possible games.
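The closed form and a direct enumeration agree:

```
from itertools import combinations_with_replacement
from math import factorial

n, r = 10, 4
closed_form = factorial(n + r - 1) // (factorial(r) * factorial(n - 1))

# One element per distinct multiset of 4 cards drawn from 10 ranks
enumerated = len(list(combinations_with_replacement(range(1, 11), 4)))

assert closed_form == enumerated == 715
```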

Feeding all possible games to the code above, we can see that there are 117 games which have no solution. The remaining 598 games are solvable.

We can optimize the above solution further by observing that we only need three of the five groupings to cover all cases. Take, for example, \(x_0 \odot ((x_1 \oplus x_2) \otimes x_3)\). If \(\odot\) is a commutative operation (addition or multiplication), we can rewrite this to the equivalent \(((x_1 \oplus x_2) \otimes x_3) \odot x_0\), and since we take all permutations of arguments and operators anyway, this ends up covered by the \(((x_0 \odot x_1) \oplus x_2) \otimes x_3\) case. For non-commutative operations, for example subtraction, notice that if we do have a solution \(x_0 - ((x_1 \oplus x_2) \otimes x_3) = 24\), since \(x_0\) is at most 11, we need to subtract a negative number from it in order to get 24. This implies that at least one of \(\oplus\) and \(\otimes\) is also a subtraction. If \(\otimes\) is a subtraction, we can rewrite the expression \(x_0 - ((x_1 \oplus x_2) - x_3)\) as \((x_0 + x_3) - (x_1 \oplus x_2)\). If \(\otimes\) is not a subtraction but \(\oplus\) is, we have \(x_0 - ((x_1 - x_2) \otimes x_3)\), which is equivalent to \(x_0 - (- (x_2 - x_1) \otimes x_3)\). If \(\otimes\) is addition, this becomes \(x_0 - (x_3 - (x_2 - x_1)) = (x_0 - x_3) + (x_2 - x_1)\). If \(\otimes\) is multiplication or division, this becomes \(x_0 - (- (x_2 - x_1) \otimes x_3) = x_0 + ((x_2 - x_1) \otimes x_3) = ((x_2 - x_1) \otimes x_3) + x_0\).

Similar rewrites can be done if \(\odot\) is division, by observing that we would have to divide by a positive number smaller than 1 in order to get 24, so at least one of \(\oplus\) or \(\otimes\) is also a division. This means we only need the groupings

\[ \begin{align}\begin{aligned}(x_0 \odot x_1) \oplus (x_2 \otimes x_3)\\((x_0 \odot x_1) \oplus x_2) \otimes x_3\\(x_0 \odot (x_1 \oplus x_2)) \otimes x_3\end{aligned}\end{align} \]

to find all possible solutions. Our solution becomes:

```
from itertools import permutations, product
from sys import argv

args = sum([[1, 11] if arg in 'aA' else
            [int(arg)] for arg in argv[1:5]], [])

for xs in permutations(args, 4):
    if sum(n == 1 or n == 11 for n in xs) > len(args) - 4:
        continue
    for ops in product('+-*/', repeat=3):
        for exp in ['({} {} {}) {} ({} {} {})',
                    '(({} {} {}) {} {}) {} {}',
                    '({} {} ({} {} {})) {} {}']:
            exp = exp.format(xs[0], ops[0], xs[1], ops[1], xs[2],
                             ops[2], xs[3])
            try:
                if eval(exp) == 24:
                    print(exp)
                    exit()
            except ZeroDivisionError:
                pass

print('No solution')
```
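We can spot-check the rewrites that justify dropping the last two groupings on concrete values (using exact fractions to sidestep floating-point rounding):

```
from fractions import Fraction
from operator import add, sub, mul, truediv

x0, x1, x2, x3 = (Fraction(v) for v in (11, 2, 3, 5))

# Outer operation is subtraction and the innermost is too:
# x0 - ((x1 (+) x2) - x3) == (x0 + x3) - (x1 (+) x2) for any (+)
for op in (add, sub, mul, truediv):
    assert x0 - (op(x1, x2) - x3) == (x0 + x3) - op(x1, x2)

# Only the middle operation is subtraction, innermost is addition:
# x0 - ((x1 - x2) + x3) == (x0 - x3) + (x2 - x1)
assert x0 - ((x1 - x2) + x3) == (x0 - x3) + (x2 - x1)
```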

This means that for games without aces, we need to check 24 (\(4!\)) permutations of cards, 64 (\(4^3\)) combinations of operators, and 3 groupings. That is \(4! * 4^3 * 3 = 4608\) tests. For games with aces, we double this number for each ace to account for both the 1 and 11 cases.

A more interesting question is what is the minimum number of tests we need to perform in order to correctly find a solution for all solvable games.

It is obvious that there are expressions which can never evaluate to 24 for any game. For example \(((x_0 - x_1) - x_2) - x_3\), since \(x_i \in \{1, 2, ... 11\}\).

It is also obvious that we perform a lot of redundant tests, since, for example, all of the below expressions are equivalent for all possible inputs:

\[ \begin{align}\begin{aligned}(x_0 + x_1) + (x_2 + x_3)\\((x_0 + x_1) + x_2) + x_3\\(x_0 + (x_1 + x_2)) + x_3\\(x_0 + x_1) + (x_3 + x_2)\\((x_0 + x_1) + x_3) + x_2\\(x_0 + (x_1 + x_3)) + x_2\\...\end{aligned}\end{align} \]

and so on for all permutations of \(x_0, x_1, x_2, x_3\).

Let’s generate all possible permutations of cards, combinations of operators, and groupings as above:

```
import itertools

operands = list(itertools.permutations(range(4), 4))   # 24 of these
operators = list(itertools.product('+-*/', repeat=3))  # 64 of these
groupings = ['({} {} {}) {} ({} {} {})',
             '(({} {} {}) {} {}) {} {}',
             '({} {} ({} {} {})) {} {}']                # 3 of these
```

Note that here we are looking at all possible games, so operands are permutations of indexes from 0 to 3, not actual cards. We can also take all possible games as combinations of 4 numbers from 1 to 11. Here, each ace is expanded into both a 1 game and an 11 game, so we end up with 1001 possible games instead of 715:

```
inputs = list(itertools.combinations_with_replacement(range(1, 12), 4))
# 1001 of these
```

We can now write a function that, for a given game, generates all possible expressions which evaluate to 24:

```
def solutions_for(inp):
    for xs in operands:
        for ops in operators:
            for exp in groupings:
                try:
                    if eval(exp.format(inp[xs[0]], ops[0],
                                       inp[xs[1]], ops[1], inp[xs[2]],
                                       ops[2], inp[xs[3]])) == 24:
                        yield exp.format(f'x{xs[0]}', ops[0],
                                         f'x{xs[1]}', ops[1], f'x{xs[2]}',
                                         ops[2], f'x{xs[3]}')
                except ZeroDivisionError:
                    pass
```

This is very similar to the initial solution, except that we don’t have to worry about aces (we assume they are already converted to either 1 or 11), and we use permutations of indexes to determine the order of terms, since the input is always in increasing order (as generated by itertools.combinations_with_replacement). So instead of placing inp[0], op[0], inp[1], op[1], inp[2], op[2], inp[3] in the expression to be evaluated, as we did in the initial solution, we place inp[xs[0]], op[0], inp[xs[1]], op[1], inp[xs[2]], op[2], inp[xs[3]], where xs ranges over operands. We also return the expression with the operands replaced by x0, x1, x2, x3 since we don’t care about their particular values, only the shape of the expression.

For example calling:

```
list(solutions_for([2, 4, 7, 8]))
```

yields

```
['((x0 * x2) - x3) * x1', '(x1 * x2) - (x3 / x0)',
'((x2 * x0) - x3) * x1', '((x2 / x0) * x3) - x1',
'(x2 / (x0 / x3)) - x1', '(x2 * x1) - (x3 / x0)',
'((x2 * x3) / x0) - x1', '(x2 * (x3 / x0)) - x1',
'((x3 / x0) * x2) - x1', '(x3 / (x0 / x2)) - x1',
'((x3 * x2) / x0) - x1', '(x3 * (x2 / x0)) - x1']
```

These are all possible expressions which evaluate to 24 for the game 2 4 7 8.
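A quick substitution confirms the first two (inp is sorted, so x0 through x3 map to 2, 4, 7, 8):

```
x0, x1, x2, x3 = 2, 4, 7, 8

assert ((x0 * x2) - x3) * x1 == 24  # ((2 * 7) - 8) * 4
assert (x1 * x2) - (x3 / x0) == 24  # (4 * 7) - (8 / 2)
```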

We can compute the list of all expressions which evaluate to 24 for every possible game:

```
results = []
for inp in inputs:
    result = set(solutions_for(inp))
    # Only append the set of expressions to the list if
    # non-empty (if game has at least one solution)
    if result:
        results.append(result)
```

We can take the union of the sets in results and get the set of all expressions that evaluate to 24 for at least one game:

```
expressions = set()
for result in results:
    expressions = expressions.union(result)
```

The size of this set is 1809. We are guaranteed that for any possible game, no other expression evaluates to 24, since we generated all possible solutions for all possible games. This means we can test just these 1809 expressions for any game and determine whether it is solvable or not, which is better than our original 4608 (or more for games with aces).

Here we eliminated all expressions which never evaluate to 24, but we still have all the redundant tests in our set of expressions. It is also possible to have an expression \(E_0\) which solves all games some expression \(E_1\) solves, plus some other games. In that case we would never need to test using \(E_1\), since \(E_0\) solves every game \(E_1\) would solve.

More formally, expressions is our universe \(\mathcal{U}\) of tests and results is a set of sets \(R = \{ R_0, R_1 ... R_n \}\) where \(R_i \subset \mathcal{U} \space \forall i \in \{ 0, 1 ... n \}\). We want to find the smallest set \(H \subset \mathcal{U}\) such that \(H \cap R_i \neq \varnothing \space \forall i \in \{ 0, 1 ... n \}\).

The good news is that this is actually a well known problem called **the hitting
set problem** [1]. The bad news is this problem is NP-hard. Even with clever
pruning, trying out combinations of expressions to find the smallest \(H\)
has factorial complexity and even for small sets it quickly reaches astronomical
numbers.

Since finding an optimal solution is too computationally expensive, we can at
least attempt to find a *good enough* solution.

The greedy algorithm which solves the hitting set problem works as follows: build up the solution by selecting at each step the element which hits the highest number of sets which were not hit so far.

```
# Start with an empty set
min_expressions = set()
# While we have unhit sets
while results:
    min_expression, max_hitting = None, 0
    # For each expression in our universe
    for expr in expressions:
        hitting = sum(1 for result in results if expr in result)
        if hitting > max_hitting:
            min_expression, max_hitting = expr, hitting
    # We found the expression hitting the most unhit sets
    min_expressions.add(min_expression)
    # Remove hit sets from results
    results = [result for result in results if
               min_expression not in result]
```

Interestingly enough, since we are working with sets and hashing is randomized in Python, I got different results across different runs of this algorithm. When multiple expressions hit the same maximum number of unhit sets, we non-deterministically select one of them, since iteration over a set follows the randomized key order. I got solutions ranging from 110 to 114 expressions. This gives us an upper bound of 110 - we must perform at most 110 tests to find a solution for a game.
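If reproducible output matters, one option (my addition, not part of the original code) is to iterate the universe in a fixed order so ties break deterministically. Here is the same greedy idea as a function, exercised on a toy instance:

```
def greedy_hitting_set(universe, sets_to_hit):
    """Greedy hitting set with deterministic tie-breaking."""
    chosen = set()
    remaining = list(sets_to_hit)
    while remaining:
        # Sorted iteration makes ties break the same way on every run;
        # max returns the first element with the highest hit count
        best = max(sorted(universe),
                   key=lambda e: sum(e in s for s in remaining))
        chosen.add(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

# 'b' hits the first two sets, then 'd' hits the last one
sets_to_hit = [{'a', 'b'}, {'b', 'c'}, {'d'}]
assert greedy_hitting_set({'a', 'b', 'c', 'd'}, sets_to_hit) == {'b', 'd'}
```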

We can use the above code to generate a set of expressions and dump it into a source file, together with the code to test input:

```
from sys import argv
expressions = [
"(x3 + x1) * (x2 - x0)", "((x2 * x3) - x0) / x1",
"(x3 + (x1 + x0)) * x2", "(x0 - x3) * (x1 - x2)",
"(x1 * x0) + (x2 - x3)", "((x2 * x0) * x3) - x1",
"(x3 + (x1 + x2)) + x0", "(x2 * x0) + (x3 - x1)",
"((x0 / x1) * x2) * x3", "(x1 * (x3 - x2)) + x0",
"((x3 + x2) - x0) + x1", "(x0 * x1) * (x3 - x2)",
"(x1 * x0) - (x2 + x3)", "(x3 / x1) * (x2 + x0)",
"((x2 + x1) + x3) * x0", "(x3 - x0) / (x1 / x2)",
"(x3 + x1) * (x2 / x0)", "(x1 * x3) + (x0 * x2)",
"((x3 * x1) + x2) + x0", "((x0 - x2) + x1) * x3",
"((x3 / x0) + x2) * x1", "(x3 + (x2 * x1)) - x0",
"(x1 * x3) - (x2 - x0)", "(x1 * (x3 - x0)) - x2",
"((x2 + x1) * x3) / x0", "(x0 * x3) - (x1 + x2)",
"(x1 + (x3 * x0)) * x2", "(x0 * (x2 + x3)) - x1",
"(x1 * x3) - (x0 + x2)", "(x3 - (x2 / x1)) * x0",
"(x0 - (x1 / x2)) * x3", "(x3 * x0) + (x2 - x1)",
"((x3 / x0) + x1) + x2", "(x3 + (x1 * x2)) + x0",
"(x0 + x2) * (x1 + x3)", "(x0 * (x3 - x1)) + x2",
"((x0 + x3) * x1) * x2", "(x2 * (x0 + x3)) / x1",
"(x2 - (x1 + x0)) * x3", "(x3 * x2) - (x0 / x1)",
"((x0 + x3) * x1) + x2", "((x0 * x3) + x1) - x2",
"(x2 + (x3 - x1)) * x0", "(x2 * (x0 + x1)) + x3",
"((x2 + x3) * x0) + x1", "(x0 - (x3 / x2)) * x1",
"((x0 + x2) - x3) * x1", "((x3 / x2) + x0) * x1",
"(x1 / x0) + (x2 + x3)", "((x1 * x0) - x2) * x3",
"((x0 + x1) + x2) * x3", "(x2 * (x3 - x1)) - x0",
"((x2 * x1) - x0) * x3", "((x3 - x1) * x2) + x0",
"(x3 / (x0 + x1)) * x2", "((x1 * x0) + x3) + x2",
"(x3 + x0) * (x2 - x1)", "(x2 - x0) * (x3 - x1)",
"((x3 / x0) * x2) + x1", "((x2 * x1) - x0) - x3",
"(x0 + (x3 - x1)) * x2", "(x0 * x2) / (x3 - x1)",
"((x3 * x2) - x1) / x0", "(x2 - (x0 / x3)) * x1",
"(x3 - x2) * (x0 + x1)", "(x0 * x2) - (x3 + x1)",
"((x2 - x0) * x3) + x1", "(x1 * (x3 - x0)) + x2",
"(x1 * (x0 + x3)) - x2", "((x0 + x1) * x3) + x2",
"((x1 - x2) + x3) * x0", "((x3 - x1) * x2) * x0",
"((x2 + x1) - x3) * x0", "(x1 + (x3 / x2)) * x0",
"(x2 / (x0 / x1)) + x3", "(x2 / (x3 - x0)) * x1",
"(x3 * x1) - (x0 * x2)", "((x1 + x0) * x2) * x3",
"(x1 - (x2 / x3)) * x0", "(x2 + (x3 * x0)) + x1",
"((x2 * x3) + x1) / x0", "(x3 - x0) * (x2 + x1)",
"(x1 * x3) + (x2 - x0)", "(x3 * (x2 - x0)) - x1",
"(x0 + (x1 - x3)) * x2", "(x1 * (x3 - x2)) - x0",
"(x0 + (x2 * x3)) - x1", "(x2 + x3) / (x0 / x1)",
"(x0 * x3) / (x2 - x1)", "(x2 - (x3 / x0)) * x1",
"(x0 - (x1 - x2)) * x3", "(x3 + x1) + (x2 * x0)",
"((x3 - x1) - x0) * x2", "(x1 * (x2 - x0)) - x3",
"(x2 + x0) * (x3 - x1)", "((x3 - x1) * x2) / x0",
"((x1 * x3) - x2) * x0", "((x1 + x3) * x0) - x2",
"((x3 - x2) + x0) * x1", "((x2 * x0) - x3) * x1",
"(x2 * (x1 + x0)) - x3", "(x0 + (x1 / x2)) * x3",
"((x2 - x1) * x3) + x0", "((x2 / x1) + x3) * x0",
"(x1 * x0) - (x2 / x3)", "((x3 * x0) - x1) * x2",
"((x2 - x0) * x1) + x3", "((x1 + x0) * x3) - x2",
"(x1 * x0) / (x3 - x2)", "(x3 * (x1 - x0)) - x2"
]
# Since we no longer try all permutations of cards, we
# need to split inputs containing aces by replacing aces
# with both 1 and 11
def get_input(args):
    if 'A' not in args:
        return [args]
    idx = args.index('A')
    return (get_input(args[:idx] + ['1'] + args[idx+1:]) +
            get_input(args[:idx] + ['11'] + args[idx+1:]))

for args in get_input(argv[1:5]):
    # We also expect inputs to be in sorted order now
    args = sorted([int(arg) for arg in args])
    for exp in expressions:
        for i in range(4):
            # Replace x0 ... x3 with args[0] ... args[3]
            exp = exp.replace('x' + str(i), str(args[i]))
        try:
            if eval(exp) == 24:
                print(exp)
                exit()
        except ZeroDivisionError:
            pass

print('No solution')
```

We can test this by ensuring that we still see 117 games without solution when we try to solve all 715 games, which is indeed the case. We reduced the number of tests we perform on a game from 4608 to 110.

There are a couple of other interesting facts we can determine from the set of all solutions for all games: the set of unique expressions and out of them, the subset of expressions which are absolutely required in order to solve all possible games.

By *unique expression* I mean picking an expression and eliminating all other
equivalent expressions from the set (for example keeping only
\(((x_0 + x_1) + x_2) + x_3\) while dropping all other permutations and
groupings of addition). We can easily do this by defining equivalent expressions
as expressions which solve the exact same games. So if \(E_1\) solves some
subset \(R_{E_1}\) of \(R\), an expression \(E_2\) is equivalent to
it if the set \(R_{E_2}\) of games it solves is equal to \(R_{E_1}\). We
can define equivalence as:

```
def equivalent(exp1, exp2):
    for result in results:
        if exp1 in result and exp2 not in result:
            return False
        if exp1 not in result and exp2 in result:
            return False
    return True
```

With this function, we can go over our universe \(\mathcal{U}\) of expressions and eliminate all expressions which are equivalent to another expression:

```
expressions = list(expressions)
i1 = 0
while i1 < len(expressions):
    i2 = i1 + 1
    while i2 < len(expressions):
        if equivalent(expressions[i1], expressions[i2]):
            expressions = expressions[:i2] + expressions[i2+1:]
        else:
            i2 += 1
    i1 += 1
expressions = set(expressions)
```

The resulting set of unique expressions has 273 elements. This means these 273 expressions are enough to solve all possible games and, moreover, adding any other expression to this set would be redundant. This is lower than our original 1809 expressions which solve at least one game, but higher than the bound of 110 expressions we found earlier. Note that this deduplication does not help the greedy algorithm shown in the previous section: once that algorithm picks an expression, it automatically discards all equivalent expressions, since it eliminates from the search space all games solved by the picked expression. It does put the problem in combinatorial perspective though - we need to select from 273 candidates to find the smallest hitting set.
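The quadratic scan above is fine at this scale, but the same deduplication can also be done in a single pass by fingerprinting each expression with the set of games it solves and keeping one representative per fingerprint (a sketch, assuming results is the list of per-game solution sets):

```
def dedup_equivalent(expressions, results):
    # Two expressions are equivalent iff they solve exactly the same
    # games, i.e. they produce the same fingerprint of game indices
    representative = {}
    for exp in expressions:
        fingerprint = frozenset(
            i for i, result in enumerate(results) if exp in result)
        representative.setdefault(fingerprint, exp)
    return set(representative.values())

# Toy check: 'e1' and 'e2' solve the same games, 'e3' does not
games = [{'e1', 'e2'}, {'e1', 'e2', 'e3'}]
assert len(dedup_equivalent(['e1', 'e2', 'e3'], games)) == 2
```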

Once we have eliminated equivalent expressions, we can update the set of game results accordingly, by intersecting each \(R_i\) with our new \(\mathcal{U}\):

```
results = [list(expressions.intersection(result)) for result in results]
```

Now we can search \(R\) for sets with a single element. This will give us expressions which must be part of our solution, otherwise eliminating them would cause a solvable game to appear as unsolvable:

```
min_expressions, games_with_unique_result = set(), 0
for result in results:
    if len(result) == 1:
        min_expressions.add(result[0])
        games_with_unique_result += 1
```

Turns out there are 62 expressions which (ignoring equivalences) provide unique solutions to games. They are:

```
{'(x1 + x2) - (x0 - x3)', '(x1 + x0) * (x2 * x3)',
'((x1 + x0) - x2) * x3', '((x3 + x2) * x0) + x1',
'(x0 + x3) * (x2 - x1)', '((x2 * x1) / x0) + x3',
'((x3 - x1) * x2) - x0', '(x1 - (x3 / x2)) * x0',
'((x1 * x0) - x2) - x3', '((x2 - x1) - x0) * x3',
'((x1 * x0) - x3) + x2', '(x2 + (x1 / x0)) + x3',
'((x1 + x3) - x2) * x0', '(x2 - (x3 - x0)) * x1',
'(x2 + (x1 + x3)) + x0', '(x1 * x0) - (x2 / x3)',
'(x3 + (x2 * x1)) - x0', '((x0 * x1) + x3) + x2',
'(x2 + (x1 - x3)) * x0', '((x0 * x1) - x2) * x3',
'(x2 * x0) + (x1 + x3)', '(x2 + (x3 * x0)) + x1',
'((x0 * x3) - x1) + x2', '(x0 + (x2 - x1)) * x3',
'((x3 * x1) - x2) - x0', '(x1 - x2) * (x0 - x3)',
'((x3 - x0) - x1) * x2', '(x1 * (x2 - x0)) - x3',
'(x0 / (x3 - x2)) * x1', '((x0 + x1) - x3) * x2',
'((x3 - x1) + x0) * x2', '((x1 + x0) * x3) + x2',
'((x1 + x0) * x3) - x2', '(x3 / x1) * (x2 + x0)',
'((x3 - x1) * x0) + x2', '(x3 * (x2 - x0)) + x1',
'((x0 + x2) + x1) * x3', '(x2 * (x0 + x1)) - x3',
'(x2 * x1) / (x3 - x0)', '(x3 + x0) + (x1 * x2)',
'(x0 + (x3 * x2)) - x1', '(x0 - x2) * (x1 - x3)',
'((x2 + x3) * x1) / x0', '(x2 / x0) * (x3 + x1)',
'(x3 * (x1 - x0)) - x2', '((x3 + x1) * x0) - x2',
'((x0 + x3) * x1) - x2', '((x3 + x2) * x0) - x1',
'(x3 + x0) / (x1 / x2)', '(x0 + x2) * (x3 - x1)',
'(x2 * (x3 - x1)) + x0', '((x0 * x3) - x2) - x1',
'((x1 * x2) - x3) - x0', '(x3 - (x1 - x2)) * x0',
'(x2 * x3) / (x1 + x0)', '(x3 - x2) * (x0 + x1)',
'(x0 / (x3 - x1)) * x2', '((x2 * x3) - x0) / x1',
'(x2 * (x3 - x0)) / x1', '((x1 + x0) * x2) + x3',
'(x1 * (x2 - x0)) + x3', '(x3 * (x2 - x1)) + x0'}
```

This is a lower bound for our problem, since at the very least we need these expressions in order to correctly solve all possible games. We also computed the number of games with a unique solution, which is 122. The remaining games are either unsolvable or admit multiple solutions. Note that we considered aces as 1s and aces as 11s as distinct games in the above analysis. We could search for equivalent games (where the number of 1s and 11s is the same) and see if we can eliminate some expressions from the list above. This is left as an exercise to the reader.

- There are 715 distinct games in 24, with 117 of them unsolvable and the remaining 598 having at least one solution.
- A brute-force solution checks 4608 expressions (double for each ace) to determine that a game is unsolvable.
- Only 1809 expressions (out of the 4608) solve at least one game.
- Finding the minimum number of expressions we need to check in order to find whether a game has a solution is equivalent to the hitting set problem which is NP-hard.
- A greedy algorithm finds a set of 110 expressions which is enough to find a solution for any solvable game.
- Removing equivalent expressions, we are left with 273 expressions, no two of which solve exactly the same set of games. Out of these, 62 expressions are the unique solutions to 122 games, so they must necessarily be checked in order to find solutions for all possible games.

[1] Wikipedia explains the set cover problem, which is equivalent to the hitting set problem (one can be converted into the other).

Idris is a programming language inspired by Haskell, with a set of innovative features to facilitate type safety and program correctness. Type Driven Development with Idris is a great introductory book which I highly recommend. In this post, I will try to cover the features I was most impressed with, providing some simple code samples. I will not cover syntax since most should be familiar from Haskell. If you are not familiar with Haskell syntax, here is a nice syntax cheat sheet. If you are not interested in either Haskell or Idris syntax, start with the last section of this post, Thoughts about the Future.

A total function in Idris is a function which is defined for all possible inputs and is guaranteed to produce a result in a finite amount of time [1]. The compiler obviously employs a heuristic, since the halting problem is undecidable, but is usually close enough to the truth to guarantee correctness from this point of view. It achieves this not by evaluating the function, but by ensuring that every recursive branch converges to a halting branch.

Natural numbers are defined in Idris using Peano axioms, so it is easy to prove things about them. Here is a minimal definition of natural numbers [2]:

```
data Nat' = Z | S Nat'
```

This defines Nat' as a data type which can be constructed either as Z (zero) or S Nat' (successor of another natural number). With this definition, the compiler can easily determine the following function is total:

```
f : Nat' -> ()
f Z = ()
f (S k) = f k
```

This function returns unit given Z; otherwise it recurses on the predecessor of the argument, which converges to the Z case. The following function, on the other hand, is correctly identified as potentially non-terminating:

```
g : Nat' -> ()
g n = g (S n)
```

These are trivial examples, but in general, having a compile-time check for termination is a very powerful tool.

Dependent types are types that depend on values. To put it another way, Idris has first-class types, meaning functions can take types as arguments and return types as their output. Functions that compute types are evaluated at compile time. This is similar to C++ template metaprogramming, but without employing a different syntax.

Before an example, we first need to define addition on naturals as follows:

```
(+) : Nat' -> Nat' -> Nat'
(+) Z r = r
(+) (S l) r = S (l + r)
```
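For example, with this definition, 2 + 1 reduces step by step by peeling successors off the left argument:

```
  S (S Z) + S Z
= S (S Z + S Z)      -- second equation, with l = S Z
= S (S (Z + S Z))    -- second equation, with l = Z
= S (S (S Z))        -- first equation
```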

Now we can declare a vector type consisting of a size (Nat') and a type argument:

```
data Vect' : Nat' -> Type -> Type where
    Nil : Vect' Z a
    (::) : (x : a) -> (xs : Vect' k a) -> Vect' (S k) a
```

Here a is a type argument. Vect' has two constructors: Nil, creating a Vect' of size Z containing elements of type a (an empty vector), and (::), which prepends an element of type a to a vector of size k, producing a vector of size S k.

Now to see dependent types in action, we can define append', a function that appends a vector to another vector:

```
append' : Vect' n a -> Vect' m a -> Vect' (n + m) a
append' Nil ys = ys
append' (x :: xs) ys = x :: append' xs ys
```

The interesting part is the function signature - given a vector of size n and a vector of size m, the resulting vector will have size n + m. This information is captured in the declaration and the compiler knows to apply the (+) defined above and type-check that this is indeed true for a given pair of arguments.

We can also attempt to define a reverse' function, which recursively appends the head of the vector to the reversed tail:

```
reverse' : Vect' n a -> Vect' n a
reverse' Nil = Nil
reverse' (x :: xs) = append' (reverse' xs) [x]
```

This doesn’t compile though. We get the following error message:

```
When checking right hand side of reverse' with expected type
        Vect' (S k) a

Type mismatch between
        Vect' (k + S Z) a (Type of append' (reverse' xs) [x])
and
        Vect' (S k) a (Expected type)

Specifically:
        Type mismatch between
                k + S Z
        and
                S k
```

We are claiming the function returns a vector of the same length as the input vector, but we haven’t proven enough theorems about our definition of natural numbers to convince the type checker. In this particular case, the problem is that the compiler expects an S k but finds a k + S Z. We need to prove that these are indeed equal (the successor of k is the same as k plus the successor of Z). Here is the proof:

```
addOneProof : (n : Nat') -> S n = n + S Z
addOneProof Z = Refl
addOneProof (S k) = cong (addOneProof k)
```

Proofs are functions. There are a few things worth noting here: first, the return type of this function is an equality (our theorem). Given a natural n, the function proves that the equality holds. Refl is the built-in reflexivity constructor, which constructs x = x. For the Z case, we can use Refl to say that S Z = Z + S Z, which is true by the definition of (+). For the (S k) case, we use cong, a built-in function stating that equality is preserved by function application. Its signature is cong : (a = b) -> f a = f b, which means that if a is equal to b, then f a is equal to f b. In our case, we are saying that if addOneProof k holds, then so does addOneProof (S k), which allows us to converge on the Z case.

We now have a proof that S n = n + S Z. With this, we can prove that the type Vect' (k + (S Z)) a can be rewritten as Vect' (S k) a:

```
reverseProof : Vect' (k + (S Z)) a -> Vect' (S k) a
reverseProof {k} result = rewrite addOneProof k in result
```

There is some Idris-specific syntax here: {k} brings k from the function declaration into scope, so we can refer to it in the function body even if it is not passed in as an argument. The rewrite ... in expression applies the equality in the proof above to the input, in this case effectively rewriting Vect' (k + (S Z)) a to Vect' (S k) a. Note that these proofs are evaluated at compile time and simply provide information to the type checker. With this proof, we can implement reverse' like this:

```
reverse' : Vect' n a -> Vect' n a
reverse' Nil = Nil
reverse' (x :: xs) = reverseProof (append' (reverse' xs) [x])
```

This is similar to the previous implementation, we just apply reverseProof to the result of append'. This definition compiles.

Software development is generally driven by economics, where we more often than
not trade correctness for speed to market. But once the software is up and
running, correctness becomes an issue. As code increases in complexity, the
number of issues tends to increase, and the velocity with which changes can be
made without introducing regression drops dramatically. We have various
techniques that aim to maintain stability, like automated testing, but these are
not perfect: a test can prove that for a given input we get an expected output,
but cannot prove that for *any* input we would get the expected output.

On the other hand, we have solutions that do eliminate entire classes of issues.
An example is typing. Python, Ruby, and JavaScript, all dynamically typed, are
extremely expressive and make it very easy to whip up a proof of concept. But
there is an entire class of type errors which now turns into runtime issues. We
are notoriously bad at predicting what our code does, so the more help we get
from machines to ensure correctness, the better. In a strongly typed language,
even though it takes longer to convince the compiler that the code is type-safe,
this whole class of errors is eliminated. Language evolution over the years
tends to converge towards stronger typing: dynamic languages are augmented with
type checkers (Python has type hints, JavaScript has TypeScript etc.) and
statically typed languages are becoming less verbose as type inference evolves.
There will always be a need for a quick prototype, but code we want to deem
*reliable* should be typed. This includes a wide range of business-critical
applications where errors are very costly.

I see Idris as the next step beyond this. Totality checking allows the compiler
to guarantee termination, eliminating hangs from the code. First-class types
allow us to push more information to the type-checker, allowing for stricter
type-checking. Proofs, expressed as functions with regular syntax, allow the
compiler to provide formal verification of programs - here, as opposed to unit
tests, we are actually proving that we get the expected output for *any* input.
These are all tools for writing better, more correct code. As other functional
concepts got adopted over the years into more mainstream languages (for example
first-class functions, anonymous functions, algebraic types etc.), I expect (and
hope) these features to eventually be adopted too.

There is still a lot of room for improvement: writing proofs is tedious, compiler errors are not always very clear, and, coming back to the speed to market tradeoff, I doubt we will ever get to entire large applications formally proven correct (barring some form of proof inference to speed things up by a couple of orders of magnitude). That being said, I would love to have these facilities as optional features in other languages and at least have the ability to prove that the core functionality of a component does what it is supposed to do, and get a compile break whenever a regression is introduced.

Programming languages are continuously evolving and the future looks exciting!

[1] Idris also supports functions that produce an infinite stream of values, which can be used with lazy evaluation. The full definition of totality includes functions which don’t terminate but produce Inf. This allows for non-terminating functions, but ensures non-termination is intentional.

[2] I am using ' to avoid naming conflicts with the built-in types and functions. Idris already provides Nat, Vect, append and reverse.

I will start off with a word of caution that singletons should be avoided. Singleton is the object-oriented equivalent of a global variable – a piece of state which anyone can grab and modify, which makes it hard to reason locally about code and generates ugly dependency graphs. That being said, in a large system there are situations where there is a legitimate need for global state or some component that exposes a service to other components.

This CppCon lightning talk by Arno Lepisk covers some implementation alternatives, suggesting that in most cases using a namespace and flat functions is the simplest and best way to implement a singleton:

```
namespace Foo {
    void DoFoo();
}
```

instead of:

```
class Foo
{
public:
    void DoFoo();
};

Foo& UseFoo();
```

I completely agree with this, with the caveat that sometimes we do want to inject dependencies and work against an interface instead of the actual implementation, in which case the above approach might be insufficient. Note that unless dependency injection is needed, the default should be a namespace and flat functions.

Given an interface, an implementation, and a function to retrieve the singleton like the following:

```
struct IFoo
{
    virtual void DoFoo() = 0;
    virtual ~IFoo() = default;
};

struct Foo : IFoo
{
    void DoFoo() override { /*...*/ }
};

IFoo& UseFoo();
```

a common mistake I see is components directly calling the function like this:

```
class Component
{
public:
    void Bar()
    {
        UseFoo().DoFoo();
    }
};
```

If the goal is to inject the dependency, for example to have tests run against a MockFoo, this approach is not ideal. The code explicitly calls UseFoo, so the only way to switch implementations is to modify UseFoo and provide some internal mechanism to change its return value. A better approach is to have the client simply require an interface and provide that at construction time:

```
class Component
{
public:
    Component(IFoo& foo = UseFoo())
        : m_foo(foo)
    {
    }

    void Bar()
    {
        m_foo.DoFoo();
    }

private:
    IFoo& m_foo;
};
```

Note that in the above example we can create Component with a MockFoo implementation of IFoo or some other implementation, which is a better decoupling than directly calling UseFoo inside the member functions of Component.

By definition, a singleton should represent a unique object, so our UseFoo needs to return the same reference on each call. Ensuring that concurrent calls from multiple threads don’t cause problems was non-trivial until C++11, which introduced “magic statics”. Quote from the C++ standard section 6.7:

… such a variable [with static storage] is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.

The standard now guarantees that a static will only ever be created once, and the simple way to implement a singleton (following, for example, Scott Meyers’ Effective Modern C++) is:

```
IFoo& UseFoo()
{
    static Foo instance;
    return instance;
}
```

Or the heap-allocated version:

```
IFoo& UseFoo()
{
    static auto instance = make_unique<Foo>();
    return *instance;
}
```

A more interesting problem which the above implementation doesn’t cover is deterministic shutdown. A local static, once created, will be live for the duration of the program, which might not always be desirable. Building on the previous implementation, here is a singleton which we can also shutdown on demand:

```
template <typename T>
class Singleton
{
public:
    static T& Use()
    {
        static auto& instance = []() -> T& {
            m_instance = make_unique<T>();
            return *m_instance;
        }();
        return instance;
    }

    static void Free() noexcept
    {
        m_instance.reset();
    }

private:
    static unique_ptr<T> m_instance;
};

template <typename T> unique_ptr<T> Singleton<T>::m_instance;

/* ... */

IFoo& UseFoo()
{
    return Singleton<Foo>::Use();
}

void FreeFoo()
{
    Singleton<Foo>::Free();
}
```

Using this implementation, we can deterministically free the singleton on demand via the Free function as opposed to having to wait for the program to get unloaded, which can be useful in certain cases.

Magic statics provide an easy way to guarantee we end up with a single object, but the code generated to support this is non-trivial. Here is the disassembly of the UseFoo above, as compiled by Clang 4.0.0 with the -O3 flag:

```
UseFoo(): # @UseFoo()
push rbx
mov al, byte ptr [rip + guard variable for Singleton<Foo>::Use()::instance]
test al, al
jne .LBB0_6
mov edi, guard variable for Singleton<Foo>::Use()::instance
call __cxa_guard_acquire
test eax, eax
je .LBB0_6
mov edi, 8
call operator new(unsigned long)
mov qword ptr [rax], vtable for Foo+16
mov rdi, qword ptr [rip + Singleton<Foo>::m_instance]
mov qword ptr [rip + Singleton<Foo>::m_instance], rax
test rdi, rdi
je .LBB0_5
mov rax, qword ptr [rdi]
call qword ptr [rax + 8]
mov rax, qword ptr [rip + Singleton<Foo>::m_instance]
.LBB0_5:
mov qword ptr [rip + Singleton<Foo>::Use()::instance], rax
mov edi, guard variable for Singleton<Foo>::Use()::instance
call __cxa_guard_release
.LBB0_6:
mov rax, qword ptr [rip + Singleton<Foo>::Use()::instance]
pop rbx
ret
mov rbx, rax
mov edi, guard variable for Singleton<Foo>::Use()::instance
call __cxa_guard_abort
mov rdi, rbx
call _Unwind_Resume
```

A lot of the generated code is the compiler implementing the guarantee that on concurrent calls, a single initialization is performed. The Singleton functions are inlined here since we are compiling with -O3. We can provide a much more efficient implementation using an atomic pointer, on architectures where atomics are lock-free, as long as we are not worried about redundantly calling the constructor in the rare case of concurrent calls racing on initialization:

```
template <typename T>
class Singleton
{
public:
    static T& Use()
    {
        auto instance = m_instance.load();
        if (instance != nullptr)
            return *instance;

        instance = new T();
        T* temp = nullptr;
        if (m_instance.compare_exchange_strong(temp, instance))
            return *instance;

        delete instance;
        return *m_instance.load();
    }

    static void Free() noexcept
    {
        auto instance = m_instance.exchange(nullptr);
        delete instance;
    }

private:
    static atomic<T*> m_instance;
};

template <typename T> atomic<T*> Singleton<T>::m_instance;

/* ... */

IFoo& UseFoo()
{
    return Singleton<Foo>::Use();
}

void FreeFoo()
{
    Singleton<Foo>::Free();
}
```

The disassembly of the above UseFoo (built with the same compiler and -O3 flag) is:

```
UseFoo(): # @UseFoo()
push rax
mov rcx, qword ptr [rip + Singleton<Foo>::m_instance]
test rcx, rcx
jne .LBB0_3
mov edi, 8
call operator new(unsigned long)
mov rcx, rax
mov qword ptr [rcx], vtable for Foo+16
xor eax, eax
lock
cmpxchg qword ptr [rip + Singleton<Foo>::m_instance], rcx
je .LBB0_3
mov rax, qword ptr [rcx]
mov rdi, rcx
call qword ptr [rax + 8]
mov rcx, qword ptr [rip + Singleton<Foo>::m_instance]
.LBB0_3:
mov rax, rcx
pop rcx
ret
```

This code might new the object multiple times, but it is guaranteed to always return the same instance: the compare-exchange ensures only one of the racing initializations wins (the losers delete their redundant object). Retrieving the instance is also more efficient than relying on magic statics, since the fast path is a single atomic load. Many thanks to my colleague Vladimir Morozov, who suggested this approach.

We now have an efficient way to create and shut down a singleton. If shut down,
a subsequent call to Use would re-create the object. One optional feature we
can add is to enforce that once shut down, a singleton should never be accessed
again. So instead of the two states *not initialized* and *live*, we can use
three states: *not initialized*, *live*, and *freed*, and terminate if an
access is attempted in the *freed* state:

```
const uintptr_t FreedSingleton = 0xDEADBEEF;

template <typename T>
class Singleton
{
public:
    static T& Use()
    {
        auto instance = Get();
        if (instance == reinterpret_cast<T*>(FreedSingleton))
            terminate();
        return *instance;
    }

    static void Free() noexcept
    {
        auto instance = m_instance.exchange(reinterpret_cast<T*>(FreedSingleton));
        delete instance;
    }

private:
    static T* Get()
    {
        auto instance = m_instance.load();
        if (instance != nullptr)
            return instance;

        instance = new T();
        T* temp = nullptr;
        if (m_instance.compare_exchange_strong(temp, instance))
            return instance;

        delete instance;
        return m_instance.load();
    }

    static atomic<T*> m_instance;
};

template <typename T> atomic<T*> Singleton<T>::m_instance;
```

We now have an efficient generic singleton which we can shut down on demand, with the guarantee that clients can never use it after shutdown.

- Try not to use singletons, singletons are evil
- In most cases, a namespace and flat functions are enough, no need to over-complicate things
- If dependency injection is required, make sure the dependency is injected during construction, as opposed to member functions directly calling the singleton-retrieving function
- Magic statics provide an easy way to implement singletons
- Atomics are more efficient than magic statics when they are lock-free and we aren’t worried about potentially having multiple constructor calls in race cases
- If needed, singletons can be extended with a shutdown mechanism
- Three-state singletons can terminate on use-after-shutdown

“Data Structures and Algorithms” is one of the basic CS courses. Data structures
and algorithms are also the building blocks of software. In this post I will
give a quick overview of data structures, algorithms, and cover *iterators*,
which bridge the two together.

As the name implies, data structures provide a way of structuring data; in other words, they maintain some set of relationships between the contained elements. Data structures are built around expected access/update patterns and encapsulate the inherent tradeoffs. For example, a queue models FIFO access (so accessing the last inserted element requires dequeuing all elements, which is O(n) for n elements), while a stack models the opposite LIFO access (which can reach the last inserted element in O(1) but conversely makes accessing the first element O(n) for n elements). A deque allows elements to be inserted and removed at both front and back, but not in the middle. On the other hand, inserting an element into a forward list (where each node, starting from the head, has a pointer to its successor) can be done anywhere, but requires a traversal of the data structure up to the insertion point (O(n)).

More complex data structures exist which model more complex relationships between elements, for example graphs and trees.

In practice, while there are always complex situations which require the use or development of exotic data structures, I consider those to be exceptions - a few basic data structures are enough to solve most problems. In fact, in most cases, simple linear data structures like lists are sufficient.

It’s worth noting that the relationships and access patterns modeled by a data structure do not have anything to do with the type of the contained data. A queue of integers and a queue of strings work in exactly the same way. Generics provide a great mechanism to separate the organizing structure from the data itself. Thus the C++ std::vector<T> can provide a generic implementation of a heap-allocated array for any type T, the same way a C# List<T> does. These generic data structures model how the contained elements are laid out, but work with any provided type.

The dictionary definition of an algorithm is:

noun: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

There is a set of basic functions we can put our data through: search,
partition, rotate, sort, map, reduce, filter and so on. These functions can be
implemented in several ways, depending on the characteristics of the input. For
example, we can search for an element in O(log n) time if our input is
sorted and we can access it from the “middle” at no extra cost. On the other
hand, given unsorted data or a data structure like a forward list which we can
only access through its head, search becomes an O(n) operation. The
implementations of these functions are what we call algorithms. In the examples
above the algorithms are *binary search* and *linear search*.

The same observation as with data structures applies: while there are complex problems which require the development of brand new algorithms, in practice, the vast majority of processing that we want to perform on our data can be expressed either as a simple algorithm or a composition of simple algorithms.

It is also interesting to note that the algorithms themselves are not tied to a particular data type either, rather they only require certain characteristics of their input. So we can perform a search as long as there is some equivalence relation defined for the input data. Similarly, we can perform a sort as long as there is a total order relation defined on the input type. It doesn’t really matter whether we search for numbers or strings, the steps we take are the same.

Generics help here too, since they allow us to conceptually separate the implementation of the algorithm (the steps) from the data we are operating on. So the C++ std::partition algorithm can partition any input, given as a pair of forward iterators, using any given predicate. Similarly, the C# LINQ Select (known in most other languages as a map operation) transforms all input values into output values given a mapping function from the input type to the output type. We don’t need to implement a partition for ints, one for strings, and one for dates; we need a generic partition which implements the steps of the algorithm and works with any given data type.

Iterators act as the bridge between data structures and algorithms. Iterators traverse a given data structure in a linear fashion, such that an algorithm can process the input in a consistent manner, regardless of the actual layout of the data. Note the data structure itself does not need to be a linear one: a binary tree can be traversed with a preorder iterator, or an inorder iterator, or a postorder iterator.

Algorithms work on ranges of data, which can be defined as a pair of iterators (beginning and end) or an iterator and the number of available elements (beginning and length). I will cover some of the C++ iterator concepts since they are the most fleshed out. Other languages usually rely on a subset of these.

**Input iterators** can only be advanced and are one-pass only. These map to
input streams, for example unbuffered keyboard input where data can be read
once, but a subsequent read would yield different data.

**Forward iterators** extend input iterators to multiple passes. For example, a
forward iterator models traversal of a forward list. We can always re-start
traversal from any saved position, but we cannot move back (since nodes only
have links to successors, not predecessors).

**Bidirectional iterators** extend forward iterators to bidirectional access.
For example, a bidirectional iterator models traversal of a doubly linked list.
Here, we can move from one node in either direction – to its predecessor or to
its successor.

**Random access iterators** extend bidirectional iterators to random access,
meaning any element can be accessed in constant time. For example, a random
access iterator models traversal of an array. Here, we can access any element at
the same cost, since we don’t need to perform any traversal, simply index into
the array.

Depending on the implementation of a given algorithm, different iterator types might be required. The same function can sometimes be implemented with several algorithms, having a more efficient version work with more capable iterators and an alternative algorithm for less capable iterators. For example, we can implement an average-case O(n log n) quicksort with random access iterators, but we can also implement an O(n^2) bubble sort that works with forward iterators.

IEnumerator<T> in C# models a forward iterator. The (simplified) interface is:

```
interface IEnumerator<T>
{
    T Current { get; }
    bool MoveNext();
    void Reset();
}
```

This allows us to advance the iterator and to reset it to the initial position and re-start traversal, which is exactly what a forward iterator does.

Lazy evaluation in some functional languages and generators (functions that yield results) model input iterators which can be advanced in a single pass.

While most relevant operations can be implemented with input iterators, the resulting algorithms are not very efficient. For example, with a bidirectional iterator, reverse can be implemented in O(1) space by starting from both ends and swapping elements. On the other hand, given an input iterator, reverse requires O(n) space as elements need to be pushed onto a stack and popped in reverse order.

- Data structures model the relationship between elements and encapsulate access patterns. Generic data structures provide a good abstraction, decoupling the structure of the data from the actual contained data.
- Algorithms implement operations over data. Algorithms are grouped together based on the function or transformation they implement. Generic algorithms abstract the operational steps from the input the algorithms operate on.
- Iterators provide a bridge between data structures and algorithms. The more capabilities an iterator has (i.e. the more restrictions we impose on the input), the more efficient the algorithm can be. Similarly, most operations can be implemented in less efficient manners (time and space-wise) on iterators with fewer capabilities (i.e. fewer restrictions imposed on the input).

Recommended reading:

- From Mathematics to Generic Programming by Alexander A. Stepanov and Daniel E. Rose
- Elements of Programming by Alexander A. Stepanov and Paul McJones

I recently stumbled upon a heterogeneous event collection which turned out to pose an interesting design problem. We are using library code (which we can’t change) that provides a templated Event to which we can register callbacks and which we can later raise to invoke the callbacks. The interface looks like this:

```
/* Library code */
template <typename T> struct Event
{
    // T is a callable object, like std::function
    void Register(T&& callback) { /* Register a callback */ }

    // Raises the event and forwards the arguments to the callbacks
    template <typename ... Args>
    void Raise(Args&& ... args) { /* Invoke all registered callbacks */ }
};
```

A stub implementation that validates client code is typed properly would be:

```
template <typename T> struct Event
{
    // T is a callable object, like std::function
    void Register(T&& callback)
    {
        /* Register a callback */
        _callback = std::move(callback);
    }

    // Raises the event and forwards the arguments to the callbacks
    template <typename ... Args>
    void Raise(Args&& ... args)
    {
        /* Invoke all registered callbacks */
        _callback(std::forward<Args>(args)...);
    }

    T _callback;
};
```

Note that unlike the real implementation, this only stores the last registered callback, but that is irrelevant for the purpose of this post. I’m providing the code just to have something to compile against (otherwise Raise would happily swallow any combination of arguments passed to it). In reality, a more complex implementation would maintain a list of callbacks, but this is sufficient for framing the design problem.

The event collection which was wrapping a set of library events looked like this:

```
using LaunchCallback = std::function<void()>;
using SaveCallback = std::function<void(const std::string&)>;
using ExitCallback = std::function<void()>;

class EventStore
{
public:
    void OnLaunch(LaunchCallback&& callback)
    {
        m_launchEvent.Register(std::forward<LaunchCallback>(callback));
    }

    void OnSave(SaveCallback&& callback)
    {
        m_saveEvent.Register(std::forward<SaveCallback>(callback));
    }

    void OnExit(ExitCallback&& callback)
    {
        m_exitEvent.Register(std::forward<ExitCallback>(callback));
    }

    void RaiseLaunch()
    {
        m_launchEvent.Raise();
    }

    void RaiseSave(const std::string& fileName)
    {
        m_saveEvent.Raise(fileName);
    }

    void RaiseExit()
    {
        m_exitEvent.Raise();
    }

private:
    Event<LaunchCallback> m_launchEvent;
    Event<SaveCallback> m_saveEvent;
    Event<ExitCallback> m_exitEvent;
};
```

Sample usage of the EventStore:

```
EventStore store;
// Register a couple of callbacks
store.OnLaunch([]() { /* Do stuff */ });
store.OnSave([](const auto& arg) { /* Do stuff */ });
// Raise an event
store.RaiseSave("Some file name");
```

Looking at EventStore, it’s obvious that there is a lot of repetition involved: hooking up a new event involves aliasing a new callback, adding a new member to the class, and adding the corresponding registration and Raise member functions which end up being copy/pastes of the other ones. There must be a better way!

An initial idea would be to use some sort of associative container (hopefully something better than an unordered_map), but there is an interesting complication: the events have different types. Event<std::function<void()>> has a different type than Event<std::function<void(const string&)>>. There are potential workarounds to explore, like standardizing on a single type and requiring clients to, for example, only use callbacks that do not take any arguments. Another option would be to pass some base object to each event and let each callback re-interpret it. This takes us down the wrong path though. We don’t need to do any runtime lookup - the initial code doesn’t.

From the repetition in EventStore, it should become apparent that we need some form of templated Register and Raise that would work for each type of event we care about. A quick sketch of our function signatures should look something like this:

```
template </* ??? */>
void Register(/* ??? */)
{
    // Register callback to the appropriate event
}

template </* ??? */, typename ... Args>
void Raise(Args&& ... args)
{
    // Raise the appropriate event, forwarding args to it
}
```

It is also clear that we need a way to store all of the events in our class. Since they are of heterogeneous types, we can’t store them in a map or equivalent, but we don’t need to. std::tuple was built exactly for this:

```
std::tuple<Event<LaunchCallback>,
           Event<SaveCallback>,
           Event<ExitCallback>> m_events;
```

Now the only remaining question is how to templatize our two member functions to enable a lookup in the tuple. One approach would be to use an enum:

```
enum class EventType : size_t
{
    LaunchEvent = 0,
    SaveEvent,
    ExitEvent,
};
```

Given this enum, we can templatize on its values:

```
class EventStore
{
public:
    template <EventType eventType, typename T>
    void Register(T&& callback)
    {
        std::get<static_cast<size_t>(eventType)>(m_events).Register(std::forward<T>(callback));
    }

    template <EventType eventType, typename ... Args>
    void Raise(Args&& ... args)
    {
        std::get<static_cast<size_t>(eventType)>(m_events).Raise(std::forward<Args>(args)...);
    }

private:
    std::tuple<Event<LaunchCallback>,
               Event<SaveCallback>,
               Event<ExitCallback>> m_events;
};
```

Callers can use this new implementation like this:

```
EventStore store;
// Register a couple of callbacks
store.Register<EventType::LaunchEvent>([]() { /* Do stuff */ });
store.Register<EventType::SaveEvent>([](const auto& arg) { /* Do stuff */ });
// Raise an event
store.Raise<EventType::SaveEvent>("Some file name");
```

With this implementation, we keep events of heterogeneous types but only need to implement the Register and Raise functions once. Adding a new event now only requires aliasing the callback, adding an enum member, and extending our member tuple by adding the new Event to it.

The only drawback of this approach is that we need to manually keep the enum and the tuple in sync. This is not too bad: if we try to call std::get with an index higher than the size of the tuple, we get a compile-time error. If we accidentally swap two events of incompatible types (for example Event<LaunchCallback> and Event<SaveCallback>, as one expects callbacks of type std::function<void()> and the other expects std::function<void(const std::string&)>), we also get a compile-time error, because Register and Raise calls would fail to compile when passed callbacks/arguments of incompatible types. If we accidentally swap two events of the same type (Event<LaunchCallback> and Event<ExitCallback>, since both LaunchCallback and ExitCallback are aliased to the same std::function<void()>), runtime behavior is equivalent, it just makes reading the code confusing: we end up storing launch callbacks inside what we called Event<ExitCallback> and vice-versa, and raising Event<ExitCallback> by calling Raise<EventType::LaunchEvent>. Runtime is not affected, but it’s not ideal. We could drop the aliases altogether and simply have:

```
std::tuple<Event<std::function<void()>>,
           Event<std::function<void(const std::string&)>>,
           Event<std::function<void()>>> m_events;
```

This solves the above issues but is not very readable. There are other options, like picking different names for the aliases - instead of naming the event, have them name the type of callback. Either way, effectively what we are doing is a mapping from an enum into a set of Event types. We can actually push more information to the type system and get rid of the need to do this mapping. We do that by making sure our events are always of different types, even if the callback signatures are the same. One way of achieving this is wrapping Event and defining different types for each of our events:

```
template <typename T> struct EventWrapper
{
    Event<std::function<T>> m_event;
};

struct LaunchEvent : EventWrapper<void()> { };
struct SaveEvent : EventWrapper<void(const std::string&)> { };
struct ExitEvent : EventWrapper<void()> { };
```

Note this is type information used only by the compiler and doesn’t bring any runtime overhead to our code. Inheritance is used here just so we don’t have to repeat declaring m_event, we could have just as well declared each struct independently. Now we can update the member tuple to store an event of each of these types:

```
std::tuple<LaunchEvent, SaveEvent, ExitEvent> m_events;
```

Since they are of different types, we no longer need an enum to index into the tuple, we can do it by type (note std::get indexed by type requires that the tuple contains distinct types, which is not the case for Event<LaunchCallback> and Event<ExitCallback>, but it is for LaunchEvent and ExitEvent):

```
template <typename T, typename Callback>
void Register(Callback&& callback)
{
    std::get<T>(m_events).m_event.Register(std::forward<Callback>(callback));
}

template <typename T, typename ... Args>
void Raise(Args&& ... args)
{
    std::get<T>(m_events).m_event.Raise(std::forward<Args>(args)...);
}
```

The Callback template argument in Register can be deduced once T is specified. The full implementation is:

```
template <typename T> struct EventWrapper
{
    Event<std::function<T>> m_event;
};

struct LaunchEvent : EventWrapper<void()> { };
struct SaveEvent : EventWrapper<void(const std::string&)> { };
struct ExitEvent : EventWrapper<void()> { };

class EventStore
{
public:
    template <typename T, typename Callback>
    void Register(Callback&& callback)
    {
        std::get<T>(m_events).m_event.Register(std::forward<Callback>(callback));
    }

    template <typename T, typename ... Args>
    void Raise(Args&& ... args)
    {
        std::get<T>(m_events).m_event.Raise(std::forward<Args>(args)...);
    }

private:
    std::tuple<LaunchEvent, SaveEvent, ExitEvent> m_events;
};
```

Callers can use it like this:

```
EventStore store;
// Register a couple of callbacks
store.Register<LaunchEvent>([]() { /* Do stuff */ });
store.Register<SaveEvent>([](const auto& arg) { /* Do stuff */ });
// Raise an event
store.Raise<SaveEvent>("Some file name");
```

In this case, adding a new event requires declaring a new struct and adding it to the tuple. Since we are retrieving the event by its type, no mapping is involved.

Memory management involves handling memory resources allocated for a certain
task, ensuring that the memory is freed once it is no longer needed so it can
be reused for some other task. If the memory is not freed in a timely manner,
the system might run out of resources or incur degraded performance. A memory
resource that is never freed once no longer needed is called a leak - the
resource becomes unusable, usually for the duration of the process. Another
issue is *use after free*, in which a memory resource that was already freed
is used as if it wasn’t. This usually causes unexpected behavior as the code
is trying to read, modify or wrongly interpret data at a memory location.
Memory management can be *manual* - with code explicitly handling
deallocation, or *automatic*, in which memory gets freed once no longer needed
by an automated process.

Manual memory management is efficient, since allocations and deallocations don’t incur any overhead. In C:

```
typedef struct _Foo {
    ...
} Foo;

...

Foo* foo = (Foo*)malloc(sizeof(Foo));
...
free(foo);
```

The disadvantage of this approach, and the main reason automatic memory management models were invented, is that it puts the developer in charge of making sure memory doesn’t leak and is not used after it is freed. As the complexity of the code increases, this becomes increasingly difficult. As pointers are passed around the system and get stored in various data structures, it becomes hard to tell, for a given pointer that is no longer needed, whether: a) this was the very last piece of code that actually needed to access the location it points to, in which case the memory should be freed, and b) the memory it points to is still valid and hasn’t been freed previously.

Automatic memory management attempts to move the responsibility of tracking
when a memory resource is no longer needed (and handling its deallocation) from
the developer to the system. Such a system is called *garbage collected*, as
memory that is no longer needed (“garbage”) is reclaimed by the system
automatically. The two most popular methods used to automatically free memory
are *tracing garbage collectors* and *reference counting*.

Tracing garbage collectors work by tracing references to objects on the heap and checking whether a given resource allocated on the heap has at least one reference path to it from the stack. If such a path exists, it means that from the stack (an argument to a function, a local variable), there is a way to perform a series of dereferences and reach the memory resource. If no such path exists, the memory is unreachable: regardless of how executing code accesses other objects on the heap, there is no way to reach this resource - which means the memory can be safely deallocated.

For example, a naïve tracing garbage collection algorithm, *mark-and-sweep*,
involves adding an “in-use” bit to each memory resource allocated then, during
collection, following all references starting from the stack and marking each as
“in-use”. Once all used resources are marked, the sweep stage involves walking
the whole heap and for each memory resource, if not marked as “in-use”, freeing
it.

Tracing garbage collectors are used by many popular runtimes, like JVM and .NET. In C#:

```
class Bar { }

class Foo
{
    public Bar bar;
}

...

{
    Foo foo = new Foo();
    foo.bar = new Bar();
    // there is no stack variable pointing to the Bar object, but it can
    // still be reached through foo (foo.bar), so there exists a path from
    // the stack to it, meaning code can still access it.
}
// foo goes out of scope, which means neither foo nor its Bar member can be
// accessed any longer, so both can be safely collected
```

There are a couple of disadvantages to the tracing GC approach: first, the system needs to ensure memory resources are not being allocated while a garbage collection is taking place. This means code execution is paused during collection, which impacts performance. The second disadvantage is that the system is not as lean as other memory management models: memory resources are kept allocated longer than really needed, from the moment the last reference to them goes out of scope until the actual collection is performed.

An alternative to tracing garbage collectors is reference counting. As the name implies, a memory resource in such a system has an associated reference count - the number of references to it. As soon as the last reference goes out of scope and the reference count reaches zero, the memory can be safely deallocated. Unlike tracing, reference counting is performed as code executes: the count of a given memory resource is automatically increased with each assignment where the resource is on the right-hand side, and is automatically decreased whenever a reference goes out of scope.

Python manages memory using reference counting:

```
class Foo: pass
# allocate Foo, its reference count is 1
foo1 = Foo()
# reference count is 2 after assignment
foo2 = foo1
...
# once foo1 and foo2 go out of scope, reference count becomes 0 and memory
# is automatically freed
```

C++ smart pointers work in a similar manner:

```
struct Foo { };
...
// foo1 is a shared_ptr pointing to a Foo stored on the heap. Reference
// count for the Foo object is 1
auto foo1 = std::make_shared<Foo>();
// reference count becomes 2 after assignment
auto foo2 = foo1;
...
// Once foo1 and foo2 go out of scope, reference count becomes 0 and memory
// is automatically freed
```

The main advantages over tracing garbage collection are the fact that execution
doesn’t need to be paused in order to reclaim memory and that resources are
deallocated as soon as they are no longer used (once reference count becomes 0).
There are also several disadvantages with this approach: first, each memory
resource needs to store an additional reference count and updating the reference
count in a multi-threaded environment needs to be performed atomically. Second,
and most important, this memory management model does not handle *reference
cycles*.

Reference cycles occur when two heap objects hold references to each other even after no longer being reachable from the stack. In this case, a tracing garbage collector would mark the objects as unreachable and deallocate them, but simple reference counting is not able to identify this - from that point of view, each object is still referred to by another object, thus neither should be collected. An example of a reference cycle in Python:

```
class Foo: pass
a, b = Foo(), Foo()
a.other, b.other = b, a
# a.other holds a reference to b, b.other holds a reference to a
# even when a and b go out of scope, the "other" attributes still hold references
# to the objects so their reference count would not drop to 0
```

A similar example in C++:

```
struct Foo
{
    std::shared_ptr<Foo> other;
};

...

auto foo1 = std::make_shared<Foo>();
auto foo2 = std::make_shared<Foo>();
foo1->other = foo2;
foo2->other = foo1;
// there are two references to each Foo object: foo1 and foo2->other for the first
// object, foo2 and foo1->other for the second object. Even if the foo1 and foo2
// variables go out of scope, neither of the objects would be collected due to the
// extra reference
```

Python and C++ solve this problem in different ways: Python supplements reference
counting with a tracing garbage collector. So while most of the memory
management is done via reference counting, a tracing garbage collector is still
employed to clean up cycles like the one in the above example. This hybrid
approach has the pros and cons of both mechanisms discussed above. C++ avoids
the execution pauses a tracing garbage collector would create by, instead,
leveraging *weak references*. Weak or non-owning references point to an object
but do not prevent it from being collected when all *strong* references go away.
There are several ways to express a non-owning reference, with different
advantages and drawbacks:

- A & reference has to be assigned on construction and cannot be re-assigned after being bound to an object. If used after the underlying object was destroyed, it causes undefined behavior.
- A * pointer can be nullptr-initialized and assigned later or re-assigned. Similarly, if used after the pointed-to object was destroyed, causes undefined behavior.
- A weak_ptr<T> is a standard library type implementing a non-owning reference. A weak_ptr can be converted to a shared_ptr (using its lock() method). If there is no strong (shared_ptr) reference to an object it gets destroyed, regardless of how many weak_ptr instances point to it. But once a weak_ptr successfully locks an object, it creates a strong reference which ensures the object is kept alive. The drawback of using weak_ptr is additional overhead: the control block of a smart pointer needs to store both strong and weak reference count (with similar atomic reference counting), and, even if an object gets destroyed because all strong references went out of scope, the control block stays alive until all weak references go away too.

Updating the Foo struct from the example above to use a weak_ptr instead avoids the reference cycle:

```
struct Foo
{
    std::weak_ptr<Foo> other;
};

...

auto foo1 = std::make_shared<Foo>();
auto foo2 = std::make_shared<Foo>();
foo1->other = foo2;
foo2->other = foo1;
// now the two Foo objects have only one strong reference each, through the
// foo1 and foo2 variables. The other pointers are weak references which
// won't prevent the objects from being destroyed when foo1 and foo2 go out
// of scope
```

An alternative way to think about heap objects is in terms of *ownership* and
*lifetime*. In this model, a heap object is uniquely owned by some other object
and gets freed automatically when the owner is destructed. In C++, this is
achieved through unique_ptr:

```
struct Foo { };

struct Bar
{
    std::unique_ptr<Foo> foo { std::make_unique<Foo>() };
};

...

{
    Bar bar; // this creates a Foo object on the heap, owned by bar
}
// the heap object gets freed once bar gets freed
```

Ownership of the object can be transferred by moving the unique_ptr. The main advantage of this model is that it has no overhead - unlike a tracing garbage collector, which involves pausing execution, or reference counting, which involves atomically updating reference counts, a unique_ptr is just a wrapper over a raw pointer.

Unique pointers cannot be copied though (by definition - otherwise they would no longer denote unique ownership), so when other code needs to access the heap object, it needs to get a reference from the owning object:

```
void UseFoo(const Foo& foo)
{
    ...
}

...

Bar bar;
UseFoo(*bar.foo);
```

The problem with this approach is that if another object ends up holding on to a
reference which outlives the owning object, the reference becomes dangling and
refers to an object which was already freed. This becomes the equivalent of a
*use after free*, so here is where the concept of *lifetime* becomes important:
none of the non-owning references of a uniquely owned heap object should outlive
the object.

Unfortunately, in C++ this has to be handled through sensible design and is mostly left up to the developer. Rust, on the other hand, provides strong static analysis and lifetime annotations to ensure such issues do not occur. In fact, the default in Rust is to have uniquely owned objects which can be “borrowed” when needed, with static analysis ensuring no dangling references appear. In C++:

```
struct Foo { };

struct Bar
{
    Foo* foo;
};

...

Bar bar;
{
    Foo foo;
    bar.foo = &foo;
}
// bar.foo is now a dangling pointer since foo was freed
```

The above example used a pointer for simplicity, since a & reference (Foo&) needs to be bound at construction time, but the same applies to that type of reference: once an object gets freed, & references and non-owning pointers to it are left dangling. On the other hand, the equivalent code does not compile in Rust:

```
struct Foo {
}

struct Bar<'a> {
    foo: &'a Foo
}

...

let bar;
{
    let foo = Foo {};
    bar = Bar { foo: &foo };
}
// compiler correctly shows `foo` dropped here while still borrowed
```

In Rust, the compiler ensures dangling references (“borrowed” objects) do not exist once the owning object goes out of scope.

It seems that in most cases, the best approach to memory management is to use the latter model of ownership and lifetimes, which comes with no runtime overhead, and to handle the dangling reference problem through static analysis. The advantages of this approach go beyond avoiding the runtime cost of other automatic memory management techniques: the model also works well in a multi-threaded environment - e.g. if we only allow the owner of an object to modify it, we can eliminate certain data races. From a systems design perspective, it is also an advantage to have a clear understanding of ownership throughout the system.

This post covered several memory management techniques, outlining their pros and cons:

- Manual - prone to human error
- Automatic using a tracing garbage collector - safe but comes with runtime overhead
- Automatic using reference counting - smaller runtime cost than a tracing garbage collector but needs additional mechanisms to deal with reference cycles
- Concepts of ownership and lifetime - no runtime overhead, but should be supplemented by static analysis to avoid dangling references

I recently read Joe Duffy’s excellent blog post The Error Model. Joe worked on Midori and has some great insights on error model design. I wanted to write down a couple of personal notes on error handling.

Before even talking about error scenarios, it’s worth pointing out that there are categories of errors where the type system helps if not to eliminate them, at least to scope them and prevent them from propagating unchecked throughout the system.

In many cases, an error means some variable has an invalid value. If this invalid value is passed down to called functions, it can manifest itself deep in the stack, when it could’ve been caught earlier. A simple example would be move directions for a game - let’s say the player can move Up, Down, Left, or Right. This can be encoded as:

```
const int UP = 0;
const int LEFT = 1;
const int DOWN = 2;
const int RIGHT = 3;

...

void move(int direction, player player)
{
    switch (direction)
    {
    case UP: player.move_up(); break;
    case LEFT: player.move_left(); break;
    case DOWN: player.move_down(); break;
    case RIGHT: player.move_right(); break;
    default:
        // direction should only be 0, 1, 2, 3
        break;
    }
}
```

but ultimately a caller can still pass any int value to this function, and unexpected values end up in the default branch, which the code doesn’t know how to handle. The alternative is, of course:

```
enum class direction
{
    up,
    left,
    down,
    right
};

...

void move(direction direction, player player)
{
    ...
}
```

In this case, the type system ensures direction can only possibly hold one of the allowed values. This is a trivial example but there are many more interesting ones. Take, for example, some connection which, if opened, can receive data and be closed and which, if not opened, can be opened. This can be modelled like this:

```
struct connection
{
    auto open()
    {
        if (is_opened) { /* error, connection already opened */ }
        ...
    }

    auto close()
    {
        if (!is_opened) { /* error, connection not opened */ }
        ...
    }

    auto receive()
    {
        if (!is_opened) { /* error, connection not opened */ }
        ...
    }

private:
    bool is_opened;
};
```

Here we have to handle various cases where we try to perform an opened-connection operation on a connection that hasn’t been opened yet, and vice versa. Another way to model this (as of C++17) is using a variant and separate types for opened and closed connections:

```
struct opened_connection
{
    auto close() { ... }
    auto receive() { ... }
};

struct closed_connection
{
    auto open() { ... }
};

using connection = std::variant<closed_connection, opened_connection>;
```

Here, as long as we have a closed_connection instance, we can only perform closed-connection operations, and as long as we have an opened_connection instance, we can only perform opened-connection operations. The error states we had to handle above go away, as the type system ensures we can never call receive on a closed connection etc.

The type system can also be leveraged to embellish return types as an alternative to using return codes. For example, assume we have a function which parses a phone number provided by the user into some phone_number_t used internally. There are a few ways to implement this:

```
phone_number_t parse_phone_number(const string& input)
{
    if (phone_number_t::is_valid(input))
        return phone_number_t::from_string(input); // assume we can construct a phone_number_t from a valid string
    else
        throw invalid_phone_number { };
}
```

This is not ideal though, since exceptions should really be exceptional (more on this below), and a user providing invalid input is a completely valid scenario. The alternative would be to use a return code:

```
bool parse_phone_number(const string& input, phone_number_t& output)
{
    if (!phone_number_t::is_valid(input))
        return false;

    output = phone_number_t::from_string(input);
    return true;
}
```

This works, but calling code is uglier:

```
auto phone_number = parse_phone_number(input);
```

now becomes:

```
phone_number_t phone_number;
bool result = parse_phone_number(input, phone_number);
```

We can also end up in a bad state if we forget to check the return value. The alternative is to encode the information that we either have a phone_number_t or an invalid number in a type. In C++ we have (as of C++17) optional<T> for this:

```
optional<phone_number_t> parse_phone_number(const string& input)
{
    if (!phone_number_t::is_valid(input))
        return nullopt;

    return phone_number_t::from_string(input);
}
```

This is not quite a returned error code and cannot really be ignored - there is
no implicit conversion from optional<T> to T, so callers need to explicitly
handle the case when the operation failed. Calling this is as natural as the
throwing version, but does not rely on exceptions [1]. This is also called
*monadic error handling* and is widely employed by functional languages. I find
this a good alternative to throwing exceptions as long as it is well scoped and
error checks don’t have to pollute too many functions in the call stack.

Preconditions are conditions that should be satisfied upon entering a function to ensure the function works as expected. When a function is called but its preconditions are not met, that is not an error, it is a developer bug. The recommended way of handling such a situation is, if possible, to crash immediately. The reason for crashing is that calling a function without its preconditions being met means the system is in an invalid state, and attempting recovery is usually not worth it. Crashing, on the other hand, provides developers with dumps to help understand how the system got into this state and fix the underlying bug.

The alternative to this is undefined behavior - calling a function without meeting its preconditions cannot guarantee anything about the execution of the function. Undefined behavior is used extensively throughout the C++ standard [2]. While failing fast is the preferred approach, sometimes it is infeasible to check preconditions at runtime: for example, a precondition of binary search is that it searches over an ordered range. Performing a binary search takes logarithmic time but validating that a range is ordered takes linear time, so adding this check would negatively impact the run time of the algorithm. In this case, it is OK to say that we cannot provide any guarantees on what the function will do. Debug-time asserts are a middle-ground solution, since we can afford to perform more expensive checks in debug builds to deterministically fail when preconditions are not met. That being said, if the check is not prohibitively expensive, it should be performed on all build flavors and immediately fail (via std::terminate or equivalent).

What should not be done is treating such a state as an error - this is a bug in the code and throwing an exception or returning some error result would just leak the bug and make it impact more of the system. There really isn’t anything useful to do with such an error - it only tells us that there is an issue in the code and we are now in a state we should never be in. At this point we don’t know which component originated the error and we cannot deterministically recover - we might abort the current operation but there is no guarantee that this would bring us back to a valid state. We are in undefined behavior land, where crashing is the best option.

We covered several ways to handle errors by either eliminating invalid states at compile-time or by failing fast when in an invalid state. There are, on the other hand, classes of errors from which we can legitimately recover, which brings us to exception and error codes.

I am a big fan of handling exceptional states using exceptions over returning error codes. For one, the code is more readable: instead of reserving the return type of a function to signal success or failure and resorting to out parameters, functions can be declared in a natural way. We also end up with less code overall: instead of checking error codes inside every function in the call stack in order to propagate an error back, we simply throw it from the top of the stack and catch it where we can deal with it. This approach also composes better - take, for example, a generic algorithm that takes some potentially-throwing function. Since we supply that function, we know what exceptions it can throw and we can catch them in the code that invokes the generic algorithm, keeping all of this invisible to the algorithm itself:

```
template <typename InputIt, typename UnaryFunction>
void for_each(InputIt first, InputIt last, UnaryFunction f) noexcept(noexcept(f(*first)))
{
    for (; first != last; ++first)
        f(*first);
}
```

If our predicate returns an error code instead, the generic algorithm must be aware of this:

```
template <typename InputIt, typename UnaryFunction>
auto for_each(InputIt first, InputIt last, UnaryFunction f) noexcept
{
    for (; first != last; ++first)
    {
        auto result = f(*first);
        if (result != 0)
            return result;
    }

    return 0;
}
```

Note that here we are also making an assumption that 0 means success, which is an arbitrary decision for a plug-in function.

That being said, I want to reiterate that exceptions should only be used for exceptional cases. The readability advantage gained with exceptions is lost if they are abused. It’s great if the callee throws one or two exception types which the caller catches and handles. On the other hand, if we have to resort to catch-all catch (...) and we have so many possible exception types coming out of a function that we can’t keep track of them, the code actually becomes harder to reason about.

An example and a counter-example: when reading a file with a set schema generated by our application, we expect it to be in a valid format. If it isn’t, it means some data corruption occurred but this should really be an exceptional case. If we encounter such data corruption, we can throw an exception and let the caller handle the fact that we cannot interpret this file. On the other hand, when reading user input, we should never throw an exception if input is not conforming - this should be a much more common scenario of user error.

There are cases which are not exceptional enough to warrant an exception but where some error information needs to be propagated through the call stack. Take, for example, a compiler which encounters an invalid token while parsing a file. Since this is user input, it should not be treated as an exception. On the other hand, simply using an optional and failing to parse without providing additional information is also not ideal. In this case we probably want to return additional information around the encountered error.

In this case we would return the error rather than throw it, but I would still prefer an embellished type like Rust’s Result - in C++ (as of C++17), an std::variant<T, Error>. In general, I consider it bad practice to return an int or an HRESULT which afterwards has to be decoded to understand the actual error. For simple cases, if no information besides success/failure has to be returned, a bool suffices; otherwise, an enum or struct can carry the expected error information. Such an error type can be composed with a valid return type using a variant, which brings us back to monadic error handling.

My general rule of thumb is to use exceptions for really exceptional situations, which keeps the code cleaner as long as the number of exception types is manageable, and to use monadic error handling when errors are expected, as long as these can be scoped to a limited number of functions (repeated error checking all over the place is messy, error-prone, and makes code hard to read).

We went over various ways of handling errors:

- Declaring types that restrict the range of values a variable can take to eliminate invalid states at compile-time
- Monadic error handling using embellished return types
- Failing fast when preconditions of a function are not met
- Throwing exceptions in exceptional cases
- Returning strongly typed errors when errors are not exceptional

There is still a fair amount of controversy around what is *the right way* of
handling errors. My personal take on this is that there are tradeoffs that come
with each approach and rather than saying “always use exceptions” or “never use
exceptions”, it’s more a matter of choosing *the right tool for the job*. I
tried to list some of the possible approaches with their pros and cons, and how
I employ them. Your mileage may vary depending on your specific language,
runtime, problem domain, application type etc.

[1] This is the recommended way of handling errors in Rust.

[2] See Chandler Carruth’s CppCon talk Garbage In, Garbage Out.

A good type system can eliminate entire categories of errors in a program and simply make invalid code not compile. Before digging into types, below are a few common distinctions between the type systems of various programming languages:

In a statically typed language, the types are determined at compile time so if a function was declared as only accepting a certain type T but we attempt to pass it an unrelated type U, the program is considered invalid. This is invalid C++:

```
int sqr(int x)
{
    return x * x;
}

...

sqr("not an int"); // does not compile
```

On the other hand, in a dynamically typed language, we do not perform any compile time checks and, if the data we get is of an unexpected type, we treat it as a runtime error. Below is a Python function that squares a number:

```
def sqr(x):
    return x ** 2

...

sqr("not an int")
# runs but fails with TypeError: unsupported operand type(s)
# for ** or pow(): 'str' and 'int'
```

Some of the interesting features of dynamic languages are *duck typing* (“if it
walks like a duck, and it quacks like a duck…”) and *monkey patching*. Duck
typing means that accessing a member of an object works as long as that object
has such a member, regardless of the type of the object. In Python:

```
class Foo:
    def func(self):
        print("foo")

class Bar:
    def func(self):
        print("bar")

[obj.func() for obj in [Foo(), Bar()]] # prints "foo" and "bar"
```

This can’t work in a statically typed language where, at the bare minimum, we would have to add some form of constraint for the types in the list to ensure they contain a func() method we can call.

Monkey patching refers to the ability to change the structure of an object at runtime. For example, we can swap the func method on an instance of Foo with another function like this:

```
class Foo:
    def func(self):
        print("foo")

def func_bar():
    print("bar")

obj = Foo()
obj.func() # prints "foo"
obj.func = func_bar
obj.func() # prints "bar"
```

These are useful capabilities, but the tradeoff is a whole class of type errors which a statically typed language would’ve caught.

As a side note, the fact that dynamic languages don’t need type annotations makes them more terse. That being said, Hindley-Milner type inference (algorithm W) can infer the types of a program without explicit annotations - and, although its worst case is exponential, it runs in near-linear time on real-world programs. So while Python is starting to support type annotations for better static analysis and TypeScript provides a way for writing type-safe JavaScript, C++ has better and better type inference, while in Haskell (which has one of the strongest static type systems) type annotations are mostly optional.

At a high level, a strongly typed language does not implicitly convert between unrelated types. This is good in most situations as implicit conversions are often meaningless or have surprising effects - for example, adding a number to a list of characters. This can either result in runtime errors or garbage data. In contrast, a strongly typed language will not accept code that attempts to do this.

In Python, which is strongly typed, this doesn’t work:

```
foo = "foo" # foo is "foo"
foo = foo + " bar" # foo is "foo bar"
foo = foo + 5 # TypeError: Can't convert 'int' object to str implicitly
```

It works just fine in JavaScript though:

```
var foo = "foo"; // foo is "foo"
foo = foo + " bar"; // foo is "foo bar"
foo = foo + 5; // foo is "foo bar5"
```

Note that type strength is not an either/or - C++, while considered strongly typed, still allows several implicit conversions between types (e.g. pointer to bool). Some languages are more strict about converting between types implicitly, others less so.

Another difference to note is between static and dynamic polymorphism. Dynamic polymorphism happens at runtime, when calling a function on a base type gets resolved to the actual function of the deriving type:

```
struct base
{
    virtual void func() = 0;
};

struct foo : base
{
    void func() override
    {
        std::cout << "foo" << std::endl;
    }
};

struct bar : base
{
    void func() override
    {
        std::cout << "bar" << std::endl;
    }
};

void call_func(base& obj)
{
    obj.func();
}

...

foo f;
bar b;
call_func(f); // prints "foo"
call_func(b); // prints "bar"
```

In the above case, we effectively have a single function call_func which takes a reference to a base struct. The compiler generates a v-table for struct base and a call to func() on base involves a v-table jump to the actual implementation of the function, which is different between the inheriting types foo and bar.

Contrast this with the static alternative:

```
struct foo
{
    void func()
    {
        std::cout << "foo" << std::endl;
    }
};

struct bar
{
    void func()
    {
        std::cout << "bar" << std::endl;
    }
};

template <typename T> void call_func(T& obj)
{
    obj.func();
}

...

foo f;
bar b;
call_func(f); // prints "foo"
call_func(b); // prints "bar"
```

In this case there is no relationship between foo and bar and no v-table is needed. On the other hand, we no longer have a single call_func - we have a templated function which is instantiated for both the foo and bar types. This is all done at compile-time, the advantage being faster code, the drawback being that the compiler needs to be aware of all the types involved - we can no longer “inject” types implementing an interface at runtime. When calling call_func, we need to have both the definition of the function and the declaration of the type we’re passing in visible.

During the rest of this post, I will talk about types in the context of a statically and strongly typed language, with a focus on static polymorphism. This pushes as much as possible of the type checking to the compilation stage, so many of the runtime issues of less strict languages become invalid syntax.

I will focus on C++ and cover some of the new C++17 features which enable these concepts or make them easier to work with. That being said, since this post focuses on types, I will also provide examples in Haskell, as Haskell can express these concepts much more succinctly.

Let’s start with the definition of a type: *a type represents the set of
possible values*. For example, the C++ type uint8_t represents the set of
integers from 0 to 255. Effectively this means that a variable of a given type
can only have values from within that set.

Since we defined a type as a set of possible values, we can talk about the cardinality of a type, in other words the number of values in the set. Based on cardinality, there are a few interesting classes of types to talk about:

The first interesting type to talk about is the type that represents the empty set, with |T| = 0.

In Haskell, this type is named Void. Since Haskell is a functional language, all functions must return a value, so it does not make sense to have a function that returns Void - the same way it doesn’t make sense to define a mathematical function with the empty set as its codomain. We do have an absurd function though, which maps the empty set to any value:

```
absurd :: Void -> a
```

This function cannot be called though.

In C++, the absence of a value is represented as the void type. Since C++ is not purely functional, we can define functions that don’t return anything. We can even say that a function does not take any arguments by putting a void between the parentheses:

```
void foo(void);
```

This is the equivalent of:

```
void foo();
```

Note though that we cannot have a *real* argument of type void, that is a
compile error as it doesn’t make any sense - we would be mandating the function
takes a value from the empty set. So we can say foo(void) but not
foo(void arg), or even foo(int arg, void).

The next interesting class consists of types with cardinality 1. A type T
with |T| = 1 is called a *unit* or *singleton type*. A variable of such a
type can only ever have a single possible value. In Haskell, the anonymous
representation is the empty tuple (). Here is an example of a function that
maps anything to this type:

```
unit :: a -> ()
unit _ = ()
```

Of course, we can declare our own singleton types. Below is a custom Singleton type and an equivalent unit function:

```
data Singleton = Singleton
unit :: a -> Singleton
unit _ = Singleton
```

In C++, the anonymous representation of a singleton is an empty std::tuple:

```
template <typename T> std::tuple<> unit(T)
{
    return { };
}
```

As can be seen from the above, Haskell makes it easier to define a function that takes an argument of any type, as it provides syntactic sugar for type parameters (a in our example). In C++, the equivalent declaration involves a template, but they boil down to the same thing. The non-anonymous C++ representation is a struct which doesn’t contain anything. All instances of such a struct are equivalent:

```
struct singleton { };

template <typename T> singleton unit(T)
{
    return { };
}
```

Here, things get a bit more interesting: a sum type is a type which can represent a value from any of the types it sums. Given type A and type B, the sum type S is their disjoint union, S = A ⊔ B, so a variable of type S can hold any value of A or any value of B, together with a tag recording which. S is called a sum type because its cardinality is the sum of the cardinalities of A and B: |S| = |A| + |B|.

Sum types are great, because they allow us to build up more complex types from simpler ones. Once we have unit types, we can build up more complex types out of them by summing them. For example, a boolean type which can be either true or false can be thought of as the sum of the singleton true type and the singleton false type. In Haskell, a boolean is defined as:

```
data Bool = True | False
```

Similarly, a Weekday type can be defined as:

```
data Weekday = Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday
```

Theoretically, numerical types could also be defined as huge sum types of every possible value they can represent. Of course, this is impractical, but we can reason about them the same way we reason about other sum types, we don’t have to treat them as a special case.

In C++, an equivalent of the above is an enum class. bool is a built-in type with special syntax, but we could define an equivalent as:

```
enum class Boolean
{
    True,
    False
};
```

It’s easy to see what a Weekday definition would look like. Things get more interesting when we throw type parameters into the mix. In Haskell, we have the Either type, which is declared as follows:

```
data Either a b = Left a | Right b
```

An instance of this could be either a Left a, where a is a type itself, which means it can be any of the values of a, or it can be Right b, with any of the values of b. In Haskell we use pattern-matching to operate on such a type, so we can declare a simple function that tells us whether we were given a Left a like this:

```
isLeft :: Either a b -> Bool
isLeft (Left _) = True
isLeft (Right _) = False
```

This might not look like much, but of course we can compose more complex functions. For example, say we have a function foo that takes an a and returns an a, a function bar that takes a b and returns a b. We can then write a transform function which takes an Either a b and, depending on the contained type, it applies the appropriate function:

```
-- Implementation of foo and bar not provided in this example
foo :: a -> a
bar :: b -> b
transform :: Either a b -> Either a b
transform (Left a) = Left (foo a)
transform (Right b) = Right (bar b)
```

This is way beyond the capabilities of a C++ enum class. The old way of implementing something like this in C++ was using a union and a tag enum to keep track of which is the actual type we’re working with:

```
// Definitions of A and B not provided in this example
struct A;
struct B;

struct Either
{
    Either(A a)
    {
        ab.left = a;
        tag = tag::isA;
    }

    Either(B b)
    {
        ab.right = b;
        tag = tag::isB;
    }

    union
    {
        A left;
        B right;
    } ab;

    enum class tag
    {
        isA,
        isB
    } tag;
};
```

Our implementation of transform would look like this:

```
// Implementations of foo and bar not provided in this example
A foo(A);
B bar(B);

Either transform(Either either)
{
    switch (either.tag)
    {
    case Either::tag::isA:
        return foo(either.ab.left);
    case Either::tag::isB:
        return bar(either.ab.right);
    }
}
```

Our Either type definition is obviously much more verbose than what we had in Haskell, and it doesn’t even support type parameters - at this point it only works with struct A and struct B, while the Haskell version works for any a and b types. The other major problem is that, while unions provide efficient storage for different types (the size of the union is the size of the maximum contained type), it is up to the implementer to make sure we don’t try to read an A as a B or vice-versa. That means we need to keep our tag in sync with what we put in the type and respect it when accessing the value of the union.

C++17 introduces a better, safer, parameterized type for this: std::variant. Variant takes any number of types as template arguments and stores an instance of any one of those types. Using variant, we can re-write the above as:

```
std::variant<A, B> transform(std::variant<A, B> either)
{
    return std::visit([](auto&& e) -> std::variant<A, B> {
        if constexpr (std::is_same_v<std::decay_t<decltype(e)>, A>)
            return foo(e);
        else
            return bar(e);
    }, either);
}
```

This is a lot of new syntax, so let’s break it down: std::variant<A, B> is the new C++17 sum type. In this case, we specify it holds either A or B (but it can hold an arbitrary number of types).

std::visit is a function that applies the visitor given as its first argument to the value currently held by the variant given as its second argument (it can also visit several variants at once). In our example, the compiler instantiates the lambda for both A and B, and at runtime the call is dispatched to the instantiation matching the alternative the variant actually holds.

if constexpr is also a new C++17 construct which evaluates its condition at compile time and discards the branch not taken, so no object code is generated for it. In this example, we determine at compile time whether the type we are being called with is A or B and apply the correct function based on that. Something very similar can be achieved with templates and enable_if, but this syntax makes for more readable code.

Note that with this version we can simply prepend a template <typename A, typename B> and make the whole function generic, as in the Haskell example. It doesn’t read as pretty (because we don’t have good pattern matching syntax in the language), but this is the new, type safe way of implementing and working with sum types, which is a major improvement.

With sum types out of the way, the remaining interesting category is that of
*product types*. Product types combine the values of several other types into
one. For types A and B, we have P = { (a, b) : a ∈ A, b ∈ B }, so
|P| = |A| x |B|.

In Haskell, the anonymous version of product types is represented by tuples, while the named version is represented by records. Here is an example of a perimeter function which computes the perimeter of a rectangle defined by two points, where each point is a tuple of numbers:

```
perimeter :: (Num n) => (n, n) -> (n, n) -> n
perimeter (x1, y1) (x2, y2) = 2 * (abs(x1 - x2) + abs(y1 - y2))
```

The named version would declare a Point type with Int coordinates and use that instead:

```
data Point = Point { x, y :: Int }
perimeter :: Point -> Point -> Int
perimeter (Point x1 y1) (Point x2 y2) = 2 * (abs(x1 - x2) + abs(y1 - y2))
```

The C++ equivalents are std::tuple for anonymous product types and struct for named types:

```
// Anonymous
int perimeter(std::tuple<int, int> p1, std::tuple<int, int> p2)
{
    return 2 * (abs(std::get<0>(p1) - std::get<0>(p2))
        + abs(std::get<1>(p1) - std::get<1>(p2)));
}

// Named
struct point
{
    int x;
    int y;
};

int perimeter(point p1, point p2)
{
    return 2 * (abs(p1.x - p2.x) + abs(p1.y - p2.y));
}
```

While sum types allow us to express values from multiple types into one, product types allow us to express values from several types together. Empty, unit, sum, and product types are the building blocks of a type system.

An optional type is a type that can hold any of the values of another type, or not hold any value, which is usually represented as a singleton. So an optional is effectively a sum type between a given type and a singleton representing “doesn’t hold a value”. In other words, the cardinality of an optional for a type T is |O| = |T| + 1.

In Haskell, an optional is the famous Maybe type:

```
data Maybe a = Just a | Nothing
```

A function that operates on Maybe could say “only apply foo if the optional contains an a”:

```
-- Implementation of foo not provided in this example
foo :: a -> a
transform :: Maybe a -> Maybe a
transform (Just a) = Just (foo a)
transform Nothing = Nothing
```

The new C++17 equivalent is the optional type:

```
// Implementation of foo not provided in this example
A foo(A);

std::optional<A> transform(std::optional<A> a)
{
    return a ? std::optional<A>(foo(*a)) : std::nullopt;
}
```

This might read a bit like the pointer implementation:

```
A* transform(A* a)
{
    return a != nullptr ? new A(foo(*a)) : nullptr;
}
```

There is a key difference though: the type contained in the optional is part of the object, so it is not allocated dynamically the way we would allocate a pointer. nullopt is a helper object of the singleton type nullopt_t.

Types are important because a big part of programming effectively consists of designing and composing types. Having a good understanding of the fundamentals leads to better, safer, and saner code.

We started by outlining some of the basic principles of type systems:

- Static and dynamic typing
- Weak and strong typing
- Static and dynamic polymorphism

Then we went over the building block types of a type system, with Haskell and C++ examples:

- Empty types (cardinality 0)
- Unit types (cardinality 1)
- Sum types (S of A and B has cardinality |A| + |B|)
- Product types (P of A and B has cardinality |A| × |B|)
- Optional types (O of A has cardinality |A| + 1)

Ben Deane had an excellent talk at CppCon this year, Using Types Effectively. Another great talk about type design from CppCon is C++, Abstract Algebra and Practical Applications by Robert Ramey. And then there is Bartosz Milewski’s blog about Haskell, C++, and category theory.

One of the most exciting features coming to C++ is coroutines. In this post, I will give a quick overview of how they are used today in C# to support generators and go over a few possible ways to bring the composability of Linq to the C++ world.

I will not go into the nitty-gritty details of coroutines, but in short, they are resumable functions – functions that can be suspended/resumed. Coroutines enable lazy evaluation, with two major applications: easy to read multi-threading (with async/await syntax in C#) and generators (with yield return syntax in C#). In this post, I will focus on generators and how they compose.

I will start with a C# example since Linq is, in my opinion, the gold standard for creating a processing pipeline, at least for non-functional languages. Linq is implemented as a set of extension methods for IEnumerable, and enables some very readable chaining of operations. For example, let’s get the first 100 natural numbers, filter out the odds, then square the remaining list.

The wrong way of doing this would be something like:

```
static IEnumerable<int> GetNumbers()
{
    for (int i = 0; i < 100; i++)
        if (i % 2 == 0)
            yield return i * i;
}

static void Main(string[] args)
{
    var result = GetNumbers();

    foreach (var item in result)
        Console.Write(item + " ");
}
```

The main problem with the above is that all the logic is inlined into GetNumbers, so things don’t decompose very well – for example what if we also want a function that squares the odd numbers? We would either duplicate the looping and squaring logic, or make the predicate we use to filter out things an input to the function. Same goes for the iterating logic and for the squaring. Luckily, we have Linq, which does just that:

```
static void Main(string[] args)
{
    var result =
        Enumerable.Range(0, 100).
        Where(x => x % 2 == 0).
        Select(x => x * x);

    foreach (var item in result)
        Console.Write(item + " ");
}
```

To illustrate the magic of generators, instead of relying on Enumerable.Range, let’s introduce a function that generates numbers forever:

```
static IEnumerable<int> Count()
{
    for (int i = 0; ; i++)
        yield return i;
}
```

Our code would then become:

```
static void Main(string[] args)
{
    var result =
        Count().
        Take(100).
        Where(x => x % 2 == 0).
        Select(x => x * x);

    foreach (var item in result)
        Console.Write(item + " ");
}
```

While not strictly necessary in this particular case, infinite generators cannot exist without lazy evaluation, a feature of many functional languages. Lazy evaluation has some very practical applications as it allows processing of data as it becomes available, instead of waiting for everything to be ready before moving on to the next step. While the 100 natural numbers example might not sound so useful, imagine rendering frames in a streaming video as they arrive over the network. Linq is great because it provides a clean separation between the generic algorithms (Where, Select etc.) and the problem-specific operations which are passed in as arguments. Linq operations also compose well, so they can be chained together to form pipelines.

While coroutines haven’t made it into the C++17 standard itself, they are coming as a technical specification, with MSVC already supporting them (code samples below compile with VS 2015 Update 3). The main syntax additions are the new co_await, co_return, and co_yield keywords. The first two are used for creating and awaiting tasks (which I won’t cover in this post), while co_yield is used in generators.

Here is a lazy counter in C++:

```
auto count_to(int n) -> std::experimental::generator<int>
{
    for (int i = 0; i < n; i++)
        co_yield i;
}

int main()
{
    for (int i : count_to(100))
        std::cout << i << " ";
}
```

Note the return type of count_to is a generator<int> (currently in the experimental namespace). generator<T> is the type implicitly created by the compiler when encountering a co_yield. Also worth noting that range-based for loops work over generators, as they expose begin() and end() methods. The type annotation for the count_to return type above is not really needed; I added it just to clarify what the compiler will generate in this case.

generator itself is pretty bare-bones; it doesn’t provide all the algorithms that Linq adds to IEnumerable. So if we wanted to do something like the above pipeline, we would need some algorithms. Here’s one way of implementing some of them:

```
auto count()
{
    for (int i = 0; i < 100; i++)
        co_yield i;
}

template <typename T>
auto take_n(std::experimental::generator<T> gen, int n)
{
    int i = 0;
    for (auto&& item : gen)
        if (i++ < n)
            co_yield item;
        else
            co_return;
}

template <typename T, typename Predicate>
auto filter(std::experimental::generator<T> gen, Predicate pred)
{
    for (auto&& item : gen)
        if (pred(item))
            co_yield item;
}

template <typename T, typename UnaryOperation>
auto map(std::experimental::generator<T> gen, UnaryOperation op)
{
    for (auto&& item : gen)
        co_yield op(item);
}
```

Here I switched from Linq’s Select and Where to the more commonly used map and filter, but they effectively implement the same thing. While this implementation is pretty straightforward, it doesn’t compose well at all:

```
int main()
{
    auto result =
        map(
            filter(
                take_n(count(), 100),
                [](int x) { return x % 2 == 0; }),
            [](int x) { return x * x; });

    for (auto&& item : result)
        std::cout << item << " ";
}
```

Definitely not like the nice chaining of Linq. So what gives? Why doesn’t generator come out-of-the-box with take_n, map, filter and all the other useful algorithms? Well, according to the single responsibility principle, these algorithms don’t belong in generator – generator encapsulates the lazy evaluation of the coroutine, it wouldn’t be the right place for algorithms. It’s also worth noting that Linq methods are not part of IEnumerable, they are extension methods. C++ doesn’t support extension methods, so we would need a slightly different design to achieve better chaining.

The next idea comes from pure OOP - let’s create a decorator over generator that exposes these algorithms. First, let’s declare our decorator as enumerable<T> and change our algorithms to work with the new type:

```
template <typename T>
struct enumerable;

template <typename T>
auto take_n(enumerable<T> gen, int n) -> enumerable<T>
{
    int i = 0;
    for (auto&& item : gen)
        if (i++ < n)
            co_yield item;
        else
            co_return;
}

template <typename T, typename Predicate>
auto filter(enumerable<T> gen, Predicate pred) -> enumerable<T>
{
    for (auto&& item : gen)
        if (pred(item))
            co_yield item;
}

template <typename T, typename UnaryOperation>
auto map(enumerable<T> gen, UnaryOperation op) -> enumerable<T>
{
    for (auto&& item : gen)
        co_yield op(item);
}
```

The implementation looks pretty much like before, except that now we take and return enumerable<T> instead of generator<T>. In this case the type annotation is mandatory, as by default the compiler would create a generator<T>.

We can then implement our enumerable to wrap a generator and expose member functions which forward to the above algorithms:

```
template <typename T>
struct enumerable
{
    // Needed by the compiler to create an enumerable from co_yield
    using promise_type = typename std::experimental::generator<T>::promise_type;

    enumerable(promise_type& promise) : _gen(promise) { }
    enumerable(enumerable<T>&& other) : _gen(std::move(other._gen)) { }
    enumerable(const enumerable<T>&) = delete;
    enumerable<T>& operator=(const enumerable<T>&) = delete;

    auto begin() { return _gen.begin(); }
    auto end() { return _gen.end(); }

    auto take_n(int n)
    {
        return ::take_n(std::move(*this), n);
    }

    template <typename Predicate>
    auto filter(Predicate pred)
    {
        return ::filter(std::move(*this), pred);
    }

    template <typename UnaryOperation>
    auto map(UnaryOperation op)
    {
        return ::map(std::move(*this), op);
    }

    std::experimental::generator<T> _gen;
};
```

A few things to note: we declare a promise_type and have a constructor which takes a promise as an argument. This is required by the compiler when creating the object on co_yield. We follow the same semantics as generator, since that is what we are wrapping – support only move-constructor, no copy-constructor. All the member algorithms do a destructive move on *this. This is intentional, as once we iterate over the encapsulated generator, it is no longer valid. Since we don’t expose a copy-constructor, we move out of *this when passing the generator to an algorithm. For completeness, we can also provide a function which converts from a generator to an enumerable:

```
template <typename T>
auto to_enumerable(std::experimental::generator<T> gen) -> enumerable<T>
{
    for (auto&& item : gen)
        co_yield item;
}
```

This works, and we can now compose algorithms by chaining the calls:

```
int main()
{
    auto result =
        to_enumerable(count()).
        take_n(100).
        filter([](int x) { return x % 2 == 0; }).
        map([](int x) { return x * x; });

    for (auto&& item : result)
        std::cout << item << " ";
}
```

Still, it is not ideal – first, we need to explicitly tell the compiler everywhere to return our type with co_yield instead of the default generator, and we need to handle conversions to and from the standard library generator. The enumerable algorithms compose well, but we’ll have trouble composing with functions that work with generators. Also, having a huge class consisting solely of algorithms is not the best design, especially in a language where free functions are first class citizens.

An alternative approach, which the Boost Ranges library takes, is to overload |, the “pipe” operator, so we can compose our calls like this:

```
int main()
{
    auto result =
        count() |
        take_n(100) |
        filter([](int x) { return x % 2 == 0; }) |
        map([](int x) { return x * x; });

    for (auto&& item : result)
        std::cout << item << " ";
}
```

One way we can get this working is to first create a type that wraps an algorithm and an operator| implementation between a lhs generator and a rhs of our type:

```
template <typename Predicate>
struct filter_t
{
    filter_t(Predicate pred) : _pred(pred) { }

    template <typename T>
    auto operator()(std::experimental::generator<T> gen) const
    {
        for (auto&& item : gen)
            if (_pred(item))
                co_yield item;
    }

    const Predicate _pred;
};

template <typename T, typename Predicate>
auto operator|(
    std::experimental::generator<T> lhs,
    const filter_t<Predicate>& rhs)
{
    return rhs(std::move(lhs));
}
```

Here, filter_t holds the Predicate we want to use, and operator| applies it on the given generator. This works, but we wouldn’t be able to instantiate filter_t with a lambda like in the above chaining example without specifying the Predicate type in the call. If we want to leverage type deduction, we can create a simple helper function that creates a filter_t from a given argument:

```
template <typename Predicate>
auto filter(Predicate pred)
{
    return filter_t<Predicate>(pred);
}
```

With this we can call | filter(/* predicate */) on a generator and get back a filtered generator. Full implementation for take_n, filter and map would be:

```
struct take_n_t
{
    take_n_t(int n) : _n(n) { }

    template <typename T>
    auto operator()(std::experimental::generator<T> gen) const
    {
        int i = 0;
        for (auto&& item : gen)
            if (i++ < _n)
                co_yield item;
            else
                co_return;
    }

    const int _n;
};

template <typename T>
auto operator|(
    std::experimental::generator<T> lhs,
    const take_n_t& rhs)
{
    return rhs(std::move(lhs));
}

auto take_n(int n)
{
    return take_n_t(n);
}

template <typename Predicate>
struct filter_t
{
    filter_t(Predicate pred) : _pred(pred) { }

    template <typename T>
    auto operator()(std::experimental::generator<T> gen) const
    {
        for (auto&& item : gen)
            if (_pred(item))
                co_yield item;
    }

    const Predicate _pred;
};

template <typename T, typename Predicate>
auto operator|(
    std::experimental::generator<T> lhs,
    const filter_t<Predicate>& rhs)
{
    return rhs(std::move(lhs));
}

template <typename Predicate>
auto filter(Predicate pred)
{
    return filter_t<Predicate>(pred);
}

template <typename UnaryOperation>
struct map_t
{
    map_t(UnaryOperation op) : _op(op) { }

    template <typename T>
    auto operator()(std::experimental::generator<T> gen) const
    {
        for (auto&& item : gen)
            co_yield _op(item);
    }

    const UnaryOperation _op;
};

template <typename T, typename UnaryOperation>
auto operator|(
    std::experimental::generator<T> lhs,
    const map_t<UnaryOperation>& rhs)
{
    return rhs(std::move(lhs));
}

template <typename UnaryOperation>
auto map(UnaryOperation op)
{
    return map_t<UnaryOperation>(op);
}
```

With this approach, we can apply our algorithms over a generator without having to introduce a different type. They also compose very nicely, the only slightly odd thing being using the | operator (though as I mentioned, there is a precedent for this in Boost and chances are it might show up in other places in the future).

One thing that would’ve made things even easier but unfortunately was not approved for C++17 is unified call syntax. At a high level, unified call syntax would make the compiler try to resolve x.f() to f(x) if decltype(x) doesn’t have an f() member function but there is a free function f(decltype(x)). Similarly, if no f(decltype(x)) exists but decltype(x) has a member function f(), f(x) would resolve to the member function call x.f().

If it’s not obvious, unified call syntax would allow us to easily create extension methods. We would be able to revert our algorithm code to the first version:

```
template <typename T>
auto take_n(std::experimental::generator<T> gen, int n)
{
    int i = 0;
    for (auto&& item : gen)
        if (i++ < n)
            co_yield item;
        else
            co_return;
}

template <typename T, typename Predicate>
auto filter(std::experimental::generator<T> gen, Predicate pred)
{
    for (auto&& item : gen)
        if (pred(item))
            co_yield item;
}

template <typename T, typename UnaryOperation>
auto map(std::experimental::generator<T> gen, UnaryOperation op)
{
    for (auto&& item : gen)
        co_yield op(item);
}
```

But now this becomes very composable as calling take_n, filter or map on a generator would resolve to the free functions if the generator itself does not have them as members:

```
int main()
{
    auto result =
        count().
        take_n(100).
        filter([](int x) { return x % 2 == 0; }).
        map([](int x) { return x * x; });

    for (auto&& item : result)
        std::cout << item << " ";
}
```

The above currently does not compile but it should (disclaimer: slight tweaks might be required) if unified call syntax becomes part of the standard.

We went over a couple of alternatives to implement some common algorithms over C++ generators with a focus on composability:

- Stand-alone functions are simple but don’t compose very well
- Using a decorator works, but is not ideal from a design point of view and not very idiomatic
- Using the pipe operator for chaining and helper types for the algorithms is the best approach today
- Unified call syntax would simplify things a lot, enabling a mechanism to implement these algorithms as extension methods

As a follow up to my previous post, I want to talk about two major new C++
features that keep not making it into the standard, namely *Concepts* and
*Modules*. These would have a significant impact on the code examples I
provided while discussing dependency injection, so here’s a quick peek into the
future:

One way to think about concepts is that concepts are to templates what interfaces are to classes. Similarly to how interfaces specify a contract which implementing types must satisfy, concepts specify a contract which template argument types must satisfy. The main difference is that interfaces/classes enable runtime polymorphism while concepts/templates enable polymorphism at compile-time.

Runtime polymorphism:

```
// Our interface
struct IEngine
{
    virtual void Start() = 0;
    virtual void Stop() = 0;
    virtual ~IEngine() = default;
};

// One implementation
class V6Engine : public IEngine
{
    void Start() override { }
    void Stop() override { }
};

// Another implementation
class V8Engine : public IEngine
{
    void Start() override { }
    void Stop() override { }
};

// A function that works against the interface
void StartEngine(IEngine& engine)
{
    engine.Start();
}

void StartEngines()
{
    V6Engine v6engine;
    V8Engine v8engine;

    // calls Start() on V6Engine instance passed as IEngine
    StartEngine(v6engine);
    // calls Start() on V8Engine instance passed as IEngine
    StartEngine(v8engine);
}
```

Here, we have a single StartEngine function which works against an interface. Calling Start() for that interface involves a virtual function call, which means that at runtime, given an object of a type implementing IEngine, the code needs to figure out which function of the implementing type to call. For V6Engine, IEngine::Start() is V6Engine::Start(); for V8Engine, IEngine::Start() is V8Engine::Start(). Our classes have a vtable - a table containing this mapping, so a virtual call looks up the actual function to call in the vtable.

This is the “classical” object-oriented way of dispatching calls. The advantage of this approach is that it works across binaries [1] (we can export StartEngine from a shared library and pass in an external IEngine implementation to it), the disadvantage is the extra redirection - implementing classes must have a vtable and call resolution involves jumping through it.

Compile-time polymorphism:

```
// One implementation
struct V6Engine
{
    void Start() { }
    void Stop() { }
};

// Another implementation
struct V8Engine
{
    void Start() { }
    void Stop() { }
};

// A generic function
template <typename TEngine>
void StartEngine(TEngine& engine)
{
    engine.Start();
}

void StartEngines()
{
    V6Engine v6engine;
    V8Engine v8engine;

    // calls Start() on V6Engine, this instantiates
    // StartEngine<V6Engine>(V6Engine& engine)
    StartEngine(v6engine);
    // calls Start() on V8Engine, this instantiates
    // StartEngine<V8Engine>(V8Engine& engine)
    StartEngine(v8engine);
}
```

A few differences to note here: we don’t have an IEngine interface anymore and the two types we use, V6Engine and V8Engine, no longer have virtual functions. Calling V6Engine::Start() or V8Engine::Start() now no longer involves a virtual call. The two StartEngine calls are actually made to different functions now - at compile time, whenever the compiler encounters a call to StartEngine with a new type, it instantiates the template, meaning it creates a new function based on that template with the template type as the provided type. We actually end up with one function that can start a V6Engine and one that can start V8Engine, both produced from the same template.

This is compile-time polymorphism, the advantage being that everything is determined during build - no virtual calls etc., the disadvantage being that the compiler needs to have a definition of the template available whenever it needs to create a new instance [2]. In this case we can’t encapsulate what happens inside StartEngine if we want others to be able to call the function.

The above works just fine; the problem is that, in general, given a templated function, it’s not obvious what contract the types it expects need to satisfy. For example, our StartEngine expects that the given type has a Start() function it can call. This isn’t obvious from the function declaration though. Also, compiler errors when templates cannot be instantiated are notoriously hard to decipher. The proposed solution to both of the above is concepts. Here is how an engine concept would look:

```
template <typename TEngine>
concept bool Engine() {
    return requires(TEngine engine) {
        { engine.Start() };
        { engine.Stop() };
    };
}
```

This defines the Engine concept to require any type satisfying it to have a Start() function and a Stop() function. StartEngine would then be able to explicitly say what kind of types it expects:

```
template <Engine TEngine>
void StartEngine(TEngine& engine)
{
    engine.Start();
}
```

It is now clear from the function declaration that StartEngine expects a type satisfying the Engine concept. We can look at the concept definition to see what we need to implement on our type. The compiler would also be able to issue much clearer errors when the type we pass in is missing one of the concept requirements.

Unfortunately, while several proposals for concepts have been put forward in recent years, they weren’t approved to be part of the C++17 standard. That being said, it’s fairly certain that they will eventually make it into the standard.

Another noteworthy feature is modules: currently, the #include directive textually includes the given file into the source file being compiled. This has a lot of build-time overhead (the same header files get compiled over and over as they are included in various source files) and forces us to be extra-careful in how we scope things: what goes in a header file vs. what goes in a source file etc.

Modules aim to replace the header/source file split and provide a better way to group components and expose functionality. For example, here is a header/source file pair from my previous post:

```
// ICar.h
#pragma once
#include <memory>
#include "IEngine.h"

struct ICar
{
    virtual void Drive() = 0;
    virtual ~ICar() = default;
};

std::unique_ptr<ICar> MakeCar(std::unique_ptr<IEngine>&& engine);

// Car.cpp
#include "ICar.h"

class Car : public ICar
{
public:
    Car(std::unique_ptr<IEngine>&& engine)
        : m_engine(std::move(engine))
    {
    }

    void Drive() override
    {
        m_engine->Start();
        // drive
        m_engine->Stop();
    }

private:
    std::unique_ptr<IEngine> m_engine;
};

std::unique_ptr<ICar> MakeCar(std::unique_ptr<IEngine>&& engine)
{
    return std::make_unique<Car>(std::move(engine));
}
```

Using modules, we would have:

```
module Car;
import Engine;

export struct ICar
{
    virtual void Drive() = 0;
    virtual ~ICar() = default;
};

class Car : public ICar
{
public:
    Car(std::unique_ptr<IEngine>&& engine)
        : m_engine(std::move(engine))
    {
    }

    void Drive() override
    {
        m_engine->Start();
        // drive
        m_engine->Stop();
    }

private:
    std::unique_ptr<IEngine> m_engine;
};

export std::unique_ptr<ICar> MakeCar(std::unique_ptr<IEngine>&& engine)
{
    return std::make_unique<Car>(std::move(engine));
}
```

This is now a single file where we import the Engine module (instead of #include), we provide the interface and concrete implementation, the factory function, and we mark publicly-visible declarations with the export keyword.

Like Concepts, Modules haven’t made it into the C++17 standard, but MSVC has a working implementation as of VS2015 Update 1.

So putting the above together, here is what dependency injection in C++ might look like in the not-too-far future:

```
// Engine.m
module Engine;

export template <typename TEngine>
concept bool Engine() {
    return requires(TEngine engine) {
        { engine.Start() };
        { engine.Stop() };
    };
}

// V8Engine.m
module V8Engine;

export class V8Engine
{
public:
    void Start() { /* start the engine */ }
    void Stop() { /* stop the engine */ }
};

// Car.m
module Car;
import Engine;
import V8Engine;

export template <Engine TEngine>
    requires DefaultConstructible<TEngine>
class Car
{
public:
    void Drive()
    {
        m_engine.Start();
        // drive
        m_engine.Stop();
    }

private:
    TEngine m_engine;
};

// Explicit instantiation exported from this module so clients
// won't have to re-instantiate the template for the V8Engine type
export class Car<V8Engine>;
```

This can be used as follows:

```
import Car;
...
Car<V8Engine> car;
car.Drive();
```

This would be the equivalent of Dependency Injection with Templates I mentioned in the previous post.

[1] As long as binaries are compiled with the same compiler. Otherwise the code produced by different compilers might have different vtable layouts and different name mangling.

[2] Other disadvantages are slower compile times and potential code bloat, as each template instantiation gets translated into a new function.

In this post, I will switch gears from functional C++ to object oriented C++ and talk about dependency injection.

Let’s start with a simple example: take a Car class with a Drive() method. Let’s say this class contains a V8Engine attribute with Start() and Stop() methods. An initial implementation might look like this:

*V8Engine.h (publicly visible)*:

```
#pragma once

class V8Engine
{
public:
    void Start();
    void Stop();
};
```

*V8Engine.cpp*:

```
#include "V8Engine.h"

void V8Engine::Start()
{
    // start the engine
}

void V8Engine::Stop()
{
    // stop the engine
}
```

*Car.h (publicly visible)*:

```
#pragma once
#include "V8Engine.h"

class Car
{
public:
    void Drive();

private:
    V8Engine m_engine;
};
```

*Car.cpp*:

```
#include "Car.h"

void Car::Drive()
{
    m_engine.Start();
    // drive
    m_engine.Stop();
}
```

In the above example, Car is tightly coupled to V8Engine, meaning we can’t create a car without a concrete engine implementation. If we want the ability to swap various engines or use a mock engine during testing, we could reverse the dependency by creating an IEngine interface and decoupling Car from the concrete V8Engine implementation. This way, we only expose an IEngine interface and a factory function. Car can work against that:

*IEngine.h (publicly visible)*:

```
#pragma once
#include <memory>

struct IEngine
{
    virtual void Start() = 0;
    virtual void Stop() = 0;
    virtual ~IEngine() = default;
};

std::unique_ptr<IEngine> MakeV8Engine();
```

*V8Engine.cpp*:

```
#include "IEngine.h"

class V8Engine : public IEngine
{
    void Start() override { /* start the engine */ }
    void Stop() override { /* stop the engine */ }
};

std::unique_ptr<IEngine> MakeV8Engine()
{
    return std::make_unique<V8Engine>();
}
```

*Car.h (publicly visible)*:

```
#pragma once
#include "IEngine.h"

class Car
{
public:
    Car(std::unique_ptr<IEngine>&& engine)
        : m_engine(std::move(engine))
    {
    }

    void Drive();

private:
    std::unique_ptr<IEngine> m_engine;
};
```

*Car.cpp*:

```
#include "Car.h"

void Car::Drive()
{
    m_engine->Start();
    // drive
    m_engine->Stop();
}
```

Headers simply get textually included in each compilation unit by the #include directive. It is not mandatory to provide a header file for each class declaration. If a class can be scoped to a single source file, then it doesn’t need a header declaration (for example the V8Engine class above does not need a V8Engine.h header corresponding to the V8Engine.cpp). It is also a good idea to have public headers and internal headers: public headers contain the public API surface and can be included by other parts of the system, while internal headers are only used within the component and should not be included by external code.

Default to the least visible: try to keep everything inside the cpp file (like V8Engine.cpp). If that is not enough, an internal header might do. A declaration should be pulled into a public header only when external components need to reference it.

It’s a good idea to declare a default virtual destructor: if we delete a derived object through a pointer to the interface, the derived destructor won’t get called unless the interface declares a virtual destructor. Note the destructor does not need to be explicitly defined - the compiler can generate a default one.

MSVC compiler provides a __declspec(novtable) [1] custom attribute which tells the compiler not to generate a vtable for pure abstract classes. This reduces code size. Below is the IEngine declaration with this attribute:

```
struct __declspec(novtable) IEngine
{
    virtual void Start() = 0;
    virtual void Stop() = 0;
    virtual ~IEngine() = default;
};
```

I won’t include it in the code samples in this post, but it’s worth keeping in mind when working with MSVC.

When working with interfaces as opposed to concrete types, we use factory functions to get object instances. Below is a possible naming convention, taking object ownership into account:

```
std::unique_ptr<IFoo> MakeFoo(); // caller takes ownership of a new instance
IFoo& UseFoo();                  // owned elsewhere, guaranteed to outlive the caller
std::shared_ptr<IFoo> GetFoo();  // shared ownership
```

The first function, MakeFoo, returns a unique pointer, passing ownership to the caller. Like in the example above, the unique_ptr can be moved into another object, which ends up owning it. Use a Make function when each call creates a new instance.

The second function implies there already exists an IFoo object which is owned by someone else, with the guarantee that it will outlive the caller. In that case, there is no need for pointers and we can simply return a reference to the object. This can be used, for example, for singletons. Below is an example of a singleton Engine:

```
IEngine& UseEngine()
{
    static auto instance = std::make_unique<Engine>();
    return *instance;
}
```

The third function, GetFoo, implies shared ownership - we get an object that other objects might hold a reference to, but we don’t have the lifetime guarantee a singleton would give us, so we need to use a shared pointer to make sure the object is kept alive long enough.
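The three conventions can be sketched together; IFoo and Foo below are placeholder types for illustration:

```cpp
#include <memory>

struct IFoo
{
    virtual int Value() const = 0;
    virtual ~IFoo() = default;
};

struct Foo : IFoo
{
    int Value() const override { return 42; }
};

// Make: each call creates a new instance; the caller takes ownership.
std::unique_ptr<IFoo> MakeFoo()
{
    return std::make_unique<Foo>();
}

// Use: the instance is owned elsewhere and guaranteed to outlive the caller.
IFoo& UseFoo()
{
    static Foo instance;
    return instance;
}

// Get: ownership is shared; the object lives while anyone holds a reference.
std::shared_ptr<IFoo> GetFoo()
{
    static auto instance = std::make_shared<Foo>();
    return instance;
}
```

Sticking to such a convention makes ownership obvious at every call site.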

Since Car now works with an IEngine interface, in test code we can mock the engine:

*Test.cpp*:

```
#include "Car.h"

class MockEngine : public IEngine
{
    void Start() override { /* mock logic */ }
    void Stop() override { /* mock logic */ }
};

void Test()
{
    Car car(std::make_unique<MockEngine>());
    // Test Car without a real Engine
}
```

We can also expose Car as a simple interface, hiding its implementation details, in which case we would end up with the following:

*ICar.h (publicly visible)*:

```
#pragma once
#include <memory>
#include "IEngine.h"

struct ICar
{
    virtual void Drive() = 0;
    virtual ~ICar() = default;
};

std::unique_ptr<ICar> MakeCar(std::unique_ptr<IEngine>&& engine);
```

*Car.cpp*:

```
#include "ICar.h"

class Car : public ICar
{
public:
    Car(std::unique_ptr<IEngine>&& engine)
        : m_engine(std::move(engine))
    {
    }

    void Drive() override
    {
        m_engine->Start();
        // drive
        m_engine->Stop();
    }

private:
    std::unique_ptr<IEngine> m_engine;
};

std::unique_ptr<ICar> MakeCar(std::unique_ptr<IEngine>&& engine)
{
    return std::make_unique<Car>(std::move(engine));
}
```

Test would become:

```
#include "ICar.h"

class MockEngine : public IEngine
{
    void Start() override { /* mock logic */ }
    void Stop() override { /* mock logic */ }
};

void Test()
{
    auto car = MakeCar(std::make_unique<MockEngine>());
    // Test ICar without a real Engine
}
```

Note this allows the caller to pass in any IEngine. We provide an out-of-the-box V8Engine but other engines can be injected when Car gets constructed. The headers IEngine.h and ICar.h are public per our above definition.

In general, it’s great if we can get the rest of the component code and unit tests to work against the interface. Sometimes though we might need to know more about the actual implementation inside our component, even if externally we only expose an interface. In that case, we can add an internal Car.h header:

*Car.h (internal)*:

```
#pragma once
#include "ICar.h"

class Car : public ICar
{
public:
    Car(std::unique_ptr<IEngine>&& engine)
        : m_engine(std::move(engine))
    {
    }

    void Drive() override;

private:
    std::unique_ptr<IEngine> m_engine;
};
```

*Car.cpp* becomes:

```
#include "Car.h"

void Car::Drive()
{
    m_engine->Start();
    // drive
    m_engine->Stop();
}

std::unique_ptr<ICar> MakeCar(std::unique_ptr<IEngine>&& engine)
{
    return std::make_unique<Car>(std::move(engine));
}
```

Now we can include the internal header, and, while not necessarily recommended, we can cast ICar to Car inside the component:

```
auto icar = MakeCar(MakeV8Engine());
auto& car = static_cast<Car&>(*icar);
```

Another trick, if we need access to internals (again, not something necessarily recommended), is to make the unit test class testing Car a friend of the Car class, so it can access its private members.
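The friend trick might look like this; CarTests is a hypothetical test class name, and IEngine/ICar are repeated here so the sketch is self-contained:

```cpp
#include <memory>
#include <utility>

struct IEngine
{
    virtual void Start() = 0;
    virtual void Stop() = 0;
    virtual ~IEngine() = default;
};

struct ICar
{
    virtual void Drive() = 0;
    virtual ~ICar() = default;
};

class Car : public ICar
{
public:
    explicit Car(std::unique_ptr<IEngine>&& engine)
        : m_engine(std::move(engine))
    {
    }

    void Drive() override
    {
        m_engine->Start();
        // drive
        m_engine->Stop();
    }

private:
    friend class CarTests; // grants the test class access to private members
    std::unique_ptr<IEngine> m_engine;
};

struct MockEngine : IEngine
{
    void Start() override { /* mock logic */ }
    void Stop() override { /* mock logic */ }
};

class CarTests
{
public:
    static bool EngineIsSet(const Car& car)
    {
        return car.m_engine != nullptr; // OK: CarTests is a friend
    }
};
```

The friend declaration lives in Car itself, so it documents exactly which test code is allowed to peek inside.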

In summary, with this approach we are able to:

- Hide implementation details in the .cpp files
- Work against abstract interfaces
- Inject dependencies during object construction

An alternative to the above is to use templates. In this case, we would have to provide the implementation inside the header file, as code needs to be available when templates get instantiated:

*V8Engine.h (publicly visible)*:

```
#pragma once

class V8Engine
{
public:
    void Start();
    void Stop();
};
```

*V8Engine.cpp*:

```
#include "V8Engine.h"

void V8Engine::Start()
{
    // start the engine
}

void V8Engine::Stop()
{
    // stop the engine
}
```

*Car.h (publicly visible)*:

```
#pragma once
template <typename TEngine>
class Car
{
public:
void Drive()
{
m_engine.Start();
// drive
m_engine.Stop();
}
private:
TEngine m_engine;
};
```

Note Car is implemented in the header and V8Engine is also a publicly visible header. Now we can create an instance of Car like this:

```
#include "V8Engine.h"
#include "Car.h"
...
Car<V8Engine> car;
```

Mocking the engine in test code would look like this:

```
#include "Car.h"

class MockEngine
{
public:
    void Start() { /* mock logic */ }
    void Stop() { /* mock logic */ }
};

void Test()
{
    Car<MockEngine> car;
    // Test Car without a real Engine
}
```

With this approach:

- Dependencies are injected during template instantiation
- There is no need for virtual calls (TEngine is not an interface, so calls can be resolved at compile-time)
- Car<T> can be default-constructed

A drawback here is we expose the implementation details of Car inside the header file and we have to make this publicly visible.

We can use a hybrid approach if we don’t need an externally injected Engine. Say our component provides a V8Engine, a V6Engine, and we have a MockEngine used during testing. We have the same componentization requirements but don’t need to expose all the details to consumers. In that case we could have something like this:

*ICar.h (publicly visible)*:

```
#pragma once
#include <memory>

struct ICar
{
    virtual void Drive() = 0;
    virtual ~ICar() = default;
};

std::unique_ptr<ICar> MakeV8Car();
std::unique_ptr<ICar> MakeV6Car();
```

*Car.h (internal)*:

```
#pragma once
#include "ICar.h"

template <typename TEngine>
class Car : public ICar
{
public:
    void Drive() override
    {
        m_engine.Start();
        // drive
        m_engine.Stop();
    }

private:
    TEngine m_engine;
};
```

*Car.cpp*:

```
#include "Car.h"
#include "V8Engine.h"
#include "V6Engine.h"

std::unique_ptr<ICar> MakeV8Car()
{
    return std::make_unique<Car<V8Engine>>();
}

std::unique_ptr<ICar> MakeV6Car()
{
    return std::make_unique<Car<V6Engine>>();
}
```

Test would remain the same as in the example above, where we worked against a Car type (not an ICar) which we instantiate with a MockEngine.

With this approach:

- Our external API is an interface
- Internally we still inject the dependency using a template

With this approach, we do have an interface and virtual calls for Car but not for TEngine types. One drawback with this approach is that consumers cannot inject their own Engine type: we can only create cars with engines that are known within our component.

We decoupled Car from V8Engine and looked at three ways of injecting the dependency:

- Using interfaces, where dependency is injected at runtime during object creation
- Using templates, where dependency is injected at compile-time during template instantiation
- A hybrid approach which uses templates internally but exposes only interfaces publicly

Each of these approaches has pros and cons, the tradeoffs mostly being around encapsulation (how much of the component code we expose publicly), runtime (templates are instantiated at compile-time so no virtual calls etc.), type constraints (with templates we don’t require engines to implement a particular IEngine interface), and flexibility (with the hybrid approach we can’t inject an external engine, we can only use what the component has available internally).

[1] For more details on novtable, see MSDN.

Using hash maps (or dictionaries, or lookups) is a very natural way of coding in some languages, especially dynamic languages, where usually an object can be treated as a map itself, to which attributes and methods can be added or removed at runtime.

In practice though, maps are often used to convert a value of one type into a value of a different type. It is not uncommon to have very small maps like

```
const std::unordered_map<Foo, Bar> fooBarMap = {
    { foo1, bar1 },
    { foo2, bar2 },
    { foo3, bar3 }
};
...
auto bar = fooBarMap.at(foo);
```

Here it is useful to make a distinction between the pattern and the data structure. The coding pattern itself is great - mapping a value from a type to a value of another type should definitely be declarative. Below is a counterexample of non-declarative mapping:

```
Bar barFromFoo(const Foo& foo)
{
    if (foo == foo1)
        return bar1;
    if (foo == foo2)
        return bar2;
    if (foo == foo3)
        return bar3;
    ...
}
```

This is really ugly. As I mentioned in Clean Code - Part 1, branching should be avoided whenever possible, and this is a good opportunity to use a declarative approach as opposed to a bunch of branching logic. That being said, while the mapping pattern is great, in C++ the data structure most developers default to is not the optimal one for this.

If you are coding in C++, odds are you care a little bit about the runtime footprint of your code. In that case, you might be surprised to learn that, while an unordered_map in C++ (or a lookup or hash map or dictionary in any other language) has an average lookup cost of O(1), there are better ways to implement the above pattern.

An std::map in C++ is implemented as a balanced tree (typically red-black), while std::unordered_map is a hash table of buckets. Calling at() on an unordered_map implies the given key has to be hashed and the matching bucket searched to find the value. Calling [] with a key not already present will add it to the data structure, which might trigger a rehash. There is a lot of work happening under the hood, and while this makes sense for an unordered_map of arbitrarily large size, for small lookups it is a lot of overhead.
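The operator[] side effect is easy to demonstrate - looking up a missing key with [] silently inserts a default-constructed value:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Returns the map size after looking up a key that is not present,
// illustrating that operator[] inserts a default-constructed value.
std::size_t SizeAfterMissingLookup()
{
    std::unordered_map<std::string, int> m = { { "present", 1 } };
    int value = m["missing"]; // inserts { "missing", 0 }
    (void)value;
    return m.size();
}
```

This is also why operator[] is unavailable on a const map - use at() there instead.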

An alternative provided by the Boost library is flat_map [1]. This has similar semantics to std::map, but the key-value pairs are stored in a sorted contiguous data structure, so traversing it is more efficient than walking a tree.

In general, there are a couple of approaches for keeping a hash map in a linear data structure:

- The keys can be kept sorted, which has O(N) worst case insertion since it might require all elements to be moved to fit a new one and O(logN) lookup (binary search)
- The keys can be kept unsorted, which has O(1) insertion (simple append) but O(N) lookup (linear search)
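For the sorted variant, std::lower_bound provides the O(logN) lookup. A small sketch, with int keys and values standing in for real types and -1 as an assumed "not found" sentinel:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Lookup in a vector kept sorted by key: O(log N) via binary search.
int LookupSorted(const std::vector<std::pair<int, int>>& sorted, int key)
{
    auto it = std::lower_bound(
        sorted.begin(), sorted.end(), key,
        [](const std::pair<int, int>& elem, int k) { return elem.first < k; });
    if (it == sorted.end() || it->first != key)
        return -1; // sentinel for "not found"
    return it->second;
}
```

The comparator only looks at the key half of each pair, so values never participate in the search.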

For very small-sized lookups, the cost of hashing itself might outweigh a linear traversal, so for a small N

```
const std::unordered_map<Foo, Bar> fooBarMap = {
    { foo1, bar1 },
    { foo2, bar2 },
    { foo3, bar3 }
};
...
auto bar = fooBarMap.at(foo);
```

performs worse than

```
const std::vector<std::pair<Foo, Bar>> fooBarMap = {{
    { foo1, bar1 },
    { foo2, bar2 },
    { foo3, bar3 }
}};
...
auto it = std::find_if(
    fooBarMap.cbegin(),
    fooBarMap.cend(),
    [&](const auto& elem) { return elem.first == foo; });
auto bar = it->second;
```

On my machine (using MSVC 2015 STL implementation), for an N of 5, find_if on a vector is about twice as fast as the equivalent unordered_map lookup.

There’s even more hidden cost: std::vector manages a dynamic array which is allocated on the heap. Having an std::vector initialized with key-values as described above, even if more efficient than an unordered_map, still has some associated cost in terms of heap allocations (albeit smaller than unordered_map). std::array is a much better suited container for cases when the key-values are known at compile time, as std::array simply wraps a regular array which is not allocated on the heap. So a more efficient (in terms of initialization cost) way of declaring such a lookup is

```
const std::array<std::pair<Foo, Bar>, 3> fooBarMap = {{
    { foo1, bar1 },
    { foo2, bar2 },
    { foo3, bar3 }
}};
```

We can still apply the std::find_if algorithm on this array, but we skip a heap allocation. Depending on the template types used, we might be able to skip any allocations whatsoever (if both types are trivial [2]). For example, note that std::string, similarly to a vector, wraps a heap-allocated char* and constructing it requires heap allocations. const char* to a string literal on the other hand is just a pointer to the .rodata segment. So this

```
const std::array<std::pair<std::string, Bar>, 3> fooBarMap = {{
    { "foo1", bar1 },
    { "foo2", bar2 },
    { "foo3", bar3 }
}};
```

performs three heap allocations (for "foo1", "foo2", and "foo3", absent small-string optimization), while the (mostly) equivalent

```
const std::array<std::pair<const char*, Bar>, 3> fooBarMap = {{
    { "foo1", bar1 },
    { "foo2", bar2 },
    { "foo3", bar3 }
}};
```

shouldn’t perform any allocations.

Since in practice maps are often used to implement the above described pattern of mapping a value from one type to a value of a different type for a small set of known values, it would be great to combine the efficiency of an array with the nice lookup semantics of an unordered_map container.

I propose a generic container of the following shape:

```
template <
    typename TKey,
    typename T,
    size_t N,
    typename KeyEqual = key_equal<TKey>>
struct associative_array
{
    std::pair<TKey, T> m_array[N];
    ...
};
```

key_equal should simply resolve to == for most types, but be specialized for string types (to use strcmp, wcscmp etc.) and allow clients to provide their own key_equal specializations when needed.

```
template <typename T> struct key_equal
{
    bool operator()(const T& lhs, const T& rhs) const
    {
        return lhs == rhs;
    }
};

template <> struct key_equal<char*>
{
    bool operator()(const char* lhs, const char* rhs) const
    {
        return strcmp(lhs, rhs) == 0;
    }
};

...

// specializations for wchar_t and const variations of the above
```
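To see why the string specialization matters: two equal C strings stored at different addresses compare unequal with ==, but equal with strcmp. A self-contained copy of the sketch (using the const char* variation):

```cpp
#include <cstring>

// Default: compare with ==.
template <typename T> struct key_equal
{
    bool operator()(const T& lhs, const T& rhs) const { return lhs == rhs; }
};

// Specialization for C strings: compare contents, not pointers.
template <> struct key_equal<const char*>
{
    bool operator()(const char* lhs, const char* rhs) const
    {
        return std::strcmp(lhs, rhs) == 0;
    }
};
```

Without the specialization, lookups keyed by const char* would only match when the caller passed the exact same pointer used at initialization.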

Satisfying the container concept is fairly easy (eg. size() would return N, iterators over the member array are trivial to implement etc.), the only interesting methods are find(), at(), and operator[]:

```
...
struct associative_array
{
    ...

    iterator find(const TKey& key)
    {
        return std::find_if(
            begin(),
            end(),
            [&](const auto& elem) { return KeyEqual{}(key, elem.first); });
    }

    T& at(const TKey& key)
    {
        auto it = find(key);
        if (it == end())
            throw std::out_of_range("...");
        return it->second;
    }

    T& operator[](const TKey& key)
    {
        return find(key)->second;
    }

    ...
};
```

find() wraps std::find_if leveraging KeyEqual (with default implementation as key_equal), at() wraps a bounds-checked find, while operator[] does not check bounds. const implementations of the above are also needed (identical except returning const T&).

Such a container would have similar semantics to std::unordered_map (minus the ability to add elements given a key not already present in the container) and the same performance profile as std::array:

```
const associative_array<Foo, Bar, 3> fooBarMap = {{
    { foo1, bar1 },
    { foo2, bar2 },
    { foo3, bar3 }
}};
...
auto bar = fooBarMap[foo];
```

Note the only syntax difference between the above and unordered_map is the container type, the extra size N which needs to be specified at declaration time, and an extra pair of curly braces. In practice, this should have a significantly better lookup time than an unordered_map for a small N (linear time, but since N is small and no hashing or pointer chasing occurs, it should clock better than a map lookup) and virtually zero initialization time - depending on the TKey and T types used, it is possible to declare an associative_array as a constexpr fully evaluated at compile-time.
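Putting the pieces together, here is a compilable sketch of the whole container (non-const accessors only; const overloads would mirror them, and the iterator type is just a raw pointer into the member array):

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <utility>

template <typename T> struct key_equal
{
    bool operator()(const T& lhs, const T& rhs) const { return lhs == rhs; }
};

// Minimal associative_array: aggregate storage like std::array,
// lookup semantics like a map.
template <typename TKey, typename T, std::size_t N,
          typename KeyEqual = key_equal<TKey>>
struct associative_array
{
    std::pair<TKey, T> m_array[N];

    std::pair<TKey, T>* begin() { return m_array; }
    std::pair<TKey, T>* end() { return m_array + N; }

    std::pair<TKey, T>* find(const TKey& key)
    {
        return std::find_if(begin(), end(),
            [&](const std::pair<TKey, T>& elem) {
                return KeyEqual{}(key, elem.first);
            });
    }

    T& at(const TKey& key)
    {
        auto it = find(key);
        if (it == end())
            throw std::out_of_range("key not found");
        return it->second;
    }

    T& operator[](const TKey& key) { return find(key)->second; }
};
```

Because the only member is a plain array of pairs, the container stays an aggregate and can be brace-initialized exactly like std::array.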

[1] Boost flat_map documentation is here.

[2] For more details on trivial types, see the is_trivial type trait.

For efficiency reasons, C++ has a myriad of ways to pass data around. A function can take arguments in several forms:

```
void foo(T);
void foo(T*);
void foo(T&);
void foo(T**);
void foo(T&&);
```

Not to mention adding const and smart pointers into the mix.

And speaking of pointers, we have raw pointers, unique_ptr, shared_ptr, CComPtr/ComPtr (for Windows COM objects), and, if you are working in older codebases, auto_ptr, and maybe even some homebrewed refcounted pointers.

All of this might seem a bit daunting and make C++ seem more complicated than it really is. One way to think about it is from the ownership perspective: objects are resources and the key question is “who owns this resource?”. This should dictate both how a particular resource is allocated (and released) and the shape function arguments should take. Lifetime is also an important consideration - a resource shouldn’t be released while other components expect it to be available, but it should be released as soon as it is no longer needed.

In this post I will try to cover the various ways in which resources can be allocated, owned, and passed around.

The simplest, clearest thing to do is allocate objects on the stack. A stack object doesn’t involve any pointers:

```
void foo()
{
    Bar bar;
}
```

The variable bar of type Bar is created on the stack. Once the stack frame is popped (once the function is done executing, either through a normal return or due to an exception) the object goes away. This is the easiest, safest thing to do. The only reasons we wouldn’t always do this are time and space requirements: lifetime-wise, we might want to somehow use bar after foo() returns - for example we might want to pass it around to some other object that wants to use at a later time; in terms of space, stack memory is more limited than the heap, so large objects are better kept on the heap to avoid overflow.

One way to get around the lifetime requirement is to pass the object by value:

```
void do_stuff(Bar bar); // hands bar off to some other object

void foo()
{
    Bar bar;
    do_stuff(bar);
}
```

Let’s assume for this example that do_stuff takes the argument and sticks it into some global object which will use it at some future time.

The above code will simply create a copy of the object, so what do_stuff gets won’t be the original resource (which gets freed once the function returns) but rather a copy of it. Copying an object costs both run time and space, but if neither is a big concern, this is a great, safe way of ensuring resources don’t get released before we’re done with them.
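To make the copy semantics concrete (Bar here is a stand-in type holding a vector):

```cpp
#include <vector>

struct Bar { std::vector<int> data; }; // placeholder type

// Passing by value: do_stuff receives a copy, so any changes it
// makes are invisible to the caller's original object.
void do_stuff(Bar bar)
{
    bar.data.push_back(99); // modifies only the local copy
}
```

The caller's object is untouched after the call, at the cost of copying the whole vector.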

C++11 introduces a cheaper way of achieving this, through move semantics:

```
void do_stuff(Bar&& bar); // hands bar off to some other object

void foo()
{
    Bar bar;
    do_stuff(std::move(bar));
}
```

With move semantics, the resource is actually *moved* into the do_stuff
function. Once this happens, the original object is left in a valid but
unspecified state and shouldn’t be used anymore. This approach is usually
employed when we have a sink argument - we *sink* bar to its final resting
place somewhere and foo() no longer cares about it after passing it down to
do_stuff.

One thing to keep in mind is that move is not magic, so Bar needs to declare a move constructor in order for this to do what we expect it to do. If Bar doesn’t declare a move constructor, the above becomes a simple copy [1].
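A sketch of a sink in action; Bar is a stand-in type whose move operations are compiler-generated because its only member, std::vector, is movable:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Bar { std::vector<int> data; }; // placeholder type

// Sink argument: the resource is moved to its final resting place,
// transferring the vector's heap buffer instead of copying elements.
std::size_t do_stuff(Bar&& bar)
{
    Bar resting_place = std::move(bar);
    return resting_place.data.size();
}
```

Only the vector's internal pointers change hands here; no element is copied.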

On the flip side, when we care about size - we don’t want to create a copy of the object - but we aren’t worried about lifetime, meaning the object we pass to do_stuff won’t have to outlive the function call, we can pass by reference:

```
void do_stuff(const Bar& bar); // bar is only used within do_stuff

void foo()
{
    Bar bar;
    do_stuff(bar);
}
```

Note the const above - this means do_stuff will use bar but won’t modify it. By default, arguments should be marked as const unless the function does indeed need to alter the object. Regardless of constness, in this case we pass a reference to bar as an argument, which is very cheap (a reference has the same size as a pointer). The only caveat is that do_stuff should not pass the reference to some other object that outlives the function call (eg. a global object which tries to use it later), because as soon as foo returns, the reference becomes invalid.

A pointer argument would look like this:

```
void do_stuff(const Bar* bar);

void foo()
{
    Bar bar;
    do_stuff(&bar);
}
```

A good rule of thumb is to not do this. The difference between passing by reference and by pointer in this case is that a pointer can be null, while a reference can’t. So passing by pointer here automatically brings the need to perform null checks to ensure bad things don’t happen. You would need to make a very good argument to convince me during code review that using a pointer instead of a reference is appropriate. Unless working against a legacy API which can’t be changed, I highly discourage use of raw pointers.

In summary, when designing an API:

- Take argument by value if copying it is not a concern
- Take argument by const& if it’s not a sink argument, meaning we don’t need to refer to it past the function call
- Take argument by reference (&) if the above applies but the API needs to modify it
- Take argument by && if it’s a sink argument, the type has a move constructor, and copying it is expensive
- Don’t pass raw pointers around

In all of the examples above, bar was an object created on the stack. This works great in some cases, but some objects are simply too big to fit on the stack, or it doesn’t make sense for them to do so (if, for example, we want to vary their size at runtime). In this case, we allocate the object on the heap and keep a pointer to it.

Once we start working with heap objects, ownership becomes even more important:
unlike stack objects, which get automatically destroyed when their stack frame
gets popped, heap objects need to be explicitly deleted. This responsibility
should be with the *owner* of the object.

A unique pointer (std::unique_ptr) is a wrapper around a raw pointer which will automatically delete the heap object when it goes out of scope itself:

```
void foo()
{
    auto ptrBar = std::make_unique<Bar>();
} // ptrBar goes out of scope => heap object gets deleted
```

The above call to make_unique allocates an instance of Bar on the heap and
wraps the pointer to it into the unique pointer ptrBar. Now ptrBar
*owns* the object and as soon as ptrBar goes out of scope, the heap object is
also deleted.

Unique pointers cannot be copied, so we can never accidentally have more than one single unique_ptr pointing to the same heap object:

```
auto ptrBar = std::make_unique<Bar>();
...
std::unique_ptr<Bar> ptrBar2 = ptrBar; // Won't compile
```
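What we *can* do is transfer ownership with std::move, which is guaranteed to leave the source unique_ptr null. A quick sketch (Bar is a placeholder type):

```cpp
#include <memory>
#include <utility>

struct Bar { int value = 7; }; // placeholder type

// Transfers ownership out of ptrBar; afterwards ptrBar is null.
std::unique_ptr<Bar> take_ownership(std::unique_ptr<Bar>& ptrBar)
{
    return std::move(ptrBar);
}
```

After the transfer there is still exactly one owner, so the single-ownership invariant holds.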

Of course, if we *really* want to, we can get the raw pointer out of ptrBar
using get() and we can initialize a unique_ptr from a raw pointer -

```
// Please don't do this
std::unique_ptr<Bar> ptrBar2(ptrBar.get());
```

but this is very bad - now both pointers think they have sole ownership of the resource, and as soon as one goes out of scope, using the other one leads to undefined behavior. In general, the same way there are very few good reasons to use raw pointers, there are very few good reasons to call get() on a smart pointer.

Avoid using raw pointers. Raw pointers don’t express ownership, so they don’t offer the same guarantees that a) the resource pointed to gets properly cleaned up and b) the resource pointed to is still valid at a given time. This leads to dereferencing invalid memory and double-deletes (trying to free the same heap object multiple times), which means undefined behavior. Also, don’t mix smart and raw pointers - the smart pointers will keep doing their job happily, with the potential of making the raw pointers invalid.

On Windows, COM uses a different reference counting mechanism: the base IUnknown interface declares AddRef and Release methods, which implementations are expected to use to keep track of the reference count. CComPtr (in ATL) and ComPtr (in WRL) are the COM smart pointers. They call AddRef and Release on the owned object, and the owned object is supposed to delete itself once its reference count drops to 0. Note that COM uses a slightly different mechanism than the standard library shared pointers - instead of the smart pointer keeping track of the reference count in the control block and deleting the object once the last reference goes away, COM objects are expected to keep track of their reference count themselves through the AddRef and Release methods and self-delete when the last reference goes away (through Release call). The COM smart pointers only need to call Release when they go out of scope.

It’s not a good idea to have both standard library and COM pointers point to the same object, as each might decide to delete the object at different times - shared_ptr looks at the shared_ptr refcount while COM objects look at their internal reference count. So a shared_ptr might decide to delete an object while a ComPtr still expects it to be valid or vice-versa. In general, when working with COM objects, use COM smart pointers.

auto_ptr is a deprecated smart pointer. Unless working with an old compiler and standard library, use unique_ptr or shared_ptr instead.

Old code bases might have custom smart pointer implementations, for the simple fact that automatic memory management is always a good idea, and there is C++ code that predates the introduction of smart pointers into the standard library. When interoperating with legacy code, use whatever works, but when writing new code, do prefer standard library smart pointers to homebrewed ones.

In summary, when creating objects:

- Create them on the stack if feasible (note that standard library types like std::vector and std::string internally keep their data on the heap, but they fit perfectly well on the stack, so you don’t need to create an std::vector on the heap just because you are planning to store a lot of elements in it - the vector manages a heap array internally already).
- Use a unique_ptr when creating them on the heap, to make ownership obvious.
- Use a shared_ptr only when unique_ptr isn’t sufficient (review your design first, might be a design issue).
- Use COM smart pointers like CComPtr when dealing with COM.
- Don’t use auto_ptr or other old constructs unless working with legacy code/compiler.
- Don’t use raw pointers.
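
A small sketch putting the first few guidelines together (Document and BuildGreeting are made-up names for illustration):

```
#include <memory>
#include <string>
#include <vector>

struct Document
{
    std::string text;
};

std::string BuildGreeting()
{
    // Fits on the stack even though std::vector keeps its elements on the heap.
    std::vector<std::string> words{"Hello,", "world!"};

    // Sole owner of a heap object - ownership is obvious, cleanup automatic.
    auto doc = std::make_unique<Document>();
    doc->text = words[0] + " " + words[1];

    return doc->text;
} // doc goes out of scope here and the Document is freed - no delete needed
```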

We covered passing arguments and smart pointers. Now combining the two, how do we pass heap objects as arguments? Turns out Herb Sutter has a great post on this exact topic on his blog. I can’t hope to explain better than him, so go read his post. I will try to summarize:

Rather than forcing callers to use unique_ptr or shared_ptr by specifying the smart pointer type (which makes assumptions about ownership), just ask for a reference to the pointed-to type:

```
void do_stuff(const Bar& bar);

void foo()
{
    auto ptrBar = std::make_unique<Bar>();
    do_stuff(*ptrBar);
}
```

Herb also mentions raw pointer to the underlying type if the argument can be null, but as I mentioned above, I’d rather stick to references and discourage use of raw pointers as a general rule of thumb.

Passing a unique_ptr by value implies a sink argument - since a unique_ptr cannot be copied, it has to be std::move’d in. Interestingly, Scott Meyers has a post on his blog where he disagrees with this and argues that arguments of move-only types should be specified as &&:

```
void do_stuff(unique_ptr<Bar>&& ptrBar); // sink

void foo()
{
    auto ptrBar = std::make_unique<Bar>();
    do_stuff(std::move(ptrBar));
}
```

Passing a shared_ptr by value implies the function wants to partake in the ownership - in other words, it will keep a reference to the object somewhere after the function returns, but unlike the above unique_ptr example, it won’t have exclusive ownership of the resource:

```
void do_stuff(shared_ptr<Bar> ptrBar);

void foo()
{
    auto ptrBar = std::make_shared<Bar>();
    do_stuff(ptrBar); // copy-constructs another shared_ptr which shares ownership of the heap object
}
```

Only expect a smart pointer by non-const reference if the function is going to modify the smart pointer itself (e.g. by making it point to a different object). In my experience, this is a rare occurrence.

```
// Implies this function modifies the pointer itself.
void do_stuff(shared_ptr<Bar>& ptrBar);
```

There is no good reason to expect a const& to a unique_ptr - just reference the underlying type:

```
// void do_stuff(const unique_ptr<Bar>& ptrBar);
// No reason to use the above as opposed to
void do_stuff(const Bar& bar);
```

Expect const& to shared_ptr only if the function *might* create a copy
of the smart pointer. If the function would never create a copy of the pointer,
simply use & to underlying type. If the function would always copy the
pointer, expect shared_ptr by value.

```
// Might or might not share ownership
void do_stuff(const shared_ptr<Bar>& ptrBar);
// Will never share ownership
void do_stuff(const Bar& bar);
// Will always share ownership
void do_stuff(shared_ptr<Bar> ptrBar);
```

A summary of the summary:

- Take argument by & to underlying type if you only care about the heap object, not about the pointer.
- Take unique_ptr by && to transfer ownership.
- Take shared_ptr argument by value to partake in ownership.
- Take smart pointer by (non-const) reference only if you are going to modify the smart pointer itself.
- No need for const& to unique_ptr (just take & to underlying type)
- Take const& to shared_ptr only if unknown whether function wants ownership (take by & to underlying type if function never wants ownership, shared_ptr by value if function always wants ownership).

[1] Move constructors can be implicitly declared by the compiler if certain conditions are met, see here for details.

In Part 1 I talked about writing less code and writing simple code. In this post I will cover writing stateless code and writing readable code.

A pure function is a function where the return value is only determined by its input values, without observable side effects. This is how functions in math work: Math.cos(x) will, for the same value of x, always return the same result.

Source: SitePoint

Things break when a program gets into a “bad state”. There are a couple of ways
to make this less likely to happen: making data immutable and writing functions
that don’t have side-effects (*pure* functions).

If something shouldn’t change, mark it as immutable and let the compiler enforce that. A good rule of thumb is to mark things as const (const&, const* etc.) and/or readonly by default, and make them mutable only when truly needed [1].

A simple example:

```
struct StringUtil
{
    static std::string Concat1(std::string& str1, std::string& str2)
    {
        return str1 + str2;
    }

    static std::string Concat2(std::string& str1, std::string& str2)
    {
        return str1.append(str2);
    }
};
```

Both StringUtil::Concat1 and StringUtil::Concat2 return the same thing for the same input, the difference being that Concat2, as opposed to Concat1, modifies its first argument. In a bigger function, such a change might be introduced accidentally and have unexpected consequences down the line.

A simple way to address this is by explicitly marking the arguments as const:

```
struct StringUtil
{
    static std::string Concat1(const std::string& str1, const std::string& str2)
    {
        return str1 + str2;
    }

    static std::string Concat2(const std::string& str1, const std::string& str2)
    {
        return str1.append(str2); // Won't compile - can't call append on str1
    }
};
```

In this case, Concat2 won’t compile, so we can rely on the compiler to eliminate this type of unintended behavior.

Another example: a simple UpCase function which calls toupper on each character of the given string, upcasing it in place:

```
void UpCase(char *string)
{
    while (*string)
    {
        *string = toupper(*string);
        string++;
    }
}
```

Calling it with

```
char* myString = "Foo";
...
UpCase(myString);
```

will lead to a crash at runtime - the function will try to modify the characters of myString in place, but myString points to a string literal, which gets compiled into the read-only data segment of the binary and cannot be modified.

To catch this type of error at compile time, we again only need to mark the immutable data as such:

```
const char* myString = "Foo";
...
UpCase(myString); // Won't compile - can't call UpCase on myString
```

In contrast with the previous example, the argument to UpCase is mutable by design (the API modifies the string in place), but marking myString as const tells the compiler this is non-mutable data, so it can’t be used with this API.

Another way to reduce state is to use pure functions. Unfortunately there isn’t a lot of syntax-level support for this in C++ and C# (C++ supports const member functions, which guarantee at compile time that calling the member function on an instance of the type won’t change the attributes of that instance) [2].
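
For illustration, a minimal sketch of that C++ guarantee (Counter is a made-up type):

```
// Value is marked const, so the compiler rejects any attempt to modify
// the instance from inside it.
class Counter
{
public:
    void Increment() { ++m_count; }

    // const member function: calling it cannot change m_count.
    // Writing ++m_count in here would be a compile error.
    int Value() const { return m_count; }

private:
    int m_count = 0;
};
```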

This goes back to the recommendation from Part 1 of using generic algorithms and predicates rather than implementing raw loops. In many cases, traversal state is encapsulated in the library algorithm or in an iterator, and predicates ideally don’t have side-effects.

```
var squares = numbers.
    Where(number => number % 2 != 0).
    Select(number => number * number);
```

The code above (also from Part 1) doesn’t hold any state: traversal is handled by the Linq methods, and the predicates are pure.

In general, try to encapsulate state in parts of the code built to manage state, and keep the rest stateless. Note that immutable data and pure functions are also an advantage in concurrent applications, since they can’t generate race conditions.

Key takeaways:

- Prefer pure functions to stateful functions and, if state is needed, keep it contained
- By default mark everything as const (or readonly), and only remove the constraint when mutability is explicitly needed

In computer science, the expressive power (also called expressiveness or expressivity) of a language is the breadth of ideas that can be represented and communicated in that language […]

- regardless of ease (theoretical expressivity)
- concisely and readily (practical expressivity)

Source: Wikipedia

Code is read many more times than it is written/modified, so it should be optimized for readability. What I mean by this is making the intent of the code clear at a glance - this includes giving good, descriptive names to variables, functions, and types, adding useful comments where appropriate (a comment should describe what the code does if it is non-obvious; a comment like foo(); // calls foo() is not a useful comment), and in general structuring the code for easy reading.

For a counterexample, think back on a piece of code you read that elicited a WTF. That’s the kind of code you don’t want to write.

I won’t insist much here, since there are countless books and industry best practices for improving code readability.

Another way to make the code more readable is to have a good knowledge of the language you are using. The strength of a language lies in its particularities, so use them whenever appropriate. This means writing idiomatic code, which implies knowledge of the language idioms. Don’t write C++ code like C code, write it like C++ code. Don’t write C# code as C++, write it as C# etc.

Also, keep up to date on the language. Language syntax evolves to address needs, so in general modern syntax introduces simpler, better ways to implement things than old syntax. Take object allocation and initialization in C++ as an example:

```
Foo* foo = (Foo*)malloc(sizeof(Foo));
init(foo);
...
deinit(foo);
free(foo);
```

This is the C way of allocating and initializing a structure on the heap, then deinitializing and freeing it. Allocation and initialization are separate steps, with opportunity to leak both memory (by omitting the free call) and managed resources (by omitting the deinit call). Not to mention the opportunity to end up with an uninitialized struct (by omitting the init call), or accidental double-initialization, double-deinitialization, double-free etc.

C++ introduced classes, and the following syntax:

```
Foo* foo = new Foo();
...
delete foo;
```

new both allocates memory and calls the constructor, while delete calls the destructor then releases the memory. Many of the problems in the C example go away, but there is still the problem of leaking the resource by omitting the delete call, and the issue of calling delete twice on the same memory address.

To address these issues, smart pointers were introduced in the language:

```
std::shared_ptr<Foo> foo(new Foo());
```

Smart pointers encapsulate reference counting (how many shared_ptr objects point to the same memory address), and automatically release the resource when the last reference goes away. This gets rid of most problems, but there is an even better way of allocating heap objects:

```
auto foo = std::make_shared<Foo>();
```

make_shared has the advantage of improved performance, by allocating memory in a single operation for both the object and the shared pointer’s own control block [3]. It also prevents leaks due to interleaving [4]. So as the C++ language evolved, new constructs appeared to address potential problems. Keeping up to date with these updates, and incorporating them into your code will reduce the opportunity for bugs, make the code more concise, and thus more readable.

I encourage you to not stop at writing *working* code, rather strive to write
*beautiful* code. I have the following quote from Apprenticeship Patterns
on the wall behind my monitors, so I can see it while I work:

There’s always a better/faster/smarter way to do what you’re currently doing

So don’t stop as soon as something works, ask yourself *is this the best way to
implement this?*

Key takeaways:

- Come up with good names
- Write meaningful comments
- Keep up to date with your language
- Don’t just write working code, write beautiful code.

As I was working on putting together the talk that inspired this post, I realized there are a few more rules of thumb which I could cover. The current working draft is:

- Write safe code
- Write leakproof code
- Write responsive code
- Write testable code

Sometime in the future I hope to continue the series with the above; in the meantime, I’ll leave you with this one-sentence summary:

Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live

Source: Coding Horror

[1] At the time of this writing, there is an active proposal to extend the C# language with an immutable keyword.

[2] C# has a PureAttribute in the System.Diagnostics.Contracts namespace (purity not compiler-enforced), and there is an active proposal to add a keyword for it too.

[3] This is a non-binding requirement in the standard, meaning a standard library implementation doesn’t have to do this, but most implementations will. You can read more about it here.

[4] Interleaving occurs since argument evaluation order is not guaranteed. For example, in bar(std::shared_ptr<Foo>(new Foo()), baz()), there is no guarantee that the call order will be new Foo(), then the shared pointer’s constructor, then baz(). Calls might get interleaved and executed as new Foo(), then baz(), then the shared pointer constructor, in which case an exception thrown by baz() would leak the newly allocated Foo object, since the shared pointer didn’t get ownership of it yet.

These posts are based on a Clean Code talk I did for my team a few months ago, which, in turn, was inspired by the advice I gave to some of our summer interns as four rules of thumb for writing cleaner code:

- Write less code
- Write simple code
- Write stateless code
- Write readable code

I will cover the first two points in this post and the remaining two in Part 2. I’m talking about C++ and C# throughout, but most of this should be applicable to any object-oriented or multi-paradigm language.

The number of defects found in open source projects was 0.45 defects/1,000 lines of code, while the industry’s average is around 1 defect per 1,000 lines of code for companies not using automated testing such as static analysis.

Source: InfoQ

A great way to have fewer bugs is to have fewer lines of code! What I mean by this is that churning out many lines of code is by no means a measure of productivity, yet, unfortunately, many developers still feel great when, at the end of the day, they wrote *insert high number* LoC.

Two points to keep in mind: first, don’t reinvent the wheel - don’t write code if there is an existing library, internal to your company or open-source, that already does what needs to be done. Case in point, we refactored some code for a project (C#), extracted some interfaces, componentized things, and wrote a bunch of unit tests. All of this was great, except we ended up with a bunch of handcrafted stub classes: for

```
interface IMyComponent
{
    void Foo();
    int Bar();
    void Baz();
}
```

we had

```
class MyComponentStub : IMyComponent
{
    public void Foo()
    {
    }

    public int Bar()
    {
        return 0;
    }

    public void Baz()
    {
    }
}

...

var myComponentStub = new MyComponentStub();
```

and so on. Implementing these stubs was tedious, needless work - we integrated Moq, a mocking library, and the above code turned into:

```
using Moq;
...
var myComponentStub = Mock.Of<IMyComponent>();
```

Moq uses reflection to stub out interfaces at run-time, so simply adopting the library helped us get rid of a lot of code.

The second way to write less code is to know the standard library of your language of choice. Many times, a block of code can be replaced with a simple library call. For C++, pay particular attention to the STL <algorithm> header and for C#, to System.Linq. Both contain many useful algorithms which can replace a lot of code.
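
As a small, made-up C++ illustration, a typical counting loop collapses into a single <algorithm> call:

```
#include <algorithm>
#include <vector>

// Count the odd numbers using std::count_if instead of a handwritten loop.
int CountOdds(const std::vector<int>& numbers)
{
    return static_cast<int>(std::count_if(numbers.begin(), numbers.end(),
        [](int n) { return n % 2 != 0; }));
}
```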

I also recommend watching Sean Parent’s C++ Seasoning talk, one of the best tech talks I’ve seen. The example he gives in the talk (from the Chrome codebase) shows how a couple of lines of STL code can be used instead of a whole convoluted function:

```
void PanelBar::RepositionExpandedPanels(Panel* fixed_panel) {
  CHECK(fixed_panel);
  // First, find the index of the fixed panel.
  int fixed_index = GetPanelIndex(expanded_panels_, *fixed_panel);
  CHECK_LT(fixed_index, expanded_panels_.size());
  // Next, check if the panel has moved to the other side of another panel.
  const int center_x = fixed_panel->cur_panel_center();
  for (size_t i = 0; i < expanded_panels_.size(); ++i) {
    Panel* panel = expanded_panels_[i].get();
    if (center_x <= panel->cur_panel_center() ||
        i == expanded_panels_.size() - 1) {
      if (panel != fixed_panel) {
        // If it has, then we reorder the panels.
        ref_ptr<Panel> ref = expanded_panels_[fixed_index];
        expanded_panels_.erase(expanded_panels_.begin() + fixed_index);
        if (i < expanded_panels_.size()) {
          expanded_panels_.insert(expanded_panels_.begin() + i, ref);
        } else {
          expanded_panels_.push_back(ref);
        }
      }
      break;
    }
  }
  // Find the total width of the panels to the left of the fixed panel.
  int total_width = 0;
  fixed_index = -1;
  for (int i = 0; i < static_cast<int>(expanded_panels_.size()); ++i) {
    Panel* panel = expanded_panels_[i].get();
    if (panel == fixed_panel) {
      fixed_index = i;
      break;
    }
    total_width += panel->panel_width();
  }
  CHECK_NE(fixed_index, -1);
  int new_fixed_index = fixed_index;
  // Move panels over to the right of the fixed panel until all of the ones
  // on the left will fit.
  int avail_width = max(fixed_panel->cur_panel_left() - kBarPadding, 0);
  while (total_width > avail_width) {
    new_fixed_index--;
    CHECK_GE(new_fixed_index, 0);
    total_width -= expanded_panels_[new_fixed_index]->panel_width();
  }
  // Reorder the fixed panel if its index changed.
  if (new_fixed_index != fixed_index) {
    Panels::iterator it = expanded_panels_.begin() + fixed_index;
    ref_ptr<Panel> ref = *it;
    expanded_panels_.erase(it);
    expanded_panels_.insert(expanded_panels_.begin() + new_fixed_index, ref);
    fixed_index = new_fixed_index;
  }
  // Now find the width of the panels to the right, and move them to the
  // left as needed.
  total_width = 0;
  for (Panels::iterator it = expanded_panels_.begin() + fixed_index + 1;
       it != expanded_panels_.end(); ++it) {
    total_width += (*it)->panel_width();
  }
  avail_width = max(wm_->width() - (fixed_panel->cur_right() + kBarPadding), 0);
  while (total_width > avail_width) {
    new_fixed_index++;
    CHECK_LT(new_fixed_index, expanded_panels_.size());
    total_width -= expanded_panels_[new_fixed_index]->panel_width();
  }
  // Do the reordering again.
  if (new_fixed_index != fixed_index) {
    Panels::iterator it = expanded_panels_.begin() + fixed_index;
    ref_ptr<Panel> ref = *it;
    expanded_panels_.erase(it);
    expanded_panels_.insert(expanded_panels_.begin() + new_fixed_index, ref);
    fixed_index = new_fixed_index;
  }
  // Finally, push panels to the left and the right so they don't overlap.
  int boundary = expanded_panels_[fixed_index]->cur_panel_left() - kBarPadding;
  for (Panels::reverse_iterator it =
           // Start at the panel to the left of 'new_fixed_index'.
           expanded_panels_.rbegin() + (expanded_panels_.size() - new_fixed_index);
       it != expanded_panels_.rend(); ++it) {
    Panel* panel = it->get();
    if (panel->cur_right() > boundary) {
      panel->Move(boundary, kAnimMs);
    } else if (panel->cur_panel_left() < 0) {
      panel->Move(min(boundary, panel->panel_width() + kBarPadding), kAnimMs);
    }
    boundary = panel->cur_panel_left() - kBarPadding;
  }
  boundary = expanded_panels_[fixed_index]->cur_right() + kBarPadding;
  for (Panels::iterator it = expanded_panels_.begin() + new_fixed_index + 1;
       it != expanded_panels_.end(); ++it) {
    Panel* panel = it->get();
    if (panel->cur_panel_left() < boundary) {
      panel->Move(boundary + panel->panel_width(), kAnimMs);
    } else if (panel->cur_right() > wm_->width()) {
      panel->Move(max(boundary + panel->panel_width(),
                      wm_->width() - kBarPadding),
                  kAnimMs);
    }
    boundary = panel->cur_right() + kBarPadding;
  }
}
```

becomes:

```
void PanelBar::RepositionExpandedPanels(Panel* fixed_panel) {
  CHECK(fixed_panel);
  // First, find the index of the fixed panel.
  int fixed_index = GetPanelIndex(expanded_panels_, *fixed_panel);
  CHECK_LT(fixed_index, expanded_panels_.size());
  // Next, check if the panel has moved to the left side of another panel.
  auto f = begin(expanded_panels_) + fixed_index;
  auto p = lower_bound(begin(expanded_panels_), f, center_x,
      [](const ref_ptr<Panel>& e, int x){ return e->cur_panel_center() < x; });
  // If it has, then we reorder the panels.
  rotate(p, f, f + 1);
}
```

Code snippets borrowed from Sean Parent’s slides; I highly recommend watching the whole talk.

The key takeaway here is that there could be a standard library implementation or
an external module that can greatly simplify your work and it’s a good practice
to always ask yourself *“do I really need to write this?”*

First, a few notes on cyclomatic complexity from Wikipedia:

Cyclomatic complexity is a software metric (measurement), used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program’s source code.

- The complexity M is then defined as M = E − N + 2P, where E = the number of edges of the graph, N = the number of nodes of the graph, and P = the number of connected components.

[Figure: a control flow graph of a simple program. The program begins executing at the red node, then enters a loop (the group of three nodes immediately below the red node). On exiting the loop, there is a conditional statement (the group below the loop), and finally the program exits at the blue node. The graph has 9 edges, 8 nodes, and 1 connected component, so the cyclomatic complexity of the program is 9 − 8 + 2 × 1 = 3.]

Source: Wikipedia

The cyclomatic complexity of any piece of code should be minimized. This can be achieved by avoiding branching - namely, whenever possible, avoiding conditional statements and loops. Linear code is easier to read and maintain, and provides fewer opportunities for bugs.

One way to avoid conditional statements is to, whenever feasible, throw exceptions instead of propagating errors through return values.

Here is an example of error code propagation through return values using the Windows API’s HRESULT:

```
HRESULT foo(); // Does some work and returns an HRESULT
HRESULT bar(); // Does some work and returns an HRESULT

HRESULT baz()
{
    HRESULT hr = S_OK;

    hr = foo();
    if (FAILED(hr))
        return hr;

    hr = bar();
    if (FAILED(hr))
        return hr;

    ... // Some more work here which might fail

    return hr;
}

if (SUCCEEDED(baz()))
{
    // :)
}
else
{
    // :(
}
```

This can be replaced with the more concise and much easier to read:

```
void foo(); // Does some work, might throw
void bar(); // Does some work, might throw

void baz()
{
    foo();
    bar();
    ... // Some more work here which might fail (and throw)
}

try
{
    baz();
    // :)
}
catch (...)
{
    // :(
}
```

Error code return values come from the days before exceptions existed, and they make code harder to read. That being said, for C++ specifically, you should be careful about throwing exceptions across DLL boundaries. In practice though, a lot of code shaped like the above example appears within a single executable for no good reason. If crossing a DLL boundary is a concern, I would actually recommend using exceptions internally and switching to return codes at the public API boundary.
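
A sketch of that boundary pattern (ErrorCode, DoWorkInternal, and DoWork are all hypothetical names; in real Windows code the role of ErrorCode would typically be played by HRESULT):

```
#include <stdexcept>

enum class ErrorCode { Ok, Failed };

// Internal code is free to throw.
void DoWorkInternal(bool shouldFail)
{
    if (shouldFail)
        throw std::runtime_error("something went wrong");
}

// Public entry point: never lets an exception escape the API boundary,
// translating it into an error code instead.
ErrorCode DoWork(bool shouldFail)
{
    try
    {
        DoWorkInternal(shouldFail);
        return ErrorCode::Ok;
    }
    catch (...)
    {
        return ErrorCode::Failed;
    }
}
```

Everything behind DoWork stays linear and exception-based; only the thin boundary function deals in return codes.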

Another way to avoid conditional statements is to use the Null Object pattern instead of checking for null. For example, take an IActivity interface on which we can log success or failure, and an ActivityScope which can retrieve the current activity from a context:

```
interface IActivity
{
    void LogSuccess();
    void LogFailure();
}

class ActivityScope
{
    ...

    public IActivity GetCurrentActivity()
    {
        if (!_context.HasCurrentActivity())
        {
            return null;
        }

        return _context.GetActivity();
    }
}
```

With this implementation, all clients of the API have to check whether GetCurrentActivity() returned an object or null. All callers look like this:

```
ActivityScope activityScope = new ActivityScope();
activityScope.CreateActivity();

... // Do a bunch of stuff

var activity = activityScope.GetCurrentActivity();
if (activity != null)
{
    activity.LogSuccess();
}
```

While there is a single ActivityScope implementation, there are hundreds of calls to GetCurrentActivity, all coming with a boilerplate null check. The Null Object alternative for this is to provide a NullActivity, for which LogSuccess and LogFailure don’t do anything. ActivityScope can return NullActivity instead of null if there is no Activity in the context:

```
class NullActivity : IActivity
{
    public void LogSuccess() { }
    public void LogFailure() { }
}

class ActivityScope
{
    ...

    private static NullActivity _nullActivity = new NullActivity();

    public IActivity GetCurrentActivity()
    {
        if (!_context.HasCurrentActivity())
        {
            return _nullActivity;
        }

        return _context.GetActivity();
    }
}
```

Now callers don’t need to worry about getting back a null, and can use the API like this:

```
activityScope.GetCurrentActivity().LogSuccess();
```

Yet another way to reduce branching applies when branching is used to map between two types:

```
if (a == IdType.Foo)
{
    b = "Foo string";
}
else if (a == IdType.Bar)
{
    b = "Bar string";
}
else if ...
```

A pattern like this (which can also take the form of a big switch/case statement) can usually be replaced with indexing into an array or looking up the corresponding value in a hash map:

```
Dictionary<IdType, string> IdTypeToStringMap = new Dictionary<IdType, string>()
{
    { IdType.Foo, "Foo" },
    { IdType.Bar, "Bar" },
    ...
};

...

b = IdTypeToStringMap[a];
```

This is, again, easier to maintain, since it is declarative - the mapping is given as data (IdTypeToStringMap), not as code (long series of if/else).
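
The same declarative approach works in C++ (a sketch mirroring the C# example above - IdType and the strings are hypothetical; std::map stands in for Dictionary):

```
#include <map>
#include <string>

enum class IdType { Foo, Bar };

// The mapping is data, not a chain of if/else statements.
const std::map<IdType, std::string> IdTypeToStringMap{
    { IdType::Foo, "Foo string" },
    { IdType::Bar, "Bar string" },
};

std::string ToString(IdType a)
{
    return IdTypeToStringMap.at(a); // at() throws if the key is missing
}
```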

This goes back to the C++ Seasoning talk, namely the *No Raw Loops* guideline. Here’s
a C# example: given a list of numbers, we want to get the square of all the odd
numbers in the list.

```
var numbers = new List<int> { 6, 1, 2, 7, 3, 4, 9, 5, 8 };
// Get the squares of all odd numbers
```

One way to do this is to maintain a result list, iterate over the input, check whether each number is odd, and if so, square it and add it to the result list:

```
// Get the squares of all odd numbers
IEnumerable<int> SquareOdds(IEnumerable<int> numbers)
{
    var squares = new List<int>();
    foreach (int number in numbers)
    {
        if (number % 2 != 0)
        {
            squares.Add(number * number);
        }
    }
    return squares;
}

var squares = SquareOdds(numbers);
```

A neater way to do this is to use a generator instead of manually maintaining the list of squares:

```
// Get the squares of all odd numbers
IEnumerable<int> SquareOdds(IEnumerable<int> numbers)
{
    foreach (int number in numbers)
    {
        if (number % 2 != 0)
        {
            yield return number * number;
        }
    }
}

var squares = SquareOdds(numbers);
```

That being said, what I would actually recommend is using Linq:

```
// Get the squares of all odd numbers
var squares = numbers.
    Where(number => number % 2 != 0).
    Select(number => number * number);
```

Fewer lines of code and no branching whatsoever [*]. Where and Select are generic algorithms, and their arguments are the predicates we use. This makes the intent of the code clear at a glance - we are filtering the collection with a predicate (number => number % 2 != 0) and applying a transformation to it with another predicate (number => number * number). Also, the filtering and transformation are library functions, so we can be fairly certain they work well, and only need to worry about maintaining our predicates.

It might not look like a big deal in this simple made-up example, but as code evolves, it becomes harder and harder to follow the iteration logic, as the code gets littered with break, continue, and return statements (see the Chrome example quoted in the Write Less Code section).
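
For comparison, here is a C++ sketch of the same idea using <algorithm> - eager rather than lazy like Linq, but the raw loop and its branching are gone from our code:

```
#include <algorithm>
#include <iterator>
#include <vector>

// Get the squares of all odd numbers: traversal is handled by library
// algorithms, our logic is confined to two small lambdas.
std::vector<int> SquareOdds(const std::vector<int>& numbers)
{
    std::vector<int> squares;
    std::copy_if(numbers.begin(), numbers.end(), std::back_inserter(squares),
        [](int n) { return n % 2 != 0; });
    std::transform(squares.begin(), squares.end(), squares.begin(),
        [](int n) { return n * n; });
    return squares;
}
```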

Key takeaways:

- Try to keep functions linear (or as linear as possible)
- Default to throwing instead of propagating errors up the call stack
- Consider creating a null object when code is littered with null checks
- Separate algorithm logic from predicates to make the intent of the code clear (in other words, no raw loops).

The most interesting question I was asked was about the performance implications of using an STL algorithm or Linq.

The default answer is, of course, you have to measure for your particular case! Blanket statements cannot be made about performance, as there are many factors involved: compiler, runtime, standard library, OS, architecture, whether code is on a hot path or not, and so on and so forth.

Still, my recommendation is to use the library algorithms and, only if they become the bottleneck (which in most cases shouldn’t happen), look into replacing them with handcrafted code. Another thing to keep in mind is that standard library authors know what they’re doing, so it’s very likely that library code is already pretty well optimized. I ran a simple wall clock benchmark for 1M iterations for some of the examples I used throughout the presentation (both the handcrafted and the library versions), and in all cases the code leveraging library functions ran slightly faster.
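
For reference, a wall-clock measurement can be as simple as the following sketch (MeasureMillis is a hypothetical helper; there is no warmup, repetition, or statistics, so treat the results as rough):

```
#include <chrono>

// Time how long it takes to run 'work' the given number of iterations,
// in milliseconds of wall-clock time.
template <typename Work>
long long MeasureMillis(Work&& work, int iterations)
{
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        work();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}
```

Measure both the handcrafted and the library version on your actual workload before deciding a library call needs replacing.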

[*] The cyclomatic complexity of this is actually higher when computed by looking at basic blocks (e.g. from Visual Studio’s Analyze menu), since the compiler will automatically add a finally block to dispose of the Linq-returned IEnumerables in case of exception. That being said, I prefer compiler-generated complexity to developer-generated complexity.
