A generator (as I will use the term here) is an object that can “generate” other objects on demand. Generators work like random number generators, except that they need not produce numbers or do so randomly: you ask a generator for the next value, and it gives it to you.

The naive generator is simply a class that supports this method:

T Next();
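As an illustrative sketch only (in Python, although the article's code is C#), such a class might look like this. The class name `CountGenerator` is hypothetical:

```python
# A "naive generator" is just an object with a Next() method that
# produces the next value on demand. This hypothetical example counts.
class CountGenerator:
    """Generates 0, 1, 2, ... on demand; it never runs out."""
    def __init__(self):
        self._state = 0  # the encapsulated state

    def Next(self):
        value = self._state
        self._state += 1
        return value

gen = CountGenerator()
values = [gen.Next() for _ in range(4)]  # [0, 1, 2, 3]
```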

Generators work a bit like iterators, but they are slightly different:

- Iterators work on both finite and infinite sequences, while generators are always (supposed to be) infinite.
- Iterators are typically used in loops to process elements in sequence on the spot. Generators are generally used over some time span (similar to how random numbers are often used in simulations).
- Iterators are usually restarted on each use; generators are rarely restarted.

The idea with generators is that they encapsulate state-tracking data that would otherwise pollute your class. In a tower defense game, for example, you may want to generate enemies that get progressively more difficult to beat. One way to implement this is to maintain a difficulty variable, and update it each time you make a monster:

var monster = new Monster(difficulty);
difficulty *= 1.1f;

In this simple example it is not so bad; but things become messy when the logic is more complicated and you have to generate more types of objects.

The alternative using generators is:

var monster = monsterGenerator.Next();

Once you have the concept of a generator, you can start doing some interesting things, very similar to the way LINQ in C# allows you to do interesting things with sequences. We can build a framework to make more generators from existing ones, and indeed it resembles LINQ in many ways. Here are some of the core operations:

- **Where:** Makes a new generator that filters the elements of a source generator with a predicate.
- **Select:** Makes a new generator that applies a selector function to elements of a source generator.
- **Zip:** Makes a new generator that applies a selector function to elements of two or more generators.
- **Choose:** Makes a new generator that uses an integer generator to choose elements from a sequence.
- **Choose:** Makes a new generator that uses an integer generator to choose a generator from a sequence to generate an element from.
- **RepeatEach:** Makes a generator that generates each element of a source generator a number of times.
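To make the operations concrete, here is a hedged sketch of how Where, Select, and Zip might work, using plain Python iterators to stand in for generators. These are illustrative helpers, not the article's actual C# implementation:

```python
import itertools

def Where(source, predicate):
    """New generator that filters a source generator with a predicate."""
    return (x for x in source if predicate(x))

def Select(source, selector):
    """New generator that applies a selector to each source element."""
    return (selector(x) for x in source)

def Zip(a, b, selector):
    """New generator that combines elements of two generators pairwise."""
    return (selector(x, y) for x, y in zip(a, b))

def first(gen, n):
    """Grab the first n elements (for demonstration only)."""
    return list(itertools.islice(gen, n))

evens3 = first(Where(itertools.count(), lambda x: x % 2 == 0), 3)  # [0, 2, 4]
squares3 = first(Select(itertools.count(), lambda x: x * x), 3)    # [0, 1, 4]
sums3 = first(Zip(itertools.count(), itertools.count(10),
                  lambda x, y: x + y), 3)                          # [10, 12, 14]
```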

We also need some functions to make basic generators:

- **Constant:** Makes a generator that generates the same element each time (useful, for example, in functions that take generators instead of fixed values).
- **Repeat:** Makes a generator that generates elements from a sequence and repeats the cycle over and over.
- **Count:** Makes a generator that generates integers from 0 to n − 1 and repeats the cycle over and over.
- **UniformRandomInt:** Makes a generator that generates pseudo-random integers.
- **UniformRandomFloat:** Makes a generator that generates pseudo-random floats.
- **Iterate:** Makes a generator that uses a few initial elements, and a function repeatedly applied to the last few generated elements.
- **Skip:** Makes a new generator that skips over n elements of a source generator.
- **Pad:** Makes a new generator padded (on the left) with elements from a sequence, or with a value repeated a number of times.
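A minimal Python sketch of some of these basic constructors (Constant, Repeat, Count, Iterate) may help; the function names follow the article, but the implementations here are assumptions for illustration:

```python
import itertools

def Constant(value):
    """Generates the same element each time."""
    while True:
        yield value

def Repeat(seq):
    """Generates the elements of a sequence and repeats the cycle."""
    while True:
        yield from seq

def Count(n):
    """Generates 0 to n - 1 and repeats the cycle."""
    return Repeat(range(n))

def Iterate(*args):
    """Iterate(seed0, ..., f): yields the seeds, then repeatedly
    applies f to the last few generated elements."""
    *seeds, f = args
    window = list(seeds)
    yield from window
    while True:
        nxt = f(*window)
        window = window[1:] + [nxt]
        yield nxt

def take(gen, n):
    return list(itertools.islice(gen, n))

fib6 = take(Iterate(1, 1, lambda x, y: x + y), 6)  # [1, 1, 2, 3, 5, 8]
count5 = take(Count(2), 5)                         # [0, 1, 0, 1, 0]
```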

And finally, a few convenience methods:

- **Next(n):** Returns the next n elements as an IEnumerable.
- **MoveNext(n):** Advances the generator n times.

To give you an idea of how these functions can be used to make new generators, here are some examples (I use [] for lists here):

var digit0 = Count(2);             // 0 1 0 1 0 1 ...
var digit1 = digit0.RepeatEach(2); // 0 0 1 1 0 0 1 1 0 0 ...
var digit2 = digit1.RepeatEach(2); // 0 0 0 0 1 1 1 1 0 0 0 0 ...
var choice = Count(3);             // 0 1 2 0 1 2 ...
var binary = Choose([digit2, digit1, digit0], choice);
// 0 0 0  0 0 1  0 1 0  0 1 1  1 0 0  1 0 1  1 1 0  1 1 1 ...

var random = UniformRandomFloat();
var randomInsideCircle = random
   .Zip(random, (x, y) => new Vector(x, y))
   .Where(v => v.magnitude < 1)
   .Select(v => v * radius);

(I discuss the copying of source generators a bit later; for now: the two uses of the random generator will not generate the same elements.)

var fibonacci = Iterate(1, 1, (x, y) => x + y);

In C#, IEnumerables represent *only* a sequence. A generator, on the other hand, represents a sequence *and* a state. When we make a new generator from some source generator, we do not want a call to Next on the derived generator to affect the state of the source generator.

An implementation that keeps references to source generators in derived generators can lead to unexpected results.

var count = Count(5);
var even = count.Select(x => x * 2);

Print(even.Next());  // 0
Print(even.Next());  // 2
Print(even.Next());  // 4
Print(count.Next()); // 3 - already advanced by "even".

To get generators to behave as we expect, **we need to copy source generators** (usually in the constructor of the derived generator).
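The pitfall and the fix can be demonstrated in Python. Here `itertools.tee` plays the role of copying the source generator: each tee'd iterator keeps its own position, so the derived generator no longer advances the source. (The `Count` helper is a stand-in for the article's C# generator.)

```python
import itertools

def Count(n):
    return itertools.cycle(range(n))

# Shared state: advancing the derived generator advances the source.
count = Count(5)
even = (x * 2 for x in count)
derived = [next(even) for _ in range(3)]  # [0, 2, 4]
leaked = next(count)                      # 3 - already advanced by "even"

# Independent state: the derived generator gets its own copy.
source_copy, count2 = itertools.tee(Count(5))
even2 = (x * 2 for x in source_copy)
derived2 = [next(even2) for _ in range(3)]  # [0, 2, 4]
unaffected = next(count2)                   # 0 - source state untouched
```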

In many cases it is convenient to look at the current value without advancing the generator. This is a bit like *peeking* at a stack or queue. To make this possible, it is necessary to split Next into two atomic operations: one to retrieve the current value, and one to advance the generator.

This makes the Generator class look very much like an IEnumerator (it supports Current and MoveNext). But because generators are infinite, it is not necessary to return whether there is a next element (there always is); nor is it necessary to keep the generator in a “before first” state; it is always OK to go directly to the first state.

It is possible to construct “impossible” generators:

var impossible = Constant(0).Where(x => x > 0);

This generator will go into an infinite loop and never generate a single element. Unfortunately, it is not possible to detect cases like this automatically. One way to deal with this is to maintain a “time-out” counter in generators where this can happen, and throw an exception when the counter reaches a threshold.
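One possible sketch of the time-out idea, in Python: this version of Where gives each element a budget of source pulls, and raises once the budget is exhausted. The `max_tries` threshold is a made-up example value.

```python
import itertools

def Where(source, predicate, max_tries=10000):
    while True:
        for _ in range(max_tries):
            x = next(source)
            if predicate(x):
                yield x
                break
        else:
            # No element passed the predicate within the budget.
            raise RuntimeError("Where timed out: predicate never satisfied")

multiples = Where(itertools.count(), lambda x: x % 3 == 0)
ok = [next(multiples) for _ in range(3)]  # [0, 3, 6]

try:
    next(Where(itertools.repeat(0), lambda x: x > 0, max_tries=1000))
    timed_out = False
except RuntimeError:
    timed_out = True  # the "impossible" generator is caught
```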

It is also possible to create generators that will overflow:

var count = Iterate(0, x => x + 1); //will eventually overflow

In an example like that, the overflow is expected. However, a user of the method below may not expect the returned generator to overflow:

static IGenerator<float> Average(this IGenerator<float> source)
{
   var sourceCopy = source.Clone();
   var sum = Iterate(0, x => x + sourceCopy.Next()); // Not a good way!
   var count = Iterate(1, x => x + 1).Pad(1, 1);
   var average = sum.Zip(count, (x, y) => x / y);

   return average;
}

So it is important to mark potentially unsafe generators.

Laziness can also produce unexpected results if you are not careful. In an earlier implementation, I made the Next(n) method a lazy IEnumerable, like this:

public static IEnumerable<T> Next<T>(this IGenerator<T> source, int n)
{
   for (int i = 0; i < n; i++)
      yield return source.Next();
}

This scheme allows you to distribute computations over time, and to pad with arbitrarily large sequences. However, the problem is that calls to Next may be made in the incorrect order, giving unexpected results. For example:

var count = Iterate(0, x => x + 1);
var first50 = count.Next(50);
var second50 = count.Next(50);
var element101 = count.Next();

foreach (var element in second50)
   Print(element); // prints 1, 2, 3, 4, ..., 50!

foreach (var element in first50)
   Print(element); // prints 51, 52, ..., 100!

Print(element101); // prints 0!

foreach (var element in second50)
   Print(element); // prints 101, 102, ..., 150!

The problem is that Next is only called once the element is retrieved from the IEnumerable. And if the elements are retrieved more than once, then the elements change!

So far I have not found a good solution for this; it looks like you may need to evaluate the elements immediately.
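The eager alternative can be sketched in Python: the elements are pulled from the generator immediately and stored, so the order of calls is fixed and re-reading the result cannot change it.

```python
def next_n(source, n):
    """Advance the generator n times, eagerly, returning a list."""
    return [next(source) for _ in range(n)]

count = iter(range(1000))
first3 = next_n(count, 3)   # [0, 1, 2] - evaluated right now
second3 = next_n(count, 3)  # [3, 4, 5]
again = list(first3)        # still [0, 1, 2], no matter when we look
```

The price is that we lose the ability to spread the computation over time and to pad with very large sequences.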

You may recognize the image at the top of this post as a dragon curve. Iterations of this curve can be constructed with a sequence called the paper folding sequence or dragon curve sequence (interpreted as left and right turns by 90 degrees followed by moving forward by a fixed amount, LOGO style): 1 1 0 1 1 0 0 1 1 1 0 0 1 0 0 …

This sequence has a simple description:

Take the alternating sequence 1 0 1 0 1 0, add a blank between each two elements: 1 ( ) 0 ( ) 1 ( ) 0 ( ) 1 ( ) 0. Then fill in the paper folding sequence in the blanks; a few steps are shown below:

**1** (1) 0 ( ) 1 ( ) 0 ( ) 1 ( ) 0
1 (**1**) 0 (1) 1 ( ) 0 ( ) 1 ( ) 0
1 (1) **0** (1) 1 (0) 0 ( ) 1 ( ) 0
1 (1) 0 (**1**) 1 (0) 0 (1) 1 ( ) 0
1 (1) 0 (1) **1** (0) 0 (1) 1 (1) 0

You basically use parts of the sequence already constructed to construct the rest. The sequence can also be defined recursively:

S(n) = 1 if n mod 4 == 0
S(n) = 0 if n mod 4 == 2
S(n) = S((n − 1)/2) otherwise (n odd)
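The recursion can be implemented directly (0-indexed; for odd n we recurse on (n − 1)/2). As a check, the first 15 values match the sequence listed earlier in the post:

```python
def S(n):
    """Regular paper-folding (dragon curve) sequence, 0-indexed."""
    if n % 4 == 0:
        return 1
    if n % 4 == 2:
        return 0
    return S((n - 1) // 2)  # n is odd here

sequence = [S(n) for n in range(15)]
# [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
```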

I would have liked to be able to construct this sequence something like this:

IGenerator<int> paperFoldingSequence;
paperFoldingSequence = Count(2).Interleave(paperFoldingSequence);

Because elements are generated one by one, it should be possible. And indeed it is. Normally, a copy is made of the source sequence when constructing the derived sequence. In this case, that leads to an infinite loop (the copy calls the constructor, which makes a copy that calls the constructor…). However, it is possible to delay making the copy until an element is actually required from the sequence. This works, but there is a problem: every two iterations, a new copy of the sequence is made.

I don’t think there is a way around this; in practice, the memory overhead can be reduced by using a queue: enqueue elements as they are generated, and dequeue them as they are required (so that you do not need to create copies every two elements). The memory still grows at the same rate – but this way it grows by one float every two iterations instead of one generator every two iterations.
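The queue idea can be sketched as a self-contained Python generator: even positions come from the alternating sequence 1 0 1 0 …, and odd positions replay the sequence's own earlier output through a queue.

```python
from collections import deque

def paper_folding():
    pending = deque()  # elements generated so far, waiting to be replayed
    i = 0
    while True:
        if i % 2 == 0:
            value = 1 if (i // 2) % 2 == 0 else 0  # alternating 1 0 1 0 ...
        else:
            value = pending.popleft()              # replay earlier output
        pending.append(value)
        yield value
        i += 1

g = paper_folding()
first15 = [next(g) for _ in range(15)]
# [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
```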

The downside is that this cannot be expressed in a general way (at least, not anything sensible I could come up with), so you have to define the generator as its own class, and cannot build it from the generator functions as shown above.

The C# code for this will be part of our free Unity Extensions package, but you can check out a version of it below.

It does not require Unity, but it does depend on a random number implementation (you can just drop System.Random in place of IRandom).


A while back I developed a mild obsession with pentagons (mathematical ones, not symbolic ones!). It started when I discovered some beautiful (simple, and to me unknown) theorems about quadrangles, such as Varignon’s theorem. I had already come across Miquel’s pentagon theorem, and wondered what other gems I could find.

Here is what I found: Pentagons (2.4 MB PDF).

My search was, on the surface, a bit disappointing: pentagons as such are not widely studied. I guess this is because more general theorems (that apply to arbitrary polygons) subsume the theory, and the specifics as applied to pentagons are not so interesting.

Nevertheless, I did discover a few theorems, and the journey took me into some very interesting corners of geometry; a very rewarding experience. I started to collect these into a document, which is shared below. It’s not comprehensive or complete; there are a few gaps.

(At some stage I may return to look at this again. In particular, there are many theorems of the type “if there is n, there is n+1”, which seems to me to hint at a very general theorem which can be used to prove a bunch of specifics.)

Also, when I started, I did not realise how many of the theorems would generalise to arbitrary polygons, so the collection looks a bit silly in retrospect (kind of like listing all the properties of the number 8 that equally apply to even numbers).

Even so, what’s done is done, and can perhaps satisfy someone else’s curiosity.

Here is the list of contents:

- Notation
  - Standard labeling
  - Cycle notation (*I introduce a convention that to me makes it easier to keep track of symbols in my head.*)
  - Area
- Five points in a plane
- General Pentagons
  - Monge, Gauss, Ptolemy (*Theorems that have analogues for quadrangles. For example, one theorem relates the areas of six triangles spanned by certain vertices of the pentagon.*)
  - Cyclic Ratio Products (à la Ceva and Menelaus) (*Theorems involving products of cevians and other ratios.*)
  - Miquel (*Theorems analogous to Miquel’s theorem for triangles.*)
  - Conics (*Circumscribed and inscribed conics are uniquely determined by a pentagon.*)
  - Complete Pentagons (*Includes Miquel’s pentagram theorem.*)
  - The Centroid Theorem (*A general theorem about centroids applied to pentagons.*)
- Special Pentagons (*Including how to construct these pentagons.*)
  - Cyclic Pentagons (*Pentagons with vertices on a circle.*)
  - Tangent Pentagons (*Pentagons with all sides tangent to a circle.*)
  - Orthocentric Pentagons (*Pentagons whose altitudes intersect in a single point.*)
  - Mediocentric Pentagons (*Pentagons whose medians intersect in a single point.*)
  - Paradiagonal Pentagons (*Pentagons whose diagonals are parallel to opposite sides. Also called golden pentagons, affine regular pentagons, and equal area pentagons.*)
  - Equilateral Pentagons (*Pentagons with all sides of equal length.*)
  - Equiangular Pentagons (*Pentagons with all interior angles equal.*)
  - Brocard Pentagons (*Pentagons that have a Brocard point.*)
  - Classification By Subangles (*What the relationships between subangles imply for the pentagon.*)

A while back I needed to implement fast minimum and maximum filters for images. I devised (what I thought was) a clever approximation scheme where the execution time is not dependent on the window size of the filter. But the method had some issues, and I looked at some other algorithms. In retrospect, the method I used seems foolish. At the time, I did not realise the obvious: a 1D filter could be applied first to the rows, and then to the columns of an image, which makes the slow algorithm faster, or allows you to use one of the many published fast 1D algorithms.

I wanted to write down my gained knowledge, and started to work on a blog post. But soon it became quite long, so I decided to put it into a PDF document instead. You can download it below.

The document is somewhat weird: it is very detailed for a “simple” image algorithm (it is more than 50 pages!). It does have several tips that apply to the implementation of other image processing algorithms. It also has what I believe to be a very clear description of the Monotonic Wedge Algorithm, with code that closely reflects the explanation in the text. (I had trouble understanding the algorithm from the original journal publication, and the authors provide code that was further optimised and thus less clear to follow.)

I intended to give some performance analysis and results, but other activities have robbed me of any free time. Perhaps later.

Also, the code has been edited for readability, and hence, might contain typos that were introduced in the process. If you spot any, please let me know.

Here is the table of contents:

1 The Problem
2 Exact Algorithms
2.1 The Naive Algorithm
2.2 The Max-Queue
2.3 Implicit Queue Algorithm
2.4 The Monotonic Wedge Algorithm
3 Approximate Algorithms
3.1 The Power Mean Approximation
3.2 The Power Mean Variant Algorithm
3.3 The Contra-Harmonic Mean Approximation
4 Other Algorithm Concepts
4.1 Separation
4.2 Implementing Minimum Filters
4.3 Windows with Even Diameters
4.4 Filtering a Region of Interest
4.5 Maximum and Minimum Filters for Binary Images
A Image Containers
A.1 Image Class Interface
A.2 Image Loops
A.3 Image Iterators
A.4 Image Access Modifiers
B Fixed-width Deques
C Max-queues
D Summed Area Tables
D.1 Calculating a SAT
D.2 Finding a Sum from a SAT
D.3 Checking for Overflow
D.4 Large SATs

Grab it here.


(Original Image by Valerie Everett)

It is sometimes necessary to move an object in a physics simulation to a specific point. On the one hand, it can be difficult to derive the exact force you have to apply; on the other hand, it might not look good if you animate the object’s position directly.

A compromise that works well in many situations is to use a spring-damper system to move the object.

The trick is simple: we apply two forces—the one is proportional to the displacement; the other is proportional to the velocity. Tweaked correctly, they combine to give realistic movement to the desired point.

The *spring force* is proportional to the difference between the current position and the position where we want the object:

F_spring = −k (x − x_target)

Here, k is a positive value called the *spring constant*, and x_target is the desired position.

As you can see, the force gets smaller as our object approaches the desired position, and becomes zero when it reaches that position. Unfortunately, in the absence of friction or drag, the *velocity* is not zero at this point, so the object overshoots the desired position, and moves past it. The force becomes bigger, but in the opposite direction. The object keeps on moving, slowing down, and finally starts moving in the opposite direction towards the desired position. This goes on indefinitely.

When there is friction or drag, we might be lucky enough for the system to slow down the object sufficiently so that its velocity becomes zero when the object reaches the desired position. This can be tricky to accomplish, though, and might impact the simulation environment in undesirable ways.

It is better to add a counteracting force explicitly. We add a *damper force* that is proportional to the velocity of the object, again opposite in direction:

F_damper = −c v

Here, c is the *viscous damping coefficient*, also a positive number.

We then apply the sum of the forces to our object:

F = F_spring + F_damper = −k (x − x_target) − c v

The trick is to choose c to get the behaviour we want.

Fortunately, this is easy. The following table summarizes how the damping coefficient affects behaviour. Here, m is the mass of the object.

| Damping | Behaviour |
|---|---|
| c = 0 | The object oscillates indefinitely. |
| c² < 4mk | The object oscillates, but the oscillations die down (underdamped). |
| c² = 4mk | The object moves to the desired position without oscillating, in minimum time (critically damped). |
| c² > 4mk | The object moves to the desired position without oscillating, and takes longer as c increases (overdamped). |

If you want to see an explanation of how this works, see the Wikipedia article on damping.

By choosing c = 2√(mk) (critical damping), we are left with only one parameter to tweak (the spring constant), with which we can adjust the time it will take for the object to reach the desired spot.

A simple implementation of this idea is given by the following function. The function should be called for every simulation frame, until we are satisfied that the object reached its spot:

private void MoveTo(Rigidbody rigidbody, Vector3 newPosition, float springConstant)
{
   Vector3 desiredDisplacement = rigidbody.position - newPosition;
   Vector3 springForce = -springConstant * desiredDisplacement;

   float viscousDampingCoefficient = 2 * Mathf.Sqrt(rigidbody.mass * springConstant);
   Vector3 dampingForce = -viscousDampingCoefficient * rigidbody.velocity;

   Vector3 totalForce = springForce + dampingForce;
   rigidbody.AddForce(totalForce);
}

This will work even when there is drag or friction, except that the object will move slower (in this case we can decrease the artificial damping, although it is a bit risky). When there is an external force applied to the object, the object will come to rest at some point away from the desired position. By increasing the spring constant, the distance between this point and the desired point can be made smaller. Thus, we can also use this scheme to maintain objects at a certain height, for example, which can give a rather realistic simulation of a hovercraft or even a drifting object.
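The same idea can be checked with a small numerical sketch in Python (rather than Unity C#): a critically damped spring pulls a point mass to a target. The step size and spring constant below are arbitrary example values.

```python
import math

def move_to(x0, target, k, mass=1.0, dt=0.001, steps=5000):
    c = 2 * math.sqrt(mass * k)  # critical damping: c^2 == 4mk
    x, v = x0, 0.0
    for _ in range(steps):
        force = -k * (x - target) - c * v
        v += (force / mass) * dt  # semi-implicit Euler integration
        x += v * dt
    return x

final = move_to(x0=0.0, target=1.0, k=100.0)
# final ends up very close to 1.0
```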

**Update:** This is an example of a PD controller (a PID, or proportional–integral–derivative, controller without the integral term), for which there is a C++ implementation in my Special Numbers Library.

(Original image by GoAwayStupidAI).

Below are four C++ implementations of the region quadtree (the kind used for image compression, for example). The different implementations were made in an attempt to optimise *construction* of quadtrees. (For a tutorial on implementing region quadtrees, see Issue 26 [6.39 MB zip] of Dev.Mag).

- **NaiveQuadtree** is the straightforward implementation.
- **AreaSumTableQuadtree** uses a summed area table to perform fast calculations of the mean and variance of regions in the data grid.
- **AugmentedAreaSumTableQuadtree** is the same, except that the area sum table has an extra row and column of zeros, which prevents if-then logic that slows it down and makes it tricky to understand.
- **SimpleQuadtree** is the same as AugmentedAreaSumTableQuadtree, except that no distinction is made (at a class level) between different node types.

The interfaces of all quadtrees are the same, but I did not want to extend from a base class. (Instead, a compile time check is performed on the classes, using boost concepts).

The performance results (on my machine!) of the trees are as follows, in milliseconds (N = NaiveQuadtree, S = SimpleQuadtree, AST = AreaSumTableQuadtree, AAST = AugmentedAreaSumTableQuadtree):

| Size | N | S | AST | AAST |
|---|---|---|---|---|
| 32×32 | 3 | 4 | 4 | 3 |
| 64×64 | 14 | 14 | 14 | 13 |
| 128×128 | 55 | 52 | 55 | 52 |
| 256×256 | 229 | 233 | 218 | 214 |
| 512×512 | 950 | 1036 | 937 | 922 |
| 1024×1024 | 4064 | 4459 | 5396 | 3891 |

As it turns out, the different implementations do not differ significantly. It is constructing the nodes that takes long, not so much the calculations necessary to determine whether a node should split, and what data should be in the node. Had I profiled properly before I started, I would not have gone through this exercise…

Of these four implementations, the NaiveQuadtree is the one I recommend; I left in the other implementations for anyone interested.

The one good thing that came from this experiment is that I found out that using a zero-augmented summed area table can increase performance quite a bit. This is useful for max-filters and other algorithms that use these tables.
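The zero-augmentation trick can be sketched in a few lines of Python: the extra row and column of zeros means rectangle sums need no boundary checks.

```python
def summed_area_table(image):
    """Build a SAT with an extra leading row and column of zeros."""
    h, w = len(image), len(image[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            sat[y + 1][x + 1] = (image[y][x] + sat[y][x + 1]
                                 + sat[y + 1][x] - sat[y][x])
    return sat

def rect_sum(sat, x0, y0, x1, y1):
    """Sum over x0 <= x < x1, y0 <= y < y1 - no if-then logic needed."""
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6]]
sat = summed_area_table(img)
total = rect_sum(sat, 0, 0, 3, 2)   # 21, the sum of the whole image
corner = rect_sum(sat, 0, 0, 1, 1)  # 1, just the top-left pixel
```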

You can download the source code:

Quadtree.zip (44 KB, Visual Studio).

(It requires the boost library for concept checking, but nothing else; everything will still work if you remove all references to boost. If you already have boost, just add it to the include path in Visual Studio.)

…or read the online documentation.

When implementing image algorithms, I am prone to make these mistakes:

- swapping x and y;
- working on the wrong channel;
- making off-by-one errors, especially in window algorithms;
- making division-by-zero errors;
- handling borders incorrectly; and
- handling non-power-of-two sized images incorrectly.

Since these types of errors are prevalent in many image-processing algorithms, it would be useful to develop, once and for all, general tests that will catch these errors quickly for *any* algorithm.

This post is about such tests.

The general idea is to exploit the fact that many algorithms satisfy invariants such as these:

some_transform(algorithm(image)) == algorithm(some_transform(image))

For example: a 3-by-3 box blur is invariant under a vertical flip:

box_blur3x3(flip_vertical(image)) == flip_vertical(box_blur3x3(image))

It does not matter whether we apply the flip before or after applying the box filter—the result should be the same.

What kind of errors can the above test expose? Since it checks whether the algorithm (not the image) is vertically symmetric, the test can catch certain off-by-one errors along the vertical axis. Here is an example of a faulty implementation of the blur algorithm, where the function `sum` adds all the pixel values in the rectangle x0..x1 − 1 and y0..y1 − 1.

//C-like pseudo-code
Image & box_filter3x3(Image & image)
{
   forXY(image, x, y)
   {
      x0, y0 = max(0, x - 1), max(0, y - 1)
      x1, y1 = min(x + 1, image.width), min(y + 1, image.height)
      result(x, y) = sum(image, x0, x1, y0, y1) / ((x1 - x0) * (y1 - y0))
   }

   image = result
   return image
}

Can you spot the error? The actual window used is at most 2 by 2. But how does the test catch this error? To see why the test will fail, notice the difference in windows for the top-left and bottom-right pixels:

Top Left:

x0, y0 == 0, 0
x1, y1 == 1, 1

Bottom Right:

x0, y0 == width - 2, height - 2
x1, y1 == width, height

As you can see, the two windows have different sizes! The top-left pixel window is 1 by 1, but the bottom-right pixel window is 2 by 2. Thus flipping the image before and after applying the box blur gives different results.

The correct algorithm is

//C-like pseudo-code
Image & box_filter3x3(Image & image)
{
   forXY(image, x, y)
   {
      x0, y0 = max(0, x - 1), max(0, y - 1)
      x1, y1 = min(x + 2, image.width), min(y + 2, image.height)
      result(x, y) = sum(image, x0, x1, y0, y1) / ((x1 - x0) * (y1 - y0))
   }

   image = result
   return image
}

Now the windows for the top-left and bottom-right pixels are:

Top Left:

x0, y0 == 0, 0
x1, y1 == 2, 2

Bottom Right:

x0, y0 == width - 2, height - 2
x1, y1 == width, height

We can see the window sizes are now the same.

Here is a list of suggested transforms and the types of errors they can catch:

| Transform | Errors caught |
|---|---|
| `flipVertical`, `flipHorizontal` | Off-by-one errors. |
| `flipDiagonal` | Swapping x and y. |
| `rotateChannels` | Working on the wrong channel. |
| `invert` | Calculation mistakes, certain kinds of division-by-zero errors. |
| `scaleIntensity` | Calculation mistakes. |
| `adjustGamma` | Certain kinds of statistical ordering errors. |
| `crop` | Certain kinds of incorrect border calculations, certain power-of-two errors. |
| `translateVertical`, `translateHorizontal` | Certain kinds of incorrect border calculations, off-by-one errors. |
| `desaturate` | Certain channel-processing errors. |

Many algorithms should (in theory) also be invariant under scaling. However, because of the complexity involved in interpolation and sampling, I do not recommend using scaling for testing. It is quite difficult to determine under some interpolation or sampling scheme whether an algorithm should in fact be (exactly) invariant or not under scaling. This of course also applies to other transforms that rely on sampling or interpolation.

You will probably also select transforms that are more specific to the algorithms you are implementing. Keep these as simple as possible—not only to avoid implementation errors, but also to avoid subtle misconceptions: it must be easy to see (or well established) that a certain algorithm is invariant under a transform. Test your own transform functions aggressively.

Writing test code for each transform is easy, but tedious (and hence error prone). Here I show how a generic test can be implemented using C++ macros or functional programming. The generic test can be used to test whether any algorithm is invariant under a given transform.

Probably the easiest way to implement a general test-function in C++ is to implement it as a macro. Here is how this looks:

//C++
#define TEST_TRANSFORM_INVARIANCE(image, transform, command) \
do { \
   Image image; \
   make_test_image(image); \
\
   Image original_image(image); \
   command; \
\
   Image image1(image); \
   transform(image1); \
\
   image = original_image; \
   transform(image); \
   command; \
\
   if (image != image1) \
      report_useful_failure_message(...); \
} while(0)

We can now use this macro like this:

TEST_TRANSFORM_INVARIANCE (image, flipVertical, my_algorithm(image));

The nice thing about the macro approach is how easy functions that take any number of parameters can be tested:

TEST_TRANSFORM_INVARIANCE (image, flipVertical, my_algorithm(image, 10, 20));

Notice that the first parameter is a *variable name*. The macro declares this variable, and the user can use it in the command parameter to pass the image to the algorithm.

If macros are not available, or using them is not desirable, you can still use a generic approach. Unfortunately, we need to define functions with a consistent argument list for each test we want to perform, which means we might need to write more code in some languages.

The general test function can be defined as follows in C++:

//C++
void test_transform_invariance(
   Image & (*function)(Image &),
   Image & (*transform)(Image &))
{
   Image image;
   make_test_image(image);

   Image original_image(image);
   function(image);

   Image image1(image);
   transform(image1);

   image = original_image;
   transform(image);
   function(image);

   if (image != image1)
      report_useful_failure_message(...);
}

To use it with a single argument function, we simply call it like this:

test_transform_invariance(flip_vertical, my_algorithm);

To use it with a function that takes more than one parameter, we need to define a wrapper function that supplies the extra parameters:

// C++
Image & my_algorithm_wrapper(Image & image)
{
   my_algorithm(image, 10, 20);
   return image;
}

...

test_transform_invariance(flip_vertical, my_algorithm_wrapper);

In languages that support closures, you can wrap functions more cleanly so that you do not need to define a function for each test command. For example, in Python:

# Python
def wrap(fn, *args):
    def wrapper(image):
        fn(image, *args)
    return wrapper

...

test_transform_invariance(flip_vertical, wrap(my_algorithm, 10, 20))

Both the macro and functional programming approaches allow you to combine tests of different transforms in convenient functions:

// C++
#define TEST_ALL(image, command) \
do { \
   TEST_TRANSFORM_INVARIANCE(image, flip_vertical, command); \
   TEST_TRANSFORM_INVARIANCE(image, flip_horizontal, command); \
   ... \
} while(0)

// C++
void test_all(Image & (*fn)(Image &))
{
   test_transform_invariance(flip_vertical, fn);
   test_transform_invariance(flip_horizontal, fn);
   ...
}

It is important that your test images do not destroy the very asymmetry that you are trying to expose. For instance, if we ran the test at the very beginning of this post with an all-zero image, the test would have passed. The image should be asymmetric under the transform we are using; that is, the following must hold:

transform(image) != image

For different transforms, this means different things. For example, for flip_diagonal, the image must not be square; for invert, the image must not be the constant grayscale image 0.5.

It is useful to build an extra test into our test macro or function, to make sure that we do not inadvertently break this requirement. Here is the modified macro:

// C++
#define TEST_TRANSFORM_INVARIANCE(image, transform, command) \
do { \
   Image image; \
   make_test_image(image); \
\
   Image original_image(image); \
   transform(image); \
\
   if (image == original_image) \
      report_unsuitable_test_image(...); \
\
   image = original_image; \
   command; \
\
   Image image1(image); \
   transform(image1); \
\
   image = original_image; \
   transform(image); \
   command; \
\
   if (image != image1) \
      report_useful_failure_message(...); \
} while(0)

For many algorithms, using an image of noise (independent for each channel) works well. Make sure the image has unequal dimensions that are prime numbers (for example, 11 and 13). Using prime numbers ensures that the two dimensions have no common divisors, which makes certain kinds of tests for recursive algorithms (such as quad-tree compression) more robust.
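Such a test image can be sketched in Python: independent noise per channel, unequal prime dimensions (13 wide, 11 high), plus a guard that the image really is asymmetric under the transform being tested. The helper names are illustrative.

```python
import random

def make_test_image(width=13, height=11, channels=3, seed=42):
    """Noise image as nested lists: image[y][x][channel]."""
    rng = random.Random(seed)  # fixed seed keeps the test repeatable
    return [[[rng.random() for _ in range(channels)]
             for _ in range(width)]
            for _ in range(height)]

def flip_vertical(image):
    return image[::-1]

image = make_test_image()
# Guard: the image must change under the transform, or the test is vacuous.
asymmetric = flip_vertical(image) != image
```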

In the example implementation, we have the line

if (image != image1)
   report_useful_failure_message(...);

This is a somewhat simplistic scheme. What is a useful failure message? Of course, the message must report the test that failed. But it should also give information about how the test failed:

- whether it was a mismatch of the number of channels, image dimensions, or pixel values;
- the number of channels and image dimensions;
- the number of pixels for which the test failed; and
- the first pixel location where the test failed, and the expected and actual pixel values.

These bits of information can help us hunt down the cause. For example, if the test data is an 11-by-13 image, and the test fails for 13 pixels, the error is most likely a border problem caused by an off-by-one error. If the first pixel that fails is (0, 0), the test fails for (almost) all pixels, and the pixel values differ only by sign, we can guess that a sign error was made that affects the entire image.

The actual test will thus be a bit more complicated, delegating to a function such as this:

// C++
void test_image_equality(Image & image1, Image & image2, FailureInfo & info)
{
    info.has_failed = false;
    info.failed_pixels_count = 0;
    info.dimensions_image1 = image1.dimensions;
    info.dimensions_image2 = image2.dimensions;
    info.channels_image1 = image1.channels;
    info.channels_image2 = image2.channels;

    if (image1.channels != image2.channels)
    {
        info.has_failed = true;
        info.failure_type = CHANNEL_MISMATCH;
        return;
    }

    if (image1.dimensions != image2.dimensions)
    {
        info.has_failed = true;
        info.failure_type = DIMENSION_MISMATCH;
        return;
    }

    forXY(image1, x, y)
    {
        if (abs(image1(x, y) - image2(x, y)) >= THRESHOLD)
        {
            if (!info.has_failed)
            {
                info.has_failed = true;
                info.failure_type = VALUE_MISMATCH;
                info.first_failed_x = x;
                info.first_failed_y = y;
                info.first_failed_image1 = image1(x, y);
                info.first_failed_image2 = image2(x, y);
            }

            info.failed_pixels_count++;
        }
    }
}

In the test macro or function, we call the function like this:

... \
FailureInfo info; \
test_image_equality(image, image1, info); \
\
if (info.has_failed) \
    report_failure(info); \
...

It is important that you understand the transforms you use for testing very well. It is therefore a good idea to use your own transforms, and not those of a library. This way, unknown design choices in the library cannot bite you. For example, many image libraries do not specify how they handle borders, division by zero, rounding, and so on. These details are often not important when using the algorithms for a task, but they can make a test fail (or succeed) when it should not.

The same danger exists when you use third-party algorithms as building blocks for your own. In this case, testing your code will be harder. As a starting point, consider testing whether the third-party algorithms satisfy the invariants you expect to hold. If they do not, you may need to modify your test code to accommodate the discrepancy. If they do, you can be a little more confident that your tests will be reliable.

(Yes, it is not normally recommended to test other people’s code with unit tests. However, this is a once-off test to make sure that your own tests are reliable, and since it is so easy to implement (you already have the test macro / function and transforms), I do not see the harm. Just keep the third-party library tests separate from your own.)

Correctness of an image algorithm is a subtle issue, mostly because the discrete, quantized, finite model of image computation is inherently an approximation of the “real” thing. The question is too broad to tackle here (and frankly, I do not have enough knowledge to do it), so I will focus on aspects that are specifically relevant to this kind of testing.

Let us start with an example of an essentially correct algorithm that fails a test we expect it to pass:

// C++
Image & threshold(Image & image)
{
    forXY(image, x, y)
        image(x, y) = (image(x, y) <= 0.5f) ? 0 : 1;

    return image;
}

We expect this simple algorithm to be invariant under the inverse transform. However, it is not. Consider a pixel value of 0.5: the inverse of this is 0.5. So both the pixel and its inverse will map to zero, and hence the following will not be true:

inverse(threshold(image)) == threshold(inverse(image))  // not true:
inverse(threshold(0.5f)) == inverse(0) == 1             // left side
threshold(inverse(0.5f)) == threshold(0.5f) == 0        // right side
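A tiny runnable check of this counterexample, using scalar stand-ins for the per-pixel operations:

```cpp
// Scalar stand-ins for the per-pixel operations discussed above.
float threshold(float v) { return (v <= 0.5f) ? 0.0f : 1.0f; }
float inverse(float v)   { return 1.0f - v; }
```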

It looks like the problem is easily solved by changing the algorithm:

// C++
Image & threshold(Image & image)
{
    forXY(image, x, y)
        image(x, y) = (image(x, y) < 0.5f) ? 0 : 1;

    return image;
}

But, for floating-point numbers on a machine, there is another number 0.5 - epsilon such that the following algorithm is equivalent to the one above:

// C++
Image & threshold(Image & image)
{
    forXY(image, x, y)
        image(x, y) = (image(x, y) <= 0.5f - epsilon) ? 0 : 1;

    return image;
}

But for this algorithm our invariant does not hold for a pixel with the value 0.5 – epsilon! Since the last two algorithms are equivalent, it means the modified algorithm is also not correct in this strict sense. In fact, it is not possible to implement an algorithm that is correct in this sense (using floating-point numbers).

But we feel that each of the above implementations *is* correct, and hence that our test is wrong. So how do we change it, and should we?

We can change one (or more) of three things:

- the test data
- the test transform
- how we test for equality in images

Because of the simplicity and general usefulness of both the transform and the test data, I feel it is best to leave these as is. That means we have to change the measure of equality. Since we use random data, we might want to use the “is probably equal” measure—that is, two images are equal if a certain percentage of pixels are equal (within some threshold). Since the likelihood of generating 0.5 is low, our test should pass. I do not like this idea for two reasons:

First, it is not generally useful. For example, another test might fail when the borders are not equal (a much larger number of pixels), even though we feel that it must not for some specific algorithm. We then need another equality measure to handle that case.

Second, it is bound to hide a class of errors where a small number of pixels are incorrectly processed, and we do not want that.

Instead, we would like some measure that is generally useful, but can be tweaked for the specific algorithm to overcome those cases where we expect the algorithm to fail.

One way to do this is to use equality masks: a simple bit array that says whether we are interested in that pixel or not. In this case, we need the mask to be constructed from the test data; here is how we do it:

// C++
BitImage & make_ignore_value_mask(BitImage & mask, Image & image, float value)
{
    forXY(image, x, y)
        mask(x, y) = (image(x, y) != value);

    return mask;
}
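To make the mask semantics concrete, here is a self-contained sketch of the idea with flat buffers standing in for images and bit images (the types are mine, not from a real library). A mask entry is 1 where we care about the pixel and 0 where we ignore it, so comparing the masked images amounts to comparing only the unmasked pixels:

```cpp
#include <cmath>
#include <vector>

// Flat-buffer stand-ins for Image and BitImage.
using Pixels = std::vector<float>;
using Mask = std::vector<bool>;

// 1 where we care about the pixel, 0 where the pixel equals the ignored value.
Mask make_ignore_value_mask(const Pixels & image, float value) {
    Mask mask(image.size());
    for (size_t i = 0; i < image.size(); ++i)
        mask[i] = (image[i] != value);
    return mask;
}

// Equivalent of "mask * image1 == mask * image2": compare only unmasked pixels.
bool masked_equal(const Pixels & a, const Pixels & b, const Mask & mask, float threshold) {
    for (size_t i = 0; i < a.size(); ++i)
        if (mask[i] && std::fabs(a[i] - b[i]) >= threshold)
            return false;
    return true;
}
```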

We then change our macro to this:

#define TEST_TRANSFORM_INVARIANCE(image, mask, transform, command, mask_command) \
do { \
    Image image; \
    make_test_image(image); \
    \
    Image original_image(image); \
    transform(image); \
    \
    if (image == original_image) \
        report_unsuitable_test_image(...); \
    \
    image = original_image; \
    command; \
    \
    Image image1(image); \
    transform(image1); \
    \
    image = original_image; \
    transform(image); \
    command; \
    \
    BitImage mask(image.width, image.height); \
    mask_command; \
    \
    if (mask * image != mask * image1) \
        report_useful_failure_message(...); \
} while (0)

We call the macro like this:

TEST_TRANSFORM_INVARIANCE(image, mask, inverse, threshold(image), make_ignore_value_mask(mask, image, 0.5f));

The functional programming solutions are similarly modified:

// C++
void test_transform_invariance(
    Image & (* function)(Image &),
    BitImage & (* mask_creation_function)(BitImage & mask, Image & image),
    Image & (* transform)(Image &))
{
    Image image;
    make_test_image(image);

    Image original_image(image);
    transform(image);

    if (image == original_image)
        report_unsuitable_test_image(...);

    image = original_image;
    function(image);

    Image image1(image);
    transform(image1);

    image = original_image;
    transform(image);
    function(image);

    BitImage mask;
    mask_creation_function(mask, image);

    if (mask * image != mask * image1)
        report_useful_failure_message(...);
}

We need a wrapping function to supply the extra argument:

BitImage & wrapped_make_ignore_value_mask(BitImage & mask, Image & image)
{
    return make_ignore_value_mask(mask, image, 0.5f);
}

Now we can call our test function like this:

test_transform_invariance(threshold, wrapped_make_ignore_value_mask, inverse);

In languages that support closures, such as Python, we can again use the wrapping technique as before:

test_transform_invariance(threshold, wrap(make_ignore_value_mask, 0.5), inverse)

Here are a few suggestions for useful mask types:

- `ignore_none`: A constant array of 1s. Probably the one to use with most algorithms.
- `ignore_value`, `ignore_values`: Ignores all the pixels in the test data that equal a given value or list of values.
- `ignore_range`: Ignores all the pixels in the test data that fall in a given range.
- `ignore_out_of_range`: Ignores all the pixels in the test data that fall outside a given range.
- `ignore_rect`: Ignores all the pixels inside a specified rectangle.
- `ignore_border`: Ignores all the pixels in a border of specified width.

Be careful of inadvertent ignore_all masks. For example, the ignore_border mask will be all zeros when the border is thicker than half the smallest image dimension. It is a good idea to build a check into the test macro or function that the mask is not all-zero.

Also watch out when transforms or algorithms change image dimensions—make sure the mask is constructed correctly.

We have now answered how we can change the test to pass for the thresholding example (and more generally); it remains to answer whether we should. My feeling is: only when you *have* to. Generally, I test without regard for such subtle issues. When a test fails, I carefully try to understand whether it is because the algorithm is fundamentally wrong, or whether it is just an obscure instance that makes the test fail. In the latter case, I will amend the test with the necessary masks.

**Update**: Here is another example of how an invariance test can fail even when the algorithm is fundamentally correct. Consider a region quadtree compression scheme. Suppose we want to compress a 5×5 image, and suppose the algorithm divides the image into four rectangles. Because 5 is odd, the rectangles will differ in size. The biggest rectangle is always in the same spot, regardless of any reflection; thus the test will fail for pixels along the center. We cannot really address this issue with masks. To me it is unclear how to handle this at all (and clearly, we do not want to limit our tests to power-of-two images only).

When you have a large group of transforms to construct invariance checks from, you can easily test for a whole bunch of errors with just a few lines of code. If the tests pass, it is easy to think that the algorithm is correct. It may not be:

- Perhaps the test image is unsuitable. This may happen by coincidence if the test image is random.
- A particular test does not catch all errors of a certain type.
- The transforms together may not cover the entire set of possible errors.

In general, a set of invariance tests does not prove the *correctness* of a particular algorithm. It merely exposes *certain kinds of errors*. Therefore, additional tests for correctness are necessary.

The basic structure of the test macro / function is as follows:

- Create test data.
- Calculate transform(algorithm(test_image)).
- Calculate algorithm(transform(test_image)).
- Compare these results and report any failures.

Remember:

- Check that the test data is changed by the transform.
- Use masks to cater for peripheral special cases where certain pixels, values, or regions in an image should be ignored for comparisons.
- Check that your masks are not all zero (sanity check).
- Report useful information when a test fails:
  - what kind of failure (dimension mismatch, channel mismatch, value mismatch);
  - in the case of a value mismatch, the number of mismatched pixels and the values of the first pixel from each image that fails to match.

Let me know in the comments!

**Fast = not toooo slow…**

For the image restoration tool I had to implement min and max filters (also erosion and dilation—in this case with a square structuring element). Implementing these efficiently is not so easy. The naive approach is to simply check all the pixels in the window, and select the maximum or minimum value. This algorithm’s run time is quadratic in the window width, which can be a bit slow for the bigger windows that I am interested in. There are some very efficient algorithms available, but they are quite complicated to implement properly (some require esoteric data structures, for example monotonic wedges (PDF)), and many are not suitable for floating point images.

So I came up with this approximation scheme. It uses some expensive floating point operations, but its run time is *constant* in the window width.

The crux of the algorithm is the following approximation, which works well for large *p* and non-negative values:

max(x_1, ..., x_n) ≈ ((x_1^p + x_2^p + ... + x_n^p) / n)^(1/p)

I am not going into the mathematical details of why this is a fair approximation.

To turn this into a max filter is straightforward:

Raise each pixel to the power *p*.

The summed area table (or integral image) of an image is a table containing in each cell (x, y) the sum of all the pixels above and to the left of (x, y); for the power image it is given by:

S(x, y) = sum over all (i, j) with i <= x and j <= y of image(i, j)^p

**Update:** This can be calculated efficiently using the following recurrence:

S(x, y) = image(x, y)^p + S(x - 1, y) + S(x, y - 1) - S(x - 1, y - 1)

with S(x, y) = 0 if x < 0 or y < 0.

Use the summed area table to calculate the mean of the power image in the window. Here, *r* is the window radius and *A* stands for average:

A(x, y) = (S(x + r, y + r) - S(x - r - 1, y + r) - S(x + r, y - r - 1) + S(x - r - 1, y - r - 1)) / (2r + 1)^2

The approximate maximum in the window is then A(x, y)^(1/p).

That’s it!
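Putting the steps together, here is a sketch of the whole filter. The single-channel float `Image` type is a minimal stand-in of my own, and I clamp the window at the image borders; this is an illustration of the scheme, not the exact implementation from the restoration tool:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal single-channel float image, row-major.
struct Image {
    int width, height;
    std::vector<float> pixels;
    float at(int x, int y) const { return pixels[static_cast<size_t>(y) * width + x]; }
};

Image approximate_max_filter(const Image & image, int radius, double p) {
    int w = image.width, h = image.height;

    // 1. Summed area table of the power image, with an extra zero row and
    //    column so that s(x, y) holds the sum over pixels [0, x) x [0, y).
    std::vector<double> S(static_cast<size_t>(w + 1) * (h + 1), 0.0);
    auto s = [&](int x, int y) -> double & { return S[static_cast<size_t>(y) * (w + 1) + x]; };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            s(x + 1, y + 1) = std::pow(image.at(x, y), p)
                            + s(x, y + 1) + s(x + 1, y) - s(x, y);

    // 2. For each pixel: mean of the power image over the (clamped) window,
    //    then the p-th root of that mean.
    Image result{w, h, std::vector<float>(image.pixels.size())};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int x0 = std::max(0, x - radius),     y0 = std::max(0, y - radius);
            int x1 = std::min(w, x + radius + 1), y1 = std::min(h, y + radius + 1);
            double sum = s(x1, y1) - s(x0, y1) - s(x1, y0) + s(x0, y0);
            double mean = sum / ((x1 - x0) * (y1 - y0));
            result.pixels[static_cast<size_t>(y) * w + x] =
                static_cast<float>(std::pow(mean, 1.0 / p));
        }
    }
    return result;
}
```

For pixel values in [0, 1] and *p* somewhere in the tens to low hundreds, the result tends to sit a little below the true window maximum; accuracy improves with larger *p* until the powers underflow.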

The minimum filter is implemented by finding the maximum of the inverted image and then inverting the result. This approach uses two more passes through the image; so far I could not find a direct approximation for the minimum value. (**Update:** It seems using a large negative value of *p* gives an approximation for finding the minimum of a set of values.)

- For very large images, you might run into floating point issues when calculating the cumulative sum image. You can reduce this effect somewhat by subtracting the (total image) mean of the power image from each pixel before calculating the image sum.
- For the image sizes I worked on, the value of *p* I used was suitable. The higher *p*, the more accurate the results, as long as there is no underflow.
- Raising to a power is a slow operation. By choosing a suitable value of *p* (for example, an integer power of two, so that repeated squaring can be used), you can calculate the power with a faster approximation.
**Update:** It is possible to implement this as a two-pass algorithm, first working on columns, then on rows. This requires constructing two tables, which makes the algorithm slower. However, this approach allows bigger images to be processed before overflow in the summed area tables becomes a problem.

Initially I thought that, once the maximum can be found, we could also find the second maximum, and so on, so that any statistical order filter could be constructed.

The basic idea was that if *m* is the maximum of the *n* values, we could find the second largest value like this:

m2 ≈ ((x_1^p + ... + x_n^p - m^p) / (n - 1))^(1/p)

The idea is that we remove the maximum value from the sum, so that the result must yield the second largest element. Of course, we do not use the actual maximum, but the approximation, and this is where the formula fails. When we use the approximate maximum, the formula above yields the exact same approximate maximum, and not the *second* largest element: since m^p equals the mean of the x_i^p, removing it from the sum and averaging over the remaining n - 1 values gives back exactly the same mean.
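This collapse is easy to check numerically. The sketch below (stand-alone helper functions, not from the tool) computes the approximate maximum and the attempted second maximum; up to floating-point error, they come out identical:

```cpp
#include <cmath>

// Approximate maximum: the p-th root of the mean of the p-th powers.
double approx_max(const double * x, int n, double p) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += std::pow(x[i], p);
    return std::pow(sum / n, 1.0 / p);
}

// Attempted second maximum: remove m^p from the sum and re-average.
// Since m^p is exactly the mean, this gives back the same value as approx_max.
double approx_second_max(const double * x, int n, double p) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += std::pow(x[i], p);
    double m = std::pow(sum / n, 1.0 / p);
    return std::pow((sum - std::pow(m, p)) / (n - 1), 1.0 / p);
}
```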

I should have known: it seemed too good to be true!

**Update:** While looking for links for this post, I found this article: Robust Local Max-Min Filters by Normalized Power-Weighted Filtering (PDF), which initially looked like it was essentially the same thing. Although it is very similar, it is in fact different. The approximate maximum is given by the value:

(x_1^(p+1) + ... + x_n^(p+1)) / (x_1^p + ... + x_n^p)

for large *p*. Using a large negative *p* gives an approximation to the minimum.

The nice thing about this approximation is that it seems to work well for all real values, not just non-negative ones.

Many textures used for 3D art start from photographs. Ideally, such textures should be uniformly lit so that the texture does not interfere with the lighting applied by the 3D software. Often, lighting artefacts must be removed by hand. This can be tedious and time consuming.

The tool provided here aims to automate this process. It is still in an experimental phase, so it is very crude. Below you can see some of the before and after pictures.

I will say more about how these work in a later post.

If you want to give it a try, go here to download (caveats: command-line only; Windows only; requires ImageMagick.)

The tool can also reduce artefacts in tileable textures:

**BEFORE **(distracting horizontal striping)

**AFTER** (reduced striping)